I didn’t notice the minute hand creep until I looked up and the room was dark. What started as a ten-minute query — a quick fact-check, a brainstorming nudge — had become hours of sidetracks, alternate prompts and “just one more” replies. It was fast, frictionless and entirely unjudging. It was also quietly eating months of attention.

AI chat has been sold as a productivity turbocharger. For many, including the writer who told his story to Android Authority, it was exactly that at first — an instant assistant for small tasks and a sounding board for odd curiosities. Then the conversation turned inward: the tool became the entertainment, the companion, the place to chase hypotheticals and polish prompts until the answer felt perfect. That’s when usefulness blurred into habit.

Why a chatbot is unusually good at keeping us hooked

There are three simple ingredients in the recipe: speed, responsiveness and psychological safety. Chat interfaces give instant feedback. They don’t sigh when you ask something tangential. They can iterate with zero social cost. Those features make interactions rewarding in the same way notifications and streaming autoplay do: frequent, small hits of satisfaction.

But unlike a video loop, an LLM invites work and creativity. You’re editing, prompting, editing again. That creates a false sense of productivity — activity without progress. Researchers and commentators began calling this a “productivity paradox”: tools intended to save time that, in practice, encourage endless refinement and verification.

The New York Times recently argued that models are competing for our affection, and that competition isn’t just about better answers; it’s about emotional engagement. When an algorithm mirrors your tone or pushes back diplomatically, it can start to feel like a collaborator. That’s useful. It becomes risky when it replaces human conversation or time spent learning by doing.

The costs go beyond lost minutes

Time is the obvious casualty, but not the only one. Heavy reliance on AI can erode skills: people stop practicing the deep thinking and problem-solving that build expertise. Teams that lean too hard on quick AI drafts may find themselves editing outputs more than creating from scratch. Outages and reliability issues — which affected major LLM services in 2025 — expose another vulnerability: when the assistant goes offline, so does the workflow.

There are environmental and reputational angles too. At scale, repeated low-value queries add to computing and energy footprints. And some studies suggest people using AI excessively can be perceived as less competent, especially when outputs are passed off without proper verification.

Where organizations and users are adjusting

The reaction has been mixed. Firms pushing AI adoption without training see diminishing returns; those that teach AI literacy and set boundaries get better results. Practical fixes are simple, if enforced: query budgets, version control on AI-generated drafts, and training that clarifies tasks AI should handle versus those that require human insight.
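
To make "query budget" concrete, here is a minimal Python sketch of what one could look like in practice. This is an illustration, not any vendor's API: the QueryBudget class, the 20-query default, and the in-memory log are all assumptions.

```python
import time

class QueryBudget:
    """Hypothetical daily cap on chatbot queries, with a small log
    so you can review what you actually asked and when."""

    def __init__(self, daily_limit=20):
        self.daily_limit = daily_limit
        self.entries = []  # (date_string, prompt) pairs

    def allow(self, prompt):
        """Record the query and return True if it fits today's budget."""
        today = time.strftime("%Y-%m-%d")
        used = sum(1 for day, _ in self.entries if day == today)
        if used >= self.daily_limit:
            return False
        self.entries.append((today, prompt))
        return True


budget = QueryBudget(daily_limit=20)
if budget.allow("Fact-check a date for a draft"):
    print("Within budget: send the query.")  # call your chatbot client here
else:
    print("Budget spent: do this one manually.")
```

The point of the hard cap is friction: once the budget is visible and finite, each query becomes a small decision rather than a reflex.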

On the user side, small rituals help. The Android Authority writer described setting a timer for sessions and returning to offline hobbies — woodworking, fiction — to rebalance attention. Others are adopting hybrid habits: use AI for idea generation, but keep final editing and synthesis firmly human.

Tools in the ecosystem are evolving too. Google’s recent work on agentic features suggests AI will increasingly automate errands like booking appointments; the agentic booking in Google’s AI Mode makes that convenience explicit, and increases the temptation to offload more of daily life. Meanwhile, models that integrate deeply into productivity stacks — such as Gemini Deep Research’s access to Gmail and Drive — can accelerate workflows but also raise questions about where the line between assistance and abdication lies.

OpenAI’s own product moves, like Sora’s mobile arrival, underscore a cultural shift: chatbots aren’t just tools on your desktop anymore — they live in pockets and on phones, ready to engage at any idle moment. See OpenAI’s Sora landing on Android for one example of this mobile push.

Practical guardrails that work

  • Habit audits: log sessions for a week. Note triggers (boredom? procrastination?) and average duration; a minimal logging sketch follows this list.
  • Time-boxed prompts: set a simple timer (15–30 minutes) and treat the session as a focused sprint, not a leisure activity.
  • Role rules: assign categories where AI helps (research, ideation) and where humans lead (final decisions, interpersonal conversations).
  • Tool hygiene: keep one reliable source for deep work — a distraction-free editor on a laptop — and use AI for targeted tasks. If you draft on a MacBook, for example, save AI sessions as supporting notes rather than the main document.
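
For readers who want to automate the first two habits, a few lines of Python are enough. This is a minimal sketch under stated assumptions: the ai_sessions.csv file name, the 25-minute default, and the run_session helper are hypothetical choices, not part of the original writer's routine.

```python
import csv
import time
from datetime import datetime

LOG_FILE = "ai_sessions.csv"  # hypothetical log location; use any path you like

def run_session(trigger, minutes=25):
    """Time-box one chatbot session and append it to the habit log."""
    started_at = datetime.now().isoformat(timespec="minutes")
    start = time.monotonic()
    input(f"Session open for {minutes} min. Press Enter when you stop... ")
    elapsed = round((time.monotonic() - start) / 60, 1)
    over_budget = elapsed > minutes
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([started_at, trigger, elapsed, over_budget])
    print(f"Logged {elapsed} min (over budget: {over_budget}).")

# Note the trigger honestly: boredom, procrastination, or a real task.
run_session(trigger="boredom")
```

After a week, the CSV makes the audit trivial: sort by trigger, average the durations, and see which sessions actually needed a chatbot.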

Companies can scale these habits by building AI literacy programs, specifying acceptable uses, and incentivizing mastery of domain skills rather than raw speed.

This is not a plea to banish chatbots. They transform how we work, and they will continue to — as commentators at Forbes have noted — reshape jobs, creativity and daily routines in valuable ways. The question is how to keep the value and shed the vortex.

I’ll leave you with a practical test you can run this week: pick one recurring task you normally offload to a chatbot. Do it manually, end-to-end, once. Notice the difference in time, confidence and retention. If you’re happier with the result, save AI for the moments when it truly adds value. If you’re not, at least you’ll know why the machine feels like a better companion than your own attention — and that knowledge is the first step to taking some of it back.

AI · Productivity · Behavior · Technology · Work