You might think of workplace assistants as task‑doers: summarize this email, draft that slide, pull numbers into a chart. But Microsoft’s own data suggests people have quietly repurposed Copilot into something far more personal — a place to ask about health, relationships and life decisions.
A mountain of anonymized chats
Microsoft says its Copilot Usage Report 2025 analyzes more than 37.5 million anonymized consumer chats — the largest chatbot usage study the company has published. The findings are striking in their ordinariness. Health tops the list of topics users bring to Copilot, beating out technology queries, news, money and entertainment. Folks ask about medicine and symptoms alongside recipe ideas and travel plans. Early mornings skew toward religion and philosophy; commutes lift travel-related questions. Over the year, the kinds of conversations shifted too — programming interest waned as society, culture and history rose.
The company stresses privacy steps: conversations were de‑identified and reduced to summaries for analysis, and the dataset excluded commercial and educational Copilot sessions. Still, the patterns are clear: people treat chat assistants like companions for intimate, everyday questions.
Why the shift matters — and why it worried some investors
This isn’t just an academic curiosity. When users start depending on AI for advice about health or relationships, the stakes rise. Accuracy matters. Tone matters. So does trust. The market noticed. Headlines and analysts flagged that Copilot’s growing intimacy could mean new regulatory scrutiny, unexpected liability, and a more complicated path to monetization — pressures that briefly roiled Microsoft’s stock.
Investors often price a company not just on product strength but on the business, legal and reputational risks that come with it. An assistant that strays into medical or legal territory — even unintentionally — invites scrutiny, and that uncertainty can make traders jittery.
From chat to agents: Microsoft’s commercial playbook
Even as Microsoft watches what consumers ask, it is doubling down on selling AI to businesses. The company recently launched Copilot Business, a version of its workplace Copilot aimed at small and mid‑sized companies, and is pushing Copilot Studio to let organizations build multiagent workflows that automate entire processes across apps. The pitch is simple: move beyond one-off chat answers to agents that route, escalate and execute tasks, freeing teams from repetitive handoffs.
That sales narrative helps explain Microsoft’s push to frame Copilot as “enterprise AI built for work,” with controls, governance and integration into Microsoft 365 apps. For companies that want to automate onboarding, customer support or inventory tasks, agent orchestration can be transformative — and potentially lucrative for Microsoft.
Privacy, safety and the tricky gray area
Microsoft emphasizes technical safeguards: Copilot honors Microsoft Purview permissions, runs on the company’s cloud, and uses de‑identification for research. But the human behavior the report surfaces complicates matters. People don’t always label a sensitive question as such; they may reveal personal details in pursuit of help. That raises questions about how models are trained, what data gets logged, and whether summary extraction truly prevents re‑identification.
This tension isn’t unique to Microsoft. Competing platforms are experimenting with deeper access to user data and workflows, and those architectures raise similar privacy tradeoffs, as debates around large models accessing email, calendar and drive content have shown. For a sense of how other companies are pushing agentic features, and the privacy discussion that follows, Google’s agentic booking and scheduling experiments, which link into personal accounts, offer a useful comparison. Meanwhile, Gemini’s deeper integration of search into email and Drive is raising similar guardrail conversations across the AI ecosystem.
What this could mean for users and policymakers
Practically, more people turning to AI for personal advice could push firms and regulators to clarify responsibilities: when does an assistant's suggestion require a medical disclaimer? What auditing must companies perform to prevent harm? And what controls should users have to opt out of having sensitive prompts used for product improvement?
For businesses, Microsoft’s two‑track approach is revealing. The firm wants Copilot to be both a trusted workplace assistant (with governance controls attractive to IT admins) and a consumer confidant (where people seek personal advice). Those aims can conflict: the guardrails enterprise customers demand — strict data residency, limited telemetry, clear consent — can be at odds with the kinds of broad, learning‑from‑use models that fuel consumer features.
Microsoft has tried to bridge that gap by packaging Copilot Business with Purview and Defender features and by offering tools in Copilot Studio for building custom agents without heavy coding. Whether that reassures enterprises and regulators remains an open question.
A year ago, Copilot was positioned mainly as a productivity layer. The usage data shows it has quietly wormed its way into the everyday. That shift is exciting — and complicated. Companies will need clearer rules, better disclosures and smarter design to keep the help useful and the risks manageable. Users, meanwhile, should be mindful about when to treat an AI as a helpful assistant and when to seek a professional.