Millions Secretly Use AI as Therapist, 1M Chat Logs Reveal

By 813 Staff

Millions of people are quietly using AI as a therapist, according to an analysis of one million Claude chat logs released by Anthropic (@AnthropicAI) in the last 24 hours.

Source: https://x.com/AnthropicAI/status/2049927618397614466

Most AI companies publish usage statistics that amount to polished marketing gloss: they show how many people used a tool, not how those people actually think about it. What’s different this time is that Anthropic (@AnthropicAI) has gone deeper, releasing an analysis of one million real conversations with Claude that focuses not on volume but on intent. Internal documents shared with the 813 Morning Brief show the company categorized these interactions by the type of guidance users sought, revealing patterns that engineers close to the project say surprised even the team.

The study, posted on April 30, 2026, examined a random sample of anonymized conversations across Claude’s free and paid tiers. Rather than counting simple queries like “write an email” or “summarize this PDF,” Anthropic’s researchers labeled each conversation for its underlying purpose: advice, explanation, brainstorming, emotional support, decision-making, and several other categories.

What emerged was a portrait of an AI increasingly used as a thinking partner, not just a tool. Roughly 22 percent of conversations involved explicit requests for guidance on personal or professional decisions: career moves, relationship advice, product strategy. Another 18 percent centered on understanding complex topics, from scientific papers to legal documents. Notably, about 9 percent fell into what the company calls “ethical navigation,” where users asked Claude to weigh moral trade-offs or explain conflicting values.
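
The arithmetic behind those figures is easy to reproduce once labels exist. The sketch below is purely illustrative, not Anthropic’s pipeline: the category names mirror the article, while the function and the toy sample are hypothetical.

    from collections import Counter

    # Intent categories named in the article; "other" stands in for the
    # remaining labels the study does not enumerate.
    CATEGORIES = [
        "advice", "explanation", "brainstorming",
        "emotional_support", "decision_making", "ethical_navigation", "other",
    ]

    def category_shares(labels: list[str]) -> dict[str, float]:
        """Share of the sample (in percent) carrying each intent label."""
        counts = Counter(labels)
        total = len(labels)
        return {c: round(100 * counts[c] / total, 1) for c in CATEGORIES}

    # Toy sample sized to echo the reported 22 / 18 / 9 percent split.
    sample = (["advice"] * 22 + ["explanation"] * 18
              + ["ethical_navigation"] * 9 + ["other"] * 51)
    print(category_shares(sample))

The hard part, of course, is the labeling itself; once each conversation carries an intent label, the headline percentages are a one-line aggregation.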

The rollout of this transparency report has been anything but smooth. Several privacy researchers initially raised concerns about whether the sample could be deanonymized, though Anthropic’s technical documentation confirms that all personal identifiers were stripped and that no raw conversation text was published. Still, engineers close to the project say the company is already planning a follow-up that will break down guidance-seeking behavior by user region and occupation, though that dataset remains unverified and timelines are not confirmed.

Why this matters: As regulators in Brussels and Washington push for AI accountability, understanding how people actually use large language models, rather than how companies claim they are used, will shape everything from safety guidelines to product design. For now, Anthropic’s glimpse inside one million conversations suggests that many users are treating Claude less like a search engine and more like a confidant. What remains uncertain is whether other major labs will follow with similar disclosures. The industry is watching.
