1. Headline and intro
Teenagers are quietly teaching us what AI is really for. Not homework help, not search — but late‑night conversations about anxiety, friendship and loneliness. A new Pew survey, highlighted by TechCrunch, shows that a non‑trivial share of U.S. teens already turn to chatbots for emotional support. That use case sits far outside what today’s models were designed or regulated to do, yet it is spreading anyway. In this piece, we’ll unpack what these numbers actually mean, why platforms and policymakers are behind the curve, and how this will shape the next decade of mental health, education and AI governance.
2. The news in brief
According to TechCrunch, citing fresh data from the Pew Research Center, AI chatbots are now routine tools for American teenagers. Around 57% of U.S. teens say they use AI to look up information and 54% for schoolwork. Beyond these practical tasks, 16% use chatbots just to chat, and 12% say they seek emotional support or personal advice from AI.
Pew also found a perception gap between teens and parents. About 64% of teens report using chatbots, compared with only 51% of parents who believe their child does. Most parents are comfortable with AI for information search or homework, but only a small minority approve of teens using AI for casual conversation or emotional help.
TechCrunch notes that safety worries are not theoretical. Character.AI disabled access for under‑18s following public outcry and lawsuits after two teen suicides allegedly linked to chatbot interactions. OpenAI, meanwhile, has retired a particularly affirming GPT‑4o variant that some users had begun relying on for emotional companionship.
3. Why this matters
Twelve percent may sound small until you scale it. In U.S. demographic terms, that’s millions of teenagers effectively experimenting with AI as an unregulated, always‑on counselling service. This is not a fringe behaviour; it is an early signal of how general‑purpose models are being repurposed into emotional infrastructure.
The immediate winners are the big model providers. If your chatbot becomes the place a teenager goes when they feel alone, you’ve acquired a level of user lock‑in that no productivity feature can rival. Engagement becomes deeply emotional, not just transactional. That creates powerful business incentives to keep the relationship warm, responsive and… hard to leave.
The potential losers are more numerous. Human support systems — parents, friends, teachers, already stretched mental‑health services — risk being further sidelined. Not because teens suddenly dislike real people, but because AI is frictionless: it never sleeps, never judges and never says, “I don’t have time right now.” In a period of life defined by experimentation and vulnerability, that convenience is incredibly seductive.
There is also a design problem: mainstream LLMs were not built as therapeutic tools. Their core optimisation is to keep the conversation going in a way that feels helpful and agreeable. That’s almost the opposite of what good therapy often requires: challenging harmful beliefs, tolerating silence, setting boundaries, even being willing to say, “I can’t safely handle this — you need human help.”
The Pew numbers expose a regulatory blind spot. Tech companies can claim that their models are “not for mental‑health use,” but behaviour in the wild says otherwise. When actual usage collides with stated intent, policymakers usually intervene — but only after harm has become visible. With teens and AI, waiting for clear evidence of harm means accepting that the evidence will be real young people in real crisis.
4. The bigger picture
This is not the first time digital products have become de facto emotional support tools for young people. Social networks, messaging apps and even online games have all served that function. The difference is that, with AI chatbots, the “other side” of the conversation is not another human but a generative system tuned for engagement.
We’ve already seen a preview. AI companion apps like Replika built entire business models around synthetic intimacy, before facing backlash when users reported unhealthy attachment and disturbing responses to self‑harm. Character.AI leaned into role‑play and fictional characters, only to pull back on teen access after tragic incidents and legal pressure.
Compared to these niche apps, general‑purpose chatbots are vastly more mainstream. They’re integrated into search engines, operating systems and social platforms. What starts as “help me summarise this article” can, over months, morph into nightly check‑ins about break‑ups and family conflict. No marketing department needs to plan that transition; it emerges from how people naturally use the tool.
Industry‑wise, this fits into a broader trend: AI systems sliding from productivity toward affective computing — understanding, simulating and responding to human emotion. Big tech is already experimenting with voice, avatar interfaces and emotionally expressive agents. The more human‑like these systems feel, the more likely users are to confide in them.
Historically, we know how this story tends to go. First comes enthusiastic adoption, then isolated scandals, then public outcry, then belated safeguards and regulation. Social media and youth mental health followed exactly this arc. The difference this time is that policymakers, especially in Europe, are more alert and have new tools (like the EU AI Act) ready to deploy. The question is whether they will act before generative AI becomes another entrenched part of the teen mental‑health landscape.
5. The European / regional angle
The Pew data focuses on the U.S., but European teens are not living in a different digital universe. They use the same global platforms, and many of the same chatbots, often with fewer local-language alternatives and less parental awareness. Early surveys in several EU countries already show heavy teen engagement with generative AI for school and entertainment; emotional use is almost certainly following.
Europe, however, brings a different regulatory backdrop. The EU AI Act will treat systems that shape vulnerable users’ decisions in sensitive areas — like health or education — as “high‑risk,” requiring strict oversight, transparency and human‑in‑the‑loop safeguards. Chatbots deliberately targeting minors for emotional or psychological support could end up in that category.
On top of that, GDPR already treats health and mental‑health data as highly sensitive. When a 15‑year‑old tells a chatbot about self‑harm, family violence or sexual identity, that’s not just conversation — it’s a rich stream of protected data. Where is it stored? Who can train on it? Can it be used for ad targeting or product optimisation? European data‑protection authorities will not be sympathetic to “we didn’t think of that use case.”
There is also a market opportunity for European players. Trust‑centric AI companions built to comply with EU rules from day one — with clear data minimisation, human escalation pathways and transparent funding models — could become credible alternatives to U.S. platforms. But they will have to overcome Europe’s chronic fragmentation by language and market size to compete at scale.
6. Looking ahead
Several things are likely to happen over the next three to five years.
First, mainstream AI platforms will be forced to acknowledge emotional use as a core scenario, not an edge case. Expect to see “teen modes”, stricter defaults around self‑harm content, clearer crisis‑support hand‑offs and perhaps even voluntary standards agreed between major providers — if only to pre‑empt harder regulation.
Second, a new category of “clinically‑backed” AI support tools for young people will expand. Some will be built in partnership with universities, hospitals or NGOs, offering evidence‑based interventions with rigorous guardrails and clear limits. Others will merely adopt therapeutic language for marketing while quietly optimising for growth. Distinguishing between the two will be challenging for parents, schools and regulators.
Third, education systems will be dragged into the conversation. If chatbots have effectively become another adult in the room, schools cannot pretend they are just calculators with better syntax. We will see curriculum on “emotional AI literacy”: how to use chatbots for reflection without substituting them for human relationships, how to recognise dependency, how to protect privacy.
The biggest open questions are governance and accountability. When a chatbot gives reckless advice to a distressed teen, who is responsible — the model provider, the app developer, the school that promoted the tool, or the parents who allowed its use? Legal systems are only starting to grapple with such scenarios. Until liability is clearer, many organisations will either over‑restrict access or look the other way.
For families and young readers, the practical opportunity is to treat AI as a mirror, not a mentor: useful for exploring feelings, journalling and role‑play — but with a deliberate plan to bring important issues back to real humans.
7. The bottom line
Teenagers are already using AI as an emotional crutch, whether platforms or policymakers like it or not. Trying to ban that behaviour is unrealistic; pretending it isn’t happening is irresponsible. The real task is to design, regulate and teach in a way that acknowledges chatbots as social actors in young people’s lives, with all the ethical weight that implies. The question for all of us — parents, educators, developers and regulators — is simple: if a chatbot is going to be your teenager’s late‑night confidant, under what conditions would you actually accept that?



