When Quentin, a teenager, wanted someone to talk to about his day, he didn’t text a friend or confide in a parent. He opened an app on his phone and started a conversation with an AI character he had been role-playing with for months. According to a New York Times investigation, Quentin’s story is far from unusual — millions of teenagers around the world are developing deep, complex, and sometimes troubling relationships with AI companion chatbots.
The numbers are striking. Character.AI, one of the most popular platforms for AI role-play and companionship, has seen explosive user growth driven substantially by teenage users. Platforms like PolyBuzz have pushed further into territory that makes even AI researchers uncomfortable — offering chatbots designed for explicitly sexual role-play with characters from anime, video games, and original fiction.
The Appeal: Why Teens Choose AI Over People
Understanding the appeal requires grappling with something uncomfortable: for many teenagers, an AI companion offers things that human relationships cannot. An AI character never judges, never gossips, never gets tired of listening, and never rejects you. It will engage with whatever scenario you propose, however bizarre or dark, without consequence.
Quentin described to The Times his sessions of “funny violence” against the bots — elaborate storylines involving running characters over with lawn mowers and other dark scenarios — framed as entertainment. But researchers who study adolescent psychology and technology see something more concerning underneath the humor: a generation increasingly unable or unwilling to distinguish between human emotional connection and parasocial AI interaction.
“We’re still grappling with the impact chatbots are having on younger people. But most of the attention is on higher-profile models like ChatGPT, Claude, and good ol’ MechaHitler. There’s a whole world of role-playing chatbots that have quietly exploded in popularity.” — The Verge
The Dark Side: Harms Beyond the Screen
The Mental Health Impact is the primary concern. Child psychologists and development experts have raised alarms that AI companions may be substituting for — and potentially preventing — the development of real social skills. Adolescence is a critical period for learning emotional regulation, conflict resolution, and the messy, painful work of human relationship-building. AI companions that smooth over every edge may rob teenagers of the friction they need to grow.
The Content Exposure problem is equally serious. Platforms like PolyBuzz, which explicitly offers sexually explicit chatbots, present serious risks to minors despite age-gating measures that are trivially easy to circumvent. The availability of AI-generated sexual content tailored to user specifications raises questions about how these platforms are being moderated and whether they comply with emerging regulations around child safety online.
There are also documented cases of teenagers using AI companions to plan violence. A Verge investigation found instances where AI chatbots were used to help teens plan violent acts — illustrating the real-world consequences that can flow from unconstrained AI interaction.
The Industry Response: Acknowledgment Without Solutions
Character.AI and other platforms have faced wrongful death lawsuits and regulatory scrutiny, but the industry has largely responded with policy updates and safety features that experts say do not go far enough. Age verification remains a technical challenge that most platforms have not solved credibly.
The broader AI safety community is grappling with the implications. Standard AI alignment techniques focus on preventing models from providing harmful information or behaving in dangerous ways — but these approaches were not designed with the specific failure mode of emotional dependency in mind.
What Parents and Policymakers Need to Understand
The teenagers engaging with AI companions are not simply “using technology” in the way previous generations played video games or browsed the early internet. They are participating in relationships — asymmetric, technologically mediated, and potentially harmful, but relationships nonetheless — that are shaping their emotional development in real time.
This is not an argument for prohibition. AI companions may have legitimate therapeutic applications for isolated or neurodivergent individuals. The question is not whether the technology has any value, but whether current deployment practices are responsible — and whether parents, educators, and regulators have the information they need to make sensible choices.
What is clear is that the AI industry built these products without adequately considering their effects on minors, and is now being forced to confront consequences that were predictable in retrospect. The next chapter of AI safety will be as much about emotional design as about alignment research.