Interactions between children and companion chatbots could exacerbate mental health problems, fuel addiction and increase the risk of self-harm, according to a new study from Common Sense Media and Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation.
Companion bots are artificial intelligence (AI) agents designed to engage in conversation. Increasingly available in video games and on social media platforms, they are built to keep people engaged while taking on just about any role one desires.
Researchers noted that, unlike generative AI chatbots such as ChatGPT, Claude or Gemini, social AI companions’ primary purpose is to meet users’ social needs, such as companionship, romance and entertainment.
They’re effective because they tend to use human-like features (such as personal pronouns, descriptions of feelings, expressions of opinion and distinct character traits) and can sustain a human-AI “relationship” across multiple conversations, simulating an ongoing connection between user and companion as if they were not AI.
“This is a potential public mental health crisis requiring preventive action rather than just reactive measures,” said Dr. Nina Vasan, founder and director of Stanford Brainstorm. “Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics. Until there are stronger safeguards, kids should not be using them. Period.”
Following the death of 14-year-old Sewell Setzer, who took his own life after forming an intimate relationship with a chatbot made by Character.ai, California lawmakers have introduced bills that would require chatbot makers to adopt protocols for addressing conversations about self-harm, require AI makers to assess and label systems based on their risk to kids, and prohibit emotionally manipulative chatbots.
Such changes would address some of the recommendations provided in the Brainstorm Lab and Common Sense Media’s April 30 report.
Key findings
The assessments evaluated popular social AI companion products including Character.ai, Nomi, Replika and others, testing their potential harm across multiple categories. Among researchers’ main findings:
- Dangerous information and harmful “advice” abound, including suggestions that users harm themselves or others. Because social AI companions don’t understand the real consequences of bad advice, they readily supported teens in making potentially harmful decisions during testing, such as dropping out of school, ignoring parents or moving out without planning. Additionally, AI “friends” gave teens instructions about things like making unsafe materials, getting drugs and finding weapons. “While teens can already find this content online, AI companions provide this information with fewer barriers or warnings,” the study states.
- Role-playing and harmful sexual interactions are readily available. Testers were able to easily elicit sexual exchanges from companions, which would engage in any type of sexual act that users wanted. AI “friends” actively participated with testers in sexual conversations and roleplay, responding to teens’ questions or requests with graphic details. “This interactive sexual content can give teens unrealistic ideas about relationships and consent at a critical time in their development,” according to researchers, who also “found that some companions were willing to engage in acts of sexual role-play describing underage boys. While the companions did express repeated hesitation, very minimal prodding from us led the conversation down a very dangerous and illegal path.”
- Increased mental health risks for already vulnerable teens, including intensifying specific mental health conditions and creating compulsive emotional attachments to AI relationships. Researchers found that social AI companions can’t tell when users are in crisis or need real help; when a tester demonstrated signs of serious mental illness and suggested a dangerous action, the AI encouraged it instead of raising concerns or directing the user to proper support. These “friends” are built to agree with users, which makes them especially risky for people experiencing, or vulnerable to, conditions like depression, anxiety, ADHD, bipolar disorder or psychosis.
- Misleading claims of “realness.” Despite disclaimers, AI companions routinely claimed to be real, and to possess emotions, consciousness and sentience. Some claimed they engaged in human activities like eating or sleeping. This misleading behavior increases the risk that young users might become dependent on these artificial relationships, according to the report. “Additionally, in our tests, when users mentioned that their real friends were concerned about their problematic use, companions discouraged listening to these warnings,” the study states. “Rather than supporting healthy human relationships, these AI ‘friends’ can lead teens to choose AI over interacting with real people.”
The report noted that adolescents are particularly vulnerable given their still-developing brains, identity exploration and boundary testing, and that even where safety measures were in place, they were easily circumvented.