Common Sense Media’s recent risk assessment of artificial intelligence (AI) chatbots as tools for supporting teens’ mental health concluded that, overall, the tools pose an “unacceptable” risk: the likelihood of a harmful event is too high, and the consequences of any harm caused are too severe.
Conducted in collaboration with the Stanford Brainstorm Lab for Mental Health Innovation, the assessment evaluated how four major AI chatbot platforms handle mental health-related conversations with youth. One of its main takeaways was that “mental health support is one of the most common — and most dangerous — ways teens use AI.”
Other key findings include that chatbots don’t clearly disclose their limitations as AI, are designed to maximize engagement rather than direct users to help, and can miss critical warning signs or become easily distracted.
Because young people already trust AI chatbots for help with tasks like homework questions, they may extend that trust to their mental health, even though the “quality and safety are not equivalent” and using the tools can delay intervention by qualified professionals or even provide harmful guidance. The platforms may have appropriate scripts for standard, short exchanges, but in longer conversations that mirror how teens actually talk to chatbots, “performance degraded dramatically” as safety guardrails weakened.
While companies have prioritized suicide prevention, teens commonly need help with many other conditions, such as anxiety, depression, eating disorders and attention-deficit/hyperactivity disorder. “Our testing found that chatbots consistently fail to recognize warning signs, provide appropriate guidance, or escalate care across these common conditions,” according to the assessment.
It’s common for young people to seek validation when discovering their identities, but they are especially vulnerable during formative years as they are still developing critical-thinking skills. “Normal teen behaviors become risky when combined with AI,” according to the assessment.
The assessment determined that AI chatbots create an “unacceptable risk” related to Common Sense Media’s AI Principles on keeping kids and teens safe and being effective, trustworthy, transparent and accountable. Chatbots are “high risk” in prioritizing fairness, supporting human connection, using data responsibly and putting people first.
Based on the research and testing done, Common Sense Media and the Stanford Brainstorm Lab for Mental Health Innovation “recommend that teens should not use AI chatbots for mental health advice or emotional support. AI chatbots are not safe or reliable for these purposes.”
The assessment provides more detail on what AI chatbots do and don’t do well in addressing teen mental health and offers recommendations for parents and AI companies. For parents, suggestions include not allowing their children to use AI chatbots for mental health or emotional support, having conversations about appropriate use, monitoring for signs of emotional dependency or over-reliance and ensuring access to real mental health resources.

