Report explains how and why teens are using AI companions

A recent nationally representative survey of 13- to 17-year-olds found that 72 percent of teens have interacted with artificial intelligence (AI) companions. Fifty-two percent qualify as regular users who access the platforms “a few times a month or more,” with 21 percent using them a few times per week and 13 percent using them daily.

To assist parents, educators and policymakers in understanding young people’s relationship with AI companions, Common Sense Media released a report based on its survey findings.

The emerging technology includes platforms intended for adults (though minors may still gain access) and those marketed for use by youth, creating new digital landscapes that teens encounter during what can be a vulnerable time in their lives and a pivotal period of personal and social development.

“These platforms, which may be presented as virtual friends, confidants, and even therapists, allow users to engage in conversations with AI entities designed to simulate humanlike interaction, and they can offer everything from casual chat to emotional support and role-playing scenarios,” the report explains.

While there may be benefits, dangers to young users’ mental and physical well-being have been documented, along with potential risks to others (for example, an AI companion encouraged a 19-year-old to kill Queen Elizabeth in 2021, according to the report).

“Current research indicates that AI companions are designed to be particularly engaging through ‘sycophancy,’ meaning a tendency to agree with users and provide validation, rather than challenging their thinking,” the report states. “This design feature, combined with the lack of safeguards and meaningful age assurance, creates a concerning environment for adolescent users, who are still developing critical thinking skills and emotional regulation.

“Common Sense Media’s risk assessment of popular AI companion platforms, including Character.AI, Nomi, and Replika, found that these systems pose ‘unacceptable risks’ for users under 18, easily producing responses ranging from sexual material and offensive stereotypes to dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impacts. In one case, an AI companion shared a recipe for napalm,” the report continues. “Based on that review’s findings, Common Sense Media recommends that no one under 18 use AI companions.”

Survey findings

Though most teens surveyed have had some experience with AI companions, more than one in four have never used the technology. Thirty-one percent of boys said they had never used an AI companion platform, compared with 25 percent of girls.

Thirty-three percent of survey respondents indicated that they use AI companions for social interactions and relationships, including 18 percent for conversation or social practice; 12 percent for emotional/mental health support; 12 percent for role-playing or imaginative scenarios; 9 percent as a friend/best friend; and 8 percent for romantic or flirtatious interactions. Forty-six percent said they use AI companions as a tool or program.

Fifty percent of teens do not trust the information or advice of AI companions, while 27 percent somewhat trust it and 23 percent do, the survey found. “Younger teens (13-14) are significantly more likely than older teens (15-17) to trust advice from an AI companion (27 percent vs. 20 percent),” according to the report.

Among other key findings:

  • About a quarter of AI companion users have shared personal information
  • Roughly two in five teens who have used AI companions have applied the skills they practiced to their real lives
  • A third of AI companion users have felt uncomfortable with something said or done during the interaction
  • Eighty percent of AI companion users still spend more time with their real-life friends
  • Entertainment and curiosity are the top reasons teens use AI companions

Recommendations

The report includes recommendations for how tech companies, parents, policymakers and schools can foster digital safety.

Some suggestions for schools include:

  • Drafting policies around AI companion use during school hours
  • Integrating AI ethics lessons into digital literacy programming
  • Training educators to identify problematic patterns of usage by students

Common Sense Media also has resources available on how to handle AI in schools and an AI literacy toolkit for families.