New research explores the use of AI tools in identifying students in crisis

With youth suicidality on the rise and schools struggling to hire enough mental health professionals to support students, local educational agencies (LEAs) have begun turning to artificial intelligence (AI)-based tools to help identify those at risk for suicide and self-harm.

However, a RAND Corp. report released in December suggests more evidence is needed to understand the risks and benefits of using online monitoring tools to spot a student in crisis.

“The adoption of AI and other types of educational technology (edtech) to partially address student mental health needs has been a natural forward step for many schools during the transition to remote education during the COVID-19 pandemic,” the report states. “However, there is limited understanding about how such programs work, how they are implemented by schools, and how they may benefit or harm students and their families. To assist policymakers, school districts, school leaders, and others in making decisions regarding the use of these tools, this report addresses these knowledge gaps by providing a preliminary examination of how AI-based suicide risk monitoring programs are implemented in K-12 schools, how stakeholders perceive the effects that the programs are having on students, and the potential benefits and risks of such tools.”

As AI surveillance technology grows more popular among LEAs looking to curb cheating, bullying and other problematic student behaviors, AI-based algorithms that detect students at risk for suicide, self-harm and harm to others could serve as a useful way for schools to prevent suicide, researchers noted.

Interviews with school staff and health care providers highlighted instances in which such tools successfully identified a student at imminent risk for suicide who likely would not have been detected through other prevention or mental health programs at the school.

Still, as with other AI-based monitoring technology, there remain concerns related to student privacy and the lack of oversight and research regarding the accuracy of AI monitoring tools.

Recommendations

To support policymakers and tech developers as they navigate the use of AI-based suicide risk monitoring programs and seek to mitigate possible risks, researchers provided several recommendations, including:

  • For LEAs: Engaging with their communities for feedback on the implementation of AI-based suicide risk monitoring; clearly notifying caregivers and students about AI-based suicide risk monitoring and clarifying opt-out procedures; establishing effective and consistent processes for responding to AI alerts and tracking student outcomes from those alerts; engaging with students to help them understand mental health issues; and reviewing and updating antidiscrimination policies to consider the implementation of AI-based technologies and their potential biases against protected classes.
  • For policymakers: Funding evidence-based mental health supports in schools and communities, including the use of technology, and refining government approaches and standards for privacy, equity and oversight of suicide risk monitoring systems.
  • For technology developers: Continuing to participate in school engagement activities to integrate feedback into their programs, sharing data to allow for evaluation of the impact of AI-based monitoring software on student outcomes, and developing best practices for its implementation.