Guidance calls for ‘shared responsibility’ between LEAs and edtech providers in developing AI tools

The U.S. Department of Education’s Office of Educational Technology published guidance in July for developers of artificial intelligence (AI) tools for educational settings, outlining ways edtech providers can build trust with local educational agencies (LEAs) as they integrate AI into products and platforms.

The guidance emphasizes “shared responsibility” between LEA leaders and edtech providers in designing and implementing thoughtful, safe AI tools in education, and highlights the potential impact of AI systems on vulnerable and underserved communities.

“Educational decision makers express cautious optimism for new products and services that leverage new capabilities of AI,” the guidance states. “’Earning the public trust’ is of paramount importance as developers launch new applications of AI in education. The Department envisions a healthy edtech ecosystem highlighting mutual trust amongst those who offer, those who evaluate or recommend, and those who procure and use technology in educational settings.”

Earning that trust will require edtech providers to actively manage risks from the rapidly evolving technology across several categories, including bias and fairness, data privacy and security, harmful content, misinformation management, underprepared users, and more.

Lynwood Unified School District Assistant Superintendent Patrick Gittisriboongul is quoted in the guidance as an example of an LEA leader whose interest in AI solutions is tempered by distrust of the technology.

“Would I buy a generative AI product? Yes!” Gittisriboongul said. “But there’s none I am ready to adopt today because of unresolved issues of equity of access, data privacy, bias in the models, security, safety, and a lack of a clear research base and evidence of efficacy.”

The Education Department outlined five key areas for edtech providers to consider as they build shared responsibility with the education community:

  • Design for education by taking the time to understand the sector’s specific values and challenges and by seeking teacher and student feedback in all aspects of product development.
  • Provide evidence of the rationale for, and impact of, a tool’s advertised solutions.
  • Advance equity and protect civil rights by remaining alert to representation and bias in data sets and to algorithmic discrimination, and by ensuring accessibility for students with disabilities.
  • Fortify the safety and security of those using AI tools.
  • Promote transparency and earn trust through collaboration with district leaders, edtech providers, educators, and other stakeholders.