AI policy guide provides LEAs with critical resources

A new resource produced by a group of organizations with expertise in education and technology can serve as a starting point for local educational agencies as they look to integrate artificial intelligence (AI) into classrooms safely, effectively and responsibly.

A collaboration of TeachAI, Code.org, CoSN, Digital Promise, the European EdTech Alliance, Policy Analysis for California Education and strategic advisor James Larimore, AI Policy Guidance for Schools: A Teach AI Toolkit was drafted to fill an immediate need for guidance on the matter and to help communities recognize AI's potential to improve education while mitigating possible risks.

“Compared to the introduction of previous technologies in education, education systems should not delay efforts to develop guidance on the use of AI since students and teachers already have independent access, and many existing technologies embed AI into their systems (e.g., search engines and email applications),” according to the toolkit. “The first step should be ensuring that AI use complies with existing security and privacy policies, providing guidance to students and staff on topics such as the opportunities and risks of AI, and clarifying responsible and prohibited uses of AI tools, especially uses that require human review and those related to academic integrity.”

The toolkit includes information on ways to incorporate AI in an education system, seven principles for AI in education, sample school guidance documents, sample considerations for existing policies, a customizable slideshow presentation, sample letters to parents/guardians and staff on the use of AI, and a sample student agreement on AI use.

“With guidance, an education system may realize the potential benefits of AI to improve learning outcomes, support teacher instruction and quality of life and enhance educational equity,” the toolkit states. “Without guidance, teachers and students can be exposed to privacy violations, inconsistent disciplinary consequences and counterproductive AI adoption practices.”

The toolkit’s website includes explanations of generative AI and predictive AI, and lists additional potential benefits such as content development and differentiation, tutoring and personalized learning assistance. Risks cited include plagiarism, overreliance, loss of critical thinking and the perpetuation of societal bias.

It also notes that a systemwide approach incorporating professional development for all staff is key.

As AI continues to evolve and new use cases emerge, LEAs will need to update their policies as appropriate.