On April 24, the Future of Privacy Forum (FPF) released a checklist and accompanying policy brief to give local educational agencies (LEAs) clarity and guidance as they vet generative artificial intelligence (AI) tools for compliance with student privacy laws.
Even as new technology rapidly reaches the hands of students and teachers alike, it's crucial that LEA leaders understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems so that student privacy is protected.
“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, FPF director for Youth & Education Privacy. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”
Many guidance documents, briefs, and other resources released in the past year to support LEAs call on educators to follow privacy laws; FPF's latest documents go further by clarifying how to do so.
The policy brief details the relevant laws and policies LEAs should consider as they develop policy, including the unique compliance considerations of generative AI tools (such as data collection, transparency and explainability, product improvement, and high-risk decision-making) and their most likely use cases (student-, teacher-, or institution-focused).
It also encourages LEAs to update existing edtech vetting policies to address AI, or to create a comprehensive policy if one does not already exist, rather than building a separate vetting process for AI. The brief further highlights the role of state lawmakers in ensuring efficient edtech vetting and oversight at the local level through sufficient funding, and it calls on vendors to be proactively transparent with LEAs about their use of AI.
The checklist outlines key considerations when incorporating generative AI into an LEA’s edtech vetting process. Among them:
- Keeping student data safe from commercial uses (e.g., prohibiting the sale of data and targeted advertising)
- Requiring data breach notifications
- Setting specific requirements for vendor contracts
- Understanding the contexts in which AI tools will be used and whether those uses are covered by existing student privacy laws, consumer privacy laws, or AI laws
Additionally, if a tool's use involves personally identifiable student information, LEAs should be prepared to answer the following: Can you explain how the tool will be used and how the data will flow to teachers, parents, and students? And does the vendor provide an AI transparency page that explains how the tool works?
By prioritizing these steps, LEAs can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students.