During the April 30 webinar, “Legal Considerations: Using AI tools in educational settings,” representatives from CSBA partner law firms offered members insights on how to prepare for the use of artificial intelligence (AI) on campuses and how to protect students and staff from its potential pitfalls.
Moderated by Crocus, LLC’s Ann Willemssen, who serves as a facilitator for CSBA’s AI Taskforce, the event started with Nick Clair, a partner at Lozano Smith, covering the basics of AI and how it is commonly used in schools today.
Concepts like the large language models (LLMs) behind tools such as ChatGPT are important for district and county office of education board members to understand as policies are being developed, Clair noted. These tools can only consider a limited amount of text at once (a constraint known as the context window) and are trained on large data sets to predict the most likely output; they are not browsing the internet for answers.
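To make the idea of a context window concrete, here is a minimal Python sketch (our illustration, not material from the webinar). It uses a naive word-based tokenizer, whereas real LLMs count subword tokens, but the fixed input budget behaves the same way.

```python
# Illustrative only: real LLMs count subword tokens rather than words,
# but the fixed input budget works the same way.

CONTEXT_WINDOW = 8  # tokens the model can "see" at once; real models allow thousands or more

def fit_to_context(prompt: str, limit: int = CONTEXT_WINDOW) -> str:
    """Keep only the most recent `limit` tokens; earlier text is invisible to the model."""
    tokens = prompt.split()           # naive whitespace tokenizer, for demonstration
    return " ".join(tokens[-limit:])  # anything older falls out of the window

history = "the student asked about photosynthesis and then asked about cell walls"
print(fit_to_context(history))  # the opening words have been dropped
```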
LLMs are commonly considered a “black box” because “we do not understand how the models work,” Clair said.
“If you give it an input, it runs through a model equation with parameters but there are trillions of parameters that [information is] going through and so tracing an input to an output and understanding how the model arrived at that conclusion is unknowable currently. It’s an active area of research,” he added. “It’s really important to understand we don’t know what’s happening from input to output.”
Clair also explained retrieval-augmented generation (RAG), a technique that grounds a model's answers in a specific set of documents, and reasoning models, both of which local educational agency (LEA) leaders should familiarize themselves with.
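In brief, retrieval-augmented generation fetches relevant passages from an approved document set and supplies them to the model alongside the question. The Python sketch below is a simplified illustration of that flow (ours, not the webinar's); the keyword-overlap retriever stands in for the vector search a real system would use.

```python
# Illustrative RAG sketch: retrieve the most relevant approved document,
# then hand it to the model alongside the question. The retriever here is
# a toy keyword-overlap scorer; production systems use vector embeddings.

DOCUMENTS = {
    "attendance_policy": "Students must notify the office of absences within 48 hours.",
    "device_policy": "District laptops may only run software vetted for data privacy.",
}

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))
    return max(DOCUMENTS.values(), key=overlap)

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved text instead of its training data."""
    context = retrieve(question)
    return f"Answer using only this source:\n{context}\n\nQuestion: {question}"

print(build_prompt("What software can run on district laptops?"))
```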
Data privacy and other considerations
Data privacy is a common concern among education leaders, and LEAs must comply with a range of state and federal rules on the matter.
“I think data privacy is the most critical legal compliance issue,” said Gretchen Shipley, a partner at Fagen Friedman & Fulfrost LLP. “I don’t think any of us want to stand in the way of innovation in education and say don’t use AI. We all saw the executive order from the White House [on April 23]. It said AI in K-12 education is critical and so it is going to continue to come into the school community.”
Whether in curriculum and instruction, lesson planning, support for students with disabilities, business services, translation or other areas, AI is making its way into school operations.
For that reason, Shipley said, schools must prioritize data privacy by keeping personally identifiable information out of AI systems, making all staff aware that every piece of software must go through the LEA’s data privacy review and vetting process, and obtaining parents’ permission before using generative AI in class, in accordance with the Family Educational Rights and Privacy Act (FERPA).
Alex Lozada, senior counsel at Atkinson, Andelson, Loya, Ruud & Romo, offered potential approaches to generative AI integration that he has discussed with LEA administrators. One is entering paid subscriptions with AI vendors whose contracts (which should be scrutinized) promise a more secure environment for data, though risks such as security breaches persist. LEAs may also consider running AI on their own servers and hardware so that the processing of inputs and outputs remains localized, although this option requires “substantial” computing power.
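As a rough sketch of that self-hosted option (our illustration, assuming a model served locally by Ollama, one common self-hosting tool), the prompt below never leaves the LEA’s own network:

```python
import json
import urllib.request

# Illustrative sketch of the self-hosting option: the prompt is sent to a
# model running on the LEA's own hardware (here, an Ollama server on
# localhost), so inputs and outputs never leave the local network.

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # assumes Ollama is running locally

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as reply:
        return json.loads(reply.read())["response"]

print(ask_local_model("Summarize our field trip permission process in two sentences."))
```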
“Regardless of how your district goes about implementing AI best practices with regard to privacy, [this should include] minimizing the amount of data that’s shared with an AI system, having clear documented policies about what information can or should be shared and then, if you enter into agreements with AI vendors, truly reviewing and understanding the privacy policies as it relates to your privacy-related obligations,” Lozada said.
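One concrete way to apply the data-minimization principle Lozada described (again, our illustration rather than the panel’s) is to strip obvious student identifiers before any text reaches an outside AI service. Production systems would need far more robust detection than these simple patterns:

```python
import re

# Illustrative data-minimization sketch: redact obvious identifiers before
# text leaves the LEA. The patterns below are simplified examples shown only
# to make the principle concrete.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),               # social security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),       # email addresses
    (re.compile(r"\bStudent ID:?\s*\d+\b", re.IGNORECASE), "[STUDENT_ID]"),
]

def minimize(text: str) -> str:
    """Replace personally identifiable details with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Student ID: 48213 (jane.doe@example.org) needs a reading accommodation."
print(minimize(note))  # -> '[STUDENT_ID] ([EMAIL]) needs a reading accommodation.'
```

The design choice here is that redaction happens before transmission, so compliance does not depend on a vendor’s handling of the data.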
Tempestt Tatum, a partner at Orbach Huff & Henderson LLP, discussed accessibility-related opportunities that AI presents during the webinar.
“Accessibility is a foundational principle in public education,” Tatum said. “AI has the potential to enhance access for students with disabilities, but it must be designed and implemented responsibly. Imagine if every student — regardless of ability, regardless of cognitive, sensory or psychological issues — can learn in a way that’s best for them. With AI, that vision is more possible, but it’s also more complex and not without risk.”
Students with dyslexia, ADHD, visual impairments, hearing loss and other conditions may benefit from AI in educational settings, and it can be used to advance equity. But it can also create new barriers (like widening the digital divide), promote bias and misinformation, and spark a need for curriculum on cyber-citizenship.
Tatum and other presenters stressed that LEAs should avoid becoming overly reliant on technology.
Panelists also offered expertise on chatbots and AI agents, recent legal cases involving AI in schools that LEAs can learn from, and steps districts and COEs may consider as they develop policies.
Through GAMUT, CSBA has updated sample policies on student and employee use of technology, academic honesty and more to include language on AI.