A recent report from Policy Analysis for California Education aims to provide education leaders with a better understanding of artificial intelligence (AI) and how it can be used in education by exploring its potential strengths and limitations.
Generative AI in particular has become a prevalent topic of conversation in education with the rising popularity of ChatGPT and similar programs. This type of AI differs from traditional computing because, rather than following step-by-step programming, it uses machine-learning algorithms to build neural networks that learn from the patterns and structure of their inputs, which can make it unpredictable.
Optimists see AI as a tool enabling teachers to focus more on building caring and trusting relationships and on developing lessons tailored to students’ needs, backgrounds, cultures and more. Pessimists envision a future in which students using AI will spend less time interacting with human teachers and peers while misusing AI tools or, perhaps worse, relying on AI educational resources rife with biases, misinformation, oversimplification and other issues.
From the federal to the local level, education policymakers must consider an array of differing views to harness the potential and mitigate the hazards presented by advances in AI technologies, according to the report.
“In contrast to AI’s rapid pace of change, educational change requires years of thoughtful work to reform curriculum, pedagogy, and assessment and to prepare the education workforce to implement the changes,” researchers determined. “The difference in these paces leaves educators and policymakers feeling they are behind and need to catch up. We recommend a ‘keep calm and plan carefully’ mindset to foster the careful consideration, thoughtful implementation, and involvement of multiple stakeholder groups that are required for changes in education policy to successfully harness the benefits of AI and minimize the risks.”
Across the country, teachers are using generative AI tools to help create personalized lesson plans, and students are working with AI tutors, using AI in research and writing projects, receiving AI feedback on draft essays and more — with mixed success.
While generative AI can do many things, researchers noted several limitations and risks that are especially concerning in education, including the spread of inaccurate, inappropriate or damaging information on topics ranging from historical figures to depression and suicidal ideation.
AI models can also be used to create realistic deepfake pictures, audio recordings and videos that make it appear that someone said or did something they didn’t. This raises concerns about privacy and security, and about violating requirements of the federal Children’s Online Privacy Protection Act, Children’s Internet Protection Act and Family Educational Rights and Privacy Act, according to the report.
“The heart of the matter is that generative AI does not distinguish constructive, accurate, appropriate outputs from destructive, misleading, inappropriate ones,” researchers wrote. “AI developers and researchers are actively working to mitigate these problems. They are seeking to better curate the training data, provide human feedback to train the AI models further, add filters to reduce certain types of outputs, and create closed systems that limit the information in a neural network. Even so, the problems will not be entirely eliminated, so educators need to be aware of them and plan to protect students and teachers from potential harms.”
Looking forward
State leaders will have to make many decisions regarding policies and programs to foster productive, appropriate and safe uses of AI, researchers noted. To address challenges in supporting local efforts to thoughtfully explore and adopt AI tools that could improve teaching, learning and functioning in local educational agencies, the report provided several recommendations, including:
- Focusing on how AI can help educators address current, critical challenges such as improving teacher working conditions, increasing student engagement, and reducing long-standing opportunity and achievement gaps.
- Building upon existing policies and programs by reviewing and updating them to address AI rather than creating AI-centric ones, as “many of the concerns and needs AI is driving resemble what is already addressed in existing policies and processes for the acceptable use of technology, academic integrity, professional learning, digital security and privacy, teacher evaluation, procurement of technology and instructional materials, and other areas.”
- Supporting learning about computer science and AI through changes to teacher credential requirements, programs to upskill existing teachers, and standards and frameworks that specify what students should be learning.
CSBA’s AI Taskforce: Education in the Age of Artificial Intelligence has released a suite of resources, which will be updated regularly, to support local educational agencies in this work. Current offerings include promising practices and policies, sample scenarios and resolutions, and more.