CSBA webinar showcases frameworks for ethical AI integration

A Jan. 29 webinar hosted by CSBA’s AI Taskforce, titled “Ethical AI in Education: Building transparent frameworks for the future,” offered local educational agency (LEA) leaders tips on creating an artificial intelligence (AI) governance committee, conducting environmental scans, developing organizational roadmaps and establishing evaluation metrics for the use of new education technology (edtech).

Moderated by Nick Zurlo, a senior consultant and analyst at Crocus, LLC, and a taskforce facilitator, the event also explored how to implement components of ethical AI that fit local needs, among other critical topics.

“I feel like a lot of our school districts might feel like they’re behind because AI has taken off … but I think if I’ve learned anything from the taskforce, it’s that we’re all in the right place right now because we’re all figuring it out,” said taskforce member Jennifer Crabtree, educational technology coordinator at West Covina Unified School District.

Assessing AI

“There’s not really any clear path for AI’s integration … There’s a vitally important need, as you all know, for an ethical framework and governance model for AI in education that is not only comprehensible — so everyone at every level in every job role can understand this — it needs to be comprehensive, applicable and also aspirational for our practitioners,” said Dr. Pierrette Renée Dagg, director of Technology Impact Research for Merit Network at the University of Michigan.

Dagg covered the basic functions of AI and generative AI, noting that “Traditional AI, which is going to be the technology that is used in so many of our tools not only at the educator level but also with personalized learning and administratively, those are excelling at pattern recognition while generative AI is really going to excel at pattern creation.”

In the education sector, some applications of AI include assisting with and streamlining administrative tasks and facilitating opportunities for personalized learning.

She also highlighted research she conducted on the subject with LEAs and stakeholders and gave an overview of the history of edtech, showing that many of the same concepts and concerns that have emerged with AI, like the availability of high-quality professional development and inequitable access for students, also accompanied past technological advancements such as the introduction of radio, television, computers and the internet.

While there are plenty of benefits associated with AI implementation, there are also potential pitfalls, including algorithmic bias, pedagogical obstacles, academic dishonesty, the spread of misinformation, the governance and productization of data, and digital surveillance.

Building a framework

Dagg noted that when drafting an ethical framework, LEAs must determine what ethics means to their community. “Many existing AI frameworks point to norms and laws as a validation for their components’ inclusion, but we all know particular groups of people don’t justify their ethical or moral validity and that’s been demonstrated over time,” she said.

Dagg shared a draft document she authored, “Components of an Ethical Framework for Artificial Intelligence in Education,” that can assist LEAs on their journeys. Drawing on Stoicism, it focuses on cultivating virtues and controlling what one can. The major components explored in depth in Dagg’s presentation are ethics, accountability, transparency, well-being, autonomy, equity and inclusion, pedagogy and teacher training, IT risk management, administration considerations and supplier scrutiny.

Additionally, Dagg presented a logic model LEAs can use, with steps to establish a problem statement, guiding questions, inputs, activities, outputs and outcomes.

Dagg also showed a governance model that suggests beginning with a review of existing frameworks and decision-making models; deploying intake models to identify current applications, sentiment, processes, knowledge bases and policies; organizing regular reviews of the ethical framework against current and future AI use; and, finally, developing a strategic adoption plan.

The International Society for Technology in Education’s report, Evolving Teacher Education in an AI World, was referenced as well.

Sample policies

CSBA has sample policies available to support LEAs.

“We don’t have an AI-specific policy because the approach that we’ve taken from our policy shop, and in conjunction with the taskforce, has been based in this philosophy that AI permeates a lot of different policy areas so there isn’t an omnibus AI policy that covers everything,” said Andrew Keller, CSBA Senior Director, Executive Office Operations & Strategic Initiatives. “So, the approach that we’ve taken has been to update select policies, and there’s five of them, to incorporate the types of language and policies that you’d need around artificial intelligence. And there’s more coming on that.”

Current policies include:

  • BP/E 4040 – Employee Use of Technology
  • BP/E 6163.4 – Student Use of Technology
  • BP 5131.9 – Academic Honesty
  • BP 6154 – Homework and Makeup Work
  • BP 6162.5 – Student Assessment

The policies can give LEAs a reference point when considering vendors, Keller added. As the taskforce concludes its work this spring, it is looking for avenues to provide governance teams with guidance they can use locally in these discussions.

View a recording of the webinar and presentation slides.