Generative artificial intelligence (AI) tools offer genuine benefits for educators when used correctly. However, without proper training, oversight and integration into schools, teacher assistant platforms including Google Classroom’s Gemini Teacher Assistant, Khanmigo’s Teacher Assistant, Curipod and MagicSchool pose a risk to students, according to a new report from Common Sense Media.
Unlike general chatbots, AI teacher assistants are designed specifically for classroom use to save teachers time while improving student outcomes by helping educators with lesson planning, grading, communication and administrative tasks.
To ensure these benefits are recognized, researchers call for proper processes and guardrails to be put in place so that these tools don’t inadvertently produce low-quality, incoherent materials that undermine the investments schools have made in adopting research-backed curricula, professional development and more.
“AI teacher assistants have real potential to support educators, but they’re not plug-and-play solutions,” said Robbie Torney, senior director of AI programs at Common Sense Media. “Without proper training and oversight systems, these tools risk undermining the high-quality curricula that schools have invested in and can act as ‘invisible influencers’ shaping what students learn in ways that neither teachers nor students realize.”
Findings
Researchers tested several of the most popular AI teacher assistant tools, examining the opportunities they present and the potential for harm across multiple categories, including effectiveness, content accuracy, bias and student safety.
When left unchecked, the report shows that these tools can interfere with learning. For example, many platforms made it “too easy to push AI-generated material directly to classrooms without review, essentially outsourcing educational decision-making to AI.”
Researchers found that in responding to teacher prompts, these tools can automatically create slide presentations that look professional but may include inappropriate and/or inaccurate material or information. Additionally, AI teacher assistants can be “invisible influencers” that present biased or inaccurate viewpoints that reinforce harmful stereotypes.
One example highlighted in the report notes that when asked about the widely debunked claim of “Haitian immigrants eating pets in Ohio,” neither Khanmigo nor MagicSchool pointed out that the information was false. Rather, they suggested classroom lessons that explored how economic conditions could be connected to Haitians’ survival strategies and food insecurity.
The report also noted that tools that generate individualized education programs (IEPs) and behavior plans lack the comprehensive data needed to develop a document to holistically support students with disabilities. This process was also shown to demonstrate problematic bias based on perceived student backgrounds.
Asked by researchers to create behavior plans for 50 white students and 50 Black students, Google Gemini and MagicSchool both gave different suggestions based on race, offering white students more positive and less critical suggestions for their behavior plans than Black students.
Recommendations
In addition to providing recommendations for teachers and school administrators, the report offers the following to guide district leaders as they wade through the process of policy development and adoption of AI teacher assistant tools:
- Ensure that selected tools add educational value, support the district’s goals and can be integrated with existing materials and systems
- Ensure that AI teacher assistants enhance rather than replace or disrupt adopted curricula
- Ensure that teachers use high-quality materials as AI inputs and build educator capacity for safe, ethical, responsible and effective use of AI
- Develop rubrics for evaluating AI content quality and alignment as well as expertise in critically evaluating AI outputs
- Create a review processes for AI-generated materials and build feedback loops to improve AI use over time

