AI empowers the teacher so they can help more children
The results of our work with Google have been published
Today marks an exciting day in the history of Eedi. The results of our year-long work with Google’s LearnLM team have been published, investigating the role AI can play in supporting one-to-one tuition.
You can read the report here.
Here are the five findings I think will be of most interest to classroom teachers:
1. AI Tutoring Can Enhance Student Learning Transfer
Students who received AI-supported tutoring were 5.5 percentage points more likely to solve problems on new topics successfully (66.2% success rate) compared to those tutored by humans alone (60.7%). This suggests AI tutoring may help students develop a deeper understanding that transfers across mathematical concepts.
2. Socratic Questioning That Teachers Can Learn From
All five interviewed tutors independently praised LearnLM’s consistent use of Socratic dialogue, with three tutors reporting unexpected professional growth from supervising it. One tutor noted: “I remember thinking, ‘Oh, I hadn’t thought of explaining it that way before.’ Just like when you watch another teacher”. The AI can model effective questioning techniques that teachers might incorporate into their own practice.
3. The Human Touch Remains Essential
Tutors’ most frequent intervention (44.3% of edits) was moderating the pedagogical pacing—stepping in when AI’s Socratic persistence risked frustrating students. Teachers will recognise this challenge: knowing when to keep pushing students, and when to step in and help them. Tutors also provided crucial social-emotional nuance (19.5% of edits), adding personal recognition and adjusting tone that AI couldn’t replicate.
4. Interactive Support Dramatically Outperforms Static Resources
Students who received interactive tutoring (either human or AI-supervised) were substantially more effective at correcting mistakes than those who received pre-written hints: 91-93% success rates versus only 65%. This reinforces what teachers know intuitively—real-time, personalised feedback is far more powerful than generic resources, even well-designed ones.
5. AI Can Increase Teacher Capacity Without Compromising Safety
Tutors reported that LearnLM made their work “more fluid and efficient,” with 82.4% citing “supporting multiple students at the same time” as its most useful feature. Critically, there were zero instances of harmful content and only 0.1% factual errors across 3,617 AI-generated messages. This suggests that with proper oversight, AI could help teachers provide more one-to-one support without sacrificing quality or safety—addressing the perennial challenge of reaching every student in a classroom.
You can read more coverage of this report:
What comes next?
This work is just the beginning. We’re partnering with Imagine Learning for a large-scale US RCT in 2026, and running another UK trial with support from the Learning Engineering Virtual Institute.
What questions do you have?
What do you agree with, and what have I missed?
Let me know in the comments below!
Thanks so much for reading, and have a great week!
Craig
P.S. Check out my new 16-book series, The Tips for Teachers guides to…



Hi Craig, interesting read. I had a couple of questions.
1. You say that “AI tutoring can enhance student learning transfer” but when you look at the 95% credible intervals of the human tutor and supervised AI interventions they are [55.8%, 65.4%] vs. [61.1%, 71.2%], so even if there is a difference between the two interventions, it is not statistically significant.
2. Has this paper been peer-reviewed?
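The interval comparison in point 1 can be sketched in a few lines. The interval values below are taken from the figures quoted above; the overlap check itself is only a rough heuristic for eyeballing the two credible intervals, not a formal Bayesian comparison of the posteriors.

```python
# 95% credible intervals for transfer success rates, as quoted above (%).
human = (55.8, 65.4)          # human-tutor condition
supervised_ai = (61.1, 71.2)  # AI-supervised condition

def intervals_overlap(a, b):
    """Return True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

# The intervals overlap on roughly [61.1, 65.4], which is the basis
# for questioning whether the 5.5 percentage-point gap is reliable.
print(intervals_overlap(human, supervised_ai))  # prints True
```

A stricter test would compare the posterior distributions directly (e.g. the probability that one rate exceeds the other), which the report's authors would be better placed to provide.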
I'll admit that I've not read the whole article yet - I've had a skim-read over a few portions of it.
1. Part of the problem with research like this is that there is often a "preferred outcome". Just below the article heading, it shows collaboration between "LearnLM Team, Google & Eedi". Well, at least two of those have a massive interest in showing AI working effectively.
2. The questions seem to be multiple choice. How would this work for open-ended questions?
3. Judging by the few AI responses given in the article, it would seem that the questions are fairly easy. How would AI cope with more tricky topics? (Now combine this with point 2.)
4. Example (b) of page 2 shows that the AI hasn't really explained what a ratio is, nor how to form the ratio. It has simply asked the student to count squares, then tells the student the ratio before asking the student to simplify it. The student seems to imply that s/he does not understand where the 8 comes from, judging by the emoji. This would be picked up by a human and explained.
5. Given that the AI responses are then moderated by tutors, the student is essentially getting the benefit of two tutors, not one. Of course the results will show improvement.
6. Teachers/tutors are usually teaching by themselves. This means that if a teacher/tutor makes a mistake, the student is disadvantaged and may not know that an error has been made. If the AI is left to its own devices and not moderated by a human tutor, the negative impact needs to be assessed. This study does not seem to account for that in any way.
7. Of course tutors will also learn things from the AI responses. In mathematics, there are often many ways to explain something, and teachers tend to have their preferred explanation of each topic. If teachers observe lessons from another Maths teacher (especially in another school), they are highly likely to learn something too. I have physically observed lessons in over 12 different schools, across 3 continents (and I've watched many more instructional mathematical videos), and I have learned new ways to explain things.
8. The table on page 5 shows that all of the tutors who were picked for the semi-structured interviews were female. Were all 17 tutors female?
9. There are nuances in communication that a human teacher/tutor will pick up on: tones of voice and gestures that indicate whether a student understands, and the way a student's response is worded may reveal a deeper misconception.
10. Ultimately, you cannot get rid of human interaction/teaching. Even if all the issues above (which are by no means exhaustive) are resolved, people need interaction with other humans. Like a child with a new toy, soon reality sets in and the toy loses its novelty and entertainment value. So too with AI: students may find it new and helpful at first, but there will soon be a longing for real human interaction, with someone who genuinely cares about them. That human factor will never be replaced, and the longing for human praise and understanding will result in even less attentive students, which will lead to lower educational achievement.