
Brief Summary:
This Decoder episode explores the major challenges that generative AI tools like ChatGPT are creating in education, going well beyond fears of cheating. Through expert interviews and teacher testimonies, the discussion unpacks how AI is shaking the philosophical core of teaching and learning, raising hard questions about purpose, process, and what authentic education means in the age of automation.
The episode kicks off with host Nilay Patel reflecting on the growing anxieties about generative AI in schools. While many are alarmed about students using ChatGPT to cheat, the show quickly reveals that the real dilemma goes much deeper.
"What are we even doing here? What's the point?"
This feeling, echoed by many teachers, signals an existential crisis—not just a practical one. Eevee May, an instructional designer, points out the absurdity that could arise if AI becomes fully embedded in education:
"We'll have courses created by AI, graded by AI, with submissions from students absolutely generated by AI. So it begs the question, what are we even doing here in higher ed?"
The fundamental issue isn't just cheating, but the very philosophy of education itself: Why do we learn, and how?
Nilay then speaks with Dr. Adam Dubé of McGill University to unpack the concept of "digital natives": the idea that young people are naturally skilled with technology simply because they grew up with it.
"Digital natives do not exist."
Dr. Dubé explains that research has debunked this concept: tech skills come from experience, not from age or the era you were born into. Just because kids use YouTube or play games doesn't mean they know how to use technology for learning.
They discuss examples like students in STEM fields who often don't understand fundamental computer concepts, such as file systems, even though they use technology constantly. This leads to a powerful comparison:
"We take for granted that the frameworks of the past will be intuitively understood by the kids in the future, but those frameworks change."
AI, with its natural language interfaces and chatbots, is now reshaping the ways people interact with computers, posing bigger questions about what skills students are really gaining.
Several teachers express concern over how quickly schools are adopting new technologies—without really understanding their long-term impact.
Ann Lutz Fernandez, a recently retired English teacher, shares:
"We're treating children like guinea pigs on an untested and unproven and unregulated host of products."
She warns about repeating past mistakes, like the rush into one-to-one device programs during the pandemic, and worries we're heading into another tech "bandwagon" moment that could actively harm learning. Dr. Dubé highlights a study in which most kids used iPads incorrectly, tapping randomly through educational apps that imposed no penalty for mistakes, which led to poor learning outcomes.
"This sort of idea that kids get how to work with technology is actually a byproduct of an oversimplified design of a lot of the apps that are used for learning."
Despite these issues, data shows AI tools are quickly being adopted: Pew found a quarter of teens used ChatGPT for schoolwork in early 2025, and other studies show even higher rates. But what are they really using it for?
The percentage of students admitting to outright cheating has stayed at about 10%—the same rate regardless of technology changes.
The rush and confusion around AI have led to chaotic and mixed policies in schools:
"It is very fractured and it depends on who the leader of that school system is and on their view of technology."
Even within the same school, teachers find themselves in conflict.
Some are excited about the help AI offers, like Paul, a middle school science teacher:
"By partnering with an AI tool like ChatGPT, a lot of this becomes way more doable… I find that I'm able to integrate better strategies into my teaching because I know that I have support."
Yet others, like Eevee May, are skeptical about its utility and frustrated by the pressure to adopt it:
"Despite many attempts to incorporate it into my workflow, I found that Gen AI is more trouble than it's worth… Just purely on a utilitarian level, I can do better work much faster when it comes to designing course materials."
And sometimes, AI creates entirely new problems. Anne Rubenstein describes a translation project where AI not only performed poorly but inserted entire fake paragraphs, and repairing its output nearly doubled the project's cost:
"It hallucinated. It made things up. It inserted entire sentences and in a couple of cases entire paragraphs into the document that did not exist in the original."
For teachers and staff, generative AI is more than a classroom innovation; it's a workplace and labor issue. If AI is forced upon teachers for lesson planning or grading, it can become demoralizing.
"Whenever we remove workers' autonomy… people get demotivated. It's not surprising that workers feel demotivated when generative AI is being forced into their workplace, because they have less of a say on what they get to do."
When imposed top-down, AI threatens teachers' expertise, creativity, and job satisfaction, raising red flags for both labor relations and education quality.
Turning back to students, there's a surprising twist: some types of AI, especially chatbots, can increase motivation and engagement, but with a hidden drawback.
"It does seem that using these tools can increase affect and motivation when it was designed to do that... But at the same time, sometimes it's wrong."
The ease of getting answers can be addictive: students receive constant praise and quick solutions. But when those solutions are wrong, or simply let students coast without thinking, teachers are left to push back.
Dr. Dubé compares it to calculators. Studies showed that increased calculator use went hand in hand with decreased mathematical understanding:
"We actually saw a decrease in math scores directly correlated with increase in calculator use because when you're using a tool to do thinking for you, you're not practicing and actually encoding the information well enough to recall it later on."
MIT studies revealed that students who wrote essays with ChatGPT remembered far less about them than those who wrote alone.
The result? Students may be producing "polished" work, but they aren't actually learning or remembering the material.
Some instructors are grappling with these challenges by being transparent with students about AI tools and by focusing on the process of learning, not just the final product.
Anne Rubenstein involves students in discussions about AI's limitations:
"What I tell them is, as historians, we have a social responsibility to get our facts exactly right... you cannot use [ChatGPT] in conducting historical research or writing about history because it is the exact opposite of what historians are supposed to do."
This encourages students to understand not just how to use tools, but when and why they might be inappropriate.
Brian S, a technical communications teacher, recognizes a disconnect between what teachers value (the learning process) and what students prioritize (the final grade):
"The grade matters a lot more to them than to me... So it makes sense that if there's a tool that promises a product that will help them pass so that they can concentrate on the stuff that they feel is more important to their career, of course, they'll think about using it."
This highlights a key tension: Our systems reward the end product, not the process or effort. When tools make it easy to skip the process, real learning can collapse.
Todd Harper, a game design professor, closes with a resonant reflection on the existential stakes:
"The point is that when they're looking up sources or drawing the art or creating the thing, they are exercising the skills and the learning that we want them to develop… The process is the important bit. And if what's turned into me looks plausible, which LLMs can be good at—if the student didn't do it, if there was no process, then what are we doing here? No real learning has happened. All that's happened is that somebody ticked off a box on a to-do list. And I think it hurts students when that happens."
"What we need is not more tools that produce product. What we need is fewer stressors—financial, cultural, social, whatever—and less pressure on students so that they can actually do the things they need to do to get an education."
Ultimately, the episode concludes that the crisis isn't just about AI, but about the structures of education itself—what we incentivize, what we value, and how we measure true learning. The automation age demands a hard look at these building blocks before we risk hollowing out their deeper meaning.
This Decoder episode doesn't just warn about AI "cheating" in schools. Instead, it opens a much-needed conversation about the purpose of education, the real risks of automating both student and teacher labor, and what's worth preserving in the journey of learning.
"If the process is lost, so is the learning."
Key Takeaway:
Before rushing to embrace or ban AI, schools and educators must decide what really matters in learning—and how to protect it.