Highlights
- We should see it as unhealthy for our children to potentially form deep emotional bonds with a next-token prediction algorithm.
- A healthy AI EdTech ecosystem should be one of surgically-deployed tools for specific use cases, not a comprehensive takeover of the classroom.
- While it is absolutely critical that students in American schools should learn to use AI, it is quite another matter to suggest that they should use AI to learn.
Editor's Note: The final essay in our symposium on AI in the Classroom is written by Brad Littlejohn, Director of Programs and Education at American Compass. Brad offers a thoughtful response to our other writers, and urges caution on the deployment of AI in schools, noting that "education is a matter of slow and painful midwifery, the formation of mental pathways through effort, struggle, and repetition. The aspiration of AI...is to bypass mental pathways through effortless, efficient, on-demand results." Read more below and be sure to check out the other essays in our series from Christos Makridis, Meg Leta Jones, Oren Cass, and Tim Estes.
In one of the earliest surviving texts on education, Socrates summons a nearby slave boy to help him win his argument with the nobleman Meno. Through a series of leading questions, he gets the untaught boy to demonstrate a special case of the Pythagorean theorem—and in the process, Plato’s theory of innate human knowledge. According to Plato, the role of the teacher is not so much to deposit information in an empty mind, but to elicit and bring to full expression an inchoate understanding already germinating in the child’s mind. The teacher’s role, in short, is that of a midwife.
It is an arresting metaphor, not least because midwifery fell quite out of fashion in the twentieth century. Convinced that the marvels of modern technique could replace the folksy wisdom of the midwife, we began to eagerly apply every new technology to the business of childbirth. At first, the results were encouraging, as both infant and maternal mortality rates fell precipitously, but eventually, science began to overplay its hand. Today, birth complications are again on the rise, as fully one-third of babies in America are delivered by C-section, triple the rate that the WHO considers optimal.
The technologization of education today parallels that of conception and birth—but with less impressive results. After millennia in which the theory and practice of education changed comparatively little, the 20th century witnessed the obsessive application of modern technique to try to crack this stubborn nut—how do you get a child to learn, especially if he doesn’t want to? The demands of universal education (a task the ancients would never have dreamed of) made this far more difficult and more urgent, and it was hoped that the latest fads from the industrial economy—scientific management, economies of scale, and finally computers—would produce better results. And again, for a time, they did. Literacy rates and other metrics rose steadily through the 20th century, but outcomes have begun to decline this century—precipitously in the past few years.
Desperate to reverse the decline, our educators and bureaucrats are reaching frantically for more technology as the answer. But, as with the proliferation of C-sections, all the data suggests that over-technologization is the chief culprit of our present woes. And it shouldn’t be hard to see why. The human condition, like it or not, is one in which the greatest goods come only through pain and patience: paradigmatically the birth of a child, but also the growth of that child through all the physical, mental, and moral struggles that enable his or her native capacities to fully emerge and mature. This is a process that is not particularly amenable to the application of clever hacks and technological shortcuts; indeed, it is not clear that our educational techniques have improved all that much since Plato, with most of the 20th century’s gains simply a result of vastly expanded educational access. It is entirely understandable that in an age of non-existent student attention spans, harassed educational administrators and exhausted teachers should turn with longing eyes upon AI as a potential savior. But it is far from clear why we should oblige them.
Certainly, the data from the last round of EdTech has not been encouraging. As Oren Cass notes in his wonderful essay, there is a great deal of difference between learning to use computers and using computers to learn, and most of the evidence suggests that the latter has been largely counterproductive. Similarly, while it is absolutely critical that students in American schools should learn to use AI, it is quite another matter to suggest that they should use AI to learn. In fact, a moment’s reflection should reveal that the very logic of AI is largely inimical to learning—if indeed education is a matter of slow and painful midwifery, the formation of mental pathways through effort, struggle, and repetition. The aspiration of AI, after all, is to bypass mental pathways through effortless, efficient, on-demand results.
Now, Christos Makridis is entirely fair to suggest that much of this is a matter of design. After all, one of the most wonderful things about digital technology is its malleability. If, to date, we have allowed our schools to be inundated by distraction and extraction machines, rather than genuine learning technologies, this was a matter of poor design decisions—or more fundamentally, of poor economic incentives that have rewarded predatory technologies. It is certainly conceivable that we could design educational technologies for flourishing. So I welcome Makridis’s suggestion of AI tutors programmed to force students to develop “intellectual tenacity” rather than generating effortless answers. Estes makes similar points in his call for “building intellectual ‘anti-fragility,’” arguing that it can be achieved through thoughtful design that follows five concrete guardrails, which I enthusiastically echo.
But, of course, such designs will not emerge—and certainly will not be profitably deployed at scale—from good intentions alone. They require the right incentive structures, which is where Meg Leta Jones’s points come in. Within a regulatory regime that makes it easy for schools to consent on behalf of parents, and that treats children’s data as a commodity to be handed over to the lowest bidder for the school’s EdTech contracts, we should hardly be surprised that predatory products predominate. Similarly, there is an urgent need for strict federal safety standards on child-facing AI tools, informed by research into the cognitive harms of overreliance on AI, and the psychological harms of early exposure to chatbots in particular.
It is worth pausing to note that the cognitive and psychological harms are distinct, and that it may be tricky to guard against both simultaneously. For instance, let’s imagine Makridis’s demanding personal AI tutor, designed to foster intellectual tenacity. Faced with a stubborn and relentless computer telling them to try harder, most students would be liable to give up quickly in tears of frustration. Accordingly, such bots may be designed to winsomely cajole rather than harshly chide. In fact, shouldn’t we attempt to design them to closely imitate the most effective human tutors: cultivating a posture of “tough love,” winning students’ trust and affection so that they will be incentivized to persevere through every difficulty—because they want to please their teacher and be like them? Well, no, I don’t think we should. We should see it as unhealthy for our children to potentially form deep emotional bonds with a next-token prediction algorithm. We don’t want them to desire to be like an inhuman impersonator. As Estes notes, we must not underestimate the harms inflicted on developing minds by “illusory friendship designed to exploit our deepest cognitive and emotional vulnerabilities.”
An interface designed for emotional flourishing, it seems, would be one that encouraged the student to invest minimal emotional energy in their device, and to cultivate instead rich relationships with their classmates and teachers. Such an interface, presumably, would not be terribly engaging and would not maximize time-on-device, which also presumably means that it would not be terribly profitable for the companies supplying such products. A healthy AI EdTech ecosystem should be one of surgically-deployed tools for specific use cases, not a comprehensive takeover of the classroom. It might look, in other words, like midwifery assisted by ultrasound, not a Caesarean section. That is, after all, what the revival of classical education over the past generation has been seeking to achieve, and as Estes notes, the practitioners of this pedagogy are those best positioned to steer AI educational tools toward helping students grow in mastery.
Will this take time to get right? You bet; very few educators have been trained in the techniques of Socratic discussion and oral examinations that will have to be central to pedagogy in the AI era. But why should that be a problem? Why the breakneck haste to deploy AI into every classroom in America? It is difficult to make sense of the urgency—except as reflecting some vague sense that transformative results will be reaped from the exercise, that our education system will receive an adrenaline shot that will send it galloping ahead of global competitors. But that’s not how education works. Gains are only ever slow and incremental, and frequently turn out to be entirely illusory, while the harms wrought by ill-conceived educational fads can be lifelong. With the mental, moral, and emotional development of 50 million American youth on the line, perhaps we can afford to take the time to get this particular experiment right.
Brad Littlejohn is Director of Programs and Education at American Compass. From 2022 to 2025, he was a fellow at the Ethics and Public Policy Center.