Highlights
- Treating AI merely as a threat to contain addresses only half the equation: AI is here to stay, and how we manage its risks and design it to augment human capabilities is what matters most.
- AI in education is not just an external force to be fenced in; it is also a tool whose impact will ultimately reflect the values and goals we build into it.
- The goal should not be to merely tame AI’s disruptions, but to shape AI in education in such a way that it helps produce healthier, wiser, more fulfilled students.
Editor's Note: This week, the Institute for Family Studies is hosting a symposium on AI in the Classroom. This special series of essays will focus on what role AI should have in schools, the potential and danger of AI technology, and how to best safeguard the well-being of students and families. Respondents include: Christos Makridis, Meg Leta Jones, Oren Cass, Tim Estes, and Brad Littlejohn. Starting things off is Christos Makridis, who makes the case that with purposeful design, AI can be used to promote human flourishing in education.
Recent debates over artificial intelligence in schools have understandably zeroed in on the risks. Educators and parents worry about biased algorithms, invasive data practices, and other harms from rushed AI adoption. These concerns are well-founded: research shows that AI systems in education can introduce risks and harms that extend beyond bias and discrimination.
But treating AI merely as a threat to contain addresses only half the equation: AI is here to stay, and how we manage its risks and design it to augment human capabilities is what matters most. In this sense, the relevant comparison is not between AI and a perfect world, but between AI and the status quo—one that already includes harms and failures by fallible humans.
Missing from the current conversation is an affirmative vision for what we want AI in education to achieve. Yes, guardrails are needed, but so is a guiding star. AI in education is not just an external force to be fenced in; it is also a tool whose impact will ultimately reflect the values and goals we build into it. The question, then, is not only “How do we prevent harm?” but also “How do we design AI to actively promote the well-being and growth of students?”
This shift in framing—from damage control to purposeful design—opens the door to a more constructive approach. Rather than aiming for the absence of negatives, we should set our sights on the presence of positives: safer, healthier, more enriching learning experiences. Ultimately, the success of AI in schools will depend on whether its implementation remains human-centered.
In a recent working paper, we introduce a framework called “Flourishing by Design” that builds on the Global Flourishing Study (GFS). The GFS was led by Baylor University and the Human Flourishing Program at Harvard University, in partnership with Gallup, and fielded a longitudinal survey of over 200,000 people across 22 countries, measuring six dimensions of human flourishing. Building on this vein of human development and well-being research, our framework applies to learner flourishing and the role of AI. We contend that technology ethics should go beyond box-checking and “ethics washing” and instead be embedded into the very fabric of product and policy development, tied directly to multi-dimensional outcomes that matter for students’ lives.
Put differently, when companies—especially in the tech sector—build products or services, they need to think about the end use and the impact on human flourishing from the start. If we had done that at the onset of the internet revolution, we would have set up property rights over data (instead of letting digital intermediaries extract and monetize our digital footprints) and created social media platforms that promote meaningful relationship building (instead of fueling hyper-personalization and “keeping up with the Joneses” phenomena).
One clear area where AI could support flourishing is by cultivating intellectual tenacity, which refers to the willingness to engage with difficult problems, resist premature closure, and revise one’s beliefs when faced with new evidence. Current educational models often reward speed, correctness, and compliance over thoughtful perseverance. AI systems, if intentionally designed, could help reverse this trend. For example, rather than steering students toward the fastest path to the right answer, an AI tutor could detect when a learner is struggling productively and offer prompts that encourage deeper inquiry: “Would you like to explore why this approach didn’t work?” or “Try explaining your reasoning out loud before we move on.” Over time, such personalized nudges—combined with reflection tools and feedback loops—could reinforce habits of intellectual resilience and broader cognitive skills.
A flourishing-by-design approach would require that educational AI tools be evaluated and optimized against these broader outcomes—not just narrow performance metrics. For example, does an AI homework helper improve a student’s understanding and self-confidence? Does an AI tutoring system enhance learning without diminishing curiosity or creativity? These questions elevate flourishing as a core design and accountability principle—rather than treating student well-being as an afterthought or a lucky side-effect.
To be sure, technology is the icing, not the cake. If the institutions that lay the foundation for our economy and society, especially family and faith-based organizations, were to deteriorate further, technology would be no panacea. But it could be an amplifier.
Why propose a new framework when there are already so many, especially following the rise of corporate social responsibility (CSR), socially responsible investing (SRI), and, more recently, environmental, social, and governance (ESG) frameworks? Because current approaches—from tech industry self-regulation to education-specific guidelines—have clear limitations.
While well-intentioned, past frameworks often devolve into check-the-box exercises. Nearly every major company now publishes ESG or “responsible AI” reports, yet tangible change can be elusive, and ratings agencies cannot even agree on what defines a credible ESG score. Compliance-driven frameworks tend to fixate on avoiding liabilities—ensuring an algorithm does not blatantly violate a law or embarrass the company—rather than on maximizing social benefit. They also often compartmentalize issues (privacy versus innovation, bias versus efficiency) instead of seeking solutions that advance multiple values. In education, for instance, debates often pit privacy against equity, implying a trade-off between protecting student data and using data to help at-risk learners.
But this trade-off can be overcome. For example, new privacy-preserving data practices, such as secure data-sharing via cryptographic techniques, allow schools and vendors to collaborate without exposing sensitive information. Although our paper spells out more detail, and further work is surely needed, the Flourishing by Design framework poses a transparent test: does an organization make people better off along the six dimensions of human flourishing measured in the GFS?
The conversation about AI in schools is at a crossroads. Up until now, much of it has oscillated between excitement over AI’s promise and alarm over its perils. What’s needed instead is a unifying vision that channels innovation toward what truly matters. A flourishing-based model provides that north star. It does not dismiss the real warnings sounded by critics, but rather demands even greater accountability for long-term, human-centered, measurable outcomes. It also urges educators, developers, and regulators to move beyond a defensive crouch. The goal should not be to merely tame AI’s disruptions, but to shape AI in education in such a way that it helps produce healthier, wiser, more fulfilled students.
Meeting this task will require effort: new design methodologies, cross-disciplinary input, and updated policy tools. The Global Flourishing Study and the associated Flourishing by Design framework are a start. If we succeed, the narrative of AI in education could shift from one of narrowly averted harms to one of empowering transformation: technology that not only respects the dignity of human beings but actively furthers their flourishing.
Christos A. Makridis is an associate research professor at Arizona State University, digital fellow at the Digital Economy Lab at Stanford University, associate faculty at the Complexity Science Hub, non-resident fellow at the Institute for Studies of Religion at Baylor University, and visiting faculty at the Institute for the Future at University of Nicosia.
Editor's Note: The opinions expressed in this article are those of the author and do not necessarily reflect the official policy or views of the Institute for Family Studies.

Photo credit: Shutterstock