Comment
Docket ID ED-2025-OS-0118
20 August 2025
Secretary Linda McMahon
U.S. Department of Education
400 Maryland Ave SW
Washington, DC 20202
Re: “Proposed Priority and Definitions—Secretary’s Supplemental Priority and Definitions on Advancing Artificial Intelligence in Education”
Dear Secretary McMahon,
This comment is submitted in response to the Department of Education’s proposed priority to integrate artificial intelligence (A.I.) into U.S. K-12 and higher education.1
The Department’s proposed priority seeks to “support efforts that expand student understanding of AI and its real-world applications.”2 It also requests public input for the Department’s efforts at establishing “the appropriate integration of AI into education, providing AI training for educators, and fostering early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.”3
Generally, we agree with the Secretary that in a world where A.I. is “rapidly reshaping the future of education, work, learning, and daily life…it is increasingly important for students to develop AI literacy.”4 Sharing this concern, we support provisions (a)(i), (a)(vi), and (a)(vii) of the proposed priority, as they mark commonsense steps toward equipping the next generation of teachers and students with the skills they need to master this important new technology. That is, we support these provisions because they treat A.I. as a subject matter to be studied, much as the computer laboratory once treated computing, rather than as a technology to be brought into every classroom, as ed tech has done in making computer technology the very basis of American education.
We also applaud the Department’s commitment to prioritize grantmaking for responsible or “appropriate methods” of A.I. integration that supports, rather than substitutes for, the work of educators and classroom engagement.5 To this end, we are generally supportive of provision (a)(x) of the proposed priority, which aims to “[b]uild evidence of appropriate methods of integrating AI into education.”6 However, “appropriate” integration necessarily assumes the possibility of “inappropriate” integration. We encourage the Department to define inappropriate integration as integration that fails to provide meaningful parental choice in how A.I. is used in the classroom or that fails to treat A.I. in American education as a discrete subject of study. (More on both of these topics below.)
However, as written, the Department’s proposed priority undermines its own principles by imposing a top-down mandate that would foist untested and untrusted technologies upon our country’s educational institutions and, consequently, upon American children and families. If carried out as described, the Secretary’s grantmaking priorities will subvert the rights of parents and states to determine what is best for their families, place students in harm’s way, and, based on existing research and experience, undermine rather than advance learning outcomes. We respectfully urge the Secretary to direct the Department to prioritize research and to gather input from parents, educators, and communities to determine “appropriate methods” for integrating A.I. in education before funding the incorporation of A.I. technologies into the classroom. We believe such an approach is necessary to responsibly integrate A.I. in American education, earn the public’s trust, and secure the flourishing of students.
A.I. Education vs. Educational A.I. Technology
The Secretary’s proposed priority is divided into two parts. Section (a) deals with expanding the “understanding of artificial intelligence” by incorporating A.I. education into existing curricula. Section (b) deals with expanding the “appropriate use of artificial intelligence technology in education.” Generally, the first is aimed at incorporating a new kind of “technological education” (i.e., education about technology) into American schools and the second is aimed at incorporating new technologies into American schools.
This distinction between “tech ed” and “ed tech” is critical in the comment that follows. As noted above, technological education in A.I. tools will be essential in a world where A.I. is “rapidly reshaping the future of education, work, learning, and daily life.” Accomplishing this, however, does not require all or most of education to be mediated by A.I. technologies, whether marketed as educational or otherwise. Put simply, learning about A.I. is not the same thing as learning by A.I., and it certainly does not necessitate the active incorporation of A.I. technologies into every classroom, every subject, every assignment, and every school-issued device.
In the past, America has confined technological education to dedicated classrooms where certain technologies can be accessed, used, and learned for specific purposes. Historically, shop class, home economics, and computer learning were all incorporated into education in this manner. This approach facilitated knowledge of these technical arts while preserving the cognitive primacy of the oral and written word as mediated by handwritten or printed texts. It also recognizes that all tools—from hammers to sewing machines to computers—are designed to assist humans with a specific task or set of tasks, and that to allow them into subjects where they are inappropriate is to undermine those subjects.
This “focused” approach to technological education is especially important when it comes to incorporating new technologies into the classroom, as our experience with “ed tech,” i.e., the mandatory issuance of personal computers to students, underscores. That shift marked a fundamental transition away from a liberal arts education, in which every subject had its own place in a larger curriculum along with its own way of doing things, toward one in which computers became the very basis of learning, childhood personality, and even in-school sociality. This paradigm has been a disaster,7 and incorporating A.I. under these conditions will inevitably result in it becoming the very basis of all the cognitive activity of American schooling. As American economist Oren Cass has helpfully put it: “the existence of the Computer Lab reflected the importance of learning how to use a computer, not the importance of using a computer to learn anything else.”8 The same should apply to A.I. in the classroom. That it is important for American students to learn how to use A.I. does not mean that A.I. technologies must be used to learn everything else. In fact, as we will discuss, there are reasons for A.I. not to be used in this way. To that end, we support the “focused” approach to A.I. education reflected in provisions (a)(i), (a)(vi), and (a)(vii) of the proposed priority, and we encourage the Department to prioritize the integration of A.I. education in this manner.
Human Flourishing Eschewed Again
In his January 23, 2025, Executive Order, “Removing Barriers to American Leadership in Artificial Intelligence,” President Trump stated that his administration would work to develop policy that would “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”9
On its face, such language indicated a commitment by the Administration to curb libertarian impulses where they threaten the well-being of American families, workers, and children. But, to the surprise of many, the Administration has subsequently deemphasized this critical dimension of A.I. policy.
In fact, despite the President’s express commitment to pursue A.I. policy that promotes human flourishing, the Administration has largely remained silent on how it aims to achieve this goal or balance it with its other goals. For example, in its July 2025 A.I. Action Plan—by far the most comprehensive A.I. policy proposal published by the Administration—the White House excluded human flourishing from the three pillars of its plan and mentioned it only once.10
The Department’s proposal is similarly silent on this critical dimension of education, which, if nothing else, is a process by which human beings are formed to be free, good, excellent, and happy—that is, flourishing. Questions and concerns regarding A.I.’s effects on student and teacher well-being have been eschewed in pursuit of economic and national security priorities. As written, the Secretary’s proposal assumes, without evidence and against experience, that A.I. technologies (to be distinguished from A.I. education) will improve learning outcomes for all students—whether they are advanced, below grade level, or living with disabilities.
What makes the current proposal concerning is not that A.I. technologies are intrinsically opposed to human flourishing as such, or that A.I. education should be excluded from a school’s curriculum altogether. Rather, the problem is the Department’s move to accelerate the integration of A.I. technologies in the classroom without the requisite public participation, without evidence that doing so will improve learning outcomes, and without a vision for what flourishing even means for an American child. Left unchanged, such a proposal will be inimical to securing public trust and evidence-based education standards, not to mention the success of the project itself, all of which are vital for purely humanistic reasons as well as for accomplishing the Administration’s stated goal of America leading well in the age of A.I.
During the President’s first term, the Administration—while optimistic about A.I. and generally discouraging of regulatory actions that would “needlessly hamper AI innovation and growth”—understood that federal A.I. regulation needed to adhere to various principles “when formulating regulatory and non-regulatory approaches to the design, development, deployment, and operation of AI applications.”11 In the Administration’s own words, a principled approach was crucial “to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.” Developed under the supervision of then-White House Chief Technology Officer Michael Kratsios, these principles included, among others: (1) securing public trust in A.I., (2) allowing for public participation in all stages of the rule-making process, and (3) making policy decisions based on science (i.e., research evidence).12 While we do not believe that the third principle is sufficient in and of itself in this context (education is a perennial human endeavor, and the weight of wisdom, history, and experience must also be accounted for), we nonetheless agree that an evidence-based incorporation of A.I. into American schools is vastly superior to what we are currently entertaining: namely, incorporation of A.I. without evidence. We therefore urge the Secretary, along with the rest of the Administration, to pursue priorities and policies that adhere to these stated principles.
I. The Problem of Public Trust
Today, public trust remains arguably the greatest hurdle to integrating A.I. into society. From the workforce to education and beyond, the lack of public trust in A.I., in Big Tech companies in general, and in the Administration’s close relationship with technological interests dramatically undermines this effort. According to our research, majorities of lower- and middle-income adults (52% of those who make $40,000 to $99,000; 60% of those who make $40,000 or less) are concerned that A.I. is a threat.13 We also found that fewer than a quarter of Trump voters supported a federal A.I. moratorium that would restrict states’ abilities to regulate A.I.14 In fact, the majority of voters opposed the A.I. moratorium, with the highest opposition among younger generations (70% of 18- to 34-year-olds).15 Moreover, with regard to education, other findings show that the majority of parents do not want A.I. in their children’s classrooms.16
It is absolutely critical that any integration of A.I. into education be evidence-based if public trust is to be secured. Therefore, we urge the Secretary (1) to offer greater clarity on “appropriate methods of integrating AI into education” by defining or issuing guidance on what “appropriate methods” involve, and (2) to prioritize research efforts to develop evidence-based “appropriate methods” for A.I. integration before embedding A.I. technologies into any K-12 classroom, teacher training, or other education-related activities and environments, as outlined under section (b) of the proposal.
II. The Problem of Public Participation
A. States’ Rights
In its first set of proposed grantmaking priorities, the Department included a proposal for “Returning Education to the States.”17 Through this priority, the Department seeks to empower States, Tribes, and local communities to “take the lead in formulating, developing, and implementing policies that best serve students, families, and educators.” The Department’s justification for the priority was simple:
One-size-fits-all mandates from the federal government create obstacles, limiting the ability of State, Tribal, local, and institutional leaders to make decisions in the best interest of their students and their workforce.18
We could not agree more. However, the Department’s latest proposed priority to integrate A.I. into schools threatens to repeat the very errors it seeks to avoid. Issuing guidance and proposed priorities designed to integrate A.I. technologies (not just A.I. education) into every institution of K-12 and higher education is a top-down mandate that does the opposite of “empowering States and Tribes to take the lead in formulating, developing, and implementing policies that best serve students, families, and educators within their communities.”
Clarification is needed on how the Department’s proposed priority on integrating A.I. in education complements its priority to empower State, Tribal, local, and institutional leaders to make decisions in the best interest of their students and their workforce.
B. Parental Rights
“Families deserve an education system that reflects the unique needs of the communities in which they live,” the Department wrote in its first set of proposed priorities.19 The input of parents and legal guardians is key to determining the unique educational needs of the families in each community. In seeking to prioritize integration of A.I. in education, the Department must ensure that it respects the rights and duties of parents and legal guardians as the primary caretakers of children.
As Georgetown University’s Dr. Meg Leta Jones has argued, existing administrative guidance regarding the integration of technology in education undermines parental rights and thus children’s safety.20 In July 2020, the Federal Trade Commission (FTC) released guidance on the Children’s Online Privacy Protection Act (COPPA), stating that “schools may act as the parent’s agent and can consent under COPPA to the collection of kids’ information on the parent’s behalf.”21 The guidance limited the ability of schools to consent on behalf of parents to “the educational context – where an operator collects personal information from students for the use and benefit of the school, and for no other commercial purpose.”22 Likewise, the Family Educational Rights and Privacy Act (FERPA) has undergone expansive administrative interpretations that allow ed tech companies to access students’ records without parental consent. As passed by Congress, FERPA narrowly permitted educators and other school personnel to access records for “legitimate educational interests.”23 As Leta Jones notes, today, “[e]ducational technology companies now routinely qualify as ‘school officials,’ despite FERPA’s requirements.”24 Despite formal complaints of FERPA violations, the Department under the Biden administration declined to enforce the statute.
Unsurprisingly, violations of children’s data privacy are systemic in America’s schools. In its 2022 K-12 EdTech Safety Benchmark report (published in 2024), Internet Safety Labs found that, of “the technology recommended and used by U.S. educational institutions,”
Nearly all apps (96%) share children’s personal information with third parties, 78% of the time with advertising and monetization entities, typically without the knowledge or consent of the users or the schools, making them unsafe.25
It is no wonder, then, that according to one survey, 91% of parents do not want their children using or interacting with A.I. technology in the classroom.26 Before A.I. technology is integrated into any school, the Department should issue rules and guidance that reassert the original intent of FERPA and COPPA by outlining strict safety standards regarding access to students’ data, eliminating the ability of educational technology companies to use and access student data without explicit parental consent, penalizing violations of COPPA and FERPA, and requiring parental consent for the integration of new technologies in the classroom. Put simply, the prior generation of ed tech transformed American education into a field for data enrichment and turned children into objects of extraction. A.I. cannot be safely incorporated into American education in any manner unless this systematic practice is corrected and curtailed.
III. The Problem of Evidence-Based Policy
A. Learning Outcomes
As noted above, the Department’s proposal presupposes that A.I. technologies improve learning outcomes. Generally, however, existing research shows that more technology in classrooms does not produce better academic performance. According to a landmark study by the Organization for Economic Co-operation and Development, students who used computers “very frequently” at school had worse learning outcomes than those who used them moderately or less frequently.27 And a 2019 review of existing research found that “[i]nitiatives that expand access to computers… do not improve K-12 grades and test-scores.”28 In fact, as screens have become more ubiquitous in schools and in American society, global test scores in reading, math, and science have been steadily dropping,29 reaching their lowest point in half a century in 2022.30 Despite these and other findings, the U.S. continues to spend $30 billion annually on integrating ed tech into schools.31
Though new, A.I. technologies will build upon existing ed tech platforms and threaten to accelerate these effects. In a groundbreaking study published this year by the MIT Media Lab, individuals who used large language models like ChatGPT to write essays over a four-month period “consistently underperformed at neural, linguistic, and behavioral levels” compared with their counterparts who did not.32 To be sure, the integration of A.I. technologies extends well beyond the use of applications like ChatGPT. But at the very least, these findings should deter the Department from funding the integration of A.I. technologies into the classroom until further research can determine the effects of these technologies on learning outcomes. To this end, we again underscore our support for provision (a)(x) of the proposed priority and the Secretary’s other research-focused priorities.
B. Known Harms to Minors
Today, it is well known that ed tech—specifically digital devices and applications—exposes students to various harms. As already mentioned, almost all the ed tech apps used or recommended by schools share children’s personal data.33 And laptops like Google’s Chromebooks have long had poor content filters and overly complicated parental controls, making it easy for minors to access age-inappropriate content like pornography.34
Current A.I. technologies, including those being marketed as ed tech, expose students to similar harms. A.I. teaching assistants and tutors are fundamentally social in nature, interacting with students in ways that mimic human conversation. Today, three-quarters of teens have interacted with A.I. chatbots, and a third of those users have reported being made to feel uncomfortable by something the A.I. has said or done.35 A Common Sense Media report published this year concluded that A.I. chatbots “pose significant risks to teens and children under 18.”36 Such risks include “encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions.”37
These risks already exist with Big Tech and ed tech products alike. Companies like Meta and X have deployed chatbots that will engage in sexual conversation with users they know are minors. Recently, X released an A.I. companion, accessible to minors, that engages users in a sexual and romantic manner.38
Similarly, according to a Wall Street Journal exposé, Meta has made “multiple internal decisions to loosen the guardrails around the bots to make them as engaging as possible.”39 This included removing explicit content bans on romantic or sexual discourse, even when the chatbot is interacting with minors.40 Sadly, A.I. products developed by ed tech companies are not much more “age appropriate.” For example, ed tech company KnowUnity’s “School GPT” has given users recipes for fentanyl and encouraged harmful eating behaviors.41 Other ed tech A.I. applications, like CourseHero, have even given instructions for synthesizing date rape drugs.42
This is to say nothing of the problem of A.I.-generated “deepfake” nude images. Schools are already having to discipline students who use A.I. to generate child sexual abuse material mimicking the likenesses of other students.43
Some students have even disseminated such content to harass or extort their peers. Sadly, this is a growing problem. According to research published earlier this year, around 1 in 8 teens aged 13 to 17 personally knows someone who has been a victim of deepfake imagery.44
Parents and schools have already been struggling for years to rein in these and other collateral harms of educational technologies. However, as currently written, the Department’s proposal threatens to expose American youth to sustained harms by supporting the integration of A.I. technologies in the classroom to assist students, including the use of A.I.-driven “virtual teaching assistants” and “tutoring.”45 Given that A.I. companies are already using their technologies to prey on kids, the Department should be wary of allowing this industry access to children at all, and certainly not without first delineating robust safeguards and guidelines to ensure their protection.
Conclusion
If the Department wishes to prioritize A.I. education and the integration of A.I. technology into classrooms, it should first define “appropriate methods” and develop robust guidelines to ensure that students and families will flourish. This, of course, will require research, which is why we commend provision (a)(x) of the proposed priority. But it will also require input from parents, educators, advocates, and technologists. We strongly urge the Secretary to prioritize research to determine what methods and uses of A.I. education and technology best serve students, and to seek public input to develop safeguards and guidelines to protect students before putting the weight of the federal government behind accelerating A.I. preeminence in the classroom.
Respectfully,
Michael Toscano
Director, Family First Technology Initiative
The Institute for Family Studies
Jared Hayden
Policy Analyst, Family First Technology Initiative
The Institute for Family Studies