Highlights
- With the help of the new Executive Order, AI companies threaten to shove more addictive, dehumanizing screens into our communities.
- The only place where decisions about AI in children’s lives should be made is in local schools where parents can participate.
- AI and Big Tech’s new partnership with the teachers’ union represents a billion-dollar strategy to embed AI throughout the classroom before parents can resist.
Editor’s Note: Up next in our symposium on AI in the Classroom is Georgetown University professor and tech law expert Meg Leta Jones, who warns that the integration of AI in schools poses a significant threat to parental rights.
In June, the American Federation of Teachers (AFT), the second largest union in the country, announced a partnership with OpenAI, Microsoft, and Anthropic to further integrate Artificial Intelligence into schools across the country. AFT President Randi Weingarten explained that a new national AI training hub in New York City, funded by tech companies, will be “where companies come to the union to create standards.” This came on the heels of Amazon, Apple, Google, Microsoft, Meta, and Roblox signing the new White House pledge to invest in AI in K-12 education. A continuation of corporate and political capture of classrooms, the alliances represent the powerful forces working against families and local communities trying to raise non-iPad kids.
These forces build barriers I discovered firsthand when I registered my kids for school. I was delighted to see a unique box on the online registration form. By checking it, I would agree to the Technology Acceptable Use Policy, confirming that I “want my student to receive access to school devices, technology, and network.” I enthusiastically chose not to check the box. As a privacy scholar, I was confident I had effectively declined to consent to my children using individual devices, and to the data collection that accompanies them, while under school care.
I was wrong. When I followed up, administrative leadership informed me that the district considers technology integration part of its “blended learning curriculum” and that teachers cannot effectively conduct classroom sessions in two different ways, so parents are not able to opt out. This scenario plays out in schools nationwide, where frustrated parents discover their choices about their children’s engagement with educational technology companies are essentially meaningless, despite federal laws that require their consent.
This erasure of parental rights reveals the scope of administrative overreach in both tech and education policy. Congress democratically established clear protections for families: the Children’s Online Privacy Protection Act (COPPA) requires companies to obtain parental consent before collecting data from children under 13, and the Family Educational Rights and Privacy Act (FERPA) protects student records from commercial exploitation. Yet through administrative excess and enforcement abdication, federal agencies have systematically dismantled these legislative protections, transforming schools from educational institutions serving families into data collection intermediaries doing the bidding of Big Tech. The overreach has paved the way for top-down, federally directed, and corporate-captured AI that will further weaken American students.
FTC: “Schools Can Consent on Behalf of Parents”
The foundation of this systematic undermining lies in how federal agencies have reinterpreted the foundational privacy laws that Congress passed to protect families. COPPA, passed in 1998, represented Congress’s recognition that children deserve special protection in the digital age and that parents, not corporations or government agencies, should make decisions about their children’s privacy. Then, through two “advisory” writings—not rulemakings or any official process involving the public or elected officials—the FTC declared that schools are able to consent on behalf of parents.
This occurred first in 1999 as a statement of purpose accompanying the initial implementation of COPPA when the FTC stated, “the Rule does not preclude schools from acting as intermediaries between operators and parents in the notice and consent process, or from serving as the parents’ agent in the process.” This concept is not part of the actual law.
Federal agencies have systematically dismantled legislative protections, transforming schools from educational institutions serving families into data collection intermediaries doing the bidding of Big Tech.
Second, the unsettling phrase “consent on behalf of parents” came in a 2020 Q&A-style guidance, stating that as long as the edtech company limits use of the child’s information to the educational context authorized by the school, the operator can presume that the school’s authorization is based on the school’s having obtained the parent’s consent. As a best practice, the guidance says, schools should consider making such notices available to parents and consider the feasibility of allowing parents to review the personal information collected. Worse still, schools are not letting parents opt out—meaning even when parents object to the use of these tools, schools maintain that they are consenting on behalf of parents.
The practical result is that most parents have absolutely no idea what applications or technologies their children are using in school. They don’t know where to start looking, and when they do investigate, they discover their districts have approved hundreds of apps that could be deployed in their children’s classrooms at any time. The FTC guidance allows AI companies to simply assume that schools have obtained parental consent, but in reality, no such consent exists. Parents aren’t consenting to anything; they’re completely unaware that consent decisions are supposedly being made for them.
FERPA’s Parallel Collapse
FERPA has undergone a similar transformation. Congress originally designed FERPA to protect student educational records from nonconsensual disclosure and ensure parental access to their children’s information. The law included a narrow “school official” exception allowing designated school personnel to access records for legitimate educational purposes. This exception was meant to enable teachers and counselors to serve students effectively while maintaining strict boundaries around data access. Administrative interpretation has since stretched this congressional intent beyond recognition. Educational technology companies now routinely qualify as “school officials,” despite FERPA’s requirements. Without parental consent, edtech vendors are supposed to operate only as direct school agents under contracts that strictly control data use, prohibit commercial purposes, require immediate deletion after termination, and ensure parent access.
The implications for children’s futures are profound and largely irreversible.
These details are not enforced by the Department of Education, despite the fact that parents cannot see, delete, or control their children’s records in practice. In 2021, when the parents of the Student Data Privacy Project filed over a dozen complaints documenting widespread violations, the agency explained that it would not enforce FERPA because withholding federal funding from schools would hurt students. Since then, a number of high-profile data breaches and mergers have only increased the need for scrutiny, yet it has not come. The exception that Congress designed to meet basic school functions now enables massive technology corporations to access, analyze, and profit from student data under the banner of educational service.
The IXL Case
The true scope of this administrative overreach became clear in the recent legal battle between parents and IXL Learning. Frustrated by federal agencies’ refusal to enforce the privacy rights that Congress entrusted to them under COPPA and FERPA, parents were forced to bring their own lawsuit against IXL for allegedly collecting their children’s data illegally.
When they did, the company made an extraordinary claim: that schools had acted as parents’ agents with federal authority to bind families to arbitration agreements they never signed. IXL’s argument relied on FTC guidance, claiming that federal law had transformed schools into parental agents with broad authority to make binding legal commitments on behalf of families. As FTC Commissioner (now Chairman) Andrew Ferguson noted in his scathing response, this interpretation “does not pass the smell test.” What began as administrative interpretation has evolved into legal and policy strategy, with schools and companies citing federal guidance and lack of enforcement to justify overriding the parental authority granted by statute.
IXL and PowerSchool, both being sued by the EdTech Law Center on behalf of parents for skipping consent, have already introduced AI tools into their product suites. PowerSchool’s PowerBuddy, an omnipresent learning assistant that collects and analyzes extensive student data across multiple educational functions, claims to be built with “responsible AI” principles—but responsible AI apparently does not include local buy-in or parental consent.
AI: The Next Frontier of Overreach
As Artificial Intelligence rapidly infiltrates educational settings, the same pattern of administrative overreach is accelerating the process. In April 2025, the White House issued the “Advancing Artificial Intelligence Education for American Youth” Executive Order, which directs federal agencies to prioritize AI in education grants and teacher training, and creates public-private partnerships with AI industry organizations to provide resources for teaching foundational AI skills to students. While paying lip service to privacy and safety concerns, the order contains no explicit requirement for parental consent or involvement before AI systems analyze, profile, or make decisions about children.
AI in K-12 education, particularly at the elementary level, risks undermining the development of fundamental cognitive skills that children need to build through struggle and practice. When AI tools complete tasks for students, they can short-circuit the learning process itself, preventing students from developing critical thinking, perseverance, and the ability to work through confusion. Early reliance on AI could also atrophy children’s natural curiosity and creativity, as they become accustomed to instant, polished responses rather than exploring messy ideas and developing their own voice and reasoning, potentially creating a generation that struggles with independent thought. Additionally, AI educational tools collect vast amounts of data about children’s learning patterns, mistakes, interests, and behavioral responses—information that could be used to manipulate their preferences or create detailed psychological profiles they’re too young to understand. The gamified, instantly responsive nature of many AI systems can create addictive patterns, where children become dependent on immediate gratification and personalized attention, potentially interfering with their ability to engage with less stimulating but equally important learning activities like reading, having conversations, or working through problems on their own. The implications for children’s futures are profound and largely irreversible.
AI in K-12 education risks undermining the development of fundamental cognitive skills that children need to build through struggle and practice.
Under current federal guidance, schools can implement untested, standardless, black-boxed AI systems while claiming to act on behalf of parents who may have no knowledge of what technologies are being used, what data is being collected, how they’re being used in the classroom and why, or what decisions are being made about their children. The same administrative framework that enabled educational technology companies to override the parental consent that Congress required now provides cover for AI systems with significant potential for individual and generational damage.
Restoring and Enshrining Parental Consent to EdTech AI
Addressing this crisis requires restoring parental consent rights in education. The only place where decisions about AI in children’s lives should be made is in local schools where parents can participate. My neighborhood has worked hard over the last couple of years to have our local public school represent our shared local values around tech in the classroom. Our PTO Technology Working Group has rallied parents and joined with school staff to develop policies for minimal screen use by students. Our needs are not the same as other schools’ in the district, and the district wisely acknowledges that in its own approach.
Yet, despite our neighborhood efforts to rid our school of Big Tech, AI companies now—with the help of the new Executive Order—threaten to shove more addictive, dehumanizing screens into our communities. And they can do so precisely because administrative agencies have already diminished America’s parental consent protections in schools.
Therefore, the restoration of parental consent requires three surprisingly easy steps:
- The Federal Trade Commission should immediately rescind its guidance allowing schools to consent on behalf of parents, returning to COPPA’s original congressional requirement for direct parental consent. This guidance was created through an informal administrative process and can be quickly eliminated, immediately restoring meaningful parental choice over all educational technology decisions.
- Restoring FERPA requires agency leadership to similarly reassert the law’s original intent and enforce parental consent protections. The school official exception needs to be carefully narrowed, monitored, and enforced. If the exception remains, significant oversight of edtech vendors should be established and extended to all the third parties receiving student data downstream. If the Department of Education will not enforce the law, violations should be opened up to private rights of action by parents.
- Most urgently, any AI policy in educational settings must explicitly require informed parental consent before implementation. The AI Executive Order should be amended to include clear language that AI systems may only be used with students after obtaining parental consent that explains what AI technologies will be used, what data will be collected, how decisions will be made, and the potential impacts on children’s educational and social development.
Federal agencies have systematically destroyed the parental consent protections that Congress established, transforming schools into data collection operations for tech companies, while leaving families powerless to protect their children. AI and Big Tech’s new partnership with the teachers’ union represents a billion-dollar strategy to embed AI throughout the classroom before parents can resist. But this crisis is remarkably easy for the current administration to fix. Restoring parental consent in educational technology requires only the political will to undo previous administrative overreach that was never legally justified in the first place.
Meg Leta Jones is a Provost’s Distinguished Associate Professor in the Communication, Culture, & Technology program at Georgetown University, where she researches rules and technological change with a current focus on family technology policy.
Editor's Note: The opinions expressed in this article are those of the author and do not necessarily reflect the official policy or views of the Institute for Family Studies.