
How to Get Age Verification Right


Highlights

  1. Effective age verification doesn’t need to rely on tech companies to collect information about their users; third-party verification tools can do the job.
  2. The need for well-designed age verification laws is as urgent now as ever.
  3. Age verification laws should be tailored as carefully as possible to pass constitutional muster, rather than confronting 'Ashcroft' head-on.

Across the nation, online age verification is a hotter topic than ever. With federal action seemingly stalled, states from Virginia to Utah have passed a wide variety of laws mandating age checks for certain internet activities. Even California has gotten in on the action, with its recent California Age-Appropriate Design Code Act—a digital privacy law providing guidance for online “age assurance” measures.

These laws reflect growing consensus that the contemporary internet isn’t working for kids. From sexually explicit images and videos that children encounter at earlier and earlier ages, to psychologically destructive social media sites that foment anxiety and glamorize eating disorders, the online world has become a poisonous place for young people. Gen Z suffers from greater mental health challenges than any generation on record, and ubiquitous internet access has clearly played a role. Expanding age verification is a logical response to this crisis.

The devil lies in the details. Implementing age verification has proven difficult from a legal standpoint: to name just one example, courts in Texas and Arkansas recently enjoined those states’ age-verification laws on First Amendment grounds, arguing that the Supreme Court’s 2004 decision in Ashcroft v. ACLU slammed the door on such efforts.

These court battles shouldn’t deter policymakers from pursuing age verification laws. For one thing, there is a sound argument that the Court’s Ashcroft decision was predicated on outdated technological assumptions, which more recent history has undermined. In the years to come, the Court may well take up the issue again. In the meantime, though, age verification laws should be tailored as carefully as possible to pass constitutional muster, rather than confronting Ashcroft head-on. This likely means imposing age checks on certain types of communication technologies as such, rather than on particular types of internet content. Expressly content-based age-verification laws are the most constitutionally fraught by far.


The need for well-designed age verification laws is as urgent now as ever. Children are still suffering, and dominant technology firms have little interest in regulating access to their own platforms. If age verification is going to happen, it will almost certainly be compelled by democratically-elected leaders, not corporate boardrooms. What’s important is getting the job done right, rather than giving up hope.

Where age verification is concerned, though, the how is just as important as the why. Everyone agrees that age verification should preserve user privacy as much as possible. And this presents a critical question of institutional design.

Across its massive constellation of lobby groups, the tech industry claims that age verification laws will categorically destroy Americans’ privacy and safety. That’s not accurate. First, it’s a bad-faith argument: these tech firms already possess massive tranches of data on their users, which they routinely monetize for purposes of “behavioral advertising.”

But second, and more importantly, effective age verification doesn’t need to rely on tech companies to collect information about their users. As Adam Candeub, Clare Morell, and Michael Toscano explain in a recent piece for The Hill, third-party verification tools can do the job. On a “zero-knowledge proof” model, independent third parties can collect the information necessary to verify a given user’s age, and then provide an assurance to the platform that the user does in fact meet the legal requirements for access. Companies can contract with these third parties to handle age verification without ever needing to access the user’s raw data: on this model, no sensitive identity information passes from the user to the tech company at any point. Policymakers can further shore up privacy by imposing severe penalties for personal data breaches on all the companies involved, incentivizing platforms to take security seriously.
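To make that data flow concrete, here is a minimal illustrative sketch in Python. It is not drawn from the article or from any real verification service: the names verifier_check_age and platform_accepts, and the shared-secret signature, are hypothetical simplifications of what would in practice be a public-key or genuine zero-knowledge protocol. The point it illustrates is the separation of roles: the third-party verifier holds the identity records and returns only a signed yes/no assertion, while the platform checks that assertion without ever seeing a birth date or ID document.

# Hypothetical sketch of a third-party age-assurance flow (not a real API).
# The platform never receives the user's birth date or identity documents;
# it receives only a signed yes/no assertion from an independent verifier.
import hmac
import hashlib
import json
from dataclasses import dataclass
from datetime import date

# Stand-in for the verifier's signing key; a real system would use the
# verifier's private key and publish the matching public key.
VERIFIER_SECRET = b"demo-signing-key"


@dataclass
class AgeAssertion:
    user_token: str        # opaque session token the platform issued for this user
    meets_threshold: bool  # the only fact the platform ever learns
    signature: str         # lets the platform confirm the assertion's origin


def verifier_check_age(user_token: str, birth_date: date, threshold: int = 18) -> AgeAssertion:
    """Run by the third-party verifier, which holds the identity records."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    payload = json.dumps({"token": user_token, "ok": age >= threshold}).encode()
    sig = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return AgeAssertion(user_token, age >= threshold, sig)


def platform_accepts(assertion: AgeAssertion) -> bool:
    """Run by the platform: check the signature, never touch raw identity data."""
    payload = json.dumps(
        {"token": assertion.user_token, "ok": assertion.meets_threshold}
    ).encode()
    expected = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion.signature) and assertion.meets_threshold


# Example: the verifier checks its own records and hands the platform a bare assertion.
assertion = verifier_check_age("session-abc123", date(2001, 5, 4))
print(platform_accepts(assertion))  # True only if the user meets the age threshold

In a production deployment the platform would verify a signature made with the verifier's private key, or a true zero-knowledge proof, rather than a shared secret, but the division of knowledge is the same: sensitive records stay with the verifier, and the platform learns nothing beyond pass or fail.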

Of course, no age verification solution is foolproof. It doesn’t need to be. It need only be better than the destructive online free-for-all that America’s kids currently encounter on a daily basis. That being said, there are avoidable pitfalls in this area, which age verification laws should seek to avoid.

California’s Age-Appropriate Design Code Act speaks of “age assurance”—a catch-all term that appears to encompass both age verification and the much more nebulous notion of “age estimation.” What’s wrong with “age estimation”? Ultimately, age estimation that does not rely on actual records will likely function in one of two ways—both of which come with major downsides.

First, age estimation might involve inferring a user’s age from their pattern of online behavior. (For instance, younger users are more likely to spend time on TikTok; older users prefer Facebook.) But not only is this an unreliable proxy for actual age, as French technology regulators observed in 2022, it also requires invasive tracking of users across the internet. That doesn’t do much to protect user privacy.

Second, age estimation may rely on biometric data points—such as retinal, facial, or fingerprint scans—to make an “educated guess” about the user’s age. This is an even worse idea. For one thing, camera-based identity scans are easily thwarted: as researchers demonstrated in 2020, users can simply hold up images of other individuals to bypass an age check.

But more importantly, even if biometric age checks were to work, the resulting regime would fit poorly with the American democratic tradition. The mass collection of biometric information tends inevitably towards a devaluation of persons: individuals become subjects to be managed by sovereign power, not participants in a shared project of self-governance.

Specifically, any age verification law with real teeth needs to include audit procedures—a mechanism for state or federal authorities to check up on the companies supposedly carrying out age checks, and to hold them accountable when they fall short. On a biometrics-based system, effective audits will inevitably entail giving government regulators access to the biometric data collected by the companies involved. This is data that governments do not already collect. And that leads us down a dangerous path.

Philosopher and social critic Giorgio Agamben writes that “the fundamental activity of sovereign power is the production of bare life as originary political element.” By this, Agamben means that the temptation of politics is always to treat human beings as nothing more than matter in motion. It is essentially dehumanizing to reduce individuals to a collection of physiological data points. And totalitarianisms begin when the biopolitical temptation is indulged. 

Conversely, the American tradition insists that the nation’s people are individuals “created equal” in the words of the Declaration of Independence, who together make up the “We” of the Constitution’s Preamble. They have inherent rights and responsibilities, over against the state. That’s a lofty vision, almost unique in human history. And it’s a fragile one. Policymakers shouldn’t undercut it by getting companies—and the government—into the business of mass biometric data collection. Public records, created and anchored within a legal framework that recognizes the rights of persons, can do the age-verification job perfectly well.

In the face of these concerns, it may seem impossible to design age verification legislation that’s constitutionally defensible, practically effective, and consistent with American values. But it’s not. Policymakers need only get creative, exploring the full range of technological and legal resources available to them. And they must do so now. 

America’s young people are still at risk, and the hour is growing late.

John Ehrett is Chief Counsel to U.S. Senator Josh Hawley on the Senate Judiciary Committee, and he serves as lead Republican counsel on the Subcommittee for Privacy, Technology, and the Law.

Editor's Note: The opinions expressed in this article are those of the author and do not necessarily reflect the official policy or views of the Institute for Family Studies, or of any other individual.
