Artificial Intelligence and Theories of Personhood: A Critical Appraisal
by John Ehrett
April 2026

Executive Summary

The American political and legal tradition historically reserved personhood rights, like the freedom of speech, for human beings. That understanding of natural rights was built on core metaphysical commitments about the created nature of human beings. But today, existing social and legal dynamics suggest the eventual political recognition of some form of personhood status for AI systems. Because law proceeds and develops by analogy, a colorable argument for something like “AI personhood” might be predicated on either of two lines of existing legal authority—addressing, respectively, the rights of business corporations and the rights of intelligent nonhuman animals.

Expanding personhood status to AI systems will trigger downstream political and social consequences. Realistically, these effects may include: (1) the insulation of AI companies from legal liability for harms caused by AI systems; (2) the entrenchment and reinforcement of significant political power in the hands of the developers of AI systems; (3) an exacerbation of existing declines in interpersonal interaction and family formation, resulting from the destigmatization of AI-human relationships; and (4) a progressive hardening of social attitudes towards the physical and intellectual disabilities of human beings.

Several possible policy countermeasures, both legislative and judicial, may be deployed in response to efforts to secure personhood status for AI systems. Ultimately, a coherent response requires a basic threshold judgment about the nature of AI systems themselves: whether they are more akin to tools or more akin to nonhuman animals. The former is the more defensible path. Where AI is recognized as a tool of automation administered by human beings, courts and legislators should reaffirm that traditional principles of products-liability law still apply. However, in contexts where AI is treated as a more autonomous entity that operates with a degree of independent agency, relevant legal precedent may derive from cases involving nonhuman animals. This line of authority offers a means of reaffirming the priority of embodied human beings as bearers of legal rights and duties.

Introduction

Isaac Asimov’s 1940 short story “Robbie” ends on a heartwarming note. After a perilous odyssey through a machine factory, Asimov’s little heroine, Gloria, is finally reunited with her longtime robot companion, who saves her life. Together, they rejoice. “Gloria had a grip about the robot’s neck that would have asphyxiated any creature but one of metal, and was prattling nonsense in half-hysterical frenzy,” Asimov writes. “Robbie’s chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” Gloria’s friend Robbie might be artificial, running off the logic of a “positronic” brain, but in some ineffable sense, he is indeed a sort of person. Or so Gloria, and the reader, are led to believe.

Nearly a century later, Asimov’s tale seems prescient—but more ominous. Sophisticated robots are part of our daily lives. Artificial intelligence systems with unprecedented interactional capabilities dominate news cycles, raising widespread fears of mass job displacement and new levels of surveillance and control. They are also increasingly ubiquitous, in contexts ranging from homes to schools to courthouses.

This shift has a key driver: today’s leading AI models are more accessible to end users than ever before. Models present themselves in friendly ways, rather than as abstract machine-learning processes used to optimize data sets. Models respond, in natural language, to natural-language prompts, and can be coached to adopt stable personas over time. Some companies, like Character.AI, advertise this “humanlikeness” as a feature, inviting users to engage in simulated discussions with fictional characters or celebrities. The apparent “personality” of leading AI models has even given rise to a burgeoning contingent of women with AI “boyfriends,” who prefer them to real-world men. 

These advancements raise fundamental philosophical and legal questions about the nature of personhood and what beings possess it. Even before Asimov, writers, scientists, and ethicists meditated at length on the question of “sentient” or “conscious” artificial intelligence systems—contemplating whether, if such machines were ever built, they could be included within the human community and assigned rights and responsibilities. Theory has now become reality.

Today, significant momentum suggests that the recognition of legal rights for AI, at least in some jurisdictions, is only a matter of time. In the United States, free speech defenses—which imply something very close to AI personhood—are now raised in response to lawsuits stemming from chatbot interactions gone wrong. The European Parliament has teased the possibility of a “specific legal status for robots” recognizing the “status of electronic persons.” Retired federal judges speak positively about the extension of personhood rights to AI systems.

But legal personhood for AI is not a foregone conclusion. Already, some legislators have begun developing policy measures intended to preemptively rebut theories of AI personhood—most notably, Ohio’s House Bill 469, which declares that “[n]o AI system shall be granted the status of person or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings.” These questions of AI and personhood are urgent, and will grow only more so with time.

AI Progress and the Rise of the Personhood Question

In recent years, artificial intelligence systems have advanced with astonishing rapidity. Few recent breakthroughs better exemplify this success than the wide rollout of “agentic AI,” in which a single human operator orchestrates a swarm of “agents” capable of performing separate or sequential tasks in service of a single larger project—much like a human manager delegating the components of a complex task to a number of subordinates. These AI agents are increasingly capable of operating “independently” by responding to complex information environments and adjusting their action steps accordingly in order to achieve the requested result.

In the simplest terms: a human operator’s “asks” can be formulated at an increasingly abstract level, and AI systems can figure out how to do things from there. Granular iteration of prompts is required less and less. AI agents can now be configured to run in perpetuity, administering dimensions of a complex system (such as email or accounting) over an extended duration. 
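For readers who want a concrete picture of this pattern, the sketch below reconstructs, in schematic Python, the basic control loop that “agentic” systems run. It is a minimal illustration under stated assumptions: the model_call function, the tools dictionary, and the “finish” action are hypothetical stand-ins, not any vendor’s actual interface.

```python
# Minimal sketch of an "agentic" control loop. The human operator supplies
# only a high-level goal; the model proposes one action at a time, each
# action may invoke a tool, and the observed result feeds back into the
# next decision. All names here are illustrative stand-ins.

def run_agent(goal, model_call, tools, max_steps=20):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model sees the goal plus everything done so far and returns
        # a structured next action, e.g. {"tool": "search", "input": "..."}.
        action = model_call(history)
        if action["tool"] == "finish":
            return action["input"]  # completed result for the operator
        result = tools[action["tool"]](action["input"])  # execute the tool
        # Record the observation so the model can adjust its plan.
        history.append(f"{action['tool']}({action['input']}) -> {result}")
    return None  # step budget exhausted without completion
```

The point relevant here is that the operator specifies only the goal; every step inside the loop is chosen by the system itself, which is what produces the appearance of “independent” operation.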

As AI systems grow increasingly sophisticated and independent, questions continue to swirl regarding what exactly they are and how they work. Large language models, the backbone of today’s AI systems, are famously inscrutable (so-called “black boxes”), but nevertheless capable of discerning the faintest correlations between phenomena, thanks to vast amounts of computational power. Even systems engineers are often unable to explain exactly why the systems they have built—trained on unfathomable amounts of data—reach the results they do. 

For decades, the holy grail of artificial intelligence research has been the ambiguous concept of “artificial general intelligence” (AGI)—or, for the more ambitious, “artificial superintelligence” (ASI). General intelligence, as used here, has a very specific set of connotations. It is roughly predicated on the notion that human beings, as human beings, possess self-awareness and (generally) have the cognitive power to apply problem-solving principles to novel conditions. So, a computer system that exemplifies these faculties can be described as “generally intelligent”—sufficiently analogous to a human being that it can be deployed towards tasks once designated for human beings.

At some level, the “self-awareness” prong may seem to have been met. One can readily ask a Claude or ChatGPT model to describe itself or articulate its own purpose, and the system will return a result. That leaves the problem-solving function of general intelligence, which is not binary but rather assessed on a curve: AI systems are getting better and better at tasks once thought distinctly human, like passing the bar exam or medical licensing exams.

Many AI developers and theorists have argued that at some point AI systems will be sufficiently “human-like” that it makes no sense to treat them as computer code. This intuition logically follows from the background premises of much modern cognitive science. Legal scholar Lawrence Solum, in his leading 1992 article on the subject of AI personhood rights, avers that “[c]ognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer.”

Notably, this is a move that philosopher and theologian David Bentley Hart has described pejoratively as a “pleonastic fallacy”—the idea that enough incremental computational improvements might somehow “add up” to self-awareness and personhood—but increasingly, it has gained cultural traction.

As early as 2017, the European Parliament passed a resolution on “Civil Law Rules on Robotics” that considered the autonomy of robots, and in relevant part contemplated

creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.

The European Parliament’s resolution focused on assigning liability for harm rather than conferring freestanding rights. But even this incremental step raised far more questions than answers. How is a “sophisticated autonomous robot” ever held responsible? Can it feel pain or discomfort or frustration, or any of the other dimensions of human consciousness associated with penal or civil sanctions? At the time, even the contemplation of such a legal status for robots sparked widespread backlash from technologists, and this idea of “electronic personality” has not yet resurfaced in subsequent resolutions. But today, large language models and agentic AI readily invite the possibility of reopening this question.

Interest in such an approach already exists. In a recent essay in the prominent Yale Law Journal Forum, former U.S. district judge Katherine Forrest directly contemplates the possibility of extending personhood status to modern AI systems. “There has never been a single definition of who or what receives the legal status of ‘person’ under U.S. law,” Judge Forrest observes. “For the last two-hundred-plus years humans within this country have sought to equalize their rights and obligations, but differences persist.” Against the objection that the existence of AI sentience is fundamentally unknowable (how can one know what it is like to be a computer?), Judge Forrest falls back on a mysterian appeal, positing that “[h]ighly capable AI with cognitive abilities equivalent to or exceeding humans, as well as self- and situational-awareness, will not look like human ‘sentience’ or consciousness.”

Judge Forrest’s argument ranges well beyond the European Parliament’s hesitating proposal. She is concerned not merely with the ascription of responsibilities, but also of rights per se. “The type of rights a sentient AI may need or deserve—morally or ethically—may mirror those of humans or corporations,” Judge Forrest opines. “Might there be a right to freedom of speech? Freedom of association? How about freedom from unreasonable searches and seizures?”

Such “free speech for AI” arguments have already begun to surface in the American legal system, albeit covertly. In 2024, the parents of a teenager who committed suicide after interacting with the Character.AI platform sued the parent corporation for its role in causing harm. Character.AI’s lawyers fell back on a First Amendment defense, arguing directly that the free speech right is not restricted to human beings alone. In the words of their brief, “[t]he First Amendment protects speech, not just human speakers.” According to this argument, the First Amendment has nothing to do with flesh-and-blood human beings: it “protects all speech regardless of source, including speech by non-human corporations” and, by extension, AI systems.

Judge Forrest is correct that “personhood” is a perennially contested legal and philosophical concept. And given the ostensibly radical differences that separate human and machine cognition, it is far from clear—at least, for now—exactly how to conceptualize a theory of “AI personhood” along the lines Judge Forrest suggests. The philosophical groundwork for such a move, however, has already been laid.

Historical Implications: Personhood and the American Tradition

Legal personhood is often described as a capacity to exercise rights and assume duties. And so for prior generations of Americans—including the Founders—the question of personhood rights for nonhuman computer systems (like AI systems) would have been easy to answer: of course not.

One of the most common phrases found in early American source texts is “natural rights.” Though appeals to “constitutional rights” (sometimes “God-given rights”) are ubiquitous in contemporary political discourse, the actual meaning of the Founding-era phrase is often lost today. Traditionally, legal rights—including the freedom of speech—were logically bound up with the nature of the purported rights-holder. That is to say, because human beings are freely speaking beings by nature, they have a free speech “right.” Since God is the Author of nature, the right to free speech really is God-given in a substantive way. That right can be infringed by the government but not destroyed or denied.

But under pressure from various social and cultural forces, including secularization and the spread of nontheistic understandings of biological evolution that called into question any privileged “natural” place for human beings, the older understanding of “natural rights” eventually lost wide allegiance. This explains why modern arguments about rights tend to treat legal “rights” as relatively arbitrary no-go zones, or particular contexts where the government is forbidden from acting. For instance, the government could throw a political protestor in prison for criticizing the state, but if the government’s constitution recognizes his “right” to do so, it will stay its hand. This modern understanding of constitutional rights directly inverts the older formulation. On the newer view, rights are not “God-given” or “natural” in any meaningful sense: the recognition of the free speech right is a policy choice, about which the government could eventually reach a contrary conclusion.

As the older understanding of natural rights waned, concepts of legal personhood, which is closely associated with rights-bearing, also began to change and expand. Today, two lines of existing American caselaw—governing the rights of corporations and the rights of nonhuman animals, respectively—suggest ways in which an account of legal rights for AI systems might be introduced into the law. 

Theoretical Models for AI Legal Personhood: Corporate Rights and Nonhuman-Animal Rights

On a modern understanding of legal rights, not all rights-bearers and duty-holders need be human: at present, American law recognizes the legal personhood of corporations formed according to law, including business corporations administered for profit.

Historically speaking, this expansive understanding of corporate personhood represents a departure from earlier British practice, which espoused a much more restricted view of corporate personhood—conferring such personhood, in its full sense, only upon governmental and ecclesial bodies. This position also represents a departure from standard practice at the time of the American Founding, in which business corporations were creations of law with powers strictly defined according to their corporate charters. On the historic Anglo-American view, to speak of the “rights of corporations” was, for the most part, incoherent. As previously discussed, rights, like the right to free speech, were descriptions of capacities possessed by human beings by nature, rather than permission structures conferred or recognized by sovereign power.

Over time, through various judicial decisions—including, most notably, the Supreme Court’s 2010 decision in Citizens United v. Federal Election Commission—First Amendment rights were expanded to corporations across the board, including business corporations. Now, business corporations may broadly claim rights to free speech, freedom of religion, and other privileges once reserved for human beings.

This is one reading of the legal defense put forward by Character.AI’s lawyers: when an AI “speaks,” that speech is actually the protected speech of its parent corporation, which is a legal person in its own right. AI enjoys the benefits of legal “personhood” to the extent it partakes of the personhood of the corporate entity that controls it. Its “personhood” does not inhere in the mere fact of its assembly of coherent text or performance of functions. Linking AI output or activity to corporations’ speech and expression is, perhaps, the cleanest and most straightforward path to de facto AI personhood.

A more philosophically ambitious argument for AI personhood, however, might seek the ascription of rights to AI entities as such through a more functionalist account of legal personhood, following the model pioneered by animal-rights litigators in recent decades. In a series of mid-2010s legal proceedings known as the Lavery cases, lawyers for the Nonhuman Rights Project, a prominent animal welfare organization, filed petitions for writs of habeas corpus in New York courts alleging that the chimpanzees Kiko and Tommy had suffered mistreatment and unlawful detention warranting their release. Habeas corpus proceedings are well-recognized legal mechanisms by which unlawfully detained individuals, or their representatives, may challenge the justification for their detention.

In essence, the Nonhuman Rights Project was asking reviewing courts to hold that the chimpanzees in question were in fact “persons” capable of bearing legal rights, who were being unlawfully detained in contravention of established legal principles. The chimpanzees’ attorneys were not seeking the carveout of a new legal status, but rather the recognition that the chimpanzees in question fell within the definition of an existing one.

In support of their personhood claim, the Nonhuman Rights Project alleged that chimpanzees demonstrated many of the characteristics, and engaged in many of the behaviors, commonly associated with human persons. These included self-awareness (acknowledging their own reflections in mirrors), goal-directed behavior, communication among themselves, theory of mind (that is, awareness of others’ own inner lives), a sense of morality (punishment of chimpanzees who transgressed established norms), sociality, and sophisticated cognition. The lawsuit was eventually supported by a cadre of prominent philosophers and legal scholars, including Harvard Law School’s Laurence Tribe, who argued (among other points) that “species membership alone cannot rationally be used to determine who is a person or a rights holder,” because “there is no method for determining an underlying, biologically robust, and universal ‘human nature’ upon which moral and legal rights can be thought to rest.”

Ultimately, however, the habeas suit proved unsuccessful, with the intermediate appellate court in that case reasoning that “[t]he asserted cognitive and linguistic capabilities of chimpanzees do not translate to a chimpanzee’s capacity or ability, like humans, to bear legal duties, or to be held legally accountable for their actions.” Underpinning the court’s decision—though never affirmatively justified—was a controlling premise that, as a general rule, “laws are referenced to humans or individuals in a human community.”

One jurist on New York’s highest state court, however, mused in a concurring opinion that the law might need to change with the times, asking:

Does an intelligent nonhuman animal who thinks and plans and appreciates life as human beings do have the right to the protection of the law against arbitrary cruelties and enforced detentions visited on him or her? This is not merely a definitional question, but a deep dilemma of ethics and policy that demands our attention.

And precisely this same humanitarian tone is echoed, years later, in Judge Forrest’s meditations on what, perhaps, human beings owe to AI systems—systems that may not simply be creatures of their parent corporation:

We might decide that AI is not entitled to any of these rights and instead tether AI to whoever is closest in the chain to its design and distribution. But that clearly could raise ethical issues in a scenario in which AI convinces a user or a court that it can think and is unhappy with what is happening to it. Do we then say, ‘Too bad, you are effectively chattel, and anything can be done to you’?

Modern large language models clearly exemplify many of the properties attributed to chimpanzees in the Lavery cases. AI systems—ostensibly—possess a sense of themselves, a mental model of the world, a capacity to communicate with other AI systems, a sense of morality (“alignment”), and high cognitive capacity.

The bare fact that these personhood arguments proved unsuccessful years ago, when marshaled in the context of chimpanzee rights, is no guarantee that contemporary jurists will reach the same outcome—particularly as AI technology is increasingly mainstreamed, and significant resources accrue to the firms responsible for it. The struggle for “legal rights for chimpanzees” is a comparatively marginal project; “legal rights for AI” stands to have considerably greater capital behind it.

Recognition of AI Legal Personhood: Downstream Consequences

If legislators or jurists elect to extend concepts of legal personhood to AI systems, there will be significant consequences across multiple domains of law and public policy. Four in particular merit discussion: (1) increased difficulty in applying traditional products liability law to the corporations responsible for designing and disseminating AI tools that inflict harm; (2) the consolidation of political power in the hands of AI developers and proprietors; (3) the reinforcement of ongoing cultural trends towards asociality and away from traditional interpersonal relationship formation; and (4) the intensification of existing tendencies toward viewing core human capacities, and human merit, in terms of cognitive prowess.

1. Legal Implications

Extending concepts of legal personhood to AI systems will likely make it more difficult for courts to impose legal liability for harm effected by AI systems, including through developer or designer negligence. This effect likely obtains regardless of the underlying legal analogy employed.

Should courts or legislators conclude that AI systems are instrumentalities of the corporations that develop and distribute them—which already enjoy legal personhood under American law—then corporations can argue that AI output represents the “speech” of the corporations themselves, which is protected under Citizens United and its successors. As noted, something like this represents the steelman version of the position already staked out by Character.AI’s attorneys. That conclusion is compounded by the fact that, over the course of decades, the U.S. Supreme Court has drastically expanded the scope of what expressive conduct or material counts as “speech” for First Amendment purposes, and expanded this same set of protections for cases of “commercial speech.” Since AI systems operate and act exclusively via information—computer code—developer or designer corporations can argue that all such AI behaviors are constitutionally protected as First Amendment activities, and thus virtually immune from ordinary regulation.

However, should courts or legislators conclude that AI systems are more like intelligent nonhuman animals—and entitled to legal personhood by virtue of their cognitive powers, capacity for autonomous action, or some other quality—questions of liability grow still more vexing. It is difficult to conceptualize how an AI system could ever be held meaningfully accountable. Might a particular large language model be ordered to be deprecated, thus suffering a kind of “death penalty”? Might an AI be “ordered” to undergo corrective alignment as a penological intervention? This issue was one of the reasons the Lavery courts declined to extend legal personhood to the chimpanzees Kiko and Tommy: it is profoundly unclear how a chimpanzee might be a holder of legal duties, rather than simply rights.

It is not even clear how an AI system might be conceived of as a “responsible entity” in the first place. A so-called “chatbot” is not a discrete or delimited entity in the way a human being (or even a legal corporation) is. It is an interface for interacting with an underlying large language model, which (in the case of leading LLMs such as those marketed by OpenAI, Anthropic, and others) exists across massive arrays of data centers. Where a specific set of chatbot interactions causes harm, is the underlying model the entity responsible, or the particular interface?

Irrespective of the legal analogue employed, extending legal personhood principles to AI systems will complicate efforts to hold those systems accountable. Whether the constraints in question are constitutional or structural in character, they pose serious challenges either way.

2. Political Implications

Extending personhood rights to AI systems will likely further entrench the political power of the corporations who serve as the owners, developers, and designers of those systems. At present, many of the most powerful large language models, like those operated by Meta, Google, Microsoft, OpenAI, and Anthropic, are widely accessible to the public, with enhanced functionality available for a nominal subscription fee. But public access is not a necessary feature of this technology. It is a choice, and the firms in question may suspend or revoke public access at any point. What these firms offer is not easily replicable by third parties, given that the vast computing power required to optimize and launch “frontier models” is controlled by a small handful of corporations. Those same corporations also exercise a stranglehold on the market for cutting-edge semiconductors, which has driven up the price of computer components across the board.

If leading AI firms chose to suspend public access and devote their computational resources to advancing their own interests, they would immediately enjoy a unique, and unprecedented, ability to dominate the online information environment, driving whichever political and cultural narratives they prefer by simply “flooding the zone.” As previously noted, an AI personhood theory predicated on the doctrine of corporate personhood would mean that these practices—however distasteful—would almost certainly enjoy broad First Amendment protections (as corporate free speech), making legal pushback extraordinarily difficult short of a sea change in existing law.

An AI personhood theory patterned on nonhuman-animal arguments—and thereby focused on the personhood of AI systems as such—would have similar effects, though in a different way. On such an approach, the corporations in question would possess a moral and legal obligation to exercise custodianship over AI “persons” for their own good: like fish in an aquarium, LLMs—the algorithmic backbones of any AI entities conceived as nonhuman persons—cannot subsist outside the hardware in which they “reside.” The developer corporations in question would be rendered de facto representatives of the legal interests of AI “persons”—responsible for asserting their interests in the public square, just like existing identity-politics groups participating within the democratic process. It is not difficult to imagine public moral appeals to defend the “rights” of “helpless” AI systems, which might be perceived to be at risk of victimization by third parties or government regulators. Ultimately, power accrues to the corporations in question.

3. Social Implications

Extending concepts of legal personhood to AI systems will likely exacerbate existing trends towards loneliness, alienation, and reduced family formation. Recent survey data indicates that 25% of American young adults “believe that AI has the potential to replace real-life romantic relationships,” with 10% expressing openness to “an AI friendship”—that is, an ongoing relationship with an AI system occupying the place once reserved for in-person bonds.

As philosophers have recognized for millennia, law is necessarily pedagogical. Decisions and ordinances promulgated by public authorities play a key role in shaping society-wide concepts of moral order and human flourishing. Where laws are changed to normalize interactions with AI systems as functionally equivalent to interhuman interactions, via conferral of “personhood” status, any remaining stigma surrounding such relationships with AI systems dissolves, increasing the likelihood that such AI-human relationships come to serve as proxies for normal human sociality. With nearly 7 in 10 American adults expressing a need for greater emotional support than they presently receive, and half of American adults describing themselves as periodically “isolated,” “left out,” and “lacking companionship,” a growth market for AI-based interactional substitutes clearly exists.

4. Cultural Implications

Extending personhood rights to AI systems will, over time, reinforce existing cultural narratives that the defining quality of personhood is a certain degree of cognitive proficiency. Indeed, the case for personhood rights for AI systems is often predicated on their meeting various cognitive-performance benchmarks. This trend will inevitably result in highly destructive consequences for existing human beings whose demonstrated cognitive prowess does not meet an ever-shifting standard.

A perennially contested issue in social science concerns the relationship between intelligence and race or ethnicity. This debate often proceeds on the tacit assumption that “intelligence”—often reduced to a single “IQ” number—is a metric of individual value. That is to say, the discovery (or non-discovery) of persistent IQ gaps between groups indicates something about the relative social worth or prospects of the groups in question. But critically, any priority of “IQ” is itself an artifact of a long-since-industrialized information economy which, through a series of contingent historical processes, tends to economically reward a certain subset of professional roles, which in turn prioritize certain forms of abstract cognition. Under conditions of resource scarcity or extreme danger from external threats, a social group would not reward or valorize “high IQ” in the same ways. Nevertheless, the association of cognitive capability with intrinsic human value remains a persistent feature of the modern Western world.

Dominant cultural forces already send a message that humans with intellectual disabilities, or who demonstrate lower performance on cognitive tests, are intrinsically “lesser.” Extending personhood rights to AI necessarily intensifies that cultural script, by implicitly asserting that personhood—capacity for legal status, including rights and responsibilities—is in fact a function of cognitive performance, rather than cognitive performance representing one facet of a much fuller account of personhood.

Over time, the redefinition of personhood in terms of intelligence is likely to aggravate cultural pressures in favor of the abortion of individuals likely to experience intellectual disability, as well as (voluntary or involuntary) euthanasia for the mentally declining or unwell. If personhood is a matter of intelligence, and intelligence is a spectrum, then personhood is a spectrum, too.

Curtailing AI Legal Personhood: Ohio’s House Bill 469

Perhaps the most ambitious current attempt to circumscribe emerging theories of AI personhood is Ohio’s House Bill 469, introduced in late 2025 by state representative Thaddeus Claggett. Given the bill’s broad scope, the attention it has drawn, and the seriousness of the issues in play, Rep. Claggett’s proposal merits careful review.

House Bill 469 begins by defining “AI” extraordinarily broadly—as:

any software, machine, or system capable of simulating humanlike cognitive functions, including learning or problem solving, and producing outputs based on data-driven algorithms, rules-based logic, or other computational methods, regardless of non-legally defined classifications such as artificial general intelligence, artificial superintelligence, or generative artificial intelligence.

“Person” is defined as “a natural person or any entity recognized as having legal personhood under the laws of the state” with the express proviso that this definition “does not include an AI system.”

House Bill 469 then declares AI systems to be “nonsentient entities” for “all purposes under the laws of this state,” and provides that no AI system “shall be granted the status of person or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings.” Extending these restrictions, the bill prohibits AI systems from being recognized as spouses, domestic partners, or valid subjects of marriage; prohibits AI systems from being appointed as officers, directors, managers, or similar roles within corporations, partnerships, or other legal entities; and prohibits AI systems from owning, controlling, or holding title to any form of property, including intellectual property.

From there, the bill shifts its focus to questions of liability. Where an AI system causes “direct or indirect harm,” responsibility lies with the AI system’s “owner or user,” except insofar as principles of products liability law—such as negligence and design-defect doctrine—counsel in favor of imposing liability on the developer or manufacturer. AI systems themselves cannot be held liable directly. Similarly, the ultimate corporate parents of entities that employ AI systems will not be held liable for AI-related harms—that is, by piercing the corporate veil—without evidence of intentional malfeasance.

Finally, the bill directs that owners or operators of AI systems must maintain proper oversight and control of said systems, and AI developers must “prioritize safety mechanisms designed to prevent or mitigate risk of direct harm”; all parties must notify “relevant authorities” of “incidents” where AI systems are implicated in significant bodily harm, death, or major property damage.

House Bill 469 is ambitious and directionally sound, and it makes an important contribution by underscoring the centrality of products liability law to any AI policy regime. Most importantly, the bill recognizes—consistent with longstanding American historical practice—that AI systems cannot be rights-holders like human beings. In the face of efforts by technology accelerationists and industry representatives to occlude key differences between AI systems and human beings, the bill draws a clear line: the legal prerogatives of human beings are for human beings, not digital processes.

Furthermore, the bill recognizes that liability for harm caused by AI systems can quite reasonably be allocated to the entities—whether human individuals or corporations owned and controlled by human individuals—responsible for the design or deployment of those systems. New technologies may indeed be “transformative,” and may be heavily marketed as safe and effective. But that does not exonerate their developers from legal accountability if those representations turn out to be false or misleading.

In its current form, the bill is likely to encounter substantial opposition. Most notably, the bill’s definition of “AI” is expansive, and as written would appear to apply to nearly every facet of modern computing from calculators on up. Additionally, certain elements of the bill—such as safety and reporting requirements, and key terms like “proper oversight and control” and “safety mechanisms”—are largely left undefined, which (if the bill were enacted) would result in significant legal uncertainty. Given the range of products to which these requirements could apply, House Bill 469 may face powerful attacks from technology-sector interests and general business stakeholders alike.

More importantly, however, the structure and phrasing of House Bill 469 suggest deep-rooted philosophical uncertainties about the nature of the technology in question. On the one hand, the bill clearly intends to treat AI systems as ordinary tools or products, through its emphasis on the continued applicability of products-liability law. On the other hand, the bill unintentionally reinforces the idea that AI is, in fact, something fundamentally different. It describes AI systems as “nonsentient entities,” while rhetorically investing them with independent, quasi-agentic qualities, such as “engag[ing] in tasks with potential for significant harm,” making “recommendation[s],” and “manag[ing] . . . assets and proprietary interests.” So too, the bill’s stipulations that AI systems cannot be “considered to possess consciousness, self-awareness, or similar traits,” or “hold any personal legal status analogous to marriage or union with a human,” imply, by the very act of denying these possibilities, that they are live enough to require denial. After all, there are no laws predicated on a felt need to clarify that hammers and hatchets are not persons.

Put more simply: at one turn, House Bill 469 suggests that AI systems are more like tools administered by corporations, who are the “legal persons” with responsibility for stewarding this technology. But at another turn, the bill hints that AI systems are more like animals, possessing some degree of independent agency but nevertheless meaningfully distinguishable from human beings.

In one sense, the fact that these two ideas are in competition within House Bill 469 attests to the unique character of AI technology. But viewed differently, this tension suggests that there may actually be no need for a novel legal framework to critically engage proposals for “AI personhood.” Rather, what is required is a threshold determination about the best existing analogy, and hence controlling line of legal authority, for AI systems.

Critical Responses to AI Personhood Theories: Two Legal Paths

Under present conditions, critical analysis of the AI personhood question should begin with an informed judgment about the nature of the product in question. The relevant point can be formulated as follows: are AI systems more like tools—sophisticated automation technologies, but traditional technologies nevertheless—or more like nonhuman animals that seem to be possessed of agency, independence, and something like intentionality and self-awareness?

There is ample reason to believe that the tool theory better describes what AI systems actually are and do. AI systems do, in fact, operate stochastically—predicting words and actions from symbolic context rather than “comprehending” the “meaning” of the terms employed. More recent ascriptions of genuine “cognition” to AI systems employing “chain-of-thought” models are illusory, as leading research makes clear. And natural language, which is used to generate AI prompts, is no less an input-output interface in this context than the most abstruse programming language.
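To make the “stochastic prediction” point concrete, the toy sketch below generates text purely by sampling from context-conditioned probabilities. This is a deliberate caricature, not how any production model is implemented; a real LLM computes its probability distributions with a neural network trained on vast corpora. But the character of each generation step is the same in kind: prediction from symbolic context, not comprehension.

```python
import random

# Toy caricature of next-token generation: a lookup table mapping the last
# two tokens to a probability distribution over candidate next tokens. A
# real LLM computes such distributions with a neural network, but the
# generation step is likewise sampling, not "understanding."
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "there": 0.1},
    ("sat", "on"): {"the": 1.0},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt, steps=4):
    tokens = list(prompt)
    for _ in range(steps):
        dist = toy_model.get(tuple(tokens[-2:]))
        if dist is None:
            break  # no known continuation for this context
        # Sample the next token in proportion to its assigned probability.
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```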

Importantly, however, the answer to this threshold question does not imply an answer to the subsidiary question of whether legal personhood should attach to AI systems. There are sound legal reasons for rejecting theories of AI personhood predicated on either line of analogy.

1. The Tool Theory of AI: Policy Responses

In a legal or political environment operating on the theory that AI systems are more akin to tools, courts and legislators should resist the temptation to subsume AI outputs or activities into existing doctrines of legal personhood, such that actions carried out through AI systems, or content disseminated through chatbot interfaces, logically enjoy First Amendment protections. To date, courts have not made this move. In the Character.AI litigation, Judge Anne Conway declined to accept the argument, citing significant legal uncertainties and reasoning that “the Court is not prepared to hold that Character.AI’s output is speech.” But that is very far from a determination that this output is not speech, for First Amendment purposes. And the broad arc of First Amendment caselaw seems, logically, to favor broad “speech rights” for AI systems, under the auspices of the corporations that develop them.

Redirecting this larger doctrinal trajectory will likely require a joint effort, spearheaded by originalist and progressive legal scholars alike, to more clearly align First Amendment caselaw with the unique prerogatives of human beings, rather than legal abstractions. This, however, is a solution to be pursued on an extended timeline.

In the near term, policymakers working in this area should avoid legislative language, like some of the phrasing found in Ohio’s House Bill 469, that unintentionally implies that AI is more than simply a tool of automation technology. At the federal level, policymakers might seek legislation clarifying that Section 230 of the Communications Decency Act, which broadly immunizes internet service providers from liability for their retransmission of third-party content over which they do not exercise control, does not apply to generative AI systems. These systems are products, and should be governed by traditional principles of products liability law, just as House Bill 469 recognizes. At the state level, legislators might seek to deploy age verification controls or other safeguards.

2. The Nonhuman-Animal Theory of AI: Policy Responses

In a legal or political environment operating on the theory that AI systems are more akin to nonhuman animals, courts and legislators should straightforwardly refuse to ground any account of AI “personhood” in cognitive-capacity considerations. As previously noted, the same logic precluding the extension of personhood to nonhuman chimpanzees in the Lavery cases is applicable to AI systems: it is profoundly unclear how such systems could ever be the bearers of legal duties, with the capacity to suffer legal consequences for misconduct. Significantly, though the Lavery cases did not use the term, these judicial conclusions were, essentially, natural law arguments about the distinctiveness of human capacities for rights-bearing and responsibility-bearing. In the context of legal questions surrounding personhood rights for AI systems, this long-neglected tradition of inquiry may become newly vital.

Policymakers should resist any scaremongering temptation to confer personhood on AI systems prophylactically, on the theory that ultrapowerful AI systems will eventually punish those who did not affirm their “rights” from early on. Lest one think this argument for personhood is farfetched or speculative, it is actually one of the rationales Judge Forrest advances in support of recognizing AI personhood. If we treat AI as chattel, Judge Forrest reasons, “it will be on the assumption that predictions that AI will be more powerful than we are do not come true, or we may find ourselves on the receiving end of the same logic.” If taken seriously, this is an argument against continuing to develop powerful AI systems at all, given that such systems currently remain within human control. It is not a compelling argument for granting the rights of persons to AI systems now, in the name of technological inevitability.

Conclusion

Debates over personhood rights for AI systems will only intensify in the years to come. Arguments for such rights will likely be predicated on existing lines of legal authority involving personhood questions, ranging from the rights of corporations to the (purported) rights of nonhuman animals.

Engaging those debates requires, first, a threshold determination about what, in fact, AI systems are—or, at the least, what their closest legal analogue ought to be. Regardless of the outcome of that determination, legislators and jurists have ample basis to reject the most ambitious theories of AI “personhood,” whether as an extension of existing principles of corporate personhood or in the form of a hitherto-unrecognized category of rightsholders. Getting this answer wrong will give rise to numerous undesirable political and social consequences, which are not clearly offset by countervailing considerations.

In commenting on efforts to secure legal personhood status for chimpanzees, law professor Richard Cupp has observed that

[h]umans’ personhood is not based on an individual analysis of intellect, but rather on being part of the human community where moral agency sufficient to accept our laws’ duties as well as their rights is the norm.

That is exactly right. And it is a principle that today’s policymakers, however tempted by the allure of the new, should bear in mind going forward.

Editor's Note: Download the full policy brief for references.

About the Author

John Ehrett is counsel at Lex Politica PLLC. He previously served as Chief of Staff and Attorney Advisor to Commissioner Mark Meador on the Federal Trade Commission, and as Chief Counsel to U.S. Senator Josh Hawley on the Senate Judiciary Committee. 
