Highlights
- Advancements in AI technology raise fundamental philosophical and legal questions about the nature of personhood and which beings possess it.
- Legal personhood for AI is not a foregone conclusion. Some legislators have begun developing policy measures intended to preemptively rebut theories of AI personhood.
The American political and legal tradition historically reserved personhood rights, like the freedom of speech, for human beings. That understanding of natural rights was built on core metaphysical commitments about the created nature of human beings. But today, existing social and legal dynamics suggest the eventual political recognition of some form of personhood status for AI systems. Because law proceeds and develops by analogy, a colorable argument for something like “AI personhood” might be predicated on either of two lines of existing legal authority—addressing, respectively, the rights of business corporations and the rights of intelligent nonhuman animals.
Expanding personhood status to AI systems will trigger downstream political and social consequences. Realistically, these effects may include: (1) the insulation of AI companies from legal liability for harms caused by AI systems; (2) the entrenchment and reinforcement of significant political power in the hands of the developers of AI systems; (3) an exacerbation of existing declines in interpersonal interaction and family formation, resulting from the destigmatization of AI-human relationships; and (4) a progressive hardening of social attitudes towards the physical and intellectual disabilities of human beings.
Several possible policy countermeasures, both legislative and judicial, may be deployed in response to efforts to secure personhood status for AI systems. Ultimately, a coherent response requires a basic threshold judgment about the nature of AI systems themselves: whether they are more akin to tools or more akin to nonhuman animals. The former is the more defensible path. Where AI is recognized as a tool of automation administered by human beings, courts and legislators should reaffirm that traditional principles of products-liability law still apply. However, in contexts where AI is treated as a more autonomous entity that operates with a degree of independent agency, relevant legal precedent may derive from cases involving nonhuman animals. This line of authority offers a means of reaffirming the priority of embodied human beings as bearers of legal rights and duties.
Isaac Asimov’s 1940 short story “Robbie” ends on a heartwarming note. After a perilous odyssey through a machine factory, Asimov’s little heroine, Gloria, is finally reunited with her longtime robot companion, who saves her life. Together, they rejoice. “Gloria had a grip about the robot’s neck that would have asphyxiated any creature but one of metal, and was prattling nonsense in half-hysterical frenzy,” Asimov writes. “Robbie’s chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” Gloria’s friend Robbie might be artificial, running off the logic of a “positronic” brain, but in some ineffable sense, he is indeed a sort of person. Or so Gloria, and the reader, are led to believe.
Nearly a century later, Asimov’s tale seems prescient—but more ominous. Sophisticated robots are part of our daily lives. Artificial intelligence systems with unprecedented interactional capabilities dominate news cycles, raising widespread fears of mass job displacement and new levels of surveillance and control. They are also increasingly ubiquitous, in contexts ranging from homes to schools to courthouses.
This shift has a key driver: today’s leading AI models are more accessible to end users than ever before. Models present themselves in friendly ways, rather than as abstract machine-learning processes used to optimize data sets. They respond in natural language to natural-language prompts, and can be coached to adopt stable personas over time. Some companies, like Character.AI, advertise this “humanlikeness” as a feature, inviting users to engage in simulated discussions with fictional characters or celebrities. The apparent “personality” of leading AI models has even given rise to a burgeoning contingent of women with AI “boyfriends,” whom they prefer to real-world men.
These advancements raise fundamental philosophical and legal questions about the nature of personhood and which beings possess it. Even before Asimov, writers, scientists, and ethicists meditated at length on the question of “sentient” or “conscious” artificial intelligence systems—contemplating whether, if such machines were ever built, they could be included within the human community and assigned rights and responsibilities. Theory has now become reality.
Today, significant momentum suggests that the recognition of legal rights for AI, at least in some jurisdictions, is only a matter of time. In the United States, free speech defenses—which imply something very close to AI personhood—are now raised in response to lawsuits stemming from chatbot interactions gone wrong. The European Parliament has teased the possibility of a “specific legal status for robots” recognizing the “status of electronic persons.” Retired federal judges speak positively about the extension of personhood rights to AI systems.
But legal personhood for AI is not a foregone conclusion. Already, some legislators have begun developing policy measures intended to preemptively rebut theories of AI personhood—most notably, Ohio’s House Bill 469, which declares that “[n]o AI system shall be granted the status of person or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings.” These questions of AI and personhood are urgent, and will grow only more so with time.
Editor's Note: This essay is excerpted from the IFS policy brief, "Artificial Intelligence and Theories of Personhood: A Critical Appraisal." Read the full brief here.
John Ehrett is counsel at Lex Politica PLLC. He previously served as Chief of Staff and Attorney Advisor to Commissioner Mark Meador on the Federal Trade Commission, and as Chief Counsel to U.S. Senator Josh Hawley on the Senate Judiciary Committee.
