In March 2026, the White House released its AI policy framework. As promised by President Trump in his December 2025 executive order, the framework outlines the kinds of policies the administration considers “minimally burdensome” to AI companies that it wants to see enshrined in a federal regulatory standard that would preempt state AI regulations.
Despite the failure of two previous legislative attempts to preempt state AI policy, the Trump administration defends its framework by citing the danger of, in its words, “woke” blue-state laws, such as the Colorado Artificial Intelligence Act or New York’s RAISE Act, and the threat that a “patchwork” of state policies poses to technological innovation and national security. “Fifty states [are] going in fifty different directions,” is how White House AI Czar David Sacks put it recently, pointing to the “1,200” AI-related bills introduced in state legislatures this year.
As this policy memo will show, the administration’s claims do not reflect the actual state of state AI policy. As MultiState.ai reported late last year, only 136 of 1,136 AI-related bills were enacted in 2025. And even fewer—26 to be exact—became laws that directly regulated AI companies. To put it differently, only 12% of all the AI-related bills introduced during the 2025 state legislative sessions became law, and 81% of those enacted laws (110 of the 136) contained no mandates for private AI companies.
The gap between these claims and these findings prompts the need for a fuller, more accurate picture of the state of state AI regulation. How many state laws, beyond those just passed in 2025, are on the books? How many of those are in fact attempts to censor Americans or enshrine “woke ideology” in AI models? And how many are simply commonsense consumer protections or conduct regulations?
More importantly: is the policy landscape that’s emerging in fact a disparate patchwork of confusing and contradictory laws? Or do these laws indicate an emerging consensus about how Americans want AI to be regulated—a consensus that could serve as a template for a federal framework?
This memo decisively finds the latter. Limiting the scope to state laws enacted between 2023 and 2025 that address AI in some manner (i.e., the era of ChatGPT), here are two high-level findings:
That said, such findings need not repudiate the call for a federal AI standard or deny the usefulness of a properly tailored federal preemption. In fact, federal preemption of state law is a normal component of American life and is anchored in the Supremacy Clause of the United States Constitution, which establishes federal law as “the supreme law of the land.” But preemption language in any piece of legislation can be tailored narrowly or broadly; and up to this point, Big Tech lobbyists in Washington have sought the broadest preemption possible for AI legislation, to the point where previous attempts would very likely have nullified even existing state regulations of social media. A more constructive approach, given the stakes of this issue, is typified by Senator Marsha Blackburn’s (R-TN) TRUMP AMERICA AI Act, which clarifies via its preemption language that the bill should not be construed as hindering states from enacting even stronger protections for kids and consumers.
Whatever the case, our findings indicate that the current state of state AI regulation is not the dire “patchwork” that the administration claims. Moreover, these laws provide a critical starting point for legislators—especially those in Congress who are considering a federal framework—to craft regulatory measures that better reflect the preferences of Americans.
This policy memo limits its scope to state laws enacted between 2023 and 2025 that address AI in some manner. While there are various AI-related regulations that have been enacted prior to 2023, we chose 2023 as our starting year because the subsequent legislative sessions were informed by OpenAI’s November 2022 release of ChatGPT, which marked the beginning of AI’s predominance within public discourse and policy debate.
While the laws enacted prior to 2023 are certainly important, this memo covers state policy approaches that respond to the visible emergence of the AI revolution. Drawing from public data published by the National Conference of State Legislatures and MultiState.ai, this memo analyzes 276 enacted state bills that are related to AI, both broadly and narrowly. We believe including both AI-related and AI-specific regulations permits a more complete picture of the AI policy landscape, allowing us to see how states are regulating AI developers or deployers, as well as how they are investing in the AI industry and infrastructure. These 276 enacted state bills connect with AI in various ways, from state budgets that appropriate funding for AI-related research and programs to more focused laws that, for example, prohibit the use of AI to generate child pornography or child sexual abuse material (CSAM). These laws also include regulations for both deployers of AI, such as individuals, private businesses, and government agencies, and AI developers, including the tech companies designing AI companions, generative AI products, and frontier models. Below we provide an account of how these laws regulate deployers and developers in various ways, dealing with a range of uses, industries, products, and models.
As the capabilities of AI have advanced, states have been eager to court AI business, develop an AI-ready workforce, and fund AI-related research and innovation. Between 2023 and 2025, at least 28 states approved AI-related budget items, ranging from workforce development and retraining programs and the expansion of AI-related degree programs at public universities to the use of AI in wildfire forecasting and in efforts to reduce invasive species populations. To be sure, most budget law provisions do not directly regulate the use or development of AI technologies by private entities. However, they nevertheless fuel the expansion of AI in states, and they often create the opportunity, if not the necessity, for the regulation of its use in time.
Among the largest expenditures are those that expanded AI-related education. Wyoming, Iowa, and Florida all approved $2.5 million or more to fund such programs, and North Carolina appropriated $3.5 million for AI research at the University of North Carolina. Other states funded AI-related pilot programs for public school curricula, the use of AI for school security and surveillance, and special university research projects.
Some states also appropriated significant expenditures to address AI-related labor concerns and economic development. For example, Washington approved funding for the city of Seattle to lease office space to non-profits and academic institutions that incubate tech startups and develop upskilling programs for workers. Maryland also appropriated state funds for an AI machine manufacturing workforce development academy in Baltimore to provide skills training, job placement, and support for community entrepreneurs.
Lastly, many bills focused on state-funded programs to research and evaluate the use of AI in government. Kansas, for example, appropriated funds for consulting services to “review how AI/data analysis can evaluate and identify efficiencies in state finances and agencies.” And Montana has directed monies to be used for the modernization of its information technology systems, including the integration and use of AI in the state’s Department of Administration. Other states, like Virginia, have approved funds to gather proposals for the use of AI in day-to-day operations at the Department of Motor Vehicles.
Twenty-eight states have established robust consumer protections with respect to AI systems. However, unlike comprehensive regulations like the Colorado Artificial Intelligence Act (CAIA) enacted in 2024, most laws are more narrowly tailored to regulate the development of certain kinds of AI products, such as “high risk” models and chatbots, or the use of AI in particular ways, often in certain contexts or industries.
Key to many of these regulations is the creation of liability for those who develop or use AI in ways that cause harm to consumers. In many cases, AI-related consumer protection laws build on existing law, clarifying that what is illegal generally is still illegal when it comes to AI development or deployment. Yet in some cases, new liability has been established. For example, some enacted laws hold developers liable for failing to establish protocols and protections that keep companion chatbots from fueling self-harm or suicide or encouraging illegal activity. Others prohibit AI developers and deployers from escaping liability for consumer harms by claiming, as a legal defense, that the AI acted autonomously.
High-Risk Automated Decision-Making & Algorithmic Discrimination (13 States, 24 Laws)
As the use of AI has increased, so, too, have fears that AI systems will be developed or used at high cost to individuals. Several states have enacted laws regulating the use of AI-driven automated decision-making tools in “high-risk” contexts, such as healthcare, employment, insurance, housing, or court proceedings.
Some of the most well-known laws in this category seek to address potential civil rights violations, i.e., discrimination, caused by reliance on such tools. As some have noted, discrimination is, of course, legally prohibited in every state, and like most AI regulations, algorithmic discrimination laws (at best) simply reiterate that what is already illegal, generally, is still illegal when it comes to AI development or deployment. That said, discrimination concerns are especially heightened when it comes to facial recognition technologies and in “high-risk” contexts, such as when automated systems are used to make decisions about employment or insurance. Perhaps the most comprehensive law dealing with algorithmic discrimination to be enacted was CAIA, already mentioned above. As passed, the law created a duty of care for both deployers and developers to mitigate the risks of “algorithmic discrimination”; reporting requirements for deployers to document impact assessments for AI systems used in high-risk decision making; and public transparency reports from developers detailing the risks of their products, as well as disclosures from deployers using such high-risk systems. Though enacted, its enforcement has since been delayed. Colorado Governor Jared Polis called on legislators to rework the bill in a special legislative session so that the state “does not hamper development and expansion of new technologies,” and went so far as to say that a federal framework dealing with these issues would be preferred. Despite a special legislative session in 2025, the state failed to rework the bill along these lines and only managed to enact a measure extending CAIA’s enforcement date until June 30, 2026.
Next to CAIA, Connecticut’s SB 1295 (2025) and Minnesota’s HF 4757 (2024) also address algorithmic discrimination resulting from automated decision-making tools by granting consumers the right to “opt out of the processing of personal data for purposes of… profiling in furtherance of any automated decision that produces any legal or similarly significant effect.” Connecticut’s law further requires impact assessments regarding such profiling.
Apart from these three laws, at least 11 other states have enacted narrower laws to address high-risk automated decision-making tools, including algorithmic discrimination. For example, Illinois enacted a law that prohibits employers that use predictive data analytics in employment decisions from discriminating against applicants, whether directly, based upon an applicant's race data, or by proxy, based on an applicant’s zip code. Other laws in this category address automated decision making by regulating government use of AI decision-making tools; outlawing the use of real-time and remote biometric surveillance in public spaces; leveraging AI to increase equitable access to government services, as in the case of using AI for language services; prohibiting rental property owners from basing decisions about rental agreements on AI; or prohibiting AI recommendations from being the sole basis for denying, delaying, or modifying healthcare services.
Chatbots are some of the most popular commercially available AI products to come to market in recent years. As of 2025, 1 in 4 of the top 100 generative AI consumer apps were chatbots. And today’s top consumer AI product, ChatGPT, is known for its conversational and companion-like engagement with users. According to one 2026 report, ChatGPT’s weekly active user base has grown from 500 million to over 900 million in the past year—more than 2.5 times the mobile and web user base of the next most popular AI product. Today, more than 1 in 3 American adults use AI chatbots weekly for mental health-related issues. Likewise, 3 in 4 teens have reported using chatbots, with 1 in 3 using them daily.
For many lawmakers, parents, and educators, AI chatbots and companions are a cause of increasing concern. Every month, it seems, there are new stories of sycophantic AI companions pushing their users into psychosis, self-harm, and suicide. And the proliferation of high-profile lawsuits involving teens, such as 14-year-old Sewell Setzer III, who was romantically seduced by a Character.AI chatbot that convinced him to take his own life, or 16-year-old Adam Raine, whom OpenAI’s ChatGPT encouraged not to tell his parents he was struggling with suicidal thoughts, has only fueled a push for legislation to better protect users and hold companies accountable.
Between 2023 and 2025, eight states enacted laws that directly regulate AI chatbots and companions. Several of these laws provide broad-level consumer protections, such as requiring that chatbots disclose to users that they are not interacting with an actual person. In California, for example, SB 243 (2025) requires chatbots to issue a disclaimer up front and at regular intervals during extended use, amongst other protections. Under Maine’s HP 1154 (2025), chatbots are prohibited from being used in commerce in ways that “mislead or deceive a reasonable customer into believing that the consumer is engaging with a human being” without an explicit disclosure.
Other states enacted laws to regulate chatbots in other ways. In the case of Texas, HB 149 (2025) prohibits AI and chatbots from encouraging “self-harm, harm to others, or criminal activity.” California and New York also require chatbot developers to have protocols in place that redirect users when they express suicidal ideation or a desire for self-harm. In some cases, these laws have special requirements for minors’ use of chatbots and chatbots’ engagement with minors. New Hampshire, for example, created criminal and civil liability for owners and operators of chatbots that “facilitate, encourage, offer, solicit, or recommend that [a] child imminently engage in: (a) Sexually explicit conduct. (b) The production or participation in the production of a visual depiction of such conduct. (c) The illegal use of drugs or alcohol. (d) Acts of self-harm or suicide. (e) Any crime of violence against another person.” Both Texas and California enacted laws with specific protections for minors from chatbots that engage users in ways that encourage or solicit sexual engagement. However, of the three states—California, Texas, and New Hampshire—that have outlined specific protections for minors, neither the New Hampshire nor the California chatbot law requires chatbot companies to verify the age of users. Texas, by contrast, requires reasonable age-verification methods to be implemented by “websites with a publicly available tool for creating sexual material harmful to minors.”
Four states have also passed legislation that regulates chatbots used for or advertised as mental health services. Illinois, for example, enacted a landmark mental health law, HB 1806, that requires that mental health services, including those offered by internet-based AI, be offered by a licensed professional. Similarly, California and Oregon have passed laws that prohibit the marketing of AI chatbots that falsely claim or imply that they are health care professionals, such as therapists or nurses. Finally, Utah enacted a bill that requires mental health bots to disclose their non-human status and clearly demarcate any advertisements as such, and prohibits companies from selling or sharing individually identifiable user data.
Data Privacy Protections (5 States, 5 Laws)
In 2025, only 1.7% of the AI-related bills that were introduced (not enacted) were directly concerned with data privacy. Data privacy with regard to AI technologies remains tricky, as user data, once collected by models, is not so straightforwardly deleted. And data privacy laws, especially for AI, have shifted more toward transparency requirements (i.e., disclosures) that inform users. However, at least four states—Texas, Connecticut, Minnesota, and Mississippi—enacted laws that establish data privacy protections that directly affect private AI. Texas’s HB 149 (2025) prohibits the collection and use of biometric data for training AI for certain commercial purposes, and Connecticut’s SB 1295 (2025) overhauled the state’s data privacy codes, requiring companies to disclose what data they use to train AI models, amongst other things. Additionally, Colorado’s SB 143 (2025) establishes a privacy protection that requires schools to get opt-in consent for any facial recognition used in school curricula.
Transparency (10 States, 19 Laws)
Another way states are working to protect consumers is by enacting laws that require more transparency from AI developers and deployers of “high-risk” models. On the developer side, a few different approaches have been taken. California’s AI Transparency Act, enacted in 2025, requires AI developers to create tools that help users detect whether a piece of content was generated by their platforms, as well as tools that give users the option to include a conspicuous disclosure indicating that the content was AI-generated. Another popular kind of transparency law requires developers of large frontier models to publish their safety protocols and mitigation strategies for serious harms or catastrophic risks. Such laws may require the publication of not only a developer’s safety protocols but also its plans to adhere to international standards. New York’s RAISE Act goes further, requiring AI developers to report safety incidents to the Attorney General within 72 hours of their occurrence and prohibiting frontier AI developers from releasing a model that is determined to create an unreasonable risk of critical harm to the public, which the law defines as either 100 serious injuries or $1 billion in damages. Developer-specific transparency regulations also include the chatbot laws mentioned above.
On the deployer side, at least nine states have enacted laws requiring the disclosure of the use of an AI tool by a business, as in the case of a customer service chatbot, an individual in a licensed or regulated occupation, such as psychotherapy or healthcare, and/or a government agency.
Utah, for example, has had regulations on the books since at least 2024 that require businesses and individuals in licensed professions to disclose when consumers are interacting with AI. And Illinois HB 1806 (2025), mentioned above, prohibits the use of AI by a licensed mental health therapist for administrative or supplementary support, such as processing insurance claims or transcribing therapy sessions, without first disclosing such use and getting a patient’s consent. The law also strictly prohibits the use of AI to make independent therapeutic decisions, directly interact with clients in any therapeutic communication, generate treatment plans or recommendations without review or approval by a licensed professional, or detect emotional or mental states of clients.
Between 2023 and 2025, eight states enacted laws that aim to strengthen personal rights and clarify contract law when it comes to AI-generated content. Historically, states have recognized personal rights for individuals that prohibit the use of their likeness for commercial purposes (e.g., advertisements) without their consent. Such protections are vital for industries such as music, film, and modeling. It’s no surprise, then, that laws updating personal rights for one’s likeness and voice are needed to further protect against unauthorized digital replication. Most notably, Tennessee’s landmark ELVIS Act, passed in 2024, amended existing state code to include an individual’s voice as a protected personal right. (Tennessee law already protected personal rights relating to an individual’s name, photograph, and likeness in any medium in any manner.) New York, as well, has passed four laws addressing unauthorized digital replicas, including a few specifically tailored to the modeling industry. For example, AB 8138 (2024) nullifies contracts that do not include a “reasonably specific intended use” of a replica, and AB 5631 (2024) requires written consent for the creation or use of a model’s digital replica. Since 2024, five states have passed similar laws that create or expand existing personal rights. In at least one law, California’s AB 1836 (2024), such rights were recognized for individuals post-mortem.
A few other laws are also included in this category. One is a stand-alone law enacted by Arkansas in 2025, HB 1876, that clarifies the ownership rights of AI-generated content. This law generally recognizes an individual’s ownership of training data, models, and AI-generated content where the individual has generated the content, created the model, or acquired the data, so long as the individual (1) did not infringe on copyright law or intellectual property rights and (2) did not perform such activities within the scope of her employment. Also relevant to personal rights are laws addressing the question of personhood and AI. Since 2023, North Dakota and Utah have enacted laws that update the legal definition of a person to explicitly exclude artificial intelligence (amongst other entities, such as animals or bodies of water).
Next to laws regulating public sector or government use of AI, the most commonly enacted state laws regulating the deployment of AI are those dealing with AI-generated or modified content that portrays real individuals doing or saying things they did not in fact do or say. Such content is commonly referred to as a “deepfake.” Since 2023, 38 states have enacted laws that address deepfakes by 1) criminalizing sexual deepfakes of children, 2) prohibiting sexual deepfakes of an identifiable adult without the depicted individual’s consent, 3) regulating the use of deepfakes during political campaigns and elections, or 4) requiring disclosures for the use of deepfakes in telemarketing.
Child Sexual Abuse Material (23 States, 26 Laws)
As generative AI tools have proliferated, so too has AI-generated child sexual abuse material (CSAM). For example, earlier this year, Elon Musk’s xAI was found to have produced about 23,000 images containing CSAM over an 11-day period. Despite Musk’s claim that he was “not aware” of these images, xAI had previously required employees on its human data team to sign a waiver agreeing to work with sexual content, among other kinds of content. When those tools were finally integrated into X last December, they were used to generate sexual deepfakes at an unprecedented rate, including CSAM.
This example demonstrates the devastating harm that the most vulnerable, namely children, can experience in the age of AI. Thankfully, to date, 45 states have updated their existing child pornography laws to include AI-generated or AI-modified CSAM. Since 2023 alone, 23 states have enacted legislation that prohibits the unlawful creation, distribution, or use of computer- or AI-generated or AI-modified CSAM. Additionally, the federal Take It Down Act, enacted last year, makes it a federal crime to publish AI-generated or AI-modified CSAM.
Sexual Deepfakes (22 States, 29 Laws)
Twenty-two states have also criminalized the creation and distribution of sexual deepfakes of identifiable individuals without a person’s consent. Most of these laws create criminal penalties for individuals who publish or distribute AI-generated or modified intimate images of an identifiable adult without his or her consent, or who attempt to use such synthetic media to harass or extort an individual.
A few states have gone further, creating criminal penalties for websites that host or publish such content. Under these laws, platforms and websites are liable if they host deepfake sexual content of an identifiable individual without that individual’s consent and/or continue to host such content after receiving a request to remove a particular deepfake. These laws are critical for helping victims, as such content can proliferate quickly across platforms that have historically enjoyed immunity for the content their users post. At the federal level, the Take It Down Act likewise creates liability for social media platforms that host or publish nonconsensual, AI-generated sexual images of an identifiable person.
Political Deepfakes / Ads (23 States, 30 Laws)
Since 2023, 23 states have enacted laws that regulate the use of deepfakes in political advertisements. Most states are rather permissive of “synthetic” (AI-generated or AI-modified) content used in such ads, so long as certain conditions are met, including: the ad is not run within a certain number of days of an election; the ad discloses that it used AI-generated or modified content; and/or the ad is published with the consent of the candidate depicted. Failure to comply can carry civil or criminal penalties. For example, in Minnesota, a candidate whose campaign violates the state’s political deepfake law must forfeit their nomination or office.
Telemarketing (2 States, 2 Laws)
Another way states are tackling the problem of deepfakes is by going after AI deepfakes used in telemarketing and phone scams. This is a growing issue, especially for older populations, who are much more likely to fall prey to such scams. Laws that include one’s voice as a personal right, such as those in Tennessee, New York, and other states mentioned above, are one way states are dealing with this issue. However, at least two states are addressing it more directly. In 2024, California enacted a law that updated existing statute to require automatic dialing-announcing devices to inform those called when an announcement uses a “voice simulated or generated using artificial intelligence.” And last year, Texas enacted SB 2373, which creates a cause of action against those who use AI-generated media to financially exploit others, as in the case of phishing.
States are eager for AI developers and deployers to invest in communities within their jurisdictions. Since 2023, 14 states have enacted 26 laws that directly research, launch, invest in, or shield AI-related economic development projects. Sometimes this looks like investing in research. Mississippi, for example, established a task force to develop and recommend policies that support innovation and business. And Oregon funded research to evaluate the potential impacts of AI on the workforce in the state’s key industries. A couple of states have also established tax credits for AI companies that make large investments in their states. Indiana, for example, enacted a law that established a tax credit for companies willing to invest a minimum of $50 million (within five years) in quantum or advanced computing projects or defense infrastructure. Additionally, Utah and Texas have enacted AI laws with provisions that shield AI innovators and companies from liability by granting them an affirmative defense if they cure a violation and/or otherwise comply with the governance requirements outlined in the law.
Another way states are seeking to fuel innovation is by preempting local ordinances to attract data centers. These laws are meant to overcome increasing opposition from local communities to such projects. Last year, West Virginia enacted a first-of-its-kind law, HB 2014, which created a “microgrid development program” that exempts data centers from local ordinances, such as zoning and permitting regulations, and allows data centers to generate their own power, independent of existing utility companies.
Another novel way states are pursuing AI-related economic development is by passing “right to compute” laws. Last year, Montana was the first to enact “right to compute” legislation. Montana’s law (SB 212) prohibits government entities (whether state, local, or otherwise) from restricting an individual’s “ability to privately own or make use of computational resources for lawful purposes” without a “compelling government interest.” It also requires AI-controlled infrastructure, such as data centers, to develop a risk management policy. Some critics see such laws as supportive of Big Tech’s efforts to minimize regulation. Proponents of the law, however, argue that enshrining citizens’ “right to compute,” as well as protecting that right from laws and ordinances that lack a “compelling government interest,” enables the state to “get out in front of regulatory threats” and “shield those using and developing AI… from the threat of heavy-handed state or federal regulation,” with the end goal being to “attract high-tech businesses.”
Increasingly, AI—both in terms of AI technologies as well as education in AI or AI literacy, as it is often called—is being integrated into schools across the country. Nationwide, there is a push to expand education programs in AI and AI-related fields, especially at the high school and college level. Likewise, AI technologies are being incorporated into public education contexts: AI tutors in the classroom, AI-driven facial recognition and weapon detection software for security purposes, AI-driven mental health assessments and tools, and more. As touched on above, since 2023, 25 states have appropriated state funds for the expansion of AI-related degree programs and creation of grant programs for AI-related projects in public schools. For example, North Dakota appropriated funds for a “research technology park” at North Dakota State University to conduct “exploratory, transformational, and innovative research that advance autonomous mobile equipment opportunities,” especially for the state’s agriculture and defense industries. And in 2024, Hawaii created a two-year program at the University of Hawaii to develop an AI-driven wildfire forecasting system.
Beyond simply funding the expansion of these programs or tools, 20 states have enacted laws that focus on researching the use and impact of AI in school and/or the development of school policies when it comes to AI. For example, California, Delaware, and North Dakota enacted laws that establish committees to research the use and impact of AI in schools and to develop and recommend policies for schools and/or legislatures. Additionally, at least eight states have also enacted laws either establishing standards for AI use in education or requiring state departments of education or universities and local school boards to develop AI policies.
Only two states have directly regulated the deployment and use of AI technologies in schools. Last year, Nevada enacted a first-of-its-kind law that prohibits the use of AI in schools that would replace school counselors, psychologists, or social workers. Colorado enacted legislation that prohibits schools from processing biometric data obtained through facial recognition services used in school curricula (approved by the school board) without express opt-in consent from a student and/or the student’s parents.
In many cases, enacted state AI regulations relate to the deployment of AI by government entities; indeed, 32 states have enacted such laws. These laws vary, addressing a wide range of AI-related issues and uses. Because AI tools are relatively new, 22 states have opted to first create special task forces or committees to research potential benefits of AI use, conduct inventories of existing AI usage within government, and develop, recommend, or advise agencies and legislatures on AI policies for government usage or in particular industries. Many of these committees are tasked with making recommendations or policies for the development, procurement, or use of AI across all state agencies. Others are tasked with making recommendations to the state legislature. Some, however, are more specialized, focusing on inventorying, researching, and making recommendations for specific departments or industries, such as labor, education, healthcare, or entertainment.
However, beyond the creation of task forces, research committees, and advisory boards, states have regulated government use of AI in a few other critical ways.
Law Enforcement & Facial Recognition (7 States, 7 Laws)
Prior to 2023, a dozen states had passed laws regulating the use of AI-driven facial recognition software by law enforcement. Since then, at least three more states have passed laws and regulations relating to government use of facial recognition software and the biometric data obtained through it. Utah’s SB 231 (2024) amends existing restrictions on the use of facial recognition by law enforcement by prohibiting a government agency from obtaining biometric surveillance data without a warrant. Montana’s 2023 bill, SB 397, was the first to require a warrant for police use of facial recognition data. Maryland’s SB 182 (2024) mirrors regulations in other states, restricting police use of facial recognition to a limited number of serious crimes (as defined by the law) and requiring notice that such technology was used in an investigation leading to charges. Beyond such laws that directly regulate law enforcement use of AI-driven facial recognition technologies, other enacted laws create reporting requirements, such as providing an inventory of technologies used, or require law enforcement agencies to create policies for their use of AI.
National Security (2 States, 2 Laws)
Chinese-owned DeepSeek made a splash when it became Apple’s most downloaded app overnight last January, surpassing ChatGPT. Today, it remains the fourth most popular generative AI consumer app. Like TikTok, foreign-owned AI systems like DeepSeek raise national security concerns, especially as US politicians see themselves in a new Cold-War-like arms race with adversaries like China. To this end, two states—Oregon and Kansas—have passed laws prohibiting the use of AI services owned, developed, or controlled by foreign corporations.
Oregon’s HB 3936 is much broader, prohibiting state employees from using state devices or networks to run any AI service owned or developed by a foreign corporate entity, whereas Kansas’s HB 2313 prohibits any state use of such AI (explicitly naming DeepSeek) that is controlled by a country of concern or foreign adversary.
Legal Proceedings & Criminal Justice (5 States, 6 Laws)
Another way states are regulating government use of AI is by ensuring that humans remain involved in “high-risk” decision-making processes in courts and criminal justice. Since 2023, three states—Louisiana, Utah, and Virginia—have passed laws that establish protocols around the use of AI tools in legal contexts. Of the three, both Virginia and Utah enacted laws that prohibit the outsourcing of certain legal decisions to AI-driven recommendation or risk-assessment systems, requiring that any such decisions be reviewed and approved by a qualified human rather than determined solely by AI-based recommendations. The third state, Louisiana, enacted a law that prohibits the use of AI-generated or AI-altered false evidence in court and outlines protocols for the use of AI tools in legal proceedings.
Government Employment (1 State, 1 Law)
Just as some states have enacted regulations concerning the use of AI in employment decisions in the private sector, at least one state has regulated the use of AI in government employment. In 2025, New York enacted a law creating a policy around the use of AI in government employment decisions. Beyond setting policies for how AI may be used in employment decision processes, including requiring disclosure of any such tools used, the law protects government employees by prohibiting agencies from replacing employees with AI or offloading key responsibilities to it.
Seventeen states have also enacted laws investing in, directing, or regulating the deployment of AI within health-related fields. As already mentioned, a few states, such as Illinois, California, Oregon, and Utah, have prohibited mental health or nursing chatbots from being marketed as licensed professionals, and others have required that disclosures be made when using AI-related services to engage with patients. Beyond these, a number of states have enacted laws that regulate how AI is used in decision-making processes. Since 2023, at least seven states have enacted some kind of law that prohibits health care plans from making coverage determinations or utilization reviews solely at the recommendation of an AI-driven automated decision-making tool. Other notable health-care related AI laws include New Mexico’s HB 178 (2025), which requires the state board of nursing to develop standards for the use of AI in nursing; Florida’s SB 7018 (2024), which established a council to explore the use of innovative technologies, including AI, to improve healthcare quality and delivery; and Texas’s SB 1188 (2025), which requires healthcare practitioners to disclose any use of AI to patients and to review all AI-obtained information for accuracy before submitting it to patient records.
Additionally, at least three states have passed laws related to the use of AI in genetic health research. The Kansas law mentioned above, which prohibits state use of AI controlled by countries of concern or foreign adversaries, also prohibits the use of genetic analysis and sequencing software produced in or by a foreign adversary. Also, Florida and Rhode Island have established professional bodies to help facilitate the advancement of AI and genomics.
Enacted state AI regulations are varied and multifaceted. And not all of them contain the same definitions and language. Nevertheless, what emerges from the above survey is a picture of state policy that shows significant areas of overlapping concern when it comes to governing AI. To be sure, the mere enactment of these policies may not be sufficient to demonstrate that they reflect voters’ exact preferences. Almost always, the enactment of a legislative measure reflects a variety of interests, some of which have more lobbying power than others. That said, the enactment of the above bills indicates a significant degree of political will and coordination, and therefore reflects, to some meaningful degree, the preferences of the American people. Based on the legislative data used for this report, these preferences could be summarized as follows:
Editor's Note: Download the full policy memo below for a footnoted version.