AI is used widely, but lawmakers have set few rules
In the fall of 2016, the Connecticut Department of Children and Families began using a predictive analytics tool that promised to help identify kids in imminent danger.
The tool used more than two dozen data points to compare open cases in Connecticut’s system against previous welfare cases with poor outcomes. Then each child received a predictive score that flagged some cases for faster intervention.
Even as more states began to adopt the tool, however, some agencies found that it seemed to miss urgent cases and inaccurately flag less serious ones. A study published in the journal Child Abuse & Neglect later found it didn’t improve child outcomes. Connecticut and several other states abandoned the tool, which was developed by a private company in Florida. In 2021 — five years after Connecticut’s Department of Children and Families first used the tool, and two years after the state junked it — researchers at Yale University requested information about the mechanics of how it worked and concluded that the agency had never understood it.
“This is a huge, huge public accountability problem,” said Kelsey Eberly, a clinical lecturer at Yale Law School. “Agencies are getting these tools, they’re using them, they’re trusting them — but they don’t even necessarily understand them. And the public certainly doesn’t understand these tools, because they don’t know about them.”
Connecticut is the latest state to pass explicit regulations for artificial intelligence and other automated systems, thanks in part to the legacy of its screening tool for at-risk kids. A bipartisan bill, passed May 30 and expected to be signed into law by Democratic Gov. Ned Lamont, would require state agencies to inventory and assess any government systems that use artificial intelligence, and would create a permanent working group to recommend further rules.
Many states already regulate aspects of these technologies through anti-discrimination, consumer protection and data privacy statutes. But since 2018, at least 13 states have established commissions to study AI specifically — and since 2019, at least seven states have passed laws aimed at mitigating bias, increasing transparency or limiting the use of automated systems, both in government agencies and the private sector.
In 2023 alone, lawmakers in 27 states, plus Washington, D.C., and Puerto Rico, considered more than 80 bills related to AI, according to the National Conference of State Legislatures.
Artificial intelligence tools — defined broadly as technologies that can perform complex analysis and problem-solving tasks once reserved for humans — now frequently determine what Americans see on social media, which students get into college, and whether job candidates score interviews.
More than a quarter of all American businesses used AI in some form in 2022, according to the IBM Global AI Adoption Index. In one striking illustration of AI’s growing ubiquity, a recent bill to regulate the technology in California drew comment from organizations as diverse as a trade association for the grocery industry and a state nurses union.
But federal legislation has stalled, leaving regulation to local governments and creating a patchwork of state and municipal laws.
“The United States has been very liberal on technology regulation for many years,” said Darrell M. West, a senior fellow in the Center for Technology Innovation at the Brookings Institution think tank and the author of a book on artificial intelligence. “But as we see the pitfalls of no regulation — the spam, the phishing, the mass surveillance — the public climate and the policymaking environment have changed. People want to see this regulated.”
Lawmakers’ interest in regulating technology surged during this legislative session, and is likely to grow further next year, thanks to the widespread adoption of ChatGPT and other consumer-facing AI tools, said Jake Morabito, the director of the Communications and Technology Task Force at the conservative American Legislative Exchange Council (ALEC), which favors less regulation.
‘Tremendous’ potential and dangers
Once the stuff of science fiction, artificial intelligence now surfaces in virtually every corner of American life. Experts and policymakers have often defined the term broadly, to include systems that mimic human decision-making, problem-solving or creativity by analyzing large troves of data.
AI already fuels a suite of speech and image recognition tools, search engines, spam filters, digital map and navigation programs, online advertising and content recommendation systems. Local governments have used artificial intelligence to identify lead water lines for replacement and speed up emergency response. A machine-learning algorithm deployed in 2018 slashed sepsis deaths at five hospitals in Washington, D.C., and Maryland.
But even as some AI applications yield new and unexpected social benefits, experts have documented countless automated systems with biased, discriminatory or inaccurate outcomes. Facial recognition services used by law enforcement, for instance, have repeatedly been found to falsely identify people of color more often than white people. Amazon scrapped an AI recruiting tool after it discovered the system consistently penalized female job-seekers.
Critics sometimes describe AI bias and error as a “garbage in, garbage out” problem, said Mark Hughes, the executive director of the Vermont-based racial justice organization Justice for All. In several appearances before a state Senate committee last year, Hughes testified that lawmakers would have to intervene to prevent automated systems from perpetuating the bias and systemic racism often embedded in their training data.
“We know that technology, especially something like AI, is always going to replicate that which already exists,” Hughes told Stateline. “And it’s going to replicate it for mass distribution.”
More recently, the advent of ChatGPT and other generative AI tools — which can create humanlike writing, realistic images and other content in response to user prompts — has raised new concerns among industry and government officials. Such tools could, policymakers fear, displace workers, undermine consumer privacy and aid in the creation of content that violates copyright, spreads disinformation and amplifies hate speech or harassment. In a recent Reuters/Ipsos poll, more than two-thirds of Americans said they were concerned about the negative effects of AI — and 3 in 5 said they feared it could threaten civilization.
“I think that there’s tremendous potential for AI to revolutionize how we work and make us more efficient — but there are also potential dangers,” said Connecticut state Sen. James Maroney, a Democrat and champion of that state’s AI law. “We just need to be cautious as we move forward.”
Connecticut’s new AI regulations provide one early, comprehensive model for tackling automated systems, said Maroney, who hopes to see the regulations expand from state government to the private sector in future legislative sessions.
The law creates a new Office of Artificial Intelligence in the state executive branch, tasked with developing new standards and policies for government AI systems. By the end of the year, the office must also create an inventory of automated systems used by state agencies to make “critical decisions,” like those regarding housing or health care, and document that they meet certain requirements for transparency and nondiscrimination.
The law draws from recommendations by scholars at Yale and other universities, Maroney said, as well as from a similar 2021 law in Vermont. The model will likely surface in other states too: Lawmakers from Colorado, Minnesota and Montana are now working with Connecticut to develop parallel AI policies, Maroney said, and several states — including Maryland, Massachusetts, Rhode Island and Washington — have introduced similar measures.
In Vermont, the law has already yielded a new advisory task force and a state Division of Artificial Intelligence. In his first annual inventory, Josiah Raiche, who heads the division, found “around a dozen” automated systems in use in state government. Those included a computer-vision project in the Department of Transportation that uses AI to evaluate potholes and a common antivirus software that detects malware in the state computer system. Neither tool poses a discrimination risk, Raiche said.
But emerging technologies might require more vigilance, even as they improve government services, he added. Raiche has recently begun experimenting with ways that state agencies could use generative AI tools, such as ChatGPT, to help constituents fill out complex paperwork in different languages. In a preliminary, internal trial, however, Raiche found that ChatGPT generated higher-quality answers to sample questions in German than it did in Somali.
“There’s a lot of work to do to make sure equity is maintained,” he said. But if done right, automated systems “could really help people navigate their interactions with the government.”
A regulatory patchwork
Like Connecticut, Vermont also plans to expand its AI oversight to the private sector in the future. Raiche said the state will likely accomplish that through a consumer data privacy law, which can govern the data sets underlying AI systems and thus serve as a sort of backdoor to wider regulation. California, Connecticut, Colorado, Utah and Virginia have also passed comprehensive data privacy laws, while a handful of jurisdictions have adopted narrower regulations targeting sensitive or high-risk uses of artificial intelligence.
By early July, for instance, New York City employers who use AI systems as part of their hiring process will have to audit those tools for bias and publish the results. Colorado, meanwhile, requires that insurance companies document their use of automated systems and demonstrate that they do not result in unfair discrimination.
The emerging patchwork of state and local laws has vexed technology companies, which have begun calling for federal regulation of AI and automated systems. Most technology companies cannot customize their systems to different cities and states, said West, of the Brookings Institution, meaning that — absent federal legislation — many will instead have to adopt the most stringent local regulations across their entire geographic footprint.
That is a situation many companies hope to avoid. In April, representatives from a wide range of business and technology groups lined up to oppose a California AI bill that would have required private companies to monitor AI tools for bias and report the results — or face hefty fines and consumer lawsuits. The bill survived two committee votes in April before dying in the Assembly Appropriations Committee.
“Governments should collaborate with industry and not come at it with this adversarial approach,” said Morabito, of ALEC. “Allow the market to lead here … a lot of private sector players want to do the right thing and build a trustworthy AI ecosystem.”
ALEC has proposed an alternative, state-based approach to AI regulation. Called a “regulatory sandbox,” the program lets businesses work with state attorneys general’s offices to try out emerging technologies that might otherwise conflict with state laws. Such sandboxes encourage innovation, Morabito said, while still protecting consumers and educating policymakers on industry needs before they draft legislation. Arizona and Utah, as well as the city of Detroit, have recently created regulatory sandboxes where companies can conduct AI experiments.
Those programs have not prevented lawmakers in those states from also pursuing AI regulations, however. In 2022, a Republican-sponsored Arizona bill sought to bar AI from infringing on residents’ “constitutional rights,” and the Utah legislature recently convened a working group to consider possible AI legislation.
Policymakers no longer consider AI a vague or future concern, Yale’s Eberly said — and they aren’t waiting for the federal government to act.
“AI is here whether we want it or not,” she added. “It’s part of our lives now … and lawmakers are just trying to get ahead of it.”