Lawmakers again probe possible AI regulation in Nebraska but look to other states as a guide
LINCOLN — An interim study ahead of possible 2025 legislation to regulate artificial intelligence in Nebraska elections could hinge on the fate of legislation in at least 19 other states.
State Sens. Tom Brewer of north-central Nebraska and John Cavanaugh of Omaha each asked their colleagues Thursday whether the state should regulate AI. Brewer’s Legislative Resolution 362 focused on possible dangers to elections generally; Cavanaugh’s LR 412 on the use of AI in political campaigns.
Cavanaugh, who introduced Legislative Bill 1203 this year before it stalled in February, said he and others are still trying to understand AI and how to approach it, especially when balancing possible dangers against potential uses, all under First Amendment protections.
“There won’t be a simple solution as technology is ever changing and we’re struggling to keep up right now,” Cavanaugh said. “But together, with the stakeholders in this room and throughout Nebraska, I believe we can reach some common ground.”
Cavanaugh’s legislation would have put AI regulation under the auspices of the Nebraska Accountability and Disclosure Commission and required clear and conspicuous disclosures in paid state or local advertisements for candidates or ballot questions.
But as at the February hearing on LB 1203, the interim study drew a frosty response from State Sen. Danielle Conrad of Lincoln, an attorney and member of the Government, Military and Veterans Affairs Committee that Brewer chairs.
Conrad repeatedly pushed back and said she was “very skeptical” that new regulations on political speech are needed.
“I think it runs afoul of the First Amendment,” Conrad said. “If not legally, I think it has a chilling effect even on speech that we find confusing or confounding or distasteful or misleading.”
What other states have done
Adam Kuckuk and Ben Williams of the National Conference of State Legislatures, a bipartisan organization that assists lawmakers and their staffs nationwide and tracks legislation, said at least 19 states, across the political spectrum, have explicitly addressed AI and political messaging in legislation.
Kuckuk said AI has been the “hottest topic” in the past two years but is “only the latest wrinkle in a long line of technological changes that have impacted state campaigns and elections.” For example, he said, other changes have included television, social media and cryptocurrency campaign contributions.
Many of those laws use different terms for generative AI — such as “synthetic media,” “deceptive media” or “deepfakes” — but there is no single accepted definition, even among researchers. No state has completely banned deceptive AI political messaging, either.
Instead, Kuckuk said, states either prohibit deceptively created messages within a certain window before an election or require a disclosure that the material is AI-generated.
Williams said some states impose civil fines, ranging from the nation’s lowest at $500 on the first violation in Michigan to the highest at $10,000 on the second violation in Minnesota. Several states, such as New Mexico and Utah, fine offenders $1,000 for each violation, and Colorado imposes a penalty of 10% of the dollar amount used to promote a deepfake.
Other states impose criminal penalties, such as up to one year in prison in Texas or Mississippi, rising to five years in Mississippi if the message is intended to cause violence.
Texas lawmakers were the first to pass an AI law in 2019, Williams said, defining a “deepfake” as a video alone, not images or audio, and banning such content within 30 days before an election.
Minnesota has a “two-strike” rule before prison time is possible. Like Arizona, it prohibits deepfakes within 90 days of an election unless there is a clear disclosure that the content is AI-generated.
Kuckuk said some states, such as Colorado, require disclosure in a digital metadata file rather than in the message itself. The metadata must stipulate who created the content, when it was created and edited, and that it is AI-based.
Congress has yet to pass legislation but has considered bills that would require a federal agency to monitor AI use, and the Federal Election Commission is considering new regulations, according to the NCSL representatives.
A ‘reflexive’ approach?
Conrad said she sees definitional or enforcement problems in many of the laws and said political satire, impersonation and the cherry-picking of someone’s words have existed “since the dawn of politics.”
“I’m just concerned about a reflexive approach,” Conrad said, pointing to potential new penalties.
Cavanaugh, in response, said he doesn’t want to be reflexive but rather thoughtful and deliberate, deciding definitively whether anything should be done.
“I think getting out front and having the conversation as we go, before it actually comes up, is probably the smarter thing to do,” Cavanaugh said.
As he testified in February, Jim Timm, president and executive director of the Nebraska Broadcasters Association, asked that any legislation clearly exempt broadcasters from liability. He noted that under federal law, organizations must run political advertising regardless of content.
Timm compared the problem to a fever or a tweaked knee, which can be checked quickly with a “trusty thermometer” or a doctor’s visit. For AI-generated content, he said, there is no such detector.
“We have no magical powers to make those determinations,” Timm said.
The Nebraska Accountability and Disclosure Commission, which handles certain complaints against elected officials or candidates and monitors campaign finance, opposed Cavanaugh’s bill in February, stating it was outside the commission’s duties.
“The NADC is not tasked with trying to judge the truth or falsity of claims made in the heat of a campaign,” David Hunter, NADC executive director, testified at the time. “We are not equipped to be fact checkers.”
Brewer, who attended a forum on AI that Civic Nebraska hosted in February, said again Thursday that some uses of AI “could be pretty scary if they come true.” At the February event, professors, researchers and a county election commissioner explained how AI could be used to sow misinformation or disinformation about elections generally, and described how terrorists had used AI.
Other effects could include false advertising that voting deadlines or polling places have changed or, as happened in New Hampshire’s 2024 primary, an AI-generated voice of the president telling voters to “save” their vote and stay home until the general election.
“Now, a lot of the information was pretty conceptual, but today’s concept can be tomorrow’s problems,” Brewer said at Thursday’s hearing.
‘Not necessarily insidious’
Cavanaugh pointed to an independent project, published in the Nebraska Examiner, that used AI last December to replicate the voices of seven state senators, including Conrad’s. Cavanaugh said the results of that experiment were “unsettling.”
He’s worried that AI will get so good that people will struggle to tell what’s real, especially in contentious elections when candidates take issue even with “half-truths.”
Spike Eickholt, an attorney and lobbyist for the ACLU of Nebraska, urged lawmakers not to create any new crimes. He also asked lawmakers to be cautious because most AI-related legislation has been signed into law without being tested in the courts.
“They may not be constitutional, may be suspect, who knows,” Eickholt said.
Eickholt questioned where the line would be drawn with other software that edits images, audio or video, such as photo-editing tools or filters, whose use doesn’t need to be disclosed. Candidates might also use AI to bolster a shoestring campaign, or an office might use it to respond to constituents or research issues, uses that Eickholt said aren’t deceptive.
Courts don’t protect false or defamatory statements, Eickholt noted, pointing to existing state laws that prohibit impersonating a public servant, theft by deception or fraud, election falsification, voter registration fraud or interference, and electioneering.
“The technology is not necessarily insidious. It’s not necessarily horrible,” Eickholt said. “We shouldn’t always just fear everything because it’s new.”