Commentary
Finding right solution to regulating AI in campaigns will require ongoing effort

Feb 27, 2024 | 8:30 am ET
By Randy Stapilus
Half of states have considered regulating the use of AI in campaign materials over the past year. (Getty Images)

There’s a growing and widespread consensus that artificial intelligence technology needs legal guardrails, and political ads and communications are one of the prime places lawmakers are looking to place them. 

As Kathy Wai of the Oregon Secretary of State’s Office put it in recent legislative testimony: “Campaigns can easily create high-quality, convincing AI generated content in the form of images, voices, deepfakes and other forms of (AI). AI is an evolving threat in our highly charged mis, dis and mal-information environment.”

Effective solutions, though, will not come easily. Getting the details right and crafting rules strong enough to matter can be tricky, and it will take a persistent, ongoing effort. 

In Oregon, Senate Bill 1571 would require disclosure of the use of AI to create a false impression in campaign ads or other materials. It came from Sen. Aaron Woods, D-Wilsonville, but also has backing from 27 other legislators in both parties and across the philosophical spectrum. It passed the Senate on Monday and goes to the House for consideration. 

The bill would carry teeth: Campaigns caught using AI and not disclosing it could face a fine up to $10,000 for each violation. It would exempt news media and some satirical publications from the requirements, and would allow the secretary of state to draft rules to put enforcement into effect.

But even if campaigns disclose the use of AI in any campaign material, any ad, flyer or other message still could easily lead to false impressions – usually about the subject of an attack. And with AI technology becoming so commonplace nationally, it’s likely to start showing up in small and local political activities before long. 

Oregon isn’t the first state to consider regulating the use of AI in campaigns. Quite a few states already have entered the fray: Half of all the states considered AI-related legislation in last year’s session, and they’ve adopted varying approaches. 

A law passed in Texas in 2019 bans deepfakes within 30 days of an election if the purpose is “to injure a candidate or influence the result of an election.” California that year – and again in 2022 – passed a roughly similar measure with a 60-day period. Washington state last year added a law banning AI messages with an “appearance, speech or conduct that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology” that give a false impression of a candidate or issue.

The Oregon bill defines a false impression as, “A fundamentally different understanding or impression than a reasonable person would have from the unaltered, original version of the image, audio recording or video recording.” That still might afford significant wiggle room in specific cases if one got to court. 

There’s also a reasonable question in most of these efforts about how effective those rules would be. The required disclosure in the Oregon bill, for example, might translate into a small-print notice that would be ignored by viewers or readers emotionally swept away by powerful images.

In the Senate Rules Committee hearing, almost all the testimony on SB 1571 was favorable. A major exception came from Emily Hawley of the American Civil Liberties Union, who said, “We appreciate the scale of these potential electoral risks but believe this bill as written would likely be challenged and overturned.” 

Oregon law already has long-standing limits on speech in areas such as libel, fraud in some cases, soliciting, perjury and conspiracy, but Hawley said that while the new bill covers some of that territory, it doesn’t “proscribe the speech only when it actually or necessarily produces the harm.”

AI is evolving so fast – as are its uses – that it has become hard to define. That doesn’t mean Oregon legislators should wait to address it, but it means they ought to set up an ongoing review – probably a persistent interim committee – to monitor its evolution and track the ways laws might usefully address it. They should anticipate this will be an ongoing work area for years to come. 

In arguing for the current Oregon bill, Woods, the sponsor, said, “The bill will build awareness.” That it may do, whether or not it passes, since more voters may be alerted to some of the new ways candidates or causes may try to deceive them. And that would be a significant plus all by itself, whatever legal challenges emerge down the road.