Researchers find some worry, some hope for AI in democracy

By Jennifer Smith
Photo courtesy of CommonWealth

WHEN IT COMES to the 2024 election season, the democratic sky didn't fall because of artificial intelligence, Harvard researchers say, and candidates used the technology to reach voters in helpful new ways. But, they warn, it is still worth keeping a wary eye on some of AI's most insidious possible applications.

New Englanders may recall the use of artificial intelligence to mimic President Joe Biden’s voice to dissuade voters from participating in the New Hampshire Democratic primary. More than a dozen states, including Massachusetts, have adopted or considered legislation that would ban distribution of deepfakes created by artificial intelligence that falsely depict situations, actions, or speech near an active election.

Those risks are serious, but focusing only on AI-generated misinformation threatens to crowd out conversations about where AI has been most impactful in elections: as a mass communication tool, according to Bruce Schneier, a lecturer in public policy at the Harvard Kennedy School, and Nathan Sanders of Harvard's Berkman Klein Center for Internet & Society.

In an analysis of elections in 2024, Schneier and Sanders argue that the dreaded “death of truth” did not materialize from deepfakes and AI-assisted misinformation. 

Incidents like the Biden deepfake are “significant,” Schneier said on an episode of The Codcast. However, research found “that they don’t seem to have been determinative in any election that we’re aware of around the world. And that’s why we say that the apocalypse hasn’t happened. But there are so many other examples of AI being used in the wild that we thought were interesting and called out – a large category of uses in political campaigning, by campaigns that are starting to use AI to support the roles of campaign staff and volunteers and doing things like canvassing, speaking with voters, generating campaign materials in the US and around the world.”

Schneier and Sanders, who are writing a book on AI and democracy expected to be out next fall, described themselves as artificial intelligence “realists.” That is, they are very concerned about the applications of AI for disinformation, misinformation, and propaganda online, but they also view it as a transformative tool with plenty of applications outside of those nefarious uses.

In their definition of AI, “what we are using is a sort of a broad basket of technologies that largely mimic human thought,” Schneier said. This can include generative AI and large language models like ChatGPT that create large amounts of text based on existing texts, he noted, but it also includes chess-playing AI, medical diagnosis AI, and weather-predicting AI.

“These are all tasks that, until now, have been tasks that only humans can do. They’re thinking tasks,” Schneier said. “We are really reaching the point where a lot of these AI systems are able to replace humans in a wide variety of tasks. Sometimes good, sometimes bad.”

Artificial intelligence is increasingly core to a growing industry of online tools dedicated to tracking legislation and explaining maneuverings on Beacon Hill. InstaTrac uses talk-to-text features to produce searchable transcripts of hearings and sessions. Legislata has rolled out AI-generated transcripts and summaries of public hearings from state agencies and municipalities. Sanders co-founded MAPLE, or the Massachusetts Platform for Legislative Engagement, which now explains how it uses AI to offer bill summaries and sort through dense legislation.

The technology has a trust problem, Schneier and Sanders have argued for some time, in part because of the corporate ownership of most of the industry’s flagship AI products. If there are errors or biases built into a black box system, where the inner workings aren’t visible to users, those users can’t be sure they should trust all the information that comes out.

Using corporate AI models to do content moderation, for instance, has not inspired confidence in Sanders. 

“We’ve seen some really prominent examples of content moderation AI tools failing on platforms like Facebook, and I don’t think there would be trust in applying those types of tools to democratic processes like engagement between citizens and legislatures,” he said. “I don’t think we’d want to use those AI tools as they exist today for moderating a discussion in a public forum like a town hall.”

One solution is government-backed development of AI systems open to public scrutiny. Massachusetts lawmakers recently approved $100 million to start a Massachusetts AI Hub.

“I think we could build systems that can achieve much greater trust if they’re developed with greater transparency, if they’re developed with meaningful public participation and public control of the design of those systems and how they’re used,” Sanders said. “And I think it’s necessary that we have an alternative to corporate controlled AI to do that.”

There are some seemingly straightforward democratizing powers in widely available AI, both researchers note. Real-time translation tools, AI-drafted campaign literature that a small staff can review, and new ways of communicating with potential constituents could all help new candidates break into political systems with often overwhelming incumbency advantages, as long as the translations and information are accurate.

“The thing we find that we’re constantly hitting against is the notion of power,” Schneier said, “and whether AI serves existing power, thereby increasing already great power imbalances, or whether it can serve more distributed power, thereby reducing power imbalances. … To the extent it allows more people to run for local office, that’s phenomenal for democracy. But if it entrenches the existing power, whether it’s corporate power or government power, then that’s bad for democracy.”

For more with Bruce Schneier and Nathan Sanders – on pre-AI misinformation, the role of regulators, and how people understand human versus AI political communication – listen to The Codcast on Apple Podcasts, Spotify, or wherever you listen to podcasts.