Indiana law regulating AI usage in elections takes effect

Indiana Statehouse (IL file photo)

Two days before the New Hampshire presidential primary, calls went out to thousands of likely Democratic voters with President Joe Biden’s voice telling them that they would be ineligible to vote in the general election if they participated in the primary election.

The calls weren’t real. Instead, a so-called “deep fake” had been created of Biden’s voice using generative artificial intelligence. A lawsuit filed by the national and a local chapter of the League of Women Voters alleges that Steve Kramer, Lingo Telecom LLC and Life Corp. “used illegal AI-generated robocalls to discourage voters from participating.”

Lawmakers in Indiana are taking action to try to prevent similar problems here by regulating the way the rapidly changing technology is used in political advertising. The state joins nearly 40 others that have passed or are considering laws to regulate how artificial intelligence is used in elections, according to the advocacy group Public Citizen.

The Indiana legislation, House Bill 1133, was signed into law by Gov. Eric Holcomb on Tuesday. Authored by Rep. Julie Olthoff, R-Crown Point, the law requires that candidates include a disclaimer when political advertising includes usage of generative AI, and it creates a path for legal action when candidates believe they are misrepresented.

In addition to more commonly used AI-based writing tools like ChatGPT, other forms of generative AI allow users to create realistic images, videos and voice modulations. As the technology becomes increasingly capable of mimicking real people and more accessible, candidates, consultants and voting advocacy groups have become increasingly worried about the impacts.

“Numerous members of my caucus came to me with concerns, wanting to know what we could do to protect them,” Megan Ruddie, director of the Indiana House Democratic Caucus, told IBJ. Prior to her work at the Indiana Statehouse, Ruddie spent 15 years working in not-for-profit organizations and policy sectors.

Ruddie said she and others would have liked to see the law include broader protections for everyday Hoosiers from AI misrepresentations. But she said the law is a bipartisan step in the right direction when it comes to election integrity. And it could not have come at a better time.

In about 2018, Ruddie began hearing rumblings among political insiders about the future impact of AI. At that time, the difference between real media and fabricated versions was more obvious. Even just a year ago, she said, the technology was less sophisticated. But today, mimicking voices and images is easier to do and harder to detect.

“I do think that this is probably one of the first cycles that, at a local level, it would be an attainable thing,” Ruddie said.

Most political consultants would stay away from generative AI—even without regulations, experts say. Eric Cullen, managing partner at Republican-focused firm Bullhorn Communications, told IBJ that generally, no reputable campaign consultant would use generative AI in advertising.

“We are an industry that is built on social and civic trust, even when campaigns get ugly,” he said.

The trade organization American Association of Political Consultants condemned the use of generative AI in media in May 2023. In a unanimous vote, the board agreed that its use is “a dramatically different and dangerous threat to democracy,” President R. Rebecca Donatelli wrote in a statement at the time.

But Cullen said it’s beneficial as the technology advances to have some sort of mechanism in law to hold bad actors accountable. That could especially be true in local races, he said, as AI tools become more accessible.

And his concern is not just what AI use would mean in a particular race but how it could harm the political advertising industry by causing an increased distrust in paid messaging—altered or not.

The new law does not ban all usage of generative AI in election media. Instead, it requires a disclaimer. Still, Cullen and Ruddie agreed that the law could dissuade a campaign from using AI-generated content because the disclosure could cause voters to distrust it.

“When there are disclosures on things like this, … whether it be on pieces of literature or whether it be you know, as a part of an ad, I believe that voters pay attention to those things,” Ruddie told IBJ.

Under the law, a candidate represented in AI-generated media could bring a civil action against the person who paid for the material, the person who sponsored it and the person who disseminated it.

The law went into effect upon passage.

