New law makes it a crime to distribute fake nude photos generated by AI

The damage caused by one fake nude photo generated by artificial intelligence often can’t be undone.

Teacher Angela Tipton learned that the hard way about a year ago when some Indianapolis students at Eastwood Middle School in Washington Township used AI to edit her face onto a naked body and sent it electronically to other students.

Even though it wasn’t her body in the photo, it didn’t stop the horror of the situation from taking root.

“My field is one of those places where it does matter what people think about your morality and your reputation,” said Tipton, who now works for Indianapolis Public Schools. “People want to trust the person taking care of their children, and that’s not a good look.”

Because there was no state law in place at the time that addressed the situation, the students found responsible couldn’t be held criminally liable for their actions, Tipton said, and she was expected to continue working with them.

But she has hope that a new law signed by Gov. Eric Holcomb earlier this month will act as a deterrent and hold future perpetrators responsible.

The new provision in House Enrolled Act 1047 makes the distribution of unauthorized and undisclosed “intimate” images generated or altered by AI a Class A misdemeanor, punishable by up to a year in jail and a fine of up to $5,000.

“With the growing popularity of AI, creating and distributing malicious and exploitative images and videos of others is easier than ever,” Rep. Sharon Negele, an Attica Republican and sponsor of the legislation, said in written remarks.

“Women are the primary target of deepfake pornography, and up until now they’ve had no recourse. And to make it worse, they can face years of embarrassment, as the content is nearly impossible to remove from the web,” she added. “That’s why I had to take action, and I’m incredibly grateful to see this legislation be signed into law.”

The Indiana Prosecuting Attorneys Council supported the bill, noting the importance of doing something to address the rise in AI fakes.

Spokeswoman Whitney Riggs said the law “recognizes the impact of artificially created intimate images and provides a previously absent remedy for this destructive action.”

Sen. Mike Bohacek, R-Michiana Shores, who added an amendment to the bill, said victims should be able to seek civil damages and criminal penalties over the distribution of such images.

Background of AI-generated images

Despite its recent uptick in use by the general public, artificial intelligence is not a new concept. Development of the technology dates back to the 1950s.

As with any form of technology, AI gradually evolved into a widely available and effective tool.

Now, perhaps it is most recognized through the chatbot tool ChatGPT and social media deepfakes, like one of Pope Francis wearing a large, white puffer coat.

Tim Sewell, co-founder of Reveal Risk, a cybersecurity company in Carmel, says AI technology is so top of mind in society right now for three reasons: the accessibility of the technology, simpler interfaces and its coverage in the media.

“Everybody has become aware, even people that might not otherwise have known about it are now being exposed to it daily,” he said.

Shelley Jackson, a Carmel-based partner at Krieg Devault LLP who advises businesses on developing and understanding artificial intelligence for their practices, said the explosion of the technology has driven an urgency to use and regulate it.

Earlier this year, AI-generated, explicit photos of singer Taylor Swift appeared on X (formerly known as Twitter).

The images, according to the research firm Graphika, stemmed from the message board 4chan, and were created as a game of sorts.

Swift’s fans, in turn, flooded X with posts in an effort to stop the images from spreading.

Tipton, the teacher who was a victim of an explicit AI image, understands the outrage and appreciates Indiana’s new law aimed at preventing the technology’s misuse.

“It doesn’t fix anything that’s happened to … (me) or the impact that the situation had,” she said. “But I do think if even sharing my story as loudly as I’ve tried, to help decision makers, I think that’s something that, it’s satisfying that something good can come out of a really crappy situation.”
