Michigan lawmakers want to outlaw the spread of 'deepfake' sexual content

Michigan lawmakers want to outlaw the spread of deepfake sexual content, looking to join dozens of other states that have taken similar action against what experts say is a rising — and rapidly evolving — threat.

A deepfake is a video, image or audio recording that seems real but has been manipulated using artificial intelligence (AI), making someone appear to say or do something they didn't actually say or do. The vast majority of online deepfake videos are pornographic in nature and nearly all of the content targets women.

In June, the House voted overwhelmingly to create both civil and criminal penalties for creating and spreading AI-generated pornography depicting a person’s likeness without their consent. Backers of the proposal say the legislation is needed to protect individuals from potential abuse as AI image generators become more advanced.

“The passage of this bill was a crucial step in the effort to protect the people of Michigan from the abusive and exploitative act of creating or sharing nonconsensual intimate deepfakes," Rep. Penelope Tsernoglou, D-East Lansing, said after the legislation advanced through the House.

House Bills 5569 and 5570 banning deepfakes have advanced to the Michigan Senate, where they await consideration.

What are the harms of deepfakes?

Like any technology, artificial intelligence can be harnessed for good and bad, experts say.

In the hands of bad faith actors, it can be used for nefarious purposes. An analysis of 95,820 deepfake videos and other online content found that 99% of deepfake pornography targets women, and the vast majority of those featured work in the entertainment industry, according to a 2023 report by Home Security Heroes. The cybersecurity company found that it takes less than 25 minutes to create a 60-second deepfake pornographic video. It costs nothing.

"There is a concern that as the technology becomes more accessible to the public, it becomes easier for any average individual to create this type of content. In theory, we're going to see more deepfakes, more pornographic deepfakes, and that's going to be causing more harm. So, this is a very timely concern," said Marc Berkman, CEO of the California-based Organization for Social Media Safety.

Nonconsensual content has targeted women business owners to harm their reputations and finances, women in child custody battles, women journalists to deter their work, and middle and high school girls, Berkman said. High-profile celebrities have fallen victim, too. Earlier this year, sexually explicit AI-generated content of Taylor Swift and Megan Thee Stallion circulated online.

"When it comes to deepfake technology, all you have to do is exist and it can happen," said Uldouz Wallace, actress and founder of the nonprofit Foundation Ra that helps people take down nonconsensual content, like deepfakes, revenge porn, sextortion, hacking and leaking.

Last year alone, Wallace said her organization took down more than 200,000 images and videos, most of which were deepfakes. She created her nonprofit in 2021 after her private images and videos, along with those of celebrities like Jennifer Lawrence and Kate Upton, were hacked and leaked from iCloud in 2014.

According to the AI Incident Database — which tracks instances where AI systems allegedly harm or nearly harm people, property or the environment — 2023 saw an increase in AI-generated child sexual abuse materials. Last year, students at a New Jersey high school used AI to create fake nude images of female classmates from real pictures, according to reports.

"There's a lot of emotional and psychological harms. There's definitely reputational harms and definitely economic harms as well. Most of the time, perpetrators are doing it for financial disruption," said Amna Batool, a PhD candidate at the University of Michigan School of Information, whose research focuses on online privacy and security for women.

What would the proposed Michigan bills do?

Researchers say that the proposed Michigan legislation is timely and crucial, but there is room for improvement.

Deepfakes, under the bills' definition, are videos, photos, images or audio recordings produced in a manner "substantially dependent on technical means" that depict an individual who is identifiable by their likeness or personal information. Deepfakes are considered "so realistic that a reasonable person would believe it depicts speech or conduct of a depicted individual."

Under the bills, victims would be able to bring civil action against an individual who violates the ban, including for damages, injunctions and temporary restraining orders. Criminal penalties could include fines and potential jail time, ranging from up to a year to up to three years depending on whether the individual spreading the images intends to profit off them, or harass or extort the victim.

The bills cracking down on sexual deepfakes are part of a legislative push to regulate AI in Michigan — last year, Gov. Gretchen Whitmer also signed bills banning the use of AI in political campaign materials without clear labels.

"Legislation is so important, because if there's legislation, then there will be this barrier to block it from getting uploaded in the first place or created in the first place, or these predators will be too scared to even try to do something like that, because they know that they can be held accountable," Wallace said.

Qiwei Li, another PhD student at the University of Michigan School of Information, said although the proposed bills are a good start, there is still room for potential improvements. The legislation focuses on individual perpetrators, but not on the rest of the broader ecosystem, she said, including platforms, such as Reddit and X, formerly known as Twitter, where this content is posted. It doesn't address the applications and websites used to create and profit from the content, she said.

"It's a good start, but there's still a long way to go," said Li, who has done research on the dissemination of nonconsensual intimate media, like explicit deepfakes, and the role of technology.

How can people protect themselves and their children?

Now, it takes only dozens to hundreds of images, videos and audio samples to create a realistic deepfake, whereas a few years ago the technology required thousands of pieces of such content, said Arun Ross, a professor in Michigan State University's Department of Computer Science and Engineering.

"The more data you have of an individual, the more realistic the deepfake can be," Ross said.

There isn't an exact guidebook for how people can guard themselves against the harms of deepfakes, and experts say there shouldn't be a wave of panic. Instead, they emphasize the importance of raising awareness of the issue and taking precautions when posting personal content publicly on social media.

Here are some general best practices experts and the nonprofit National Cybersecurity Alliance suggest to guard against deepfakes:

  • When sharing personal content, be aware of who can view it.

  • Understand the privacy settings of devices and social media platforms.

  • Use multi-factor authentication, strong passwords and keep software updated.

  • Be wary of phishing scams via email, texts, phone calls and other means.

  • Report deepfake content to the platform hosting the content and to federal law enforcement.

USA TODAY contributed to this report.

Contact Nushrat Rahman: nrahman@freepress.com. Follow her on X: @NushratR.

Contact Arpan Lobo: alobo@freepress.com. Follow him on X: @arpanlobo.

This article originally appeared on Detroit Free Press: Michigan lawmakers want to outlaw deepfake sexual content: What it is
