Opinion: We need to get serious about election deepfakes

When President Biden bowed out of the presidential race in July, questions immediately turned to who would run in his place. But I was captivated by another aspect of the development.

Just as the president was sharing his announcement on social media, Sen. Chris Coons (D-Del.), one of his closest confidants, was speaking on a high-powered panel about artificial intelligence and disinformation. And before reporters or audience members could react to the historic moment playing out before their eyes, they had to determine that the letter appearing on their screens, purportedly from Biden and announcing his withdrawal, was not a deepfake.

It comes as no surprise to me that we can no longer be certain that words attributed to a candidate are real. As a computer engineer who has studied deepfakes for more than 15 years, I was encouraged that the conference attendees did not instantly accept the veracity of what they were reading, even though Biden's withdrawal had been foreshadowed for weeks.

But this was a group of trained skeptics. Average Americans are not nearly as likely to probe what they see on social media or elsewhere online.

Deepfake-enabled disinformation has spread, and will continue to spread, during this election in ways we could not have imagined even a few years ago, just as it has in recent elections from South Africa to India to Moldova. The release of fake audio of Vice President Kamala Harris shortly after she emerged as the presumptive Democratic nominee only serves as further evidence that it can happen here too.

Unfortunately, every entity in the U.S. with responsibility for combating deepfakes — from tech companies to regulators to policymakers — has been taking a band-aid approach to this significant and pernicious threat.

The commercial software I use to train my own deepfake detection system has recently begun blocking attempts to build or refine models based on audio or video of high-profile political figures. While I understand the software company’s logic, I believe that this meager security measure will backfire. Bad actors are experts at getting around simple roadblocks; they will simply find other software to use. Meanwhile, researchers like me will be left unable to conduct the very analysis needed to thwart them.

If this were simply a matter of one software company implementing well-meaning but ultimately short-sighted safeguards, I’d still be frustrated, but not overly concerned. But the company’s stopgap measure was put in place in the absence of a coordinated, multipronged plan for rooting out political deepfakes in the U.S. We need that plan now.

Disinformation peddlers embed subtle distortions in their forgeries to fool detection systems. Companies that produce generative AI software need to employ more sophisticated detectors to catch them. These companies also need to roll out more robust digital watermarking: the process of embedding markers into every file so that its origins, such as where and on what platform it was created, can be traced. Tech companies have the know-how to launch these mitigation tools, and they must do so now.
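To make the traceability idea concrete: watermarking proper hides the marker inside the audio or video signal itself, but the goal can be illustrated with a simpler signed provenance record attached to a generated file. The sketch below is a minimal illustration, not any company's actual scheme; the field names and the make_manifest and verify_manifest helpers are hypothetical, and a production system (such as one following the C2PA standard) would use public-key signatures over richer metadata.

```python
import base64
import hashlib
import hmac
import json

def make_manifest(file_bytes: bytes, platform: str, generator: str, key: bytes) -> str:
    """Attach a signed provenance record to a generated file (illustrative only)."""
    record = {
        "sha256": hashlib.sha256(file_bytes).hexdigest(),  # fingerprint of the file
        "platform": platform,                              # where it was created
        "generator": generator,                            # which model produced it
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Sign the record so tampering with the file or the record is detectable.
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + tag

def verify_manifest(file_bytes: bytes, manifest: str, key: bytes) -> bool:
    """Return True only if the record is authentic and matches the file."""
    encoded, tag = manifest.rsplit(".", 1)
    payload = base64.b64decode(encoded)
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # the record itself was altered
    record = json.loads(payload)
    return record["sha256"] == hashlib.sha256(file_bytes).hexdigest()

if __name__ == "__main__":
    key = b"demo-signing-key"  # a real scheme would use asymmetric signatures
    audio = b"...synthetic audio bytes..."
    manifest = make_manifest(audio, "example-studio", "tts-demo", key)
    print(verify_manifest(audio, manifest, key))         # True
    print(verify_manifest(audio + b"x", manifest, key))  # False: file changed
```

The design choice the sketch embodies is the one that matters for policy: provenance travels with the file, and any alteration, to the file or to the record, is detectable by anyone who can verify the signature.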

Social media companies need to deploy these features to more effectively identify and remove known sources of disinformation, and federal regulators need to hold them accountable for doing so. Congress, too, needs to treat this like the emergency it is and act swiftly. The European Union recently enacted artificial intelligence laws mandating that all deepfakes disseminated for any purpose be identified as such. Crucially, the new law also requires that AI companies employ watermarking or other identification methods to ensure their output can be detected as AI-generated.

Here in the U.S., a bipartisan bill in the Senate would ban AI deception in political ads. That’s a start, but an incredibly tentative one, given that deepfakes spread most quickly on social media. In April, Sen. Josh Hawley (R-Mo.), who cosponsored the legislation with Sen. Amy Klobuchar (D-Minn.) and others, chastised his colleagues for failing to move the bill forward.

“The dangers of this technology without guardrails and without safety features are becoming painfully, painfully apparent,” Hawley said. “And I think the question now is: Are we going to have to watch some catastrophe unfold?”

He’s right.

Finally, the federal government needs to launch an extensive awareness campaign to educate the public on the pervasiveness of deepfakes and the resources available to assess the authenticity of the audio and video files we see online. The more sophisticated deepfakes get, the more the general public needs to understand what clues to look for.

Deepfakes will circulate in these final months of the 2024 election like never before; that is a given. And while not every measure to combat them can be deployed before Nov. 5, many measures — like more robust detection and tracing, due diligence by social media companies and public information campaigns — still can.

Piecemeal end-user approaches to rooting out deepfakes may play well for public relations purposes or to ease tech executives’ consciences. But a more thoughtful, all-fronts approach will be needed to save our democracy.

Hafiz Malik is a professor of electrical and computer engineering at the University of Michigan-Dearborn.
