Tech giants are trying to stop deepfakes from influencing the 2024 election. Good luck

With generative AI tools now available to almost anyone, next year’s election season may turn our information space into a hall of mirrors in which no one is sure what content is real and what is AI-generated. If political actors weaponized social media in 2016 and 2020, they may go full nuclear with deepfakes in 2024.

Americans are already dreading that possibility. An August Morning Consult/Axios poll showed that more than half of Americans believe misinformation will help decide the winner of the 2024 presidential election, and more than a third said that artificial intelligence will degrade their trust in the outcome of U.S. elections.

As AI booms and the hotly contested 2024 election looms, pressure is increasing on both Congress and tech companies to act. As Congress held yet another hearing on the topic this week, Meta and Microsoft announced new steps to confront deepfakes and other forms of AI truth manipulation.

Meta adopted new rules on how AI-generated political content (imagery, video, or audio) can be presented on Facebook and Instagram. The company says it’s requiring political campaigns and interest groups to put a disclosure label on any political or social cause ads created using generative AI tools. For example, if the Trump campaign used a generative AI tool to create a fictitious video of President Joe Biden falling during a campaign stop, it would have to disclose that fact with the video.

The policy, which kicks in at the start of next year, applies to all political advertisers around the world. Meta wasn’t the first mover in the advertising space: Google said in September that advertisers would be required to label any AI-generated ads that run on YouTube and Google platforms.

Meta also said it’s barring advertisers from using its own generative AI tools to create ads in sensitive (or regulated) categories such as housing, employment, credit, health, pharmaceuticals, or financial services. Ads in those categories containing content generated using third-party tools such as DALL-E or Midjourney will require a disclosure label.

Social platforms have several options for dealing with deepfakes; the challenge is finding an approach that minimizes harm to both free speech and election integrity. “[There] is the philosophical question on if these deepfakes should be removed from the platform,” says Katie Harbath, former Facebook public policy director. “Or is it enough to have them fact-checked, labeled, and their distribution reduced? That’s more of a First Amendment, freedom-of-expression value question.” Meta chose a light touch.

And the company isn’t requiring all ads containing AI-generated content to include a label. No label is required when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad, Harbath points out.

Robert Weissman of consumer advocacy group Public Citizen says his organization would rather have seen an outright ban on political deepfake ads, but he supports Meta’s decision. “Applying labels in this space could massively mitigate the harm,” he says.

What Public Citizen faults Meta for is confining the labels to deepfakes in ads. Some of the most influential and viral political deepfakes might show up as regular organic posts, the group says. Weissman says organic deepfakes are more likely to impact elections “precisely because they will carry more of an air of authenticity.”

(Note that Meta’s manipulated media policy, covering all Facebook and Instagram content, “aims to remove” AI-generated video in which the subject of the video “said words that they did not say.” So the above example of a deepfake showing Biden falling during a campaign stop would apparently be okay as an organic post but would require a label as an ad.)

Detecting political deepfakes among the billions of organic posts on the platform is another story.

“There’s the question of how good the technology is to proactively detect deepfakes,” Harbath says. “That’s not just a Meta problem but an industry-wide one that they’ve been working on for years.” Meta relies heavily on AI systems to detect misinformation and other harmful content, but those systems have struggled to identify it within some formats, like memes.

Meta didn’t respond to requests for an interview.

While Meta tries to protect voters from being misled by AI, Microsoft and others are focusing on giving campaigns tools to control their content and likeness. Microsoft announced a new digital watermarking tool that lets creators (campaigns, perhaps) digitally label their content to establish when, how, why, and by whom it was made. Then, if another party tries to co-opt the content to mislead (perhaps by altering or mislabeling it), the image or video’s true origin and purpose can be checked against cryptographic data embedded in the file. The digital watermarking service will launch this spring to a small group of users that will include political campaigns.

While campaigns don’t typically use Microsoft tools to create ad content, they do use Microsoft productivity and security products. It’s more likely that a campaign would use Adobe products to create ad content. And Adobe, too, offers an open-source provenance tool, called Content Credentials, that can be embedded into content-creation tools and platforms. The New York Times and The Wall Street Journal will use it to authenticate news stories. Nvidia and Stability AI (an AI image generator) will use it to authenticate generated content. Camera companies such as Nikon and Leica will use it to ID images at the device level.
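Both efforts rest on the same basic recipe: hash the media file, attach a small manifest describing who made it, with what tool, and when, and sign the whole thing so that any later tampering breaks the check. For the technically curious, the sketch below illustrates that idea in Python. It is a simplified illustration only, not the actual implementation either company uses (Content Credentials is built on the open C2PA standard, which Microsoft also backs); the campaign name, tool name, and key handling are placeholders.

```python
# Illustrative sketch of signed content provenance -- not Microsoft's or Adobe's
# real code. A signed manifest binds claims (creator, tool, timestamp) to a hash
# of the media bytes; altering either the file or the claims breaks verification.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a provenance manifest: who made the content, with what, and when."""
    return {
        "creator": creator,
        "tool": tool,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }


def sign_manifest(manifest: dict, private_key: Ed25519PrivateKey) -> bytes:
    """The publisher signs the canonicalized manifest with its private key."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return private_key.sign(payload)


def verify(media_bytes: bytes, manifest: dict, signature: bytes,
           public_key: Ed25519PublicKey) -> bool:
    """Anyone with the publisher's public key can check two things:
    the manifest wasn't forged, and the file still matches the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False  # manifest was altered or signed by someone else
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_sha256"]


# Example: a campaign signs an ad; a doctored copy fails verification.
key = Ed25519PrivateKey.generate()
original = b"<video bytes>"
manifest = make_manifest(original, creator="Example Campaign", tool="Example Editor")
sig = sign_manifest(manifest, key)

print(verify(original, manifest, sig, key.public_key()))                    # True
print(verify(b"<doctored video bytes>", manifest, sig, key.public_key()))  # False
```

In practice, the C2PA standard embeds the signed manifest inside the media file itself and can chain a new manifest onto each editing step, but the core check is the same: if either the file or the claims about it change, the math stops adding up.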

The U.S. political scene has already seen the beginnings of deepfakery. In June the Ron DeSantis campaign was accused of spreading AI-generated images of former President Trump hugging Dr. Anthony Fauci (the former White House chief medical adviser) within a video circulated online. In April the Republican National Committee released a video ad with AI-generated images of a dystopian future America in which Biden has been reelected president (that ad, however, disclosed the use of AI).

The Meta and Microsoft announcements coincide with a high-profile hearing on deepfakes held by the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation. Lawmakers have spent much of this year cramming on AI in preparation for passing regulations on its responsible use. The issue of AI-generated political disinformation has risen to the top of many lawmakers’ priority lists because of the plain fact that deepfakes could be used directly against them in a reelection bid, Weissman contends.

Microsoft, Adobe, and other tech companies have thrown their support behind a bipartisan bill called the Protect Elections From Deceptive AI Act, introduced by Senators Amy Klobuchar, Josh Hawley, Chris Coons, and Susan Collins. The bill, which establishes the right of a candidate harmed by a deepfake to sue for injunctive relief and damages, is picking up steam at the Capitol, sources say. But with a highly partisan and dysfunctional Congress, and another government shutdown looming, nothing is certain.
