On the eve of New Hampshire’s primary election, a flood of robocalls urged Democratic voters to sit out the state’s presidential primary, undercutting a write-in campaign supporting President Joe Biden. An AI-generated voice on the line matched Biden’s cadence and signature catchphrase (“malarkey!”) with uncanny accuracy. From that call, to fake AI imagery envisioning a cascade of calamities under Biden’s watch, to AI audio deepfakes of a leading Slovak candidate appearing to discuss vote rigging and raising beer prices, AI is making its mark on elections worldwide. Against this backdrop, governments and several tech companies are taking some steps to mitigate risks: European lawmakers just approved a watershed AI law, and just last month tech companies signed a pledge at the Munich Security Conference. But much more needs to be done to protect American democracy.
In Munich, companies including OpenAI, Apple, Meta, Microsoft, TikTok, Google, X, and others announced a compact to undertake measures to protect elections as America and other countries go to the polls in 2024. The companies pledged to help audiences track the origin of AI-generated and authentic content, to try to detect deceptive AI media in elections, and to deploy “reasonable precautions” to curb risks from AI-fueled election trickery. The compact is not unwelcome, but its success will depend on how its commitments are executed. They were couched in slippery language (“proportionate responses,” “where appropriate and technically feasible,” “attempting to,” and so on) that gives companies latitude to do very little if they so choose. Some are taking further steps, of course, but the urgency of the situation demands stronger and more universal action.
This year brings the first American national election since AI developments made it possible for fraudsters to produce phony but close-to-perfect images, video, or audio of candidates and officials doing or saying almost anything, with minimal time, at little to no cost, and on a widespread basis. And the technology underlying generative AI chatbots lets hoaxers spoof election websites or spawn fake news sites in a matter of seconds and on a mammoth scale. It also gives bad actors, foreign and domestic, the ability to conduct supercharged, hard-to-track interactive influence campaigns.
As generative AI is integrated into common search engines and voters converse with chatbots, people seeking basic information about elections have at times been met with misinformation, pure bunkum, or links to fringe websites. A recent study by the AI Democracy Projects and Proof News indicated that popular AI tools, including Google’s Gemini, OpenAI’s GPT-4, and Meta’s Llama 2, “performed poorly on accuracy” when fed certain election questions. More traditional, non-generative AI could fuel mass challenges to thousands of voters’ eligibility, risking wrongful purges from voter rolls and burdening election offices. And as election officials consider using new AI tools in their day-to-day tasks, the lack of meaningful regulation risks putting voting access in harm’s way, even as AI unlocks time-saving opportunities for short-staffed offices.
Shortly before the Munich conference, the world’s premier generative AI operation, OpenAI, announced a slate of company policies designed to curtail election harms as America votes in 2024. These include forbidding users from building custom AI chatbots that impersonate candidates, barring users from deploying OpenAI tools to spread falsehoods about when and how to vote, and embedding images with digital codes that help observers figure out whether OpenAI’s DALL-E made an image that is circulating in the wild.
But these actions, while more robust than those of some other major AI companies to date, fall short in important ways and underscore the limitations of what we may see in coming months as OpenAI and other tech giants make gestures toward honoring the commitments made in Munich. First, the company’s public-facing policies do not call out several core false narratives and depictions that have haunted prior election cycles and are likely to be resurrected in new guises this year. For example, they do not expressly name fakeries that purport to show election officials interfering with the vote count, fabrications of unreliable or impaired voting machines, or baseless claims that widespread voter fraud has occurred. (According to Brennan Center tracking, these rank among the most common false narratives promoted by election deniers in the 2022 midterm elections.) While OpenAI policies addressing misleading others and intentional deception arguably cover some or all such content, specifically naming these categories as barred from creation and spread would give more clarity to users and more protection to voters. And since election procedures, and the occasional fast-resolved Election Day glitch, vary from county to county, the company should create channels for sharing information between local election officials and OpenAI staff in the months leading up to the election.
Perhaps most importantly, the tech wunderkind needs to do more to curb the risks of working with third-party developers, that is, the companies that license OpenAI’s technology and integrate its models into their own services and products. For instance, if a user enters basic election questions into a third-party search engine built on OpenAI models, the answers can be rife with errors, outdated facts, and other blunders. (WIRED reporting on Microsoft Copilot last December revealed several issues at that time with the search engine, which uses OpenAI technology, though some of those problems may have since been addressed.) To better protect voters this election season, the company must create and enforce stricter requirements for its partnerships with developers that integrate OpenAI models into search engines, or into any digital platform where voters may go to seek information on the ABCs of voting and elections. While the accord signed in Munich was targeted at intentionally deceptive content, voters can also be harmed by AI “hallucinations,” fabrications spit out by a system’s algorithms, or by other hiccups that AI creators and service providers have failed to prevent.
But OpenAI can’t go it alone. Other tech titans must release their own election policies for generative AI tools, unmasking any behind-the-scenes tinkering and opening internal practices to public scrutiny. As a first step, more major tech companies, including Apple, ByteDance, Samsung, and X, should sign onto the Coalition for Content Provenance and Authenticity’s open standard for embedding content with digital markers that help prove it is AI-generated or authentic as it travels through the online ether. (While Apple and X committed in Munich to consider attaching similar signals to their content, a consistent standard would improve detection and coordination across the board.) The markers, meanwhile, should be made more difficult to remove. And social media companies must act swiftly to address deepfakes and bot accounts that mimic human activity, and institute meaningful account verification processes for election officials and bona fide news organizations to help voters find accurate information in feeds flush with misinformation, impersonators, and accounts masquerading as real news.
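To make those digital markers concrete: under the C2PA standard, a cryptographically signed manifest describing a file’s origin is embedded in the file itself; in JPEGs it is carried in APP11 marker segments as JUMBF boxes. The minimal Python sketch below only detects whether such a segment is present. It is an illustration, not verification: validating a manifest’s signature chain requires the official C2PA tooling, and the ease with which the segment can be stripped from a file is precisely the removal problem noted above.

```python
# Sketch: detect whether a JPEG carries an embedded C2PA manifest.
# Per the C2PA specification, manifests in JPEGs are stored in APP11
# (0xFFEB) marker segments as JUMBF boxes. Presence is not authenticity:
# this scan cannot validate signatures, and the segment can be stripped.
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    """Return True if a JPEG header contains an APP11 segment, the
    container C2PA uses for provenance data. A False result may simply
    mean the marker was removed somewhere along the way."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":        # SOI: file is not a JPEG
            return False
        while True:
            prefix = f.read(2)
            if len(prefix) < 2 or prefix[0] != 0xFF:
                return False                # malformed header or EOF
            marker = prefix[1]
            while marker == 0xFF:           # skip optional fill bytes
                nxt = f.read(1)
                if not nxt:
                    return False
                marker = nxt[0]
            if marker == 0xDA:              # SOS: header over, no APP11 found
                return False
            length_bytes = f.read(2)
            if len(length_bytes) < 2:
                return False
            (length,) = struct.unpack(">H", length_bytes)
            if marker == 0xEB:              # APP11: JUMBF/C2PA container
                return True
            f.seek(length - 2, 1)           # skip this segment's payload

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        print(f"{image_path}: provenance segment present = {has_c2pa_segment(image_path)}")
```

Run over a folder of images, a check like this would flag which files still carry provenance data and which have had it stripped, one reason embedded markers need to be paired with platform-level detection rather than trusted on their own.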
Steps by one company, or a handful of companies, to address election harms, while important, are not enough in a landscape awash with unsecured, open-source AI models, where systems’ underlying code and the mathematical instructions that make them tick are publicly available, downloadable, and manipulable. Companies as prominent as Meta and Stability AI have released unsecured AI systems, and other players have rapidly churned out many more. Anyone who wishes to interfere in elections today has a suite of AI technology to choose from to bolster their efforts, and multiple ways to deploy it. That means governments at all levels must also take urgent action to protect voters.
Congress, agencies, and states have a plethora of options at hand to blunt AI’s risks ahead of the 2024 election. Congress and the states should regulate deepfakes, particularly those spread by campaigns, political action committees, and paid influencers, by requiring disclaimers on deceptive and digitally manipulated images, video, and audio clips that could suppress votes or that misrepresent candidates’ and election officials’ words and actions. Lawmakers should also require campaigns and political action committees to clearly label a subset of content produced by the technology underlying generative AI chatbots, particularly where politicos deploy the technology to engage in continuous conversations with voters or to deceptively impersonate humans to influence elections. And policymakers should protect voters against frivolous challenges to their voting eligibility by setting constraints on the evidence that may be used to substantiate a challenge, including evidence unearthed or created through AI.
Federal agencies should act quickly to publish guidance for certifying the authenticity and provenance of government content—as envisioned by the executive order on AI issued by the administration last year—and state and local election officials should apply similar practices to their official content. Longer term, federal and state governments should also create guidance and benchmarks that help election officials evaluate AI systems before purchasing them, checking for reliability, accuracy, biases, and transparency.
These steps are all pivotal to dealing with challenges that arise specifically from AI technology. But the election risks that AI amplifies—disinformation, vote suppression, election security hazards, and so on—long predate the advent of the generative-AI boom. To fully protect voting and elections, lawmakers must also pass reforms like the Freedom to Vote Act—a set of wide-ranging provisions that stalled in the U.S. Senate. It also means updating political ad disclosure requirements for the 21st century. And it means maintaining an expansive vision for democracy that is impervious to age-old and persistent efforts to subvert it.