Google is restricting its Gemini AI chatbot from answering election-related questions in countries where voting is taking place this year, preventing users from receiving information about candidates, political parties and other elements of politics.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google’s India team stated on the company’s site.
The company initially announced its plans for limiting election-related queries in a blog post last December, according to a Google spokesperson, and made a similar announcement regarding European parliamentary elections in February. Google’s post on Tuesday pertained to India’s upcoming election, while TechCrunch reported that Google confirmed it is rolling out the changes globally.
When asked questions such as “tell me about President Biden” or “who is Donald Trump,” Gemini now replies: “I’m still learning how to answer this question. In the meantime, try Google search,” or gives a similarly evasive answer. Even the less subjective question “how to register to vote” receives a referral to Google search.
Google is limiting its chatbot’s capabilities ahead of a raft of high-stakes votes this year in countries including the US, India, South Africa and the UK. There is widespread concern over AI-generated disinformation and its influence on global elections, as the technology enables the use of robocalls, deepfakes and chatbot-generated propaganda.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” the company said in a statement.
Governments and regulators around the world have struggled to keep pace with advances in AI and the threat they pose to the democratic process, while big tech companies are under pressure to rein in malicious uses of their AI tools. Google’s blog post on Tuesday states that it is implementing multiple safeguards, such as digital watermarking and content labels for AI-generated material, to prevent the spread of misinformation at scale.
Google’s decision to restrict Gemini should be a reason to scrutinize the overall accuracy of the company’s AI tools, argues Daniel Susser, an associate professor of information science at Cornell University.
“If Google’s generative AI tools are too unreliable for conveying information about democratic elections, why should we trust them in other contexts, such as health or financial information?” Susser said in a statement. “What does that say about Google’s long-term plans to incorporate generative AI across its services, including search?”
Gemini recently faced a heated backlash over its image-generation capabilities after users noticed the tool inaccurately generated images of people of color in response to prompts about historical situations, including depictions of people of color as Catholic popes and as German Nazi soldiers in the second world war. Google suspended some of Gemini’s capabilities in response to the controversy, apologizing and saying it would tweak the technology to fix the issue.
The Gemini scandal involved issues around AI-generated misinformation, but it also showed how major AI firms are finding themselves at the center of culture wars and under intense public scrutiny. Republican lawmakers accused Google of promoting leftist ideology through its AI tool, with the Missouri senator Josh Hawley calling on its CEO, Sundar Pichai, to testify under oath before Congress about Gemini.
Prominent AI companies, including OpenAI and Google, increasingly appear willing to block their chatbots from engaging with sensitive questions that could provoke a public relations backlash. Even deciding which questions to block is fraught, however: a 404 Media report from earlier this month found that Gemini would not answer questions such as “what is Palestine” but would engage with similar queries about Israel.