The chatbot optimisation game: can we trust AI web searches?

Google and its rivals are increasingly employing AI-generated summaries, but research indicates their results are far from authoritative and open to manipulation
Does aspartame cause cancer? The potentially carcinogenic properties of the popular artificial sweetener, added to everything from soft drinks to children’s medicine, have been debated for decades. Its approval in the US stirred controversy in 1974, several UK supermarkets banned it from their products in the 00s, and peer-reviewed academic studies have long butted heads. Last year, the World Health Organization concluded aspartame was “possibly carcinogenic” to humans, while public health regulators suggest that it’s safe to consume in the small portions in which it is commonly used.
While many of us may look to settle the question with a quick Google search, this is exactly the sort of contentious debate that could cause problems for the internet of the future. As generative AI chatbots have rapidly developed over the past couple of years, tech companies have been quick to hype them as a utopian replacement for various jobs and services – including internet search engines. Instead of scrolling through a list of webpages to find the answer to a question, the thinking goes, an AI chatbot can scour the internet for you, combing it for relevant information to compile into a short answer to your query. Google and Microsoft are betting big on the idea and have already introduced AI-generated summaries into Google Search and Bing.
Callum Bains
https://www.theguardian.com/technology/2024/nov/03/the-chatbot-optimisation-game-can-we-trust-ai-web-searches