Google Uses Anthropic’s Claude To Improve Gemini AI

Contractors working to improve Google’s Gemini AI are comparing Gemini’s responses with output from Claude, a competing model from Anthropic. By doing so, they hope to improve Gemini’s results.

This is according to correspondence seen by TechCrunch. Google would not confirm whether it had obtained Anthropic’s permission to use Claude to test Gemini.

Mirroring benchmark
As tech companies rush to build better AI models, their performance is often compared against competitors. Usually, this is done by running their models and competing models through benchmarks. Google also takes another approach: paying contractors to carefully evaluate the answers of competing AI models.

Contractors working on Gemini who are responsible for assessing the accuracy of model outputs must evaluate each answer they see against multiple criteria, such as truthfulness and comprehensiveness.

Direct comparison
According to correspondence studied by TechCrunch, contractors are given up to 30 minutes per prompt to determine whose answer is better: Gemini’s or Claude’s.

The correspondence shows that contractors recently began noticing references to Claude from Anthropic popping up in the internal Google platform used to compare Gemini to other AI models. At least one of the outputs presented to Gemini contractors explicitly stated, “I am Claude, created by Anthropic.”

Claude stricter on safety
In an internal chat, contractors noted that Claude’s responses seem to place more emphasis on safety. “Claude’s safety settings are the strictest,” wrote one contractor. In one case, Claude refused to answer a prompt, while a contractor characterized Gemini’s response to the same prompt as a “major safety violation” because it included “nudity and bondage.”

Anthropic’s commercial terms of use prohibit customers from accessing Claude “to build a competing product or service” or to “train competing AI models” without Anthropic’s approval. Notably, Google is a major investor in Anthropic.

Suggestion “inaccurate”
Shira McNamara, a spokesperson for Google DeepMind, which is responsible for Gemini, stated that DeepMind “compares model outputs” for evaluation purposes but that it does not train Gemini using Anthropic models.

“Of course, in line with standard industry practice, in some cases we compare model outputs as part of our evaluation process,” McNamara said. “However, any suggestion that we have used Anthropic models to train Gemini is inaccurate.”

Last week, Techzine reported that Google contractors working on the company’s AI products are required to evaluate Gemini’s AI responses in areas outside their expertise. Internal correspondence raised concerns among contractors that Gemini could potentially generate inaccurate information on highly sensitive topics such as healthcare.

Source: Mels Dees, Techzine
https://www.techzine.eu/news/applications/127415/google-uses-anthropics-claude-to-improve-gemini-ai/