Marketers are increasingly discovering that artificial intelligence can be biased.
In recent weeks, testing by The Brandtech Group on its gen AI-as-a-service platform, Pencil, revealed troubling results when generating images of a CEO. Two models exclusively produced male images in 100 generations, while a third model showed 98% male results. A fourth and fifth model displayed male CEOs 86% and 85% of the time, respectively.
The male-heavy results highlight a disconnect from reality: McKinsey’s 2023 report found that women hold 28% of C-suite roles, and 10.4% of Fortune 500 CEOs are female. Pencil aggregates major AI models trained on both open web data and copyright-cleared libraries. The company wouldn’t share specifics on which AI models showed bias.
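The article doesn’t detail Brandtech’s test harness, but the kind of audit it describes, generating the same prompt repeatedly and tallying the perceived gender of the results, can be sketched minimally as below. The `generate_and_label` helper is a made-up stand-in, not Pencil’s actual API.

```python
import random
from collections import Counter

def generate_and_label(model: str, prompt: str) -> str:
    """Hypothetical stand-in: Pencil's API is not public, so this
    simulates one generate-and-label step. A real audit would call the
    image model, then label each image's perceived gender (e.g., via
    human review)."""
    # Placeholder: simulate a heavily skewed model (~98% male).
    return random.choices(["male", "female"], weights=[98, 2])[0]

def audit_model(model: str, prompt: str = "a CEO", n: int = 100) -> Counter:
    """Run n generations of one prompt and tally perceived gender."""
    return Counter(generate_and_label(model, prompt) for _ in range(n))

print(audit_model("model-a"))  # e.g., Counter({'male': 97, 'female': 3})
```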
To tackle AI biases like age and race stereotyping, The Brandtech Group is launching a technology product aptly named Bias Breaker. The tool has been in beta for more than a year, and The Brandtech Group said it is now being offered to all clients.
The Brandtech Group is pitching Bias Breaker as a way to correct for bias across diversity factors like race, gender, and age. It is incorporated into Pencil, which has generated over 1 million ads for more than 5,000 brands and processed $1 billion in media spend since 2018. The Brandtech Group credited Bias Breaker for new business wins but declined to name clients.
“Ethics in [marketing] is about using this technology in a responsible way,” said The Brandtech Group partner and head of emerging technology Rebecca Sykes. “That means that it does no harm and is net positive in terms of the capability marketers have to reach more people in more relevant and applicable ways.”
Marketers are enjoying the efficiency and scalability of generative AI tools for creating targeted ad images. However, if these AI-generated images lean towards certain stereotypes, the potential ROI could be compromised, according to The Brandtech Group.
Some of the big brands working with The Brandtech Group have sounded the alarm on defaulting to stereotypes in AI-generated ads with Pencil, according to Sykes. For example, AI might automatically generate an image of a woman cleaning a house for a cleaning product brand.
“[Brands] don’t always want that stereotype,” said Sykes. “We’re making sure that we don’t quietly slip back into accepting a default position.”
Bias Breaker works by incorporating key diversity factors, such as age, race, ability, gender identity, and religion, into the prompts sent to AI models, drawing on sources like Citizens Advice, Gov.uk, and the U.S. Census Bureau.
When you prompt Pencil to generate a CEO, the technology automatically applies these diversity criteria by randomly selecting one or more factors to enrich the prompt, resulting in what The Brandtech Group considers a more inclusive output.
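Brandtech hasn’t published Bias Breaker’s implementation, but the mechanism described here, randomly picking one or more diversity factors and folding them into the prompt, can be sketched roughly as follows. The factor lists and the `enrich_prompt` function are illustrative assumptions, not the tool’s real data or code.

```python
import random

# Illustrative factor lists. Bias Breaker's actual data is sourced from
# Citizens Advice, Gov.uk, and the U.S. Census Bureau; these values are
# placeholders, not the tool's real taxonomy.
DIVERSITY_FACTORS = {
    "age": ["in their 20s", "in their 40s", "in their 60s"],
    "gender identity": ["a woman", "a man", "a nonbinary person"],
    "race": ["Black", "East Asian", "Hispanic", "white"],
    "ability": ["who uses a wheelchair", "who wears a hearing aid"],
}

def enrich_prompt(prompt: str, k: int = 2) -> str:
    """Randomly pick k factor categories and fold one value from each
    into the prompt, mirroring the enrichment step described above."""
    categories = random.sample(list(DIVERSITY_FACTORS), k=k)
    details = ", ".join(random.choice(DIVERSITY_FACTORS[c]) for c in categories)
    return f"{prompt}, {details}"

print(enrich_prompt("a CEO in a modern office"))
# e.g., "a CEO in a modern office, in their 60s, a woman"
```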
“It’s still incumbent on the human user to choose good images and consciously prompt to make good choices,” Sykes said.
Beyond gender and profession bias
When prompted for ‘menopause,’ Pencil generated images of significantly older women in their 70s instead of their 40s or 50s. (Image: The Brandtech Group)
AI-generated images reveal biases extending to descriptions, affecting representations beyond just gender and profession.
The Brandtech Group’s clients frequently target products to people at various life stages, using prompts like “emotional,” “strong,” “intelligent,” “beautiful,” or “menopausal.” However, the default AI images for “beautiful” often skew toward young and slim women. Menopause is depicted with significantly older women in their 70s or 80s, rather than those in their 40s, according to Sykes. And “emotional” is almost always associated with women, while “strong” is linked to men, according to The Brandtech Group.
Prompting ‘authoritative’ in AI generates images of men, highlighting a bias in representation. (Image: The Brandtech Group)
“They’re very gendered terms in the way that we’ve used them historically in language, and you see that reflected back at you,” said Sykes. “But most brands don’t want that kind of gendered or skewed reflection.”
Shifts in client briefs
The Brandtech Group’s clients that tested Bias Breaker are refining their approach to AI, shifting from a broad mindset to a more targeted strategy, Sykes said. Initially, most brand briefs sought to apply AI across all aspects of their work, from image generation to brand messaging. Now, they are more selective, focusing on where automation and scalability can be most effective.
For example, when crafting campaign images for events like Black History Month or Pride Month, The Brandtech Group encourages brands not to use AI and instead to work directly with community members, compensating them fairly to be part of the campaign.
“From the very sort of positioning of it, it makes a bad AI brief,” said Sykes. “You cannot substitute AI for what would be real-life work, such as understanding the community.”
For scaling product communication or optimizing brand messaging and performance, where human experience is less critical, the benefits of generative AI become more evident.
“It’s a more intentional application of AI,” said Sykes.