Greg Brunk is the Co-founder and Head of Product at MetaRouter.
Business as we know it has transformed dramatically over the past three years thanks to the rapid evolution of AI. But this explosion of AI is just the latest development in a long line of technological evolution. In reality, “novel” manifestations of AI, like generative AI and large language models (LLMs), have been brewing behind the scenes for years. In fact, rudimentary versions of GenAI date back to the 1960s.
Most people aren’t tracking the deep lineages of AI, ML and neural networks, leading to widespread confusion about the actual capabilities and risks of these technologies. Fewer than half of Americans report ever using AI, so most consumers still base their opinions of AI on media depictions or word of mouth rather than direct exposure. This may explain why over half of U.S. consumers (54%) feel cautious about advances in AI, 49% feel concerned and 22% feel downright scared.
These impressions matter. Businesses building AI engines or working with AI vendors must craft strategies that account for prevailing consumer sentiment, addressing distrust without hampering the innovation AI can bring. It’s a delicate balance, but ultimately a necessary one.
Consumer Perceptions About AI Matter
Distrust in AI has grown since the release of ChatGPT, the first widely accessible GenAI tool. As of mid-2023, only 39% of U.S. consumers believed AI technologies were safe and secure, down nine percentage points from November 2022. More recent data indicates that 44% of consumers fear GenAI will specifically compromise their data privacy and confidentiality.
These fears aren’t unfounded. After all, LLMs currently offer no “delete” option, leaving many consumers wondering whether their conversations with these chatbots will ever stop lingering in cyberspace. AI also amplifies existing consumer fears about the lack of privacy in the digital age.
Over the years, we’ve seen many stories about companies neglecting data privacy in the name of hyper-personalization. For example, in 2012, Target’s predictive analytics algorithms alerted a father to his teenage daughter’s pregnancy before she’d had the chance to tell him personally. This infamous case represents what can happen when companies embrace AI technologies without safeguarding consumers’ data privacy rights.
Most modern consumers know their data is highly valuable. Combine that with a pervasive mistrust of AI, and you’re left with a perfect storm: Consumer trust is declining across the board. Six in ten consumers believe companies routinely misuse their personal data, meaning a majority of shoppers have already lost faith in digital retailers. Reversing this course is vital because trust remains a deciding factor in buying decisions for more than 80% of consumers. Trust has become a new form of currency for retailers, and it’s in short supply.
How To Use AI And Still Maintain Trust
Despite concerns about its misuse, AI remains an incredible resource for increasing productivity and delivering sharp consumer insights. Leveraged appropriately, it can improve the customer experience (CX) through better demand forecasting, pricing strategies and personalized marketing. So, neglecting AI isn’t an option.
Therefore, leaders must place guardrails around their emergent AI models and policies. Priorities to consider in this process include:
Remain transparent and disclose AI policies. Eighty-five percent of consumers want companies to transparently share AI assurance practices before bringing products equipped with AI technology to market. In this context, transparency means a few different things. First, consumers want disclosures whenever AI is (and isn’t) used. Second, they want this information distilled in a non-technical, consumer-friendly way. Third, they want these disclosures displayed prominently. This could be as simple as providing notice before customers interact with an AI chatbot or pre-loading a banner at the bottom of your site with links to relevant AI disclosures whenever AI is involved in the data collection process. Furthermore, it’s important to integrate robust data governance into your AI strategy’s DNA. By enforcing consumer consent preferences at the moment of data collection, you reduce the chance that AI tools will mishandle consumer data.
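The idea of enforcing consent at the moment of collection can be made concrete with a small sketch. The example below is purely illustrative (the `Event` structure, consent store and purpose names are all hypothetical, not any specific vendor's API): each incoming event declares the purposes it will serve, and a gate drops it before it ever reaches downstream AI or analytics tools unless the user has opted into every one of those purposes.

```python
# Illustrative sketch: a consent gate applied at data-collection time.
# All names (Event, CONSENT, purpose strings) are hypothetical examples.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Event:
    user_id: str
    payload: dict
    purposes: set = field(default_factory=set)  # purposes this event will serve

# Hypothetical consent store: user_id -> purposes the user has opted into.
CONSENT = {
    "u1": {"analytics", "personalization"},
    "u2": {"analytics"},  # u2 has not consented to personalization
}

def enforce_consent(event: Event) -> Optional[Event]:
    """Pass the event through only if every purpose it serves is consented to."""
    allowed = CONSENT.get(event.user_id, set())
    if event.purposes <= allowed:
        return event          # safe to forward to AI/analytics tools
    return None               # blocked before any downstream tool sees it

ok = enforce_consent(Event("u1", {"page": "/home"}, {"personalization"}))
blocked = enforce_consent(Event("u2", {"page": "/home"}, {"personalization"}))
```

The key design choice is that the check happens once, at the point of collection, rather than trusting each downstream tool to apply consent rules on its own.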
Stay ahead of evolving regulations. Companies must demonstrate that they handle data securely and comply with relevant privacy regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Failure to do so has resulted in costly fines for some of the world’s biggest companies, most recently including X (formerly Twitter), which was hit with nine GDPR complaints over its non-consensual use of user data to train AI. But remaining aware of current policies isn’t enough anymore. Businesses should proactively research emerging regulations to ensure they’re laying the groundwork for an effective AI and data privacy strategy. Legislation to keep an eye on here includes the American Privacy Rights Act (APRA). If passed, this regulation would codify data privacy protections for consumers across the U.S., including the right to be forgotten. It’s important to consider how your AI tools might help—or hinder—the realization of these policies.
Vet third parties carefully. Even if companies aren’t directly interfacing with AI, it’s likely that their third-party partners are. It’s important to understand the AI policies of third-party partners and vendors so you can communicate this information to consumers when relevant. Additionally, conducting this due diligence before entering a relationship enables you to make a more informed decision about vendor selection. Always prioritize vendors with ethical and well-disclosed AI policies.
Navigating The Future Of AI In Business
It’s becoming clear that AI will remain an indispensable resource for businesses moving forward. Industry research indicates that most businesses (55%) have entered the AI implementation phase, meaning leaders will spend the next few years determining how to best communicate and explain their AI practices to consumers.
During this phase, I urge you to focus on consumers’ best interests. By protecting consumers’ right to transparency and information, leaders can ensure their AI strategies withstand the test of time—and the increasingly brutal battle for trust.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
Originally published on Forbes: https://www.forbes.com/councils/forbestechcouncil/2024/11/05/how-to-maintain-consumer-trust-in-the-ai-era/
Author: Greg Brunk, Forbes Councils Member