Deepfakes And The Erosion Of Digital Trust: Zero-Trust Strategies In The Age Of AI-Generated Content

Terence Jackson, Chief Security Advisor at Microsoft.


"Trust only what you see" is no longer a principle to live by nowadays, considering the many tools that can manipulate what we read, hear or see. Last week, OpenAI introduced Sora, a groundbreaking AI system capable of transforming text descriptions into photorealistic videos.

Sora builds on OpenAI’s existing technologies, including the renowned DALL-E image generator and the sophisticated GPT large language models. It can produce videos up to 60 seconds long from pure text instructions or a combination of text and images.

Yet the rise of technologically advanced systems like Sora also amplifies concerns that artificial deepfake videos will exacerbate misinformation and disinformation, especially during crucial election years like 2024.

Widespread Misinformation And Cybersecurity Concerns—The New Digital Reality?
Until now, text-to-video AI models have trailed behind in realism and widespread accessibility. But security expert Rachel Tobac describes Sora’s outputs as an "order of magnitude more believable and less cartoonish" than its predecessors. That leap brings significant cybersecurity risks for all of us.

Even before Sora, the global incidence of deepfakes had skyrocketed tenfold from 2022 to 2023, with a 1,740% increase in North America, 1,530% in APAC, 780% in Europe (including the U.K.), 450% in MEA and 410% in Latin America.

To allay these concerns, OpenAI explains that it collaborates with red teamers—experts in misinformation, hateful content and bias—to adversarially test the Sora model. The team is also developing tools, including a detection classifier, to identify misleading content and videos generated by Sora.

But is this enough to combat the wave of misinformation and deepfakes that would follow the wider adoption of Sora? KPMG estimates (pg. 5) that over 70% of businesses have not taken any concrete steps to prepare for or protect themselves from deepfakes.

Individuals and enterprises alike should strengthen their own cyber defenses—starting with adopting a zero-trust model.

Combatting The Proliferation Of Deepfakes With Zero Trust
As the name suggests, the philosophy of the cybersecurity model of zero trust is "never trust, always verify." Zero trust operates on the rationale that no entity, internal or external, should be trusted by default, and verification is required from everyone trying to access resources in the network.

But under these new circumstances of AI-generated content, zero-trust principles should not end at securing networks and endpoints. Cybersecurity leaders must also validate the content that circulates within and outside their enterprises and keep an eye on how the online landscape is shifting.

It is deeply concerning that almost half of organizations deem the zero-trust model a moderate or low priority. The following steps are a starting point for strengthening your cybersecurity strategy on the AI-generated content front.

Integrating Adaptive Access Control
Under this model, access controls adjust dynamically based on the user’s context and behavior. If an access request seems unusual, such as one coming from a new location or device, zero trust tightens the security checks, which is useful when deepfakes are used to mimic legitimate users.
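To make the idea concrete, here is a minimal sketch of risk-based access decisions. The risk signals, weights, thresholds and data structures are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of adaptive access control under zero trust.
# The risk signals, weights and thresholds below are illustrative only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    location: str

@dataclass
class UserBaseline:
    known_devices: set
    usual_locations: set

def risk_score(req: AccessRequest, baseline: UserBaseline) -> int:
    """Unfamiliar devices and locations raise the risk score."""
    score = 0
    if req.device_id not in baseline.known_devices:
        score += 2
    if req.location not in baseline.usual_locations:
        score += 1
    return score

def decide(req: AccessRequest, baseline: UserBaseline) -> str:
    """Never trust by default: even low-risk requests still require MFA."""
    score = risk_score(req, baseline)
    if score == 0:
        return "allow_with_mfa"
    if score <= 2:
        return "step_up_verification"
    return "deny"

# Example: a request from an unknown device in an unusual location is refused
# until the user re-verifies through a stronger channel.
baseline = UserBaseline(known_devices={"laptop-01"}, usual_locations={"Seattle"})
request = AccessRequest(user_id="alice", device_id="phone-99", location="unknown-city")
print(decide(request, baseline))  # -> deny
```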

Using Real-Time Content Analysis
AI-driven analysis tools can detect anomalies in video and audio that may indicate a deepfake. Common clues of AI-generated content emerge from examining speech patterns, facial movements and other digital artifacts often present in manipulated content.
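As a rough illustration, the sketch below combines several detector signals into a single deepfake likelihood. The detector functions are stand-ins for real models, and the weights and threshold are assumptions made for the example.

```python
# Illustrative fusion of deepfake-detection signals. The stub detectors below
# stand in for real speech, facial-movement and artifact models.

def speech_pattern_score(audio) -> float:
    # Placeholder: a real model would return a 0.0-1.0 likelihood of synthetic speech.
    return 0.0

def facial_movement_score(video) -> float:
    # Placeholder: a real model would score unnatural blinking, lip sync and head motion.
    return 0.0

def artifact_score(video) -> float:
    # Placeholder: a real model would score blending seams and compression artifacts.
    return 0.0

def deepfake_likelihood(audio, video, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted combination of the individual detector scores."""
    scores = (speech_pattern_score(audio),
              facial_movement_score(video),
              artifact_score(video))
    return sum(w * s for w, s in zip(weights, scores))

def flag_for_review(audio, video, threshold: float = 0.7) -> bool:
    """Route content above the (illustrative) threshold to human review."""
    return deepfake_likelihood(audio, video) >= threshold
```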

Improving Identity Verification
Zero trust demands strong proof of identity before granting access, which can be integrated into strategies for countering deepfake attempts. Implementing multifactor authentication (MFA) and digital certificates ensures access is granted only after thorough verification, making it harder for deepfakes to penetrate security perimeters.
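The sketch below shows how several factors might be checked together before access is granted. The in-memory stores and helper functions are hypothetical and simplified; a real deployment would use a proper TOTP library and full certificate validation.

```python
# Simplified multifactor verification sketch: every factor must pass before
# access is granted. The in-memory stores here are hypothetical placeholders.
import hashlib
import hmac

def verify_password(user_id: str, password: str, password_hashes: dict) -> bool:
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(password_hashes.get(user_id, ""), supplied)

def verify_one_time_code(user_id: str, code: str, expected_codes: dict) -> bool:
    # In production this would be a time-based (TOTP) check, not a static lookup.
    return hmac.compare_digest(expected_codes.get(user_id, ""), code)

def verify_device_certificate(fingerprint: str, trusted_fingerprints: set) -> bool:
    return fingerprint in trusted_fingerprints

def grant_access(user_id, password, code, cert_fp,
                 password_hashes, expected_codes, trusted_fingerprints) -> bool:
    """Grant access only when every factor checks out: never trust by default."""
    return (verify_password(user_id, password, password_hashes)
            and verify_one_time_code(user_id, code, expected_codes)
            and verify_device_certificate(cert_fp, trusted_fingerprints))
```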

Implementing Behavior Monitoring And Analysis
Continuous monitoring of abnormal behavior is critical for a zero-trust architecture. Quite simply, it means looking for any actions that deviate from the user’s typical pattern—which might aid the detection of impersonations by deepfakes. Any suspicious activity triggers immediate action, significantly mitigating the risk of unauthorized access.
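As a simple example, the sketch below flags activity that sits far outside a user's historical baseline. The metric (files accessed per hour) and the z-score threshold are assumptions chosen for illustration.

```python
# Behavior-monitoring sketch: flag activity far outside a user's baseline.
# The metric and threshold are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history: list, current: float, threshold: float = 3.0) -> bool:
    """Return True when current activity deviates sharply from the baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: a user who normally touches ~20 files per hour suddenly touches 400.
baseline_activity = [18, 22, 19, 25, 21, 17, 23]
if is_anomalous(baseline_activity, 400):
    print("Suspicious activity: require re-verification and alert the security team")
```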

Bottom Line
Sora and similar AI-generation models open a Pandora’s box of potential misuse, particularly through deepfakes. So, can cybersecurity rise to this challenge and successfully integrate zero-trust principles against AI-generated images and videos? Only time will tell.

