Earlier this year, a Korean university student’s world shattered when a message arrived on her phone — a sexually explicit image of herself she had never taken. There she was, with her face digitally grafted onto someone else’s body.
The images were deepfakes: media fabricated by AI models trained on real photos, videos, or audio, and designed to deceive and humiliate. And the student is not alone. Today, the deepfake crisis is unfolding across schools, workplaces, and our personal lives. This year has seen a nearly 60% surge in phishing attacks, driven in part by AI deepfakes. Even more worryingly, these attacks are hijacking our perceptions, eroding our ability to distinguish fiction from reality.
This rise in deepfake and audio phishing can be linked to the growing accessibility of tools like D-ID and ElevenLabs, which make identity cloning easy. In just eight minutes, anyone can create a convincing deepfake video. When manipulation is this effortless, the core challenge lies in verifying authenticity. To understand why these scams deceive everyone, from C-suite executives to frontline employees, we can turn to psychologist Robert Cialdini’s principles of persuasion. Scammers craft principles such as authority, social proof, and urgency into their messages, prompting automatic thinking from even the most cautious of us. It is no wonder that human error drives most cyber incidents. These tactics exploit our cognitive biases, preying on the blind spots that arise from quick, emotion-driven decisions.
As verifying digital content grows more complex, we instinctively turn to technology for help. Numerous “AI detectors” make bold claims, but with tools available that strip AI watermarks from images, and with high school teachers struggling to spot AI-generated homework, effective solutions are scarce. Automatic detection remains inconsistent and inaccessible. Unsurprisingly, threat actors are capitalizing on this gap.
Until protective technology catches up, individuals are left to navigate real-time decisions under high pressure, without a reliable safety net. But an awareness of our cognitive biases can significantly improve our decision-making accuracy, especially under stress. By understanding and anticipating our cognitive vulnerabilities, we can equip ourselves to recognize and respond to these manipulation tactics.
It’s not a matter of human failure; our minds, wired for efficiency, are being manipulated in ways we’re naturally vulnerable to. Let’s start with three cognitive biases that affect us all, whether in daily personal choices or high-stakes business decisions.
Consider the case of a Chicago man who lost $50,000 after falling for an urgent email, seemingly from PayPal, warning that his account was about to be compromised due to suspicious transactions. The message urged him to click an enclosed link and re-enter his bank account details. In that moment, he was a victim of cognitive tunneling: a one-track focus on the urgent task at hand blinded him to other telling signs of the scam, such as the suspicious URL and sender address.
Or think of the 72-year-old man from Kerala, India, who received a video call from an old colleague urgently asking for financial help with a medical emergency. Consumed by worry, he transferred the funds without realizing that the emotional pleas had been fabricated by a scammer. The incident illustrates another common bias: the affect heuristic, where our emotions overpower our judgment.
Finally, take the financial worker who attended a video call with what appeared to be multiple senior executives of a multinational firm and was asked to immediately wire $25 million. His trust in the perceived authority figures led him to send the money, overriding the inconsistencies he had noticed. This is a clear example of authority bias, where we instinctively trust and give undue weight to the words of those in power.
Executives and leaders have always been prime targets of phishing scams, but deepfakes gravely escalate these threats. With digital media making images and video recordings easily accessible, creating a convincing attack has never been simpler. While systemic defenses against phishing, from multi-factor authentication and AI-powered detection to gamified training and cyber insurance, offer valuable protection, they are not enough. Below, we offer practical steps for immediate, personal vigilance and detection, essential elements in fostering a resilient security culture.
Phishing scams rely on urgency or fear to distract us from scrutinizing the situation. Research shows that deliberately shifting focus toward neutral cues helps manage attention biases. Take the recent case of a senior manager at a cybersecurity company who was targeted by a scammer impersonating his CEO. Armed with a convincing WhatsApp message and audio recording, the scammer pressed him for action on an urgent “business deal.”
Instead of falling for the ploy, the manager focused on the cues that didn’t add up: the request to discuss sensitive information on an informal app and the caller’s refusal of a follow-up call. He paused, consulted his mental checklist, and reported the incident to IT. Developing the habit of stepping back, even for nine seconds (a minute or two is even better), can make a significant difference. The pause allows time to consult a curated checklist of neutral verification steps: verify sender details, hover over enclosed links, or take a walk to cool down. This action-oriented pause helps regain focus on facts and manage cognitive overload.
Social engineering attacks target our emotions first. But when our response matches the intensity of the manipulation, our thinking grows reactive and rigid. A more effective strategy is affect labeling: consciously naming our emotions so we can see them as transient data points. This practice helps us recognize what’s at stake while enabling a more rational response.
Take Alan and Alicia, who received a horrifying call claiming Alan’s parents had been taken hostage. Initially panicking, Alan took a moment to acknowledge his fear. He recognized that panicking wouldn’t save his parents and opted for a logical strategy: he used a different phone to call them directly and confirmed their safety. Pausing to label his emotions helped him regain emotional control and weigh long-term consequences over short-term relief. By labeling our emotions, we can shift from automatic responses to more deliberate, mindful ones.
In most incidents, half the battle is lost because of our inherent trust in authority. A good practice is to leverage shared knowledge as a verification tool. When a Ferrari executive received a call from CEO Benedetto Vigna from an unknown number, he initially thought nothing of it. But as the conversation turned confidential, he grew suspicious. Instead of trusting the caller’s authority, he asked a personal question only the real CEO could answer. The caller went silent and hung up.
Another strategy is to actively seek information that challenges our assumptions. Engaging in collective decision-making can counter blind faith in authority, as can consulting credible sources before acting on urgent requests. Most scams make unusual requests; simply asking, “Why does this feel different?” can trigger our awareness and nudge us to verify with trusted sources. This tactic of credibility checking, by calling back through trusted or verified channels, keeps misinformation in check, reduces anxiety and ambiguity, and allows for follow-up questions.
Establishing transparent communication channels within an organization empowers employees to challenge questionable instructions or voice concerns without fear of retaliation. A workplace culture that encourages questioning, verification, and collaborative problem-solving builds a resilient organization.
Cognitive biases are essential to our daily decision-making and cannot be eliminated. But they can be managed. Over time, cognitive resilience, supported by an enabling workplace and technology, can fortify organizations from the ground up.
As October marks Cybersecurity Awareness Month, it is the perfect opportunity for organizations to revisit and strengthen their cybersecurity strategies to better protect themselves and their employees. But there’s never a bad time to restart the conversation about the role each of us has to play in securing ourselves. Most attacks rely on the simple strategy of exploiting humans. A true cybersecurity culture starts by protecting the human mind, even before turning to the systems around it. To build a resilient security mindset, we must first master how we are wired.