Cyber Overconfidence Is Leaving Your Organization Vulnerable

The growing sophistication of cyber threats, fueled by the relentless use of AI and machine learning, is producing record-breaking statistics. Cyberattacks grew 44% year over year in 2024, with a weekly average of 1,673 attacks per organization. While organizations up their security game to thwart these attacks, a critical question remains: can employees identify a threat when they come across one? A Confidence Gap survey reveals that 86% of employees feel confident in their ability to identify phishing attempts. But things are not as rosy as they appear; the more telling part of the report finds this confidence largely misplaced.

Overconfident Employees Live with a False Sense of Security
The survey revealed interesting disparities in confidence levels according to demographics:

  • By region: Employees in the UK and South Africa report the highest confidence levels, at 91%, compared with just 32% in France.
  • By gender: Men report higher scam-savviness than women, potentially because of differences in access to cybersecurity training or perceived digital literacy.
  • By age: Younger employees (25- to 34-year-olds) feel the most confident about nearly all fraud types, except for more advanced threats such as deepfake scams, where their confidence aligns with that of 16- to 24-year-olds.

A deeper look at the survey results tells a different story: higher cybersecurity confidence does not translate into lower victimization rates.

  • South Africa may have topped the confidence chart but recorded the highest rate of scam victims (68%).
  • Of the 86% of respondents confident in identifying email phishing scams, over half still fell for a cyberattack: 24% succumbed to email phishing, followed by social media phishing (17%) and deepfakes (12%).

Why Doesn't Confidence Translate Into Better Cybersecurity?
Humans differ significantly in risk perception and self-assessment based on cultural differences, education, experiences, access to technology, and more. Exposure to cyber threats and media coverage of security incidents, corporate culture, historical context of cyber incidents, and socio-economic factors all play a role in our susceptibility to deception and manipulation when a cyberattack strikes. Furthermore, cybercriminals can exploit more than 30 susceptibility factors, including emotional and cognitive biases, situational awareness gaps, behavioral tendencies, and even demographic traits, leaving little chance that cyber-savviness alone will outsmart malicious tactics.

Let’s take a closer look at why employee overconfidence, more often than not, results in security blind spots:

The Dunning-Kruger effect: This cognitive bias causes individuals to overestimate their abilities. For example, the survey revealed that while 83% of African employees were confident in their ability to recognize cyber threats, 53% did not understand ransomware, and 35% had lost money to scams.

Excessive reliance on tools and tech: Too many tools add to a system's complexity, making management difficult and allowing security lapses to slip in. Uber's 2022 breach exemplifies how overconfidence in multi-factor authentication (MFA) led to critical warning signs being overlooked, resulting in a successful attack. Attackers exploited MFA fatigue, overwhelming an employee with repeated authentication requests until the person felt compelled to accept one just to stop the barrage.
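One common mitigation for MFA fatigue is to watch for bursts of push requests and cut off push approval when a threshold is exceeded. The sketch below is a minimal, hypothetical illustration of that idea; the class name, threshold, and window are invented for this example and do not describe any specific vendor's product:

```python
from collections import deque
import time

class MfaPushMonitor:
    """Illustrative sliding-window counter for MFA push requests.
    Flags a possible MFA-fatigue attack when one user receives too
    many pushes in a short window. Thresholds are example values only."""

    def __init__(self, max_pushes=3, window_seconds=300):
        self.max_pushes = max_pushes
        self.window = window_seconds
        self._pushes = {}  # user -> deque of push-request timestamps

    def record_push(self, user, now=None):
        """Record a push request. Returns True when the user should be
        switched away from push approval to a stronger factor."""
        now = time.time() if now is None else now
        q = self._pushes.setdefault(user, deque())
        q.append(now)
        # Discard requests that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_pushes

monitor = MfaPushMonitor()
# Four pushes within a minute trips the illustrative threshold.
alerts = [monitor.record_push("alice", now=t) for t in (0, 10, 20, 30)]
# alerts -> [False, False, False, True]
```

The design point is that the defense is behavioral, not cryptographic: MFA itself was not broken in the Uber incident, so the countermeasure targets the request pattern rather than the factor.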

Optimism bias: Pride often comes before a fall, and assuming "this won't happen to me" has led to many a downfall. A case in point is the 2014 attack on Sony Pictures, where employees fell victim to phishing emails they believed they could identify. Sophisticated criminal tactics bypassed their defenses, leading to a headline-making breach.

Professional negligence: Underestimating your adversary and overestimating the ability of technology to stop incidents can sometimes overshadow the importance of investing in alternative measures. The technical jargon cybersecurity vendors use to market their wares creates a false sense of confidence that the network is unbreachable. As a result, resources and capabilities are diverted to other business needs, neglecting the very areas that need them.

What Can Organizations Do To Combat the Ills of Overconfidence?
A false sense of security from cybersecurity overconfidence might pose a greater risk than hackers. So how can organizations avoid falling into the overconfidence trap?

Foster collaboration and transparency: Encourage users to report potential issues to enhance overall security. Fostering open conversations about cybersecurity helps users see themselves as partners rather than hindrances.

Simplify reporting procedures: Make it easy for employees to report threats, and foster an environment where they can do so without fear of reprimand. This helps create a healthier security culture.

Deploy continuous training: Facilitate a culture of constant learning, where employees are encouraged to upskill and stay updated on emerging threat scenarios. Phishing simulation platforms and other forms of training exercises can improve employee instincts and reduce the risk of human error.
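Training exercises often teach employees a checklist of red flags: urgency cues, suspicious sender addresses, and odd links. A minimal sketch of such a checklist as code is shown below; every keyword list, threshold, and regex here is a hypothetical example for illustration, not how any real phishing-simulation platform scores emails:

```python
import re
from urllib.parse import urlparse

# Illustrative red flags only; real detection uses far richer signals.
SUSPICIOUS_TLDS = {"zip", "xyz", "top"}
URGENT_PHRASES = ("verify your account", "act now", "password expired")

def phishing_indicators(sender, subject, body, links):
    """Return a list of simple red flags found in an email."""
    flags = []
    if re.search(r"\d{3,}.*@", sender):
        flags.append("sender address contains a long digit run")
    text = (subject + " " + body).lower()
    for phrase in URGENT_PHRASES:
        if phrase in text:
            flags.append(f"urgency cue: '{phrase}'")
    for link in links:
        host = urlparse(link).hostname or ""
        if host.split(".")[-1] in SUSPICIOUS_TLDS:
            flags.append(f"suspicious top-level domain in {link}")
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            flags.append(f"raw IP address link: {link}")
    return flags

flags = phishing_indicators(
    sender="support12345@example.com",
    subject="Action required",
    body="Your password expired. Verify your account today.",
    links=["http://192.0.2.10/login"],
)
# Finds four flags: a digit-heavy sender, two urgency cues, and a raw-IP link.
```

The point of surfacing the checklist this explicitly in training is that each flag maps to a habit employees can rehearse, which is what simulations drill.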

Tailor security policies around employee needs: Adapting security policies and strategies that recognize the unique perspectives of different industries, locations, and age groups can go a long way in bridging the gap between overconfidence and competence.

Hire external experts: Security audits and assessments by outside providers can offer an unbiased view of the organization's cybersecurity posture. This can help subdue the overconfidence bias by providing a reality check.

Cybersecurity overconfidence can cause security blind spots. Employees assume they are fraud-savvy, leading to complacency or apathy, making them less vigilant. Rapid digitization and threat evolution have led to a scenario where what worked yesterday may not work today. That’s why organizations must regularly assess their cyber strategies and bridge the gap between real and perceived security competence through continuous learning, implementing regular phishing simulations, and enabling a transparent security culture.
