Darktrace - Generative AI Changes Everything You Know About Email Cyber Attacks

In new data published today, Darktrace reveals that email security solutions – including native, cloud, and ‘static AI’ tools – take an average of thirteen days to detect an attack after it has been launched against a victim, leaving defenders vulnerable for almost two weeks if they rely solely on these tools.

In March 2023, Darktrace commissioned a global survey with Census-wide of 6,711 employees across the UK, the United States, France, Germany, Australia, and the Netherlands to gather third-party insights into human behavior around email. The aim was to better understand how employees globally react to potential security threats, how well they understand email security, and how modern technologies are being used to transform the threats against them.

Key findings (globally and U.S.) indicate:

  1. Eighty-two percent of global employees are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
  2. The top three characteristics of communication that make employees think an email is a phishing attack are: being invited to click a link or open an attachment (68%), unknown sender or unexpected content (61%), and poor use of spelling and grammar (61%).
  3. Nearly 1 in 3 (30%) of global employees have fallen for a fraudulent email or text in the past.
  4. Seventy percent of global employees have noticed an increase in the frequency of scam emails and texts in the last 6 months.
  5. Eighty-seven percent of global employees are concerned about the amount of personal information available about them online that could be used in phishing and other email frauds.
  6. Over a third of people (35%) have tried ChatGPT or other generative AI chatbots.

The Email Threat Landscape Today
Darktrace researchers observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT. These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length with no links or attachments. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
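The linguistic signals described above – increased text volume, heavier punctuation, longer sentences, and the absence of links or attachments – can be expressed as simple measurable features. The sketch below is purely illustrative (it is not Darktrace's detection logic, and the thresholds a real system would learn are not shown); it just shows how such signals might be extracted from an email body:

```python
import re

def linguistic_features(body: str) -> dict:
    """Extract simple linguistic signals of the kind described above.

    Illustrative only: a production system would learn per-sender
    baselines for these features rather than inspect them in isolation.
    """
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "char_count": len(body),                      # text volume
        "punctuation_count": sum(body.count(p) for p in ",;:!?"),
        "avg_sentence_words": len(words) / max(len(sentences), 1),
        "contains_link": bool(re.search(r"https?://", body)),
    }

# A long, link-free, heavily punctuated message of the kind the
# research associates with novel social-engineering attacks.
email = ("Dear colleague, as discussed in Tuesday's meeting, could you "
         "review the attached figures; I believe the Q2 numbers, once "
         "reconciled, will clarify the variance we flagged?")
features = linguistic_features(email)
```

A detector would compare these values against what is normal for that sender and recipient, rather than applying fixed cut-offs.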

In addition, threat actors are rapidly exploiting the news cycle to profit from employee fear, urgency, or excitement. The latest iteration of this is the collapse of Silicon Valley Bank (SVB) and the resulting banking crisis, which has given attackers an opportunity to spoof sensitive communications – for example, mimicking legitimate messages that instruct recipients to update bank details for payroll, in order to intercept payments. Seventy-three percent of employees working in financial services organizations have noticed an increase in the frequency of scam emails and texts in the last six months.

Innocent human error and insider threats remain an issue. Many of us (nearly 2 in 5) have sent an important email to the wrong recipient with a similar-looking alias, by mistake or due to autocomplete. This rises to over half (51%) in the financial services industry and 41% in the legal industry, adding another layer of security risk that is not malicious. A self-learning system can spot this error before the sensitive information is incorrectly shared.

What Does the Arms Race for Generative AI Mean for Email Security?

Your CEO emails you to ask for information. It is written in the exact language and tone of voice that they typically use. They even reference a personal anecdote or joke. Darktrace’s research shows that 61% of people look out for poor use of spelling and/or grammar as a sign that an email is fraudulent, but this email contains no mistakes. The spelling and grammar are perfect, it has personal information, and it is utterly convincing. But your CEO did not write it. It was crafted by generative AI, using basic information that a cyber-criminal pulled from social media profiles.

The emergence of ChatGPT has catapulted AI into the mainstream consciousness – 35% of people have already tried ChatGPT or other generative AI chatbots for themselves – and with it, real concerns have emerged about its implications for cyber defense. Eighty-two percent of global employees are concerned that hackers can use generative AI to create scam emails indistinguishable from genuine communications.

Emails purporting to come from CEOs or other senior business leaders rank third among the types of email employees are most likely to engage with, cited by over a quarter of respondents (26%). Defenders are up against generative AI attacks that are linguistically complex, and against entirely novel scams that use techniques and reference topics never seen before. In a world of increasing AI-powered attacks, we can no longer put the onus on humans to determine the veracity of communications. This is now a job for artificial intelligence.

By understanding what is normal, AI can determine what does not belong in a particular individual’s inbox. Conventional email security systems get this wrong too often, with 79% of respondents saying that their company’s spam/security filters incorrectly stop important legitimate emails from getting to their inbox.

With a deep understanding of the organization, and how the individuals within it interact with their inbox, the AI can determine for every email whether it is suspicious and should be actioned or if it is legitimate and should remain untouched.
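The core idea – learning what is normal for each mailbox rather than training on known-bad examples – can be sketched in a few lines. The toy model below is an assumption-laden illustration, not Darktrace's actual approach: it learns only which senders a mailbox usually hears from, and scores unfamiliar senders as anomalous.

```python
from collections import Counter

class MailboxBaseline:
    """Toy model of per-recipient 'normal': which senders this mailbox
    usually hears from. Purely illustrative; a real system would model
    many more dimensions (tone, timing, links, recipients, and so on)."""

    def __init__(self):
        self.sender_counts = Counter()
        self.total = 0

    def observe(self, sender: str) -> None:
        # Learn from ordinary traffic; no labelled 'bad' examples needed.
        self.sender_counts[sender.lower()] += 1
        self.total += 1

    def anomaly_score(self, sender: str) -> float:
        # 0.0 = entirely familiar sender, 1.0 = never seen before.
        if self.total == 0:
            return 1.0
        return 1.0 - self.sender_counts[sender.lower()] / self.total

# Hypothetical traffic: the mailbox mostly hears from two addresses.
baseline = MailboxBaseline()
for s in ["ceo@acme.example"] * 8 + ["hr@acme.example"] * 2:
    baseline.observe(s)

familiar = baseline.anomaly_score("ceo@acme.example")   # low: 0.2
novel = baseline.anomaly_score("ceo@acme-pay.example")  # high: 1.0
```

Note how the look-alike domain (`acme-pay.example`) scores as maximally anomalous even though no signature or blocklist ever described it – the essence of the self-learning approach the article describes.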

This approach can stop threats like:

  1. Phishing
  2. CEO Fraud
  3. Business Email Compromise (BEC)
  4. Invoice Fraud
  5. Data Theft
  6. Social Engineering
  7. Ransomware & Malware
  8. Supply Chain Attack
  9. URL-based Spear-phishing
  10. Account Takeover
  11. Human Error
  12. Insider Threat

Self-learning AI in email, unlike other email security tools, is not trained on what ‘bad’ looks like; instead, it learns the users and the normal patterns of life of each unique organization.

Social engineering – specifically malicious cyber campaigns delivered via email – remains the primary source of an organization’s vulnerability to attack. Popularized in the 1990s, email-borne attacks have challenged cyber defenders for almost three decades. The aim is to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward, so that threat actors can get to the heart of critical systems.

Social engineering is a profitable business for hackers – according to estimates, around 3.4 billion phishing emails are delivered every day.

As organizations continue to rely on email as their primary collaboration and communication tool, email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against evolving email threats.

Widespread accessibility to generative AI tools, like ChatGPT, as well as the increasing sophistication of nation-state actors, means that email scams are more convincing than ever.

Humans can no longer rely on their intuition to stop hackers in their tracks; it is time to arm organizations with an AI that knows them better than attackers do.