Scam Sites at Scale: LLMs Fueling a GenAI Criminal Revolution

Cybercrime groups, like other businesses, can create more content in less time using GenAI tools. Over the last six months, Netcraft has identified threat actors using these technologies across a range of attacks, from refining advance-fee fraud to spamming the crypto space. In total, our observations show LLM-generated text being used across a variety of the 100+ attack types we cover, with tens of thousands of sites showing these indicators.

Netcraft’s first-party research into the use of generative artificial intelligence (GenAI) to create text for fraudulent websites in 2024 includes:

  • A 3.95x increase in websites with AI-generated text observed between March and August 2024, with a 5.2x increase over a 30-day period starting July 6, and a 2.75x increase in July alone, a trend we expect to continue over the coming months
  • A correlation between the July spike in activity and one specific threat actor
  • Thousands of malicious websites across the 100+ attack types we support
  • AI being used to generate text in phishing emails as well as copy on fake online shopping websites, unlicensed pharmacies, and investment platforms
  • How AI is improving search engine optimization (SEO) rankings for malicious content
  • July 2024 saw a surge in large language models (LLMs) being used to generate content for phishing websites and fake shops. Netcraft was already routinely identifying thousands of websites each week using AI-generated content, but in that month alone we saw a 2.75x increase (165 domains per day during the week centered on July 1 vs. 450 domains per day during the week centered on July 31), with no changes to detection that could account for it. This spike can be attributed to one specific threat actor setting up fake shops, whose extensive use of LLMs to rewrite product descriptions contributed a 30% uplift to the month’s activity.

    These numbers offer insight into the volume and speed with which fraudulent online content could grow in the coming year: if more threat actors adopt the same GenAI-driven tactics, we can expect to see more of these spikes in activity and a steeper upward trend overall.

    “As an AI language model, I can make scam emails more believable”

    Threat actors in the most traditional forms of cybercrime—like phishing and advance fee fraud emails—are enhancing their craft with GenAI.

    Netcraft observed signs of threat actors’ prompts being leaked in responses, providing insight into how they are now employing LLMs. In our Conversational Scam Intelligence service—which uses proprietary AI personas to interact with criminals in real-time—Netcraft has observed scammers using LLMs to rewrite emails in professional English to make them more convincing.

    Fake investment platforms are particularly well positioned for LLM enhancement, because the templates we’ve typically seen for these scams are often generic and poorly written, lacking credibility. With the help of GenAI, threat actors can now tailor their text more closely to the brand they are imitating and invent compelling claims at scale. By using an LLM to generate text with a polished tone, cadence, and grammar, the website instantly reads more like legitimate marketing content. That is, if they remember to remove any artifacts the LLM leaves behind.

    There’s no honor among thieves, of course. Just as criminals are happy to siphon credentials from other phishing sites, Netcraft has observed that when they see a convincing LLM-generated template, they may replicate the content almost verbatim. To evade detection and correct errors in the original template, some threat actors appear to be using LLMs to rewrite existing LLM-drafted text.

    Threat actors are becoming more effective at using GenAI tools in a highly automated fashion. This enables them to deploy attacks at scale in languages they don’t speak, which is also why they overlook LLM-produced errors in the content. For example, Netcraft came across numerous websites where the page content itself warns against the very fraud it’s enabling.

    It’s no surprise that threat actors are beginning to utilize GenAI to both create efficiencies and improve the effectiveness of their malicious activities. Netcraft has been observing this trend for some time and developing suitable countermeasures in response. Netcraft’s platform flags attacks with indicators of LLM-generated content quickly and accurately, ensuring customers get visibility of the tactics being used against them.
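
Netcraft’s actual detection pipeline is proprietary, but the leaked-prompt artifacts described above suggest one obvious starting heuristic: scanning page text for phrases that models commonly emit and that careless threat actors forget to strip. The sketch below is purely illustrative; the phrase list, function name, and sample text are all hypothetical, and any real pipeline would combine many more signals.

```python
# Illustrative heuristic only: flag page text containing phrases that
# LLMs commonly leak when a threat actor pastes model output verbatim.
# The phrase list is a hypothetical example, not Netcraft's method.
LLM_ARTIFACTS = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but as an ai",
    "certainly! here is",
]

def has_llm_artifact(page_text: str) -> bool:
    """Return True if the text contains a known leaked-LLM phrase."""
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in LLM_ARTIFACTS)

# A fake-shop description where the model's preamble was left in place.
sample = "As an AI language model, I can make this product sound great."
print(has_llm_artifact(sample))  # True
```

A phrase scan like this catches only the sloppiest cases; as the article notes, actors who rewrite LLM output with a second LLM pass leave far subtler indicators, which is why artifact phrases are best treated as one signal among many rather than a standalone classifier.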

    For the complete research report, visit here.
