Scam Sites at Scale: LLMs Fueling a GenAI Criminal Revolution

Cybercrime groups, like other businesses, can create more content in less time using GenAI tools. Over the last six months, Netcraft has identified threat actors using these technologies across a range of attacks, from advance-fee fraud to crypto scams. In total, our observations show LLM-generated text being used across many of the 100+ attack types we cover, with tens of thousands of sites showing these indicators.

Netcraft’s first-party research into the use of generative artificial intelligence (GenAI) to create text for fraudulent websites in 2024 includes:

  • A 3.95x increase in websites with AI-generated text observed between March and August 2024, including a 5.2x increase over a 30-day period starting July 6 and a 2.75x increase in July alone, a trend we expect to continue over the coming months
  • A correlation between the July spike in activity and one specific threat actor
  • Thousands of malicious websites across the 100+ attack types we support
  • LLM-generated text appearing in phishing emails as well as in copy on fake online shopping websites, unlicensed pharmacies, and investment platforms
  • How AI is improving search engine optimization (SEO) rankings for malicious content
July 2024 saw a surge in large language models (LLMs) being used to generate content for phishing websites and fake shops. Netcraft was already routinely identifying thousands of websites each week using AI-generated content; in that month alone, however, we observed a 2.75x increase (165 domains per day in the week centered on July 1 vs. 450 domains per day in the week centered on July 31), with no changes to our detection that could account for the jump. This spike can be attributed to one specific threat actor setting up fake shops, whose extensive use of LLMs to rewrite product descriptions contributed a 30% uplift to the month’s activity.

These numbers offer insight into the exponential volume and speed with which fraudulent online content could grow in the coming year; if more threat actors adopt the same GenAI-driven tactics, we can expect to see more of these spikes in activity and a greater upward trend overall.

“As an AI language model, I can make scam emails more believable”

Threat actors in the most traditional forms of cybercrime—like phishing and advance fee fraud emails—are enhancing their craft with GenAI.

Netcraft observed signs of threat actors’ prompts being leaked in responses, providing insight into how they are now employing LLMs. In our Conversational Scam Intelligence service—which uses proprietary AI personas to interact with criminals in real-time—Netcraft has observed scammers using LLMs to rewrite emails in professional English to make them more convincing.

Fake investment platforms are particularly well positioned for LLM enhancement, because the templates we’ve typically seen for these scams are often generic and poorly written, lacking credibility. With the help of GenAI, threat actors can now tailor their text more closely to the brand they are imitating and invent compelling claims at scale. By using an LLM to generate text that has a professional tone, cadence, and grammar, the website instantly becomes more professional, mimicking legitimate marketing content. That is, if they remember to remove any artifacts the LLM leaves behind.

There’s no honor among thieves of course. Just as criminals are happy to siphon credentials from other phishing sites, Netcraft observed that when they see a convincing LLM-generated template, they may replicate the content almost verbatim. To evade detection and correct errors in the original template, some threat actors appear to be using LLMs to rewrite existing LLM-drafted text.
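As a minimal illustration of the kind of artifact detection this implies, the sketch below scans page text for a few boilerplate phrases that chat-style LLMs commonly leave behind when a prompt or refusal leaks into the output. The phrase list is an illustrative assumption, not Netcraft's actual detection logic.

```python
import re

# Illustrative boilerplate phrases that LLMs often emit when a prompt leaks
# into the generated copy. This is an example list, not a production rule set.
LLM_ARTIFACT_PHRASES = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"certainly! here is",
    r"i'm sorry, but as an ai",
]

ARTIFACT_RE = re.compile("|".join(LLM_ARTIFACT_PHRASES), re.IGNORECASE)

def find_llm_artifacts(page_text: str) -> list[str]:
    """Return every artifact phrase found in the supplied page text."""
    return [m.group(0) for m in ARTIFACT_RE.finditer(page_text)]
```

In practice a detector like this would be only one weak signal among many, since phrasing varies by model and threat actors can strip obvious tells.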
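One common way to spot the near-verbatim template reuse described above is shingle-based similarity. The sketch below compares two pages by Jaccard similarity over lowercase word 3-grams; the shingle size and the 0.8 threshold are illustrative assumptions, not tuned production values.

```python
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Split text into lowercase word n-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def looks_replicated(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag page pairs whose text overlaps almost verbatim.

    The 0.8 threshold is an illustrative choice for this sketch.
    """
    return jaccard_similarity(a, b) >= threshold
```

Because an LLM rewrite changes most shingles while preserving meaning, a defender would typically pair a lexical check like this with semantic similarity measures to catch rewritten copies as well.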

Threat actors are becoming more effective at using GenAI tools in a highly automated fashion. This lets them deploy attacks at scale in regions where they don’t speak the target language, which also means they overlook LLM-produced errors in the content. For example, Netcraft came across numerous websites where the page content itself warns against the very fraud it is enabling.

It’s no surprise that threat actors are beginning to utilize GenAI to both create efficiencies and improve the effectiveness of their malicious activities. Netcraft has been observing this trend for some time and developing suitable countermeasures in response. Netcraft’s platform flags attacks with indicators of LLM-generated content quickly and accurately, ensuring customers get visibility of the tactics being used against them.

The complete research report is available on Netcraft’s website.
