Scam Sites at Scale: LLMs Fueling a GenAI Criminal Revolution
Cybercrime groups, like other businesses, can create more content in less time using GenAI tools. Over the last six months, Netcraft identified threat actors using these technologies across a range of attacks, from innovating on advance-fee fraud to spamming the crypto space. In total, our observations show text generated by large language models (LLMs) in use across a variety of the 100+ attack types we cover, with tens of thousands of sites showing these indicators.
Netcraft’s first-party research into the use of generative artificial intelligence (GenAI) to create text for fraudulent websites in 2024 includes:
- A 3.95x increase in websites with AI-generated text observed between March and August 2024, with a 5.2x increase over a 30-day period starting July 6 and a 2.75x increase in July alone, a trend we expect to continue over the coming months
- A correlation between the July spike in activity and one specific threat actor
- Thousands of malicious websites across the 100+ attack types we support
- LLM-generated text appearing in phishing emails as well as in copy on fake online shopping websites, unlicensed pharmacies, and investment platforms
How AI is improving search engine optimization (SEO) rankings for malicious content
July 2024 saw a surge in LLMs being used to generate content for phishing websites and fake shops. Netcraft was already routinely identifying thousands of websites each week using AI-generated content. In that month alone, however, we saw a 2.75x increase (165 domains per day during the week centered on July 1 vs. 450 domains per day during the week centered on July 31), with no changes to our detection that would account for it. This spike can be attributed to one specific threat actor setting up fake shops, whose extensive use of LLMs to rewrite product descriptions contributed to a 30% uplift in the month’s activity.
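As a quick sanity check, the quoted multiplier follows directly from those daily averages. The snippet below, with variable names of our own choosing, simply reproduces the arithmetic.

```python
# Reproduce the July growth figure from the daily averages quoted above.
early_july_daily = 165   # domains/day, week centered on July 1
late_july_daily = 450    # domains/day, week centered on July 31

july_multiplier = late_july_daily / early_july_daily
print(f"July growth: {july_multiplier:.2f}x")  # ~2.73x, reported above as 2.75x
```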
These numbers offer insight into how quickly, and at what scale, fraudulent online content could grow in the coming year; if more threat actors adopt the same GenAI-driven tactics, we can expect more of these spikes in activity and a steeper upward trend overall.
“As an AI language model, I can make scam emails more believable”
Threat actors behind the most traditional forms of cybercrime, like phishing and advance-fee fraud emails, are enhancing their craft with GenAI.
Netcraft observed signs of threat actors’ prompts being leaked in responses, providing insight into how they are now employing LLMs. In our Conversational Scam Intelligence service—which uses proprietary AI personas to interact with criminals in real-time—Netcraft has observed scammers using LLMs to rewrite emails in professional English to make them more convincing.
Fake investment platforms are particularly well positioned for LLM enhancement, because the templates we’ve typically seen for these scams are often generic and poorly written, lacking credibility. With the help of GenAI, threat actors can now tailor their text more closely to the brand they are imitating and invent compelling claims at scale. By using an LLM to generate text with a professional tone, cadence, and grammar, the website instantly becomes more polished, mimicking legitimate marketing content. That is, if they remember to remove any artifacts the LLM leaves behind.
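Those leftover artifacts are themselves a useful signal. As a rough illustration only (the phrase list, function name, and sample text below are invented for the sketch, not a description of Netcraft's detection logic), a scan of a page's visible text for boilerplate that chatbots commonly emit will catch copy pasted through verbatim.

```python
import re

# Illustrative phrase list only; real-world detection relies on far richer indicators.
ARTIFACT_PATTERNS = [
    r"\bas an ai language model\b",
    r"\bi cannot (?:assist|help) with\b",
    r"\bcertainly! here(?:'s| is)\b",
    r"\bi'm sorry, but as an ai\b",
    r"\bregenerate response\b",
]

def llm_artifact_hits(page_text: str) -> list[str]:
    """Return any boilerplate artifact phrases found in a page's visible text."""
    text = page_text.lower()
    hits = []
    for pattern in ARTIFACT_PATTERNS:
        match = re.search(pattern, text)
        if match:
            hits.append(match.group(0))
    return hits

# A hypothetical fake-shop product description pasted straight from a chatbot reply.
sample = "Certainly! Here is an engaging product description for your online store..."
print(llm_artifact_hits(sample))  # ['certainly! here is']
```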
There’s no honor among thieves, of course. Just as criminals are happy to siphon credentials from other phishing sites, Netcraft has observed that when they see a convincing LLM-generated template, they may replicate its content almost verbatim. To evade detection and correct errors in the original template, some threat actors appear to be using LLMs to rewrite existing LLM-drafted text.
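That near-verbatim reuse is also straightforward to measure with plain text similarity. The sketch below uses Python's standard difflib with an arbitrary threshold and invented sample strings; it illustrates the idea rather than Netcraft's tooling.

```python
from difflib import SequenceMatcher

def is_near_verbatim(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Flag two blocks of page copy as near-verbatim copies of one another.

    The 0.9 threshold is an arbitrary illustrative choice, not a tuned value.
    """
    ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    return ratio >= threshold

# Invented examples: one scam template and a lightly edited clone of it.
original = "Invest with confidence. Our regulated platform delivers steady returns."
clone = "Invest with confidence! Our regulated platform delivers steady daily returns."
print(is_near_verbatim(original, clone))  # True: only small edits separate them
```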
Threat actors are becoming more effective at using GenAI tools in a highly automated fashion. This enables them to deploy attacks at scale against targets whose language they don’t speak, which also means they overlook LLM-produced errors in the content. For example, Netcraft came across numerous websites where the page content itself warns against the very fraud it’s enabling.
It’s no surprise that threat actors are beginning to utilize GenAI to both create efficiencies and improve the effectiveness of their malicious activities. Netcraft has been observing this trend for some time and developing suitable countermeasures in response. Netcraft’s platform flags attacks with indicators of LLM-generated content quickly and accurately, ensuring customers get visibility of the tactics being used against them.
For the complete research report, visit here.