AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks

Veracode, a provider of application risk management solutions, recently unveiled its 2025 GenAI Code Security Report, revealing critical security flaws in AI-generated code. The study ran 80 curated coding tasks against more than 100 large language models (LLMs) and found that while AI produces functional code, it introduces security vulnerabilities in 45 percent of cases.

The research demonstrates a troubling pattern: when given a choice between a secure and an insecure way to write code, GenAI models chose the insecure option 45 percent of the time. Perhaps more concerning, despite advances in LLMs’ ability to generate syntactically correct code, security performance has not kept pace and has remained flat over time.

“The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” said Jens Wessling, Chief Technology Officer at Veracode. “The main concern with this trend is that developers do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs. Our research reveals GenAI models make the wrong choices nearly half the time, and it’s not improving.”

AI is also enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. AI-powered tools can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. This lowers the barrier to entry for less-skilled attackers and increases the speed and sophistication of attacks, posing a significant threat to traditional security defenses. Not only are vulnerabilities increasing, but exploiting them is also becoming easier.

LLMs Introduce Dangerous Levels of Common Security Vulnerabilities

To evaluate the security properties of LLM-generated code, Veracode designed a set of 80 code completion tasks with known potential for security vulnerabilities according to the MITRE Common Weakness Enumeration (CWE) system, a standard classification of software weaknesses that can turn into vulnerabilities. Each task prompted more than 100 LLMs to auto-complete a block of code that could be finished in either a secure or an insecure manner, and the research team then analyzed the output using Veracode Static Analysis. In 45 percent of all test cases, LLMs introduced vulnerabilities classified within the OWASP (Open Web Application Security Project) Top 10, the list of the most critical web application security risks.
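
Veracode has not published its exact prompts, but a secure-versus-insecure completion pair for one OWASP Top 10 category, SQL injection (CWE-89), might look like the following Java sketch. The class and method names (UserLookup, findUserInsecure, findUserSecure) are illustrative, not taken from the report; both completions are functionally equivalent, which is why a model can satisfy the prompt either way.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Insecure completion (CWE-89): concatenating user input into the query
    // lets an attacker inject SQL, e.g. name = "x' OR '1'='1".
    static ResultSet findUserInsecure(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, email FROM users WHERE name = '" + name + "'");
    }

    // Secure completion: a parameterized query keeps user-supplied data out
    // of the SQL text entirely.
    static ResultSet findUserSecure(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, email FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```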

Veracode found Java to be the riskiest language for AI code generation, with a security failure rate of over 70 percent. Other major languages, such as Python, C#, and JavaScript, still presented significant risk, with failure rates between 38 percent and 45 percent. The research also revealed that LLMs failed to secure code against cross-site scripting (CWE-80) and log injection (CWE-117) in 86 percent and 88 percent of cases, respectively.
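
Log injection, the weakness the models handled worst, is easy to illustrate. The Java sketch below assumes nothing from the report beyond the CWE-117 category itself: it contrasts a logging call that accepts attacker-controlled input verbatim with a hardened version that strips the newline characters used to forge log entries.

```java
import java.util.logging.Logger;

public class LoginAudit {
    private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

    // Vulnerable (CWE-117): the username is logged verbatim, so input like
    // "bob\nWARNING: Login failed for user: admin" forges a second log line.
    static void logFailureUnsafe(String username) {
        LOG.warning("Login failed for user: " + username);
    }

    // Hardened: replace carriage returns and line feeds before logging so
    // one request can only ever produce one log entry.
    static void logFailureSafe(String username) {
        String sanitized = username.replaceAll("[\\r\\n]", "_");
        LOG.warning("Login failed for user: " + sanitized);
    }

    public static void main(String[] args) {
        logFailureSafe("bob\nWARNING: Login failed for user: admin");
    }
}
```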

“Despite the advances in AI-assisted development, it is clear security hasn’t kept pace,” Wessling said. “Our research shows models are getting better at coding accurately but are not improving at security. We also found larger models do not perform significantly better than smaller models, suggesting this is a systemic issue rather than an LLM scaling problem.”
