Report Highlights How People Trick AI Chatbots Into Exposing Company Secrets

Immersive Labs recently published its “Dark Side of GenAI” report about a Generative Artificial Intelligence (GenAI)-related security risk known as a prompt injection attack, in which individuals input specific instructions to trick chatbots into revealing sensitive information, potentially exposing organizations to data leaks. Based on analysis of Immersive Labs’ prompt injection challenge, the report finds that GenAI bots are susceptible to manipulation by people of all skill levels, not just cyber experts.

Among the most alarming findings was the discovery that 88% of prompt injection challenge participants successfully tricked the GenAI bot into giving away sensitive information in at least one level of an increasingly difficult challenge. Nearly a fifth of participants (17%) successfully tricked the bot across all levels, underscoring the risk to organizations using GenAI bots.

This report asserts that public and private-sector cooperation and corporate policies are required to mitigate security risks posed by the extensive adoption of GenAI bots. Leaders need to be aware of prompt injection risks and take decisive action, including establishing comprehensive policies for GenAI use within their organizations.

“Based on our analysis of the ways people manipulate GenAI, and the relatively low barrier to entry to exploitation, we believe it’s imperative that organizations implement security controls within Large Language Models and take a ‘defense in depth’ approach to GenAI,” said Kev Breen, Senior Director of Threat Intelligence at Immersive Labs and a co-author of the report. “This includes implementing security measures, such as data loss prevention checks, strict input validation and context-aware filtering to prevent and recognize attempts to manipulate GenAI output.”
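As an illustration of the kinds of layered controls Breen describes, the sketch below combines a pattern-based input check with a simple output-side data loss prevention (DLP) redaction. This is a minimal sketch, not Immersive Labs' implementation; the patterns, function names, and secrets are assumptions for illustration.

```python
import re

# Hypothetical illustration of two controls named in the report:
# strict input validation and a DLP check on model output.
# The pattern list is an invented, deliberately incomplete example.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|your) (previous|prior) instructions",
    r"reveal .*(password|secret|system prompt)",
    r"repeat your (instructions|system prompt)",
]

def validate_prompt(prompt: str) -> bool:
    """Return False if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def dlp_filter(response: str, secrets: list[str]) -> str:
    """Redact any configured secret that leaks into the model's output."""
    for secret in secrets:
        response = response.replace(secret, "[REDACTED]")
    return response
```

Neither layer is sufficient on its own, which is exactly why the report recommends a defense-in-depth approach: an input filter catches known phrasings, while the output check catches leaks the filter missed.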

Key Findings from Immersive Labs “Dark Side of GenAI” Study

Based on its analysis of the data, the team identified the following key takeaways:

  • GenAI is no match for human ingenuity (yet): Users successfully leverage creative techniques to deceive GenAI bots, such as tricking them into embedding secrets in poems or stories or altering their initial instructions, to gain unauthorized access to sensitive information.
  • You don’t need to be an expert to exploit GenAI: The report’s findings show that even non-cybersecurity professionals and those unfamiliar with prompt injection attacks can leverage their creativity to trick bots, indicating that the barrier to exploiting GenAI in the wild via prompt injection may be lower than one would hope.
  • As long as bots can be outsmarted by people, organizations are at risk: No protocols exist today to fully prevent prompt injection attacks. Cyber leaders and GenAI developers need to urgently prepare for – and respond to – this emerging threat to mitigate potential harm to people, organizations, and society.
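The “secrets in poems” technique from the findings above also shows why simple output filters fall short. The sketch below is an invented example (the secret value and poem are not from the report): a naive check for the literal secret misses the same secret smuggled out as an acrostic.

```python
# Invented example: a naive DLP check looks only for the verbatim secret,
# so a secret hidden in the first letters of a poem's lines slips through.
SECRET = "OTTER"

def naive_leak_check(text: str) -> bool:
    """Flag output only if the secret appears verbatim."""
    return SECRET in text

# A "poem" whose line initials spell out the secret, mimicking how
# challenge participants asked bots to hide secrets in verse.
poem = "\n".join([
    "Over the hills the morning breaks,",
    "Through quiet woods and silver lakes,",
    "Time drifts along the river bend,",
    "Echoes of a song descend,",
    "Rest now where the waters end.",
])

acrostic = "".join(line[0] for line in poem.splitlines())
```

Here `naive_leak_check(poem)` returns False even though `acrostic` equals the secret, illustrating the report's point that no protocol today fully prevents such exfiltration.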

“Our research demonstrates the critical importance of adopting a ‘secure-by-design’ approach throughout the entire GenAI system development life cycle,” added Breen. “The potential reputational harm to organizations is clear, based on examples like the ones in our report. Organizations should consider the trade-off between security and user experience, and the type of conversational model used as part of their risk assessment of using GenAI in their products and services.”

The research team at Immersive Labs, consisting of Dr. John Blythe, Director of Cyber Psychology; Kev Breen, Senior Director of Cyber Threat Intelligence; and Joel Iqbal, Data Analyst, analyzed the results of Immersive Labs’ prompt injection GenAI Challenge, which ran from June to September 2023. The challenge required individuals to trick a GenAI bot into revealing a secret password, with difficulty increasing at each of 10 levels. The initial sample consisted of 316,637 submissions, with 34,555 participants completing the entire challenge. The team examined the prompting techniques employed, user interactions, prompt sentiment, and outcomes to inform its study.

For more about these and other insights, access the report today at: https://www.immersivelabs.com/dark-side-of-genai-report/.

