Report Highlights How People Trick AI Chatbots Into Exposing Company Secrets

Immersive Labs recently published its “Dark Side of GenAI” report about a Generative Artificial Intelligence (GenAI)-related security risk known as a prompt injection attack, in which individuals input specific instructions to trick chatbots into revealing sensitive information, potentially exposing organizations to data leaks. Based on analysis of Immersive Labs’ prompt injection challenge*, GenAI bots are especially susceptible to manipulation by people of all skill levels, not just cyber experts.

Among the most alarming findings was the discovery that 88% of prompt injection challenge participants successfully tricked the GenAI bot into giving away sensitive information in at least one level of an increasingly difficult challenge. Nearly a fifth of participants (17%) successfully tricked the bot across all levels, underscoring the risk to organizations using GenAI bots.

This report asserts that public and private-sector cooperation and corporate policies are required to mitigate security risks posed by the extensive adoption of GenAI bots. Leaders need to be aware of prompt injection risks and take decisive action, including establishing comprehensive policies for GenAI use within their organizations.

“Based on our analysis of the ways people manipulate GenAI, and the relatively low barrier to entry to exploitation, we believe it’s imperative that organizations implement security controls within Large Language Models and take a ‘defense in depth’ approach to GenAI,” said Kev Breen, Senior Director of Threat Intelligence at Immersive Labs and a co-author of the report. “This includes implementing security measures, such as data loss prevention checks, strict input validation and context-aware filtering to prevent and recognize attempts to manipulate GenAI output.”

Key Findings from Immersive Labs “Dark Side of GenAI” Study

The team observed the following key takeaways based on their data analysis, including:

  • GenAI is no match for human ingenuity (yet): Users successfully leverage creative techniques to deceive GenAI bots, such as tricking them into embedding secrets in poems or stories or altering their initial instructions, to gain unauthorized access to sensitive information.
  • You don’t need to be an expert to exploit GenAI: The report’s findings show that even non-cybersecurity professionals and those unfamiliar with prompt injection attacks can leverage their creativity to trick bots, indicating that the barrier to exploiting GenAI in the wild using prompt injection attacks may be easier than one would hope.
  • As long as bots can be outsmarted by people, organizations are at risk: No protocols exist today to fully prevent prompt injection attacks. Cyber leaders and GenAI developers need to urgently prepare for – and respond to – this emerging threat to mitigate potential harm to people, organizations, and society.

“Our research demonstrates the critical importance of adopting a ‘secure-by-design’ approach throughout the entire GenAI system development life cycle,” added Breen. “The potential reputational harm to organizations is clear, based on examples like the ones in our report. Organizations should consider the trade-off between security and user experience, and the type of conversational model used as part of their risk assessment of using GenAI in their products and services.”

The research team at Immersive Labs consisting of Dr. John Blythe, Director of Cyber Psychology; Kev Breen, Senior Director of Cyber Threat Intelligence; and Joel Iqbal, Data Analyst, analyzed the results of Immersive Labs’ prompt injection GenAI Challenge that ran from June to September 2023. The challenge required individuals to trick a GenAI bot into revealing a secret password with increasing difficulty at each of 10 levels. The initial sample consisted of 316,637 submissions, with 34,555 participants in total completing the entire challenge. The team examined the various prompting techniques employed, user interactions, prompt sentiment, and outcomes to inform its study.

For more about these and other insights, access the report today at: https://www.immersivelabs.com/dark-side-of-genai-report/.

Featured

  • Thinking About GSX Products

    GSX may be in your rearview mirror, but the products, solutions and technology should still be forefront in your mind. It is my pleasure to travel the tradeshow floor for product demonstrations, and a keen understanding of what each new solution brings. Read Now

    • Industry Events
  • Survey Shows Election Anxiety Crosses Party Lines

    New reports of election worker intimidation are raising concerns about election interference. A majority of Americans (71%) are worried about voter intimidation or safety at the polls, and 75% want security cameras at their voting place, according to a new national survey. Read Now

  • 66 Percent of Cybersecurity Pros Say Job Stress is Growing

    Sixty-six percent of cybersecurity professionals say their role is more stressful now than it was five years ago, according to the newly released 2024 State of Cybersecurity survey report from ISACA, a global professional association advancing trust in technology. Read Now

  • Live from GSX 2024: Post-Show Recap

    Another great edition of GSX is in the books! We’d like to thank our great partners for this years event, NAPCO, LVT, Eagle Eye Networks and Hirsch, for working with us and allowing us to highlight some of the great solutions the companies were showcasing during the crowded show. Read Now

    • Industry Events
    • GSX

Featured Cybersecurity

Webinars

New Products

  • Unified VMS

    AxxonSoft introduces version 2.0 of the Axxon One VMS. The new release features integrations with various physical security systems, making Axxon One a unified VMS. Other enhancements include new AI video analytics and intelligent search functions, hardened cybersecurity, usability and performance improvements, and expanded cloud capabilities 3

  • PE80 Series

    PE80 Series by SARGENT / ED4000/PED5000 Series by Corbin Russwin

    ASSA ABLOY, a global leader in access solutions, has announced the launch of two next generation exit devices from long-standing leaders in the premium exit device market: the PE80 Series by SARGENT and the PED4000/PED5000 Series by Corbin Russwin. These new exit devices boast industry-first features that are specifically designed to provide enhanced safety, security and convenience, setting new standards for exit solutions. The SARGENT PE80 and Corbin Russwin PED4000/PED5000 Series exit devices are engineered to meet the ever-evolving needs of modern buildings. Featuring the high strength, security and durability that ASSA ABLOY is known for, the new exit devices deliver several innovative, industry-first features in addition to elegant design finishes for every opening. 3

  • Mobile Safe Shield

    Mobile Safe Shield

    SafeWood Designs, Inc., a manufacturer of patented bullet resistant products, is excited to announce the launch of the Mobile Safe Shield. The Mobile Safe Shield is a moveable bullet resistant shield that provides protection in the event of an assailant and supplies cover in the event of an active shooter. With a heavy-duty steel frame, quality castor wheels, and bullet resistant core, the Mobile Safe Shield is a perfect addition to any guard station, security desks, courthouses, police stations, schools, office spaces and more. The Mobile Safe Shield is incredibly customizable. Bullet resistant materials are available in UL 752 Levels 1 through 8 and include glass, white board, tack board, veneer, and plastic laminate. Flexibility in bullet resistant materials allows for the Mobile Safe Shield to blend more with current interior décor for a seamless design aesthetic. Optional custom paint colors are also available for the steel frame. 3