Analysis of AI Tools Shows 84 Percent Have Been Breached

AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend – employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or any clear security policies. According to the Cybernews Business Digital Index, 84% of the analyzed AI tools have been exposed to data breaches, putting businesses at severe risk.

About 75% of workers use AI in the workplace, with AI chatbots being the most common tools to complete work-related tasks. While this boosts productivity, it could expose companies to credential theft, data leaks, and infrastructure vulnerabilities, especially since only 14% of workplaces have official AI policies, contributing to untracked AI use by employees.

While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial. Estimates show that around one-third of AI users keep their usage hidden from management.

Personal accounts are widely used for work tasks without oversight

According to Google’s 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps.

Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, Chief Technical Officer at nexos.ai, a secure AI orchestration platform for businesses. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data.

Relying only on publicly available information, the Business Digital Index uses custom scans, IoT search engines, and IP and domain name reputation databases to assess companies' online security posture.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100. While 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

“What is most concerning is the false sense of security many users and businesses may have,” says Vincentas Baubonis, Head of Security Research at Cybernews. “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.”

84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. However, even more alarming is that 36% of analyzed tools experienced a breach in just the past 30 days.

Alongside breaches, 93% of platforms showed issues with SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS encryption weakens the protection of data sent between users and platforms, making it easier for attackers to intercept or manipulate sensitive information.
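The report does not detail which SSL/TLS misconfigurations it found, but the class of problem can be illustrated with a minimal, hypothetical client-side policy check: refusing any protocol version older than TLS 1.2, the kind of baseline a misconfigured server would fail. The version floor chosen here is an assumption, not a figure from the report.

```python
import ssl

# Illustrative policy floor (our assumption): anything below TLS 1.2 is legacy.
MINIMUM_ACCEPTABLE = ssl.TLSVersion.TLSv1_2

def version_is_acceptable(negotiated: ssl.TLSVersion) -> bool:
    """Return True if the negotiated protocol meets the policy floor."""
    # ssl.TLSVersion is an IntEnum, so versions compare numerically.
    return negotiated >= MINIMUM_ACCEPTABLE

def make_strict_context() -> ssl.SSLContext:
    """Build a client context that refuses legacy protocols outright."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = MINIMUM_ACCEPTABLE  # reject TLS 1.0/1.1 handshakes
    return ctx

print(version_is_acceptable(ssl.TLSVersion.TLSv1))    # False: legacy protocol
print(version_is_acceptable(ssl.TLSVersion.TLSv1_3))  # True: modern protocol
```

A server that only ever negotiates below this floor is exactly the weak link described above: traffic to it is easier to intercept or downgrade.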

System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected.

In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a forerunner to a data breach, as stolen credentials can be used to access sensitive data.
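The mechanics of a credential-stuffing attack can be sketched in a few lines. All names and passwords below are hypothetical; the point is that only reused credentials let a dump from one breach open accounts on an unrelated system.

```python
# Leaked (email, password) pairs from an unrelated site's breach (mock data).
LEAKED_FROM_OTHER_SITE = [
    ("ana@example.com", "Winter2024!"),
    ("bo@example.com", "correct horse"),
]

# A mock corporate login store. In reality passwords would be salted hashes;
# plaintext is used here only to keep the illustration short.
COMPANY_ACCOUNTS = {
    "ana@example.com": "Winter2024!",        # same password reused at work
    "bo@example.com": "unique-passphrase",   # unique password, not reusable
}

def stuffing_hits(leak, accounts):
    """Replay leaked pairs against the login store; return accounts that fall."""
    return [email for email, pw in leak if accounts.get(email) == pw]

print(stuffing_hits(LEAKED_FROM_OTHER_SITE, COMPANY_ACCOUNTS))
# ['ana@example.com'] -- only the reused credential succeeds
```

This is why password reuse, rather than the original breach itself, is the enabler: the unique passphrase survives the replay untouched.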

“Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,” says Norbutas. “When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.”

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with vulnerabilities across all key technical domains – particularly infrastructure, data handling, and web security.

According to the Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), and 92% of its tools had experienced a data breach. Every tool in the category exhibited both system hosting and SSL/TLS configuration issues.

“This is a classic Achilles’ heel scenario,” says cybersecurity expert Baubonis. “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.”

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. Eight tools could not be scanned due to domain limitations.

The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history.
