Analysis of AI Tools Shows 84 Percent Have Been Breached

AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend: employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or clear security policies. According to the Cybernews Business Digital Index, 84% of the AI tools analyzed have been exposed to data breaches, putting businesses at serious risk.

About 75% of workers use AI in the workplace, with AI chatbots the most common tools used to complete work-related tasks. While this boosts productivity, it can expose companies to credential theft, data leaks, and infrastructure vulnerabilities, especially since only 14% of workplaces have official AI policies, contributing to untracked AI use by employees.

While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial. Estimates show that around one-third of AI users keep their usage hidden from management.

Personal accounts are used for work tasks without oversight

According to Google’s 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps.

Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, Chief Technical Officer at nexos.ai, a secure AI orchestration platform for businesses. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data.

Relying only on publicly available information, the Business Digital Index assesses companies' online security posture using custom scans, IoT search engines, and IP and domain name reputation databases.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100. While 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

“What is most concerning is the false sense of security many users and businesses may have,” says Vincentas Baubonis, Head of Security Research at Cybernews. “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.”

84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. However, even more alarming is that 36% of analyzed tools experienced a breach in just the past 30 days.

Alongside breaches, 93% of platforms showed issues with SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS encryption weakens the protection of data sent between users and platforms, making it easier for attackers to intercept or manipulate sensitive information.
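The report does not publish the scanner's internals, but the kind of SSL/TLS configuration issue it describes can be sketched as a simple classifier that flags deprecated protocol versions and weak cipher components. The protocol and cipher lists below are illustrative assumptions, not the Cybernews criteria.

```python
# Hypothetical sketch of a TLS configuration check: flag deprecated
# protocols and weak cipher-suite components. The specific lists here
# are illustrative, not the criteria used by the Business Digital Index.

WEAK_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}
WEAK_CIPHER_MARKERS = ("RC4", "3DES", "NULL", "EXPORT", "MD5")

def assess_tls_config(protocol: str, cipher: str) -> list[str]:
    """Return a list of findings for a negotiated protocol/cipher pair."""
    findings = []
    if protocol in WEAK_PROTOCOLS:
        findings.append(f"deprecated protocol: {protocol}")
    for marker in WEAK_CIPHER_MARKERS:
        if marker in cipher.upper():
            findings.append(f"weak cipher component: {marker}")
    return findings

# A server still accepting TLS 1.0 with an RC4 suite is flagged twice;
# a modern TLS 1.3 suite passes clean.
print(assess_tls_config("TLSv1", "RC4-SHA"))
print(assess_tls_config("TLSv1.3", "TLS_AES_128_GCM_SHA256"))
```

In practice a scanner would obtain the negotiated protocol and cipher by probing the server; the classification step itself stays this simple.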

System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected.

In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a forerunner to a data breach, as stolen credentials can be used to access sensitive data.

“Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,” says Norbutas. “When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.”

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with weaknesses across all key technical domains, particularly infrastructure, data handling, and web security.

According to the Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), and 92% of its tools had experienced a data breach. Every tool in the category exhibited system hosting and SSL/TLS configuration issues.

“This is a classic Achilles’ heel scenario,” says cybersecurity expert Baubonis. “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.”

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. Eight tools could not be scanned due to domain limitations.

The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history.
