Analysis of AI Tools Shows 85 Percent Have Been Breached

AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend – employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or any clear security policies. According to Cybernews Business Digital Index, 84% of analyzed AI tools have been exposed to data breaches, putting businesses at severe risk.

About 75% of workers use AI in the workplace, with AI chatbots being the most common tools to complete work-related tasks. While this boosts productivity, it could expose companies to credential theft, data leaks, and infrastructure vulnerabilities, especially since only 14% of workplaces have official AI policies, contributing to untracked AI use by employees.

While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial. Estimates show that around one-third of AI users keep their usage hidden from management.

Personal accounts are used for work tasks without oversight

According to Google’s 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps.

Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, Chief Technical Officer at nexos.ai, a secure AI orchestration platform for businesses. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data.

Using only publicly available information, Business Digital Index relies on custom scans, IoT search engines, and IP and domain name reputation databases to assess companies' online security posture.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100. While 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

“What is most concerning is the false sense of security many users and businesses may have,” says Vincentas Baubonis, Head of Security Research at Cybernews. “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.”

84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. More alarming still, 36% of the analyzed tools had experienced a breach within the 30 days preceding the analysis.

Alongside breaches, 93% of platforms showed issues with their SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS weakens the protection of data in transit, making it easier for attackers to intercept or manipulate sensitive information.
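The kinds of SSL/TLS weaknesses the researchers describe are typically things like deprecated protocol versions or expiring certificates. A minimal sketch of such a check, assuming a hypothetical audit function and inputs (the article does not describe Cybernews' actual scanning logic), might look like this:

```python
# Hypothetical TLS configuration audit, purely illustrative of the
# misconfiguration classes the article mentions. Protocol names follow
# the convention used by OpenSSL/Python; TLS 1.1 and below are
# deprecated by RFC 8996.
WEAK_PROTOCOLS = {"SSLv3", "TLSv1", "TLSv1.1"}

def audit_tls_config(supported_protocols, days_until_cert_expiry):
    """Return a list of human-readable findings (empty list = no issues).

    supported_protocols: set of protocol names the server offers.
    days_until_cert_expiry: signed integer (negative means expired).
    """
    findings = []
    weak = WEAK_PROTOCOLS & set(supported_protocols)
    if weak:
        findings.append(f"deprecated protocols enabled: {sorted(weak)}")
    if not ({"TLSv1.2", "TLSv1.3"} & set(supported_protocols)):
        findings.append("no modern TLS version (1.2/1.3) offered")
    if days_until_cert_expiry < 0:
        findings.append("certificate expired")
    elif days_until_cert_expiry < 14:
        findings.append("certificate expires in under two weeks")
    return findings
```

A server offering only TLS 1.2/1.3 with a long-lived certificate would return no findings; one still accepting TLS 1.0 would be flagged as using a deprecated protocol.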

System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected.
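Password reuse of this kind is detectable without ever exposing plaintext passwords in a report. A minimal sketch, assuming a hypothetical list of (username, password) pairs as input, shows one way an internal audit could flag accounts sharing a password:

```python
import hashlib
from collections import Counter

def reuse_report(credentials):
    """Return a sorted list of usernames whose password is shared with
    at least one other account.

    credentials: iterable of (username, password) pairs. Passwords are
    hashed immediately so plaintext never appears in the output.
    """
    digest_counts = Counter()
    digest_by_user = {}
    for user, password in credentials:
        digest = hashlib.sha256(password.encode()).hexdigest()
        digest_counts[digest] += 1
        digest_by_user[user] = digest
    return sorted(u for u, d in digest_by_user.items() if digest_counts[d] > 1)
```

For example, if two employees use the same password, both usernames are flagged while unique passwords pass silently. (A production audit would use salted, slow hashes and a breached-password corpus; unsalted SHA-256 is used here only to keep the sketch short.)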

In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a forerunner to a data breach, as stolen credentials can be used to access sensitive data.

“Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,” says Norbutas. “When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.”

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with vulnerabilities across all key technical domains – particularly infrastructure, data handling, and web security.

According to Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), and 92% of its tools had experienced a data breach. Every tool in the category exhibited both system hosting and SSL/TLS configuration issues.

“This is a classic Achilles’ heel scenario,” says cybersecurity expert Baubonis. “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.”

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. Seven tools could not be scanned due to domain limitations.

The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history.
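The seven-dimension evaluation above can be sketched as a simple score aggregation. The Index's actual weighting and grade cut-offs are not published in this article, so equal weights and conventional letter-grade bands are assumed here purely for illustration:

```python
# The seven dimensions named in the report's methodology.
DIMENSIONS = [
    "software_patching",
    "web_application_security",
    "email_protection",
    "system_reputation",
    "hosting_infrastructure",
    "ssl_tls_configuration",
    "data_breach_history",
]

def overall_score(scores):
    """Combine per-dimension scores (0-100) into one score.

    Equal weighting is an assumption, not the Index's actual method.
    """
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def letter_grade(score):
    """Map a 0-100 score to a letter grade (assumed cut-offs)."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

Under these assumed bands, the 85/100 average reported above would translate to a "B" – which illustrates Baubonis' point that a decent average can coexist with failing grades in individual dimensions.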
