Analysis of AI Tools Shows 84 Percent Have Been Breached

AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend – employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or clear security policies. According to the Cybernews Business Digital Index, 84% of the AI tools analyzed have experienced at least one data breach, putting businesses at severe risk.

About 75% of workers use AI in the workplace, with AI chatbots being the most common tools to complete work-related tasks. While this boosts productivity, it could expose companies to credential theft, data leaks, and infrastructure vulnerabilities, especially since only 14% of workplaces have official AI policies, contributing to untracked AI use by employees.

While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial. Estimates show that around one-third of AI users keep their usage hidden from management.

Personal accounts are used for work tasks without oversight

According to Google’s 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps.

Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, Chief Technical Officer at nexos.ai, a secure AI orchestration platform for businesses. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”
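Norbutas' point about blind spots is, in practice, a monitoring problem: prompts sent through personal accounts never pass through corporate controls. As a purely illustrative sketch (the patterns, example prompt, and function names below are assumptions for demonstration, not anything prescribed by Cybernews or nexos.ai), a minimal pre-submission check on prompts bound for external AI tools might look like this:

```python
import re

# Illustrative patterns only -- a real data-loss-prevention policy would cover far more.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key or token": re.compile(r"\b(?:sk-|ghp_|AKIA)[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Draft a renewal email to jane.doe@example.com, card on file 4111 1111 1111 1111."
    findings = scan_prompt(prompt)
    if findings:
        print("Blocked before submission:", ", ".join(findings))
    else:
        print("Prompt cleared for submission.")
```

A real deployment would sit in a gateway or browser extension and cover many more data types, but even this level of screening makes otherwise invisible AI usage auditable.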

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data.

Using only publicly available information, the Business Digital Index combines custom scans, IoT search engines, and IP and domain name reputation databases to assess each company's online security posture.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100. While 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

“What is most concerning is the false sense of security many users and businesses may have,” says Vincentas Baubonis, Head of Security Research at Cybernews. “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.”

84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. However, even more alarming is that 36% of analyzed tools experienced a breach in just the past 30 days.

Alongside breaches, 93% of platforms showed issues with SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS encryption weakens the protection of data sent between users and platforms, making it easier for attackers to intercept or manipulate sensitive information.
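To make the finding concrete: a misconfigured deployment might still negotiate deprecated protocol versions or serve an expired certificate, both of which show up in a basic handshake check. The sketch below uses Python's standard ssl module to report what a given host actually negotiates; the hostname is a placeholder, and this is a simplified illustration rather than the scanning method used in the Cybernews research.

```python
import socket
import ssl
from datetime import datetime, timezone

def check_tls(host: str, port: int = 443) -> None:
    """Print the negotiated TLS version and certificate expiry for a host."""
    context = ssl.create_default_context()  # enforces certificate validation
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            expires = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            print(f"{host}: negotiated {tls.version()}, "
                  f"certificate valid until {expires:%Y-%m-%d}")

# Placeholder hostname -- substitute the AI tool's actual domain.
check_tls("example.com")
```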

System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected.

In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a forerunner to a data breach, as stolen credentials can be used to access sensitive data.
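One concrete control against reused or already-leaked passwords is screening them against known breach corpora. The sketch below uses the public Pwned Passwords range API, which receives only the first five characters of the password's SHA-1 hash, so the password itself never leaves the machine; it is an illustration of the technique, not a substitute for the broader password policies and IT oversight the researchers call for.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    using the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("Password123")  # example value only
    if hits:
        print(f"Reject: this password appears {hits:,} times in known breaches.")
    else:
        print("Password not found in known breach corpora.")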

“Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,” says Norbutas. “When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.”

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with weaknesses across all key technical domains, particularly infrastructure, data handling, and web security.

According to the Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), and 92% of its tools had experienced a data breach. Every tool in the category exhibited system hosting and SSL/TLS configuration issues.

“This is a classic Achilles’ heel scenario,” says cybersecurity expert Baubonis. “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.”

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. The remaining tools could not be scanned due to domain limitations.

The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history.
