First LLM Benchmark Gives Vendors and SOC Teams the Guidance They Need to Select the Best LLM

Simbian recently introduced the first benchmark to comprehensively measure LLM performance in SOCs, testing models against a diverse range of real alerts and fundamental SOC tools across all phases of alert investigation, from alert ingestion to disposition and reporting.

Existing benchmarks compare LLMs over broad criteria such as language understanding, math, and reasoning. Some benchmarks exist for broad security tasks or very basic SOC tasks like alert summarization.

For the first time, the industry has a comprehensive benchmarking framework specifically designed for the AI SOC. What makes this different from existing cybersecurity benchmarks like CyberSecEval or CTIBench? The key lies in its realism and depth.

Forget generic scenarios. Simbian’s benchmark is built on the autonomous investigation of 100 full kill-chain scenarios that realistically mirror what human SOC analysts face every day. To achieve this, the company created diverse, real-world-based attack scenarios with known ground truth of malicious activity, allowing AI agents to investigate and be assessed against a clear baseline.

These scenarios are even based on historical behavior of well-known APT groups (like APT32, APT38, APT43) and cybercriminal organizations (Cobalt Group, Lapsus$), covering a wide range of MITRE ATT&CK™ Tactics and Techniques, with a focus on prevalent threats like ransomware and phishing.

To facilitate this rigorous LLM testing, Simbian leveraged its AI SOC agent. The evaluation process is evidence-grounded and data-driven, always kicking off with a triggered alert, realistically representing the way SOC analysts operate. The AI agent then needed to determine whether the alert was a true or false positive, find evidence of malicious activity (think “CTF flags” in a red teaming exercise), correctly interpret that evidence by answering security-contextual questions, provide a high-level overview of the attacker’s activity, and respond to the threat in an appropriate manner – all autonomously. This evidence-based approach is critical for managing hallucinations, ensuring the LLM isn’t just guessing but validating its reasoning and actions.
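To make the evidence-grounded idea concrete, here is a minimal sketch of how such scoring could work. All names (`Scenario`, `score_investigation`, the 50/50 weighting) are illustrative assumptions, not Simbian’s actual methodology: the agent’s true/false-positive verdict is checked against ground truth, and credit for evidence is given only for the planted “flags” the agent actually recovered.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """Ground truth for one attack scenario (names are illustrative)."""
    alert_id: str
    is_true_positive: bool
    evidence_flags: set = field(default_factory=set)  # "CTF flags" planted in the data

def score_investigation(scenario: Scenario,
                        verdict: bool,
                        flags_found: set) -> float:
    """Score one investigation: half the credit for the TP/FP verdict,
    half for the fraction of planted evidence flags recovered."""
    verdict_score = 1.0 if verdict == scenario.is_true_positive else 0.0
    if scenario.evidence_flags:
        flag_score = len(flags_found & scenario.evidence_flags) / len(scenario.evidence_flags)
    else:
        flag_score = 1.0
    return 0.5 * verdict_score + 0.5 * flag_score

# Example: agent correctly flags the alert but recovers only 2 of 4 flags
s = Scenario("alert-001", True, {"f1", "f2", "f3", "f4"})
print(score_investigation(s, True, {"f1", "f2"}))  # 0.75
```

Because credit depends on recovering specific planted evidence rather than producing plausible prose, a hallucinating model scores poorly even when its narrative sounds convincing.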

Surprising Findings and Key Takeaways

The study benchmarked some of the most well-known and high-performing models available as of May 2025, including models from Anthropic, OpenAI, Google, and DeepSeek.

The Results:

All high-end models completed over half of the investigation tasks, with performance ranging from 61% to 67%. For reference, during the first AI SOC Championship the best human analysts powered by an AI SOC scored in the range of 73% to 85%, while Simbian’s AI agent at extra-effort settings scored 72%. This suggests that LLMs are capable of much more than summarizing and retrieving data; their capabilities extend to robust alert triage and tool use via API interactions.

Another key finding: AI SOC applications heavily lean on the software engineering capabilities of LLMs. This highlights the importance of thorough prompt engineering and agentic flow engineering, which involves feedback loops and continuous monitoring. In initial runs, some models struggled, requiring improved prompts and fallback mechanisms for coding agents involved in analyzing retrieved data.
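The “fallback mechanisms for coding agents” mentioned above can be sketched as a simple retry ladder. This is a hypothetical illustration, not Simbian’s implementation: the harness tries prompt variants in order, rejects outputs that fail to parse or validate, and only escalates to the next (typically stricter or more expensive) prompt when needed.

```python
import json

def run_with_fallback(generate, prompts, validate):
    """Try each prompt variant in order; return the first LLM output that
    parses as JSON and passes validation, else None. `generate` is a
    stand-in for a real LLM call (hypothetical)."""
    for prompt in prompts:
        raw = generate(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output -> fall back to the next prompt
        if validate(result):
            return result
    return None

# Toy stand-in for a model: the terse prompt yields malformed output,
# the stricter prompt yields well-formed JSON.
outputs = {"terse": "not json", "strict": '{"verdict": "true_positive"}'}
result = run_with_fallback(lambda p: outputs[p],
                           ["terse", "strict"],
                           lambda r: "verdict" in r)
print(result)  # {'verdict': 'true_positive'}
```

The same loop doubles as a monitoring hook: logging which prompt tier finally succeeded gives the feedback signal that agentic-flow engineering relies on.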

It should be noted that Sonnet 3.5 sometimes outperformed newer versions like Sonnet 3.7 and 4.0. This could be due to “catastrophic forgetting,” where further domain specialization for software engineering or instruction following might detrimentally affect cybersecurity knowledge and investigation planning. This underscores the critical need for benchmarking to evaluate the fine-tuning of LLMs for specific domains.

The study also found that “thinking models” (those that use post-training techniques and often engage in internal self-talk) didn’t show a considerable advantage in AI SOC applications, with all tested models demonstrating comparable performance. This resembles findings from studies on software bug fixing and red team CTF applications, which suggest that once LLMs hit a certain capability ceiling, additional inference yields only marginal improvements, often at a higher cost. This points to the necessity of human-validated LLM applications in the AI SOC and the continued development of fine-grained, specialized benchmarks for improving cybersecurity domain-focused reasoning.

For the full details and results of the benchmarking, click here. Simbian looks forward to leveraging this benchmark to evaluate new foundation models on a regular basis, with plans to share findings with the public.
