Pragmatism, Productivity, and the Push for Accountability in 2025-2026

Every year, the security industry debates whether artificial intelligence is a disruption, an enabler, or a distraction. By 2025, that conversation had matured: AI became a working dimension of physical identity and access management (PIAM) programs. Observations from 2025 highlight this turning point in AI’s role in access control and show how security leaders are now distinguished by how they apply it.

Governance Brings Necessary Pragmatism
In 2025, AI adoption in physical security began shifting from hype and overuse to regulation-driven pragmatism. Organizations also worked to define what they meant by AI versus machine learning, which has been used in security for years in video analytics, anomaly detection, and many other use cases.

This distinction helped leaders avoid conflating long-standing ML capabilities with the new dimension of AI entering physical identity and access workflows, such as generative AI and conversational tools. Such clarity and regulatory pragmatism helped security leaders ask a more pointed question: not what can AI do, but what should it do?

The question became even more important as organizations realized that intent is a critical decision factor that cannot be automated when deciding access and entitlements. And without human oversight, “black box” AI recommendations easily led to errors and risks that could undermine confidence in access governance.

A Value Multiplier Beyond Features
Organizations embraced the idea that AI should drive customer value rather than simply add new features. One of the early proof points was customer support, where AI can improve both support outcomes and the customer experience by offering users self-service options before they interact with a customer service team member.

Consequently, chat/AI assistants began to appear in physical identity and access platforms, though adoption was uneven. Value was visible where conversational interfaces were deployed, but so were the shortcomings: script-bound flows, weak intent recognition, and late or failed escalations that left customers stuck in loops.

This mirrors the often-frustrating interactions many consumers experience with chatbots in daily life: speedy but irrelevant answers, or the lack of a prompt handoff, all of which quickly reduce a user’s willingness to engage with chatbots and discourage continued use.

From a Tool to a Trusted Partner
AI’s most sustained value came from functioning as a productivity partner for security teams, bringing to life earlier industry forecasts about predictive insights, log analysis, and policy reviews.

For instance, agentic AI (the ability to take on defined tasks without constant prompting) embedded itself into daily workflows for security teams. It scanned and summarized new regulatory provisions, parsed log data to surface anomalies for human investigation, aided policy reviews by finding overlaps and inconsistencies that had gone unnoticed in spreadsheets, and much more.
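The log-analysis pattern above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: all names and the after-hours rule are hypothetical, and the key design point is that the agent only surfaces candidates for human investigation, never acts on them.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical access-log record; field names are illustrative,
# not drawn from any real PIAM product API.
@dataclass
class AccessEvent:
    badge_id: str
    door: str
    timestamp: datetime

def surface_anomalies(events, usual_hours=(7, 19)):
    """Flag events outside usual business hours for human investigation.

    The agent only *surfaces* candidates; it never revokes access on
    its own, matching the human-in-the-loop principle discussed above.
    """
    start, end = usual_hours
    return [e for e in events if not (start <= e.timestamp.hour < end)]

events = [
    AccessEvent("B-1001", "server-room", datetime(2025, 6, 2, 3, 14)),  # 3 a.m. entry
    AccessEvent("B-1002", "lobby", datetime(2025, 6, 2, 9, 30)),        # normal hours
]
flagged = surface_anomalies(events)
# flagged contains only the 3 a.m. server-room event, queued for an analyst
```

In practice the anomaly rules would be far richer (location patterns, peer-group baselines), but the division of labor stays the same: automation filters, people decide.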

AI also extended into training and enablement, showing up as agents that drafted training and educational modules and prepared onboarding kits. As a result, AI stopped being viewed as just another tool in the stack; it earned trust as a partner that amplified human capacity, while keeping security teams responsible for decisions that depend on context.

What comes next in 2026 is about execution, where oversight, AI user interfaces and regulations will shape how AI matures in the security market.

2026 Predictions
Human-in-the-Loop Becomes the Design Standard for Access Decisions
Security leaders will lean into intent and context as two non-negotiables that decide actions, not merely patterns detected by AI. As a result, “human-in-the-loop” will be the design standard for security programs to minimize disruption for users while preserving rigorous controls for approvers. In essence, automation will prepare the case, and people will make the final call.

For example, AI within a PIAM decision engine will assemble an evidence pack of applicable policies, role and entitlement history, timing and location signals, and any detected anomalies, then present a clear recommendation to the security team for assessment. Such an interaction ensures that robust controls and shared responsibility are in place to safely scale the use of AI in physical access and identity lifecycle management.
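The evidence-pack flow described above can be sketched as follows. Every structure here is a hypothetical illustration of the concept, not a real decision-engine API; the point it demonstrates is that the AI assembles the case and recommends, while only the human approver's decision is acted on.

```python
from dataclasses import dataclass, field

# Hypothetical "evidence pack" a PIAM decision engine might assemble;
# all field names are illustrative assumptions.
@dataclass
class EvidencePack:
    request_id: str
    policies: list
    entitlement_history: list
    anomalies: list = field(default_factory=list)
    recommendation: str = "review"

def build_evidence_pack(request_id, policies, history, anomalies):
    """Automation prepares the case; a human approver makes the final call."""
    rec = "approve" if not anomalies else "escalate"
    return EvidencePack(request_id, policies, history, anomalies, rec)

def decide(pack, approver_decision):
    # The AI recommendation never executes on its own;
    # only the human decision takes effect.
    return approver_decision

pack = build_evidence_pack(
    "REQ-42",
    policies=["after-hours access policy"],
    history=["entitlement granted 2024-01"],
    anomalies=["access attempt from unusual location"],
)
final = decide(pack, approver_decision="deny")
```

Note that `decide` deliberately ignores `pack.recommendation` when returning a result: the recommendation informs the approver, but the approver's choice is authoritative.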

Conversational Interfaces Become Product-native and Add Real Value
Advances in chatbot-style, conversational user interfaces will make them a native feature of PIAM platforms.

They will be deeply trained in physical identity and access terminology, policies, processes, and regulations to augment workflows by gathering the necessary facts, confirming them against policy in real time, and preparing a recommended action for approval.

The effect will be less back-and-forth for the security administrator and faster, more reliable decisions.

For employees, visitors, contractors, and other trusted identities, these interfaces will simplify onboarding by removing the need to fill out long and repetitive access request forms. Instead, a brief dialog will capture role, duration, location, and purpose, then kick off the right automation in the PIAM decision engine, all while preserving the human checkpoint for higher-risk access requests and changes.
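A brief dialog of that kind reduces to capturing a few structured fields and routing on risk. The sketch below is a hypothetical illustration under assumed risk rules (the high-risk locations and 90-day threshold are invented for the example), showing how higher-risk requests fall back to a human checkpoint while routine ones proceed automatically.

```python
# Illustrative risk rule: which locations require human approval
# is an assumption for this sketch, not a standard.
HIGH_RISK_LOCATIONS = {"data-center", "lab"}

def capture_request(role, duration_days, location, purpose):
    """Collect the fields a brief onboarding dialog would gather."""
    return {"role": role, "duration_days": duration_days,
            "location": location, "purpose": purpose}

def route(request):
    """Kick off automation, preserving a human checkpoint for higher-risk access."""
    if request["location"] in HIGH_RISK_LOCATIONS or request["duration_days"] > 90:
        return "pending-human-approval"
    return "auto-provisioned"

visitor = capture_request("contractor", 14, "lobby", "HVAC maintenance")
tech = capture_request("engineer", 30, "data-center", "server install")
# The lobby visit provisions automatically; the data-center request
# waits for a human approver.
```

A production system would derive the risk rules from policy rather than hard-coding them, but the checkpoint pattern is the same.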

For customer support, effective conversational assistants will simplify engaging with PIAM vendors, serving up answers and artifacts (policies, troubleshooting steps, links to relevant records) to shorten time to resolution. They will also be trained to recognize when context calls for escalating to a real person quickly, ensuring a positive customer experience.

Compliance Regulations Drive Disciplined AI Adoption
Meaningful standards and regulations will drive more controlled and thoughtful adoption of AI when it touches PII, makes decisions, or spans cybersecurity and physical security. In practice, this means security leaders will use regulatory guidance to determine where AI belongs in their organization, how it must behave, and when a human must be in the loop, avoiding mistakes, risk, and over-engineering.

A case in point is the European Union AI Act, which sets a high bar much as GDPR did: it requires risk classification, documentation, logging, explainability, and human oversight for higher-risk systems. The United States relies on state and industry requirements, such as the 2025 Texas AI governance law, which adds obligations in healthcare and beyond.

Altogether, these regulations reinforce the need for auditable AI, combined with human oversight and accountability by design, to be built into physical identity and access workflows, as well as to avoid penalties and fines.
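The risk-tiering discipline these regulations encourage can be expressed as a simple mapping from use-case attributes to oversight obligations. The sketch below is loosely inspired by the tiered approach of the EU AI Act but is a deliberately simplified illustration, not legal guidance; the tier names and obligation flags are assumptions for the example.

```python
# Simplified, illustrative obligation table; tier names and flags
# are assumptions, not a reading of any statute.
OBLIGATIONS = {
    "high":    {"human_oversight": True,  "logging": True, "documentation": True},
    "limited": {"human_oversight": False, "logging": True, "documentation": False},
}

def classify(touches_pii, makes_access_decisions):
    """Use cases that decide access or touch PII get the stricter tier."""
    return "high" if (touches_pii or makes_access_decisions) else "limited"

# A PIAM decision engine both touches PII and makes access decisions,
# so it lands in the stricter tier with mandatory human oversight.
tier = classify(touches_pii=True, makes_access_decisions=True)
required = OBLIGATIONS[tier]
```

Encoding obligations as data like this also makes them auditable: an assessor can inspect the table directly rather than reverse-engineering behavior from code paths.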

The Bottom Line
AI’s impact on physical identity and access management is about value, accountability and responsibility. Organizations that define AI clearly, embed human oversight, and use conversational and agentic tools where automation genuinely improves productivity and the customer experience will not only follow tightening regulations, but will also earn and increase customer trust.

The winners will be those who treat AI as a dimension of access automation and governance, a force multiplier that augments people while keeping intent, accountability and security at the center.
