The Stage is Set
New human rights and AI laws set the stage for increased regulation in the security industry
- By Fredrik Nilsson
- Nov 19, 2024
The security industry spans the entire globe, with manufacturers, developers and suppliers on every continent (well, almost—sorry, Antarctica). That means when regulations pop up in one area, they often have a ripple effect that impacts the entire supply chain.
Recent data privacy regulations like GDPR in Europe and CPRA in California made waves when they first went into effect, forcing businesses to change the way they approach data collection and storage to continue operating in those markets. Even highly specific regulations like the U.S.’s National Defense Authorization Act (NDAA) can have international reverberations, and this growing volume of legislation continues to affect global supply chains in a variety of ways.
That trend is unlikely to change—if anything, businesses in the security industry can expect more regulation, not less. That’s not necessarily a bad thing—an increased focus on human rights, AI safety, privacy, and the environment should ultimately yield positive results. But it does mean that businesses in the security industry need to be aware of not just their own actions, but those of their partners and vendors up and down the supply chain. Amid a stronger regulatory landscape, effectively managing those relationships will be critical.
Human Rights Considerations Take Center Stage
In July 2024, the European Union’s Corporate Sustainability Due Diligence Directive (CSDDD) entered into force. CSDDD is a groundbreaking piece of legislation that the European Commission says “will ensure that companies in scope identify and address adverse human rights and environmental impacts of their actions inside and outside Europe.” At its core, the law requires businesses to conduct appropriate due diligence into the potential human rights and environmental impact of the company’s operations, as well as that of its subsidiaries, partners, and vendors. This is key, because it means organizations are responsible not just for their own actions, but for those of businesses up and down the supply chain.
This means businesses have a responsibility to ensure, for example, that their products are not manufactured in sweatshops and their materials are not obtained via slave labor. It also means they cannot simply shift pollution-heavy activities to countries with lax environmental laws. Under CSDDD, businesses have an obligation to ensure that they are operating in a manner consistent with both the human rights and environmental ideals of the EU. Critically, businesses operating in the EU all share this obligation—an individual business seeking to gain a competitive advantage by circumventing the law risks steep financial penalties (not to mention reputational damage).
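To make that transitive obligation concrete, here is a minimal, hypothetical sketch, not modeled on any real compliance tool, that treats a supplier network as a directed graph and surfaces unresolved risk flags even several tiers upstream. The company names, risk flags, and the upstream_risks helper are all invented for illustration.

```python
# Hypothetical sketch: model suppliers as a directed graph and collect
# every upstream vendor with an open risk flag. All data is invented.
from collections import deque

SUPPLIERS = {  # company -> its direct vendors (illustrative)
    "acme_cameras": ["lens_co", "chip_co"],
    "lens_co": ["glass_co"],
    "chip_co": [],
    "glass_co": [],
}
RISK_FLAGS = {"glass_co": ["unverified labor audit"]}  # open findings

def upstream_risks(company):
    """Breadth-first walk of the vendor graph, gathering open risk flags."""
    seen, queue, findings = {company}, deque([company]), {}
    while queue:
        for vendor in SUPPLIERS.get(queue.popleft(), []):
            if vendor not in seen:
                seen.add(vendor)
                queue.append(vendor)
                if vendor in RISK_FLAGS:
                    findings[vendor] = RISK_FLAGS[vendor]
    return findings

# acme_cameras is answerable for glass_co, two tiers removed:
print(upstream_risks("acme_cameras"))  # {'glass_co': ['unverified labor audit']}
```

The point of the traversal is simply that responsibility does not stop at direct vendors; a finding anywhere upstream becomes the buyer’s problem under CSDDD.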
CSDDD falls within a category that the United Nations terms “Human Rights Due Diligence” (HRDD). And while the UN lacks a meaningful enforcement mechanism, the ideals outlined in the organization’s “Guiding Principles on Business and Human Rights” have influenced the shape and direction of regulations like CSDDD—as well as regulations in other countries. The full impact of CSDDD has yet to be felt (as with GDPR, many organizations are likely holding their breath to see what the first enforcement actions and fines look like), but the gradual shift away from human rights violators has already begun. Businesses in the technology industry (security included) should be evaluating their supply chains with both human rights and environmental sustainability in mind.
Artificial Intelligence Is Increasingly Under the Microscope
As artificial intelligence (AI) has evolved—and become more mainstream—regulators have taken a renewed interest in the technology. AI solutions require vast amounts of data to train them effectively, and where that data comes from matters. Are AI providers using private information to train their solutions? Are they using copyrighted material? Are their data sets discriminatory in any way? How do they identify, account for, and eliminate inherent biases? This last point is particularly important—for example, a facial recognition solution that misidentifies people of color at a higher rate can (and likely will) result in serious and damaging discrimination. That will negatively impact the customer’s reputation, but it can also have significant legal repercussions. It’s important to remember that it’s not just about the data—how AI is used in practice needs to be carefully considered as well.
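As a purely illustrative example of what identifying such biases can look like in practice, the sketch below computes a false match rate per demographic group from hypothetical evaluation results. The data, function name, and grouping scheme are assumptions for the sake of the example, not any vendor’s actual test methodology.

```python
# Illustrative bias check: compute the false match rate (FMR) per
# demographic group. Data and names are invented for this example.
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: iterable of (group, predicted_match, actually_same_person)."""
    attempts = defaultdict(int)       # impostor comparisons per group
    false_matches = defaultdict(int)  # impostor pairs wrongly matched
    for group, predicted_match, same_person in results:
        if not same_person:           # only impostor pairs count toward FMR
            attempts[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in attempts.items()}

# Toy evaluation data: one wrongful match in group_a, none in group_b.
rates = false_match_rate_by_group([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
print(rates)  # {'group_a': 0.5, 'group_b': 0.0} -> a disparity worth investigating
```

A persistent gap between groups in a test like this is exactly the kind of evidence of bias that customers and regulators will increasingly expect vendors to measure and remediate.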
Unfortunately, the rapid pace of technological advancement means regulations tend to lag behind—but the EU took a step in the right direction this year when it passed the EU AI Act, billed as “the world’s first comprehensive AI law.” The new law breaks AI applications and systems into three risk categories: unacceptable risk, high risk, and low risk. Those that constitute an unacceptable risk—such as the government-run social scoring systems used in some countries—are banned outright. High-risk applications—such as a resume-scanning tool used to sort and prioritize job applications—are permitted, but must comply with specific requirements. Low-risk applications that don’t fall into either category are generally unregulated, though there is always the possibility that they will be reclassified as the technology evolves.
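As a rough illustration of how an organization might triage its own planned AI features against those tiers, consider the hypothetical sketch below. The keyword lists are invented shorthand for this example, not the Act’s legal definitions, and any real classification would require legal review.

```python
# Hypothetical first-pass triage against the three risk tiers described
# above. The use-case lists are illustrative, not the Act's legal tests.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted with specific compliance obligations"
    LOW = "generally unregulated (for now)"

UNACCEPTABLE_USES = {"government social scoring"}
HIGH_RISK_USES = {"resume screening", "biometric identification"}

def triage(use_case: str) -> RiskTier:
    """Rough bucketing for internal planning, not a legal determination."""
    if use_case in UNACCEPTABLE_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.LOW

for case in ("government social scoring", "resume screening", "spam filtering"):
    print(f"{case}: {triage(case).value}")
```

Even a crude internal inventory like this forces teams to ask the right question early: which tier would a regulator put this feature in?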
The AI Act has clear implications for the security industry, where solutions like facial recognition have obvious privacy considerations that need to be addressed. The law clearly delineates responsibility, underscoring that the manufacturer of an AI solution and the end user are each accountable for aspects of its use. For manufacturers, it is important to limit the potential for abuse when creating a solution. But when a customer signs the invoice to purchase a solution, the responsibility for using it appropriately shifts to them. That means customers who misuse AI-based solutions to engage in risky or inappropriate behavior will quickly run afoul of the new law. But it also means a solution that creates an “unacceptable risk” with no other practical application will probably get the manufacturer in hot water.
The EU isn’t alone here. In the US, the Biden administration issued a 2023 executive order that echoed the EU’s risk-based approach, outlining positive and negative use cases for AI. While not as comprehensive as the EU law, the executive order laid important groundwork—and we can almost certainly expect more thorough legislation in the coming years. Organizations in the security industry need to ensure that any AI-based analytics they provide are built with responsible use in mind.
Body-Worn Regulations Could Be Next
With regulatory oversight trending upward, the security industry should look not just at recently passed laws, but also at those likely to come in the near future. The market for body-worn cameras is steadily growing—once used primarily in law enforcement, body-worn devices are now being put to work in a variety of innovative ways by a growing number of organizations. Retail organizations are increasingly using body-worn cameras to prevent theft, keep employees protected from disruptive customers, and help ensure a high level of service. And retail isn’t alone—healthcare and education have taken a similar approach, using body-worn devices to protect doctors, nurses, teachers, and more, alongside hospitality businesses like large event venues and cruise lines.
Many states now have regulations mandating the use of body-worn cameras by law enforcement officers (the Bureau of Justice Assistance provides a useful breakdown of regulations across the US), but there are—so far—few that govern their use outside of law enforcement. If adoption of body-worn cameras continues to expand in other industries, however, it’s likely just a matter of time before regulators begin to expand their reach. Both manufacturers and end users will need to ensure that these devices are being used appropriately and responsibly.
Prioritizing Ethical and Responsible Behavior
Regulations are nothing new, but recent developments underscore the fact that even local regulations can have a global impact—and as the pace of technological advancement has increased, governments around the world have been racing to catch up. Rather than dealing with these new laws as they emerge, businesses can set themselves up for long-term success by keeping one eye on the future, planning for rules and regulations long before they take effect. Of course, the most effective plan is to engage in responsible business practices independent of legal considerations—and as new regulations emerge, businesses that prioritize ethical behavior will find themselves well positioned for the future.
This article originally appeared in the November / December 2024 issue of Security Today.