
Ethics First
Designing Unbiased Security AI
Security is a necessary evil. Its purpose is to mitigate liability, and in its traditional setup it is purely a cost center. Businesses invest only the minimum needed to ensure their safety and security. It isn’t profitable. In fact, security is there to stop the bleeding. But that is changing with the addition of AI.
AI, particularly agentic AI, is going to change the whole of physical security. It’s going to change how we gather data, interpret data, act on data, and even how we employ security professionals. In the past, physical security was about detecting an anomaly and alerting a human, who did their best to respond. But agentic AI can move security toward prevention instead of reaction, with the goal of changing threatening behavior as it happens.
Naturally, as the industry moves away from purely human decision making, concerns arise about privacy, AI policies, and the ethical use of AI in physical security. Like all great technologies, agentic AI in physical security can be used for good, but it can also be twisted toward bad ends. The direction the technology takes will depend on the care in its design, thoughtfulness about long-term impacts, and ultimately the ethics and values of the companies that develop it: is it revenue and speed to market at all costs, or doing the right thing for the right reasons even if it means lower revenue?
The Critical Role of Data Governance
Everything starts with data storage and handling, even in the physical security industry. This component is often overlooked or deliberately ignored, but that needs to change. End users need to know where their data is (on-premises, at the edge, or in the cloud), how long it is stored, how it is transmitted, and how it is used. It is the end user’s responsibility to vet any manufacturer or service provider and to know their data policy. Of course, there are companies who will do anything to earn a buck, but if end users do their homework, they’ll find security providers who share their values, have a solid data security policy, have proper governance to ensure that policy is followed, and who clearly communicate that policy and show evidence of following it. Mary Rose McCaffrey, a security expert with more than 30 years of experience, said, “All manufacturers promise the moon and don’t necessarily deliver. End users must understand the manufacturing, lifecycle costs, and where the data is stored, retained, and protected.”
This is vitally important because AI models use data to learn, and if end users aren’t careful, their data sovereignty will be violated. Governance of the entire data pipeline from input to analysis to enrichment to reporting needs to be thoughtfully designed to ensure chain of custody. “The ubiquity of data doesn’t have a bias per se, but where stored, how it is used, and who has access can introduce a bias,” said McCaffrey. “Do your homework to avoid ‘red flags.’ What are your requirements? How do you vet your manufacturer? What are your options for remedy if it doesn’t deliver against your requirements?”
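One way to make chain of custody across a pipeline concrete is a hash-chained, append-only audit log, where every record commits to the one before it, so later tampering is detectable. The sketch below is a minimal illustration under assumed names (the `CustodyLog` class and the stage labels are hypothetical, not part of any particular product):

```python
import hashlib
import json
import time

class CustodyLog:
    """Append-only, hash-chained log: each entry commits to the previous
    entry's hash, so altering any earlier record breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value before any entries

    def record(self, stage, detail):
        # Stages mirror the pipeline: input -> analysis -> enrichment -> reporting
        entry = {
            "stage": stage,
            "detail": detail,
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        # Recompute every hash and check each link back to its predecessor.
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev"] != prev:
                return False
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Verifying the chain after the fact answers McCaffrey’s “where stored, how it is used, and who has access” questions with evidence rather than promises.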
Shifting the Focus from Identity to Behavior
AI needs to be implemented in a way that does not violate privacy. The public does not like technology that can reveal their identity, especially if that technology can track their movements. Think about the concerns users have with cyber tracking and cookies. Those concerns are more pronounced in the physical world because there can be more serious consequences.
Because AI is trained on the data input into the system, it will inherently carry bias depending on the type and style of that data. AI technology that relies on matching gender, race, or any other human features or characteristics is fraught with problems and produces many mistakes and false positives. Instead, AI should focus on behaviors, since it is behavior, not identity, that signals criminal activity.
The truth is, as security professionals we shouldn’t care about who is committing a crime. We should care about the what, when, where, and why of criminal behavior. Agentic AI, if trained and implemented correctly, will be better at this than humans. We can’t stop being human and can’t remove our own biases, but as AI is trained to recognize behaviors, it will naturally help counteract them.
Consciously Designing AI Training to Mitigate Bias
The best way to avoid bias while training the AI is to consciously exclude identity-based features from the models, ensuring the data coming out is of the highest quality and relevance. Finding ways to identify potential crime is the holy grail of security. If we include racial or facial identifiers, we simply introduce bias with no improvement in recognizing criminal activity. That’s why we need to understand human behavior in context before a crime occurs; that understanding is the sweet spot for AI modeling.
AI model development is also not a one-and-done effort. Training data sources should be thoroughly and continuously vetted to ensure they are lawful, ethical, and unbiased. This includes proper monitoring and governance of those sources, maintaining chain of custody and protecting them from tampering. Never before has zero-trust architecture been so necessary a pattern for physical security systems and the data pipelines they generate.
In the early days of agentic AI use, the choice between deterministic and nondeterministic reasoning needs to be made intentionally and monitored carefully, erring on the side of determinism while AI behavior is still unproven. We don’t need to boil the ocean when introducing agentic AI into physical security workflows, nor should we throw the baby out with the bathwater. Instead, guardrails should be used to ensure outcomes are predictable, and when outcomes fall outside acceptable guardrails, the appropriate circuit breakers need to be in place to suspend agentic automation and bring human monitors into the loop.
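The guardrail-and-circuit-breaker pattern described above can be sketched in a few lines. Everything here is an illustrative assumption, not a reference implementation: the deterministic action allowlist, the confidence threshold, and the `CircuitBreaker` class are hypothetical names invented for this example.

```python
class CircuitBreaker:
    """Trips after repeated out-of-guardrail outcomes, suspending
    agentic automation until a human monitor intervenes."""

    def __init__(self, max_violations=3):
        self.max_violations = max_violations
        self.violations = 0
        self.tripped = False

# Guardrail: only pre-approved, predictable (deterministic) actions may run.
ALLOWED_ACTIONS = {"log_event", "turn_on_lights", "play_audio_warning"}

def execute(action, confidence, breaker, escalate):
    """Run an agent-proposed action, or escalate it to a human."""
    if breaker.tripped:
        # Breaker is open: all automation pauses; humans are in the loop.
        return escalate(action, "breaker open: human review required")
    if action in ALLOWED_ACTIONS and confidence >= 0.9:
        breaker.violations = 0  # healthy outcome resets the counter
        return f"executed: {action}"
    # Out-of-guardrail outcome: count it, maybe trip the breaker, escalate.
    breaker.violations += 1
    if breaker.violations >= breaker.max_violations:
        breaker.tripped = True
    return escalate(action, "outside guardrails")
```

The key design choice is that the breaker fails closed: once tripped, even previously allowed actions route to a human until the system is reviewed and reset.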
Guiding Principles for Ethical AI Implementation
Now is the time for end users to start implementing AI ethically in their physical security plans.
Four guiding principles are:
- Have a values-based approach to data, personally identifiable information (PII), and AI
- Ensure policies reinforce those values throughout the organization
- Have monitoring and governance in place to ensure policies are being followed correctly
- Do business with trusted partners who share your values
Thankfully, it isn’t an all-or-nothing implementation. Start with a small area where AI will have an impact on your security but remains controllable, and ease into using it. See how AI can help your security and business, and learn what guardrails you need in place to use it ethically. This includes vetting any potential providers and manufacturers and knowing how your data is stored and used.
The Future of Security: Human Efforts Multiplied by AI
AI is coming for the security industry. McCaffrey put it this way:
In the next five years, security professionals must continue to innovate, become more cyber literate, and lean into technology, AI being the most prevalent. The world of security as a “fixed infrastructure” will continue to morph into a suite of tools, driven by requirements, but met by the utility of agentic AI to complement the security requirements of any business. Security professionals will need to become more knowledgeable about agentic AI and other cyber technologies to augment existing approaches to physical security.
Agentic AI will have such an impact on the physical security industry that all security practitioners need to keep an open mind and be willing to rewrite their physical security playbooks. Treating agentic AI as just another tool to be grafted onto existing processes and procedures would be a big mistake.
AI will change how data is aggregated and how we consume it. It will change the way we think about physical security. With the correct guardrails in place to ensure the development, deployment, and use of AI are ethical, AI will help us do more and multiply human efforts.