Featured Speakers

Steve Lindsey

Chief Information and Technology Officer, LVT
Steve Lindsey was instrumental in designing, forming, and implementing the LVT Platform, the company’s video and IoT management system. Lindsey joined LVT in 2011 after leading technology, software, engineering, and development teams at multiple companies including i3 Technologies and Novell. He holds a bachelor’s degree in electronic and information technology from Brigham Young University. Outside of all things tech, Steve loves mountain biking, music, food, sports, and especially his family. He and his wife Wendy have seven children and live in Utah County.

Derek Boggs

VP of Marketing, LVT
Derek Boggs is the VP of Marketing for LiveView Technologies. Hailing from Utah, Derek has held similar B2B roles at Agio and Ivanti and was recently named to Utah Business’s Forty Under 40. As LVT’s first marketing hire, he built the company’s demand generation and content engines, helping grow revenue from $17 million to hundreds of millions. Boggs is a visionary in promoting LVT’s mission to make communities safer and more secure.
The Future of Physical Security is Here

Strengthen Security Across Your Entire Business

The reality is clear: it’s time to embrace new technology to create safer workplaces. Companies that implement Agentic AI gain a critical edge by becoming more secure, efficient, and competitive. Agentic AI can assess situations and make decisions independently, reducing the need for human intervention and accelerating response times.

This webinar covers the following:

  • How advanced mobile security units and agentic AI help solve your challenges
  • Why agentic AI is rewriting the security budget
  • A live demonstration of LVT's agentic AI
  • The importance of open security ecosystems

Full Transcript

Derek Boggs:

Welcome everybody to this webinar about revolutionizing the physical security space with Agentic AI. My name is Derek Boggs. I'm the VP of Marketing here at LVT, and we're joined here by the one and only Steve. Steve Lindsey, thanks for joining, my friend.

Steve Lindsey:

Thank you. Thanks for having me here.

Derek Boggs:

Tell me, are you excited to dive in today on all things Agentic AI?

Steve Lindsey:

Oh yeah. This is an extremely exciting time, and I think the ideas that we're going to be talking about today are really new concepts, only enabled by technology that's now available.

Derek Boggs:

Yeah, absolutely. Well, before we get going, for those of you who have joined us before, you'll notice we have a guest with us today. The guest is looming right behind Steve.

Steve Lindsey:

Yeah, right behind me, right here. This is Marilyn.

Derek Boggs:

Marilyn, is that what we call her?

Steve Lindsey:

Yeah. Marilyn, she's a little vain, but you'll see how cool she is later on today.

Derek Boggs:

Cool. Yeah, Steve's hinting. We've got a fun live demo today, so always good. Hopefully the demo Gods shine upon us.

Steve Lindsey:

Yes, this is a live demo, so hopefully it all goes well.

Derek Boggs:

Before we get started, I'm going to pose a question to the audience. Everybody, thank you so much for joining live. We're going to be diving in, but before we do, I'd love to hear from you, and we're going to pick up these questions later. Tell us: what is Agentic AI to you? Steve has his definition he's going to dive into. If you Google it, you'll get 20-some-odd different definitions. We want to hear from you all, so please drop in that response to the question, what is Agentic AI? And we'll be tuning in throughout the webinar to see what you guys are thinking. So with that, Steve, let's get rolling, man. Dive into the current state of physical security. What challenges is the industry dealing with?

Steve Lindsey:

Yeah, so when we think about physical security today, it's really centered around the various problems that we have. A lot of people in asset protection and loss prevention think that we're just dealing with theft, but there's a lot more to it than that. We're having a problem with my slide deck here. I can't progress it.

Derek Boggs:

We have live demos.

Steve Lindsey:

There we go.

Derek Boggs:

We have live slide deck.

Steve Lindsey:

You'd think the slide deck would actually be the easiest one?

Derek Boggs:

All right, here we go.

Steve Lindsey:

All right, now we're working. Yep. So when we think about this, there are various types of things that we have to deal with as security professionals. Some of these affect the life safety of our employees, some of these the life safety of our customers, maybe even just bystanders who happen to be around, but we're dealing with everything from vandalism to violence to panhandling. And really what it results in is just unwelcome environments. And these unwelcome environments can lead to revenue loss. Maybe if I'm a storefront, nobody wants to come into my store because they just don't feel safe getting out of their car and walking in, which obviously is a loss of revenue, but it also affects how our employees feel.

One of the most basic things that happens to an employee when they go to work is they get out of their car and they go into the place of business and then at the end of their shift they leave. And that can be an extremely dangerous time for them based on the safety of the environment that they're in. And then due to a lot of that, you have issues where there's just lack of productivity. You're dealing with things that really aren't moving your business forward. And so these are just things that we have to mitigate as security professionals. So when we think about traditionally how we've handled this, we break it down into these jobs to be done. So starting over on the left with detection, moving all the way up through prosecution, and I like to divide these into two halves. The left half is where we're trying to prevent issues from happening, and on the right half, we're really responding and trying to prosecute those.

Derek Boggs:

The LPRC has a version of this. They call it left of bang, right of bang.

Steve Lindsey:

Yes. So the reason why we like to break it down into these jobs to be done is because if we understand what job we're trying to do at each one of these levels in this flow, we can understand how to optimize technologies to get the force multipliers that we need. So that's why we like to talk about each of those. But just to cover these, we start with detection. So are we able to detect a threat that's here? And there's various types of ways that that can be done, but once we detect that, then we need to validate, well, what is this threat to me?

And then once we validate that, then we try to put in proactive measures to deter that from happening. When we talk about the two sides, prevention and response, the left half is really where security professionals are trying to prevent it. I like to use the saying that an ounce of prevention is worth a pound of cure. So if we can just prevent it, that's just better for everyone. But in those situations where something does happen, we've got to be able to prosecute that and hold people accountable. And this is where I like to talk about the symbiotic relationship between effective deterrence and prosecution and holding people accountable. If we don't hold people accountable at the prosecution stage, deterrence doesn't work.

Derek Boggs:

Yeah. What do they have to fear at that point?

Steve Lindsey:

Exactly. So there's a lot that has to be done along every single one of these jobs to be done. And we can't neglect doing one because it's too expensive or too time-consuming. And that's what we tend to do. In fact, when we look at the cost spend, I'm going to add those on the slide here. This is US dollars spent in the United States alone to do each one of these jobs. So if we think about detect and deter, we're talking 21 billion for detect and 39 billion for deter. This is mostly made up of manned guard services. That's where the bulk of that expense is at. But you'll notice on the right-hand side, there's not a lot of expense there. And what we find in analyzing the data is that you end up spending about three times the amount of the loss to prosecute.

And that's just way too expensive. So I think we become just naturally way more selective in what we're prosecuting. But coming back to that symbiotic relationship, if we do not hold people accountable, then we'll never be able to deter and prevent. So we've got to find ways to do that. So when we think about these jobs to be done and these expenses, in some areas we're spending way too much money, and in others we're not spending enough. So we've got to find ways to balance that out. So let's now break this down into the current state. How do we solve these problems today? Before we go there, let's talk about technology in general. So as a practicing technologist, especially when we put our business hats on, technology in business has to solve at least three main problems. And if it doesn't solve these problems in business, then technology really has no place. So the first thing is it has to help reduce cost. The second is it has to be a force multiplier; it has to make things go faster, so it's solving for speed.

Derek Boggs:

And therefore reduce costs in a way.

Steve Lindsey:

And reduce costs. And then the third one is just increasing accuracy, being better at what it does, higher quality, whatever that needs to be. But if technology doesn't make it cheaper, faster, or better quality, then there's no place for it in business. Now, when we put our consumer hats on, there's what I call the cool factor, but there's no place for cool in business. So when we think about applying technologies to solve these jobs to be done, we've got to be thinking about those three things. So when we think about just table stakes today, and not only just today, but when technology started getting introduced to solve physical security problems, we can go all the way back to burglar alarms and fire and smoke detectors and access control, and then it evolved into cameras and other types of things.

But we've followed the same paradigm through decades, and that is some technology detects something and then it alerts a human. The human then responds to that somehow, and then after the fact, they'll collect evidence. And again, this has been going on for decades, and when we think about even applications of artificial intelligence today where we add that to computer vision, we are still following the same paradigm. The AI now can detect that there's somebody there, but it's still going to send an alert to a human, and the human still has to respond. So the problem with this is it still puts the human as the crutch to make the whole thing work.

And as we continually scale this and we get increased coverage, we still have to scale the humans to be able to respond to it. And so what ends up happening is if the technology and the number of these sensors outweigh the number of people, then we get alert fatigue and they can no longer keep up with those alerts.

Derek Boggs:

Loses its efficiency, loses its actual value of cost savings in that instance.

Steve Lindsey:

Yes. Yeah. And so back in 2017, 2018, when LVT introduced the mobile security unit, one of the things that we wanted to do was to find ways to use technology to scale that human that has to respond. And so we introduced this idea of a deterrence action or some kind of response to what it detected. So it detects something, but instead of immediately alerting a human, it tries to use lights and sound and different things to try to scare them away. And it tried to do that in a way that was not a motion light. So by that what I mean is-

Derek Boggs:

Some form of intelligence? Give me an example.

Steve Lindsey:

Well, a motion light to me is predictable. I know that if I move, the light turns on; if I stop moving, it turns off. And if that becomes predictable, that equates to desensitization, and it's not believable. And so when we approached this, we introduced this idea of escalations that allowed us to have a different kind of response as the threat happened. And so that made it more variable, more dynamic, but it was still a pre-canned message. It might be a different message, though, and maybe we introduced lights at a different escalation level. But what was really powerful about this is our customers started noticing that a lot of this technology could get rid of some of the noise. And only when people didn't leave or stop what they were doing would it then alert a human.

We were able to minimize the noise going to humans so they could focus on what was more important. Now, this was earth-shattering when it came out. And since then there have been a lot more advancements in technology. And there's also been the need to continually work on minimizing the noise. So this is where LVT really sat down and thought, okay, how can we use some of these new technologies and continue to be this force multiplier for humans? And so this is where the concept of Agentic AI comes into play. I'm just going to literally read the definition here for everybody. But basically Agentic AI is a type of artificial intelligence that can make decisions and take actions independently to automate multi-step processes. It's designed to operate more like a human employee.

So again, think about an agent that can be a human that does a particular task. And what we're doing is we're training an AI to be able to do that task. Now, to do the task, it has to be able to adapt to different types of situations. It can't just be statically programmed to do this action every single time. And that's really a subtle difference in what we're introducing now with our Agentic AI versus what we had in the past, where we would at least take some kind of action when we detected something. And so when we think about this Agentic AI now, what we're actually saying is that we want to plug this in around that validation stage. So we can still detect things, but now we want to be more intelligent about what it is that it's detecting, because that will determine what it should do next.

Maybe it does want to alert a human immediately because its threat assessment is high enough, but maybe it decides, well, I can actually take care of this by trying some of these other things out, and we'll demo some of those today. But it can now decide, in a more dynamic way, what it should do about the situation. And then it has a lot more tools at its disposal to be able to take those deterrent actions.
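To make the idea concrete, here is a minimal sketch of the kind of detect-validate-respond loop Steve is describing. The class, function names, behaviors, and thresholds are illustrative assumptions, not LVT's actual implementation.

```python
# Illustrative sketch of an agentic decision loop: detect, validate in context,
# then either act autonomously or escalate to a human. All names and thresholds
# below are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    behavior: str        # e.g. "loitering", "lying_down", "crowd_forming"
    context: str         # e.g. "parking_lot_after_hours", "picnic_area"
    confidence: float    # detector confidence, 0..1

def assess_threat(d: Detection) -> float:
    """Toy threat score combining behavior, context, and confidence."""
    base = {"loitering": 0.4, "lying_down": 0.5, "crowd_forming": 0.6}.get(d.behavior, 0.2)
    modifier = 0.4 if "after_hours" in d.context else 0.0
    return min(1.0, (base + modifier) * d.confidence)

def respond(d: Detection) -> str:
    score = assess_threat(d)
    if score >= 0.8:
        return "alert_human"            # high enough to need an operator now
    if score >= 0.4:
        return "autonomous_deterrence"  # lights, talk-down, escalation tiers
    return "log_only"                   # benign; no one is interrupted

print(respond(Detection("loitering", "parking_lot_after_hours", 0.9)))
```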

Derek Boggs:

And some of the comments that I saw from people responding to what is Agentic AI, I'm hearing a lot about humans: removing humans, augmenting humans, a couple people saying, don't take my job, stuff like that.

Steve Lindsey:

That's true.

Derek Boggs:

What's your take on that? I saw the definition and how it hints at supporting, augmenting. What's your take on current state of Agentic AI?

Steve Lindsey:

Yeah, I think AI in general can be a scary thing. I mean, it definitely could potentially take some jobs, maybe it doesn't, but that's just generally speaking across the board. But let's talk about it as it relates to physical security. I would bet that most professionals out there feel like they are under-budgeted. They need more and more humans to do their jobs, but they're never given the budget to do it. And so this is where I like to come back to that use of technology for cost and for scale. And so I would think that all of us would want to adopt AI as a force multiplier for our people.

I don't think we're at a stage, even right now, to say, "Yeah, I can put AI in place and then get rid of humans." We're not even close to that. There's so much happening and there are so many threats that we're dealing with that we're just underwater with alert fatigue right now. And so we need Agentic AI to deal with all of those alerts coming in and then give us only the ones that really need a human to respond to.

Derek Boggs:

Cool. Yeah, I think that answers the question well. It's a force multiplier, not a replacement.

Steve Lindsey:

Yes.

Derek Boggs:

Cool.

Steve Lindsey:

Yeah. So this is where we see our vision moving forward: adding a lot more intelligence in that validation phase. So now let's stop right here and take a look at LVT in general, because there are some core foundational capabilities that have to be in place before you can even take advantage of Agentic AI. So let's review some of these. First of all, you've got to be able to deploy a solution that's reliable. If we're going to think about Agentic AI or any of these technologies as an augmentation of our human resources, they have to be as good as or better than the humans at availability, reliability, and everything else. And so we think about this in technology: what's the uptime of the systems?

What's the time to recovery? So how fast can it heal itself? Does it require a human to heal it, or can it heal itself? And then what kind of support is available to keep those systems running? And when we think about the cost of ownership of technology, this reliability area right here is, I think, one of the most underestimated when people cost out technology-

Derek Boggs:

It should be table stakes, but depending on where you go or what solutions you go with, that could be the crutch of everything that you're trying to accomplish.

Steve Lindsey:

And then you find yourself being an IT company instead of whatever your business is in business for. So reliability is really key, and that's one of the things that LVT prides itself on. Various things affect uptime, especially on a mobile security unit. You've got power: how much power you're able to generate off of solar panels, if that's your primary power source, as well as the consumption of power. Because if those things are out of balance, then you'll never have high uptime.

Most of these units are cellular connected or satellite connected, and that's not a persistent connection. So how do you manage how that connection is working? And if it gets overutilized, are there other options it can fail back on? Because connectivity has to be there. And then we're dealing with devices that just aren't smart. So these are things that typically are going to need a power reset or configuration settings put back into them. Does that require a human to go out and do all that work, or can it fix that on its own? LVT has solved these problems for a decade plus, and so we're just basically introducing these now in a mobile security unit, even though we've been doing them in different use cases forever. So that's reliability. The other bucket is really around data security. So how do you protect the systems themselves from cyber attacks?

How do you protect data privacy and sovereignty? How do you govern the data that's there? So again, LVT prides itself on having an architecture and a system that was built with all of that from the ground up. Then there's scalability. It's okay to think onesies and twosies on these things in certain locations, but if you're trying to run a national program or even a multinational program, how do you manage all of those systems? How do you get them made? How do you get them deployed? And then how do you keep software up to date over the air? How do you keep configuration settings up to date? How do you manage access to all of that, and especially the employee lifecycle around it? That's a big one a lot of people don't really think about, but if you're in a business that has higher attrition, how do you get people to forget usernames and passwords to systems? They don't.

So you've got to have more modern mechanisms for how you handle user lifecycle. And then I'm going to skip connectivity, we'll cover that, but platform: is this a future-proof platform? Right now we're talking about AI. How easily can your system adopt these new capabilities with minimal effort on your part? When we think about rolling out technologies, I think most of us in the audience and everywhere would count these in years. We don't count these in days. We've got to start getting there, because businesses change. Business needs change. We have to be agile. And if it takes forever to deploy our technologies, then we just never really adopt anything that helps us, and we never get off of old, antiquated technologies that really aren't serving us the way they should. But let me point out this connectivity thing.

This is one thing that I think a lot of people don't quite understand: when you connect any of these types of systems, especially cameras, to cellular networks and satellite networks, there is no such thing as unlimited. If they say that you have an unlimited plan, they've basically got a whole bunch of fine print under there that says, but we will throttle you if you use too much data. And if a camera gets throttled, it's pretty much useless. So you've got to be able to manage the cost of data. And that is something that LVT focused on 15, 16 years ago, as we started live streaming video out in remote areas for the Department of Transportation. We were the ones who had to pay the bill.

And so we invested very heavily in how do we manage those costs to the point where we can still own the cellular lines and not have to put that burden on the customer. And we can confidently know that we're always going to keep that data small. And it's a very, very unique differentiator for LVT in the industry when you think about cellular connected systems.
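As a rough illustration of the data-cost problem Steve describes, the back-of-the-envelope arithmetic below compares continuous video streaming against a typical throttling cap. The bitrate and cap figures are assumptions for illustration only, not LVT's or any carrier's actual numbers.

```python
# Back-of-the-envelope data budget for a cellular-connected camera.
# The bitrate and the throttling cap are illustrative assumptions only.
STREAM_KBPS = 1000          # ~1 Mbps continuous video stream (assumed)
HOURS_PER_DAY = 24
DAYS_PER_MONTH = 30
THROTTLE_CAP_GB = 50        # assumed soft cap on an "unlimited" plan

bytes_per_month = STREAM_KBPS * 1000 / 8 * 3600 * HOURS_PER_DAY * DAYS_PER_MONTH
gb_per_month = bytes_per_month / 1e9
print(f"Continuous streaming: ~{gb_per_month:.0f} GB/month "
      f"vs a {THROTTLE_CAP_GB} GB soft cap -> throttled within days")
# ~324 GB/month under these assumptions, so the unit has to send events and
# low-bitrate or on-demand video rather than streaming everything.
```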

Derek Boggs:

And the patents to prove it.

Steve Lindsey:

Yeah. And the patents to prove it. And we have patents all over the board here, but that's really a foundation. When you've got these capabilities in place, now I can start slapping this AI capability-

Derek Boggs:

Layer on the innovation to it.

Steve Lindsey:

Exactly. So why will LVT remain the solution of choice? Well, it comes with this vision that we have, which is really what we're all here to talk about. So let's first talk about behavior detection. When AI was first introduced to cameras, it was really based on what we call object detection. So we're talking about the ability to just detect a human versus a vehicle versus an animal. It never knew anything more than "that looks like a human shape, so I'm just going to classify it as that."

Derek Boggs:

Fire the alert.

Steve Lindsey:

And that was awesome for the industry, because before then it was motion detection, which was just the number of pixels changing.

And anyone who's done this before knows about the thousands of alerts per night because some pixel was flickering. You couldn't build a security solution around that. So it was very innovative to have object detection. It took us down from thousands, but now that we're used to this volume of object detections coming in, we're getting more picky. We want to understand behavior. We don't care that we saw a person. For example, if you're securing a parking lot, you probably have a lot of people walking through that parking lot all night. So every single time an alert goes off, you don't want to have to burden your people to look to see if there was a threat.

You want the system to look for the threat, and then if it sees it, let them know. Right? Well, I guess what we'll talk about is maybe deal with it and then let your people know. So let's talk a little bit about this. There's a difference between a person being detected and a behavior. Maybe that person's lying down, or maybe the person's running. And we'll get into some more sophisticated examples. So here would be an example of multiple people, and we care about them coming together. Maybe they're going to fight, maybe they're going to be a flash mob or something. But there's a behavior now where it's saying, well, not only am I detecting someone, but I'm detecting many of them, and I don't like the way this looks. That would be an example of a behavior.

Derek Boggs:

This is unnatural.

Steve Lindsey:

Yeah. Cool. Yeah. Another use case of this would be like if we're dealing with life safety or compliance type issues with employees. So in this example we've got two employees. One is wearing the safety vest and the hat, and the other one's only wearing the safety vest. Well, we've got to be able to pick out the fact that there is an individual who is not wearing their safety hat. So it goes beyond just saying that I'm detecting a person. It's what are they wearing? What are they doing that might be unsafe?

Derek Boggs:

And the context of where they are too could be brought into play.

Steve Lindsey:

Yep, that's true. And so here's an example of someone lying down next to a tree. So not only do I detect a person, but I detect that they're actually lying down. Now this brings up a really interesting problem, and that is you need more than behavior to really understand the threat. And so this is where this idea of putting behavior in context comes into play. So we talked about in our detection that we want to move from object detection to behavior, but now we need to take behavior and put it in context. And this is where the Agentic AI really starts to be magical, and it's in this validation stage.

So in the validation stage, what you're doing is, let's say that someone's lying down. Are they lying down next to a tree, which might be a Sunday picnic, or are they lying down next to your car, about to steal your catalytic converter? And so when we look at this in action, it would look something like this. On the left, obviously that lady is just relaxing, probably not a threat. I would never want to send an alert or even talk down to her and tell her to leave. Whereas on the right, that person obviously is going to take my catalytic converter or something out of my car. So that is something that we do want to pay attention to. So this level of sophistication in detecting and validating what the threat is, is where the power of Agentic AI comes in.
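A minimal sketch of the "same behavior, different context" validation step described above. The rules, labels, and site names are hypothetical stand-ins for a learned model, not how LVT actually classifies scenes.

```python
# Same behavior, different context -> different outcome.
def validate(behavior: str, nearby: str, site: str) -> str:
    if behavior == "lying_down":
        if nearby == "vehicle_undercarriage":
            return "probable_theft"        # e.g. catalytic converter theft
        if site == "medical_facility":
            return "life_safety_concern"   # wandering patient; send a person
        if nearby in ("tree", "picnic_table"):
            return "benign"                # Sunday picnic; do nothing
    return "needs_review"

for nearby, site in [("tree", "park"),
                     ("vehicle_undercarriage", "parking_lot"),
                     ("open_grounds", "medical_facility")]:
    print(nearby, site, "->", validate("lying_down", nearby, site))
```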

Derek Boggs:

Yeah. It goes back to the definition. As a human, we can look at the object, look at the behavior, look at the context and say, this one on the left is fine; this one on the right, we've got to do something now. So starting to simulate that human ability is what we're talking about here.

Steve Lindsey:

Yeah. Exactly. And as we take this to other use cases, we talk a lot with medical facilities, and you can take this same situation and add a third scenario. What if you have a patient who just happened to wander outside, and they're probably in an unsafe situation? How do you handle that? It's not someone trying to do harm to you, and it's not someone just enjoying a Sunday. It's actually a life safety situation. So we can think of all kinds of use cases, but this is where the Agentic AI comes in, at that validation stage: we have to start getting more intelligent about what it's seeing and how it should handle that.

So now let's move into deterrence. Let's say that we've detected behaviors, we've classified them, and we've brought them into context to validate that this is a particular kind of threat. Now we need to calculate a response. And that response could be, if the threat assessment is high enough, an immediate alert to a security operator, because that person is needed to deal with it.

Derek Boggs:

Yeah. You mentioned the instance of the person walking out of a hospital or a care facility. You're not going to try and deter it with technology. You need a human to respond to this.

Steve Lindsey:

Exactly. Yeah. And that's a great example. You probably want to get a human out there quick. We probably don't want to be saying messages like, "Hey, please go back to your bed."

Derek Boggs:

No.

Steve Lindsey:

They're probably not lucid enough to even know what that means.

Derek Boggs:

No.

Steve Lindsey:

Yeah. That's a great example of that. But if it doesn't require a human, how can we leverage technology to be a force multiplier and take care of some of that noise? Let's go back to what LVT introduced back in 2017 and '18 with this mobile security unit. So again, unwanted behavior: I have different types of strobes, floodlights, audio talk-downs, whatever I have at my disposal to try to deter that crime. So now the question is, how do we take that to another level? What we started introducing, now that we have the context with the Agentic AI and we understand where things are happening, is that we can start becoming more specific about where we want to shine lights. It's one thing to have a floodlight that turns on that's probably not bright enough to illuminate and really catch anyone's attention, but we can aim a spotlight at somebody with a really high beam.

It's obvious that it's shining on them, and we can be very specific about where we want it to aim. This is only enabled if there's enough intelligence to know where these things are. So we've now been able to unlock this capability. Another way to think about this is the way that we do talk-downs. I mentioned before we used a lot of canned, pre-recorded messages. The problem with that is it can become a pattern that's recognizable and predictable; even though we've tried to escalate it and make it more random, it still gets to the point where people are desensitized. But what we've found in a lot of our research over the years is that when you have a pre-canned message, a certain percentage of the time, usually about 60 to 70% of the time, people will just leave.

And when that alert is escalated to a human and they get into the system and literally talk out the speaker and start calling out unique identifying characteristics of the people doing the crime, about 20% of the time they will actually believe it and leave. So the question is, well, how can we do that automatically? And when we think about the advancements in artificial intelligence, in both large language models and generative AI, we realized there's a way to actually make this automatic. And so as you can see in this example here, the Agentic AI is able to pull out a lot of unique characteristics about what it's seeing. It can pull out characteristics about the people it's seeing: what they're wearing, what they have on them.

Derek Boggs:

The stop sign, the mosaic around it.

Steve Lindsey:

So they're by a parking garage, the mosaic, the stop sign, maybe even what they're doing. So now we can take that rich set of information and actually, in real time, create a customized talk-down message, and that really addresses the believability. So again, it can't be just any talk-down message. It can't sound like, I forgot the-

Derek Boggs:

It can't sound like Siri coming through either. Siri, I'm still like, when are we going to update this and make it sound more like a human.

Steve Lindsey:

Exactly. Yeah. I was thinking even older. I am aging myself with old computer synthesis.

Derek Boggs:

Yeah.

Steve Lindsey:

No, it has to be a very believable voice. So you've got to be really careful with how you synthesize this, which again was something we solved for. But it's also, again, about understanding those characteristics. So this now becomes an actual customized talk-down.
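To illustrate the flow Steve describes, here is a rough sketch of turning detected scene attributes into a customized, guardrailed talk-down message. The attribute names, prompt wording, and the stubbed model call are assumptions for illustration; the real pipeline, model, and voice synthesis are not shown here.

```python
# Sketch: scene attributes -> constrained prompt -> generated talk-down text.
GUARDRAILS = ("only ask them to leave", "note that they are being recorded",
              "never threaten, insult, or accuse them of a crime")

def build_prompt(attributes: dict) -> str:
    subject = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    rules = "; ".join(GUARDRAILS)
    return (f"Write one short, firm security announcement addressing the person "
            f"described by ({subject}). Rules: {rules}.")

def generate_talk_down(attributes: dict) -> str:
    prompt = build_prompt(attributes)
    # Stand-in for a real text-generation call followed by text-to-speech.
    return f"[model output for prompt: {prompt[:60]}...]"

print(generate_talk_down({"clothing": "black jacket", "item": "basketball",
                          "location": "near the stop sign"}))
```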

Derek Boggs:

Are we ready to show people?

Steve Lindsey:

Yeah. This is the live demo step. Now what's really funny about this is we introduced this at ISC West early last... About a month ago, right? Was when ISC West was, and we started posting some of these videos online and we had a lot of people comment that there's like some Wizard of Oz behind the curtain doing this. And all I can say is there is no magic person behind the curtain.

Derek Boggs:

Should we reveal whose voice is actually the AI voice?

Steve Lindsey:

We actually should.

Derek Boggs:

It's our very own Ryan Porter.

Steve Lindsey:

It is our Ryan Porter.

Derek Boggs:

CEO.

Steve Lindsey:

Yeah. We actually-

Derek Boggs:

I don't think he has enough time to be sitting there monitoring a bunch of cameras, but who knows. Maybe.

Steve Lindsey:

That's true. And who would have thought that his voice would be so believable and-

Derek Boggs:

So scary.

Steve Lindsey:

Scary. Yeah, I would say. But yeah, we sampled his voice and we use that as the actual AI voice.

Derek Boggs:

Let's do it. So I'm going to hop up here. I'll do my best Vanna White. Steve's going to narrate some of this for us, so let's do it.

Steve Lindsey:

Yeah, let's do it.

Derek Boggs:

So we got Marilyn behind us here.

Steve Lindsey:

We have Marilyn right here behind me. You see Marilyn's kind of in this state where she's just kind of observing. And so now she's detecting that you're standing there and there's the floodlight.

Marilyn:

Attention to the individual in the black jacket and sneakers. You are currently loitering in this area. Please move along and vacate the premises immediately. Thank you.

Steve Lindsey:

Okay. Thank you. So that was awesome. It detected you and your sneakers. Now let's just grab some props and see how well she reacts to you. So holding a basketball.

Marilyn:

Attention to the individual wearing a black jacket and holding a basketball. You are loitering in that area. Leave immediately. Thank you for your cooperation.

Derek Boggs:

Should we? I just happened to have this sword.

Steve Lindsey:

Yeah, everyone brings a sword to work.

Derek Boggs:

Don't tell HR. This is for the LARPers.

Steve Lindsey:

Yes.

Marilyn:

Attention individual wearing a black jacket, light shirt and sneakers, holding a sword. Please vacate the area immediately.

Derek Boggs:

It was building the suspense there.

Steve Lindsey:

Yes. All right.

Marilyn:

Thank you for your cooperation.

Steve Lindsey:

That was interesting because it actually decided to speak a little slower that time.

Derek Boggs:

It was having to figure it out. I'm going to try something really quick. I've got the letters on my shirt.

Steve Lindsey:

Okay. Yes. Let's see how it does here.

Marilyn:

Attention individual in the gray LVT T-shirt and jacket. Please relocate from this area immediately. Thank you for your cooperation.

Steve Lindsey:

It's technically tan.

Derek Boggs:

I was going to say, if I'm a criminal and it's trying to deter me, I'm going to be upset. This is a tan shirt here, not a gray shirt, AI. Get it right. No, I think that's a good point right there. We could have a whole table full of props, but I think that gets the point across. That last one got the color of my shirt wrong.

Steve Lindsey:

Yeah, but it still picked up on LVT.

Derek Boggs:

Still picked up on the letters. In a real-life scenario, is a criminal going to question whether this is a real human talking them down just because it got the color of the shirt wrong? I don't think so.

Steve Lindsey:

Yeah, right. But you'll notice the latency: it was real time. And again, we make fun of using Ryan's voice, but we can actually use pretty much any voice that we want, and we can control what is said. One of the fears of using generative AI is, are we going to say something that would be perceived as illegal, or something that would offend somebody? And we have controls over what it can or cannot say. But we can also control how we want it to say it. So it's a very dynamic tool. As you can see, when you apply the Agentic AI to that, it becomes a really believable way to talk down.

Derek Boggs:

Again, I think the demo gods just shined upon us quite well. That worked pretty well. That was awesome.

Steve Lindsey:

Yeah. I think we're batting 10 for 10 on that.

Derek Boggs:

Yeah. It's worked pretty well.

Steve Lindsey:

It's worked pretty well there. So yeah, we've talked a little bit about the deterrence side, and that's really what's helping us prevent, and it's helping us prevent problems without having to have a human involved every single time an alert goes off. And let's just say that the person didn't leave the first time when we did this. Well, we still have the ability to do the escalations. Now let's talk about some of the use cases of these escalations. A lot of times when the general public comes across these units in the wild, as we like to call it, they're curious: what is this thing? And so what we find is that you can be really kind and informative the first time it detects something. A lot of times when people walk up to this and want to check it out, we have detections on the units that can tell if people are tampering or trying to break into the thing.

And so what we do is we can play a really nice message. So I'm going to get up here to where we were on that. Okay. Yeah. So we'll play a nice informative message: Hi, I'm an LVT mobile security unit, I'm here for your safety, or whatever. And again, it lets them know that something's detecting them. And a lot of times people don't have any nefarious intent.

Derek Boggs:

If you're just doing your thing or if you're up to no good, you're aware that this is live.

Steve Lindsey:

Yeah, this thing's here, and it just detected me. And so a lot of people will choose to leave, but those who don't, and they're up to no good, are probably going to keep doing what they were doing. And this is where the escalations come in. So maybe the first time it says this nice message, and then let's say that they don't leave. The next time we get a little more aggressive; this is where we can introduce our strobe lights. With Agentic AI, we have a lot more control now over what we want those lights to do. So instead of just being that same blue light or red light that's flashing, we can control which lights flash, what colors, what strobes, what intensity, what pattern, and that can be part of the experience at that escalation.

And then we can also say something different out the speaker. And then let's say they still don't leave; then we get pretty aggressive and we introduce our pan-tilt spotlight. We really get aggressive in what we say. We put the thing in berserk mode. We make it pretty obvious. And like you said, most of the time people are going to leave. They just don't want to mess with it. Now, in the unique situations where you've got repeat offenders, these are probably professionals, could be organized crime; maybe they're not lucid and they just aren't aware of what's going on; or maybe they're not a repeat offender, but they're just so desperate they're going to do it anyway. Now we're entering the prosecution phase. They're going to do it. Now we've got to figure out, well, who is it and how big is this problem?
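A minimal sketch of the escalation tiers just described: an informative message first, then strobes, then a tracking spotlight and a more aggressive talk-down, with a human alerted only if deterrence fails. The tier contents and re-check logic are illustrative assumptions.

```python
# Toy escalation ladder; a human is pulled in only when deterrence is exhausted.
ESCALATION_TIERS = [
    {"message": "informative_greeting", "lights": None},
    {"message": "firm_warning",         "lights": "strobe_pattern_a"},
    {"message": "aggressive_warning",   "lights": "spotlight_tracking"},
]

def run_escalation(still_present) -> str:
    """still_present: callable that re-checks the scene after each tier."""
    for tier, action in enumerate(ESCALATION_TIERS, start=1):
        print(f"tier {tier}: play {action['message']}, lights={action['lights']}")
        if not still_present():
            return "deterred"
    return "alert_human"   # deterrence exhausted; an operator takes over

# Example: the subject leaves after the second tier.
responses = iter([True, False])
print(run_escalation(lambda: next(responses)))
```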

Derek Boggs:

Did we capture the evidence? Do we have a positive identification? And how easily can we gather that evidence?

Steve Lindsey:

Yep, exactly. Now, before we go there, there are a couple of ways to think about deterrence and validation, and one is what we call human-initiated behavior modification. This is where a human is in a situation where they're the one who detected something wrong, they have validated that something is wrong, and they need some help on the deterrence

Derek Boggs:

perspective. Maybe it's the fourth-tier escalation there and this individual's not leaving. So now we've been as efficient as we can with technology. We've reduced the cost as much as possible, but we need a human.

Steve Lindsey:

Yeah. We need a human. But keep in mind, maybe a human triggered this. Okay, so one of the examples we gave at the beginning of this was the employee who's getting out of their car and into their place of business, or coming back out. And we've used this concept called a virtual escort in the industry for a while. The problem with virtual escorts is they're really problematic to initiate. Employees either have to call in and say, "I'm about to get out of my car," or ask someone to please put the system into whatever mode. It's just really a hassle, and the programs don't really work that well. So we thought about this and realized, you know what? With these mobile security units out there, the cloud platform that we have, and the mobile app that we have, there's really no reason why we couldn't put that power of LVT in the hands of employees as well.

Usually it's the security professionals in these organizations who really interact with the LVT system and the software. But now we can extend that to employees in a very easy-to-use way. So they have an app, let's say, and their manager says, "Here, scan this QR code," or whatever, and it basically engages their app and associates it with their location. So now when that individual walks outside or is about ready to walk to their car, they can literally engage this virtual guard, this escort, and it can be on the ready. It announces that the area is secure, it puts the unit in a strobe mode that's a little bit more active, and then if there is a threat, that employee can long-hold a button down, which puts it in a panic situation.

And then there's different rules that can be applied there. I mean, it can immediately notify a human at a security operations center to then start actually watching. It can even call 911 if needs be. Again, you want to think about how you want this to really work, but there's a lot of flexibility in how that can be used. And so again, this is reducing humans having to always be answering those requests. It allows technology to do most of the work, and then it allows that employee to essentially signal to people, I need help now. And that's when people actually get in there.
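Here is a rough sketch of the virtual-escort flow described above, from QR-code enrollment to the long-press panic escalation. The event names and actions are hypothetical; the actual behavior would depend on site policy.

```python
# Toy event handler for a virtual-escort workflow.
def handle_escort_event(event: str, site: str) -> list[str]:
    actions = {
        "qr_scanned":   [f"associate app with {site}"],
        "escort_start": ["announce 'area is being monitored'",
                         "enable active strobe mode"],
        "panic_hold":   ["notify security operations center",
                         "start live operator view",
                         "optionally dial 911 per site policy"],
        "escort_end":   ["return unit to normal monitoring"],
    }
    return actions.get(event, ["ignore"])

for e in ("qr_scanned", "escort_start", "panic_hold"):
    print(e, "->", handle_escort_event(e, "store_42_parking_lot"))
```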

Derek Boggs:

Well, with that dynamic nature of it, my mind jumps to campuses and the blue monolith that's there, where you can go and click it and there's an intercom. Maybe it's working, maybe it's not, but that could be a hundred yards away from you, versus what we all have in our pocket, which is a phone. And then you mentioned the ease of use for a virtual escort, how it's not easy today, so people probably don't engage it as often as they could. So that dynamic nature of this seems very valuable.

Steve Lindsey:

Yeah. And there are a lot of different single-point software solutions that can do some of these things, but it's just an additional cost. And what we're saying is, well, if we've already got a unit out there, let's add this capability and get more value out of the solution. And along those lines is this idea that we can do other things with our mobile security unit. We've recently introduced what we call a guard gate: the ability to put some kind of guard access control into a parking lot or a controlled area, and being able to do so, again, via an app. Or you can use an intercom that connects two-way to a security operations center.

There are clicker buttons. There are many ways this can be initiated. But what's also nice about this is you're audit-logging everything that's going in and out, along with video. You can apply license plate recognition, and you can apply character recognition on DOT tags that might be on trucks and such. So there's all kinds of additional information you get with this in addition to the actual access control gate. So again, just human-initiated ways of using the system, done in a very easy-to-use, rapidly deployable way.

Derek Boggs:

Yeah.

Steve Lindsey:

All right, so moving on. So we've handled the left side, which is prevention, and now we're getting into prosecution. So if you remember the dollar spend, and you remember I said that there's not a lot of money in comparison in prosecution.

Derek Boggs:

Got a lot lighter.

Steve Lindsey:

Yeah, a lot lighter. And a lot of that's just due to it being a very manual process right now. Very manpower intensive. It takes a long time. It's three x what the loss was, so what's the ROI? And so the challenge here is how do we help make that faster, less expensive, less manual, and more accurate? So going back to Agentic AI, you'll remember that there was a lot of information and metadata that we are now gathering as incidents occur. And we can immediately package that evidence up, almost with a bow, and say, "Look, here's all the evidence that you are probably going to gather anyway, and we can do it automatically."

So instead of searching through hundreds and hundreds of hours of footage from all the various cameras using NVRs and DVRs, we can almost just package it up in real time and send it to you. But you also have the use cases where maybe this wasn't an incident that was triggered, but I still have to go back and investigate something that happened. And that's where you need a video intelligence platform that's capable of doing what we call forensic search. So this is a new feature that we're introducing. Now, it's not like searching video is a new concept in a VMS or an NVR, but they've typically been very specific about what it is they're searching for. And we call that attribute searching-

Derek Boggs:

Like a dropdown.

Steve Lindsey:

Like a dropdown list or a checkbox. And there's a relationship between the AI that's detecting these things and the attribute declarations that allow you to search on them. So it's almost like upfront, you have to already know what it is you're looking for to make it searchable.

Derek Boggs:

Or it had to be classified already by a human, maybe that was looking at that alert when the incident happened.

Steve Lindsey:

Right. So what it requires us to do is be really good at searching, and I don't think a lot of us humans are great at perfecting search queries. We also have to be omniscient. We have to know what-

Derek Boggs:

Could you use that in a sentence, please? That one went right over the head there, Steve.

Steve Lindsey:

You have to be all knowing and be able to predict what the future is going to bring.

Derek Boggs:

Thank you. Thank you. Not for the viewers, just for me. Thank you.

Steve Lindsey:

You basically have to have defined every possible thing that you'd ever want to search on, make sure that you've got a computer vision model that can detect all of that, and make sure that you have an attribute for every single one of those. That's impossible. So it's just been very restrictive how searching video has worked to date. But technology advancements have now made this possible. We can now, after the fact, search on anything that we want, because the AI can actually find it. And not only that, but we don't have to be experts on search queries. We don't have to be checking a whole bunch of boxes and getting that whole thing correct to find what it is I'm looking for.

We can literally just type in a description of what we're looking for. So an example here is "person wearing a red hat and black jeans." I just type that in like normal text, the same way I would use it if I were telling you what they look like, and the AI can find it. But not only that, it can look for activities, what they were doing. We have examples of somebody pushing a shopping cart in the intersection of a road, coming out of a building, going into a building. So you can be very descriptive in what you're looking for, and the AI can actually do the search for you.
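To show the idea behind natural-language forensic search, here is a toy sketch that ranks clip descriptions against a free-text query by similarity. The bag-of-words "embedding" stands in for a real vision-language model, and the clip descriptions are invented examples.

```python
# Toy natural-language video search: embed query and clip descriptions into the
# same space, then rank by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # stand-in for a learned embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

clips = {
    "cam1_0230": "person wearing a red hat and black jeans pushing a shopping cart",
    "cam2_0145": "white pickup truck entering the yard through the north gate",
    "cam1_0310": "two people near a stop sign at the parking garage entrance",
}

query = "person with a red hat and black jeans"
ranked = sorted(clips, key=lambda cid: cosine(embed(query), embed(clips[cid])),
                reverse=True)
print(ranked[0])   # best match: cam1_0230
```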

Derek Boggs:

What I think is very necessary to call out is we have the context, we have the behaviors, everything that you were talking about at the beginning that allows us to deter. You're now able to search for that, the identifiers, the context, the behaviors, and see everything populate.

Steve Lindsey:

Yes, exactly. Again, if we can find the evidence faster and make it less manual, we can reduce a significant amount of the cost.

Derek Boggs:

I think a question I'm getting right now, which is a great question, is: is LVT the only one doing this? I feel like there are others out there in the space doing forensic, natural-language search. Verkada might be one that comes to mind.

Steve Lindsey:

There's a lot of... I would say it's probably been easier for cloud-native providers to do this than on-prem providers, but yeah, this should be something that you're seeing, the ability to do this. And if you don't, well, you should ask for it.

Derek Boggs:

Yeah, it's one of those table stakes when you're talking about evidence gathering, not necessarily on the AI deterrent side like we just demonstrated. That's pretty unique to what we do, but evidence gathering that needs to be a table stake.

Steve Lindsey:

Yeah, exactly. Now we also need to think about how you take an incident and all of its evidence like this and then start looking at it over time and space. And what I mean by that is, if you're going to build a case against a repeat offender, you have to be able to cross-reference independent incidents. And maybe those incidents didn't happen at the same location; maybe they happened in a different location. So you've got to be able to start correlating all of these things. And there are wonderful AI tools and solution providers out there that can solve this problem. One that comes to mind is a partner of ours, Aura. They do a good job of this. So this is where, again, you're now looking at a broader ecosystem of technologies that can help you solve these problems.

Derek Boggs:

Cool.

Steve Lindsey:

Another thing we're doing to help simplify this: we've always had the ability to retrieve video off of our units just using the cloud interfaces that we have. We thought back in the early days that having someone drive out and grab a USB stick was a little bit antiquated.

Derek Boggs:

A little excessive. Yeah.

Steve Lindsey:

So we've always made this available, and we've always thought about it from a high-resolution video perspective. And again, we've always had the constraint of cellular being in the middle of that, so we've had to be really careful what we transmit over that cellular link as part of our secret sauce. But we've always figured that if somebody's taking the effort to search for video, they probably want higher-resolution video. Now, the problem with streaming high-resolution video over the internet, even if you're trying to do historical video scrubbing, is that the bandwidth required to do that probably isn't there.

So we've introduced this new capability where we still have the ability to download high-resolution video clips, but we're also introducing the ability to scrub video. It might be a lower resolution, but typically what I'm looking for here is more of a real-time crime situation, where let's say that I did escalate the alert and I just quickly want to go back in time and see how things progressed to this point. That would be an example of a use case where this could be valuable.

Derek Boggs:

Well, yeah. You see so many instances on the news today that just capture the tail end of an incident, and that shapes the response, whether it's the public response or the police response. But now we can give people that full story very quickly so that we can respond appropriately. Yeah. It's very powerful.

Steve Lindsey:

And so that's what we're introducing: the ability to scrub video in real time. We can still download high-res video, and we can still manage all of that on very affordable cellular that you don't have to own as a customer or deal with all the headache of; LVT still provides that as a service. And then lastly, with all of this rich information that we have, this is one of the very strong benefits of, again, a cloud-native system: the fact that you have all of this enterprise data at your disposal. So think about all the data the Agentic AI is generating that you can now run analysis on, and this is what's helping to train the Agentic AI as well.

When we think about the capabilities of the Agentic AI and machine learning, there are feedback loops now that help it get better and better. So there's that data that's used, but then there's the data that we humans can analyze. How effective are my deterrences? Should I be making some change to them? What areas are being affected more than others? How effective are my security operators? How fast are they responding to things? What types of things do they tend to be dealing with? There's just so much information that helps us be way more proactive with not only how we deal with threats, but also how we deal with those scarce resources that we have.
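As a small illustration of the kind of after-the-fact analysis described here, the sketch below computes an autonomous-deterrence rate and a mean operator response time from an invented event log; the field names and numbers are assumptions, not real data.

```python
# Toy analysis over an invented event log.
from statistics import mean

events = [
    {"site": "lot_a", "outcome": "deterred",    "operator_seconds": None},
    {"site": "lot_a", "outcome": "alert_human", "operator_seconds": 95},
    {"site": "lot_b", "outcome": "deterred",    "operator_seconds": None},
    {"site": "lot_b", "outcome": "deterred",    "operator_seconds": None},
    {"site": "lot_b", "outcome": "alert_human", "operator_seconds": 140},
]

deterred = sum(1 for e in events if e["outcome"] == "deterred")
print(f"autonomous deterrence rate: {deterred / len(events):.0%}")

response_times = [e["operator_seconds"] for e in events if e["operator_seconds"]]
print(f"mean operator response: {mean(response_times):.0f} s")
```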

Derek Boggs:

Yeah.

Steve Lindsey:

So the last thing we want to talk about is that LVT has always been an open solution. We know that the security problems are bigger than us. We know what we do really well, and we know what our partners do really well, and we want to be able to leverage that. And so we have an open API that allows third-party partners to write to and integrate with our solution. We also do some integrations ourselves, and there are some future things coming with integrations, but there's a lot of excitement around this area. So to date, we've announced our Immix integration. We've also announced our Fusus integration, and there are future integrations coming very shortly. We won't name them right now because they're not official, but there's a lot of excitement in this area.

One of the things that's great about LVT and its rapid deployability is the fact that we're a full-stack solution, and you can just take it out of the box and start running. But we understand that there are also customers who either internally have their own solutions and want those integrated in, or they might be using an ARS system as well. So again, we want to be a great ecosystem partner, and so we're very interested in partnering with these best-in-class solutions to make that possible.

Derek Boggs:

The question I have for you on it too, because it's a scenario I keep hearing more and more, is that bridge, that partnership. Not just from solution provider to solution provider, but from public agencies to private entities, and how everybody can align to solve the problem of making the community safer and more secure.

Steve Lindsey:

Yeah, I think one of the interesting things, and LPRC has really pushed hard on this, is that companies like LVT and Aura and Axon and others have really brought awareness of the need for private-public partnerships. What we find that's really interesting about these partnerships, though, is that the desired outcomes of a security professional in the private sector are actually different from the outcomes that, let's say, a police officer is looking for. So we've got to make sure that the technologies are geared for those outcomes. For example, Fusus is awesome for a real-time crime center, where I'm trying to, in real time, direct police forces where they need to go, and they need eyes and they need video feeds coming in.

That's a great use case for a real-time crime center. But in a private entity, again, their focus is prevention. They don't want to be in this apprehension mode. In fact, there are a lot of policies where they don't even want employees to engage, so they're more interested in technologies that can try to prevent a problem from happening. So it's not necessarily eyes looking for where to send people, per se. Some might be doing that, but we're noticing that not a lot do. We just need to understand what problem we're solving for, what the right technology for that is, and then make sure that we've got those integrations.

But in the private-public space, there's a lot of excitement, because we're now starting to see technologies that can enable it. We're seeing those partnerships between private and public where they're willing to work together, even to the point of sharing data, which is absolutely critical to make that work. So yeah, I think we're just at the very beginning of that, but it's something that has to happen, because of that tail end of prosecution. If we don't figure that out and get it cost-effective, like I said, our deterrence becomes less effective because we can't hold people accountable.

Derek Boggs:

Yep. No, I saw comments throughout this around skepticism, but I think it's skepticism around policies and actual prosecution, and I think we're all starting to see that pendulum swing back towards, hey, let's have reasonable laws in place that actually deter the crime from happening by having strong prosecution.

Steve Lindsey:

You're right. Well, this brings up the efforts that we've done with the program access task force.

Derek Boggs:

Access task force. Yeah.

Steve Lindsey:

I'm getting old. Yeah. Access task force. Yeah. We saw that exact situation where it's not just public law enforcement and private entities, but you've got solution providers that fit in there, and you've got policymakers that fit in there, and all four of those entities have to work together to really solve this problem.

Derek Boggs:

Yeah. Yeah, we're just wrapping up the access task force in Detroit right now, so we'll be having results here in the next few months.

Steve Lindsey:

Yeah, we went bigger and broader on that effort. We've brought in a much broader ecosystem of partners, as well as a much larger city to deal with.

Derek Boggs:

No, it's been a great test, a great test of our solution, and more importantly, of just the partnerships to get things done. But Steve, I think that's it. I answered a few questions just with my poking and prodding from the conversation. But with that, we'll thank everybody for joining today live, and we'll be sharing this on demand after as well. So if you joined live, thank you. If you're going to be tuning in later, we'll make sure you get the recording. But without any further ado, thank you so much, Steve.

Steve Lindsey:

A lot of fun.

Derek Boggs:

Yeah. As always.

Steve Lindsey:

Exciting times.

Derek Boggs:

As always.

Steve Lindsey:

Yep.

Derek Boggs:

Thanks everybody.