What if the camera caught it, but nobody was watching?
In this episode of Your AI Injection, Deep Dhillon explores the hidden vulnerability in school security with Egor Olteanu, co-founder and COO of Volt AI. Despite having dozens of cameras installed, most schools have zero people actively monitoring them, meaning emergencies go unnoticed until it's too late. Egor shares powerful stories of lives saved by AI surveillance, from a student having a seizure in an empty hallway to a janitor bleeding from a head injury after hours. Together, they examine how Volt AI's system tracks threats across multiple cameras in real-time, stitching together location data and video to guide responders to moving incidents.
But the conversation turns provocative as they debate ethical boundaries: Should AI detect drug use? Where does health and safety end and authoritarianism begin? Deep challenges Egor on whether founders can maintain moral clarity when profit pressures mount. Tune in for a fascinating discussion about the razor's edge between protection and surveillance.
Learn more about Egor here: https://www.linkedin.com/in/egoro/
and Volt AI here: https://volt.ai/
Xyonix Partners
At Xyonix, we empower consultancies to deliver powerful AI solutions without the heavy lifting of building an in-house team, infusing your proposals with high-impact, transformative ideas. Learn more about our Partner Program, the ultimate way to ignite new client excitement and drive lasting growth.
[Automated Transcript]
Egor: There was a student that went into a seizure in the hallway during class. So there was nobody in the hallways. And when that happened, the notification was immediately sent to whoever needed to respond.
In this case it was the principal. So he was able to immediately go to the hallway. Like, the student's life was not at risk, but at the same time, just that ability to immediately respond and comfort her and get her the help that she needed, not only did that increase the confidence of the student herself, but also the teacher, the faculty.
Right. But then talking to them afterwards, they said to me, even the feedback that they got from the parents, it's like, this is amazing. Because if the system wasn't there, the cameras were there, but nobody would've noticed this most likely up until the class was over and then there were people in the hallways.
So it is this very comforting feeling that if something like that happens, people will get help.
Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection.
We're joined by Egor Olteanu, co-founder and CEO of Volt AI with an MBA from American University. Egor heads Volt's work using AI to monitor millions of cameras simultaneously, enhancing security and safeguarding lives. Prior to Volt, Egor worked at Google for eight years, including time leading a team in Alphabet's X and special projects group.
Egor, thank you so much for coming on the show.
Egor: Thanks for having me. Pleasure meeting you.
Deep: Awesome. Well, I'm excited to dig in. You guys are in a hotspot of, you know, a lot of people's kind of interests and, you know, frankly concerns around AI. So I think it'll be a really fun episode. Maybe start us off by telling us, what did people do without Volt AI, like, before, or if they're not using it, what's different with your solution?
And maybe walk us through a particular scenario, if you can.
Egor: Sure. So before Volt AI, essentially, a lot of people in the physical security space would have to rely on their guards or their operators to watch anywhere from dozens of cameras to hundreds of cameras. And just as you can imagine, you know, the attention span of a human is not that long.
Meaning that you can do it effectively if you're looking at a couple of cameras. But if you have more than, let's say 10, you have to constantly jump from screen to screen. So you'll never have enough people to cover all of those cameras. So what you end up having is a lot of gaps in coverage. So the chances of you actually noticing something happening at a facility which requires you to respond or take an action on are slim to none.
And one of the main benefits that we add is that essentially overnight you can have a hundred percent of your cameras being watched a hundred percent of the time for those important event triggers, be it safety or security. And you know, that if something does happen, it essentially gets put front and center in front of the operator that needs to take action on it.
Deep: Yeah. I mean, we've all seen the Hollywood movies where the guy staring at the security cameras, like, yawns and drinks his coffee and, like, stares at his phone or something, and then something happens. So that makes a lot of sense. And maybe lay out the context for us a little bit.
Like, who's buying your systems? What parts of the system do you manage? Are they installing their own cameras and you're just tapping into the stream feeds? Or do you have, like, a role in the camera selection and the camera placement and all of that? And is there any kind of, like, physical complement or action beyond monitoring and alerting?
Egor: So by far the largest area that we're involved in is education, and that's K through 12 as well as higher ed. But we also have corporate customers. We also have cities that use our systems. as you can imagine with every single, uh, company's journey, you start off trying to understand where the best fit for them in the market is.
And with us in the past year, we realized that education is where we can provide the most impact. So we started focusing more and more on that as far as actually getting the system to be up and running. We try to make it as plug and play as possible, and obviously with every month and every quarter we're getting closer and closer to that goal.
But we don't sell cameras. If your cameras have been installed in the past 10, 15 years and they sit on the network, chances are we can work with them. I'll put it differently. We haven't run into any cameras that we can't work with yet. As long as it's a good lens, as long as your network provides enough bandwidth we can use them.
At that point, we really try to understand what the infrastructure of the customer is, meaning, is your network good enough to go directly to cloud? If it is, fantastic, we can do that. It's very simple, it's very easy. If it isn't, or if your network security teams, or whatever security team that you use for your IT needs, say, no, we really need a device to sit on the edge.
In that case it's either a 1U or a 2U server, depending on how many cameras we need to handle. And then that is the installation. Again, it's very plug and play, because it uses the same server rack access that you already use, and it plugs into the same network that your cameras sit on. And at that point, it's really trying to understand essentially what your risk profiles are.
They can be very different from building to building or campus to campus. Set the baseline of the rules, and then just let the system do its magic, essentially. Learn your operation. Yeah. And suggest maybe some of the things that you haven't thought of before that should be monitored.
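To make the "we work with the cameras you already have" point concrete, here's a minimal sketch, in Python with OpenCV, of pulling frames from an existing IP camera's RTSP feed. This is purely illustrative and not Volt's code; the URL, credentials, and sampling rate are assumptions.

```python
# Illustrative sketch (not Volt's implementation): read frames from an
# existing IP camera over RTSP with OpenCV. The URL below is hypothetical.
import cv2

RTSP_URL = "rtsp://user:password@10.0.0.42:554/stream1"  # hypothetical camera

def read_frames(url: str):
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open stream: {url}")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:           # dropped connection or end of stream
                break
            yield frame          # hand each frame to downstream detectors
    finally:
        cap.release()

if __name__ == "__main__":
    for i, frame in enumerate(read_frames(RTSP_URL)):
        if i % 30 == 0:          # e.g. sample roughly once a second at 30 fps
            print("frame", i, frame.shape)
```

In practice, frames pulled this way would be handed to whatever detection models run in the cloud or on the edge server Egor describes.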
Deep: Yeah.
Well, let's, let's talk about that a little bit. So maybe we can pick a stereotypical school environment then and, and dig in there. So paint a little bit of a picture, like what are their primary concerns? First of all, I imagine, obviously like the shooter thing in, in the US at least is a big deal.
Other countries don't seem to have this issue, I wonder why, but in the US we do. But there's probably a lot of other issues too. Maybe, I don't know, drug dealing, weaponry, or something. And then maybe walk us through a little bit, like, where are the cameras typically placed, and what are the different maybe types of customers you have? Like, maybe some of them just have something on the front door.
Maybe some of them are very thorough and have them all over the place. Give us a little bit more color on a specific school, and then we'll take it from there.
Egor: Sure. So let's say a school doesn't matter if it's a big city or a small town, USA, whatever, right? They already have cameras.
And what we've seen so far is that the camera coverage in a typical school is fairly good. It's 60-plus cameras on average, so it covers the exits, the entrances, the major hallways, the sporting venues. And this is both indoors and outdoors. So generally speaking, the camera coverage is very good, and the cameras themselves are very good, and they're IP, they sit on the network.
Everything is fine. What we have seen so far is that the primary driver that takes a potential customer to start looking into these systems is violence. And when I say violence, it's not just the weapons, the active shooter events, even though that's obviously top of mind for pretty much every educator these days.
But also things like fights, like bullying, like people getting access to the perimeter of the school, or even inside of a school, who are not supposed to be there, right? People go in there after hours for some reason. And you can imagine if you're a parent, or if this is higher ed and you're in charge of, you know, protecting all of your students, that's a major concern.
Who is actually on my campus, and are they supposed to be there? But the violence piece typically is the primary driver. And as we start going through this journey and understanding, well, is Volt the best system for them, or is there a different system that could be better for them, we start asking them those questions, like, is it just the weapons that you're concerned about, or do you want a system that adds daily value?
Because in a lot of cases, if you just get a system that looks for weapons only, this is a very high impact, low probability event, meaning that you can pay for a system that costs a significant amount of money outta your budget, but then it might be, hopefully it's, a system that never gets used, because you'll never see a weapons incident in your school.
Right? So how do you justify the cost of a system like that? Well, this is where Volt comes in, saying, yes, we do that very well. I would argue we do that better than anybody else. But at the same time, on a daily basis, if, God forbid, somebody slips and falls, if you have people that are getting access to your school in unauthorized hours, any kind of vehicle accidents, bullying, fighting, safety concerns, all of that we monitor and we can alert you on, on a daily basis. Meaning that, yes,
you are protected if, God forbid, one of those weapon incidents ever was to occur at your school, but at the same time, the system adds daily value to everybody that deals with operations within your facilities.
Deep: So today, does your average school actually have somebody watching these cameras?
'Cause I don't know. I mean, it was a while since I was in school, but we (a) didn't have any cameras, and (b) nobody would've been available to even look at them. So is that really normal? Like, that's a position now in a typical school?
Egor: No, very rarely. What we see is that there are cameras and they record, and if something happens, obviously somebody goes in, pulls up the video footage, and they investigate and they try to essentially do the lessons learned in order to prevent it from happening again.
But that's a problem, because with a lot of things, if nobody's watching them actively, what are the chances of you responding and actually trying to prevent something from happening, or at least getting there very quickly to try to minimize the potential negative outcomes?
And again,
Deep: This, yeah, and plus there's the whole search and retrieval problem of trying to figure out which camera got that. You know, I don't know, two kids get into a fight. You kind of know where, somebody has to tell you what time, you gotta figure it out. And then it's probably not something that they do on a daily basis, is my guess.
It's not as easy as maybe navigating an Amazon Ring system or something.
Egor: You know, they don't even do it on a weekly basis or in a lot of cases on a monthly basis.
Deep: It's maybe if there's a crime and there's some police involved or something. Okay. So, like, walk us through, how do you deal with the civil rights questions?
'Cause, you know, schools are public entities. They're typically government backed. So there is the approach to, like, just not use personally identifiable signals like facial rec and that kind of thing, and to just look at, like, behaviors. Is that the general approach, or is it something else?
Egor: Typically when we have this question, we immediately say, well, you already have cameras. It's not like we're going in and installing cameras everywhere.
Deep: Mm-hmm.
Egor: It's being recorded. People are watching that in some cases. So nothing in that regard changes when it comes to what is it that the system actually does.
We can absolutely identify if it's a human versus a non-human. We can absolutely identify if an action is taking place that shouldn't take place, like a fight or a bullying incident, but we have no idea who that kid is or who that adult is. It's a human, right, but there's no PII that we store in terms of, this is the name, this is who they are, this is everything that they've done previously.
Uh, this is where they live, et cetera, et cetera. In that case, this is not something that we ever wanted to get in and we're not doing it.
Deep: let's maybe switch gears a little bit and walk through the signals themselves. So let, let's take something like a fight. how do you get your training data?
and it sounds like you have humans in the loop that sort of assess in the event that there's an incident to like validate it. So I assume you're doing some kind of like frame level analysis first, and then there's some sort of interframe reasoning to determine something that takes place over time.
Like a fight, for example. Maybe there's, like, aggressive postures before the fight that get detected on an individual level, and then you can kind of aggregate that up. I imagine you need these temporally segmented moments across a variety of cameras at some point to, like, hone your algorithms.
So you're, you know, in different lighting scenarios, different contexts. Let's say in the earlier kind of stages of a new signal that you're trying to extract, maybe you're not as good. You fire that up, with, you know, more precision errors, to a group of humans who,
like, assess it and then give you a final label, like, this is indeed a fight, or they maybe subcategorize it or something, and then now you've got some ground truth and your machine learning folks can go back and fine-tune the models. Is that something you guys have done and evolved over time, or...
Egor: Somewhat.
Well, first of all, it's obvious that I'm talking to an ML guy, because you just went into an area that I know very little about. Like, my engineering team and my co-founder, they're the specialists behind the magic. But in general terms, when we started this, we understood that to be able to tell the difference, for example, between somebody just playing around and somebody fighting, we need to get a lot of data.
And that took a while, of course, to gather, because in the beginning we didn't have all the resources that we wanted, but we got to the point now where we are very good at detecting the difference between somebody playing around and somebody fighting, somebody stretching or doing yoga on the ground versus somebody having a medical incident.
But even with that, once we go into a customer's site, we set expectations that, hey, listen, day one is going to be the dumbest the system will ever be, because the position of your cameras, the angles of your cameras, how clear your lenses are, the resolution of your cameras, how good the lighting is, the contrast between the walls and the floors, and what your people are wearing.
All of that matters. And as we deploy this and we do a little bit of data gathering, a structured one, and then the system starts watching this, unstructured, on a day-to-day basis, it becomes better. It gathers a lot more contextual information to understand when a problem is potentially happening versus, no, this is not a problem. On that note,
there's also three layers of validation that our system does, based on the confidence levels. There are the ones where we're very confident, meaning the system immediately alerts the customer; we know that the chances of a false positive are slim to none. Then if that doesn't pass, let's say it isn't sure, it goes through a second, essentially an AI validator, that asks, well, how likely is this to be a false positive or a true positive?
If it passes that, it hits the correct threshold, it gets escalated. If it does not hit the threshold on the second layer, this is where it goes to the human operators. Their job is not to watch cameras, because obviously that doesn't scale. Their job is to validate, meaning that if it pops up in front of their screen, the information that they know is where is it happening, what's happening, and which rule or which anomaly was triggered.
And they essentially have up to 30 seconds, even though we strive for 15, and we typically hit those SLAs, to say yes or no. If it's a no, the customer never sees that noise. This goes back into our training queue to improve the accuracy. And if it's a yes, the customer gets that, you know, information front and center that is actionable, so they can just go and execute.
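As a rough sketch of that three-layer flow: a confident detection alerts immediately, an uncertain one goes to a second-stage validator, and anything still ambiguous lands with a human reviewer under an SLA, with rejected events feeding a training queue. The thresholds, function names, and queue below are assumptions for illustration, not Volt's internals.

```python
# Hedged sketch of a tiered escalation pipeline; all values are illustrative.
HIGH_CONF = 0.95        # layer 1: alert straight away above this score
VALIDATOR_PASS = 0.90   # layer 2: second-stage AI validator threshold
training_queue = []     # rejected events feed future model improvements

def escalate(event, validator_score_fn, human_review_fn, alert_fn):
    if event["confidence"] >= HIGH_CONF:
        alert_fn(event)                                  # layer 1
        return "alerted"
    if validator_score_fn(event) >= VALIDATOR_PASS:
        alert_fn(event)                                  # layer 2
        return "alerted"
    if human_review_fn(event, sla_seconds=30):           # layer 3: human check
        alert_fn(event)
        return "alerted"
    training_queue.append(event)        # false positive: keep for retraining
    return "suppressed"
```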
Deep: So that human in the loop is getting the snippet, the little video snippet that just got captured, that triggered it, and then they're basically flagged with, watch it, is it or is it not the signal that the thing said? Something like that?
Egor: It's a snippet that includes still imagery and video, as well as a geolocational component, because a very big piece, actually our entire product, is built around geolocation. Because in some cases, the incident, where it started, that's where it stays.
But if it is a fight or an assault, you know, somebody could be on the ground, but then the other person that is involved in this could be traveling to a different facility. And if that's the case, essentially our system not just gives you the breadcrumbs of where the incident began and how it's progressing through your facility.
It also stitches together the relevant still imagery and the relevant video that essentially shows you the progression of that incident, including what is happening in real time. Because the big focus of the system is real time, as much information as possible, as quickly as possible.
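A minimal sketch of that breadcrumb idea, with hypothetical class and field names rather than Volt's actual schema: one incident object accumulates sightings from whichever camera currently sees the subject, so a responder can be pointed at the latest mapped location while keeping the full trail back to where it started.

```python
# Illustrative data structures for the "breadcrumbs" of a moving incident.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Sighting:
    ts: float                       # unix timestamp of the observation
    camera_id: str                  # which camera saw the subject
    map_xy: tuple[float, float]     # location on the facility floor plan

@dataclass
class Incident:
    kind: str                               # e.g. "perimeter_breach"
    trail: list[Sighting] = field(default_factory=list)

    def add_sighting(self, s: Sighting) -> None:
        self.trail.append(s)                # breadcrumb trail grows over time

    def current_location(self) -> Sighting | None:
        return self.trail[-1] if self.trail else None   # latest known position
```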
Deep: Yeah.
'Cause in that case, one kid punches another kid and then they're running away, and your system is attempting to track both parties (Egor: Precisely.) and is giving location. So if it's giving location, that sort of implies that you have a map of the school, maybe a map of the grounds, maybe even a map of the city if they leave the school.
Walk us through that. Like what's the map construction process look like when you go into a new school? Are you going in with scanners and like building a whole 3D model of the space? Are you relying on the school's, you know, maps, yeah. Walk us through that process a little bit.
Egor: This typically happens during the onboarding process before we even go live within the facility.
And one of the questions that we ask is that, do you have floor plans? They don't need to be architecturally accurate because we're not building this. It just needs to be accurate enough to give a very easy to understand visual representation of your space to whoever uses the system. In most cases they do have some sort of maps.
They can be more detailed or they can be less detailed. But still, like, a fire evacuation plan that gives you a general overlay of the facility will be enough for us to take that, turn that into a 3D model, and then place the cameras onto that 3D model in order to understand how people will move through that time and space.
In rare cases, where some facilities are very old or they have been designed and redesigned multiple times over decades, they don't have up-to-date maps. In that case, we do send people out and they go in. Lucky enough, you know, most Apple devices these days have a LiDAR function, and we use essentially an app to walk around and scan.
It's actually a fairly quick process. It takes a couple of hours, well, depending on the complexity and how many levels the facility has, but typically it's a couple of hours. And then we'll have that live model. And again, we upload it into the portal, we overlay the cameras on it. And the beauty about it is that now you have a live 3D digital twin of your entire operation.
So if you do end up changing some rooms or hallways or whatever, and that happens, you know, not just in the corporate world but also in schools, it's a lot easier to modify that in the portal. It essentially becomes your source of truth in real time, instead of going and digging through all those old maps and trying to understand how your operation has evolved over time.
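Volt hasn't described exactly how it registers cameras to the floor plan, but one common way to get a detection from image pixels onto a 2D plan is a per-camera homography fit from a handful of reference points. The calibration points and units below are invented for illustration.

```python
# Sketch: map a camera detection (pixel coordinates) onto floor-plan coordinates
# using a homography fit from four hand-picked reference points.
import numpy as np
import cv2

# Four points visible in one camera's image and their floor-plan counterparts
# (hypothetical calibration data for a single hallway camera, plan units in meters).
image_pts = np.array([[100, 400], [520, 410], [500, 120], [130, 110]], dtype=np.float32)
plan_pts  = np.array([[12.0, 3.0], [18.0, 3.0], [18.0, 9.0], [12.0, 9.0]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, plan_pts)

def to_floor_plan(pixel_xy):
    """Project a detection's image coordinates onto the floor plan."""
    p = np.array([[pixel_xy]], dtype=np.float32)
    return cv2.perspectiveTransform(p, H)[0][0]   # (x, y) in plan units

print(to_floor_plan((310, 260)))   # e.g. a point roughly mid-hallway
```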
Deep: Got it. You know, back to the, the incident. So now you've got this 3D map, you know, camera placements and locations. You can sort of determine based on what camera's picking up and you have some ability to like track the kid that's running away and the kid that's staying put maybe, and then the feed that the administrators are getting is sort of saying which cameras got them in sight and maybe maps that back to the name of the hallway or the room or something like that.
Egor: I can give you an example, the first time that we saw this work fantastically well. It was a couple of years ago, but it was a really cool aha moment for us. There was some, it wasn't a kid, it was some adult that obviously wasn't supposed to be there, that essentially jumped the fence and started walking on the school grounds, right?
And the system immediately caught that action of, essentially, that adult breaching the perimeter, and then it picked them up on one camera. It started tracking them up until he left, and then the other camera picked it up, and then the other camera picked it up. And the way this was stitched together is, essentially, hey, there's an incident that happened right here, here's the location of it, but FYI, right now they're over here.
And by the time whoever needed to respond actually responded and intercepted this person, they were in a completely different area. But on the screens installed in their office, as well as on the mobile devices, the pinpointed location kept changing, and so did the cameras.
It's like, hey, this camera sees them now. Boom, a different camera view comes up. Now they're over here, with the location. So this allows the responder to actually go to where the problem is currently, and not where it initially was, which could be very different.
Deep: Interesting. So with respect to, you mentioned a lot of these, call them more frequently occurring incidents, that aren't all security related.
I think you mentioned, you know, somebody's having a seizure or something. How do you determine which signals to go after? And does every school monitor all of these things? Walk us through that process a little bit. Like, how do you determine that, oh, okay, we wanna do seizures, we wanna do fights? And then how does your charging work? Do you, like, charge based on which signal somebody's going to actually get alerts on? And then do you have insights into how these things transform the school for the better? For example, not just the security incidents too.
Like some of the other ones are interesting.
Egor: So we don't charge anything extra. That's not something that we wanted to do and we're not doing that. Uh, essentially when we go in, we tell the customer that you're going to get everything that we have. With the subscription. And then if you ever want something different, tell us.
Right? And we'll tell you where it stands on the priority list and what the timing of launching that, if at all, will be. If it makes sense to build, it'll be somewhere in our roadmap. It could be very high priority, could be lower priority, whatever. But also what we tell them is that if we are building something for a different customer, you'll have access to that.
You don't need to use it, you don't need to activate this rule if you don't want it, but at least you know that you, you know, click here. This is everything that's available to you based on feedback from all the other customers. And if you want it, go for it. You know, it's included in the cost. Now, when it comes to what kind of stuff we suggest, it, it's very intuitive though when it comes to schools.
It's health and safety, right? Everything that deals with medical emergencies, they wanna know about. I'll give you an example, the first time this happened. There was a student that went into a seizure in the hallway during class. So there was nobody in the hallways. And when that happened, the notification was immediately sent to whoever needed to respond.
In this case it was the principal. So he was able to immediately go to the hallway. Like, the student wasn't, her life was not at risk. But at the same time, just that ability to immediately respond and comfort her and get her the help that she needed, not only did that increase the confidence of the student herself, but also the teacher, the faculty.
Right. But then talking to them afterwards, they said to me, even the feedback that they got from the parents, it's like, this is amazing. Because if the system wasn't there, the cameras were there, but nobody would've noticed this most likely up until the class was over and then there were people in the hallways.
So it is this very comforting feeling that if something like that happens, people will get help. Another one, there was a student at a university that went into a diabetic emergency. Same idea, on the floor, after hours, nobody was there. The system caught it, sent it, people came in, were able to call the EMTs, get the student help.
The more events like that that we catch, and the feedback that we get after that happens, in terms of, this is amazing, like the amount of things that we got, this is faculty speaking, right, just from the parents and the students, knowing that if, God forbid, something happens, they're protected. This is huge.
And we essentially started with that. We started with the health and safety piece. Let's focus on it. And as we run this, obviously with, you know, the perimeter breaches and all the other stuff, but as we run this, if you would like something else that possibly is more custom for your school, we can discuss it. But the health and safety piece, you are protected from day one.
Deep: Are there examples, like, of things that are more controversial, that people are more suspicious of? I remember being in high school, I would always keep some ibuprofen on me, if I had, you know, a headache. And the school had this strict policy where you're never allowed to take a pill, otherwise you'd get expelled.
But you know, I was just, like, a kid who wasn't gonna go through a bunch of paperwork to take a couple of ibuprofen for a headache. Are there cases that are more boundary cases, where the administrations have a very different level of tolerance for different things that some people would see as being very violating, that you still allow? How do you tread that line?
Egor: Typically the weird request that we get is from the corporate customers, not schools.
Deep: Okay. And,
Egor: I'm not gonna get into the requests that we get, but I can tell you that we say no.
Deep: Oh, come on. That's, we want the weird ones.
Egor: I will absolutely not go into that. But what I can tell you is that we say no a lot.
Okay? If it sounds weird, if it sounds like this is not something that we want, that we should be doing, we're not going to do it. Because even in our mission, it's like, we're here to protect students. We're here to save lives. And if the mission strays away from that too much, we're not going to do it.
Deep: Like, if it goes towards worker productivity, for example.
Egor: Sure. Or, you know, watching kids take aspirin pills, that makes no sense.
Deep: Well, you could argue that that's a health concern, that the school should know every pill that the kid takes.
Egor: Well, no, you said that this was for a very different reason, right? Because they were probably afraid that kids were gonna be doing drugs, and that's why they...
Deep: Yeah, exactly.
Egor: Yeah. Well, no, this is your internal problem. You deal with that. We are here to make sure that the system protects your kids, not tells you when they're doing something they're not supposed to.
Deep: Oh, so you're not, you're not, like, they are maybe asking you for monitoring drug stuff and you're saying no? Like, I guess, walk us through a little bit, like, what does the line look like?
Egor: I don't think we have that request. I don't think we've ever had that request.
Deep: Really? That seems like such an obvious request.
Egor: Yeah, but we are very clear at screening interviews, saying, like, dude, there are things that we are going to do and there are things that we're not going to do.
And as long as you set your, I wouldn't call them boundaries, but as long as you set your mission from the beginning of those conversations, when they're not even a customer yet, generally speaking, people tend to understand what you're focused on and what you would like to be doing.
Deep: I mean, you could make a strong health and safety argument for, uh, stopping kids from smoking weed at school, you know, like, or worse, right?
Egor: Sure. This is why you have smoke detectors.
Deep: No, I don't know. I don't know. I mean, the reason I'm asking is not that you should or shouldn't do it, but it's like, maybe give me a little more detail on, like, what is your ethical line? How do you define it exactly? 'Cause, you know, these things can be interpreted one way or the other.
And then how do you go back to that? What's your process for going back to that and making a determination? And maybe give us something that was more gray, where you genuinely had to think about it before doing it or not doing it?
Egor: For example, we're not, uh, we're not doing facial recognition for a reason.
Yeah. We don't need to know who these students are. If we would get a request in terms of, well, we need to be able to tell who's a male student, who's a female student, no, we're not going to do that. It doesn't really matter. What we're focused on is actions, like, this human is either supposed to be doing this or not supposed to be doing this.
In that case we're gonna notify you. We're not gonna classify students by whatever classifiers you would like. I'm trying to think of some of the requests that we've gotten that we were like, uh, should we do it, should we not do it? I think a good one would be, are you familiar with vape detectors these days?
There are devices that are being installed in schools that can tell if students are vaping.
Deep: Yeah. I mean, I didn't know there were detectors, but it certainly makes sense that there might be.
Egor: Sure. So now, when I was in school, nobody was vaping. People were smoking in the bathroom.
Deep: Yeah, that's right.
Egor: And there's a smell associated with that, so you didn't need detectors. Right. But now, because of vaping and how widespread it became, there are actual vape detectors that get installed. Right. And we've gotten a lot of requests, which we're still talking about what we're going to do about, to essentially integrate our system into those vape detectors to be able to notify if that detector is going off.
Deep: I mean, I think that's a conversation that you would have to have, based on what you're telling me. Because you're like, well, on the one hand, it's very much a health issue, we don't want middle school kids vaping. And it's illegal, I think it is actually.
I don't even know, but I think it is for under 14, probably under 16 for sure, under 18 possibly. But at the same time, I can see how you guys might say, well, maybe it doesn't fit in. How did you think about that, and what did you guys end up doing?
Egor: No, we're still talking about it.
It's still on the table. Whether we do that or not, that's a different story. But this was a very interesting one, because it's like, okay, well, we're not gonna provide the sensors, 'cause this is not what we do, but the sensors are already there, meaning that everything is already functioning. You are getting notified if somebody's vaping.
What you're asking us to do is essentially integrate it into our system in order for you to be able to respond a lot faster and more effectively to it, because you're using our system for other triggers of response. So that to us was like, well, do we do it, do we not do it? It's a longer conversation, and just like right now, just because Egor says that, you know, we don't wanna do something, that doesn't mean that my co-founder agrees with me or our board agrees with me.
It's, it's a discussion that we weigh pros and cons and then we make a call on it. But the vaping thing, yeah. That wasn't, that was one that we started talking about
Deep: I can imagine it, it's interesting that you even weigh in on it. And I don't say that in a negative way or a positive way, it's just different.
I mean, you could imagine a lot of companies would go a very different route. They would be like, hey, the customer's always right. In this case, the administrators are, you know, hired by political appointees. It's up to the political process to determine what gets tracked or not tracked in a school; we're simply, you know... You could pull a Zuckerberg and be like, ah, we don't know what goes on on our platform.
And, you know, you could raise your hands and, like, ethically absolve yourself from it. So I'm curious why you are in this decision making. And like, is it a function of prioritization? Is it a function of brand protection, where you don't want to be seen as facilitating authoritarians, not quite the right word 'cause you're not operating at the, you know, at the national level, but like, why are you in that conversation?
Why aren't you just saying, hey, we, you know, we have a blind approach to this. We take all of our school customers, we stack rank their requests, and we take the ones that intersect with feasibility and frequency of requests. And that's what we do. And if somebody turns it on or off, that's up to them.
Egor: Because the customer's not always right. I believe it was Henry Ford that said, if I listen to my customers, I would build a faster horse, right? We get a lot of requests, and one of the things that my co-founder is amazing at is essentially saying, well, let's talk about the problem that you're trying to solve and see if there's a better solution, instead of what you believe is the solution to that problem, right?
Everything that we discussed that we could potentially do, it ties into how high is it on the priority list? Does it fit with the mission? Do we want to divert resources to that? How many of these resources, based on the timeline that is being requested, and we get a lot of requests. So some requests are being analyzed very carefully because it just makes sense.
it makes sense with our product, it makes sense with our mission, it makes sense with our customers. Other requests, you know, can be a lot more complicated, like integrating other hardware into our system. We can work with all the cameras because we ingest RTSP streams. But integrating something,
anybody who's ever tried to integrate something understands that it can be something super simple or it can take months. And actually a lot of companies die from trying to integrate with way too many things. So all of that goes into play. I don't think it's a brand protection issue, it's more of a prioritization.
Does it make sense, and how? It's just...
Deep: ...how to spend your resources, basically.
Egor: Exactly. Of course.
Deep: Yeah. Yeah. I mean, that makes a lot more sense, particularly in, I mean, a mostly SaaS company like yourself, I think. So what about, let's leave the realm of schools for a moment. You mentioned you have some other clientele, and one of the clienteles you mentioned were, like, governments.
Like, who are they, are they municipalities? And where are they putting cameras? Are they putting them in, like, parks or public buildings? And what are their concerns?
Egor: We have some smaller municipalities and the concern is typically, you know, around courthouses or like sheriff's departments, right?
Or places where there could be an elevated potential of, you know, violent risks. We also have customers that are not government customers, but let's take a university that's in the middle of a city, like in the middle of San Francisco, and all the outdoor cameras are facing onto the city streets. There's a lot of stuff that happens there, from car accidents to fights to, man,
the type of stuff that you see on the streets of New York and San Francisco is just amazing. In that case, yeah, you have to work closely with the municipalities as well, because something could happen on the street that does not directly affect the customer that actually purchased your system, right?
Or that you're piloting a system with. It's still a good thing to be able to at least make those responders, those police departments or whoever needs to know about what's happening on those streets, aware. Whether the responder does anything about it, that's out of our hands, but at least we have the ability to give them that information.
Deep: Your system sounds fairly scalable. You're talking about pretty large physical arenas; a university campus can be quite large. You could be talking about, I don't know, I'm gonna guess, even like hundreds of cameras, maybe even more. Do you get requests to just do a city, you know, maybe a small town or something? If you're not doing facial ID, and you're not doing the stuff that kind of directly violates civil liberties, and you're doing behavioral detection, how different is that from, like, you know, the gunshot detectors and all the other sorts of sensors that cities use pretty regularly?
But are you getting requests like that, to just feed directly into the police response units and do an entire neighborhood or an entire city?
Egor: No, not really, because this is not our area of focus. We don't have any requests from the cities. How different is it from, like, gun detection? Well, you hear a sound, that's very different, and by the time you hear a sound it's already too late. If somebody shows up to your parking lot with a weapon,
then they take that weapon and they put it into a backpack, now you can't see that weapon, and they walk across your entire campus. What is a gun threat detection system going to tell you? Absolutely nothing.
Deep: No, I agree. I agree. I mean, that, that's kind of my point is that why aren't you addressing stuff like this?
Like, why aren't we taking this solution to a high crime neighborhood? Is it the lack of ownership of the locations for the cameras that the city has? But, you know, there's streetlights. They own the streetlights, they own the fire hydrants. Like, there's lots of city-type infrastructure they could lay.
And you're talking about health and safety. That matters in a city, in a neighborhood: you detect somebody falling over from an epileptic attack, you detect a mugging. Why aren't we taking a solution like this?
Egor: Focus. All of our resources are focused on education right now.
Deep: Okay. So let's jump out of, just Volt for a moment and like, let's look more societally wide. 'cause I think you have a very interesting perspective and perch. 'cause you, you know, you're intimately familiar with what it takes to detect these things, the kinds of problems that arise from defining the signals and acting on them.
If we walk out five or 10 years into the future, based on our conversation, I'm not seeing any practical reason, other than Volt's specific focus, that this kind of a technology is not released on a societal scale. Do you think that will happen? Is it happening? By other companies is fine.
It doesn't have to be your company.
Egor: It will happen. It is happening, not as quickly as I would like to see it happen, in a lot of cases because the people that are in charge of analyzing these systems are still boxed into very strict requirements, like, well, no video should leave the network.
You can't be connected to AWS, right? You can't do this, you can't do that. It's like, well, then you end up with a VMS that just records videos. How are you going to run advanced real time analysis and have access to, you know, very large, very sophisticated models if you're trying to keep everything in house and not allow anything to essentially go to AWS or whatever you're using? It doesn't need to be AWS, right?
So some sort of a cloud provider. But there's definitely big municipalities that started talking about it, you know, years ago. And we've had conversations with them before we started focusing primarily on education. And you can see typically there's two types of groups involved in those conversations.
The ones that understand the capabilities of a system like that and really want it. And then you have the ones that are saying, sure, you can have this, but you can do A, you can do B, you can do C, you can do D, you can't do F. And it's like, okay, cool. Well, the system won't work, right? No, we need to find a system that will work with those limitations.
It's like, well, good luck with that, right? There's certain things that we can do and certain things we cannot do. I think those are barriers that are gonna go away. And yeah, I absolutely do believe that a system like that, it doesn't need to be Volt, I would love for it to be Volt eventually, but a system like that should be looking at bus stops and should be looking at the little alleys where people park their cars.
I think it's gonna make all of us a lot safer.
Deep: I mean, you can imagine bus stops, the busy streets, alleys, like, every inch of the city being filmed and monitored. But you can also imagine companies going into other countries with less rigorous legal structures and legal systems, or even into ours in the modern environment; it's getting pretty sketchy.
The courts aren't really stepping up when there's blatant authoritarian tendencies in the federal government. So the risks seem very real. Like, once you've distilled the ability to identify scenarios and actors, you know, it will not have been the first time this company, whoever it is, is training up a model for a particular type of action that started off being, protect kids from school shooters or from, you know, epileptic attacks.
And you know, you can imagine there's no shortage of countries where it's much more benign things, like, you know, speech, being violated, you know, saying something against our fearless leader, whatever. That line gets much easier to cross when all of the infrastructure, everything, is already in place,
and we're just talking about one slight model difference.
Egor: You know, humans are imperfect creatures, and you can have the same tool being used correctly and the same tool being used incorrectly by different people. The way that I think about it is that as long as this type of a technology is not being centrally controlled, meaning that people, generally speaking, they're smart and they know what they like and what they don't like.
And if there is a company that does this, let's say in neighborhoods A, B, and C, that does this correctly, right? And then there's a company that does this in neighborhoods D, E, and F, that does it incorrectly. Those companies that are doing stuff they shouldn't be doing are eventually going to go outta business, right?
And companies that are doing things correctly are gonna take over their business. Maybe I'm naive to think that, but I do believe that in a system like that, all the less-than-ideal players would essentially go out of business and this will level set.
Deep: Well, I mean, I think you could probably make that argument more effectively in the US or Canada or Western Europe or something, you know, in the Western democracies.
But I think that argument's a lot harder to make in, you know, the more authoritarian nations. You know, like, I mean, China is very much centralizing all of this capability, and they very much do want identification of individuals. And it is in place and deployed. I mean, there's a reason that the Hong Kong protestors had their faces covered, and, you know, they were still identifying them through gait analysis, and they found them and did their authoritarian thing.
So I guess what I'm asking is, what do we as AI practitioners who actually understand the technology, like, what's our role to be played in encouraging thoughtful use of thoughtful signals, which it's clear from our conversation that you guys are applying? You know, even in your school setting, in cases that I thought were relatively benign, even, you know, like vape detection, I'd be perfectly fine with that. But like, what role do we play? It seems like at a minimum, we play the translation layer to policy makers. Like, these are the kinds of places, you know, that are suspicious. But if you think about, for example, the architects of the internet, you know, they spent a lot of time and energy thinking through how to, like, construct the mechanics such that it would ultimately stay kind of open.
I mean, I don't know. And you could argue that they failed, but, like, the intent was there. With respect to AI, you know, is there a role like that? Like, what do you think the roles are for us when we are so intimately familiar with the potential abuse scenarios?
Egor: What I think is that every person that is building a company like that, or is building a capability like that, should probably agree on what the moral compass is of that company and of that product. And if somebody's asking you to do something against your internal, and by your internal I don't mean the individual,
I mean, like, this is what the company does, right? If it goes against that moral compass, or whatever right and wrong you decide, just don't do it. And yeah, you might lose some contracts, and yeah, you might not get that big juicy agreement that you are looking for. But if you really want to try and make this better, then you should probably do that.
By the way, am I saying that I'm a perfect individual that will do that? No. What I'm just trying to say is that if we're talking in general terms, to be able to at least somewhat keep in check somebody abusing these technologies, right, that would essentially be up to the individual companies to say, no, we're not going to do that, or, we are going to do that.
Or, hey, if you want to do stuff like that and nobody's doing it, I guess you're gonna have to build it yourself, right? I think that's what we can get to, to try and, I don't want to call it police, but at least to try and influence how these technologies are being used.
Deep: I mean, you can imagine, you can imagine a number of folks saying, well, that seems kind of weak, right?
Because from a couple of vantages. From one vantage, as an individual founder of a company, today you have tons of market growth opportunity, let's say, you know, within education. Fast forward five, 10 years, let's say you stay on your amazing trajectory, you exhaust education. But at some point, you're gonna feel very real pressure,
assuming you continue to grow. You can imagine it's not that difficult to be faced with a feature that you need to add that's on the line, and you're arguing about it. And there's a lot of direct financial pressure, and relying on one's inner compass or a company's inner compass, you know, that hasn't really worked out that well in the past for society.
There's been, like, all kinds of violations there. So you can imagine, if you're thinking at a societal level, leaving it up to companies, where the profit motive is the prime and frankly legally mandated driver, at least for public companies, it seems grossly insufficient on some level.
Policy makers have to step in and define what that moral landscape actually is. And from what I see when I look at the Senate try to talk about AI, I am like, good God, these people have no idea about anything. They don't understand this stuff at all. Like, even, you know, I mean, you've got, I think it was Mitch McConnell and Zuckerberg, and, you know, McConnell's like, well, I don't understand how you make money if you don't charge anyone anything.
You know, and he's like, Senator, we run ads. Those are the people that are gonna decide this. It feels like it's not gonna happen.
Egor: You know, there's only one job that I never wanted in my life, and that's politics. If you're asking me for the perfect solution for that silver bullet of how we're going to keep this in check, you're asking the wrong guy.
Because to tell you the truth, I have no idea why the government does 99% of things that they do. I can't control that. But all, all I can control is what I do. And when Dimitri and I started this company, we had a very clear mission. We wanna save some lives. And as we started doing this and as we started seeing this actually work, we started changing, okay, well let's go from weapons to something else, into something else.
Let's add geolocation and all that natural progression. And we're getting more customers and we're becoming more successful. And it does help our customer acquisition and our customer retention and everything that deals with pipeline creation, right? But I still believe that we stayed true to our core mission and to our core principles with which we started the company when we only had one feature in mind.
Deep: So, yeah. No, I mean, I really appreciate your moral clarity. I mean, to be honest, part of my line of questioning is like, how do we get that to other founders? I feel like it's not there with a lot of founders. And so, and it might have started there with a lot of founders and then it disappears, you know?
Egor: I, I was lucky enough to have met some pretty amazing people when I was at Google, and a lot of them became my friends and advisors and investors in this company. Most of them have sold companies more than one. And, they all kind of told me the same thing, maybe in different ways, but the same thing is that.
being a founder and being a startup now, it's almost like a cool thing, where people get into it just to sell. And I'm not saying that not having an exit strategy is a good thing. No, you should absolutely have an exit strategy, but don't get into it just to sell. Get into it to, whatever, you know, build something cool,
change the world, you know, learn, make the world a better place, turn your dreams into realities, whatever, right? And if a correct opportunity comes for you and for your investors, you know, to sell the company, and that's a good exit, sure, do it, whatever. Yeah. But it's not a, it's not the motivation. Exactly. But it shouldn't be, if you did it for the right reasons.
And when you're saying, well, a lot of founders don't have that, well, a lot of founders don't get into this game for the right reasons. They get into this game to buy themselves a mansion in Malibu and buy a Lamborghini. Yeah. Which are all nice things, right? But that probably shouldn't be the motivator when you start a company.
Deep: Fair enough. So one last question for you. Have you guys saved, like, a life? Like, what's the moment you're sort of most proud of with respect to the system?
Egor: One of my most proud moments, because I was involved in that incident: there was a lady, a janitor, that was cleaning an office after hours.
And she either slipped or she had something happen to her, and she hit her head on the corner of a table, and she was on the ground, bleeding, right? And because our system caught that and immediately notified somebody, she was immediately given help and nothing bad happened to her, except for that head injury.
Once that happened, I was thinking, imagine what could have happened if they didn't have that system. Nobody would've known that this happened. Nobody would have found out...
Deep: ...until the next morning, you know.
Egor: Until the next morning. And I don't even wanna think about the potential negative outcomes that could have happened with that. To me, because I saw that and because I was involved in that, when that happened, to me it was like, holy crap.
Deep: Yeah. You guys are on the right path.
Egor: I mean, legitimately, to save some lives.
Deep: Yeah. Well, that's awesome to hear. Listen, Egor, thanks so much for coming on the show. I've really enjoyed this conversation.
Egor: This was one of the most fun podcasts I've been on in a while, so this is awesome.
Deep: All right, good. I like to hear that.