Is Your Truck Driver Awake? AI-Powered Alerts Are Slashing Fleet Crashes with Gareth Bathers of EXEROS Technologies

Could a split-second seat-shake save a life at 70 mph?

In this episode of Your AI Injection, host Deep Dhillon chats with Gareth Bathers, Head of Data at EXEROS Technologies, to unpack the eye-tracking, seat-vibrating AI keeping long-haul drivers alive. Gareth reveals how a retrofit camera kit slashed one fleet’s crashes by ~86%, why a two-second blink triggers an instant haptic jolt, and how human analysts still sift the footage for ethical red flags. From calibration phases that build driver trust to blind-spot AI trained on rainy London streets, the conversation explores whether shaking drivers awake is a stop-gap, or the future of commercial road safety.

Learn more about Gareth here: https://www.linkedin.com/in/bathers/
and EXEROS Technologies here: https://exeros-technologies.com/

Xyonix Partners

At Xyonix, we empower consultancies to deliver powerful AI solutions without the heavy lifting of building an in-house team, infusing your proposals with high-impact, transformative ideas. Learn more about our Partner Program, the ultimate way to ignite new client excitement and drive lasting growth.

[Automated Transcript]


Gareth:
For example, if I tell you, if you close your eyes for two seconds, we're gonna set an alert off and the seat's gonna vibrate, that's very understandable. Whereas if I say to a driver, I'm gonna detect your pupil dilation and the droop of your eyelids, and I'm gonna track it over time and maybe count how many times you've blinked in half an hour...

One, the driver is bored and uninterested in what I'm saying, but two, they don't really understand what that means to them. So we have to build technology that works for them.

Deep: Yeah, and it creates a nervousness about it.




Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection we'll be exploring how AI is transforming road safety with Gareth Bathers, Head of Data at EXEROS Technologies. Gareth holds a degree in physical geography from King's College London and leads the data science and modeling that powers their predictive safety platform.

EXEROS Technologies provides smart camera systems and in-vehicle AI driver assist solutions delivering real-time safety insights for fleets. Gareth, thank you so much for coming on the show. [00:01:00]

Gareth: Thanks for having me.

Deep: Awesome. Maybe get us started. Tell us, um, what did people do before your solution existed and what's different with your solution?

Maybe walk us through a particular scenario.




Gareth: Well, I think fundamentally, before they had our solutions, they had accidents, or certainly had more accidents. We've been installing fleet safety systems since 2009, primarily based around camera systems. You know, some of the fleets, before they installed our systems, had something like 86% more accidents compared to afterwards. So if you go back to 2009, we were primarily installing what we'd consider to be passive, relatively dumb cameras. So that's recording, and you'd use the data after the event. So if there was an accident or something happened, you'd go back, review the footage, and see who was to blame.

And ultimately, what you're looking to do there is reduce your insurance costs, reduce the obligation and the accountability. What we saw over time was people moving away from these passive systems toward active systems, let's say [00:02:00] slightly more proactive. Think, if you will, about going from a relatively simple recording system into a Ring doorbell that is able to alert you when something's happening.

So we now have cameras in vehicles that are alerting us to events. And then more recently we've started down the, you know, the route of AI and assistive systems. So these are cameras and systems that are actively helping drivers become safer drivers, more efficient drivers, ultimately supporting the fleet and the drivers in being safer, trying to reduce accidents, and being slightly more proactive to ensure that accidents don't happen in the future.

So we've seen, you know, a real change from customers looking to be relatively reactive to being fully proactive, actually managing their fleet and managing their drivers effectively.

Deep: So maybe, like, set the context for us a little bit. You know, are these like shipping companies? Maybe describe the fleets.

Tell us a little bit about, like, where exactly the cameras are. Is it like a dashboard cam pointed out? Is it a cam pointed at the driver? You know, are there cams when they pull [00:03:00] into places like warehouses or whatever? Like, where exactly? Maybe set the context in a little more detail for us.

Gareth: Yeah, so we have cameras facing the road. We have cameras facing the driver. We have cameras facing various parts of the road. So we have different types of systems. It could be a dashboard-mounted camera facing the driver. But typically we have several cameras. One is facing the road, observing the road and everything around the road.

One is facing the driver, looking for things like fatigue, looking for signs of distraction, mobile phone use, let's say. And then we have other cameras facing the side of the vehicle, looking for side impact or, you know, risks along the side. And these cameras are looking for lots of different things.

They could be doing things like ADAS, so that's supporting drivers with things like lane detection and tailgating. I think the system that gets the most interest is the driver monitoring system. So that's an infrared camera facing the driver, looking for things like fatigue, looking for things like distraction.

These systems are monitoring you the whole time, looking to see whether you have signs of fatigue. So [00:04:00] that's emerging fatigue, plus also that last-minute warning to say you've just had an incident, maybe you've got your eyes closed. And, to answer your question, we have systems across all vehicle types, anything from a train through to an HGV.

We have buses, for example. We have some buses in London operating the system, and we're actively monitoring those drivers for signs of fatigue. And if a driver was to have a fatigue incident, we have a haptic sensor, so we vibrate their seat to wake them up.

Deep: Oh, wow. Okay.

Gareth: To make sure that doesn't turn into an accident.

Deep: Huh. Walk us through the data flow. So you've got all this video. What do you have on board? You know, do you have GPUs and some hardware on board to do kind of real-time understanding of the signals? And then what is it that you're pushing up into the cloud and monitoring kind of after the fact, for, you know, analysis across a fleet, across a set of drivers, across scenarios?

Gareth: Yeah, good question. So all of our technology is in the vehicle in terms of detection and inference. So all of the AI is really within the vehicle, and it [00:05:00] has to detect very, very quickly. So we have very low latency, you know, immediate responses to what's happening. That's all sitting in the vehicle.

And it's important to note that the system in the vehicle is monitoring you the whole time, but it's not necessarily recording you the whole time. Our fatigue detection system is relatively passive. It's only gonna record a clip if you have an event. At that point, we then push that footage to the cloud.
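Mechanically, that kind of event-triggered capture is often built on a rolling pre-event buffer: hold the last few seconds of frames in memory, and only persist a clip when an event fires. Here's a minimal sketch; the frame source, event flag, and clip hand-off are hypothetical stand-ins, not EXEROS's actual pipeline:

```python
from collections import deque

class EventClipRecorder:
    """Buffers recent frames continuously; keeps a clip only when an event fires."""

    def __init__(self, fps=15, pre_roll_s=10, post_roll_s=5):
        self.pre_buffer = deque(maxlen=fps * pre_roll_s)  # rolling pre-event window
        self.post_needed = fps * post_roll_s              # frames to keep after the event
        self.active_clip = None                           # frames of an in-progress clip
        self.post_count = 0

    def on_frame(self, frame, event_fired):
        """Feed one camera frame; returns a completed clip (list of frames) or None."""
        if self.active_clip is None:
            self.pre_buffer.append(frame)
            if event_fired:
                # Event: seed the clip with the buffered pre-roll context
                self.active_clip = list(self.pre_buffer)
                self.post_count = 0
        else:
            self.active_clip.append(frame)
            self.post_count += 1
            if self.post_count >= self.post_needed:
                clip, self.active_clip = self.active_clip, None
                self.pre_buffer.clear()
                return clip  # ready to push to the cloud for human review
        return None
```

The key property is that nothing is written to storage unless an event occurs, which matches the "monitoring, not recording" distinction Gareth draws.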

And depending on the contract you have with us, the vast majority of our customers use what we call the driver support center, or the DSC, which is a human monitoring center that then reviews that event to see what levels of fatigue you're displaying. As much as we love AI and systems, the reality is people are very, very good at detecting these things and very, very good at reading the signs and symptoms.

So what will happen then is, if you've had a serious event, that could be escalated further into your operator, into your management team. And then maybe the next day you might be taken aside, and someone will say, hey, are you tired? And they may take you through an intervention, let's say, on your level of tiredness.

Typically, that happens if you've had a few [00:06:00] events; the first event is usually fine. What's interesting with our technology is that if you have it installed, and let's say you've managed to set it off, so you've managed to vibrate the seat, we find that 92% of the time, that's the only time it will happen within the hour.

So we're pretty much avoiding any chance of it happening again. Whereas we've seen other fleets where this isn't the case. You could have 27 events. You know, we've seen a case of someone having 27 events, because we're not able to haptically warn that person. They're just having these repeat events. We're very much about trying to wake them up and make sure it doesn't keep happening.

Deep: And what do they do when they get a haptic warning? Do they pull over and rest, or do they just, I mean, what's the typical response? And do you ever have conflicts of interest, where the truck's gotta be at a certain point at a certain time, and it's like, okay, yeah, they're a little tired, but they seem to be able to snap out of it? Do you mandate they pull over, or are those decisions up to the fleet managers?

Gareth: No, absolutely. So the decision of what to do is very much [00:07:00] up to the fleet manager. And I suppose it's worth pointing out that when we install our technology within a fleet, we typically go through three phases. Phase one is what we call the calibration phase. The systems self-calibrate; they don't need us to calibrate them, but we tend to calibrate the organization and calibrate the system within the organization.

So what we do within that first phase is we run the system quietly, passively, and silently. There are no in-cab alerts, no warnings. That gives us an understanding of what that fleet has at a sort of benchmark level.

So what's their minimum level of problems? We then go through with the fleet what a sort of acceptable level of risk might be. And I'm not saying an acceptable level of fatigue, but there will be occasions where you need to look at this data and say, okay, what is a level that I'm willing to deal with?

And then looking to say, well, what happens if we have an alert? A genuine fatigue alert is a very serious event. Whereas for something like a series of yawns, or symptoms of fatigue, we may just keep that alert in the cab and just say to the driver, hey, take some rest.

That's up to [00:08:00] them to do that. If they have a genuine fatigue alert and an alarm where they've closed their eyes and they've gone to sleep, it's then up to the organization to decide what would happen. Now, some organizations already have policies and procedures for this, others need help in defining that.

Once we've done that phase, we then move into the active phase, where we take driver feedback. Once that's gone live in the cab, we say, hey, what did you think of our technology? Did you think it worked? Did it help you? And then we make that part of the next journey into going live.

We have to have the driver buy-in for our technology before it goes live, because ultimately they're the ones in the cab with the tech. So it's very important. Once that happens, we then work with the organization to make sure that we can react to those events and take correct procedures.

It's not up to us what you do. It's up to the organization, the fleet managers themselves, what they decide to do. Some will take a very active approach, in terms of maybe removing that driver from the shift at that moment. Others will maybe call the driver and say, hey, can you take some rest? What we are looking for long term is to work with fleets to look at things like shift patterns, driver routes, driver [00:09:00] rotation, looking at the way we can use our data to ensure our systems are never really needed in the cab, if that makes sense.

Yeah. We don't really want it going off in the cab. We actually want to make sure...

Deep: Yeah, it's already a high risk at that point, right?

Gareth: Absolutely. Yeah. If you're in a situation where you're closing your eyes and going to sleep, we've missed a trick. We've missed the opportunity to help you. So we're absolutely looking to bring that back.
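The silent calibration phase Gareth describes amounts to measuring a fleet's baseline event rate before any in-cab alerts go live. A toy version of that benchmarking might look like this; the data shapes are assumptions, and a real deployment would presumably weight by severity, route, shift pattern, and so on:

```python
def fleet_benchmark(events, hours_driven):
    """Summarize silent-phase events as a rate per 100 driving hours.

    events: list of (driver_id, severity) tuples logged with alerts disabled.
    hours_driven: total fleet driving hours during the calibration phase.
    Returns (events per 100 hours, counts broken down by severity).
    """
    rate = 100.0 * len(events) / hours_driven
    by_severity = {}
    for _driver, severity in events:
        by_severity[severity] = by_severity.get(severity, 0) + 1
    return rate, by_severity
```

The fleet and the vendor could then agree on per-severity thresholds against this baseline before switching on live alerts.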

Deep: It feels like there are sort of two approaches kind of converging. I'm curious if you agree. You know, if you think about a modern self-driving car, like a Tesla or something, it has all these cameras on the external side. It has a lot of this kind of alert capability, and it's not fleet oriented.

But it also, importantly, can take over, or is already driving. It seems like at some point this is gonna get into commercial vehicles as well, and once it's in there, it's not a very far leap to think that you would be taking over the safety features. So how are you guys thinking about that?

Are you [00:10:00] thinking, well, we have no choice but to get into full self-driving on the commercial vehicles and be able to intercede, not only to wake up the driver, but to actually, like, stop the vehicle from doing something? Or are you thinking that the two systems would end up evolving and running together?

Gareth: Probably evolving together, I think, or at least working together at some stage. I think as much as we all wish self-driving cars were a reality tomorrow, we're still a long way away from fully self-driving vehicles on our roads, especially somewhere like the UK, where the legislative framework is quite strict.

What we expect to see is self-driving cars operating in certain domains, so maybe on the highway in very controlled circumstances. From our technology perspective, it's important to note that we are a retrofit organization, so we always put our technology in after the vehicle's been built.

Typically fleets come to us looking for one technology across the whole fleet. So they might have a fleet of vehicle types, different brands, different models, all with slightly different systems in there. Our [00:11:00] system is the same whichever fleet you put it in, whichever vehicle. So they're looking for that consistency.

They're also looking to capture the data from those systems, which they can't necessarily do with an OEM product. So there is a requirement for our system to sit relatively independently, or very independently, from the car's systems, and provide that agnostic data in a way they control.

The other thing that's important is we are operating within a commercial fleet environment, which is very different to you and I driving our cars.

Deep: Sure.

Gareth: So when we get into our cars, I dunno if you own your car or lease your car, but ultimately we have a relationship with our car that is very different to a driver in a fleet.

For example, I own my car, which means I'm invested in the technology. I understand the safety technology I get. I love it and I hate it in equal measure, but it's my car, you know; it's full of my equipment and my child car seats and all those things. Whereas the fleet driver turns up in the morning, and they dunno what vehicle they're gonna be driving. Typically they might be driving a different vehicle every day.

So the relationship between them and the vehicle is very, very different, and therefore the relationship with the technology on board can be quite [00:12:00] different. So we need to provide that consistency across the fleet, to say, if you turn up to a vehicle that belongs to this fleet, there will be a consistent set of technology.

And ultimately that technology needs to stay the same. So if you are delivering a fatigue detection system, it needs to be consistent across those vehicles, so that someone knows that when they get into that vehicle, it's gonna behave in a certain way and act in a certain way. And I'll let you in on a secret: that's quite difficult for us as AI people, because we want our system to constantly change, evolve, and improve. But actually, in the environment of a vehicle, sometimes we need to keep things the same.

We need to have that consistency in the way the system operates, so that there are no surprises on a Monday versus the Friday.

Deep: Yeah. I think that consistency is kind of a really good point, right? Like, if we think back to an old technology like anti-lock brake systems in the early days. Before ABS, if you were driving on snow or ice, you had to really pump the brakes, and you had to do things like counter-steer in order to function. But ABS sort of [00:13:00] really changed the game, where all of a sudden you can just press down the brakes and steer where you want to go.

It simplified it greatly. But one of the challenges, during the period where a huge percentage of cars didn't have it, was that people would get used to the ABS in their car, then they'd be in another car with no ABS, and they'd need to pump and steer, and they can't, or they forget, or it's not instinctive.

So that argument kind of resonates with me, that you really want that consistency, particularly in the scenario where people are swapping vehicles.

Gareth: Oh, absolutely. It needs to be consistent. It also needs to be predictable and understandable. And I think we have to be careful that we don't build technology that people don't understand.

They need to understand why it's doing something and what it's doing. And our fatigue technology is quite interesting, because there are very advanced fatigue systems out there that do all sorts of things, looking at driver behavior, the way that you are responding to things.

Your eye level, your pupil dilation, et cetera, et cetera. The problem is, when an alert is then fired, the driver doesn't really understand why, and it becomes a conflict, a sort of [00:14:00] tension between the driver and the technology.

Deep: Yeah.

Gareth: So often when we're giving an alert, it's really gonna be something relatively basic. For example, if I tell you, if you close your eyes for two seconds, we're gonna set an alert off and the seat's gonna vibrate, that's very understandable. Whereas if I say to a driver, I'm gonna detect your pupil dilation and the droop of your eyelids, and I'm gonna track it over time and maybe count how many times you've blinked in half an hour...

One, the driver is bored and uninterested in what I'm saying, but two, they don't really understand what that means to them. So we have to build technology that works for them.

Deep: Yeah, and it creates a nervousness about it, you know, particularly if they're being, uh, reprimanded for the events.
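The two-second rule Gareth describes reduces to a simple per-frame counter. Here's a minimal sketch, assuming a hypothetical upstream driver-facing camera model that emits an eyes-closed boolean for each frame:

```python
class EyeClosureAlert:
    """Fires once when eyes stay closed past a duration threshold."""

    def __init__(self, fps=30, threshold_s=2.0):
        self.frames_needed = int(fps * threshold_s)
        self.closed_run = 0  # consecutive eyes-closed frames so far

    def update(self, eyes_closed):
        """Feed one frame's prediction; returns True when the alert should fire."""
        if eyes_closed:
            self.closed_run += 1
            if self.closed_run == self.frames_needed:
                return True  # e.g. trigger the seat's haptic vibration
        else:
            self.closed_run = 0  # any open-eye frame resets the timer
        return False
```

Ordinary blinks reset the counter long before the threshold, which is exactly why the rule is easy to explain to a driver: only a sustained closure fires the seat.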

Gareth: We don't typically reprimand drivers, by the way, just to be clear. We're looking to support drivers. Ultimately, we're looking to support drivers so that drivers are working well.

Deep: But the managers might, you know.

Gareth: Yeah. Well, so this is one of the concerns that drivers have.

Obviously drivers are concerned about data, about video, being used for discipline. I think it's important to remember, and I imagine it's the same in the US, but in the UK we [00:15:00] have a driver shortage. There are not enough drivers for the vehicles we need them for.

So nobody really wants to discipline a driver. We don't want to reduce the number of drivers in our fleet. We need them, until self-driving trucks come along, which is a long way off. We need those drivers. So really we're not looking to discipline. What we're looking to do is help those drivers. I'll give you an example from the buses in London.

We have lots of systems running in London right now, monitoring drivers, monitoring fatigue levels. If there's a fatigue event and it's considered a very serious event, the driver is taken through a very supportive counseling session to understand what might be causing their fatigue. Is it something to do with the route they're driving?

Is it the vehicle? Is it the temperature? Is it the weather? Or is it something in their lives that maybe we can help 'em with? So it is very supportive. Only if that doesn't fix something would it turn into disciplinary action. That's something we want to avoid, 'cause the last thing we need is a driver taken off the road, 'cause we just don't have enough of them.

Deep: Yeah, that makes sense. So what is the KPI that you're really [00:16:00] optimizing for? Is it minimizing events?

Gareth: Well, ultimately we're looking to increase safety. The KPI is: we want to reduce accidents and improve safety.

I was looking at some research recently. The US has something like five times the UK's level of road deaths per hundred thousand people. Our roads in the UK are relatively safe, but they're not safe enough. So we have, for example, in London, what they call Vision Zero, which is looking to have zero road deaths caused by vehicles by 2041.

That's a bold aim, but it's absolutely something they're aiming for. And by 2030, they want to ensure that nobody is killed by, or in, a bus. So there's an absolute driver for this safety.

Deep: Yeah.

Gareth: So the KPI really there is about improving safety. And of course, on a more transactional level, we are looking to reduce the number of events, reduce the number of alerts, and ultimately increase driver performance.

Deep: I mean, one of the reasons the traffic death rates are so much higher in the US is 'cause people just drive a lot more. So you have to look at it as a function of the distance driven, but [00:17:00] also as a function of the road. A driver crossing the passes in Wyoming in January is just inherently taking a much greater risk than one driving in the South somewhere where there's no ice on the road.

Gareth: Oh yeah, I agree, and I think there are circumstances there. But what you see in Europe is a very active approach to reducing risk, whether that's reducing the number of people in cars, so public transport, you know?

Deep: Right. Yeah, no, I think that's a really important point. Whereas that particular angle is not taken in the States, which, you know, makes a lot of sense. If you're reducing the net number of deaths per capita, then that would be a natural lever to pull. Whereas, yeah, here there's a very anti-public-transport lobby, so it's hard to...

Gareth: Absolutely. And you know, we work for public transport, we work for HGVs, we work for car fleets. We don't typically sell our technology to car drivers. If you buy our technology and you have a car, it's 'cause you have a commercial fleet. So you might have a company car, for example, or a fleet car.

Deep: You might sell it to, like, rental car companies though. Like a Hertz?

Gareth: Potentially. Potentially. Although rental car companies are slightly [00:18:00] different in terms of their expectations. What we talk about quite often is the stakeholder relationships. So if you look at a commercial fleet, buying technology like ours is very much a strategic investment. It's strategic in the sense that it has multiple requirements.

You're looking after the asset. Ultimately the asset is the vehicle, the driver, the cargo, the passenger, and the public. So a fleet manager has a vested interest in making sure that the investment they make is not just about the driver, not just about safer driving, but also about protecting that vehicle and the asset, ultimately reducing their insurance costs and ensuring their brand reputation is upheld.

We've all got vehicles driving around with our logos on them, and there's nothing worse than a logoed vehicle driving badly. So the reputational piece is at stake.

Deep: Yeah, rolling down the window and hollering at people or something, you know. A road rage incident from your fleet driver is probably, uh, pretty bad if you're Amazon or something.

Gareth: Absolutely. I dunno if you have it in the States, but we have stickers on the back of the vans in the UK saying, how's my driving? You know.

Deep: Those have been there for decades now.

Gareth: Well, there's always the joke, isn't it, that [00:19:00] when you ring the number, the person in the cab answers the phone.

Deep: I actually have never called one of those numbers.

Gareth: That's probably why.

Deep: Yeah. Well, I mean, you know, you're driving. Were you gonna remember the number?

Gareth: But it's a nudge, isn't it? It's a nudge to say, you know, my logo's on the side of this vehicle. I take responsibility for the driver, even though that driver could be a fleet driver, could be someone who I don't know. But they're driving my vehicle, therefore they have to conform to my rules, and I want 'em to drive safely.

Deep: Yeah. So let's take a little bit of a turn. I kind of wanna dig in a little bit more on the AI and machine learning side.

I assume you're gathering training data, so you have to, like, figure out what determines an event, how to gather the data for doing that, how to curate it. You probably have a team that works on, you know, new model development.

How do you determine what things you're going to, you know, make an event? Then how do you actually gather the training data? Because you have potentially just an incredible number of hours of footage, and you need to be able to capture [00:20:00] exactly the moment when eyes go closed for a period and then open.

And I imagine you have new product scenarios that you're considering. Like you mentioned the side cameras, being able to maybe alert that someone's about to whack a car from the side or something. So walk us through that training data gathering process. Like, how do you get the data? How do you decide what to label?

Gareth: That's a good question. I'll give you an example of a new product we have, which is, as I mentioned, the blind spot detection down the left-hand side of the vehicle. Obviously in the UK you're driving on the left, therefore the left is the risk. So we have technology that can detect vulnerable road users, that's cyclists, pedestrians, motorcyclists. And typically the risk, let's say for a bus, for example, is people in front of the vehicle but maybe not visible to the driver.

So that's someone who's on the road, and maybe impacted by the vehicle as it pulls into the left. That's a really difficult challenge, because if you look at typical systems, especially systems for trucks and cars, any person within a meter of the vehicle is a risk. So we build a system to [00:21:00] detect that person and alert the driver.

But if you look at something like a bus, for example: a bus, its job is to turn left and aim at people, 'cause those people are waiting at the bus stop. So those people are not a risk to the vehicle. They're waiting exactly where they should be. So we've built a technology that detects the curb line.

So if someone is on the curb, or on what we call the pavement, the sidewalk, there's no alarm. If you step off the sidewalk, we get an alarm. So we have to build that in. It's really hard to do. So, for example, last year we spent the whole year training it with new data, because every day is different.
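One plausible way to frame the curb-line logic Gareth describes is a point-in-polygon test: check each pedestrian's ground-contact point against the detected sidewalk region, and alert only for people off the curb. This is an illustrative sketch, not EXEROS's actual algorithm; the detection boxes and curb segmentation are assumed inputs from upstream models:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is point pt inside the polygon (image coordinates)?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def should_alert(pedestrian_boxes, sidewalk_polygon):
    """Return only the pedestrians whose foot point is OFF the sidewalk.

    pedestrian_boxes: (x1, y1, x2, y2) detections; the foot point is taken
    as the bottom-center of the box, approximating ground contact.
    """
    alerts = []
    for (x1, y1, x2, y2) in pedestrian_boxes:
        foot = ((x1 + x2) / 2, y2)
        if not point_in_polygon(foot, sidewalk_polygon):
            alerts.append((x1, y1, x2, y2))
    return alerts
```

This also illustrates why the seasonal problem he mentions bites so hard: if leaves or snow hide the curb, the sidewalk polygon itself can no longer be estimated reliably, and the whole on/off-curb distinction breaks down.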

We have four seasons in the UK. Four very distinct seasons.

Deep: That's right. I mean, rain is different from non-rain. Snow is a totally different ball game. Unlike the full self-driving kind of mode, which just shuts off in snow or ice, largely you don't have that option. You have to persevere, because you're an alerting system still.

Gareth: We, we do. And our, our customers aren't, aren't buying a summer product. They're not buying a product that works in, you know, the customer that says, [00:22:00] can I have a product that works in the sun, but only when the sun is at the right level in the sky and, and all these things just doesn't exist.

Yeah. It needs to work everywhere. So we've had snow, we have water, we have leaves. so autumn or fall is particularly challenging because all of a sudden we can't see the curb anymore and we can't see the lines that make up the curb. So we've had to take lots of training data to look at where and when we want those to, to work and.

What was surprising, I think for me getting into some of this stuff is just how many combinations and nuances there are around these things. So when is a curb line? A curb line? When is the side of a sidewalk? The side of a sidewalk? When should it alert? When shouldn't it? What about pedestrian crossings?

If someone is on a pedestrian crossing, are they a risk to the ve Yes they could be. All of these things we have to build into these algorithm. So lots and lots of data, I suppose is the answer. lots of data. We, we have this joke internally that we wanna unlock 99% of our data the moment we use about 1% of our data.

So we are looking at what more can we do with the data we're already collecting. Um, so who's labeling

Deep: it right now? Do you outsource the labeling? Yes, we

Gareth: do. we typically [00:23:00] outsource labeling. we do some internally, but we, we mostly outsource 'cause it's a huge amount of data.

Deep: Oh yeah. Most of the labeling companies for machine learning, I mean, the bulk of their jobs comes from self-driving stuff. Or in your case it's technically not self-driving, but it's basically the same data, or a lot of it, plus the driver-facing cameras.

Gareth: Yeah, it's similar data, slightly different requirements, in the sense that we're not actively controlling the vehicle. We're disconnected from the vehicle in that sense. We're not applying the brakes.

Deep: But that feels, I mean, feels to me like it's in your mission. Maybe not today, but medium term. It seems like if I was running your company, I would see no choice but to go there.

Gareth: I think ultimately, we're not going there yet, but I think we probably will.

I think you're right. I think we're trying to avoid that at the moment, 'cause our main mission is to be a supporting technology. And at the moment we find that... I dunno what car you drive, but have you got any ADAS in the car?

Deep: Yeah, I'll, uh, I'll invite a lot of hate.

I have a Tesla, but for the record, I cannot stand [00:24:00] Elon Musk, and...

Gareth: You had it before. It's okay.

Deep: I had it before, but I knew he was an asshole. I just didn't think he was this ginormous of an asshole. But like, yes, I have this vehicle. I love the car, and, you know, there's a lot of great people that work there, despite him being such a wanker.

Gareth: Beautiful use of that word, by the way. Thank you very much.

Deep: But I'll tell you, this car has saved me a couple of times.

I'm too cheap to pay for the FSD, but you know, they give it to you for a couple months a year, so I'm very familiar with it. And I'm also a machine learning and AI guy, so I wouldn't get it even if I wasn't too cheap, because I know that they didn't get enough cats running in front of the car, enough deer running in front of the car, or

Gareth: Or maybe a child in a Halloween costume.

Deep: Yeah, it's not expecting a ghost to run in front of you, right? That's right.

Gareth: All of these things. Yeah.

Deep: That said, to our point before: I was driving home from mountain biking one day, and I was just not paying attention, just changing into the lane, and it grabbed the steering wheel and yanked me [00:25:00] back, and sure enough I was about to drift into somebody. That's happened a couple of times.

So it does seem like if safety is your number one thing, you do have to be able to yank the steering wheel away and slam on the brakes and do some things, even if you're very cautious about when you do it. Even Tesla, I think, walks a weird ethical line between having all of the knowledge, the full orthographic projection, the full 3D view of the world, and then not using it to save somebody's life.

That's an ethical concern, which is something I wanted to ask you about. I think you called it the three phases, the calibration period. Because it feels like when you have this much knowledge and ability, there's a real ethical concern about not using it for safety when you know you could use it for safety.

So how do you deal with that in the calibration phase? Do you just not put the sensors in? Because it feels like, if you know that this person's falling asleep, why would you not vibrate the seat?

Gareth: It's an awkward phase, and we don't really enjoy it. We install the technology and it's fully working; it's just not [00:26:00] actively alerting in the cab. We found this is difficult for us. We do have an ethical question, and we have debates internally, almost constantly, about whether we should be doing something with that data, because it's completely passive. And maybe this sounds wrong, but we don't know what we're missing.

We don't know what we don't know in this situation, because it's completely silent; we're not acting on the data in that phase. Later on we start to pick up the data, and we do start to see some of those events. What we've agreed is that there may be the occasion, if we see a very serious event and we think it's a repeated event, that we will take action.

But ultimately, typically, we wouldn't. What we are finding is that this phase is a way of building up the trust. I think I said it before: it's about the trust between us, the driver, and the operator. Without this phase, there's probably more risk that we wouldn't be able to deploy the technology at all.

So in the long-term vision, this is the best way of making sure that everyone understands the tech and that we have the right level of balance. Yeah, it is awkward, but I think we've found the balance with certain customers. I wouldn't want to run it too long; we typically try and run it for no more than two weeks in [00:27:00] that phase.
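The calibration-phase policy described here, detection fully working but in-cab alerts suppressed except for a repeated serious event, could be sketched roughly as follows. The event names and the repeat threshold are invented assumptions for illustration, not EXEROS's actual logic:

```python
from collections import defaultdict

# Rough sketch of a calibration-phase alert gate: detections are logged
# as normal, but in-cab alerts (e.g. the seat vibration) fire only for
# a serious event type that has repeated. Event names and the repeat
# threshold are illustrative assumptions.
SERIOUS_EVENTS = {"fatigue", "collision_risk"}
REPEAT_THRESHOLD = 2

class CalibrationGate:
    def __init__(self, calibrating=True):
        self.calibrating = calibrating
        self.counts = defaultdict(int)  # serious-event counts per driver

    def should_alert(self, driver_id, event_type):
        """Return True if the in-cab alert should fire for this event."""
        if not self.calibrating:
            return True     # normal operation: alert on every detection
        if event_type not in SERIOUS_EVENTS:
            return False    # passive phase: log silently
        self.counts[driver_id] += 1
        return self.counts[driver_id] >= REPEAT_THRESHOLD

gate = CalibrationGate()
print(gate.should_alert("driver_1", "harsh_braking"))  # False: not serious
print(gate.should_alert("driver_1", "fatigue"))        # False: first occurrence
print(gate.should_alert("driver_1", "fatigue"))        # True: repeated serious event
```

Once the roughly two-week phase ends, `calibrating` would flip to `False` and every detection alerts as normal.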

So, yeah.

Deep: Not all ethical dilemmas have clear-cut answers, right? The trolley car problem is not easy. You're thinking about it, you're digging in; that's super important. You've thought through how the trust increases the chance that we save lives in the future. All that makes sense. A lot of technology just invites ethical questions, particularly when it's as good as it can be. Even when we think about FSD, you wind up in weird scenarios where I myself wouldn't crash the car in this particular scenario.

But in the statistical aggregate, the machines are gonna save a lot more people and drive a lot better. So that's where it's like there's no choice but for us to kind of evolve step by step in this.

Gareth: Yeah. Our machines don't listen to the radio or get distracted by the view out the window and things like that.

Machines tend to just focus on the job at hand, which is driving safely, or driving effectively and efficiently. We, unfortunately, tend to be human, and we tend to have [00:28:00] all the foibles of humans, and we do tend to make those mistakes. Computers don't.

Deep: I imagine you'd be hard pressed to find a driver that doesn't think they drive better than the FSD AI.

I certainly do. It's funny, but I also know I'm speeding and doing all kinds of stuff.

Gareth: Yeah. Well, we know that self-driving cars sometimes will speed, because a lot of the good ones have been trained by human drivers, and therefore they adapt and adopt the human driver behavior.

Deep: Well, that's very much a decision. It really surprised me when I was in FSD, because it asks you flat out: how much above the speed limit are you willing to go? And I said three miles an hour. It turns out that's way low for me.

And it was driving me crazy. The thing is, if they drove exactly according to the letter of the law, people probably wouldn't want to use them at all.

Gareth: Oh, you've just summarized it. I think what Tesla's very good at is putting the customer first.

Even if that means bending certain things and changing what we might think FSD should be, they know that to get drivers on board, they have to have technology that allows you to drive a [00:29:00] car like you drive a car.

Deep: But here's where it gets weird with the FSD: it drives more aggressively than I do. I'm sure there's some setting buried in their menu that I could muck around with, but I usually only have it for a month here and a month there.

That's why I don't trust it on the highway: it just constantly wants to change lanes, and I'm like, dude, just chill. Stay in this lane. It's okay.

Gareth: Well, there's aggressive and there's bad, and we don't know the difference, right? I think most people would probably argue that aggressive is bad driving, and therefore it's driving badly. I don't know.

Deep: Yeah. I mean, unless you need aggressive in the moment, you know? And you don't have that capability, and you crash, and the aggressive driver would have been great.

Gareth: Yeah. Well, it's funny you said before that it saved you a couple of times.

The vast majority of drivers, and it is typically drivers of a certain age who've driven all their lives, think they're the best driver in the world. And yet when you follow them, you go, wow, you're not a great driver. But they have a very different [00:30:00] opinion of ADAS and automated systems on cars.

So the vast majority of them will say that it doesn't work. It tugs the steering wheel too often. It brakes too often. It sets the alarm off too often. They have relatively negative experiences of their vehicles, and that's vehicles they own, vehicles they bought from the showroom.

They understand what the technology does. But I remember someone telling me once that they do what they call the pre-flight check, where they get into the vehicle before they drive off and turn off all the ADAS systems. And I said to the guy, why? Why are you turning this off? He said, well, it keeps tugging at the steering wheel.

And I'm driving through town, he said. And I said, hang on. That tugging technology comes in at 37 miles an hour, and all the towns in the UK are either 20 or 30 miles an hour, which means that you are speeding. You're going over 37 miles an hour in a town and it's tugging you. Who's wrong here?

Is it the technology or is it your driving? So there is a little bit of that that we have to work with. Of course, most people will tell you that their system intervenes too often and incorrectly. Now, what's interesting is that you and I, [00:31:00] as AI-aware people, and you as a very deep AI technology person, will understand that for every decision that vehicle makes that you see, there are many, many decisions that vehicle's making that you do not see.
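The arithmetic in that anecdote is worth making explicit: if lane-keep assist only engages above 37 mph and UK town limits are 20 or 30 mph, then any in-town tug implies the driver was speeding. A few lines verify it; the thresholds are as quoted in the conversation, and the function name is invented:

```python
# Thresholds as quoted in the conversation; the logic is illustrative.
LANE_ASSIST_MIN_MPH = 37        # assist only engages at or above this speed
UK_TOWN_LIMITS_MPH = (20, 30)   # typical UK town speed limits

def tug_implies_speeding(speed_mph, town_limit_mph):
    """If the lane-assist 'tug' happened in town, was the driver speeding?"""
    tugged = speed_mph >= LANE_ASSIST_MIN_MPH
    speeding = speed_mph > town_limit_mph
    return tugged and speeding

# Because the engage threshold exceeds every town limit, a tug in town
# can only happen while speeding.
assert LANE_ASSIST_MIN_MPH > max(UK_TOWN_LIMITS_MPH)
print(tug_implies_speeding(38, 30))  # True: tugged, and over the limit
print(tug_implies_speeding(28, 30))  # False: below 37 mph, no tug at all
```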

So if it's decided to apply the brakes, for example, because it thinks you're about to hit someone, that may or may not be right or wrong. You might say, ah, I think I was okay. And the 'think' is critical, because most people think they were fine, but if they were to watch the footage back they would say, oh, okay, maybe I was a bit close, or maybe that was a bit risky.

But they don't agree. So they might disagree with that decision, but the vehicle has made any number of decisions in that journey not to apply the brakes, not to do something. It's been deciding that there's not an event. As drivers, we give that zero credibility and zero respect.

We don't think about that. When it makes a decision, we should be aware that that's one of potentially millions of decisions where it's decided the other way.
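A back-of-envelope calculation makes the asymmetry vivid: at a typical camera frame rate, one visible intervention sits on top of a flood of silent "no event" decisions. The numbers below are illustrative, not from the episode:

```python
# At ~30 frames per second, a system making a frame-level decision on
# every frame produces an enormous number of quiet "don't act" calls
# for each intervention the driver actually notices.
FPS = 30  # assumed frame rate

def decisions_per_trip(minutes, interventions):
    frames = FPS * 60 * minutes      # one decision per frame
    silent = frames - interventions  # frames where it decided "no event"
    return frames, silent

frames, silent = decisions_per_trip(minutes=60, interventions=2)
print(frames)  # 108000 frame-level decisions in a one-hour drive
print(silent)  # 107998 of them were a quiet "don't brake, don't steer"
```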

Deep: I think that's really a good point. As humans, we don't understand intuitively the nature of how these [00:32:00] algorithms work.

And we definitely do not think this way. We have a much more, it's a weird term to use, ballistic way of thinking about things. And I use that term because, if you think about a robot drawing the letter D, it's gonna be very slow and controlled, and it's gonna constantly be getting feedback along the way.

And we can do that as humans. When you're in kindergarten, you might draw really slowly, but as you get older and learn cursive, you use these very ballistic movements, which are accelerated. And that intuitive decision making is not how these systems work.

They're making decisions at, I don't know the exact frame rate, but probably something close to it. And they're also making decisions at various levels, right? Like that 3D map that they're showing you: that 3D map has to be updated continuously.

And then decisions are made from that. And if that map doesn't have the cat in it, or the kid in the Halloween costume, or whatever... We're not used to considering our mental [00:33:00] faculties in that way.

Gareth: No. And is that 3D map that we're watching in the vehicle actually part of the vehicle's technology, or is it just a visualization to help us understand what the vehicle's doing?

Is it post the event or pre the event? I'm never quite sure with some of this tech. Obviously Tesla doesn't do LiDAR, and LiDAR is an amazing technology, but actually doing stuff with LiDAR is quite hard. What we're watching there is really a readout, something that's there to help us understand what the vehicle's doing. It's a visualization.

Deep: Yeah, I think that's a really good point. They're not, actually. You can just tell, because the person's walking like a robot. So they have some representation of an object there, but it's not the way we think.

It's probably like a heat map that says somewhere in here is a thing called 'person', so just stick in an android object or whatever.

Gareth: There was a report issued by Waymo this week, I think it was on the first, around the effectiveness of their self-driving vehicles.

And obviously there's some very good numbers there in terms of reduction compared to human [00:34:00] drivers. There's a couple of videos in there that you're probably quite familiar with. But if you watch those videos, you'll see the vehicle react before the collision, before the actual person is detected on the video.

And that suggests to me that what we see is actually just a visualization of what's happening, as opposed to the technology that's driving those decisions.

Deep: I feel like that's a fair assumption, because it doesn't make sense for them to try to reconstruct a pixel-level understanding of the outer world.

They really have to represent things that guide decisions, you know? But that was something I wanted to ask you: how do you guys think about it? Because you have to detect curbs, and you have to detect people, and you have to make these decisions. Do you try to reconstruct an understanding of the environment in three-dimensional space, or is it more in a narrower decision space?

Gareth: Typically it's much more narrow. We're not typically doing a lot in 3D or in situ, mainly because a lot of 3D is very computationally heavy. [00:35:00] We are dealing with relatively low-powered devices, and I mean power in terms of both electrical power and processing capability.

These are devices sitting in a vehicle. The joke used to be that a self-driving car is essentially a data center on wheels, that the back seats are covered in servers and things. We don't have that, of course, and nobody can afford that. So we're looking at devices that are small and low powered.

In that sense we're limited in how much 3D we can really do. We are trying to do some of it. We're looking at, for example, taking multiple images and deriving a three-dimensional view from them, looking at whether we can measure distance, things like this. But ultimately that isn't required.

What we're looking for is, in the moment, is there a risk?

Deep: Let's talk about that a little bit. What is your hardware setup? When you're making an EV, the power needs of the EV are so great that you can afford to stick some GPUs in there and run a pretty serious hardware setup.

But in your case it's [00:36:00] mostly, I'm guessing, gas or diesel powered vehicles. Normally they've got the 12-volt battery to start up the vehicle, so you've got to put in your own battery systems; there are probably constraints around that. What do you put in a vehicle? How much battery, what kind of processing power do you have in there, and what kind of bandwidth?

Gareth: Exactly that. We're typically operating in 12 volts or 24 volts, 24 if we're lucky, because a commercial vehicle might have a 24-volt battery, but typically you're in 12 volts.

Deep: Oh, you're running off of the vehicle's battery?

Gareth: Yeah, absolutely. Well, we'll have an inbuilt battery as a backup, but typically you're running off the vehicle's battery, because capturing data on a camera, or multiple cameras, is quite heavy, and it's pulling off the vehicle.

Obviously with an electric vehicle that's less of a problem, because they have the high-capacity batteries. But we don't actually pull that much off the battery, to be fair. It certainly wouldn't run your battery down in any reasonable time; you'd have to sit there for a couple of weeks doing that.

So it's quite low in that sense. But we're not giving it a huge amount; there's not a huge amount of processing power in there. You're talking mostly CPU powered, not GPU, because again of the cost base. We're relatively limited; we're trying to keep it down. And typically we don't have one edge device.

We might have several edge devices. You might have a camera with a processor, another camera with a processor, and then connect them to a DVR and a central processing unit, which takes the triggers from the other systems. So you might have this sort of multi-device system.

All of them relatively low powered.
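The multi-device layout Gareth outlines, several low-powered smart cameras each running their own detector and feeding triggers to a central unit, could be sketched like this. The class names and event format are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SmartCamera:
    """A low-powered edge camera running its own on-device detector."""
    name: str

    def detect(self, frame):
        # Stand-in for a CPU-based model: emit a trigger only when the
        # frame contains an event of interest, otherwise stay silent.
        event = frame.get("event")
        return {"source": self.name, "event": event} if event else None

@dataclass
class CentralUnit:
    """DVR/central processor that collects triggers from every device."""
    triggers: list = field(default_factory=list)

    def ingest(self, trigger):
        if trigger is not None:
            self.triggers.append(trigger)

cameras = [SmartCamera("driver_cam"), SmartCamera("blind_spot_cam")]
hub = CentralUnit()
frames = [{"event": None}, {"event": "fatigue"}]
for camera, frame in zip(cameras, frames):
    hub.ingest(camera.detect(frame))
print(hub.triggers)  # [{'source': 'blind_spot_cam', 'event': 'fatigue'}]
```

The design point is that each camera only forwards sparse triggers rather than raw video, which is what lets the central unit stay low powered.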

Deep: Do you ever throw Starlink on there and do processing up in the cloud, even real time?

Gareth: We don't do Starlink. These days, probably not least because of that bloke that owns Starlink. But all of our devices are 4G connected, and increasingly 5G connected.

So they do push to the cloud.

Deep: And in Britain, I imagine you don't lose connectivity very often?

Gareth: Oh, I think if you spoke to most people, they'd tell you they lose connectivity more often than they'd like. But it's pretty good. The majority of our customers are driving on the highways, so these are relatively well connected areas.

Our system will wait for a connection, so [00:38:00] the devices will hold on and then send the push. We don't do any detection-based stuff in the cloud; everything is done on the ground, in the vehicle. What we do in the cloud is use AI to give us further insights. So we might further review each clip to say, well, what actually happened in that clip?

Deep: Yeah.

Gareth: And then of course we use various models to look at the data and all those things. But the actual in-cab stuff today all happens on the ground, in the vehicle. We may in the future look to do cloud augmentation, providing feedback back to the vehicle. But ultimately, just like a self-driving car, we'd like to be completely independent from any connectivity. We have a GPS sensor in there as well, and an accelerometer, and all those good things.
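The "wait for a connection" behavior is a classic store-and-forward queue: buffer events on the device, push when 4G/5G returns. A minimal sketch, with the interface names invented:

```python
from collections import deque

class Uplink:
    """Buffer events locally; push to the cloud only when connected."""
    def __init__(self):
        self.queue = deque()
        self.sent = []

    def record(self, event):
        self.queue.append(event)   # always land on local storage first

    def flush(self, connected):
        # Drain the buffer while connectivity holds; anything left waits.
        while connected and self.queue:
            self.sent.append(self.queue.popleft())

uplink = Uplink()
uplink.record("fatigue_clip_001")
uplink.flush(connected=False)   # in a dead zone: nothing leaves the device
print(uplink.sent)              # []
uplink.flush(connected=True)    # back on 4G: the queued clip is pushed
print(uplink.sent)              # ['fatigue_clip_001']
```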

Deep: But at some point, with the kinds of processing that you're talking about, it seems to me like you probably want some GPU, some heavier-duty hardware down there on the vehicle. And that raises your costs; it raises the complexity of your system.

But [00:39:00] ultimately it might be worth it. Given what you're doing, these are highly impactful scenarios.

Gareth: We've got to find a balance between what a fleet is willing to pay and what we could do. Obviously we could put so much technology out there.

But ultimately it has to be affordable for a fleet that's already spent a lot of money on a vehicle, and we've got to find that balance. So ultimately we're looking for tech that works at the right price point. As much as we'd all love to say, hey, let's all buy the best technology for saving lives, the reality is it has to be cost effective. If you're working on a bus, for example, or on a train, you can't add 20% of the ticket price on top just for the technology. So there are limits.

Deep: Sure.

Gareth: There's common sense in terms of what you're willing to pay.

Deep: Awesome. Well, this has been a super fun and fascinating conversation. I always end with a question about the future. If we fast forward [00:40:00] five or ten years out, and everything you're working on today materializes, along with everything you envision being able to work on tomorrow,

Like what's different about the world from both a good standpoint and a bad standpoint?

Gareth: Well, it's safer, ultimately, and that's what we're aiming for. We want to be safer. I've got two young kids, and it's important to me to know that my kids, when they get into a vehicle, are safe.

But also when they walk around town, they're safe. When they get on a bus, they're safe. So one of the reasons I'm really excited about being a part of the bus projects is that I can really impact things for my kids as they grow older: if they get on a bus, it'll be safer; if they walk around a bus, they'll be safer.

All of these things, I think, help make the world safer. We're also looking to make it more efficient, because ultimately if you're using public transport and things like this, then we can improve that efficiency. But ultimately that's what we're aiming for: a safer place.

In terms of the negatives, what am I expecting to see? I would expect continued driver pushback. I think over time we will see drivers pushing back on the technology, and our way of countering that is to make sure our technology works, [00:41:00] make sure it's trustworthy, make sure it actually helps you.

We've had drivers come to us after an alert, a fatigue alert, and say thank you, that really worked. We also have a system for detecting low bridges, for example, if you've got a vehicle that's too high.

Deep: I was just gonna mention that one. That's such a huge thing, because you do see trucks stuck by like an inch. Or a parking garage; well, I guess your semis aren't going into parking garages. Maybe they are. Maybe they're going into big parking garages.

Gareth: So low bridges are something we're doing a lot of work on at the moment, and it's really nuanced; there's lots of stuff there.

But typically with a low bridge, the majority of low bridge strikes happen because someone was off route. They've already got sat nav, and they've already been routed around the low bridges. But, for example, the road might have been closed, or there might have been a problem, and they've been on a diversion.

And surprisingly, sometimes the driver just forgets what they're driving. They forget they're driving a tall vehicle, and all of a sudden they're approaching a low bridge. They acknowledge the low bridge; they just don't consider their vehicle as too high. They think they're driving their own car, and they ram straight into it.

Deep: Yeah. I imagine you don't really need [00:42:00] too much fancy vision stuff there. You just need to know the bridge heights, have all that data up front, and then know the vehicle height.

Gareth: Well, our technology does a bit of both.

It reads the sign on the bridge, the idea being that it's agnostic: if you haven't mapped that bridge beforehand, or you haven't got it in your database, it still works. It also works in GPS-denied areas, urban canyons, things like that. So there are two ways of doing it.

What I can tell you is it's not as simple as we wish it was.
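The "bit of both" approach, a mapped bridge database when available and a height read off the bridge sign when not, reduces to comparing the vehicle's height against whichever clearance sources are present. The safety margin and figures below are invented for illustration:

```python
SAFETY_MARGIN_M = 0.3  # assumed buffer above the vehicle's stated height

def low_bridge_warning(vehicle_height_m, mapped_height_m=None, sign_height_m=None):
    """Warn if the vehicle plus margin exceeds any known clearance.

    mapped_height_m: clearance from a pre-mapped bridge database, if any.
    sign_height_m:   clearance read off the bridge sign (e.g. via OCR),
                     which still works for unmapped bridges and in
                     GPS-denied areas like urban canyons.
    """
    clearances = [h for h in (mapped_height_m, sign_height_m) if h is not None]
    if not clearances:
        return False  # no information about this bridge: nothing to warn on
    return vehicle_height_m + SAFETY_MARGIN_M >= min(clearances)

print(low_bridge_warning(4.2, mapped_height_m=4.1))  # True: vehicle is too tall
print(low_bridge_warning(3.5, sign_height_m=4.4))    # False: clears with margin
print(low_bridge_warning(4.2, sign_height_m=4.3))    # True: sign check catches it
```

As the conversation notes, the real problem is not this comparison but everything around it: reading signs reliably, handling diversions, and catching the driver who has forgotten what they are driving.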

Deep: Yeah. Everything is always harder than what I think, especially when I assess the world quickly; I've been told this. Personally, I'm kind of shocked at the speed this stuff has been deployed. I don't think you can drive one of these vehicles and not have it almost kill you.

It's always grabbing the steering wheel, trying to do something nutty. I think it was very aggressively deployed, and it's kind of shocking to me that the stuff hasn't been largely retracted.

Gareth: It's worth noting, I didn't mention it before, but we do have a difference in Europe [00:43:00] compared to America.

Europe has a very, very strict legislative framework that means most self-driving technology is not enabled here. So we're not as familiar with it as you are.

Deep: Oh my God. Yeah, it's everywhere here. And it does screw up, it really does. But it's very reliant on drivers to intercede, and I think the situational awareness question we touched on earlier is really a huge issue, because all the literature basically says no: you can't take somebody that's zonked out,

just listening to their music, technically with a finger on the steering wheel, and then suddenly throw them into a potential accident scenario and expect them to immediately slurp up all the situational awareness they need to act properly. That doesn't work. Our brains don't work like that.

Gareth: The example we used to use a lot was: imagine the car does all the driving until the parking, and parking's one of the most complicated maneuvers you're gonna make, if you're having to parallel park or park into a bay or something. So your vehicle's done all the easy work, and then at the end of the journey it says, hey, look,

now it's up to you. I can't do this bit [00:44:00] myself, or I'll take too long to do it, so good luck with it. So you've just come out of your slumber, you've been reading your newspaper, drinking your slushy, and all of a sudden you're having to park a vehicle. I can't park a vehicle at the best of times, and I drive my car.

Deep: Yeah.

Gareth: So all these things. I think it's gonna be interesting, but like you say, we have moved too quickly. That's technology for you: it always moves too quickly and waits for the world to catch up. Now we're in a situation where the world has caught up, doesn't quite understand it, and is maybe trying to pull that tech back.

Deep: There are real problems that are almost fundamentally unaddressable, like the trolley car problem. You know, maybe there's a baby you kill if you go left, and two old people you kill if you go right.

I don't think these are solvable via rules, you know?

Gareth: I don't think it is solvable. I don't think it's solvable via rules. What's interesting for me is what will happen afterwards. There's a misconception at the moment, I think, that vehicles make decisions, that they're sentient vehicles, right?

Beings, making decisions the way that we make decisions, this sort of thoughtful decision. [00:45:00] Most vehicles are making constant, let's call them decisions, but they're frame-based decisions. They're not necessarily connected to the decision before, in a linear pattern, in the way ours would be.

They're making decisions in the moment. So if one were to take what we perceive to be the wrong one, it killed the baby, the chances are the vehicle doesn't get to explain why it made that decision. It couldn't, even if it wanted to. It would just go: I saw this first.

Deep: Well, at the end of the day, maybe the ethical response is: we took 2 million humans, sent them all through a simulator, and some of them killed the baby.

And some of them didn't, and it has to do with all the surrounding context. Maybe they thought, if I go toward the baby, I've got a slightly greater chance of stopping the car. Or, I don't want to kill two people, I want to kill one.

Gareth: But I suspect the vehicle will say, well, actually, I saw the baby a millisecond, or part of a millisecond, before I saw the four people, and therefore I went toward the four people,

because I'd already made that decision.

Deep: Yeah.

Gareth: That can't be explained, right? We can't do that in our heads; we have no way of doing that. [00:46:00] The simple example we use often is the sat nav: when a nav system tells you to go a certain way, it's based on the road speed.

It says, hey, go down this way, you'll be fine. And then when you go down that road, you realize you can't drive at 60 miles an hour down this country road, because it's tight, it's winding, it's difficult, and you've never driven it before. So you end up driving at 30 or 40, and therefore you're late for your appointment.

The sat nav doesn't know that, because the sat nav is just using basic maths. And it's the same with decisions. I think ultimately they're going to come down to frame-based decisions that the vehicle cannot defend. And it won't be an ethical decision, because computers are incapable, at the moment, of ethical decisions.
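The sat-nav example is easy to put in numbers: the router estimates arrival time from the posted road speed, but a tight country lane may only be drivable at around half that. The figures here are illustrative:

```python
def eta_minutes(distance_miles, speed_mph):
    """The 'basic maths' a router uses: time = distance / speed."""
    return distance_miles / speed_mph * 60

promised = eta_minutes(10, 60)  # the posted speed the sat nav assumes
actual = eta_minutes(10, 35)    # what the winding road really allows
print(round(promised, 1))  # 10.0 minutes promised
print(round(actual, 1))    # 17.1 minutes in reality
```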

Deep: Well, I'd push back a little. I don't think we'll only be at the frame level; there'll be some long-term memory that gets incorporated. It already is. But it is always a challenge with these machines to get them to think at multiple levels.

And it will never really be the case that they think the way we do, and reflect the way we do. And it will, I think, always be the case that when they make mistakes, they'll be [00:47:00] typically different in nature than the kinds of mistakes we make. It's very easy to explain: okay, I was doing the speed limit, but there was black ice.

I accidentally hit the black ice and killed somebody. But it's very hard to explain: I was doing 35 in a 35-mile-an-hour zone, I misdetected a kid in a Halloween costume, and I just smacked into them. Every human looks at that and knows it's a clear lack of training data in that particular scenario.

And then you could ask, well, why didn't you use physics-based modeling and just take little kids and render them in a million different costumes? It's like, well, I don't know. There's a lot of things you have to do.

Gareth: There was a statistic about what it would take to train for every single case. And of course we have to train in two ways, right?

We have to train in a computer, in a virtual environment, and then we have to train physically, which is very hard; you can't aim vehicles at children. I think there was a statement that you'd have to train for 5,000 years to get to the level of human ability. One thing I find interesting, and I'll finish on this, is that we no longer [00:48:00] talk about the steering wheel disappearing.

So if you go back six, seven years, we were talking about self-driving cars, and all the photos and all the visions were about the steering wheel being gone.

Deep: Oh yeah, that was the vision.

Gareth: We no longer seem to talk about that. Now we talk about the steering wheel being there. We talk about eyes off and hands off, but it's still there.

The implication being that we will take over and we will drive.

Deep: I thought the Waymo cars don't have steering wheels.

Gareth: They don't, but I can't see a situation where we as drivers will accept that.

Deep: No, because it's sociology. By the way, I think they do have them somewhere. But you mentioned the relationship we have with our cars: it's very much identity-oriented. Certainly in the US, I'm like, my God, people buy cars because it's an extension of their personality. I will buy this, I won't buy that. They buy for political reasons. There's a million reasons why people buy the cars and vehicles they do. Thanks so much for coming on the show. This was really fun.

Gareth: Well, thanks for having me.