Advantages of Standalone Autonomous Driving over Connected Autonomy

by Eran Ofir, CEO | Imagry
July 25, 2024

TTi podcast editor Tom Stone interviewed me to learn more about the unique benefits of Imagry’s HD-mapless autonomous driving technology. In the recording I shared my views on the benefits of self-sufficient autonomous vehicles compared with connected-autonomy technology, and I explained why I believe driverless cars could be commonplace by the end of this decade.

Listen to the podcast, read the AI-generated transcript below, or visit the TTi page for more information.

[TTI]

Hello, I’m Saul Wordsworth, Editor at Large for Traffic Technology International, the leading publication for traffic management, intelligent transportation systems, and tolling. Welcome to this episode of the Transportation Podcast from TTI. Our interview today is with Eran Ofir, CEO of Imagry, the company behind a mapless, AI-based autonomous driving solution.

Eran discusses how stand-alone autonomous driving works and his views about its advantages over connected autonomy.

[…]

Tom, tell me about today’s interview. Today’s chat is with Eran Ofir, who is CEO of Imagry. They are an autonomous vehicle company creating autonomous vehicle systems. But it’s very interesting that he says an autonomous vehicle should not need to be connected to anything else on the road.

A lot of these systems rely on maps, off-board maps, on-board maps, sensors on the side of the road, sensors on the vehicles, V2X technology. But no, the technology he’s talking about is stand-alone. And he says that’s important because as humans, we are stand-alone. We don’t need to be connected to anything else to drive our cars.

And he believes that autonomous vehicles should be the same. So, I won’t go into loads of detail because I think he probably explains it better than me. That’s, that’s really interesting. I’m genuinely fascinated to hear this interview, Tom. Although I should add that not all humans are stand-alone because some have a sixth sense.

[…]

We’ve been talking about autonomous driving for a decade now. I still remember Elon Musk saying, like a decade ago, that he’s going to drive coast to coast, right, with an autonomous car. It never happened.

[Imagry]

No, that won’t happen anytime soon. Even though FSD 12.3 is a really, really great achievement.

[TTI]

So, I think that’s the story, isn’t it? I think it is. I’ve been thinking about similar kinds of stories myself, like how Google, I mean, they’re Waymo now, aren’t they? What’s the name of the guy from Google who was originally involved in the Google car? I did interview him. He had a great sound bite, which was that his children, who I think were about 13 or something at the time, would not need to get a driving license, because they would be driven in their autonomous vehicles in sort of eight years’ time, or whatever it was. I expect they’ve probably got driving licenses now, but…

[Imagry]

By the way, I make the same promise to my daughters, who are teenagers, 16 and 13, so it’s kind of a repeat. I tell them, yes, you’re going to get a driving license, but you’re not going to drive for many years.

Because, in my opinion, by 2030 we’ll have autonomous vehicles. Mercedes, Tesla, and BMW already have licensed Level 3 vehicles on the road, in a few countries, and not only are the vehicles licensed, in many US states it has been regulated. So that’s it, meaning now it’s just a matter of catching up. And buses: there are currently 22 markets globally running pilots with autonomous buses (I can send you the list of the 22 countries worldwide if you want).

We are currently involved in five different projects in three countries to get Level 4 autonomous buses on public roads. Not in a designated lane and not in an operational zone like a campus, but on real public roads.

Let’s stay optimistic and believe that it’s going to happen now. The best way to see it is video, and we have videos. You can see two of our lines. One is in a large medical center, which is an operational zone, but it covers 200 acres. It’s huge, with 18 stops along that campus. The other is a commercial line, line number 5, in a northern city in Israel with a public transportation operator. Now we are starting to work with Transdev, the world’s largest public transportation operator, active in 21 countries. We were licensed by them as a vendor for ATS, autonomous transport systems.

So we’re starting to work with them this summer, providing the first bus, and we’re getting now into Japan, into Portugal, into other places, with full-size autonomous buses. So I think maybe we are beyond the point of asking, okay, will that happen? Now it’s only a question of the pace.

[TTI]

Just to rewind slightly, give me a quick summary of what Imagry does, what you do, and how it started.

[Imagry]

So we are a unique animal in this domain. The company started in 2015, but back then it was doing computer vision, not for autonomous driving but for retail and things like that, AI-based. At the time it wasn’t called AI; it was just called neural networks and so on.

In 2018 the company shifted to autonomous driving, during the first hype around it. We have two main offices, one in San Jose, California (headquarters) and the other in Haifa, Israel, and we’ve been operating passenger vehicles autonomously on public roads almost from the get-go, since 2019.

We have two lines of business. For example, in the UK you have Oxa and Fusion Processing, right, which are doing autonomous buses and dealing only with autonomous buses, and you have Wayve, for example, which will now start, after its huge fundraising, to do autonomous driving for passenger vehicles.

So, we are doing both. There are a few OEMs that we work with, providing them with a software stack for autonomous driving. This software stack includes perception and motion planning for passenger vehicles. And we also work with public transportation operators; it’s a different line of business, providing them with autonomous buses. So we work with a few bus manufacturers, and together we create that bus, installing computing and cameras and software and everything needed to convert it into an autonomous bus, and then we bring it onto the road.

We were recognized as the leading enabling technology for AV solutions by Frost & Sullivan last year. Everything that we do is based on AI. It was always like that, and it’s not a coincidence that the three companies that remain in this sector are all AI-based, built on neural networks and so on.

It’s Wayve, Helm.ai, and Imagry. So, we do perception and motion planning. I won’t get too deep into the technology, but since you’ve covered this domain for so many years, maybe I’ll touch on it to explain how it works. Our architecture is somewhat similar to Tesla’s. We have a configuration of eight cameras, and we take the live video feed from those cameras into an array of neural networks, where each neural network is responsible for understanding a different type of object. So one is looking at traffic lights, one at traffic signs, one at pedestrians, one at lanes, one at parked vehicles, one at moving vehicles… I think you get the point. And then we build, in real time, a three-dimensional map that represents everything surrounding the vehicle up to a distance of 300 meters.
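To make that pipeline easier to picture, here is a minimal sketch in Python. It is an illustration only, not Imagry’s actual code: the class names, the stub detectors, and the simple distance filter are assumptions standing in for the per-object-type neural networks and the fused 300-meter map described above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical illustration only; names and structure are assumptions,
# not Imagry's implementation.

@dataclass
class Detection:
    object_class: str      # e.g. "traffic_light", "pedestrian", "lane"
    position_m: tuple      # (x, y) relative to the ego vehicle, in meters
    confidence: float

# One specialised detector per object type, mirroring the "array of
# neural networks" described above. Each detector is a stub callable
# that maps a camera frame to a list of detections.
Detector = Callable[[bytes], List[Detection]]

def build_local_map(frames: Dict[str, bytes],
                    detectors: Dict[str, Detector],
                    max_range_m: float = 300.0) -> List[Detection]:
    """Fuse per-class detections from all cameras into one real-time
    picture of everything within max_range_m of the vehicle."""
    local_map: List[Detection] = []
    for frame in frames.values():              # one frame per camera
        for detect in detectors.values():      # one network per object type
            for det in detect(frame):
                distance = (det.position_m[0] ** 2 + det.position_m[1] ** 2) ** 0.5
                if distance <= max_range_m:    # keep the 300 m envelope
                    local_map.append(det)
    return local_map
```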

That map goes into another section of the software, which is really cool: the motion planning. That’s like our cortex, right? Everything here is meant to mimic human behavior, the human brain, so if perception is like our eyes, motion planning is like our cortex.

I’m sure you’ve heard a lot of terms like supervised learning versus unsupervised learning when you cover AI. We don’t write rule-based code, what people call decision trees: if this, then do that, and so on. Instead, it’s like a black box that everything is poured into.

And it learns by imitation, like a human, like a child, how to drive over time, after hundreds and hundreds of millions of images have been processed and the decisions have been vetted as right or wrong by humans. That’s why it’s called supervised learning, and that’s how the system learns. So this is how we do it, and we’ve been doing it for many years. There are three things that are unique about Imagry; maybe they’re the only things I want you to remember. And maybe I’ll switch to a video to explain how it works.
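As a rough illustration of what “learning by imitation” means in practice, here is a toy behaviour-cloning loop in Python. It is a sketch under stated assumptions, not the production system: the scenes are random feature vectors, the “human” steering labels are synthetic, and a simple linear model plus gradient descent stands in for the deep networks and the millions of vetted images described above.

```python
import numpy as np

# Toy behaviour-cloning sketch (hypothetical, for illustration only).
rng = np.random.default_rng(0)

# Synthetic demonstrations: 1,000 scenes with 8 features each, and the
# steering command a "human driver" chose in each scene (plus noise).
true_policy = rng.normal(size=8)
scenes = rng.normal(size=(1000, 8))
human_steering = scenes @ true_policy + rng.normal(scale=0.05, size=1000)

# Learn to imitate the demonstrations with plain gradient descent.
weights = np.zeros(8)
learning_rate = 0.01
for _ in range(500):
    predictions = scenes @ weights
    error = predictions - human_steering        # compared against the human choices
    gradient = scenes.T @ error / len(scenes)
    weights -= learning_rate * gradient

print("mean squared imitation error:",
      np.mean((scenes @ weights - human_steering) ** 2))
```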

The first video is, I think, from Arizona. So, the first thing is that we are mapless. That’s the big thing for Imagry; it was always mapless autonomous driving. Unlike most of the players in the market (until about two years ago, only Imagry and Tesla were mapless), we don’t use those HD maps that are fed to vehicles so they know what’s on the road, where a crosswalk is, what kind of traffic signs and traffic lights there are, and so on. Instead, we build the map on the go, on the fly, as I explained, the same way that we humans drive, right?

We get the route from a navigation layer, like Google Maps: I need to drive now from A to B. And then I get on the road and I drive, because I understand what I see around me. That’s the same way that we drive: we get on the road. You can see here, it’s a random route.

The vehicle receives the destination and knows which street it needs to go down, but that’s it. So it gets on the road, and it avoids that bike lane on the right. It moves to the left, kind of swerves to the left, then right, gets to an intersection, yields to pedestrians and to other vehicles, and then takes the decision to cross the intersection.

Then it notices, you see, construction on the road, which wouldn’t appear on an HD map, right? Because it’s from this morning; those guys got there and started working. So again, it centered itself on the road and slowed down because of the construction, the pedestrians, and so on.

Then, after turning, it centered itself again because of all those parked vehicles. And then it noticed a situation where a person is standing on the road with a traffic sign. So it acknowledged the situation and the change of situation, continued driving past the moving truck, returned to its original lane, and avoided that guy who came along on the skateboard.

It continued driving and got to a temporary stop sign that had been placed there. Again, it understood the temporary stop sign, which wouldn’t be on an HD map, and continued driving. Now, this vehicle is driving that way because the system knows how to drive. It doesn’t work by scenarios, as in, okay, if that’s the scenario, you do that.

Instead, it’s all operating on AI models running on really strong computing. I would even dare to say that this is what’s currently enabling autonomous driving, and I’d even call it a revolution, because now it’s happening. The two main dependencies, right, are regulation, which typically follows technology, and computing.

And now we finally see computing that is strong enough in terms of TOPS, Tera Operations Per Second, which is a bit of a populist term; it’s not the exact correct technical term to use, but it’s good enough for the public. So the computers that you can get now in vehicles, right?

Edge computing, meaning “within the vehicle,” is strong enough. These are not like the ADAS ECUs that are really weak and can’t do much; these are real computers that can get to 120, 200, 250 TOPS, and that’s sufficient to really get Level 3 vehicles on the road.

So, that’s the first unique differentiator. The second one is something that currently, I think, only Tesla and Imagry can do, which is the ability to drive location-independent. We get to a new place, we rent a car, we go out and we drive.

So, same here. When we started working with Continental, and we are partners with Continental, we jointly developed a software-defined vehicle platform, SDV, using our motion planning. So this is kind of the system.

It’s called AI Driver. You see some of the cameras, and you see the different neural networks in different colors doing object detection, and you can see that green line, which is the path selected for the vehicle by the motion planning. It’s now getting into an intersection, and it’s kind of making the association between the traffic light and the respective lane.

But the reason I show you this video is not to show that we know how to drive, because we’ve been doing that for many years; this is from late 2021, November 2021. It’s because this is the very first drive that we ever had in Germany. We brought the vehicle to Frankfurt, and three days after it arrived in Germany, we calibrated the cameras and simply took it out on the road, crazy enough, to see what would happen the first time ever that we drove in Germany. And as you can see, it drives in Frankfurt. First day, first time ever, like it was born there. And again, compare that with some of the names you mentioned before, Waymo, Cruise, and so on.

We are not doing robotaxis. They’re doing robotaxis with systems that are 20 times more expensive. Still, they drive in a very specific geofenced location with HD maps and so on. But when you drive with AI, basically again, you can get to a new place and drive, right? This is how a real autonomous vehicle should drive. That’s the point.

And the last thing I’d mention about our technology, before I shut up for a bit, is that we are hardware-agnostic, which is really a kind of silver bullet for OEMs and Tier 1s, because, as you can imagine, for different vehicles, different models, they need to use different hardware, both computing and cameras, right?

A £30,000 vehicle versus a £50,000 vehicle versus a £70,000 vehicle will have a completely different hardware configuration, right? Because of the cost, the capabilities, how high it is positioned, and so on. What we provide them is seamless software: our software is not tightly coupled to specific hardware, unlike most of the players in the market.

Instead, it can run seamlessly, and we have already tested and demonstrated it on at least four different hardware platforms in different environments. For them, it’s exactly what they dreamed of, right? Because then they don’t need to develop different software for different vehicles that use different computing and cameras and so on.
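One common way to get that kind of decoupling is a thin hardware-abstraction layer, where the driving stack is written against an interface rather than a specific camera or compute configuration. The sketch below is a hypothetical illustration of that idea in Python; the class names and configurations are assumptions, not Imagry’s actual API.

```python
from abc import ABC, abstractmethod
from typing import Dict

# Hypothetical hardware-abstraction sketch; names are illustrative only.

class CameraRig(ABC):
    """The driving stack depends only on this interface."""
    @abstractmethod
    def capture_frames(self) -> Dict[str, bytes]:
        """Return the latest frame from each camera, keyed by camera id."""

class EntryLevelRig(CameraRig):
    def capture_frames(self) -> Dict[str, bytes]:
        # e.g. six lower-resolution cameras on a cheaper compute platform
        return {f"cam_{i}": b"" for i in range(6)}

class PremiumRig(CameraRig):
    def capture_frames(self) -> Dict[str, bytes]:
        # e.g. eight higher-resolution cameras on a stronger platform
        return {f"cam_{i}": b"" for i in range(8)}

def drive_step(rig: CameraRig) -> None:
    """One perception/planning tick, written against the interface only."""
    frames = rig.capture_frames()
    print(f"processing {len(frames)} camera feeds")

# The same stack runs unchanged on either hardware configuration.
drive_step(EntryLevelRig())
drive_step(PremiumRig())
```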

So that’s something again, which Imagry is unique at. So, I’ll pause here because I just want to see if I triggered your curiosity.

[TTI]

Yeah, it’s amazing, it’s really, really interesting, because it makes me think back. When I started editing TTI, the ITS World Congress in 2014 was held in Detroit, and I was lucky enough to have a ride in an autonomous vehicle from Honda at that event.

I’m not sure they’ve been doing much recently with their autonomous vehicle development, but it did run very much on that sort of map-based, geofenced approach. I think it seemed slightly more impressive than maybe it really was, but it worked in 2014, and I did stumble across it.

I’m probably going to try and make a video out of it, to look back at that. I took a video of that ride back in 2014, and it makes me think about how things have changed. In some ways things have changed a lot, but in other ways maybe they haven’t. And like you say, this sort of map-based development that’s happening with some companies, would you say that’s a dead end? The way you describe your approach sounds a lot more versatile and, in some ways, more advanced than relying on maps, because no matter how quickly you update maps (and that second video you showed me really gave a feel for it), there are always going to be things that it’s impossible to put on a map, because they’ve only just happened.

[Imagry]

And then seconds later you’re driving on that road. So, what if you have network congestion, or you’re in a tunnel, and you don’t have the high-bandwidth communication you need to get the map? Maybe that’s worth an article by itself. There is also the cybersecurity framework that is becoming mandatory across Europe, UN R155, so now there is a problem with autonomous vehicles that continuously use communication in order to get driving directions, right?

Think about a bus full of passengers that is relying on external communication to tell it where to drive. That’s a big problem. When the vehicle is independent, or, the way we would put it, self-sufficient, not relying on anything from the outside in order to drive, then it’s not prone to cyber attacks.

Right. Which is super important because, as counterintuitive as it may sound, autonomous buses are probably going to be here before the mass production of Level 3 autonomous passenger vehicles, even though a bus is a big animal, and you can see here a Level 3 bus. By legislation, we need to drive 100,000 autonomous kilometers before we are allowed to pull out the driver, and an autonomous bus drives at a much lower speed, right? Because it’s an urban environment, it can only drive up to 50 km per hour. And it’s geofenced, right? Because it drives the same route every day, a route it learns with AI.

So, that’s it. After, I don’t know, three weeks, it has learned the route and now it’s perfect. And it’s a controlled environment, right? Unlike a vehicle that could go anywhere. But even an autonomous bus, think about driving in a city, sometimes needs to take a detour because the road is blocked.

I don’t know, there’s a demonstration, or there’s an accident, whatever, and it needs to go via a different route. If it only knows HD maps, then it will get stuck, because the other route wasn’t mapped for that vehicle, not to mention the communication hazard we talked about. But when it’s truly autonomous, it will find its way to the destination, right?

That’s the point about it. So, I think that in the end we’ll see bids all over Europe now, and also in Japan. And we have also started talking with the regulators in the U.S. about it, because it only works with a very specific type of bus, one that has a steering box controlled by wire, meaning you can connect to its CAN bus and give it commands. Typically, electric buses that were converted from diesel have hydraulic steering boxes, not electric steering boxes, so not every bus can be converted to autonomous. But it will happen within the next decade, it will happen, and I think at a faster pace than passenger vehicles.

[TTI]

Yeah, that’s really interesting. The fact that the geofencing of a bus route will be advantageous for your system is striking, because in my head I often think, well, geofencing of autonomous vehicle routes is great because you’ve got a map of that geofenced area. But in fact, what you’re saying is that geofencing is still great for buses, not because you’ve got a map, but because the system is learning the same route over and over and over again.

[Imagry]

Yeah, perfected in a very short time.

[TTI]

Just to be clear on your system, you mentioned it’s a bit like the Tesla system; does it rely only on cameras?

[Imagry]

Yes, it is similar to the Tesla system, that’s true, and we are using cameras, unless we are forced by regulation, maybe for buses or whatever, to use radar for safety.

Like Tesla’s FSD, we’re using cameras only, no LiDAR, because most of the use cases on that roadmap, from autonomous parking, which we currently do as autonomous valet parking, to traffic jams, to safe driver overwatch, and eventually full self-driving, are in fact urban, in an urban environment.

That’s where most people are willing to pay to take the hassle of driving away from them: when they’re stuck in traffic, when they’re driving in cities and so on. Ask the users of Tesla, or of the BMW sedans or the Mercedes S-Class, the vehicles that are already on the road with this. When you’re driving in an urban environment, and also on the highway, but especially in an urban environment, what you really need is cameras, because you need to understand the complicated environment of a mixed-traffic road, right? Where you have scooters and bicycles and pedestrians and other vehicles.

All of that needs to be understood by the AI system, and for that, only a camera can understand what it sees, like our eyes. The best way to look at it is to ask yourself: okay, when I drive, do I have radar? Do I have LiDAR? I don’t, right? And we’ve still been driving for, I don’t know, a hundred years now, pretty well, without them.

And here the perception of the vehicle is so much better, because it sees in real time, 360 degrees, out to 300 meters, and understands and classifies every object within that distance; it’s much better than what we humans can do. And I remember I saw some articles of yours that talk about safety.

This is so much safer than humans. And I think maybe I’ll send you some recent statistics, studies showing that if humans were to drive the same millions of miles that have been driven so far by autonomous vehicles, the number of accidents would be so much higher.

I know that every accident involving an autonomous vehicle is broadcast and so on, but the statistics are telling us that that’s not the real picture: autonomous vehicles are safer. And this is why you’re seeing so much happening in the UK now. There is a new bill, a new law now, and the King is talking about it.

And you see all the traction now in the UK, also funding, right? Heavy government funding. I think you covered that as well, the funding of the Alexander Dennis project.

[TTI]

Yeah, I think it’s just that there’s a much higher bar for safety that people will accept, isn’t there? The comparisons are often made to the aviation industry and how safe that is compared with traveling on roads. And, of course, a lot of aircraft operation is automated, and that is part of the reason why they’re so safe. But also it’s very much a psychological thing, isn’t it? If we’re handing over control to a computer system, we’re much more willing to accept human error, but we’re not so willing to accept computer error, even when we talk about it with regulators all over Europe.

[Imagry]

And the autonomous bus driver is not sick and is not tired and is not hungry and is not agitated, so it’s simply safer. We will get there.

[TTI]

Anyhow, one final point that’s really interesting about your system, thinking about the evolution over the last ten years or so. People would talk about autonomous vehicles, and then there was definitely a real push to stop talking about autonomous vehicles and start talking about automated vehicles, because there was a drive to say that autonomous vehicles must be connected.

They must also be connected vehicles, because those are two different things. And you disagree with that. You’re saying, and perhaps it’s almost come full circle, that actually, no, autonomous vehicles must be completely autonomous for them to be as safe as possible.

[Imagry]

Yes, they should be independent. They should be self-sufficient. They should act and behave like humans. I’m not getting external driving directions. Well, with the exception of when my wife is sitting in the car 😊.

The vehicle should really drive independently; otherwise it’s not really an autonomous vehicle.

[TTI]

So presumably you still use a map, as I would? I use my satnav so I know where to go.

[Imagry]

It’s called navigation. I get the map from Google Maps in order to know the route from A to B.

I need to drive now on Oxford Street for two kilometers and then turn right, drive for another 500 meters and then turn left. But I’m not getting a map telling me, at a resolution of five centimeters, what is on that road, right? In terms of traffic circles and crosswalks and traffic lights and traffic signs and curbs and lanes.
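To make the contrast concrete, here is a hypothetical sketch of the two kinds of input. The route structure and the HD-map entry below are illustrative assumptions, not a real data format; the only numbers reused are the ones from the Oxford Street example above.

```python
# Hypothetical illustration of the difference in input granularity.
# A mapless stack consumes only coarse, turn-by-turn navigation and
# perceives crosswalks, signs, lanes, and curbs live from the cameras.

navigation_route = [
    {"road": "Oxford Street", "distance_m": 2000, "then": "turn_right"},
    {"road": "next road", "distance_m": 500, "then": "turn_left"},
]

# What an HD-map-based stack would expect to be told in advance, at
# roughly 5 cm resolution (sketched here as a single illustrative tile):
hd_map_tile = {
    "crosswalk": {"start_xy_cm": (12000, 650), "end_xy_cm": (12000, 1050)},
    "traffic_light": {"xy_cm": (12400, 700), "controls_lane": 2},
    "curb": {"polyline_cm": [(11900, 640), (12500, 640)]},
}

print("external input to the mapless stack:", navigation_route)
print("what an HD-map stack would also need in advance:", hd_map_tile)
```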

All of that detail I need to understand by myself, as a human. This is why we are so proud of being a bio-inspired system that drives like a human. That has been the philosophy of both Tesla and Imagry from the get-go; there are three companies currently at that level, with the same philosophical system design, which are Tesla, Imagry, and Wayve.

We build a system that acts and behaves like a human. We mimic every possible behavior. It’s no coincidence, again, that the system was built by people with experience and backgrounds in brain sciences, right? Because that’s what we are trying to mimic.

Yeah, and now, finally, the computing is there for it to happen technologically, thanks to that Moore’s Law we’re all familiar with from the last three decades. And it has finally reached automotive: automotive-grade computing got that exponential jump, and now it has enough TOPS (Tera Operations Per Second) to process so much data in real time within a vehicle.

[TTI]

I hope you enjoyed Tom’s conversation with Eran. Join us again soon for another episode of the Transportation Podcast from TTI. In the meantime, stay in touch with us at traffictechnologiestoday.com, on LinkedIn, Twitter, and via this podcast.

 
