Autonomous driving is no longer a moonshot; it’s the future of mobility. But one of the biggest questions still divides the industry: do self-driving cars need LiDAR?
Tesla says no. Waymo says absolutely. And the rest of the industry is split down the middle.
But at Imagry, we’ve chosen a third path: a real-time, vision-based autonomous driving system that doesn’t rely on LiDAR, HD maps, or cloud connectivity, yet can integrate easily with other types of sensors if the customer so desires. And here’s the kicker: it’s already proving itself on real roads.
Let’s break down the two major approaches, why this debate matters, and where Imagry fits in.

LiDAR (left) vs Vision (right): Two different approaches to autonomous driving technology.
LiDAR (Light Detection and Ranging) uses lasers to build a precise 3D map of the environment around the vehicle. It’s like giving your car a sixth sense: it measures distance by firing light pulses and detecting how long they take to bounce back.
The Pros:
- Direct, highly accurate depth measurement
- Works in darkness, independent of ambient light
The Cons:
- Expensive hardware that adds cost, bulk, and complexity
- Performance can degrade in rain, fog, and snow
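The time-of-flight principle behind LiDAR can be sketched in a few lines. This is a simplified illustration of the physics, not any vendor’s API; the function name and the example timing are ours:

```python
# Minimal sketch of LiDAR's time-of-flight principle:
# distance = (speed of light x round-trip time) / 2.

C = 299_792_458  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to a target, given the pulse's round-trip time in seconds."""
    return C * round_trip_s / 2

# A pulse that returns after 2 microseconds implies a target ~300 m away.
print(round(tof_distance(2e-6), 3))  # 299.792
```

The division by two matters because the measured time covers the pulse’s trip out to the target and back.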
The vision-based approach relies on cameras, just like human eyes. Combined with advanced computer vision and AI, these systems interpret 2D images into a 3D understanding of the world.
The Pros:
- Low-cost, widely available hardware
- Rich semantic detail: lane markings, traffic lights, signs, and text
The Cons:
- Depth must be inferred rather than measured directly
- Sensitive to glare, darkness, and poor visibility
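One classic way camera systems recover depth is stereo triangulation: two cameras a known distance apart see the same point at slightly different image positions, and depth follows from that disparity. A minimal sketch, where the focal length, baseline, and disparity values are illustrative assumptions:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 700 px focal length, 12 cm baseline, and 14 px disparity put the point at 6 m.
print(stereo_depth(700, 0.12, 14))  # 6.0
```

Production systems (and monocular approaches like Tesla’s) use learned models rather than this closed-form geometry, but the underlying idea is the same: depth is inferred from images, not measured directly.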
Tesla has gone all-in on vision-only autonomy. With its Full Self-Driving (FSD) system, launched in beta in 2020, Tesla removed radar entirely (its production cars never used LiDAR). Elon Musk has repeatedly stated that “the road system is designed for eyes,” and therefore cameras are enough.
On the other hand, Waymo has built its system around LiDAR + HD maps. It relies on centimeter-level mapping and a full sensor stack (LiDAR, radar, cameras) to drive in highly geofenced zones.
This has led to a divide: Tesla prioritizes scalability, learning through data, and edge inference, while Waymo prioritizes safety through redundancy, mapping, and control.
At Imagry, we’ve developed a third path: one that combines the scalability of vision with a real-time understanding of the world that doesn’t rely on pre-built maps or expensive sensors.
Here’s how we’re different:
- Vision-first perception: no LiDAR and no HD maps required
- Real-time, on-vehicle processing with no cloud connectivity needed
- Hardware-agnostic: other sensor types can be integrated if the customer so desires
We believe that autonomy has to be affordable, adaptable, and scalable.
That means:
- Affordable: no costly sensor suites or mapping fleets
- Adaptable: able to drive in new environments without a pre-built map
- Scalable: not confined to geofenced, pre-mapped zones
LiDAR makes sense for research. HD maps make sense for short-term demos. But for autonomy to really scale, it has to work like a human driver: Perceive. Decide. Act.
The debate isn’t really about hardware. It’s about philosophy: Do we build self-driving systems that measure the world, or understand it? Do we encode rules, or build models that learn?
Our platform does just that. It learns from every experience, improves with every mile, and drives safely without needing a predefined map of the world.
We’re not just betting on cameras. We’re betting on real-time intelligence. On open systems. On freedom from costly dependencies.
In the battle between vision and LiDAR, we believe the winner won’t be the flashiest sensor. It’ll be the smartest system.
Next stop, full autonomy!
Are you coming? Got a question for us?