Innovative SAE L3 / L4 AI-Based Autonomous Driving Software
Imagry is an autonomous driving software provider that has created an HD-mapless, AI-based driving system. It supports SAE L3/L4 autonomy in passenger cars as well as buses.
Our bio-inspired technology combines real-time, vision-based perception with imitation-learning AI to create a driving decision-making network based on the vehicle’s current surroundings. This eliminates the need for the high-bandwidth connectivity required by the HD-mapping approach used in other autonomous driving solutions. Imagry’s software enables the autonomous vehicle to understand the road as it goes and react to dynamic contexts and environments, just like a skilled human driver.
The technology works with existing on-board cameras, allowing for a lower-cost deployment, but can also be integrated with other perception sensors (e.g., long-range cameras, RADAR/LiDAR) if required by use case.
The Imagry AI-based autonomous driving solution is divided into two software processes:
Perception & Motion Planning
Perception
The Perception stack is a real-time image recognition system that uses the video feed from onboard cameras to produce a reliable view of the surrounding environment.
How it Works
During the Perception stage, images are collected by cameras positioned around the vehicle.
Then, using proprietary IP tools for image annotation, various deep convolutional neural networks (DCNNs) are trained, each responsible for identifying a different class of object (traffic lights, parked and moving vehicles, pedestrians, road markings, etc.). The result is a real-time, 360°, 3D, HD-map-equivalent view of the area up to 300 meters in front of the vehicle.
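To make the idea concrete, here is a minimal sketch of how per-class detector outputs could be fused into a single local map per frame. This is an illustrative assumption, not Imagry's actual architecture or API; the class names, fields, and 300 m range cutoff are stand-ins based on the description above.

```python
# Hypothetical sketch: fusing the outputs of several class-specific
# detector networks into one local, HD-map-equivalent view of the scene.
from dataclasses import dataclass, field

@dataclass
class Detection:
    cls: str          # e.g. "traffic_light", "pedestrian", "vehicle"
    x: float          # longitudinal distance ahead of the vehicle, meters
    y: float          # lateral offset from the vehicle centerline, meters
    velocity: float   # estimated speed, m/s (0 for static objects)

@dataclass
class LocalMap:
    """Per-frame view of the surroundings, up to ~300 m ahead."""
    max_range_m: float = 300.0
    objects: list = field(default_factory=list)

    def add(self, det: Detection) -> None:
        # Keep only detections within the forward sensing range.
        if 0.0 <= det.x <= self.max_range_m:
            self.objects.append(det)

def fuse(per_class_outputs: dict[str, list[Detection]]) -> LocalMap:
    """Merge detections from each class-specific network into one map."""
    scene = LocalMap()
    for dets in per_class_outputs.values():
        for det in dets:
            scene.add(det)
    return scene
```

In this sketch, each DCNN contributes detections for its own object class, and the fused map is rebuilt every frame from camera input alone, which is what makes an HD-mapless approach possible.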
Features
Scans 360° using Visible Imaging Sensor (“VIS”) cameras
Detects and perceives road geometry and markings, traffic signals and signs, and tracks objects to predict their trajectories and velocities
Real-time Data Collection
Delivers critical input of surroundings and environment to direct perception/motion planning and vehicle controls
Motion Planning
The Motion Planning stack uses spatial DCNNs (deep convolutional neural networks) that learn how to drive by imitating human driving behavior.
How it Works
The generated map is passed to the Motion Planning stage, which includes a DCNN trained by supervised learning to mimic a human driver. It produces instructions telling the autonomous vehicle how to react under the current circumstances.
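The supervised, imitation-learning idea can be sketched with a toy example. The feature names, linear model, and training data below are purely illustrative assumptions for clarity; Imagry's actual planner is a spatial DCNN, not a linear regressor.

```python
# Minimal imitation-learning sketch (illustrative only): a tiny linear
# "policy" trained by supervised learning to reproduce a human driver's
# steering command from features of the perceived scene.
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-frame features, e.g. lane offset, heading error,
# distance to the lead vehicle.
X = rng.normal(size=(500, 3))
true_w = np.array([-0.8, -1.2, 0.05])          # stand-in "human behavior"
y = X @ true_w + 0.01 * rng.normal(size=500)   # recorded human steering

w = np.zeros(3)
lr = 0.1
for _ in range(200):                 # plain gradient descent on squared error
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# The learned policy now maps perceived features to a steering command,
# mimicking the recorded demonstrations rather than hand-written rules.
```

The key point the example illustrates is that the planner's "rules" are never written down: the network recovers driving behavior directly from labeled examples of how a human responded in the same situations.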
Features
Real-time Data-driven Decisions
Makes complex, data-driven decisions based on what is perceived in real time
Consistently Appropriate Response
Responds intelligently to new and unforeseen situations (unlike autonomous driving systems that rely on rule-based code)
Avoids large investment in rule-based code writing and verification
Uses rule-based augmented neural network architecture to better handle edge cases
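One common way to combine a learned planner with hard rules, as the feature list above describes, is to let a small rule layer override the network's proposal in safety-critical edge cases. The sketch below shows that assumed structure; the function names, thresholds, and scene fields are hypothetical, not Imagry's implementation.

```python
# Illustrative rule-augmented policy: a learned network proposes a
# command, and a small set of hard rules can override it in edge cases.

def learned_policy(scene: dict) -> dict:
    """Stand-in for the trained motion-planning network."""
    return {"throttle": 0.4, "brake": 0.0}

def apply_safety_rules(scene: dict, command: dict) -> dict:
    """Hard rules take precedence over the learned output."""
    if scene.get("obstacle_distance_m", float("inf")) < 5.0:
        return {"throttle": 0.0, "brake": 1.0}   # emergency-stop rule
    if scene.get("traffic_light") == "red":
        return {"throttle": 0.0, "brake": 0.6}   # stop for a red light
    return command

def plan(scene: dict) -> dict:
    return apply_safety_rules(scene, learned_policy(scene))
```

This split keeps the flexibility of the learned network for ordinary driving while guaranteeing deterministic behavior for the handful of cases where a fixed rule is clearly correct.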
In a nutshell, the Imagry AI-based autonomous driving solution uses a combination of Artificial Intelligence (AI), image recognition IP, and supervised learning to mimic skillful human driving. It can operate the vehicle autonomously within existing road infrastructure.
Imagry continues to improve the accuracy of its motion planning and expand its database of annotated use cases via its fleet of autonomous test vehicles in the U.S., Germany, Japan, and Israel. Customers (OEMs and Tier-1 automotive manufacturers) can choose to benefit from these improvements via OTA (over-the-air) downloads, updating their models whenever they deem appropriate.
Safety is at the heart of Imagry’s AI-based autonomous driving solution, embedded deep within our bio-inspired technology.
Patterned after a skilled human driver, our SAE L3 / L4 system uses cameras like eyes for perception of the surroundings, and Artificial Intelligence (AI) and neural networks like a brain for motion planning.