Imagry’s software solution for L3/L4 autonomous driving uses deep neural networks and artificial intelligence (AI) to teach the vehicle, through supervised learning, to mimic the behavior of a skilled human driver and make driving decisions on the fly. Neural networks play a key role here: their ability to process data in parallel lets the vehicle adapt to situations it has never seen or navigated before, in contrast to the linear motion planning used by rule-based solutions. In the Imagry method, the vehicle adapts by producing a motion plan from a combination of scenarios the neural network has already learned, much as humans do.
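To make the idea of supervised imitation learning concrete, here is a minimal, purely illustrative sketch: a toy linear "policy" is trained by regression to reproduce an expert driver's steering commands from state features. The feature names, data, and model are hypothetical and greatly simplified; they are not Imagry's actual networks or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert demonstrations": state features -> expert steering command.
# Hypothetical features: lane offset, heading error, speed, road curvature.
states = rng.normal(size=(200, 4))
true_w = np.array([0.8, -0.5, 0.1, 0.3])   # unknown expert mapping (for data generation only)
expert_actions = states @ true_w

# Behavior cloning: fit the policy to the expert's actions by
# minimizing mean-squared error with plain gradient descent.
w = np.zeros(4)
for _ in range(500):
    pred = states @ w
    grad = states.T @ (pred - expert_actions) / len(states)
    w -= 0.1 * grad

# After training, the policy's steering closely matches the expert's.
mse = float(np.mean((states @ w - expert_actions) ** 2))
```

A real driving policy replaces the linear model with deep networks and the four-number state with camera-derived perception, but the supervised objective, matching the expert's decisions, has the same shape.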
Imagry developed a software stack that uses regular camera feeds to perceive the vehicle’s immediate environment in real time. Several deep neural networks process the video feeds from the cameras, producing a perception map that is fed to Imagry’s second software stack, which handles the motion-planning phase.
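The two-stage structure described above can be sketched as a simple pipeline: a perception stage turns camera frames into a perception map, and a separate planning stage turns that map into a trajectory. All names, data structures, and logic below are illustrative stand-ins, not Imagry's actual interfaces; the deep networks are replaced by trivial placeholder functions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PerceptionMap:
    """Illustrative output of the perception stage."""
    obstacles: List[Tuple[float, float]]  # (x, y) positions of detected objects
    lane_center: float                    # lateral offset of the lane center

def perceive(camera_frames: List[list]) -> PerceptionMap:
    """Stand-in for the deep-network perception stack.

    A real system would run several neural networks over the frames;
    here we just emit a fixed, plausible-looking map.
    """
    obstacles = [(1.0, 5.0)] if camera_frames else []
    return PerceptionMap(obstacles=obstacles, lane_center=0.2)

def plan_motion(pmap: PerceptionMap) -> List[float]:
    """Stand-in for the motion-planning stack: steer smoothly toward the
    lane center, using a shorter (more cautious) horizon when obstacles
    are present."""
    horizon = 3 if pmap.obstacles else 5
    return [pmap.lane_center * (i + 1) / horizon for i in range(horizon)]

# Pipeline wiring: cameras -> perception map -> motion plan.
plan = plan_motion(perceive([[0.0]]))
```

The design point the sketch illustrates is the clean interface between the two stacks: the planner never sees raw pixels, only the perception map, so either stage can be retrained or swapped independently.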
Make no mistake, though: there are no shortcuts with this method! It takes years to train neural networks to drive autonomously, and that is exactly what we have been doing at Imagry for over five years now. During that time, autonomous vehicles running our software have been operating in three regions (the U.S., Europe, and Israel), using supervised learning techniques to hone our technology. Our solution is HD-mapless, avoiding expensive and complex mapping, localization, and communication issues. It is hardware-agnostic, providing a platform for easy integration into various vehicles and settings, which makes it easily deployable and scalable. Last but not least, because it adapts to new environments and situations on the fly, it is location-independent: roll-out scales to new locations worldwide on the basis of fast, small-scale local adaptation, optionally delivered as an over-the-air software update.