Perception and Motion Planning for AI Autonomous Driving
Imagry has developed a software stack that uses standard camera feeds to perceive the environment immediately surrounding the self-driving vehicle in real time. Several deep neural networks process the video feeds from the cameras, producing a perception map that is fed to Imagry’s second software stack, which handles motion planning. Unlike rule-based systems, Imagry takes a neural-network approach to motion planning: the network is trained to drive by imitating human behavior, producing a path that traverses the nearby area covered by the perception map.
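To make the two-stack structure concrete, here is a minimal sketch of a camera-to-perception-map-to-planner pipeline trained by imitation, written in PyTorch. Everything in it is an illustrative assumption: the network names (PerceptionNet, PlannerNet), the top-down grid representation, the waypoint trajectory format, and the training step are stand-ins, since Imagry’s actual architecture is not public.

```python
# Illustrative sketch only (assumes PyTorch): cameras -> perception map ->
# imitation-learned planner. Shapes, names, and formats are hypothetical,
# not Imagry's actual design.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Encodes a camera frame into a coarse top-down perception grid."""
    def __init__(self, grid=32, channels=8):
        super().__init__()
        self.grid, self.channels = grid, channels
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, channels * grid * grid),
        )

    def forward(self, frame):  # frame: (B, 3, H, W)
        return self.encoder(frame).view(-1, self.channels, self.grid, self.grid)

class PlannerNet(nn.Module):
    """Maps the perception map to N (x, y) waypoints ahead of the vehicle."""
    def __init__(self, grid=32, channels=8, n_waypoints=10):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * grid * grid, 256), nn.ReLU(),
            nn.Linear(256, n_waypoints * 2),
        )

    def forward(self, perception_map):
        return self.head(perception_map).view(-1, self.n_waypoints, 2)

# Imitation learning: regress the planner's waypoints onto a human driver's
# recorded trajectory for the same scene. Random tensors stand in for data.
perception, planner = PerceptionNet(), PlannerNet()
opt = torch.optim.Adam(
    list(perception.parameters()) + list(planner.parameters()), lr=1e-4)

frames = torch.randn(4, 3, 240, 320)     # batch of camera frames
human_waypoints = torch.randn(4, 10, 2)  # recorded human trajectory

opt.zero_grad()
pred = planner(perception(frames))                    # (4, 10, 2)
loss = nn.functional.l1_loss(pred, human_waypoints)   # match the human path
loss.backward()
opt.step()
```

The key design point the sketch tries to capture is the decoupling described above: the planner never sees raw pixels, only the intermediate perception map, so the two stacks can be developed and validated separately.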
Although we are a software company, we built the hardware necessary to showcase our software stack on the road. We integrated our test autonomous driving system into a passenger vehicle and took it for a drive in downtown Haifa, Israel, an area that’s notoriously challenging to navigate. Over the past four years, in addition to Haifa, we have driven safely and without any accidents in San Jose, California; Tempe, Arizona; Frankfurt, Germany; and Tokyo, Japan. We were recently offered a unique opportunity to showcase our capabilities in an Israeli government-funded project to deploy autonomous buses on public roads.