Imagry Cortex™:

The Generative Autonomy Stack

From vision to control, AI that learns like a human and acts even faster.

Drive Like a Human, Only Better

See How It Works
Most autonomous systems are scripted. Imagry Cortex™ is adaptive.
It learns to drive the way humans do: by perceiving the world, understanding it, and reacting in real time.

That’s what makes it safer, smarter, and scalable.

Inside Imagry Cortex™

Imagry Cortex™ is a unified, real-time autonomy stack that mirrors how humans drive: seeing everything, staying aware, predicting what’s next, and responding instantly. With camera-based perception, adaptive scene understanding, and intelligent motion planning, Cortex makes fast, safe decisions without relying on maps, LiDAR, or the cloud. It’s vision-based AI built for the unpredictability of real roads, and it runs on the hardware you already have.

Perception

The Perception stack is a real-time image recognition system that uses the video feed from onboard cameras to produce a reliable view of the surrounding environment.

How it Works

During the Perception stage, images are collected by cameras positioned around the vehicle.

Then, using proprietary image-annotation tools, multiple deep convolutional neural networks (DCNNs) are trained, each responsible for identifying a different object class (traffic lights, parked and moving vehicles, pedestrians, road markings, etc.). The result is a real-time, 360°, 3D, HD-equivalent map of the area up to 300 meters ahead of the vehicle.
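To make the fusion step concrete, here is a minimal sketch of the idea: several per-class detectors each emit detections in vehicle-centric coordinates, which are rasterized into one semantic bird's-eye-view grid covering the 300 m forward range described above. All names, the grid resolution, and the detection format are illustrative assumptions, not Imagry's actual API.

```python
import numpy as np

# Hypothetical fusion step: per-class detections are rasterized into a
# single semantic bird's-eye-view grid (names and formats are assumed).
CLASSES = {"traffic_light": 1, "vehicle": 2, "pedestrian": 3, "road_marking": 4}
RANGE_M = 300.0   # forward detection range described in the text
CELL_M = 1.0      # 1 m grid resolution (assumed)

def fuse_detections(detections):
    """Rasterize per-class detections into a semantic BEV grid.

    detections: list of (class_name, x_m, y_m) tuples, where x is the
    forward distance and y the lateral offset from the vehicle.
    Returns an int grid of shape (rows, cols); 0 = free/unknown.
    """
    rows = int(RANGE_M / CELL_M)        # forward axis
    cols = int(2 * RANGE_M / CELL_M)    # lateral axis, centered on the vehicle
    grid = np.zeros((rows, cols), dtype=np.int8)
    for cls, x, y in detections:
        r = int(x / CELL_M)
        c = int((y + RANGE_M) / CELL_M)
        if 0 <= r < rows and 0 <= c < cols:
            grid[r, c] = CLASSES[cls]
    return grid

# Example: three detections from three different (simulated) detectors.
frame_detections = [
    ("vehicle", 42.0, -3.5),
    ("pedestrian", 15.0, 4.0),
    ("traffic_light", 120.0, 0.0),
]
bev = fuse_detections(frame_detections)
print(bev[42, 296])  # vehicle class id -> 2
```

In a real stack each detector would be a trained DCNN running on the camera feed; here they are replaced by hard-coded detections so the fusion logic stands alone.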

Features

Camera-based
Scans 360° using Visible Imaging Sensor (“VIS”) cameras
Smart Object Detection
Detects and perceives road geometry and markings, traffic signals and signs, and tracks various objects to predict trajectory and velocity
Real-time Data Collection
Delivers critical input of surroundings and environment to direct perception/motion planning and vehicle controls
Motion Planning

The Motion Planning stack uses spatial DCNNs that learn to drive by imitating human driving behavior.

How it Works

The generated map is passed to the Motion Planning stage, which includes a DCNN trained with supervised learning to mimic a human driver and produce instructions telling the autonomous vehicle how to react under the current circumstances.
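The supervised imitation idea above can be sketched in a few lines: a regressor is fit so that its output matches a human driver's recorded command for each perceived scene. The synthetic data, linear model, and feature vector below are stand-ins for illustration only, not Imagry's actual architecture.

```python
import numpy as np

# Imitation-learning sketch (assumed, simplified): fit a model so its
# steering output matches recorded human commands for each scene.
rng = np.random.default_rng(0)

# Synthetic dataset: perceived-scene features -> human steering command.
X = rng.normal(size=(500, 4))              # scene features per frame
true_w = np.array([0.8, -0.3, 0.1, 0.5])   # hidden "human" driving rule
y = X @ true_w                             # recorded human steering

# Supervised training: gradient descent on mean squared error.
w = np.zeros(4)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

# After training, the learned policy reproduces the demonstrated commands.
mse = float(np.mean((X @ w - y) ** 2))
print(f"imitation MSE: {mse:.6f}")
```

A production system would use a deep network over the perception map rather than a linear model, but the training objective, minimizing the gap between the model's output and the human's, is the same.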

Features

Real-time Data-driven Decisions
Makes complex data-driven decisions relying on what is perceived in real-time
Consistently Appropriate Response
Responds intelligently to new and unforeseen situations (unlike autonomous driving systems that rely on rule-based code)
Economical
Avoids large investment in rule-based code writing and verification
Handles Edge Cases Well
Uses rule-based augmented neural network architecture to better handle edge cases
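One plausible reading of the "rule-based augmented neural network" claim is that explicit safety rules wrap the learned policy's output. The sketch below shows that pattern with a hypothetical speed proposal and a hard pedestrian-proximity rule; the function names, thresholds, and scene format are all assumptions.

```python
# Rule-augmented policy sketch (our illustration, not Imagry's design):
# a learned model proposes a speed, and a hard-coded safety rule
# overrides it in edge cases the network may handle poorly.

def learned_speed_ms(scene):
    """Stand-in for the neural network's speed proposal (m/s)."""
    return 13.9  # ~50 km/h cruising proposal (placeholder value)

def safe_speed_ms(scene):
    """Wrap the learned output with an explicit safety rule."""
    proposal = learned_speed_ms(scene)
    # Rule: if any pedestrian is within 10 m, cap speed at walking pace.
    if any(d < 10.0 for d in scene.get("pedestrian_distances_m", [])):
        return min(proposal, 1.5)
    return proposal

print(safe_speed_ms({"pedestrian_distances_m": [25.0]}))  # 13.9
print(safe_speed_ms({"pedestrian_distances_m": [6.0]}))   # 1.5
```

The design choice is that the network supplies nuanced, data-driven behavior while a small, verifiable rule layer guarantees a floor of safe conduct in rare situations.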

Built to Scale, Built for Now.

This isn’t a concept. It’s production-ready autonomy that works today.

Accuracy

We continuously improve our motion planning models via our fleet of autonomous test vehicles in the U.S., Germany, Japan, and Israel. Customers (OEMs and Tier-1 automotive manufacturers) can choose to benefit from these improvements via OTA (over-the-air) downloads and update their models when they deem appropriate.

Safety

Safety is at the core of Imagry’s technology.

Our SAE Level 3 / 4 system mimics how experienced drivers operate. Cameras act as the vehicle’s eyes, while neural networks process the environment like a brain, constantly assessing, predicting, and planning safe motion paths.

We are the first and only company to pass NCAP safety testing for autonomous buses.

Because safety isn’t a feature, it’s a foundation.

Next stop, full autonomy!

Are you coming? Got a question for us?

    Company Locations

    Imagry, Inc.
    1630 Old Oakland Rd.
    Suite #A112
    San Jose CA 95131
    USA
    Imagry (Israel) Ltd.
    53 Derekh HaAtsma'ut
    3rd Floor
    Haifa 3303327
    Israel
