
Technology | Autonomous Vehicles | YC W26 | Valuation: Undisclosed

Last Updated: March 24, 2026

Builds AI-native radar systems for self-driving cars, using end-to-end deep learning to replace traditional radar signal-processing pipelines with neural networks that deliver superior perception and plug directly into modern autonomous-driving training pipelines.
Building radar hardware and software compatible with end-to-end autonomy training pipelines. Radar is the only all-weather depth sensor at price points that scale to mass-market vehicles. Research-first culture. No commercial product launch or automotive OEM partnership announced yet.
Both founders from Zendar (radar-for-AV company). Technical thesis: current radars output processed point clouds incompatible with end-to-end training, and no raw radar simulator exists. Targeting AV developers, Tier 1 automotive suppliers, and robotics companies. Very early stage; operating in stealth to protect pre-patent IP.
Replaces the entire traditional radar digital signal processing (DSP) chain with a single end-to-end deep learning model that directly converts raw radar returns into high-fidelity object detections, classifications, and tracks for autonomous driving.
Instead of radar doing math step-by-step like a textbook, a single AI brain looks at the raw radar echoes and instantly tells the car "that's a pedestrian, that's a truck, that's a guardrail."
It's like replacing a factory assembly line of 12 specialized workers with a single savant who sees the raw materials and instantly builds the finished product better than all 12 combined.
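To make "the traditional radar DSP chain" concrete, here is a minimal numpy sketch of what is being replaced: raw FMCW ADC returns processed step by step with a range FFT, a Doppler FFT, and a crude threshold detector (a stand-in for CFAR and tracking). All parameter values and the single-target scene are illustrative, not from the company; an end-to-end model would map the raw `cube` to detections directly instead of running these stages.

```python
import numpy as np

# Sketch of the classical FMCW radar DSP chain that an end-to-end model
# would replace. Parameters and the scene are illustrative.
rng = np.random.default_rng(0)
n_chirps, n_samples = 64, 128

# Simulate raw ADC returns from one target: a beat-frequency tone in
# fast time (range) plus a per-chirp phase ramp (Doppler), plus noise.
range_bin, doppler_bin = 30, 10
t = np.arange(n_samples)[None, :]
c = np.arange(n_chirps)[:, None]
signal = np.exp(2j * np.pi * (range_bin * t / n_samples +
                              doppler_bin * c / n_chirps))
cube = signal + 0.1 * (rng.standard_normal((n_chirps, n_samples)) +
                       1j * rng.standard_normal((n_chirps, n_samples)))

# Step 1: range FFT (fast time). Step 2: Doppler FFT (slow time).
rd_map = np.abs(np.fft.fft(np.fft.fft(cube, axis=1), axis=0))

# Step 3: crude global threshold (stand-in for CFAR + tracking).
detections = np.argwhere(rd_map > rd_map.mean() + 6 * rd_map.std())
print(detections)  # → [[10 30]]  (recovers the planted doppler/range bins)
```

The end-to-end thesis is that every hand-tuned stage here (window choices, thresholds, tracker heuristics) discards information a neural network could exploit from the raw returns.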
Uses deep reinforcement learning to dynamically optimize radar transmit waveforms in real time, adapting to driving context, interference, and environmental conditions to maximize perception quality.
The radar teaches itself to shout in exactly the right "voice" for each driving situation—whispering in clear weather, belting through rain, and dodging other radars' noise—all in real time.
It's like a DJ who reads the room in real time and adjusts the music perfectly for every moment, instead of just hitting play on the same playlist every night.
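The waveform-adaptation idea can be illustrated with a toy RL loop. This sketch simplifies "deep reinforcement learning" to a tabular epsilon-greedy contextual bandit; the contexts, waveform presets, and reward table are all invented for illustration, since the company's actual method is not public.

```python
import random

# Hypothetical sketch: pick a transmit waveform per sensed driving
# context and learn from a noisy perception-quality reward.
random.seed(0)
CONTEXTS = ["clear", "rain", "interference"]
WAVEFORMS = ["low_power_narrowband", "high_power_wideband", "freq_hopping"]

# Hidden "true" perception quality per (context, waveform) pair.
QUALITY = {
    ("clear", "low_power_narrowband"): 0.9,
    ("clear", "high_power_wideband"): 0.7,
    ("clear", "freq_hopping"): 0.6,
    ("rain", "low_power_narrowband"): 0.3,
    ("rain", "high_power_wideband"): 0.9,
    ("rain", "freq_hopping"): 0.5,
    ("interference", "low_power_narrowband"): 0.4,
    ("interference", "high_power_wideband"): 0.5,
    ("interference", "freq_hopping"): 0.9,
}

q = {(ctx, w): 0.0 for ctx in CONTEXTS for w in WAVEFORMS}  # value estimates
counts = {k: 0 for k in q}

for step in range(3000):
    ctx = random.choice(CONTEXTS)
    # Epsilon-greedy: mostly exploit the best-known waveform, sometimes explore.
    if random.random() < 0.1:
        wf = random.choice(WAVEFORMS)
    else:
        wf = max(WAVEFORMS, key=lambda w: q[(ctx, w)])
    reward = QUALITY[(ctx, wf)] + random.gauss(0, 0.05)  # noisy feedback
    counts[(ctx, wf)] += 1
    q[(ctx, wf)] += (reward - q[(ctx, wf)]) / counts[(ctx, wf)]  # running mean

# Learned policy: best waveform per context.
policy = {ctx: max(WAVEFORMS, key=lambda w: q[(ctx, w)]) for ctx in CONTEXTS}
print(policy)
```

After training, the policy matches the hidden quality table: quiet narrowband in clear weather, high power in rain, frequency hopping under interference. A production system would replace the table with real perception metrics and the tabular values with a deep policy network.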
Builds a physics-based digital twin simulation platform that generates synthetic radar data at scale to train, validate, and stress-test AI radar perception models against millions of rare and dangerous driving scenarios without physical road testing.
Instead of driving a million miles to find every weird thing that could go wrong, they build a virtual world where the radar practices against millions of scary scenarios from the safety of a computer.
It's like a flight simulator for radar—pilots don't learn to handle engine failures by crashing real planes, and Congruent's radar AI doesn't learn to handle black ice by skidding on real highways.
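A minimal sketch of the synthetic-data idea: render raw radar frames for randomly sampled scenarios so a model can train on rare cases without road miles. The physics here is reduced to point scatterers emitting range/Doppler tones, and every name and parameter is illustrative; a real digital twin would model antenna patterns, multipath, clutter, and materials.

```python
import numpy as np

def render_frame(targets, n_chirps=64, n_samples=128, noise=0.1, rng=None):
    """Render one raw radar data cube from a list of point scatterers.

    targets: list of (range_bin, doppler_bin, amplitude) tuples.
    """
    if rng is None:
        rng = np.random.default_rng()
    t = np.arange(n_samples)[None, :]
    c = np.arange(n_chirps)[:, None]
    cube = np.zeros((n_chirps, n_samples), dtype=complex)
    for r, d, a in targets:
        # Each point scatterer contributes one range/Doppler complex tone.
        cube += a * np.exp(2j * np.pi * (r * t / n_samples + d * c / n_chirps))
    # Additive receiver noise.
    cube += noise * (rng.standard_normal(cube.shape) +
                     1j * rng.standard_normal(cube.shape))
    return cube

rng = np.random.default_rng(1)
# Sample a hypothetical "rare scenario": a stalled car (zero Doppler)
# plus a fast mover, at random ranges.
scenario = [(int(rng.integers(5, 120)), 0, 1.0),
            (int(rng.integers(5, 120)), int(rng.integers(1, 63)), 0.7)]
frame = render_frame(scenario, rng=rng)
print(frame.shape)  # → (64, 128)
```

Because the generator knows the ground-truth scatterers it placed, every synthetic frame comes with perfect labels for free, which is exactly what supervised training on millions of rare scenarios requires.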
Evan Carnahan (PhD, UT Austin, Geophysics) spent 1.5 years at NASA JPL, then 3.5 years at Zendar, progressing from intern to Research Engineering Manager leading radar ML/perception teams. Clement Barthes (PhD, UC Berkeley, Structural Engineering) was Lab Manager at Berkeley PEER for 5+ years, CTO at Safehub (sensor startup) for 5+ years, then ML Manager at Zendar for 2.75 years. Both came directly from Zendar, giving the team domain credibility in radar-for-AV that is rare at the YC stage.