
Technology | Foundation Models | YC W26 | Valuation: Undisclosed

Last Updated: March 24, 2026

Builds foundation models optimized for learning efficiency, enabling AI systems to adapt rapidly to new tasks with minimal data, particularly in data-sparse scientific fields. Achieved state of the art on the ARC-AGI-2 benchmark with an open-source solver (97.9% at $11.77/task), released on GitHub.
The approach uses LLMs to write code describing transformations, structured to resemble the training data as closely as possible, which enables long-horizon work. Focus is on data-sparse domains: hardware engineering, drug design, and physics research, approached from two angles: hypothesis generation and automated experiment design.
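The program-synthesis pattern behind this style of solver can be sketched roughly as follows. This is a general illustration, not the company's published code: `propose_program` stands in for an LLM call, and the accept-only-if-all-training-pairs-match rule is an assumed scoring choice.

```python
def solve_task(train_pairs, test_input, propose_program, n_candidates=64):
    """Sketch of an LLM-writes-code solver loop (hypothetical interface).

    propose_program(train_pairs) is assumed to return Python source that
    defines transform(grid). Each candidate program is executed on the
    training pairs; the first one that reproduces every training output
    exactly is applied to the test input.
    """
    for _ in range(n_candidates):
        src = propose_program(train_pairs)   # assumed LLM call
        ns = {}
        try:
            exec(src, ns)                    # should define transform(grid)
            fn = ns["transform"]
            # Accept only programs consistent with *all* training examples.
            if all(fn(inp) == out for inp, out in train_pairs):
                return fn(test_input)
        except Exception:
            continue                         # discard programs that crash
    return None                              # no consistent program found


# Usage with a stub "LLM" that always proposes a transpose program:
stub = lambda pairs: "def transform(grid):\n    return [list(r) for r in zip(*grid)]"
answer = solve_task([([[1, 2], [3, 4]], [[1, 3], [2, 4]])], [[5, 6]], stub)
```

Selecting only programs that fit every training pair is what lets a single verified program generalize to the held-out test grid without any gradient updates.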
Deep investment in parameter-efficient continual fine-tuning, test-time training, and modular multi-agent orchestration. Hiring of scientists signals a pivot toward wet-lab- or simulation-integrated AI.
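The appeal of parameter-efficient fine-tuning is easiest to see in the parameter arithmetic of low-rank adapters (LoRA-style), shown below as a general illustration; nothing here is the company's disclosed method.

```python
def lora_param_counts(d_in, d_out, rank):
    """Parameter arithmetic for a low-rank adapter on one linear layer.

    Instead of updating the full d_out x d_in weight matrix, train two
    thin factors B (d_out x rank) and A (rank x d_in) and add B @ A to
    the frozen weights. Returns (full update size, adapter size, ratio).
    """
    full = d_out * d_in
    adapter = rank * (d_in + d_out)
    return full, adapter, adapter / full


# For a 4096 x 4096 layer at rank 8, the adapter trains under 0.4%
# of the parameters a full-matrix update would touch.
full, adapter, ratio = lora_param_counts(4096, 4096, 8)
```

Because so few parameters change per task, adapters can be trained continually on small task-specific datasets and swapped in and out, which is what makes the approach attractive for data-sparse domains.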
Rapidly adapts foundation models to new scientific domains using minimal experimental data, accelerating discovery in biology and materials science.
It's like giving a scientist an AI lab partner that reads one paper and already knows how to design the next experiment.
It's like a new hire who shows up on day one already knowing 95% of the job and needs only an afternoon of shadowing before running experiments independently.
Applies test-time training and refinement loops so models can solve novel abstract reasoning tasks on the fly without retraining from scratch.
The AI figures out the rules of a brand-new puzzle while it's solving it, instead of needing to study thousands of examples first.
It's like an athlete who's never played pickleball before but figures out the strategy mid-game by the third point and starts winning by the fifth.
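Solving a novel task on the fly typically takes the form of a propose-score-refine loop at inference time. The sketch below illustrates that pattern generically; `propose_revision` stands in for an LLM or learned component and is an assumption, not the company's implementation.

```python
def refine(initial, score, propose_revision, max_steps=8, target=1.0):
    """Generic test-time refinement loop (illustrative pattern).

    Keeps the best candidate seen so far, asks the proposer for a
    revision of it, and stops once the verifier score reaches target
    or the step budget runs out. Returns (best candidate, best score).
    """
    best, best_score = initial, score(initial)
    for _ in range(max_steps):
        if best_score >= target:
            break                                  # task solved
        candidate = propose_revision(best, best_score)  # assumed interface
        s = score(candidate)
        if s > best_score:                         # keep only improvements
            best, best_score = candidate, s
    return best, best_score


# Usage with toy stand-ins: score saturates at 1.0 once the value hits 10.
best, best_score = refine(
    0,
    score=lambda x: min(x / 10, 1.0),
    propose_revision=lambda x, s: x + 2,
)
```

The key property is that all adaptation happens inside the loop, against a verifier for this one task, with no retraining of the underlying model.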
Orchestrates specialized AI sub-agents that collaborate to solve complex scientific problems where no single model has sufficient training data.
Instead of one AI trying to do everything, a team of specialist AIs divides and conquers a complex science problem like a well-run research lab.
It's like assembling an Ocean's Eleven crew for science—each member has one specialty, but together they pull off heists that no solo operator could dream of.
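The orchestration pattern itself is simple to sketch: decompose the problem into typed subtasks, route each to the specialist registered for that type, and merge the partial answers. Every name below is a hypothetical interface for illustration, not the company's architecture.

```python
def orchestrate(problem, decompose, specialists, combine):
    """Sketch of specialist-agent orchestration (hypothetical interface).

    decompose(problem) yields (kind, subtask) pairs; specialists maps
    each kind to an agent callable; combine merges the partial results.
    """
    results = []
    for kind, subtask in decompose(problem):
        agent = specialists[kind]     # e.g. {"chemistry": ..., "stats": ...}
        results.append(agent(subtask))
    return combine(results)


# Usage with toy agents standing in for real sub-models:
total = orchestrate(
    problem=None,
    decompose=lambda p: [("double", 3), ("square", 4)],
    specialists={"double": lambda x: 2 * x, "square": lambda x: x * x},
    combine=sum,
)
```

Because each specialist only needs competence on its own subtask type, the ensemble can cover problems for which no single model has enough training data.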
Brent Burdick is a self-taught engineer who left college in 2022 to focus on learning efficiency. The open-source, benchmark-topping ARC-AGI-2 solver demonstrates technical depth that most stealth-stage startups cannot match. Core belief: AI can help in data-sparse domains where each data point costs thousands of dollars.