Confluence Labs

Product & Competitive Intelligence

Builds foundation models that learn from minimal data, achieving SOTA on ARC-AGI-2 at $11.77/task.

Company Overview

Builds foundation models optimized for learning efficiency, enabling AI systems to rapidly adapt to new tasks with minimal data, particularly in data-sparse scientific fields. Achieved state of the art on the ARC-AGI-2 benchmark with an open-source solver (97.9% at $11.77/task).

Competitive Advantage & Moat

Product Roadmap & Public Announcements

Open-sourced an ARC-AGI-2 solver (97.9%, state of the art, at $11.77/task) on GitHub. The approach uses LLMs to write code describing transformations, with the problem structured to resemble the models' training data, which enables long-horizon work. Focus is on data-sparse domains: hardware engineering, drug design, and physics research, approached from two angles: automating hypothesis generation and automating experiment design.
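The public description of the solver (LLMs write code describing transformations, which can then be checked against a task's examples) suggests a propose-and-check loop. The sketch below is a minimal illustration under that assumption; `propose_candidates` is a hypothetical stand-in for the LLM and is not Confluence Labs' actual code.

```python
# Hypothetical propose-and-check loop in the spirit of the solver's public
# description: an LLM proposes candidate transformation programs, and only a
# program consistent with every training pair is applied to the test input.
from typing import Callable, List, Optional

Grid = List[List[int]]

def propose_candidates() -> List[Callable[[Grid], Grid]]:
    """Stand-in for an LLM writing candidate transformation programs."""
    return [
        lambda g: [row[::-1] for row in g],             # mirror each row
        lambda g: [list(row) for row in zip(*g)],       # transpose the grid
        lambda g: [[v + 1 for v in row] for row in g],  # shift every color
    ]

def solve(train_pairs, test_input: Grid) -> Optional[Grid]:
    """Return the output of the first program consistent with all training pairs."""
    for program in propose_candidates():
        if all(program(x) == y for x, y in train_pairs):
            return program(test_input)
    return None

# One training pair implies "mirror each row"; the solver applies that rule.
train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(solve(train, [[5, 6]]))  # → [[6, 5]]
```

The verification step is what makes the loop cheap and reliable: candidate programs that fail any training pair are discarded without ever touching the test input.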

Signals & Private Analysis

Deep investment in parameter-efficient continual fine-tuning, test-time training, and modular multi-agent orchestration. Hiring scientists signals a pivot toward wet-lab or simulation-integrated AI.
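"Parameter-efficient continual fine-tuning" commonly refers to adapter techniques in the LoRA style: freeze the large pretrained weights and train only a small low-rank update per task. The sketch below illustrates that general technique, not Confluence Labs' internals.

```python
# Hedged sketch of parameter-efficient adaptation (LoRA-style, assumed):
# the base weight matrix stays frozen; only the low-rank factors A and B
# are trained, so each new task touches a tiny fraction of parameters.
import random

d, r = 64, 2  # model width and low-rank bottleneck
random.seed(0)
W_base = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]  # frozen
A = [[0.0] * d for _ in range(r)]                # trainable, zero-initialized
B = [[random.gauss(0, 0.01) for _ in range(r)] for _ in range(d)]    # trainable

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def forward(x):
    # Base path plus adapter path; only A and B change during adaptation,
    # and with A at zero the adapter starts as an exact no-op.
    base = matvec(W_base, x)
    delta = matvec(B, matvec(A, x))
    return [b + dl for b, dl in zip(base, delta)]

trainable = r * d + d * r
total = d * d + trainable
print(f"trainable fraction: {trainable / total:.1%}")  # → trainable fraction: 5.9%
```

Because only `A` and `B` are stored per task, many domain adaptations can be kept and swapped cheaply, which is what makes the approach attractive for continual learning across scientific domains.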

Product Roadmap Priorities

Continual learning adaptation
Improving | Product Differentiation | Product

Rapidly adapts foundation models to new scientific domains using minimal experimental data, accelerating discovery in biology and materials science.

In Plain English

It's like giving a scientist an AI lab partner that reads one paper and already knows how to design the next experiment.

Analogy

It's like a new hire who shows up on day one already knowing 95% of the job and only needs to shadow you for an afternoon before they're running experiments independently.

Test-time task adaptation
Improving | Decision Quality | Engineering

Applies test-time training and refinement loops so models can solve novel abstract reasoning tasks on the fly without retraining from scratch.

In Plain English

The AI figures out the rules of a brand-new puzzle while it's solving it, instead of needing to study thousands of examples first.

Analogy

It's like an athlete who's never played pickleball before but figures out the strategy mid-game by the third point and starts winning by the fifth.

Multi-agent orchestration
Improving | Operational Efficiency | Operations

Orchestrates specialized AI sub-agents that collaborate to solve complex scientific problems where no single model has sufficient training data.

In Plain English

Instead of one AI trying to do everything, a team of specialist AIs divides and conquers a complex science problem like a well-run research lab.

Analogy

It's like assembling an Ocean's Eleven crew for science—each member has one specialty, but together they pull off heists that no solo operator could dream of.
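The orchestration pattern described in this card can be sketched as a router that dispatches subtasks to specialist agents and collects their partial answers. The agent names and plan steps below are hypothetical illustrations, not the company's architecture; real agents would wrap model calls.

```python
# Hypothetical orchestrator: each specialist handles one kind of subtask,
# and the orchestrator runs a plan's steps through the matching specialists.
from typing import Callable, Dict, List

def literature_agent(task: str) -> str:
    return f"relevant papers for '{task}'"

def hypothesis_agent(task: str) -> str:
    return f"candidate hypotheses for '{task}'"

def experiment_agent(task: str) -> str:
    return f"experiment design for '{task}'"

AGENTS: Dict[str, Callable[[str], str]] = {
    "search": literature_agent,
    "hypothesize": hypothesis_agent,
    "design": experiment_agent,
}

def orchestrate(task: str, plan: List[str]) -> List[str]:
    """Route each step of the plan to its specialist and collect the results."""
    return [AGENTS[step](task) for step in plan]

for step_result in orchestrate("alloy corrosion", ["search", "hypothesize", "design"]):
    print(step_result)
```

Keeping agents modular means each specialist can be improved or swapped independently, which matters in domains where no single model has enough training data to do everything.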

Company Overview

Key Team Members

  • Brent Burdick, Co-Founder
  • Niranjan Baskaran, Co-Founder

Brent Burdick is a self-taught engineer who left college in 2022 to focus on learning efficiency. The open-source, benchmark-topping ARC-AGI-2 solver demonstrates a technical depth most stealth-stage startups cannot match. Core belief: AI can help in data-sparse domains where each data point costs thousands of dollars.

Funding & Milestones

  • 2025 | Brent Burdick and Niranjan Baskaran co-found Confluence Labs.
  • 2026 | Accepted into Y Combinator W26 batch.
  • 2026 | Open-sources ARC-AGI-2 solver (97.9%, state of the art, at $11.77/task).
  • 2026 | Recruiting interdisciplinary collaborators (hardware engineers, biologists, materials scientists).

Competitors

  • Foundation Models: Mistral, Cohere, AI21 Labs.
  • Efficient Fine-Tuning: Together AI, Predibase, Lamini.
  • Scientific AI: Recursion, Isomorphic Labs.
  • Reasoning/AGI: Numenta, Liquid AI, Ndea (YC W26).