How Is Confluence Labs Using AI?

Builds foundation models that learn from minimal data, achieving SOTA on ARC-AGI-2 at $11.77/task.

Using continual learning adaptation that retains knowledge across domains, test-time task adaptation for zero-shot transfer, and multi-agent orchestration for complex reasoning.

Company Overview

Builds foundation models optimized for learning efficiency, enabling AI systems to rapidly adapt to new tasks with minimal data, particularly in data-sparse scientific fields. Achieved state-of-the-art on the ARC-AGI-2 benchmark with an open-source solver (97.9% SOTA at $11.77/task).

Product Roadmap & Public Announcements

Open-sourced ARC-AGI-2 solver (97.9% SOTA at $11.77/task). Focus on experimental sciences (biology, materials science). Actively recruiting hardware engineers, biologists, and materials scientists.

Signals & Private Analysis

Deep investment in parameter-efficient continual fine-tuning, test-time training, and modular multi-agent orchestration. Hiring scientists signals pivot toward wet-lab or simulation-integrated AI.

Confluence Labs

Machine Learning Use Cases

Continual learning adaptation
For: Product Differentiation (Product)

Rapidly adapts foundation models to new scientific domains using minimal experimental data, accelerating discovery in biology and materials science.

Layman's Explanation

It's like giving a scientist an AI lab partner that reads one paper and already knows how to design the next experiment.

Use Case Details

Confluence Labs' continual learning system allows foundation models to be incrementally updated with small batches of new experimental data—such as assay results, spectroscopy readings, or materials characterization outputs—without catastrophic forgetting of previously learned knowledge. Using parameter-efficient continual fine-tuning (PECFT) techniques like LoRA adapters and selective parameter updating (PIECE), the model rapidly specializes to a new scientific subdomain while retaining broad reasoning capabilities. This dramatically reduces the data and compute requirements for domain adaptation, making state-of-the-art AI accessible to research labs that lack massive proprietary datasets. The result is a model that evolves alongside the researcher's experimental program, continuously improving its predictions and recommendations as new data arrives.
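The core PECFT mechanic can be sketched in miniature. The snippet below is illustrative only, not Confluence Labs' implementation: it shows the LoRA idea of freezing a pretrained weight matrix and training only a low-rank update, so a small batch of new-domain data moves very few parameters and the base weights are never overwritten. The class name, shapes, and hyperparameters are assumptions for the sketch.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight plus a trainable low-rank update:
    y = x @ (W + scale * B @ A).T -- only A and B are fine-tuned."""

    def __init__(self, base_weight, rank=2, alpha=2.0, seed=1):
        rng = np.random.default_rng(seed)
        self.W = base_weight                          # frozen pretrained weight (d_out, d_in)
        d_out, d_in = base_weight.shape
        self.A = rng.normal(0.0, 0.02, (rank, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, rank))              # trainable up-projection (init 0)
        self.scale = alpha / rank

    def forward(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def sgd_step(self, x, grad_out, lr=0.2):
        # Gradients touch only A and B; W is never updated, so the base
        # model's behavior is preserved while the adapter specializes.
        h = x @ self.A.T
        gB = self.scale * grad_out.T @ h
        gA = self.scale * (grad_out @ self.B).T @ x
        self.B -= lr * gB
        self.A -= lr * gA

# Tiny demo: the frozen base model is the identity map; a new "domain"
# (a small batch of toy data) requires doubling the input instead.
layer = LoRALinear(np.eye(2))
x = np.eye(2)                    # two toy training inputs, one per row
target = 2.0 * x                 # new-domain behavior to learn
for _ in range(500):
    err = layer.forward(x) - target
    layer.sgd_step(x, err)
final_loss = float(np.mean((layer.forward(x) - target) ** 2))
```

Note that the adapter adds only `rank * (d_in + d_out)` trainable parameters per domain, which is why a lab with a handful of assay results can still specialize a large frozen backbone.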

Analogy

It's like a new hire who shows up on day one already knowing 95% of the job and only needs to shadow you for an afternoon before they're running experiments independently.

Test-time task adaptation
For: Decision Quality (Engineering)

Applies test-time training and refinement loops so models can solve novel abstract reasoning tasks on the fly without retraining from scratch.

Layman's Explanation

The AI figures out the rules of a brand-new puzzle while it's solving it, instead of needing to study thousands of examples first.

Use Case Details

Confluence Labs' test-time adaptation system enables their foundation model to dynamically adjust its internal representations during inference when confronted with a novel, previously unseen task. Rather than relying solely on patterns memorized during training, the model performs lightweight parameter updates at inference time—using gradient-based refinement loops and task-specific adapter injection—to rapidly align its reasoning with the structure of the new problem. This approach was central to their state-of-the-art performance on the ARC-AGI-2 benchmark, where the model achieved 97.9% accuracy at just $11.77 per task. The technique is broadly applicable beyond benchmarks: in production, it means the model can handle edge cases, distribution shifts, and entirely new problem categories without expensive retraining cycles, making it ideal for dynamic, real-world scientific and engineering environments.

Analogy

It's like an athlete who's never played pickleball before but figures out the strategy mid-game by the third point and starts winning by the fifth.

Multi-agent orchestration
For: Operational Efficiency (Operations)

Orchestrates specialized AI sub-agents that collaborate to solve complex scientific problems where no single model has sufficient training data.

Layman's Explanation

Instead of one AI trying to do everything, a team of specialist AIs divide and conquer a complex science problem like a well-run research lab.

Use Case Details

Confluence Labs is developing a modular multi-agent orchestration framework where multiple specialized foundation model instances—each fine-tuned on a narrow subtask—collaborate to tackle complex, multi-step scientific workflows. For example, one agent may specialize in hypothesis generation from literature, another in experimental design optimization, and a third in statistical analysis of results. A central orchestrator routes tasks, manages context, and synthesizes outputs across agents. This architecture is particularly powerful in data-sparse domains because each agent only needs to master a narrow slice of the problem, dramatically reducing the per-agent data requirements. The modular design also enables rapid swapping, upgrading, or adding of agents as new scientific capabilities are needed, making the system highly extensible. Signals from their hiring patterns (hardware engineers, biologists, materials scientists) and technical approach strongly suggest this multi-agent framework is being designed to integrate directly with laboratory automation and simulation tools.
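The routing pattern described above can be sketched as follows. All names, agents, and the toy pipeline are assumptions for illustration, with simple functions standing in for fine-tuned model instances; the point is the architecture: a registry of narrow specialists, a shared context threaded between them, and swappable agents keyed by capability.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Agent:
    """A narrow specialist: one capability, one handler."""
    name: str
    capability: str
    run: Callable[[dict], dict]

class Orchestrator:
    """Hypothetical central router: sequences sub-tasks through specialist
    agents, threading a shared context dict between them."""

    def __init__(self):
        self.registry: Dict[str, Agent] = {}

    def register(self, agent: Agent):
        # Agents can be swapped or upgraded by re-registering a capability.
        self.registry[agent.capability] = agent

    def run_pipeline(self, steps: List[str], context: dict) -> dict:
        for capability in steps:
            agent = self.registry[capability]
            context.update(agent.run(context))  # each agent reads and extends context
        return context

# Toy specialists standing in for narrowly fine-tuned model instances.
hypothesizer = Agent("lit-miner", "hypothesize",
                     lambda ctx: {"hypothesis": f"dopant X raises conductivity of {ctx['material']}"})
designer = Agent("doe-planner", "design",
                 lambda ctx: {"experiment": f"test: {ctx['hypothesis']} at 3 dopant levels"})
analyst = Agent("stats", "analyze",
                lambda ctx: {"report": f"planned analysis for '{ctx['experiment']}'"})

orch = Orchestrator()
for a in (hypothesizer, designer, analyst):
    orch.register(a)

result = orch.run_pipeline(["hypothesize", "design", "analyze"],
                           {"material": "perovskite"})
print(result["report"])
```

Because each agent only sees and produces a narrow slice of the context, replacing the `doe-planner` with a lab-automation-aware version would not require touching the other agents, which is what makes this design extensible.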

Analogy

It's like assembling an Ocean's Eleven crew for science—each member has one specialty, but together they pull off heists that no solo operator could dream of.

Key Technical Team Members

  • Brent Burdick, Founder/CEO

Its open-source, benchmark-topping ARC-AGI-2 solver demonstrates technical depth most stealth-stage startups cannot match. Its models learn from orders of magnitude less data, which is critical in domains where each data point costs thousands of dollars.


Funding History

  • 2022: Brent Burdick leaves college to focus on learning efficiency
  • 2026 Feb: Confluence Labs launches via YC, open-sources ARC-AGI-2 solver
  • 2026: Recruiting interdisciplinary collaborators


Competitors

  • Foundation Models: Mistral, Cohere, AI21 Labs
  • Efficient Fine-Tuning: Together AI, Predibase, Lamini
  • Scientific AI: Recursion, Isomorphic Labs
  • Reasoning/AGI: Numenta, Liquid AI