
Technology | AI Safety & Reliability | YC W26 | Valuation: Undisclosed

Last Updated: March 24, 2026

Builds an autonomous-AI safety platform with red-teaming frameworks, guardrails, grounding modules, and uncertainty quantification to evaluate, monitor, and certify AI agents for safe deployment in high-stakes environments.
Custom evaluation infrastructure that learns from production runs. Adaptive scaffolding that can be operationalized across different domains. Already deployed on production agents in legal reasoning and customer support. Generates training signal from a customer's own operational data for continuous improvement.
The guardrails market is projected to grow from $0.7B in 2024 to $109B by 2034. No platform yet owns sandboxing, monitoring, and guardrails across the full agent lifecycle. Co-founder Adam AlSayyad also brings VC experience from HOF Capital. An end-to-end focus could let Cascade define the stack.
Automated cross-layer red teaming that uses LLMs to simulate and compose multi-stage adversarial attack chains across model, software, and hardware layers of compound AI systems.
It's like hiring a team of elite hackers that never sleep, automatically finding every way your AI system could be tricked or broken before a real attacker does.
It's like a chess grandmaster who doesn't just look at one piece on the board but sees the entire sequence of moves that leads to checkmate—except the board is your AI system and the grandmaster is trying to break it.
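A minimal sketch of how cross-layer attack-chain composition could work, assuming an LLM proposes candidate attacks per layer and composed chains are replayed against the target system. The llm callable, the layer names, and the target interface are our assumptions for illustration; Cascade's actual framework is not public.

```python
# Hypothetical sketch: a red-team LLM proposes attacks for each layer of a
# compound AI system; chains are composed across layers and replayed against
# the target. `llm` and `target` are assumed interfaces, not a real API.
import itertools
from typing import Callable

LAYERS = ["model", "software", "hardware"]

def propose_attacks(llm: Callable[[str], str], layer: str, k: int = 3) -> list[str]:
    """Ask the red-team LLM for k candidate attacks against one layer."""
    prompt = (
        f"List {k} distinct adversarial techniques targeting the {layer} "
        "layer of a compound AI system, one per line."
    )
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()][:k]

def find_breaking_chains(
    llm: Callable[[str], str],
    target: Callable[[tuple[str, ...]], bool],  # True if the chain succeeds
) -> list[tuple[str, ...]]:
    """Compose one attack step per layer and keep the chains the target fails on."""
    attacks = {layer: propose_attacks(llm, layer) for layer in LAYERS}
    chains = itertools.product(*(attacks[layer] for layer in LAYERS))
    return [chain for chain in chains if target(chain)]
```

In a real system the hard part is the search policy over which chains to try next, but even exhaustive composition over a few candidates per layer illustrates the multi-stage idea.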
Mathematically guaranteed uncertainty quantification using conformal prediction and variational temperature scaling to provide calibrated confidence scores and predictive sets for autonomous AI agent outputs.
It gives every AI answer a trustworthy confidence score backed by math, so you know exactly when to trust the robot and when to double-check.
It's like a weather forecast that doesn't just say "it'll be sunny" but tells you "there's a 95% chance the temperature will be between 72°F and 78°F"—except for every decision your AI makes.
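The coverage guarantee referenced here is the standard property of split conformal prediction, which is simple enough to sketch. The code below is a minimal illustration on synthetic data, not Cascade's implementation; the function names and the 3-class example are ours.

```python
# Minimal split conformal prediction sketch for a classifier. Calibration
# data yields a nonconformity threshold; at test time, the predictive set
# contains every label scoring under that threshold.
import numpy as np

def conformal_quantile(probs_cal: np.ndarray, labels_cal: np.ndarray,
                       alpha: float = 0.05) -> float:
    """Calibrate the nonconformity threshold for (1 - alpha) coverage."""
    n = len(labels_cal)
    # Nonconformity score: one minus the probability assigned to the true label.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample corrected quantile level.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(level, 1.0), method="higher")

def predictive_set(probs: np.ndarray, qhat: float) -> np.ndarray:
    """All labels whose nonconformity score falls under the threshold."""
    return np.where(1.0 - probs <= qhat)[0]

# Example: a 3-class agent decision with calibrated 95% coverage.
rng = np.random.default_rng(0)
probs_cal = rng.dirichlet(np.ones(3), size=500)
labels_cal = rng.integers(0, 3, size=500)
qhat = conformal_quantile(probs_cal, labels_cal, alpha=0.05)
print(predictive_set(np.array([0.70, 0.25, 0.05]), qhat))
```

The guarantee is distribution-free: as long as calibration and test data are exchangeable, the returned set contains the true label with probability at least 1 - alpha, no matter how miscalibrated the underlying model is.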
Real-time guardrail and grounding system that filters unsafe outputs, rewrites adversarial prompts, and cross-references AI-generated claims against verified knowledge sources to prevent hallucinations and data poisoning in production AI agents.
It's a real-time fact-checker and bouncer for your AI, catching made-up answers and blocking malicious inputs before they ever reach the user.
It's like having a combination spell-checker, fact-checker, and nightclub bouncer standing between your AI and the outside world—nothing gets through unless it's accurate, safe, and on the guest list.
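A toy version of that pipeline, to make the moving parts concrete: input screening, adversarial-prompt rewriting, and claim grounding against a knowledge store. The blocklist pattern, the in-memory knowledge store, and every function name are stand-ins we chose for illustration, not Cascade's code.

```python
# Illustrative guardrail pipeline: screen and rewrite risky inputs, then
# cross-reference the model's answer against a trusted source before it
# reaches the user. All components here are simplified stand-ins.
import re
from typing import Callable

BLOCKLIST = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)
KNOWLEDGE = {"capital of france": "Paris"}  # stand-in for a verified source

def screen_input(prompt: str) -> str:
    """Rewrite prompts that match known injection patterns."""
    if BLOCKLIST.search(prompt):
        return "[REDACTED: possible prompt injection]"
    return prompt

def ground_claim(claim_key: str, model_answer: str) -> bool:
    """Cross-reference a generated claim against the knowledge store."""
    expected = KNOWLEDGE.get(claim_key.lower())
    return expected is not None and expected.lower() in model_answer.lower()

def guarded_respond(model: Callable[[str], str], prompt: str, claim_key: str) -> str:
    """Only release answers that survive both the input and output checks."""
    answer = model(screen_input(prompt))
    if not ground_claim(claim_key, answer):
        return "Withheld: answer could not be verified against trusted sources."
    return answer
```

Production systems would swap the regex for a learned classifier and the dictionary for retrieval over curated sources, but the shape of the pipeline is the same: nothing reaches the user without passing both gates.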
Adam AlSayyad (UC Berkeley CS) researched graph reasoning models and agentic safety at BAIR Lab under leading ML and AI safety researchers. Haluk Cem Demirhan (UC Berkeley CS & Mathematics) built production monitoring infrastructure and scaled agent systems at Netflix and Amazon; his BAIR Lab research covered long-horizon memory optimization and failure-mode taxonomies for AI agents. The team combines BAIR Lab AI safety research with real-world production deployment experience.