Salus

Product & Competitive Intelligence

Real-time guardrails that validate AI agent actions before execution to prevent unsafe outputs.

Company Overview

Salus builds real-time, ML-driven guardrails that validate AI agent actions before execution, using hybrid LLM-based evaluators and evidence grounding to prevent unsafe or hallucinated outputs.

Competitive Advantage & Moat

Product Roadmap & Public Announcements

The product website describes four core capabilities: real-time action validation via an API wrapper, an evidence cache for grounding agent decisions, guided retries with structured feedback for blocked actions, and policy configuration in YAML, Markdown, or plain English. Salus reports a 52% reduction in misalignment on ODCV-Bench across 12 frontier models, with a 58% recovery rate on blocked actions. Integration is via pip install and a few lines of code.
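The integration pattern described above, wrapping an agent's tool calls so every action is validated before it runs and a block returns structured feedback, can be sketched generically. All names below (`Verdict`, `validate`, `guarded_call`) are illustrative inventions, not Salus's actual SDK API, which is not public.

```python
# Illustrative sketch of an in-the-loop validation wrapper. None of these
# names come from Salus's SDK; they only show the pattern: every action is
# checked against a policy before it executes, and a blocked action returns
# structured feedback instead of running.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""  # structured feedback the agent can act on

def validate(action: str, args: dict, policy: set[str]) -> Verdict:
    """Stand-in for a hybrid check: here, a simple allow-list rule."""
    if action not in policy:
        return Verdict(False, f"action '{action}' is not in the approved policy")
    return Verdict(True)

def guarded_call(action: str, fn: Callable[..., Any], policy: set[str], **args):
    verdict = validate(action, args, policy)
    if not verdict.allowed:
        # In-the-loop: the unsafe call never runs; feedback goes to the agent.
        return {"blocked": True, "feedback": verdict.reason}
    return {"blocked": False, "result": fn(**args)}

policy = {"search_docs", "summarize"}
print(guarded_call("send_email", lambda **kw: None, policy, to="ceo@example.com"))
```

The key design point this illustrates is that validation sits between decision and execution, rather than scanning outputs after the fact.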

Signals & Private Analysis

A minimal public hiring footprint suggests a lean, founder-only team in deep build mode. The focus on "evidence grounding" and "guided retries" signals a differentiated technical approach not yet replicated by competitors. Conference and community activity around agentic AI safety is growing, and Salus's positioning as an in-the-loop (rather than post-hoc) validator suggests it is building toward enterprise-grade compliance and audit tooling. Expanded framework integrations (LangChain, CrewAI, AutoGen) and multi-agent orchestration support are likely next.

Product Roadmap Priorities

Hybrid LLM action validation
Improving
Risk Reduction
Engineering

Real-time hybrid guardrail system that combines LLM-based semantic evaluators with deterministic rule-based checks to validate every AI agent action before execution, including evidence grounding and guided retry feedback loops.

In Plain English

It's like a spell-checker for AI decisions—catching dangerous or nonsensical agent actions before they happen and helping the agent fix itself.

Analogy

It's like having a co-pilot who grabs the steering wheel before you run a red light, then calmly tells you which turn to take instead.

Evidence-grounded observability
Improving
Decision Quality
Product

Full observability and traceability system that logs every agent decision, blocked action, and evidence reference in natural language, enabling product teams to audit, debug, and improve agent behavior at scale.

In Plain English

It gives product teams a clear, readable trail of every decision an AI agent made and why—like a flight recorder for robots.

Analogy

It's like giving your AI agent a diary that writes itself—except instead of teenage angst, it's a detailed account of every decision and why it didn't send that embarrassing email.
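A decision trail like the one this block describes can be as simple as append-only structured records, one per validated action, with a natural-language reason and the evidence consulted. The schema below is invented for illustration; Salus's actual log format is not public.

```python
# Hypothetical decision-trail record: one append-only entry per validated
# action, readable enough for a product team to audit later. The field
# names are assumptions, not Salus's schema.
import json
import time

def log_decision(trail: list, action: str, allowed: bool,
                 reason: str, evidence_ids: list[str]) -> None:
    trail.append({
        "ts": time.time(),
        "action": action,
        "allowed": allowed,
        "reason": reason,          # natural-language explanation
        "evidence": evidence_ids,  # which cached evidence was consulted
    })

trail: list = []
log_decision(trail, "refund order #511", True,
             "refund matches policy limit and cited invoice", ["ev-9"])
log_decision(trail, "delete customer record", False,
             "no retention-policy evidence found", [])
print(json.dumps(trail[-1], indent=2))
```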

Autonomous agent self-repair
Improving
Operational Efficiency
Operations

Autonomous self-repair system where blocked agent actions trigger structured feedback loops, enabling agents to retry and self-correct without human intervention—reducing operational escalations and maintaining workflow continuity.

In Plain English

When an AI agent makes a mistake, Salus tells it exactly what went wrong and lets it fix itself—like autocorrect for robot workers.

Analogy

It's like a GPS that doesn't just say "recalculating" when you miss a turn—it actually explains what went wrong and suggests a better route before you end up in a lake.
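The self-repair loop this block describes, where a block produces feedback, the agent revises, and the guardrail re-validates up to a retry budget, can be sketched as below. The validation rule and the `revise` step are toy stand-ins (a real agent would be an LLM conditioned on the feedback string), and every name is hypothetical rather than Salus's API.

```python
# Illustrative guided-retry loop: a blocked action's structured feedback is
# fed back to the agent, which revises and retries up to a fixed budget.
# Rules, names, and the revision step are all invented for illustration.

def validate(action: str) -> tuple[bool, str]:
    # Toy rule: outgoing text must not contain raw account numbers.
    if "ACCT-" in action:
        return False, "remove the raw account number before sending"
    return True, "ok"

def revise(action: str, feedback: str) -> str:
    """Stand-in for the agent self-correcting from feedback; a real agent
    would regenerate the action with the feedback in its context."""
    if "account number" in feedback:
        return action.replace("ACCT-99182", "[redacted]")
    return action

def run_with_guided_retries(action: str, max_retries: int = 2) -> dict:
    feedback = "ok"
    for attempt in range(max_retries + 1):
        ok, feedback = validate(action)
        if ok:
            return {"status": "executed", "action": action, "attempts": attempt + 1}
        action = revise(action, feedback)  # self-correct, no human in the loop
    return {"status": "escalated", "feedback": feedback}

print(run_with_guided_retries("send: your balance on ACCT-99182 is $40"))
```

The retry budget is what keeps this operationally safe: actions that cannot be repaired within the budget escalate instead of looping forever.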

Company Overview

Key Team Members

  • Kevin Pan, Co-Founder
  • Vedant Singh, Co-Founder

Kevin Pan and Vedant Singh were roommates at Stanford, where they both studied computer science. Kevin previously worked at WindBorne Systems. Vedant is an AI researcher. Their Stanford CS background and firsthand experience with agent failure modes give them deep technical credibility in building in-the-loop validation systems that combine LLM-based semantic evaluation with evidence caching and guided retries.

Funding History & Milestones

  • 2026 | Kevin Pan and Vedant Singh co-found Salus while at Stanford.
  • 2026 | Accepted into Y Combinator W26 batch.
  • 2026 | Product launched at usesalus.ai with pip-installable SDK.

Competitors

  • Guardrails & Validation: Guardrails AI (open-source output validation), NVIDIA NeMo Guardrails (dialogue safety rails).
  • LLM Security: Lakera (LLM firewall/prompt injection defense), Robust Intelligence (AI validation platform).
  • LLM Evaluation: Patronus AI (LLM evaluation & testing), CalypsoAI (model security & governance).
  • Open Source: LangChain safety modules, various agent safety frameworks.