How Is Agentic Fabriq Using AI?

Agentic Fabriq gives enterprises a single control plane to permission, monitor, and audit every AI agent, using ML-powered behavioral anomaly detection, adaptive least-privilege policy learning from usage patterns, and predictive risk scoring for new agent integrations.

Company Overview

Agentic Fabriq builds the identity, governance, and visibility layer for AI agents. The platform sits in the middle of every agent's calls, managing both agent and user identity, handling token exchange, enforcing least-privilege access, and logging everything for audit and compliance. It positions itself as 'Okta for Agents.'
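The control-plane flow described above (check an agent's grants on every call, enforce least privilege, and log the decision for audit) can be sketched in a few lines of Python. This is an illustrative toy, not the actual Agentic Fabriq SDK: the `AgentGateway` class, its grant model, and all identifiers here are invented for explanation.

```python
import datetime
from dataclasses import dataclass, field


@dataclass
class AgentGateway:
    """Toy control plane: checks an agent's grants before each tool
    call and records every decision in an append-only audit log."""
    grants: dict                       # agent_id -> set of allowed (tool, action)
    audit_log: list = field(default_factory=list)

    def call(self, agent_id: str, user_id: str, tool: str, action: str) -> str:
        allowed = (tool, action) in self.grants.get(agent_id, set())
        # Log before enforcing, so denied attempts are also auditable.
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id, "user": user_id,
            "tool": tool, "action": action, "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent_id} may not {action} on {tool}")
        return f"{tool}:{action} executed for {agent_id} (on behalf of {user_id})"


gateway = AgentGateway(grants={"billing-bot": {("stripe", "read_invoices")}})
print(gateway.call("billing-bot", "alice", "stripe", "read_invoices"))
```

Note the design choice of logging the attempt before the permission check raises: the audit trail must capture denied calls too, since those are often the interesting events for compliance review.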

Product Roadmap & Public Announcements

Launched TypeScript and Python SDKs, a centralized agent registry with per-agent/per-user permissioning, OAuth2/JWT token management, and real-time audit logging. A demo is available via OpenWebUI integration, VentureBeat coverage confirms the product positioning, and the website includes a pricing page and feature descriptions.

Signals & Private Analysis

The founders' MIT AI/ML research backgrounds hint at forthcoming ML-driven anomaly detection and adaptive policy enforcement. The 'Okta for Agents' positioning signals intent to become the default identity layer for agentic AI, likely targeting regulated industries for hybrid human-agent governance.


Machine Learning Use Cases

Agent Behavioral Anomaly Detection
For: Risk Reduction (IT-Security)

ML-powered anomaly detection that monitors every AI agent action in real time, flagging and blocking suspicious or out-of-policy behavior before damage occurs.

Layman's Explanation

It's like a security camera that watches every AI agent in your company and sounds the alarm the instant one starts doing something it shouldn't.

Use Case Details

Agentic Fabriq's platform ingests a continuous stream of structured audit logs from every registered AI agent—capturing tool calls, data access events, permission escalations, and inter-agent communications. A lightweight ML pipeline built on top of this telemetry uses unsupervised learning (e.g., isolation forests, autoencoders) to establish behavioral baselines for each agent identity and detect deviations in real time. When an agent suddenly requests access to a data source it has never touched, attempts a bulk export outside business hours, or chains tool calls in an unusual sequence, the system triggers graduated responses: alerting security teams, throttling the agent, or revoking its tokens entirely. This transforms agent governance from static, rule-based permissioning into a dynamic, adaptive security posture that evolves as agent behaviors and enterprise environments change. For regulated industries, this also produces audit-ready evidence trails that map every anomaly to the specific agent identity, user delegation chain, and policy context.
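The baseline-then-deviate idea above can be illustrated with a deliberately simple stand-in: instead of the isolation forests or autoencoders the platform reportedly uses, this sketch learns per-agent sets of touched resources and active hours, scores deviations, and maps scores onto a graduated response ladder. All class names, weights, and thresholds here are invented for illustration.

```python
from collections import defaultdict


class BehaviorBaseline:
    """Learns, per agent, which resources it normally touches and at
    what hours; scores new events by how far they deviate.  A simple
    stand-in for unsupervised models like isolation forests."""

    def __init__(self):
        self.resources = defaultdict(set)   # agent -> resources seen
        self.hours = defaultdict(set)       # agent -> hours-of-day seen

    def observe(self, agent, resource, hour):
        self.resources[agent].add(resource)
        self.hours[agent].add(hour)

    def score(self, agent, resource, hour):
        """0.0 = in-baseline, 1.0 = maximally anomalous."""
        s = 0.0
        if resource not in self.resources[agent]:
            s += 0.6                        # never-before-seen data source
        if hour not in self.hours[agent]:
            s += 0.4                        # activity outside usual hours
        return s


def graduated_response(score):
    """Map an anomaly score onto an escalation ladder."""
    if score >= 0.9:
        return "revoke_tokens"
    if score >= 0.5:
        return "throttle_and_alert"
    return "allow"


bl = BehaviorBaseline()
for hour in (9, 10, 11):                    # training window: business hours
    bl.observe("crm-agent", "salesforce.contacts", hour)

print(graduated_response(bl.score("crm-agent", "salesforce.contacts", 10)))  # allow
print(graduated_response(bl.score("crm-agent", "hr.payroll", 3)))            # revoke_tokens
```

A production system would replace the hand-set weights with learned density estimates, but the structure (continuous telemetry in, per-identity baseline, graduated enforcement out) is the same.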

Analogy

It's like giving every AI agent in your company a parole officer who's read every rule book and never sleeps.

Adaptive Access Policy Learning
For: Operational Efficiency (Engineering)

ML-assisted automatic generation and continuous refinement of least-privilege access policies for AI agents based on observed usage patterns.

Layman's Explanation

It figures out exactly what permissions each AI agent actually needs by watching what it does, then locks everything else down automatically.

Use Case Details

When enterprises deploy dozens or hundreds of AI agents, manually defining granular access policies for each one is impractical and error-prone—teams either over-permission (creating security risk) or under-permission (breaking workflows). Agentic Fabriq addresses this by collecting fine-grained usage telemetry from every agent interaction and applying supervised and semi-supervised ML models to learn the minimal set of permissions each agent truly requires. The system clusters agents by role and behavior, recommends tightly scoped policies, and surfaces unused or redundant permissions for review. Over time, the model continuously refines policies as agent tasks evolve, automatically proposing policy updates when new tool integrations are added or workflows change. Security teams get a human-in-the-loop approval interface that lets them accept, modify, or reject recommendations—ensuring governance without bottlenecking agent deployment velocity. The result is a self-tuning permission fabric that keeps every agent at the precise privilege level it needs and nothing more.
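The core of policy mining can be sketched with a plain usage counter standing in for the supervised and semi-supervised models described above: compare what was granted against what was actually exercised, keep the minimal set, and surface unused grants for human review. The function name and data shapes are hypothetical.

```python
from collections import Counter


def recommend_policy(granted, usage_log, min_uses=1):
    """Given an agent's granted permissions and its observed usage,
    propose the minimal scope set and flag unused grants for review.
    A transparent stand-in for learned policy recommendation."""
    used = Counter(usage_log)
    keep = {p for p in granted if used[p] >= min_uses}
    return {
        "recommended": keep,               # permissions the agent actually exercises
        "revoke_candidates": set(granted) - keep,  # granted but never used
    }


granted = {"sheets:read", "sheets:write", "drive:delete", "mail:send"}
usage = ["sheets:read", "sheets:read", "mail:send", "sheets:write"]
proposal = recommend_policy(granted, usage)
# "drive:delete" was granted but never exercised, so it is surfaced
# as a revocation candidate for the human-in-the-loop approval step.
```

Raising `min_uses` (or windowing the log by time) is the knob a real system would tune so that rarely used but legitimate permissions are not revoked prematurely.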

Analogy

It's like a smart thermostat for permissions—it learns exactly how much access each agent needs and automatically dials everything else down.

Integration Risk Prediction
For: Decision Quality (Strategy)

ML-powered risk scoring that predicts the security and compliance impact of connecting a new AI agent to an enterprise tool before the integration goes live.

Layman's Explanation

Before you plug a new AI agent into Salesforce or Slack, it tells you exactly how risky that connection is and what could go wrong.

Use Case Details

As enterprises scale their AI agent ecosystems, each new agent-tool integration introduces potential attack surface, data leakage vectors, and compliance exposure. Agentic Fabriq's predictive risk scoring engine uses a gradient-boosted classification model trained on historical integration metadata, permission scopes, data sensitivity labels, agent behavioral profiles, and known vulnerability patterns across its customer base. Before an integration is activated, the platform generates a composite risk score with explainable contributing factors—such as overly broad OAuth scopes, access to PII-containing systems, lack of audit logging on the target tool, or behavioral similarity to previously flagged agents. Security and platform teams can use these scores to enforce approval gates, require additional guardrails, or block high-risk integrations outright. The model improves over time as it ingests outcomes from across the Agentic Fabriq customer network, creating a shared intelligence layer that benefits every organization on the platform. This shifts agent governance from reactive incident response to proactive risk management.
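The scoring-with-explanations pattern can be shown with a transparent weighted-factor model in place of the gradient-boosted classifier the text describes. The risk factors, weights, and threshold below are hypothetical; the point is the shape of the output: a composite score plus the ranked factors that drove it.

```python
# Hypothetical factors and weights; a production model would learn these
# from historical integration outcomes rather than hard-code them.
RISK_WEIGHTS = {
    "broad_oauth_scopes": 0.30,
    "touches_pii": 0.25,
    "similar_to_flagged_agent": 0.25,
    "no_target_audit_log": 0.20,
}


def score_integration(features):
    """Return (risk score in [0, 1], contributing factors ranked by weight)."""
    contributions = {f: w for f, w in RISK_WEIGHTS.items() if features.get(f)}
    score = round(sum(contributions.values()), 2)
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    return score, ranked


def approval_gate(score, threshold=0.5):
    """Enforce a gate before the integration goes live."""
    return "block_pending_review" if score >= threshold else "approve"


score, factors = score_integration(
    {"broad_oauth_scopes": True, "touches_pii": True, "no_target_audit_log": False}
)
# score == 0.55; factors == ["broad_oauth_scopes", "touches_pii"]
print(approval_gate(score))  # block_pending_review
```

Returning the ranked factor list alongside the score is what makes the gate actionable: the security team sees not just "risky" but which guardrail (narrower OAuth scopes, audit logging on the target tool) would bring the score down.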

Analogy

It's like a credit score for AI agent integrations—before you approve the connection, you already know if it's trustworthy or trouble.

Key Technical Team Members

  • Paulina Xu, Co-Founder & CEO
  • Matthew Xu, Co-Founder & CTO

Both founders are MIT-trained AI/ML researchers who dropped out to build agent-native infrastructure from first principles. They understand how autonomous agents reason, authenticate, and fail with a depth that incumbents retrofitting human IAM systems lack. VentureBeat coverage validates the market positioning.


Funding History

  • 2025: Paulina Xu and Matthew Xu found Agentic Fabriq
  • 2026: Y Combinator W26 batch (~$500K)
  • 2026: Product live with TypeScript/Python SDKs and demo


Competitors

  • Enterprise IAM: Okta/Auth0, Microsoft Entra ID
  • Agent Auth Startups: Composio, Arcade, Nango
  • Enterprise Integration: Merge, WorkOS
  • Broader Agentic Infra: LangChain, CrewAI, AutoGen
  • Emerging: Permit.io, Indent, various AI firewall startups