Stops scams at the source by deploying AI agents that engage fraudsters and extract intelligence.
It combines adversarial dialogue agents that waste scammer resources, graph neural networks that map mule accounts, and NLP that detects new scam types within 48 hours.

Technology | Cyber Security | YC W26

Last Updated:
March 19, 2026

BeeSafe AI builds AI agents that engage with scammers at scale to map the fraud lifecycle, extract actionable intelligence on mule accounts and attacker infrastructure, and deliver real-time fraud signals to financial institutions, telecoms, crypto platforms, and law enforcement. It pioneered the first AI system for engaging sophisticated trust-based scammers.
Its intelligence has already enabled financial institutions and government agencies to intercept scammers mid-transaction. The AI agents are multiplatform, multimodal, and multilingual, capable of sustaining engagement for months. The company targets financial services, crypto, telecoms, social media companies, and law enforcement.
DOD Cyber Competition wins and published research provide strong credibility for government and enterprise sales. The $12B trust-based scam market is growing rapidly, fueled by AI-enabled techniques such as deepfakes, voice cloning, and LLM persuasion. Likely raising post-Demo Day with strong defense/fintech interest.
<p>Deploys autonomous conversational AI agents that engage scammers in real-time dialogue to waste their resources, extract intelligence, and map fraudulent operations before any victim is contacted.</p>
An AI chatbot pretends to be a gullible victim, strings scammers along in conversation, and secretly collects all their account details and tactics to shut them down.
BeeSafe AI's conversational AI agents use large language models fine-tuned on real-world scam dialogues to autonomously engage fraudsters across messaging, email, and voice channels. These agents simulate believable victim personas, maintain multi-turn conversations, and dynamically adapt their responses based on scam typology detection (romance, investment, tech support, etc.). During each interaction, NLP pipelines perform real-time entity extraction—capturing bank account numbers, crypto wallet addresses, phone numbers, URLs, and organizational details—while behavioral analysis models fingerprint the scammer's tactics and communication patterns. Every interaction feeds back into a continuously growing proprietary dataset, enabling the system to improve its engagement strategies and detect emerging scam scripts. The result is a dual-purpose system: it wastes scammers' time and resources (reducing their capacity to target real victims) while simultaneously generating high-fidelity intelligence that powers downstream detection and interdiction.
It's like hiring a thousand incredibly patient undercover agents who never sleep, never get frustrated, and secretly write down everything the con artist says—then hand the notes to the police before breakfast.
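To make the real-time entity-extraction step concrete, here is a minimal sketch of pulling intelligence artifacts (wallet addresses, phone numbers, URLs) out of a single scammer message. The regex patterns, message text, and function names are illustrative assumptions; the production pipeline is described as NLP-based and far richer than pattern matching.

```python
import re

# Illustrative patterns only: a stand-in for the real NLP entity extractor.
PATTERNS = {
    "btc_wallet": re.compile(r"\b(?:bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}\b"),
    "url": re.compile(r"https?://[^\s]+"),
    "phone": re.compile(r"\+\d{7,15}\b"),
}

def extract_entities(message: str) -> dict[str, list[str]]:
    """Pull candidate intelligence artifacts out of one scammer message."""
    return {name: pat.findall(message) for name, pat in PATTERNS.items()}

msg = ("Send 0.5 BTC to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq "
       "or call +14155550123, details at https://fake-invest.example")
print(extract_entities(msg))
```

Each extracted artifact would then be timestamped and linked to the engagement that produced it, feeding the graph described in the next section.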
<p>Uses graph neural networks and anomaly detection to identify mule accounts and map interconnected scam infrastructure across financial networks in real time.</p>
The AI connects the dots between suspicious bank accounts, phone numbers, and websites to build a map of the entire scam operation—like a detective's evidence board that builds itself.
BeeSafe AI's infrastructure detection engine ingests intelligence extracted from scammer engagements—account numbers, wallet addresses, domains, phone numbers, and communication metadata—and feeds it into a graph neural network (GNN) that models relationships between entities across the financial ecosystem. Anomaly detection algorithms identify accounts exhibiting behavioral signatures consistent with money mule activity (e.g., rapid fund pass-through, unusual transaction timing, connections to known scam nodes). Link analysis and community detection algorithms then map the broader scam infrastructure, revealing clusters of coordinated fraudulent accounts, shared hosting infrastructure, and overlapping campaign assets. The system continuously enriches its graph with new intelligence from every scammer interaction, creating a living map of fraud networks that grows more accurate over time. Real-time alerts are pushed to partner financial institutions and law enforcement, enabling pre-transaction account freezes and coordinated takedowns.
It's like Google Maps for crime networks—except every time a scammer talks to the AI, a new pin drops on the map and the routes between accomplices light up automatically.
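The graph-building idea above can be sketched in a few lines: entities extracted from engagements become nodes, co-occurrence becomes edges, and connected components reveal clusters of coordinated infrastructure. The sample records, entity names, and the pass-through heuristic are hypothetical; the production system uses a GNN over a much richer feature set rather than simple traversal.

```python
from collections import defaultdict

# Hypothetical intel records: (entity_a, entity_b) links observed in engagements.
links = [
    ("acct:111", "wallet:bc1qxyz"),
    ("acct:111", "phone:+4470000001"),
    ("acct:222", "phone:+4470000001"),   # a shared phone ties two accounts
    ("acct:333", "domain:fake-invest.example"),
]

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def component(start: str) -> set[str]:
    """All entities reachable from `start`: one cluster of scam infrastructure."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Toy mule signal: funds leave almost as fast as they arrive.
def passthrough_ratio(inflow: float, outflow: float) -> float:
    return outflow / inflow if inflow else 0.0

cluster = component("acct:111")
print(sorted(cluster))
```

Note how the shared phone number links `acct:222` into the same cluster as `acct:111`, while `acct:333` stays separate until new intelligence connects it.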
<p>Applies NLP-based classification models to incoming scam communications to automatically categorize scam type, detect novel scam scripts, and trigger early warnings for emerging fraud campaigns.</p>
The AI reads every scam message it encounters, instantly sorts it by type (romance, crypto, tech support, etc.), and raises an alarm the moment it spots a brand-new kind of scam no one has seen before.
BeeSafe AI's scam typology classification system uses transformer-based NLP models trained on a proprietary corpus of real scammer communications to automatically categorize each interaction by fraud type—romance scams, pig butchering (investment fraud), tech support scams, impersonation, phishing, and more. Beyond classification, the system employs unsupervised learning techniques (clustering, topic modeling, and embedding drift detection) to identify communications that don't fit existing categories, flagging them as potential novel scam variants. When a new cluster reaches a statistical significance threshold, the system generates an emerging threat advisory with extracted indicators of compromise (IoCs), sample scripts, and recommended detection rules. This intelligence is distributed to partner financial institutions and government agencies in near real-time, enabling proactive defense against scam campaigns that traditional rule-based systems would miss for weeks or months. The classification models are continuously retrained as the adversary-sourced dataset grows, ensuring the system adapts as scammers evolve their tactics.
It's like a librarian who not only instantly shelves every book by genre but also notices when someone sneaks in a book from a genre that doesn't exist yet—and immediately tells everyone to watch out.
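The classify-or-flag-as-novel logic can be sketched with a toy embedding: score an incoming message against centroids for known scam categories, and flag it as novel when nothing scores above a threshold. The bag-of-words embedding, exemplar texts, and the 0.2 threshold are illustrative assumptions standing in for transformer embeddings and statistically calibrated drift detection.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (stand-in for a transformer)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Centroids for known scam categories (here, a single exemplar each).
known = {
    "romance": embed("i love you darling send money for my flight"),
    "tech_support": embed("your computer has a virus call support now"),
}

def classify(message: str, threshold: float = 0.2):
    """Return (best_category, score), or ('novel', score) if nothing fits."""
    vec = embed(message)
    best, score = max(((c, cosine(vec, cent)) for c, cent in known.items()),
                      key=lambda t: t[1])
    return (best, score) if score >= threshold else ("novel", score)

print(classify("call support now your computer has a virus"))
print(classify("invest gold bars via courier pickup tonight"))
```

In the described system, a run of "novel" messages that cluster together would trip the significance threshold and trigger an emerging-threat advisory.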
Three PhD founders who literally wrote the paper introducing the first AI system for sophisticated trust-based scam engagement. Combined 20+ years in ML, NLP, and security research across CMU, UCSD, Google, Microsoft, and Censys. DOD Cyber Competition winners. No competitor has this combination of published scam-engagement research and production security experience.