How Is BeeSafe AI Using AI?

BeeSafe AI stops scams at the source by deploying AI agents that engage fraudsters and extract intelligence.

It does this with adversarial dialogue agents that waste scammer resources, graph neural networks that map mule accounts, and NLP that detects new scam types within 48 hours.

Company Overview

BeeSafe AI builds AI agents that engage with scammers at scale to map the fraud lifecycle, extract actionable intelligence on mule accounts and attacker infrastructure, and deliver real-time fraud signals to financial institutions, telecoms, crypto platforms, and law enforcement. The company pioneered the first AI system for engaging sophisticated trust-based scammers.

Product Roadmap & Public Announcements

Its intelligence has already enabled financial services and government agencies to intercept scammers mid-transaction. The AI agents are multiplatform, multimodal, and multilingual, capable of sustaining engagement for months. BeeSafe AI targets financial services, crypto, telecoms, social media companies, and law enforcement.

Signals & Private Analysis

DOD Cyber Competition winners with published research provide strong credibility for government and enterprise sales. The $12B trust-based scam market is growing rapidly with AI-enabled scams (deepfakes, voice cloning, LLM persuasion). Likely raising post-Demo Day with strong defense/fintech interest.


Machine Learning Use Cases

Multi-turn adversarial dialogue
For: Product Differentiation (Product)

Deploys autonomous conversational AI agents that engage scammers in real-time dialogue to waste their resources, extract intelligence, and map fraudulent operations before any victim is contacted.

Layman's Explanation

An AI chatbot pretends to be a gullible victim, strings scammers along in conversation, and secretly collects all their account details and tactics to shut them down.

Use Case Details

BeeSafe AI's conversational AI agents use large language models fine-tuned on real-world scam dialogues to autonomously engage fraudsters across messaging, email, and voice channels. These agents simulate believable victim personas, maintain multi-turn conversations, and dynamically adapt their responses based on scam typology detection (romance, investment, tech support, etc.). During each interaction, NLP pipelines perform real-time entity extraction—capturing bank account numbers, crypto wallet addresses, phone numbers, URLs, and organizational details—while behavioral analysis models fingerprint the scammer's tactics and communication patterns. Every interaction feeds back into a continuously growing proprietary dataset, enabling the system to improve its engagement strategies and detect emerging scam scripts. The result is a dual-purpose system: it wastes scammers' time and resources (reducing their capacity to target real victims) while simultaneously generating high-fidelity intelligence that powers downstream detection and interdiction.
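The real-time entity-extraction step described above can be sketched in miniature. This is a minimal illustration, not BeeSafe AI's actual pipeline: the regex patterns, the sample message, and the `extract_entities` function are hypothetical stand-ins for the production NLP models.

```python
import re

# Hypothetical indicator patterns; stand-ins for learned entity extractors.
PATTERNS = {
    "crypto_wallet": re.compile(r"\b(?:bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}\b"),
    "url": re.compile(r"https?://\S+"),
    "phone": re.compile(r"\+?\d[\d\- ]{8,14}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def extract_entities(message):
    """Scan one scammer message and collect any indicators found."""
    found = {}
    for name, pattern in PATTERNS.items():
        hits = pattern.findall(message)
        if hits:
            found[name] = hits
    return found

# Invented example message for illustration only.
msg = ("Send the release fee to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq, "
       "then confirm at https://refund-portal.example.com or call +1 415 555 0100.")
print(extract_entities(msg))
```

In the described system, extracted indicators like these would be appended to the proprietary dataset and fed downstream to the graph-based detection engine.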

Analogy

It's like hiring a thousand incredibly patient undercover agents who never sleep, never get frustrated, and secretly write down everything the con artist says—then hand the notes to the police before breakfast.

Graph-based fraud detection
For: Risk Reduction (IT-Security)

Uses graph neural networks and anomaly detection to identify mule accounts and map interconnected scam infrastructure across financial networks in real time.

Layman's Explanation

The AI connects the dots between suspicious bank accounts, phone numbers, and websites to build a map of the entire scam operation—like a detective's evidence board that builds itself.

Use Case Details

BeeSafe AI's infrastructure detection engine ingests intelligence extracted from scammer engagements—account numbers, wallet addresses, domains, phone numbers, and communication metadata—and feeds it into a graph neural network (GNN) that models relationships between entities across the financial ecosystem. Anomaly detection algorithms identify accounts exhibiting behavioral signatures consistent with money mule activity (e.g., rapid fund pass-through, unusual transaction timing, connections to known scam nodes). Link analysis and community detection algorithms then map the broader scam infrastructure, revealing clusters of coordinated fraudulent accounts, shared hosting infrastructure, and overlapping campaign assets. The system continuously enriches its graph with new intelligence from every scammer interaction, creating a living map of fraud networks that grows more accurate over time. Real-time alerts are pushed to partner financial institutions and law enforcement, enabling pre-transaction account freezes and coordinated takedowns.
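The graph-building and mule-flagging logic above can be sketched as follows. The edges, entity labels, and the `min_links` rule are invented for illustration; in the described system a trained GNN produces a learned anomaly score rather than this hand-written one, but the shape of the computation is the same: extracted entities become nodes, observed links become edges, and accounts connected to known scam infrastructure get flagged.

```python
from collections import defaultdict

# Hypothetical edges observed across scammer engagements: each pair means
# intelligence linked the two entities (e.g. a wallet address that appeared
# in messages from a given phone number).
edges = [
    ("wallet:bc1qexample", "phone:+44700900123"),
    ("wallet:bc1qexample", "acct:DE02120300000000202051"),
    ("acct:DE02120300000000202051", "domain:refund-portal.example.com"),
    ("acct:GB33BUKB20201555555555", "domain:refund-portal.example.com"),
    ("acct:GB33BUKB20201555555555", "phone:+44700900123"),
]

# Build an undirected adjacency map.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def flag_mule_candidates(graph, known_bad, min_links=2):
    """Flag accounts linked to >= min_links known-bad entities:
    a crude stand-in for the GNN's learned anomaly score."""
    flags = []
    for node, neighbors in graph.items():
        if node.startswith("acct:") and len(neighbors & known_bad) >= min_links:
            flags.append(node)
    return sorted(flags)

known_bad = {"phone:+44700900123", "domain:refund-portal.example.com"}
print(flag_mule_candidates(graph, known_bad))
```

Each new engagement would add edges to this graph, which is what makes the "living map" grow more accurate over time.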

Analogy

It's like Google Maps for crime networks—except every time a scammer talks to the AI, a new pin drops on the map and the routes between accomplices light up automatically.

Scam typology NLP classification
For: Decision Quality (Data)

Applies NLP-based classification models to incoming scam communications to automatically categorize scam type, detect novel scam scripts, and trigger early warnings for emerging fraud campaigns.

Layman's Explanation

The AI reads every scam message it encounters, instantly sorts it by type (romance, crypto, tech support, etc.), and raises an alarm the moment it spots a brand-new kind of scam no one has seen before.

Use Case Details

BeeSafe AI's scam typology classification system uses transformer-based NLP models trained on a proprietary corpus of real scammer communications to automatically categorize each interaction by fraud type—romance scams, pig butchering (investment fraud), tech support scams, impersonation, phishing, and more. Beyond classification, the system employs unsupervised learning techniques (clustering, topic modeling, and embedding drift detection) to identify communications that don't fit existing categories, flagging them as potential novel scam variants. When a new cluster reaches a statistical significance threshold, the system generates an emerging threat advisory with extracted indicators of compromise (IoCs), sample scripts, and recommended detection rules. This intelligence is distributed to partner financial institutions and government agencies in near real-time, enabling proactive defense against scam campaigns that traditional rule-based systems would miss for weeks or months. The classification models are continuously retrained as the adversary-sourced dataset grows, ensuring the system adapts as scammers evolve their tactics.
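The classify-or-flag-as-novel decision above can be sketched with a toy nearest-centroid model. The corpus, labels, and threshold below are invented for illustration; the described system uses transformer embeddings and clustering rather than word counts, but the decision rule is analogous: assign the closest known typology unless nothing is similar enough, in which case flag a potential novel variant.

```python
from collections import Counter
import math

# Tiny invented corpus standing in for the proprietary training data.
LABELED = {
    "romance": ["i love you send money for my flight",
                "darling i need help with hospital bills"],
    "investment": ["guaranteed returns on crypto trading platform",
                   "double your bitcoin with our exchange"],
    "tech_support": ["your computer has a virus call support now",
                     "microsoft detected malware pay to fix"],
}

def vec(text):
    """Bag-of-words vector: a crude stand-in for a transformer embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

CENTROIDS = {label: vec(" ".join(texts)) for label, texts in LABELED.items()}

def classify(message, novelty_threshold=0.2):
    """Return (label, score); label is 'novel' when no centroid is close."""
    v = vec(message)
    label, score = max(((l, cosine(v, c)) for l, c in CENTROIDS.items()),
                       key=lambda t: t[1])
    return ("novel", score) if score < novelty_threshold else (label, score)

print(classify("double your bitcoin guaranteed returns"))
print(classify("your parcel is held at customs pay the release fee"))
```

In the described system, messages landing in the "novel" bucket would be clustered, and a cluster crossing the significance threshold would trigger an emerging-threat advisory.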

Analogy

It's like a librarian who not only instantly shelves every book by genre but also notices when someone sneaks in a book from a genre that doesn't exist yet—and immediately tells everyone to watch out.

Key Technical Team Members

  • Daniel Spokoyny, Co-founder
  • Ariana Mirian, Co-founder
  • Nikolai Vogler, Co-founder

Three PhD founders who literally wrote the paper introducing the first AI system for sophisticated trust-based scam engagement. Combined 20+ years in ML, NLP, and security research across CMU, UCSD, Google, Microsoft, and Censys. DOD Cyber Competition winners. No competitor has this combination of published scam-engagement research and production security experience.


Funding History

  • 2025: Spokoyny, Mirian, and Vogler begin research collaboration at UCSD
  • 2026 Q1: BeeSafe AI founded, accepted into YC W26
  • 2026 Q1: $500K seed via YC
  • 2026: Intelligence already enabling financial services and government agencies to intercept scammers


Competitors

  • Fraud Detection: Sardine, Unit21, Featurespace
  • Identity Verification: Socure, Persona
  • Scam-Specific: Kitboga (entertainment/awareness), various telecom scam filters
  • Enterprise Security: CrowdStrike, Palo Alto Networks
  • AI Safety: Anthropic (trust/safety), OpenAI (abuse detection)
