How Is Resonate Using AI?

AI-native messaging for team chat, built on three capabilities: voice emotion recognition from audio, agentic workflow orchestration within conversations, and multimodal speech-to-knowledge extraction.

Company Overview

Resonate is building the first AI-native messaging platform, using LLM-powered agentic workflows, live transcription, emotion extraction, and generative video to reimagine voice, text, and video communication from the ground up.

Product Roadmap & Public Announcements

Resonate has publicly announced real-time messaging capabilities, AI-powered voice messaging with smart formatting and emotion extraction, live transcription and editing, interoperability with WhatsApp/Signal/Telegram voice messages, Hindi language support, creator following features, direct calling, and an "Advanced Agentic Video" product called Resonate Create. App store changelogs confirm chat rooms, notifications, faster search, and shareable video features in active development.

Signals & Private Analysis

Job postings for Founding Full Stack Engineers ($150k-$250k salary, 1-2.5% equity) and Associate Product Marketing Managers signal an imminent product launch and go-to-market push. The hire of Manjot Pal (10+ years of AI experience) suggests investment in proprietary model fine-tuning or advanced orchestration. GitHub activity around durable execution frameworks, agentic AI SDKs (Python/TypeScript), and human-in-the-loop patterns indicates infrastructure for multi-agent communication workflows. Low social media presence suggests stealth-mode positioning ahead of a broader consumer or enterprise launch. Conference and YC Demo Day appearances are likely planned for mid-2026.

Resonate

Machine Learning Use Cases

Voice Emotion Recognition
For Product Differentiation (Product)
Real-time emotion extraction and sentiment analysis on voice messages to enable emotionally intelligent communication and adaptive UI responses.

Layman's Explanation

The app listens to how you say something—not just what you say—and uses that emotional context to make conversations richer and smarter.

Use Case Details

Resonate applies speech emotion recognition (SER) models to incoming voice messages in real time, extracting paralinguistic features such as tone, pitch, cadence, and energy to classify emotional states (e.g., excitement, frustration, calm, urgency). These emotion labels are surfaced inline alongside transcriptions, enabling recipients to understand the sender's emotional intent before or instead of listening to the full audio. The system also feeds emotion signals into downstream features like adaptive UI theming, smart reply suggestions calibrated to emotional tone, and live communication coaching that nudges users toward more empathetic or effective responses. By embedding emotion as a first-class data type in the messaging layer, Resonate creates a feedback loop where users become more intentional communicators and the platform continuously improves its emotion models through implicit user corrections and engagement signals.
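In production, the emotion labels described above would come from a trained classifier over acoustic features. As a minimal sketch of the idea, the rule-based stand-in below (thresholds and feature names are illustrative assumptions, not Resonate's model) maps pitch, energy, and speech rate to coarse emotion labels:

```python
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    """Paralinguistic features extracted from a voice message."""
    pitch_mean_hz: float    # average fundamental frequency
    energy_rms: float       # loudness, normalized to 0..1
    speech_rate_wps: float  # words per second, from the transcript

def classify_emotion(f: VoiceFeatures) -> str:
    """Map features to a coarse emotion label with illustrative thresholds."""
    if f.energy_rms > 0.7 and f.speech_rate_wps > 3.0:
        return "excitement" if f.pitch_mean_hz > 200 else "urgency"
    if f.energy_rms > 0.5 and f.pitch_mean_hz < 150:
        return "frustration"
    return "calm"

# A loud, fast, high-pitched message reads as excitement.
print(classify_emotion(VoiceFeatures(220.0, 0.8, 3.5)))  # excitement
```

A real SER system would replace the hand-written rules with a model trained on labeled audio (e.g. over MFCC or spectrogram features), but the interface, features in, label out, is the same shape.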

Analogy

It's like having a best friend who whispers "heads up, they sound upset" before you read a text—except the best friend is an AI that never misses a cue.

Agentic Workflow Orchestration
For Operational Efficiency (Engineering)

Multi-agent LLM orchestration framework powering durable, stateful agentic workflows for automated messaging, content generation, and human-in-the-loop task completion.

Layman's Explanation

Instead of building every AI feature from scratch, Resonate built a system where AI agents can be assembled like LEGO blocks—each one handling a piece of a complex task and handing off to the next.

Use Case Details

Resonate has built a durable execution framework with Python and TypeScript SDKs that abstracts the complexity of orchestrating multiple LLM-powered agents across distributed systems. Each agent is defined as a composable function with registered dependencies, persistent state, and built-in retry/recovery logic, enabling recursive tool-augmented workflows where agents can call other agents, invoke external APIs, or pause for human-in-the-loop intervention. This infrastructure powers features across the product—from AI-generated video in Resonate Create (where one agent scripts, another generates visuals, and a third composes the final output) to smart reply generation in chat (where an agent reads conversation context, infers intent, and drafts contextually appropriate responses). The framework is LLM-agnostic, supporting prompt-based agent templates that can be swapped between model providers (OpenAI, Anthropic, open-source) without rewriting application logic. Centralized orchestration APIs manage lock states, execution queues, and observability, giving engineers full visibility into agent behavior and enabling rapid iteration on new agentic features.
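The composition pattern described above, agents as functions with retry logic, shared state, and a pause point for human review, can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Resonate's SDK; the agent names and state keys are invented for the example:

```python
from typing import Callable

# An agent is a composable function from workflow state to new state.
Agent = Callable[[dict], dict]

def with_retry(agent: Agent, attempts: int = 3) -> Agent:
    """Wrap an agent step with simple retry/recovery logic."""
    def wrapped(state: dict) -> dict:
        last_err = None
        for _ in range(attempts):
            try:
                return agent(state)
            except Exception as err:
                last_err = err
        raise RuntimeError(f"agent failed after {attempts} attempts") from last_err
    return wrapped

def run_workflow(steps: list[Agent], state: dict) -> dict:
    """Run agents sequentially, each handing its state to the next."""
    for step in steps:
        state = step(state)
        if state.get("needs_human"):  # pause point for human-in-the-loop review
            break
    return state

# Hypothetical video pipeline: one agent scripts, the next "renders".
script_agent = with_retry(lambda s: {**s, "script": f"Scene about {s['topic']}"})
render_agent = with_retry(lambda s: {**s, "video": f"[rendered] {s['script']}"})

result = run_workflow([script_agent, render_agent], {"topic": "product demo"})
print(result["video"])
```

A durable execution framework adds what this sketch omits: persisted state between steps, execution queues, and recovery after process crashes, so a workflow can resume mid-pipeline rather than restart.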

Analogy

It's like a relay race where each AI runner knows exactly when to grab the baton, what leg to run, and when to call a human coach onto the track if things get tricky.

Multimodal Speech-to-Knowledge
For Decision Quality (Data)

Cross-platform voice message ingestion with automatic transcription, summarization, and semantic search across imported conversations from WhatsApp, Signal, and Telegram.

Layman's Explanation

You can dump all your voice messages from every chat app into Resonate and instantly search, read, or get a summary of anything anyone ever said to you—across all platforms.

Use Case Details

Resonate's interoperability layer ingests voice messages exported from WhatsApp, Signal, Telegram, and other platforms, running them through an automatic speech recognition (ASR) pipeline that supports multiple languages (with Hindi confirmed as an early priority). Each voice message is transcribed, timestamped, and speaker-diarized where possible, then passed through an NLP summarization model that generates concise, searchable abstracts. The resulting text is embedded into a vector store using sentence-level embeddings, enabling semantic search—users can query by meaning rather than exact keywords (e.g., "that time Maria talked about the restaurant in Brooklyn" retrieves the relevant voice clip and transcript). Smart formatting post-processing structures transcriptions with paragraph breaks, punctuation restoration, and named entity recognition to make raw voice content as scannable as written text. This transforms historically ephemeral, unsearchable voice messages into a structured, queryable knowledge base, dramatically improving information retrieval and reducing the friction of voice-first communication.
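The retrieval step above, embed transcripts, embed the query, rank by similarity, can be sketched with a toy word-overlap "embedding" standing in for the sentence-level encoders the text describes. Everything here is illustrative; a production system would use a trained sentence encoder and a vector store:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector (real systems use sentence encoders)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def semantic_search(query: str, transcripts: list[str]) -> str:
    """Return the transcript closest to the query in embedding space."""
    q = embed(query)
    return max(transcripts, key=lambda t: cosine(q, embed(t)))

transcripts = [
    "Maria talked about the restaurant in Brooklyn",
    "Reminder to send the quarterly report",
]
print(semantic_search("restaurant Maria mentioned", transcripts))
```

With learned embeddings, the same ranking retrieves by meaning rather than exact word overlap, which is what makes queries like "that time Maria talked about the restaurant" work even when the transcript uses different wording.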

Analogy

It's like having a personal secretary who listens to every voice note you've ever received across every app, takes perfect notes, and can instantly find the one where your mom told you her secret recipe.

Key Technical Team Members

  • Sam Kaplan, Founder & CEO
  • Manjot Pal

Sam Kaplan combines deep engineering leadership from scaling Brex's infrastructure with a ground-up AI-native architecture, allowing Resonate to embed generative AI and agentic workflows into the messaging layer itself rather than retrofitting legacy chat systems. This gives the company a structural speed advantage over incumbents like WhatsApp and Telegram, which bolt AI on as a feature.


Funding History

  • 2024-2025 | Sam Kaplan begins building Resonate.
  • Winter 2026 | Accepted into Y Combinator (Jared Friedman as group partner).
  • 2026 | Beta launched on iOS/Android app stores.
  • 2026 | Hiring founding engineers and GTM team in San Francisco.
  • 2026 | ~$500K (estimated YC standard deal) raised to date; no additional rounds disclosed.


Competitors

  • AI-Enhanced Messaging: WhatsApp (Meta AI integration), Telegram (AI bots/assistants), Google Messages (Gemini), Apple Messages (Apple Intelligence).
  • AI-Native Communication Startups: Beeper (unified messaging), Dust.tt (AI-native workspace), Character.AI (conversational AI).
  • Voice/Video AI: Otter.ai (transcription), Descript (AI video/audio editing), ElevenLabs (voice AI).
  • Agentic Platforms: LangChain, CrewAI, AutoGen (developer-focused agent orchestration).