Collaborative AI filmmaking platform orchestrating the best video generation models for directors.
It combines multi-model orchestration across Veo, Sora, and Kling with collaborative generative editing, cinematic controls, and AI previsualization.

AI Filmmaking | YC W26

Last Updated: March 19, 2026

Collaborative, browser-based AI-native filmmaking platform that orchestrates leading AI video and image generation models (Veo 3.1, Sora 2 Pro, Kling 3.0 Pro, etc.) with professional cinematic controls, real-time team collaboration, and timeline editing with XML export to Premiere Pro, DaVinci Resolve, and Final Cut Pro.
Martini has publicly launched its browser-based collaborative filmmaking platform with model-agnostic AI video/image generation, cinematic camera and lens controls, real-time multi-user collaboration, and XML export to major NLEs. Over 200 films have been produced on the platform. The company has announced a Creative Partnership Program offering early access, priority support, and custom integrations for studios and agencies. Near-term public signals point to expanded model integrations as new AI video models emerge and continued UX refinement based on community feedback.
Reddit and community feedback highlight requests for improved video extension controls and better UX discoverability, suggesting active iteration cycles. The absence of public job postings implies a lean, founder-led engineering team still focused on core product-market fit rather than scaling headcount. GitHub and technical signals suggest investment in deeper NLE plugin development (native Premiere/Resolve plugins beyond XML), potential Frame.io or cloud review platform integration, and early exploration of custom model fine-tuning for filmmaker-specific visual styles. Conference and YC demo day positioning hints at enterprise pipeline features (ShotGrid integration, permissions, governance) and eventual expansion into AI-driven previsualization, virtual production blocking, and audio/sound design tooling.
Model-agnostic AI orchestration engine that routes each shot to the optimal generative video or image model based on creative intent, enabling filmmakers to direct AI-generated footage with professional cinematic controls (camera, lens, movement) across multiple foundation models in a unified workspace.
Instead of juggling five different AI video tools, filmmakers pick the best AI brain for each shot from one control room.
Martini's engineering team has built a model-agnostic orchestration layer that integrates leading AI video generation models (Veo 3.1, Sora 2 Pro, Kling 3.0 Pro, Kling O1, Hailuo 02, Seedance 1.5) and image generation models (Nano Banana Pro, Flux 2 Max) into a single browser-based workspace. The system translates traditional filmmaking parameters—camera position, lens selection, focal length, and camera movement—into model-specific generative prompts and conditioning inputs, abstracting away the technical differences between diffusion-based architectures (DDPMs, LDMs) and proprietary model APIs. This allows directors and cinematographers to maintain creative intentionality while the platform handles model routing, parameter mapping, and output normalization. The orchestration engine evaluates model strengths (e.g., photorealism, motion coherence, stylization) and presents unified controls so users can switch models mid-project without workflow disruption, dramatically reducing the friction of multi-tool workflows that plague current AI filmmaking pipelines.
It's like having a universal remote that controls every streaming service, except instead of movies you're making them—and each button picks the best AI director for the scene.
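A minimal sketch of what such a routing layer could look like, assuming a simple intent-scoring scheme; the model scores, dataclass fields, and prompt mapping below are illustrative assumptions rather than Martini's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ShotSpec:
    prompt: str
    lens_mm: int = 35               # focal length the director chose
    camera_move: str = "static"     # e.g. "dolly-in", "pan-left", "handheld"
    priority: str = "photorealism"  # or "motion_coherence" / "stylization"

# Illustrative per-model strength scores (0-1); a real router would draw on
# curated evaluation data rather than hard-coded numbers.
MODEL_STRENGTHS = {
    "veo-3.1":       {"photorealism": 0.90, "motion_coherence": 0.80, "stylization": 0.60},
    "sora-2-pro":    {"photorealism": 0.85, "motion_coherence": 0.90, "stylization": 0.70},
    "kling-3.0-pro": {"photorealism": 0.80, "motion_coherence": 0.85, "stylization": 0.80},
}

def route_shot(shot: ShotSpec) -> dict:
    """Pick the model best matched to the shot's creative priority and fold
    cinematic parameters into a model-agnostic request."""
    model = max(MODEL_STRENGTHS, key=lambda m: MODEL_STRENGTHS[m][shot.priority])
    # In a production system these parameters would map to each model's own
    # conditioning inputs; here they are simply folded into the prompt text.
    prompt = (f"{shot.prompt}, shot on a {shot.lens_mm}mm lens, "
              f"camera move: {shot.camera_move}")
    return {"model": model, "prompt": prompt,
            "params": {"lens_mm": shot.lens_mm, "move": shot.camera_move}}

print(route_shot(ShotSpec("rain-soaked alley at night", lens_mm=24,
                          camera_move="slow dolly-in", priority="motion_coherence")))
```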
Real-time collaborative AI filmmaking workspace enabling multiple team members to simultaneously prompt, generate, review, and iterate on AI-generated shots within a shared browser-based timeline editor, with XML export to industry-standard NLEs.
Multiple filmmakers work on the same AI-generated movie at the same time in one browser, like Google Docs but for making films.
Martini's product team has developed a real-time collaborative workspace modeled after the multiplayer paradigm of tools like Figma, but purpose-built for generative AI filmmaking. Multiple users—directors, cinematographers, editors, producers—can simultaneously create prompts, generate AI video and image assets, assemble shots on a shared cloud-based timeline, and provide inline feedback without version conflicts or file-passing bottlenecks. The platform maintains synchronized state across all participants, with live cursors, shared prompt histories, and collaborative shot selection. The timeline editor supports professional editorial workflows including shot ordering, trimming, and assembly, with one-click XML export that preserves timeline structure, clip metadata, and edit decisions for seamless import into Adobe Premiere Pro, DaVinci Resolve, and Final Cut Pro. This bridges the gap between AI generation and traditional post-production, allowing teams to move from ideation to rough cut within the platform and then finish in their preferred NLE. The collaborative layer also powers Martini's Creative Partnership Program, where studios and agencies receive custom integrations and priority support for high-profile productions.
It's like Google Docs met a Hollywood editing suite and they had a baby that also happens to conjure footage out of thin air.
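A minimal sketch of the kind of FCP-style XML (xmeml) interchange such an exporter might emit; the element names follow the public xmeml convention that Premiere Pro, Resolve, and Final Cut can import, but the clip model and Martini's actual export schema are assumptions:

```python
import xml.etree.ElementTree as ET

def export_timeline(clips, fps=24, name="martini_rough_cut"):
    """clips: list of dicts with 'name', 'path', and 'duration' (in frames)."""
    root = ET.Element("xmeml", version="4")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = name
    ET.SubElement(ET.SubElement(seq, "rate"), "timebase").text = str(fps)
    track = ET.SubElement(ET.SubElement(ET.SubElement(seq, "media"), "video"), "track")

    playhead = 0
    for clip in clips:
        item = ET.SubElement(track, "clipitem")
        ET.SubElement(item, "name").text = clip["name"]
        ET.SubElement(item, "start").text = str(playhead)  # timeline in-point
        ET.SubElement(item, "end").text = str(playhead + clip["duration"])
        ET.SubElement(ET.SubElement(item, "file"), "pathurl").text = clip["path"]
        playhead += clip["duration"]

    return ET.tostring(root, encoding="unicode")

print(export_timeline([
    {"name": "shot_010", "path": "file://renders/shot_010.mp4", "duration": 96},
    {"name": "shot_020", "path": "file://renders/shot_020.mp4", "duration": 72},
]))
```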
AI-driven cinematic previsualization system that translates directorial intent—camera angles, lens choices, blocking, and movement—into structured generative parameters, enabling rapid visual prototyping of scenes before committing to full production or final AI rendering.
Directors can now storyboard entire scenes with AI-generated footage in hours instead of spending days and thousands of dollars on traditional previsualization.
Martini's platform enables a fundamentally new approach to previsualization by allowing filmmakers to translate traditional directorial decisions—camera position, lens focal length, camera movement trajectories, and start/end frame compositions—into structured conditioning inputs for generative AI models. Rather than relying on expensive 3D previsualization software, motion capture stages, or hand-drawn storyboards, directors and cinematographers can rapidly generate multiple visual interpretations of a scene, compare compositions side-by-side, and iterate in real time. The system leverages the platform's multi-model orchestration to select the most appropriate generative model for each previsualization need (e.g., fast low-fidelity drafts for blocking vs. high-fidelity photorealistic renders for client presentations). With over 200 films already produced on the platform—including TV commercials and branded content—this workflow has proven particularly valuable for agencies and studios that need to present creative concepts to clients quickly. The cinematic parameter controls ensure that AI-generated previs footage maintains the visual language and intentionality that professional filmmakers expect, making the output directly useful for production planning rather than merely inspirational.
It's like sketching your dream house in photorealistic detail in minutes instead of hiring an architect for weeks—except the house is a movie and the sketch actually moves.
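A hedged sketch of how directorial choices might map to a structured previs request with a draft-versus-presentation fidelity split; the field names and render settings below are assumptions, not Martini's actual parameter schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class PrevisShot:
    description: str
    lens_mm: int
    camera_move: str
    start_frame: str          # e.g. a composition note or reference image
    end_frame: str
    fidelity: str = "draft"   # "draft" for blocking, "presentation" for clients

def build_previs_request(shot: PrevisShot) -> dict:
    """Map a previs shot into generation settings; low-fidelity drafts trade
    resolution and quality for iteration speed, presentation renders do not."""
    settings = {"draft": {"resolution": "540p", "quality": "fast"},
                "presentation": {"resolution": "1080p", "quality": "high"}}
    return {
        "prompt": (f"{shot.description}, {shot.lens_mm}mm lens, "
                   f"{shot.camera_move}, from {shot.start_frame} to {shot.end_frame}"),
        "conditioning": asdict(shot),
        "render": settings[shot.fidelity],
    }

print(build_previs_request(PrevisShot(
    "two characters argue across a kitchen table", lens_mm=50,
    camera_move="slow push-in", start_frame="wide two-shot",
    end_frame="tight over-the-shoulder", fidelity="draft")))
```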
Martini uniquely combines real filmmaking expertise (a co-founder is a working cinematographer) with a model-agnostic orchestration layer, giving professional filmmakers directorial-grade control over the best AI models in a collaborative, browser-based environment. This bridges the gap between AI generation and professional post-production workflows in a way no competitor currently spans end-to-end.