World Labs

Product & Competitive Intelligence

Generates navigable 3D worlds from text and images using large world models.

Company Overview

World Labs is a spatial intelligence company that trains large world models to generate, edit, and render photorealistic 3D environments from multimodal inputs. It serves customers across gaming, VFX, robotics simulation, and AR/VR, with integrations targeting Unity, Unreal, and Blender users.

Latest Intel

Zeitgeist tracks private signals to determine where the company is heading and what it means competitively.


What They're Building

The company's public product roadmap and what they're committed to building.

Marble:

Multimodal world model for generating and editing 3D Gaussian Splat scenes from text, images, video, or 3D layouts.
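Marble's output format, 3D Gaussian Splats, represents a scene as a large set of anisotropic Gaussians rather than meshes. A minimal sketch of the standard 3DGS per-splat parameterization (the published technique, not Marble's internal format, which is not public):

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def splat_covariance(quaternion, scale):
    """Standard 3DGS covariance Sigma = R S S^T R^T: symmetric and
    positive semi-definite by construction, so it is always a valid
    Gaussian no matter what parameters the optimizer produces."""
    R = quat_to_rotmat(np.asarray(quaternion, dtype=float))
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

# One splat: position, covariance (orientation + per-axis scale),
# opacity, and color.
splat = {
    "mean": np.array([0.0, 1.0, -2.0]),
    "cov": splat_covariance([0.92, 0.38, 0.0, 0.0], [0.3, 0.1, 0.1]),
    "opacity": 0.8,
    "color": np.array([0.7, 0.5, 0.3]),
}
```

Factoring the covariance through a rotation and a diagonal scale is what makes splats cheap to optimize and to project to screen space at render time.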

World API:

Developer API for programmable 3D world generation, accepting text, images, panoramas, and video as inputs.
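This brief does not document the World API's schema, so the endpoint, field names, and request shape below are hypothetical; the sketch only illustrates the multimodal input surface the description implies (any mix of text, image, panorama, and video):

```python
import json

# Hypothetical request builder -- the real World API's paths, field
# names, and auth scheme are not documented in this brief.
def build_world_request(prompt=None, image_url=None,
                        panorama_url=None, video_url=None):
    """Assemble a generation request from any combination of the four
    input modalities the World API is described as accepting."""
    inputs = {
        "text": prompt,
        "image": image_url,
        "panorama": panorama_url,
        "video": video_url,
    }
    payload = {k: v for k, v in inputs.items() if v is not None}
    if not payload:
        raise ValueError("at least one input modality is required")
    return json.dumps({"inputs": payload})

req = build_world_request(prompt="a mossy forest clearing at dusk")
```

The point of the sketch is the programmability claim: world generation driven by structured requests rather than an interactive editor.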

Spark 2.0:

Open-source 3D Gaussian Splatting renderer built on THREE.js, WebGL2, and Rust/WASM for browser-based visualization.
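Spark's hot path lives in Rust/WASM and THREE.js; as a conceptual illustration only (not Spark's code or API), the core per-pixel work any Gaussian-splat renderer performs is a depth sort followed by front-to-back alpha compositing:

```python
# Conceptual sketch of per-pixel Gaussian-splat compositing: sort the
# splats covering a pixel by camera-space depth, then blend front to
# back, tracking how much light still passes through (transmittance).
def composite(splats):
    """splats: list of (depth, color, alpha) tuples covering one pixel."""
    color, transmittance = 0.0, 1.0
    for depth, c, a in sorted(splats, key=lambda s: s[0]):  # nearest first
        color += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early termination, standard in 3DGS
            break
    return color, transmittance

# Two half-opaque splats: the nearer one contributes twice as much.
pixel = composite([(2.0, 1.0, 0.5), (1.0, 1.0, 0.5)])
```

Doing this sort for millions of splats per frame is why a browser renderer needs WebGL2 plus a Rust/WASM layer rather than plain JavaScript.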

RTFM:

Real-time generative world model research preview for interactive, frame-by-frame world generation on GPU.
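"Frame-by-frame" implies the model is autoregressive over frames: each new frame is conditioned on recent history plus the user's control input. A toy loop under that assumption — the stub `model` below is a stand-in, not RTFM's architecture:

```python
# Toy autoregressive world-model loop. Each frame is generated from a
# bounded history window plus a user action. The `model` function is a
# stub stand-in; RTFM's actual architecture and interface are not public.
def model(context, action):
    """Stub 'world model': returns a fake frame tagged with step and action."""
    return ("frame", len(context), action)

def run_interactive(actions, context_window=8):
    context, frames = [], []
    for action in actions:
        frame = model(context, action)                    # condition on history
        frames.append(frame)
        context = (context + [frame])[-context_window:]   # bounded memory
    return frames

frames = run_interactive(["forward", "left", "forward"])
```

The bounded context window is the interesting constraint: real-time interactivity forces the model to regenerate each frame on GPU within a frame budget rather than rendering a stored scene.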

Competitive Landscape & Moat

The co-founders created ImageNet and NeRF, giving them proprietary intuition about 3D data curation and neural rendering. That intuition translates directly into training-data advantages and architectural decisions that competitors must rediscover independently.

Direct Competitors

Luma AI:

Focuses on visual realism and video generation rather than persistent, navigable 3D worlds; more consumer-oriented with lighter infrastructure requirements.

NVIDIA Omniverse:

Enterprise digital twin platform with massive distribution but relies on explicit 3D authoring rather than generative AI for world creation.

Google DeepMind (Genie):

Research-stage world models with access to enormous compute and data, but no standalone product or developer API, and constrained by Google's product prioritization.

Founding Team

Fei-Fei Li, CEO

Co-Founder (ex-Google Cloud, Stanford)

Justin Johnson

Co-Founder (ex-Meta FAIR, University of Michigan)

Christoph Lassner

Co-Founder (ex-Epic Games, ex-Meta, ex-Amazon)

Ben Mildenhall

Co-Founder (ex-Google Research, UC Berkeley)

Li built ImageNet (the dataset that started the deep learning era), Mildenhall invented NeRF (the representation this company commercializes), Johnson trained under Li and shipped 3D research at FAIR, and Lassner built production rendering at Epic Games. This is the rare team where the founders literally wrote the papers the entire field builds on, and one of them has shipped it in a game engine.

Funding History

2024 | Seed: ~$30M from Radical Ventures

2024 | Series A: $100M led by NEA

2024 | Series C: $230M led by a16z

2026 | Series D: $1B with $200M strategic from Autodesk, plus AMD, Emerson Collective, Fidelity, Nvidia