How Is Piris Labs Using AI?

Piris Labs builds photonic AI inference hardware that delivers 5x lower latency and 10x lower power than GPUs.

The company applies machine learning across three areas: photonic inference optimization for optical data movement, scalable inference orchestration, and automated model adaptation for photonic hardware.

Company Overview

Piris Labs builds a vertically integrated AI inference platform that combines proprietary photonic (optical) interconnects with a custom software stack to deliver ultra-fast, scalable, and cost-effective inference for trillion-parameter AI models.

Product Roadmap & Public Announcements

No official public roadmap has been disclosed. Based on YC Demo Day materials and founder communications, Piris Labs has signaled plans for modular photonic interconnect hardware, a vertically optimized inference software stack, and partnerships with chip makers and ODMs to integrate optical data movement into existing data center architectures. Their public positioning emphasizes 5x lower latency, 10x lower power per bit, and 2x lower cost per token versus GPU clusters.

Signals & Private Analysis

GitHub and social media activity from founder Ali Khalatpour points to active R&D in photonic interconnect prototyping and hardware-software co-design. Conference appearances at photonics and AI hardware events suggest ongoing academic collaborations (MIT RLE, Stanford LINQS). Hiring patterns indicate a focus on photonics engineers and systems software developers. The advisory relationship with the ex-Groq President signals potential enterprise go-to-market strategy development and possible future fundraising at a significantly higher valuation. Stealth posture suggests they are protecting IP and timing a larger reveal around a working hardware demo.

Machine Learning Use Cases

Photonic inference optimization
For Product Differentiation (Engineering)

Photonic interconnect-optimized inference engine that minimizes data movement bottlenecks to deliver ultra-low-latency token generation for trillion-parameter LLMs.

Layman's Explanation

They built a special light-based data highway inside AI chips so massive AI models can think faster while using way less electricity.

Use Case Details

Piris Labs' core engineering use case centers on replacing traditional copper-based electrical interconnects with proprietary photonic (optical) interconnects for AI inference workloads. In conventional GPU clusters, the dominant bottleneck for large model inference is not compute but data movement—shuttling billions of parameters and activations between memory, processors, and nodes over electrical links that consume enormous power and introduce latency. Piris Labs addresses this by designing optical interconnects that transmit data at the speed of light with dramatically lower energy per bit. Their vertically integrated software stack is co-designed with this hardware to schedule computation, manage memory placement, and orchestrate data flow in a photonics-aware manner, ensuring that the theoretical advantages of optical data movement translate into real-world inference speedups. This full-stack co-design approach—from photonic chip layout to compiler-level optimizations—enables them to claim 5x lower latency and 10x lower power per bit versus standard GPU inference clusters, making trillion-parameter model serving economically viable at scale.
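The energy argument above can be made concrete with back-of-envelope arithmetic. The sketch below compares the cost of moving a trillion-parameter model's weights over electrical versus optical links; the picojoule-per-bit figures are illustrative assumptions chosen to match the claimed 10x power-per-bit gap, not published Piris Labs numbers.

```python
# Illustrative estimate of data-movement energy for one full pass over
# a trillion-parameter model's weights. All link-energy figures below
# are assumptions for illustration, not Piris Labs specifications.

PARAMS = 1e12                   # trillion-parameter model
BITS_PER_PARAM = 16             # fp16 weights
ELECTRICAL_PJ_PER_BIT = 10.0    # assumed electrical interconnect energy
OPTICAL_PJ_PER_BIT = 1.0        # assumed photonic link energy (10x lower)

def movement_energy_joules(pj_per_bit: float) -> float:
    """Energy to move every parameter across the interconnect once."""
    total_bits = PARAMS * BITS_PER_PARAM
    return total_bits * pj_per_bit * 1e-12  # convert pJ to J

electrical = movement_energy_joules(ELECTRICAL_PJ_PER_BIT)
optical = movement_energy_joules(OPTICAL_PJ_PER_BIT)
print(f"electrical: {electrical:.0f} J, optical: {optical:.0f} J")
print(f"ratio: {electrical / optical:.0f}x")
```

Because this cost is paid on every token generated, a 10x reduction in energy per bit compounds directly into the per-token power and latency claims.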

Analogy

It's like replacing a congested highway of delivery trucks (electrical wires) with a network of teleportation portals (light beams) so packages (data) arrive instantly without burning any gas.

Scalable inference orchestration
For Cost Reduction (Operations)

Modular, horizontally scalable inference architecture that enables seamless scaling from single-node to multi-rack deployments without degradation in per-token latency or throughput.

Layman's Explanation

They designed their AI system like stackable LEGO blocks so you can keep adding more power without everything slowing down or getting expensive.

Use Case Details

The second major ML use case at Piris Labs is their approach to scalable inference orchestration. While competitors like Cerebras deliver exceptional single-device inference speed via wafer-scale chips, their architecture is inherently difficult to scale horizontally—you cannot simply "add another wafer" to double throughput. Piris Labs' modular photonic architecture is designed from the ground up for horizontal scalability: each inference node communicates with others via high-bandwidth, low-latency optical links, enabling clusters to grow from a single node to full data center racks with near-linear scaling efficiency. Their operations-layer software handles intelligent workload distribution, load balancing, fault tolerance, and resource allocation across the photonic mesh. This means customers can start small and scale inference capacity on demand without re-architecting their deployment, paying only for the capacity they need. The result is a 2x lower cost per token compared to GPU clusters, achieved not just through faster hardware but through superior utilization and elimination of the communication overhead that plagues distributed GPU inference at scale.
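The load-balancing behavior described above can be sketched with a least-loaded scheduler over a mesh of nodes. The class and method names below are hypothetical illustrations of what an orchestration layer might do, not Piris Labs' actual API.

```python
import heapq

# Hypothetical sketch of least-loaded dispatch across inference nodes,
# the kind of workload distribution an orchestration layer performs.
# Names are illustrative, not Piris Labs' real interfaces.

class InferenceMesh:
    def __init__(self, num_nodes: int):
        # Min-heap of (queued_work, node_id) makes dispatch O(log n).
        self._heap = [(0.0, node) for node in range(num_nodes)]
        heapq.heapify(self._heap)

    def dispatch(self, cost: float) -> int:
        """Route a request to the least-loaded node; return its id."""
        load, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + cost, node))
        return node

    def max_load(self) -> float:
        """Worst-case queued work across the mesh."""
        return max(load for load, _ in self._heap)

mesh = InferenceMesh(num_nodes=4)
for cost in [1.0] * 8:      # eight equal-cost requests
    mesh.dispatch(cost)
print(mesh.max_load())      # evenly balanced: 2.0 per node
```

Near-linear scaling then reduces to keeping the dispatch decision cheap while the optical links keep inter-node communication from becoming the new bottleneck.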

Analogy

It's like having a restaurant kitchen where adding more chefs actually makes every dish come out faster instead of causing everyone to bump into each other and slow down.

Automated model adaptation
For Operational Efficiency (Product)

Hardware-software co-designed model optimization pipeline that automatically adapts large AI models for photonic inference, maximizing throughput and minimizing latency without manual model surgery.

Layman's Explanation

They built a smart translator that automatically reshapes any AI brain to run perfectly on their light-speed hardware without anyone needing to manually tinker with it.

Use Case Details

Piris Labs' third key ML use case is an automated model optimization and adaptation pipeline that sits at the intersection of product and ML engineering. Deploying a trillion-parameter model on novel hardware is notoriously difficult: models trained on GPUs encode assumptions about memory hierarchy, parallelism patterns, and numerical precision that may not map cleanly onto photonic architectures. Piris Labs addresses this with an automated pipeline that takes standard model checkpoints (e.g., from Hugging Face, custom training runs) and applies a sequence of hardware-aware optimizations—including graph rewriting, operator fusion, quantization-aware calibration, and photonic-specific data layout transformations—to produce an inference-ready artifact tuned for their optical hardware. The pipeline uses ML-driven search (e.g., learned cost models, reinforcement learning-based optimization) to explore the space of possible transformations and select configurations that maximize throughput and minimize latency for each specific model architecture. This dramatically lowers the barrier to adoption: customers do not need photonics expertise or manual model surgery to benefit from the platform, making Piris Labs' offering accessible to any AI team with a standard model artifact.
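The configuration search described above can be sketched as an enumeration over hardware-aware transform options scored by a cost model. Everything here is an illustrative assumption: the option names, the multipliers, and the toy cost model standing in for the learned one the text describes.

```python
import itertools

# Hedged sketch of automated configuration search for model adaptation:
# enumerate transform options and pick the combination a cost model
# predicts is fastest. All names and numbers are illustrative.

SEARCH_SPACE = {
    "precision": ["fp16", "int8"],
    "operator_fusion": [False, True],
    "data_layout": ["row_major", "photonic_tiled"],
}

def predicted_latency_ms(cfg: dict) -> float:
    """Toy cost model standing in for a learned one."""
    latency = 100.0
    if cfg["precision"] == "int8":
        latency *= 0.5   # assumed quantization speedup
    if cfg["operator_fusion"]:
        latency *= 0.75  # assumed win from fewer kernel launches
    if cfg["data_layout"] == "photonic_tiled":
        latency *= 0.5   # assumed layout match to optical links
    return latency

def best_config() -> tuple[dict, float]:
    """Exhaustively score every configuration; return the fastest."""
    keys = list(SEARCH_SPACE)
    candidates = (dict(zip(keys, values))
                  for values in itertools.product(*SEARCH_SPACE.values()))
    return min(((c, predicted_latency_ms(c)) for c in candidates),
               key=lambda pair: pair[1])

cfg, latency = best_config()
print(cfg, latency)  # int8 + fusion + photonic_tiled -> 18.75 ms
```

A production pipeline would replace exhaustive enumeration with the learned-cost-model or reinforcement-learning search the text mentions, since real transform spaces are far too large to enumerate.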

Analogy

It's like a universal travel adapter that automatically reshapes itself to fit any country's power outlet so your devices just work wherever you go.

Key Technical Team Members

  • Ali Khalatpour, Co-Founder & CEO
  • Keyvan Moghadam, Team Member
  • Advisor: MIT RLE Director
  • Advisor: Ex-Groq President

Ali Khalatpour's rare combination of world-class photonics research (MIT/Stanford/Harvard/NASA) and big-tech AI product scaling experience (Meta, X) gives Piris Labs a unique ability to co-design optical hardware and ML software from first principles, a vertical integration advantage that pure-software or pure-hardware competitors cannot easily replicate.

Funding History

  • 2025 | Ali Khalatpour founds Piris Labs.
  • 2025 | Accepted into Y Combinator (S25 batch).
  • 2025 | $500K Seed investment from Y Combinator.
  • 2026 | Operating in stealth; no additional funding rounds publicly announced.

Competitors

  • Wafer-Scale / Custom ASIC Inference: Cerebras Systems, Groq, SambaNova Systems.
  • GPU Cloud Inference: Together AI, Fireworks AI, Anyscale.
  • Optical/Photonic AI Hardware: Lightmatter, Luminous Computing, Ayar Labs (optical interconnects).
  • Hyperscaler Inference: AWS Inferentia, Google TPU, Azure Maia.