Piris Labs

Product & Competitive Intelligence

Builds photonic AI inference hardware with 5x lower latency and 10x lower power than GPUs.

Company Overview

Piris Labs builds a vertically integrated AI inference platform that combines proprietary photonic (optical) interconnects with a custom software stack to deliver ultra-fast, scalable, and cost-effective inference for trillion-parameter AI models.

Competitive Advantage & Moat

Product Roadmap & Public Announcements

Piris Labs has signaled plans for modular photonic interconnect hardware, a vertically optimized inference software stack, and partnerships with chip makers and ODMs to integrate optical data movement into existing data center architectures. Public positioning: 5x lower latency, 10x lower power per bit, and 2x lower cost per token versus GPU clusters. Milestones to date: a working π Conversion Engine prototype (October 2025), an SBIR government partnership, and an early-access waitlist.
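To make the positioning concrete, the sketch below converts the claimed multipliers (5x latency, 10x power per bit, 2x cost per token) into illustrative figures. The GPU-cluster baseline numbers are assumptions for illustration only, not Piris Labs or industry data:

```python
# Toy sketch: what the public positioning multipliers would imply against
# a purely hypothetical GPU-cluster baseline. Only the multipliers come
# from Piris Labs' public claims; every baseline number is an assumption.
gpu_baseline = {
    "latency_ms_per_token": 50.0,    # assumed baseline
    "energy_pj_per_bit": 10.0,       # assumed baseline
    "cost_usd_per_1m_tokens": 2.00,  # assumed baseline
}

# Claimed improvement factors (from public positioning).
claims = {
    "latency_ms_per_token": 5,
    "energy_pj_per_bit": 10,
    "cost_usd_per_1m_tokens": 2,
}

# Divide each baseline metric by its claimed improvement factor.
photonic = {k: gpu_baseline[k] / claims[k] for k in gpu_baseline}

for k in gpu_baseline:
    print(f"{k}: {gpu_baseline[k]} -> {photonic[k]}")
```

Under these assumed baselines, the claims would translate to roughly 10 ms per token, 1 pJ per bit, and $1 per million tokens; the real figures depend entirely on the baseline chosen.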

Signals & Private Analysis

Conference appearances at photonics and AI hardware events suggest ongoing academic collaborations (MIT RLE, Stanford LINQS). Hiring patterns indicate a focus on photonics engineers and systems software developers. The advisory relationship with ex-Groq/NVIDIA President Mohsen Moazami signals potential enterprise go-to-market strategy development. Prof. Marc Baldo (MIT RLE Director) as advisor provides academic credibility. Stealth posture suggests they are protecting IP and timing a larger reveal around a working hardware demo.

Product Roadmap Priorities

Photonic inference optimization
Improving | Product Differentiation | Engineering

Photonic interconnect-optimized inference engine that minimizes data movement bottlenecks to deliver ultra-low-latency token generation for trillion-parameter LLMs.

In Plain English

They built a special light-based data highway inside AI chips so massive AI models can think faster while using way less electricity.

Analogy

It's like replacing a congested highway of delivery trucks (electrical wires) with a network of teleportation portals (light beams) so packages (data) arrive instantly without burning any gas.

Scalable inference orchestration
Improving | Cost Reduction | Operations

Modular, horizontally scalable inference architecture that enables seamless scaling from single-node to multi-rack deployments without degradation in per-token latency or throughput.
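The scaling claim above can be framed as a toy model: throughput grows linearly with node count while per-token latency stays flat, versus a congested baseline where cross-node data movement adds latency as nodes are added. All numbers and the contention model are illustrative assumptions, not measurements of Piris Labs hardware:

```python
# Toy model of "scaling without per-token latency degradation".
# per_node_tps, latency_ms, and overhead are assumed values for illustration.

def claimed_scaling(nodes, per_node_tps=1000.0, latency_ms=10.0):
    """Ideal horizontal scaling: linear throughput, constant latency."""
    return nodes * per_node_tps, latency_ms

def congested_scaling(nodes, per_node_tps=1000.0, latency_ms=10.0, overhead=0.05):
    """Baseline where each extra node adds 5% of base latency in data movement."""
    lat = latency_ms * (1 + overhead * (nodes - 1))
    # Effective throughput shrinks in proportion to the latency blow-up.
    return nodes * per_node_tps * latency_ms / lat, lat

for n in (1, 8, 64):
    tps_i, lat_i = claimed_scaling(n)
    tps_c, lat_c = congested_scaling(n)
    print(f"{n:3d} nodes  ideal: {tps_i:8.0f} tok/s @ {lat_i:4.1f} ms   "
          f"congested: {tps_c:8.0f} tok/s @ {lat_c:4.1f} ms")
```

In this toy model, 64 congested nodes run at over 4x the base latency while the ideal curve stays at 10 ms; whether Piris Labs' architecture actually tracks the ideal curve is exactly what a hardware demo would need to show.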

In Plain English

They designed their AI system like stackable LEGO blocks so you can keep adding more power without everything slowing down or getting expensive.

Analogy

It's like having a restaurant kitchen where adding more chefs actually makes every dish come out faster instead of causing everyone to bump into each other and slow down.

Automated model adaptation
Improving | Operational Efficiency | Product

Hardware-software co-designed model optimization pipeline that automatically adapts large AI models for photonic inference, maximizing throughput and minimizing latency without manual model surgery.

In Plain English

They built a smart translator that automatically reshapes any AI brain to run perfectly on their light-speed hardware without anyone needing to manually tinker with it.

Analogy

It's like a universal travel adapter that automatically reshapes itself to fit any country's power outlet so your devices just work wherever you go.

Key Team Members

  • Ali Khalatpour, Co-Founder & CEO
  • Keyvan Rezaei Moghadam, Co-Founder & President

Ali Khalatpour holds multiple MIT degrees (BSc EE/Math, MASc, MSc Physics, PhD EE) and is a Harvard- and Stanford-trained optical scientist with 10+ years of directly relevant optical hardware R&D; he developed the first room-temperature terahertz semiconductor laser and led the development of GUSTO's optical engine for NASA. Keyvan Rezaei Moghadam has a USC PhD in EE, 5 years at Meta as Technical Lead/Engineering Manager, and 2 years as Tech Lead at X (Twitter), with experience building 0-to-1 AI infrastructure. Advisors: Prof. Marc Baldo (MIT RLE Director) and Mohsen Moazami (ex-Groq/NVIDIA President).

Funding & Milestones

  • 2025 | Ali Khalatpour and Keyvan Rezaei Moghadam co-found Piris Labs.
  • 2025 | Working π Conversion Engine prototype completed.
  • 2025 | SBIR government partnership secured.
  • 2026 | Accepted into Y Combinator Winter 2026 batch.

Competitors

  • Wafer-Scale / Custom ASIC Inference: Cerebras Systems, Groq, SambaNova Systems.
  • GPU Cloud Inference: Together AI, Fireworks AI, Anyscale.
  • Optical/Photonic AI Hardware: Lightmatter, Luminous Computing, Ayar Labs (optical interconnects), Celestial AI.
  • Hyperscaler Inference: AWS Inferentia, Google TPU, Azure Maia.