
Technology | Photonic Computing | YC W26 | Valuation: Undisclosed

Last Updated: March 24, 2026

Piris Labs builds a vertically integrated AI inference platform that combines proprietary photonic (optical) interconnects with a custom software stack to deliver ultra-fast, scalable, and cost-effective inference for trillion-parameter AI models.
Piris Labs has signaled plans for modular photonic interconnect hardware, a vertically optimized inference software stack, and partnerships with chip makers and ODMs to integrate optical data movement into existing data-center architectures. Its public positioning claims 5x lower latency, 10x lower power per bit, and 2x lower cost per token versus GPU clusters. Milestones to date: a working π Conversion Engine prototype (October 2025), an SBIR government partnership, and an early-access waitlist.
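The headline multipliers are straightforward to translate into absolute figures. The baseline GPU-cluster numbers below are illustrative placeholders, not company data; only the 5x/10x/2x ratios come from the public positioning.

```python
# Back-of-envelope translation of the claimed multipliers.
# Baseline GPU-cluster figures are hypothetical placeholders.
baseline = {
    "latency_ms": 50.0,        # assumed per-token latency on a GPU cluster
    "power_pj_per_bit": 10.0,  # assumed interconnect energy per bit
    "cost_per_1m_tokens": 2.0, # assumed dollars per million tokens
}

# Claimed improvements: 5x latency, 10x power per bit, 2x cost per token.
claimed = {
    "latency_ms": baseline["latency_ms"] / 5,
    "power_pj_per_bit": baseline["power_pj_per_bit"] / 10,
    "cost_per_1m_tokens": baseline["cost_per_1m_tokens"] / 2,
}

for key, value in claimed.items():
    print(f"{key}: {baseline[key]} -> {value}")
```

Under these assumed baselines, the claims imply 10 ms per token, 1 pJ per bit, and $1 per million tokens.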
Conference appearances at photonics and AI hardware events suggest ongoing academic collaborations (MIT RLE, Stanford LINQS). Hiring patterns indicate a focus on photonics engineers and systems software developers. The advisory relationship with ex-Groq/NVIDIA President Mohsen Moazami signals potential enterprise go-to-market strategy development. Prof. Marc Baldo (MIT RLE Director) as advisor provides academic credibility. Stealth posture suggests they are protecting IP and timing a larger reveal around a working hardware demo.
Photonic interconnect-optimized inference engine that minimizes data movement bottlenecks to deliver ultra-low-latency token generation for trillion-parameter LLMs.
They built a special light-based data highway inside AI chips so massive AI models can think faster while using way less electricity.
It's like replacing a congested highway of delivery trucks (electrical wires) with a network of teleportation portals (light beams) so packages (data) arrive instantly without burning any gas.
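The data-movement claim can be framed with a simple latency model: per-token time splits into a compute term and an interconnect-transfer term, and for trillion-parameter models sharded across many devices the transfer term can dominate. A minimal sketch of that model follows; all bandwidths and sizes are illustrative assumptions, not Piris Labs specifications.

```python
def token_latency_ms(compute_ms, bytes_moved, link_gbps):
    """Per-token latency = compute time + time to move activations
    between devices over the interconnect (illustrative model)."""
    transfer_ms = (bytes_moved * 8) / (link_gbps * 1e9) * 1e3
    return compute_ms + transfer_ms

# Hypothetical numbers: 2 ms of compute, 100 MB of activations moved per token.
electrical = token_latency_ms(2.0, 100e6, link_gbps=400)   # assumed electrical link
photonic = token_latency_ms(2.0, 100e6, link_gbps=3200)    # assumed optical link

print(f"electrical: {electrical:.2f} ms, photonic: {photonic:.2f} ms")
```

With these made-up numbers the transfer term shrinks from 2 ms to 0.25 ms, which is the sense in which faster links translate directly into lower token latency once compute is no longer the bottleneck.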
Modular, horizontally scalable inference architecture that enables seamless scaling from single-node to multi-rack deployments without degradation in per-token latency or throughput.
They designed their AI system like stackable LEGO blocks so you can keep adding more power without everything slowing down or getting expensive.
It's like having a restaurant kitchen where adding more chefs actually makes every dish come out faster instead of causing everyone to bump into each other and slow down.
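The "no degradation" claim amounts to near-linear scaling efficiency. A toy comparison makes the difference concrete: ideal scaling versus a cluster that loses a fixed fraction of efficiency each time it doubles in size. The per-node throughput and loss factor are illustrative assumptions.

```python
import math

def throughput(nodes, per_node_tps, efficiency_per_doubling=1.0):
    """Aggregate tokens/sec; an efficiency factor below 1.0 models
    congestion losses each time the cluster doubles (illustrative)."""
    doublings = math.log2(nodes)
    return nodes * per_node_tps * (efficiency_per_doubling ** doublings)

for n in (1, 4, 16, 64):
    ideal = throughput(n, 1000)                                  # claimed: linear
    congested = throughput(n, 1000, efficiency_per_doubling=0.85)
    print(n, round(ideal), round(congested))
```

At 64 nodes, ideal scaling yields 64,000 tokens/sec while the congested cluster delivers roughly 38% of that, which is why interconnect contention, not raw chip count, usually caps cluster throughput.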
Hardware-software co-designed model optimization pipeline that automatically adapts large AI models for photonic inference, maximizing throughput and minimizing latency without manual model surgery.
They built a smart translator that automatically reshapes any AI brain to run perfectly on their light-speed hardware without anyone needing to manually tinker with it.
It's like a universal travel adapter that automatically reshapes itself to fit any country's power outlet so your devices just work wherever you go.
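No details of the pipeline are public. As a generic illustration only, automatic model adaptation is commonly structured as a sequence of transformation passes over a model description; every name below (the passes, the spec fields, the link count) is hypothetical and not drawn from Piris Labs materials.

```python
from dataclasses import dataclass, field

@dataclass
class ModelSpec:
    """Toy stand-in for a model description being adapted."""
    layers: int
    precision: str = "fp16"
    shards: int = 1
    notes: list = field(default_factory=list)

def quantize(spec):
    """Hypothetical pass: lower numerical precision."""
    spec.precision = "int8"
    spec.notes.append("quantized to int8")
    return spec

def shard_for_links(spec, link_count):
    """Hypothetical pass: partition the model across interconnect links."""
    spec.shards = link_count
    spec.notes.append(f"sharded across {link_count} links")
    return spec

def optimize(spec, passes):
    """Run each transformation pass in order, no manual model surgery."""
    for p in passes:
        spec = p(spec)
    return spec

model = optimize(ModelSpec(layers=96),
                 [quantize, lambda s: shard_for_links(s, 8)])
print(model.precision, model.shards, model.notes)
```

The pass-pipeline pattern is what lets such a system claim "automatic" adaptation: adding hardware-specific support means adding a pass, not hand-editing each model.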
Ali Khalatpour holds multiple MIT degrees (BSc EE/Math, MASc, MSc Physics, PhD EE) and is a Harvard- and Stanford-trained optical scientist who developed the first room-temperature terahertz semiconductor laser and led the development of GUSTO's optical engine for NASA; he brings 10+ years of directly relevant optical hardware R&D. Keyvan Rezaei Moghadam holds a USC PhD in EE, spent 5 years at Meta as Technical Lead/Engineering Manager and 2 years as Tech Lead at X (Twitter), and has experience building 0-to-1 AI infrastructure. Advisors: Prof. Marc Baldo (MIT RLE Director) and Mohsen Moazami (ex-Groq/NVIDIA President).