AI coordination layer for chip design that detects drift, proposes fixes, and surfaces tapeout risk.
It combines real-time drift detection across specs and RTL, generative fix synthesis for design issues, and predictive risk analytics for tapeout readiness.

Technology | Semiconductor | YC W26

Last Updated: March 20, 2026

Builds an AI-powered coordination layer for chip design that uses autonomous agents to detect design drift, auto-triage issues, propose verified fixes, and surface tapeout-risk insights to executives.
Visibl has publicly described a four-layer product: (1) continuous monitoring of specs, RTL, and CI for design drift, (2) automated case generation with supporting evidence, (3) AI-proposed and verified fix diffs packaged for human review, and (4) executive dashboards showing readiness, schedule impact, and tapeout risk. Their website positions the platform as an integration layer that sits atop existing EDA and CI toolchains rather than replacing them.
Job postings and GitHub activity remain minimal, consistent with a two-person founding team still in stealth-to-launch mode inside YC W26. The CTO's Google Scholar profile and prior work at Arm and Intel on ML-for-hardware suggest the team is building proprietary models trained on RTL and verification corpora. Conference and community signals hint at early design-partner conversations with fabless semiconductor companies frustrated by late-stage respins. The absence of any open-source repos or patent filings indicates a fully proprietary, closed-source commercialization strategy. Expansion into automated verification sign-off and physical-design optimization is a logical next step given the founders' backgrounds.
AI agents continuously monitor specifications, RTL code, and CI results to detect design drift before it causes costly late-stage chip respins.
Think of it as a spell-checker that constantly compares what the chip blueprint says versus what the engineers actually built, and flags every mismatch instantly.
Visibl's drift-detection system ingests heterogeneous design artifacts—natural-language specifications, RTL source code, verification test results, CI/CD logs, and issue-tracker tickets—and builds a continuously updated knowledge graph of intended versus implemented behavior. Supervised ML classifiers, trained on historical design-change and bug-report data, score each detected divergence by severity and blast radius. NLP models parse and cross-reference requirement documents against code comments and register maps to surface semantic mismatches that simple diff tools miss. When a divergence exceeds a configurable confidence threshold, the system automatically opens a case, attaches the relevant evidence chain (spec clause, code snippet, failing test), and routes it to the responsible engineer. This closes the feedback loop that traditionally relies on periodic manual reviews, dramatically compressing the window in which silent drift can accumulate into a multi-million-dollar respin.
It's like having a GPS that yells "recalculating!" the moment your chip design takes a wrong turn, instead of letting you drive 200 miles in the wrong direction.
AI agents propose code-level fixes for detected design issues and formally verify them, packaging review-ready diffs for human approval.
It's like an auto-mechanic robot that not only tells you what's wrong with your car but also hands you the exact replacement part, pre-tested and ready to install.
When the drift-detection layer opens a case, Visibl's fix-proposal engine takes over. A code-generation model, fine-tuned on RTL (Verilog/SystemVerilog) corpora and the customer's own design history, synthesizes candidate patches that realign implementation with the original specification intent. Each candidate patch is then run through an automated verification harness—leveraging existing testbenches and formal property checks—to confirm functional correctness and regression safety. Patches that pass are packaged as review-ready diffs with inline annotations explaining the rationale, linked evidence from the original case, and verification coverage reports. Engineers review and approve with a single click, collapsing what traditionally involves hours of root-cause analysis, manual coding, and re-verification into a streamlined approval workflow. The system learns from accepted and rejected patches over time, continuously improving suggestion quality through reinforcement learning from human feedback (RLHF).
It's like autocorrect for chip design—except it actually runs the spell-check, grammar-check, and fact-check before suggesting the fix.
ML-powered dashboards aggregate signals across the entire design organization to give executives real-time tapeout readiness scores and schedule-impact predictions.
It's a weather forecast for your chip project—instead of guessing if tapeout will be sunny or stormy, executives get a data-driven probability with a five-day outlook.
Visibl's executive dashboard consumes the full telemetry stream generated by its monitoring and triage layers—open case counts, drift severity distributions, fix acceptance rates, verification coverage trends, and CI pass/fail trajectories—and feeds them into a predictive analytics engine. Time-series forecasting models project remaining work against historical velocity curves to estimate schedule-slip probability. A composite "tapeout readiness" score, computed via a gradient-boosted ensemble model, distills dozens of signals into a single actionable metric that updates in real time. Role-specific views let VPs of Engineering drill into block-level risk heat maps while C-suite leaders see portfolio-level summaries with financial impact estimates for delay scenarios. Anomaly-detection models flag sudden regressions—such as a spike in open cases in a critical IP block—and trigger proactive alerts before they cascade. The result is a shift from gut-feel milestone reviews to continuous, evidence-based program management.
It's like replacing the "Are we going to make it?" gut check at every staff meeting with an actual scoreboard that updates in real time.
The CTO built silicon at Intel, Arm, and Microsoft and publishes ML-for-hardware research, while the CEO operationalized enterprise AI at Deloitte's OmniaAI practice; together they understand both the chip-design pain and the AI solution space from the inside.