Gives the 98% of schools without a library system an AI-powered cataloging and search platform, using computer vision for bulk book scanning, LLM-powered RAG for library Q&A, and knowledge-graph entity resolution for literary metadata.

Library Management | YC W26

Last Updated: March 19, 2026

Builds an AI-powered data and intelligence layer for literature, automating library cataloging, inventory, circulation, and analytics using computer vision, NLP, and LLMs, starting with the 98% of schools that lack a proper library system.
A mobile-first ILS with camera-based bulk scanning; the Librar Intelligence AI assistant for admin automation and natural-language Q&A; OSSUS, a self-healing data infrastructure that produces agent-ready structured data; GDPR, EU AI Act, and ISO 27001 compliance; and claimed reductions of 99% in setup time and 92% in inventory time.
Investment in knowledge graph construction and entity resolution signals a platform play with third-party API access. YC participation and advisor ties to OpenAI, Depict, Kahoot, and Google Maps founders suggest upcoming integrations with LMS and publisher platforms. Team composition hints at advanced RAG pipelines and expansion into public/academic libraries and data-as-a-service for publishing.
Camera-based bulk scanning that lets librarians photograph an entire shelf and instantly catalog or inventory every book using computer vision and metadata enrichment.
Point your phone at a bookshelf and the AI instantly knows every book, its author, edition, and condition—no barcode scanner needed.
Librar Labs uses fine-tuned object detection and OCR models to identify individual book spines from a single shelf photograph. Once spines are detected and text extracted, the system runs entity resolution against authoritative bibliographic databases (OCLC, ISBN registries, OpenLibrary) to pull rich metadata including title, author, edition, subject classification, cover art, and reading level. A confidence-scoring pipeline flags ambiguous matches for human review while auto-cataloging high-confidence results. For inventory, the system compares scanned shelf state against the existing digital catalog to surface missing, misplaced, or newly added titles in real time. This eliminates barcode scanners, RFID infrastructure, and manual data entry. The model continuously improves through active learning as librarians confirm or correct edge cases, building a proprietary training dataset of spine imagery across languages and publishers that competitors cannot easily replicate.
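The confidence-scoring step described above can be sketched in miniature. This is an illustrative stand-in, not Librar Labs' implementation: the catalog dictionary, titles, and threshold are hypothetical, and simple string similarity (`difflib`) stands in for the production matching against OCLC/ISBN/OpenLibrary records.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Match:
    isbn: str
    title: str
    confidence: float
    needs_review: bool  # flagged for human review when confidence is low


# Hypothetical mini-catalog standing in for OCLC / ISBN registry / OpenLibrary lookups.
CANDIDATES = {
    "9780261103573": "The Fellowship of the Ring",
    "9780439708180": "Harry Potter and the Sorcerer's Stone",
    "9780064400558": "Charlotte's Web",
}


def resolve_spine(ocr_text: str, threshold: float = 0.8) -> Match:
    """Score OCR'd spine text against candidate titles; flag ambiguous matches."""
    def score(title: str) -> float:
        return SequenceMatcher(None, ocr_text.lower(), title.lower()).ratio()

    # Pick the best-scoring candidate; below-threshold matches go to review
    # instead of being auto-cataloged.
    isbn, title = max(CANDIDATES.items(), key=lambda kv: score(kv[1]))
    conf = score(title)
    return Match(isbn, title, conf, needs_review=conf < threshold)
```

Clean OCR text auto-catalogs with high confidence, while noisy spine text (missing words, dropped letters) falls below the threshold and is routed to a librarian, mirroring the human-in-the-loop flow described above.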
It's like Shazam for bookshelves—snap a photo and the AI instantly tells you everything about every book it sees.
An LLM-powered AI assistant (Librar Intelligence) that provides natural language Q&A, personalized book recommendations, collection gap analysis, and automated administrative workflows for librarians and readers.
Instead of searching a clunky catalog, you just ask the AI "What's a good adventure book for a reluctant 10-year-old reader?" and it gives you a perfect, personalized answer.
Librar Intelligence is a retrieval-augmented generation (RAG) system combining large language models with a vector database of structured literary metadata, circulation history, and reading-level data to deliver context-aware responses to natural language queries. Librarians can ask questions like "Which genres are underrepresented in our Grade 4 collection?" or "Suggest 20 diverse titles for our summer reading program under $200 budget," and the system synthesizes answers by querying the knowledge graph, analyzing circulation patterns, and cross-referencing publisher catalogs. For students, the system offers conversational book discovery using a hybrid collaborative and content-based filtering approach. Administrative automation includes generating overdue notices, compiling usage reports, and drafting grant applications with embedded collection analytics. The entire system is grounded in the OSSUS data layer, ensuring hallucination-resistant responses by constraining LLM outputs to verified bibliographic facts.
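The grounding pattern described above (retrieve verified records first, then constrain generation to them) can be sketched as follows. Everything here is a hedged assumption: the `Book` schema, toy catalog, and keyword filtering stand in for the real vector database and LLM; the point is only that the answer is built exclusively from retrieved, verified facts, and the system refuses rather than invents when retrieval comes back empty.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Book:
    title: str
    author: str
    genre: str
    reading_level: int  # rough grade level


# Toy catalog standing in for the OSSUS-backed metadata store.
CATALOG = [
    Book("Hatchet", "Gary Paulsen", "adventure", 5),
    Book("Holes", "Louis Sachar", "adventure", 5),
    Book("Charlotte's Web", "E.B. White", "animal", 3),
]


def retrieve(genre: str, max_level: int, k: int = 2) -> list[Book]:
    """Retrieval step: only verified records pass through to generation."""
    hits = [b for b in CATALOG if b.genre == genre and b.reading_level <= max_level]
    return hits[:k]


def grounded_answer(genre: str, max_level: int) -> str:
    """'Generation' constrained to retrieved facts; refuses on empty retrieval
    rather than hallucinating a title."""
    hits = retrieve(genre, max_level)
    if not hits:
        return "No matching titles in the catalog."
    return "; ".join(f"{b.title} by {b.author}" for b in hits)
```

In the production system a real LLM would phrase the final answer, but the retrieval constraint is what makes the response "hallucination-resistant": the model can only recommend titles the knowledge graph actually contains.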
It's like having a brilliant librarian who has read every book in the world, remembers every student's taste, and never takes a sick day.
OSSUS, a self-healing data backend that continuously unifies, deduplicates, and enriches fragmented literary metadata from heterogeneous sources into a single structured, agent-ready knowledge graph.
The system automatically finds and fixes messy, duplicate, or conflicting book records from dozens of sources so every AI feature built on top can trust the data completely.
OSSUS is Librar Labs' proprietary self-healing data infrastructure and arguably the company's deepest technical moat. Library metadata is notoriously fragmented: the same book may appear with different ISBNs across editions, inconsistent author name formats (e.g., "J.K. Rowling" vs. "Joanne Rowling" vs. "Robert Galbraith"), conflicting subject classifications (Dewey vs. Library of Congress vs. BISAC), and incomplete records from legacy systems. OSSUS ingests data from camera scans, manual entries, MARC record imports, and external APIs, then runs a multi-stage entity resolution pipeline using probabilistic matching, embedding similarity, and rule-based heuristics to merge duplicates, resolve conflicts, and fill missing fields. A continuous anomaly detection layer monitors the knowledge graph for drift, corruption, or inconsistency—automatically triggering correction workflows or flagging issues for human review. The self-healing aspect means resolution accuracy improves over time as it encounters more edge cases across diverse collections worldwide. This clean unified data layer makes every downstream ML application reliable and positions Librar Labs to offer structured literary data as an API service to publishers, researchers, and edtech platforms.
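The author-name problem above (recognizing "Tolkien, J.R.R." and "JRR Tolkien" as one entity) illustrates the kind of rule-plus-similarity matching an entity resolution pipeline performs. This is a minimal sketch under assumed rules, not OSSUS itself: name flipping and punctuation stripping are the rule-based stage, and plain string similarity stands in for the embedding-similarity and probabilistic stages.

```python
import re
from difflib import SequenceMatcher


def normalize_author(name: str) -> str:
    """Rule-based stage: flip 'Last, First' order, collapse punctuation
    and whitespace, lowercase."""
    if "," in name:
        last, first = name.split(",", 1)
        name = f"{first} {last}"
    # Treat dots and runs of whitespace as a single separator.
    return re.sub(r"[.\s]+", " ", name).strip().lower()


def same_author(a: str, b: str, threshold: float = 0.85) -> bool:
    """Probabilistic stage (stand-in for embedding similarity): compare the
    normalized forms and accept matches above a tuned threshold."""
    return SequenceMatcher(None, normalize_author(a), normalize_author(b)).ratio() >= threshold
```

In a full pipeline, pairs that clear the threshold would be merged into one knowledge-graph node, borderline pairs would be queued for human review, and confirmed decisions would feed back into the matcher, which is the "self-healing" loop described above.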
It's like autocorrect for the world's messiest card catalog—constantly scanning millions of records, spotting errors, and making sure "Tolkien, J.R.R." and "JRR Tolkien" are recognized as the same legendary author.
First-mover in AI-native school library infrastructure, combined with the OSSUS self-healing data backend that turns fragmented literary metadata into clean, structured, agent-ready data, a moat that deepens with every book scanned and library onboarded.