Why teams would choose LithicDB
Most vector databases implicitly optimize for the fully in-memory case. LithicDB starts from a different constraint: many retrieval systems want predictable single-node cost, can tolerate approximate search plus reranking, and care more about operational simplicity than absolute leaderboard recall.
- It lowers memory pressure by keeping the full payload on disk.
- It still supports approximate nearest neighbor search backed by a real routing structure, not a linear scan.
- It supports online inserts, deletes, filtering, and recovery rather than being a static benchmark toy.
- It can be benchmarked against brute-force cosine similarity over the exact same collection state, giving a ground-truth recall baseline.
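The last point deserves a concrete shape: because exact cosine over the same vectors is the ground truth, recall of the approximate index is directly measurable. A minimal sketch of that measurement, using NumPy; the names here (`cosine_topk`, `recall_at_k`) are illustrative helpers, not LithicDB's actual API:

```python
import numpy as np

def cosine_topk(queries: np.ndarray, corpus: np.ndarray, k: int) -> np.ndarray:
    """Exact top-k neighbor ids by cosine similarity (ground truth)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = q @ c.T                          # (n_queries, n_vectors)
    return np.argsort(-sims, axis=1)[:, :k]

def recall_at_k(exact_ids: np.ndarray, approx_ids: np.ndarray) -> float:
    """Fraction of ground-truth neighbors the approximate result recovered."""
    hits = [len(set(e) & set(a)) for e, a in zip(exact_ids, approx_ids)]
    return sum(hits) / exact_ids.size

# Toy collection state; in practice both sides would read the same snapshot.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64)).astype(np.float32)
queries = rng.normal(size=(10, 64)).astype(np.float32)

truth = cosine_topk(queries, corpus, k=10)
# An index that returns the exact neighbors scores recall@10 == 1.0;
# an approximate index is graded against `truth` the same way.
print(recall_at_k(truth, truth))
```

The key operational detail is that ground truth and the approximate query run against the same collection state, so recall numbers reflect the index, not drift between snapshots.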
Product thesis
Who it is for: teams building RAG, semantic search, and recommendation systems that want a credible single-node vector engine they can own and extend.
Why they would choose it: lower memory footprint, transparent architecture, and an operational model that is easier to reason about than a large distributed system.
Tradeoffs: LithicDB accepts that it will not beat mature all-memory HNSW systems on every recall/latency frontier. Its advantage is cost shape, control, and extensibility.