VectoriaDB is a production-ready in-memory vector database built on transformers.js. Use it to surface the right tool, prompt, or document snippet from natural-language queries without shipping data to an external service.
VectoriaDB runs entirely offline - your data never leaves the server, and you avoid API quotas or rate limits.

Features

Offline Embeddings

Embeddings run locally via transformers.js, so your data never leaves the server and you avoid API quotas.

Type-safe Metadata

Strong generics ensure every document you index keeps the same shape as your metadata interface.

Operational Guardrails

Built-in rate limits, batch validation, HNSW indexing, and storage adapters keep the index production ready.
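VectoriaDB's exact guardrail configuration isn't covered here, but the rate-limiting idea can be sketched as a token bucket. The `TokenBucket` class below is a hypothetical illustration of the pattern, not part of VectoriaDB's API:

```typescript
// Minimal token-bucket rate limiter: each operation consumes one token,
// and tokens refill at a fixed rate up to a maximum burst size.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private ratePerSec: number, private burst: number) {
    this.tokens = burst;
    this.lastRefill = Date.now();
  }

  tryAcquire(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at the burst size.
    this.tokens = Math.min(
      this.burst,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Allow a burst of 2 operations, refilling 1 token per second.
const limiter = new TokenBucket(1, 2);
const attempts = [limiter.tryAcquire(), limiter.tryAcquire(), limiter.tryAcquire()];
// attempts: [true, true, false] — the third call exceeds the burst
```

A guardrail like this rejects excess indexing or query calls immediately instead of letting them queue up and exhaust memory.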

When to Use VectoriaDB

  • Tool discovery - Surface the right tool from natural-language queries
  • Document search - Semantic search over documents, prompts, or code snippets
  • Recommendation systems - Find similar items based on text embeddings
  • Offline-first applications - No external API dependencies
The default Xenova all-MiniLM-L6-v2 model is ~22 MB. The first initialization downloads and caches it under cacheDir; subsequent boots reuse the local copy.

Core Concepts

Documents

Each document has:
  • id - Unique identifier
  • text - Natural language text to embed
  • metadata - Type-safe custom data
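As a sketch of how these three fields fit together (the interface names here are illustrative, not VectoriaDB's actual exports), a typed document might look like this:

```typescript
// Illustrative document shape: the generic parameter M ties every
// document's metadata to a single interface, so indexing a document
// with the wrong metadata shape fails at compile time.
interface VectorDocument<M> {
  id: string;    // unique identifier
  text: string;  // natural-language text to embed
  metadata: M;   // type-safe custom data
}

// Example metadata interface for a tool-discovery index.
interface ToolMetadata {
  category: string;
  version: number;
}

const doc: VectorDocument<ToolMetadata> = {
  id: "tool-web-search",
  text: "Search the web for current information",
  metadata: { category: "retrieval", version: 1 },
};
```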

Embeddings

VectoriaDB generates embeddings locally using transformers.js. The default model, all-MiniLM-L6-v2, provides good quality with fast inference.

Search

Search returns documents ranked by cosine similarity to your query. You can filter results by metadata and set a minimum similarity threshold.
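Cosine similarity is the dot product of two vectors divided by the product of their magnitudes. This standalone sketch (not VectoriaDB's internals) shows how ranking and a minimum-similarity threshold work:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate embeddings against a query, keeping scores above a threshold.
const query = [1, 0, 0];
const candidates: Record<string, number[]> = {
  exact: [2, 0, 0],      // same direction as the query: similarity 1
  diagonal: [1, 1, 0],   // 45 degrees off: similarity ~0.707
  orthogonal: [0, 1, 0], // perpendicular: similarity 0
};

const ranked = Object.entries(candidates)
  .map(([id, vec]) => ({ id, score: cosineSimilarity(query, vec) }))
  .filter((r) => r.score >= 0.5) // minimum similarity threshold
  .sort((a, b) => b.score - a.score);
// ranked: "exact" (score 1) then "diagonal" (score ~0.707); "orthogonal" is filtered out
```

Because cosine similarity ignores vector length, a document embedding scaled by any positive factor ranks identically, which is why it suits embedding comparison better than raw Euclidean distance.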

Next Steps

Installation

Install VectoriaDB in your project

Quickstart

Build your first semantic search

Tool Discovery

Complete tool discovery guide

API Reference

Explore the full API