Get VectoriaDB set up in your project.

Requirements

  • Node.js 18+ (Node.js 22 recommended)
  • npm, pnpm, or yarn

Install

npm install vectoriadb

Peer Dependencies

VectoriaDB uses @huggingface/transformers for embedding generation. Install it if you’re using semantic search:
npm install @huggingface/transformers
If you only need TF-IDF keyword search (TFIDFVectoria), you can skip installing @huggingface/transformers.
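For a keyword-only setup, no peer dependency is needed at all. A minimal sketch, assuming `TFIDFVectoria` is a named export of the package root (check your installed version's exports):

```typescript
// Keyword search only: no @huggingface/transformers required.
// Assumes TFIDFVectoria is a named export of 'vectoriadb'.
import { TFIDFVectoria } from 'vectoriadb';

const db = new TFIDFVectoria();
```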

Verify Installation

src/verify.ts
import { VectoriaDB } from 'vectoriadb';

// Constructing the client is enough to confirm the install;
// the embedding model is only downloaded when initialize() is called.
const db = new VectoriaDB();
console.log('VectoriaDB installed successfully!');

Model Download

On first initialization, VectoriaDB downloads the embedding model (~22 MB):
src/initialize.ts
import { VectoriaDB } from 'vectoriadb';

const db = new VectoriaDB({
  cacheDir: './.cache/transformers', // Model cache location
});

await db.initialize(); // Downloads model on first run
The model is cached locally and reused on subsequent runs.

Pre-download Model

For production deployments, pre-download the model during build:
# Pre-download during Docker build
node -e "import('@huggingface/transformers').then(({ pipeline }) => pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2'))"
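In a Dockerfile, this can run as a build step so the image ships with the model already cached. A sketch under assumptions: the app lives in `/app`, and the cache location used at build time matches the one VectoriaDB reads at runtime (verify this against your `cacheDir` setting):

```dockerfile
FROM node:22-slim
WORKDIR /app

# Install dependencies first so this layer is cached across builds
COPY package*.json ./
RUN npm ci

# Bake the embedding model into the image; first startup skips the download
RUN node -e "import('@huggingface/transformers').then(({ pipeline }) => pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2'))"

COPY . .
CMD ["node", "dist/index.js"]
```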

TypeScript Configuration

VectoriaDB is written in TypeScript and includes type definitions. No additional setup required.
tsconfig.json
{
  "compilerOptions": {
    "moduleResolution": "node",
    "esModuleInterop": true
  }
}

Next Steps

  • Quickstart: Build your first semantic search
  • Configuration: Explore configuration options