Database Setup

Memory Service supports multiple database backends. This guide covers setup and configuration for each option.

PostgreSQL

PostgreSQL with pgvector is the recommended setup for most deployments.

Installation

Using Docker:

docker run -d \
  --name postgres \
  -e POSTGRES_DB=memoryservice \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  pgvector/pgvector:pg18

Configuration

MEMORY_SERVICE_DB_KIND=postgres
MEMORY_SERVICE_DB_URL=postgres://postgres:postgres@localhost:5432/memoryservice?sslmode=disable
MEMORY_SERVICE_DB_MIGRATE_AT_START=true

# Connection pool
MEMORY_SERVICE_DB_MAX_OPEN_CONNS=20
MEMORY_SERVICE_DB_MAX_IDLE_CONNS=5
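The connection URL follows the standard postgres:// form. A quick sketch of its anatomy using Python's standard library, applied to the example URL above (illustration only, not Memory Service code):

```python
from urllib.parse import parse_qs, urlparse

url = "postgres://postgres:postgres@localhost:5432/memoryservice?sslmode=disable"
parts = urlparse(url)

print(parts.username, parts.password)  # credentials: postgres postgres
print(parts.hostname, parts.port)      # server: localhost 5432
print(parts.path.lstrip("/"))          # database: memoryservice
print(parse_qs(parts.query))           # options: {'sslmode': ['disable']}
```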

pgvector Extension

Enable the extension in the memoryservice database for semantic search:

CREATE EXTENSION IF NOT EXISTS vector;

Configure in Memory Service:

MEMORY_SERVICE_VECTOR_KIND=pgvector

Index Types

Type     Description                          Best For
ivfflat  Inverted file with flat storage      General use
hnsw     Hierarchical navigable small world   High recall

-- Create HNSW index for better performance
CREATE INDEX ON entries USING hnsw (embedding vector_cosine_ops);
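Both index types accelerate nearest-neighbour search under a distance operator; vector_cosine_ops orders results by cosine distance (1 minus cosine similarity). A small pure-Python sketch of that metric, with made-up vectors for illustration:

```python
import math

def cosine_distance(a, b):
    """Cosine distance as pgvector's vector_cosine_ops computes it: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

query = [0.1, 0.9, 0.0]
doc_a = [0.1, 0.9, 0.0]  # same direction as the query
doc_b = [0.9, 0.1, 0.0]  # different direction

print(cosine_distance(query, doc_a))  # ~0.0  (closest match)
print(cosine_distance(query, doc_b))  # ~0.78 (ranked further away)
```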

MongoDB

MongoDB is supported for teams already using MongoDB infrastructure.

Installation

docker run -d \
  --name mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  -p 27017:27017 \
  mongo:7

Configuration

MEMORY_SERVICE_DB_KIND=mongo
MEMORY_SERVICE_DB_URL=mongodb://admin:password@localhost:27017/memoryservice

SQLite

SQLite is the lightest-weight backend and works well for local development, demos, and single-node deployments.

Configuration

MEMORY_SERVICE_DB_KIND=sqlite
MEMORY_SERVICE_DB_URL=/tmp/memory-service.sqlite
MEMORY_SERVICE_ATTACHMENTS_KIND=fs

# Optional override. When omitted, memory-service uses
# /tmp/memory-service.sqlite.attachments
MEMORY_SERVICE_ATTACHMENTS_FS_DIR=/tmp/memory-service-attachments
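The attachments-directory default follows from the database path, and the database itself is just a file that SQLite creates on first open. A short sketch of both points using Python's bundled sqlite3 module, with a temporary directory standing in for /tmp:

```python
import sqlite3
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    db_path = Path(tmp) / "memory-service.sqlite"

    # Default attachments location: the database path plus ".attachments".
    default_attachments_dir = Path(str(db_path) + ".attachments")
    print(default_attachments_dir.name)  # memory-service.sqlite.attachments

    # Connecting creates the database file on demand -- no server process needed.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS smoke (id INTEGER PRIMARY KEY)")
    conn.close()
    print(db_path.exists())  # True
```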

SQLite semantic search uses sqlite-vec inside the same database file:

MEMORY_SERVICE_VECTOR_KIND=sqlite
MEMORY_SERVICE_EMBEDDING_KIND=local

This enables semantic search for both conversations and episodic memories.

Embedding Configuration

Configure the embedding model for vector generation:

OpenAI

MEMORY_SERVICE_EMBEDDING_KIND=openai
MEMORY_SERVICE_EMBEDDING_OPENAI_MODEL_NAME=text-embedding-ada-002
MEMORY_SERVICE_EMBEDDING_OPENAI_API_KEY=${OPENAI_API_KEY}
MEMORY_SERVICE_EMBEDDING_OPENAI_DIMENSIONS=1536

Azure OpenAI

Use the OpenAI provider with the Azure base URL:

MEMORY_SERVICE_EMBEDDING_KIND=openai
MEMORY_SERVICE_EMBEDDING_OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/text-embedding-ada-002
MEMORY_SERVICE_EMBEDDING_OPENAI_API_KEY=${AZURE_OPENAI_API_KEY}

Local Model

The built-in local provider uses all-MiniLM-L6-v2 (384 dimensions) with no external API calls:

MEMORY_SERVICE_EMBEDDING_KIND=local
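Embedding dimensions must match between the model and the vector store: the local model produces 384-dimensional vectors, while text-embedding-ada-002 produces 1536. A defensive sketch of the kind of check this implies (the function and table are illustrative, not Memory Service APIs):

```python
# Dimensions per provider, as documented above.
EXPECTED_DIMENSIONS = {
    "local": 384,    # all-MiniLM-L6-v2
    "openai": 1536,  # text-embedding-ada-002
}

def check_embedding(kind: str, vector: list[float]) -> list[float]:
    """Reject vectors whose dimensionality doesn't match the configured provider."""
    expected = EXPECTED_DIMENSIONS[kind]
    if len(vector) != expected:
        raise ValueError(f"{kind} embeddings must have {expected} dimensions, got {len(vector)}")
    return vector

check_embedding("local", [0.0] * 384)     # passes
# check_embedding("local", [0.0] * 1536)  # would raise ValueError
```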

Performance Tuning

Connection Pooling

# PostgreSQL / MongoDB
MEMORY_SERVICE_DB_MAX_OPEN_CONNS=50
MEMORY_SERVICE_DB_MAX_IDLE_CONNS=10

Batch Operations

# Background vector indexer batch size
MEMORY_SERVICE_VECTOR_INDEXER_BATCH_SIZE=100
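The indexer drains pending entries in groups of this size. The batching itself amounts to the familiar fixed-size chunking pattern (a generic sketch, not the actual indexer code):

```python
from itertools import islice
from typing import Iterable, Iterator

def batches(items: Iterable, size: int) -> Iterator[list]:
    """Yield successive fixed-size batches from any iterable."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

# 250 pending entries with a batch size of 100 -> three indexing passes.
pending = range(250)
sizes = [len(batch) for batch in batches(pending, 100)]
print(sizes)  # [100, 100, 50]
```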

Backup and Recovery

PostgreSQL

# Backup
pg_dump -h localhost -U postgres memoryservice > backup.sql

# Restore
psql -h localhost -U postgres memoryservice < backup.sql

MongoDB

# Backup (credentials match the container from Installation)
mongodump --uri="mongodb://admin:password@localhost:27017" --db=memoryservice --out=backup/

# Restore
mongorestore --uri="mongodb://admin:password@localhost:27017" backup/

Next Steps