# Storage Adapters

Brainy supports multiple storage backends. Use the filesystem for development, OPFS for browsers, or cloud storage for production.
## Available Adapters
| Adapter | Environment | Use Case |
|---|---|---|
| FileSystem | Node.js | Development, single-server production |
| OPFS | Browser | Client-side apps, PWAs |
| Memory | Any | Testing, ephemeral workloads |
| S3 | Node.js | AWS production, scalable |
| R2 | Node.js | Cloudflare Workers, edge |
| GCS | Node.js | Google Cloud production |
| Azure Blob | Node.js | Azure production |
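A common pattern is to pick the storage config from the deployment environment. This is an illustrative sketch, not part of the Brainy API: the `StorageConfig` union and `storageFor` helper are hypothetical names, and the config shapes mirror the examples in the sections below.

```typescript
// Illustrative helper: choose a storage config per deployment environment.
// The config object shapes follow the adapter examples on this page.
type StorageConfig =
  | { type: 'memory' }
  | { type: 'filesystem'; path: string }
  | { type: 's3'; bucket: string; region: string }

function storageFor(env: string): StorageConfig {
  switch (env) {
    case 'test':
      return { type: 'memory' } // ephemeral, fast cleanup
    case 'production':
      return { type: 's3', bucket: 'my-brainy-bucket', region: 'us-east-1' }
    default:
      return { type: 'filesystem', path: './brainy-data' } // local dev
  }
}
```

You could then construct the instance with something like `new Brainy({ storage: storageFor(process.env.NODE_ENV ?? 'development') })`.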
## Auto-Detection

By default, Brainy auto-detects the best storage for your environment:

```typescript
import { Brainy } from '@soulcraft/brainy'

// Auto-detect: FileSystem in Node.js, OPFS in browser
const brain = new Brainy()
await brain.init()

// Data stored in ./brainy-data (Node.js) or IndexedDB/OPFS (browser)
```
## FileSystem Storage

Best for Node.js applications with local storage needs.

```typescript
const brain = new Brainy({
  storage: {
    type: 'filesystem',
    path: './my-brain-data'
  }
})
await brain.init()

// Directory structure:
// ./my-brain-data/
//   nouns/   # Entity metadata
//   verbs/   # Relationship data
//   hnsw/    # Vector index
//   blobs/   # Large content (COW)
```
## OPFS Storage (Browser)

Origin Private File System for browser applications. Fast, persistent, and private.

```typescript
const brain = new Brainy({
  storage: {
    type: 'opfs',
    path: '/brainy' // Virtual path in OPFS
  }
})
await brain.init()

// Data persists across sessions
// Private to this origin (domain)
// Works offline
```
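If you opt into OPFS explicitly rather than relying on auto-detection, it can help to feature-detect it first. A minimal sketch, assuming nothing about Brainy itself: the `hasOPFS` helper is hypothetical, and `navigator.storage.getDirectory` is the standard OPFS entry point in browsers.

```typescript
// Returns true when the Origin Private File System API is available.
// Cast through globalThis so this compiles outside of browser typings.
function hasOPFS(): boolean {
  const nav = (globalThis as any).navigator
  return typeof nav?.storage?.getDirectory === 'function'
}

// Example: prefer OPFS when available, otherwise leave storage unset
// so Brainy falls back to its own auto-detection.
const storage = hasOPFS()
  ? { type: 'opfs' as const, path: '/brainy' }
  : undefined
```

Pass `storage` to the constructor when it is defined, or construct with `new Brainy()` to let auto-detection decide.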
## Memory Storage

In-memory storage for testing and ephemeral workloads. Data is lost on restart.

```typescript
const brain = new Brainy({
  storage: { type: 'memory' }
})
await brain.init()

// Perfect for unit tests
afterEach(async () => {
  await brain.clear() // Fast cleanup
})
```
## AWS S3 Storage

Scalable cloud storage for AWS deployments.

```typescript
const brain = new Brainy({
  storage: {
    type: 's3',
    bucket: 'my-brainy-bucket',
    region: 'us-east-1',
    prefix: 'production/', // Optional: namespace within the bucket
    // Credentials (or use an IAM role)
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
    }
  }
})
await brain.init()
```
### S3 with IAM Role (Recommended)

```typescript
// When running on EC2/ECS/Lambda with an IAM role
const brain = new Brainy({
  storage: {
    type: 's3',
    bucket: 'my-brainy-bucket',
    region: 'us-east-1'
    // No credentials needed - the IAM role is used automatically
  }
})
```
## Cloudflare R2 Storage

S3-compatible storage optimized for Cloudflare Workers and edge deployments.

```typescript
const brain = new Brainy({
  storage: {
    type: 'r2',
    bucket: 'my-brainy-bucket',
    accountId: process.env.CF_ACCOUNT_ID,
    // R2 API credentials
    credentials: {
      accessKeyId: process.env.R2_ACCESS_KEY_ID,
      secretAccessKey: process.env.R2_SECRET_ACCESS_KEY
    }
  }
})
await brain.init()
```
## Google Cloud Storage

For Google Cloud Platform deployments.

```typescript
const brain = new Brainy({
  storage: {
    type: 'gcs',
    bucket: 'my-brainy-bucket',
    prefix: 'production/',
    // Service account key (or use Application Default Credentials)
    keyFilename: './service-account.json'
  }
})
await brain.init()

// Or use Application Default Credentials
const brain2 = new Brainy({
  storage: {
    type: 'gcs',
    bucket: 'my-brainy-bucket'
    // Uses the GOOGLE_APPLICATION_CREDENTIALS env var
  }
})
```
## Azure Blob Storage

For Microsoft Azure deployments.

```typescript
const brain = new Brainy({
  storage: {
    type: 'azure',
    container: 'my-brainy-container',
    accountName: process.env.AZURE_STORAGE_ACCOUNT,
    // Account key or SAS token
    accountKey: process.env.AZURE_STORAGE_KEY
  }
})
await brain.init()

// Or use a connection string
const brain2 = new Brainy({
  storage: {
    type: 'azure',
    connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING,
    container: 'my-brainy-container'
  }
})
```
## Storage Performance
| Adapter | Read Latency | Write Latency | Cost |
|---|---|---|---|
| Memory | <1ms | <1ms | RAM |
| FileSystem | 1-10ms | 5-20ms | Disk |
| OPFS | 1-10ms | 5-20ms | Browser |
| S3/R2/GCS/Azure | 50-200ms | 100-500ms | Per request |
Brainy uses deferred persistence for cloud storage by default: HNSW index changes are batched and written in bulk, making adds 30-50x faster. Call `brain.flush()` before shutdown to ensure all data is persisted.
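Wiring `flush()` into process shutdown can be sketched as follows. Only `flush()` itself comes from the note above; the `Flushable` type and helper names are illustrative, and the exit callback is injectable purely so the helper can be exercised without terminating the process.

```typescript
// Minimal sketch: flush deferred writes before the process exits.
type Flushable = { flush(): Promise<void> }

// Flush, then exit; the exit function is injectable for testing.
async function flushAndExit(
  store: Flushable,
  exit: (code: number) => void = (code) => process.exit(code)
): Promise<void> {
  try {
    await store.flush() // persist batched HNSW index changes
  } finally {
    exit(0)
  }
}

// Register once for the common termination signals.
function flushOnShutdown(store: Flushable): void {
  process.once('SIGINT', () => void flushAndExit(store))
  process.once('SIGTERM', () => void flushAndExit(store))
}
```

After `await brain.init()`, a call like `flushOnShutdown(brain)` means a SIGTERM from your orchestrator triggers a flush before the process exits.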
## Environment Variables

Brainy respects standard cloud environment variables:

```bash
# AWS
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# Google Cloud
GOOGLE_APPLICATION_CREDENTIALS=./service-account.json

# Azure
AZURE_STORAGE_ACCOUNT=...
AZURE_STORAGE_KEY=...
# or
AZURE_STORAGE_CONNECTION_STRING=...

# Cloudflare R2
CF_ACCOUNT_ID=...
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...
```
## See Also
- Branching & Time Travel - How storage enables COW
- Installation - Environment setup