
ContextLab Deployment Guide

Development Setup

Quick Start

# 1. Clone repository
git clone https://github.com/siddhant-k-code/contextlab.git
cd contextlab

# 2. Run setup script
./scripts/dev_env_setup.sh

# 3. Activate environment
source venv/bin/activate

# 4. Add API keys to .env
echo "OPENAI_API_KEY=your-key-here" >> .env

Manual Setup

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -e ".[dev,docs,optimization]"

# Install pre-commit hooks
pre-commit install

# Run tests
make test

Running Locally

Python SDK

import asyncio
from contextlab import analyze

async def main():
    report = await analyze(
        text="Your text here",
        model="gpt-4o-mini"
    )
    print(f"Analyzed {len(report.chunks)} chunks")

asyncio.run(main())

CLI

# Analyze documents
contextlab analyze docs/*.md --model gpt-4o-mini

# Compress context
contextlab compress <run_id> --strategy hybrid --limit 8000

# Visualize results
contextlab viz <run_id>

API Server

# Start API
make api

# Or directly
uvicorn api.main:app --reload

# Test
curl http://localhost:8000/health

Web Dashboard

# Install dependencies
cd web
npm install

# Start dev server
npm run dev

# Visit http://localhost:5173

Running with Docker

Build Images

# API
docker build -f docker/api.Dockerfile -t contextlab-api .

# Web
docker build -f docker/web.Dockerfile -t contextlab-web .

Run Containers

# API
docker run -p 8000:8000 --env-file .env contextlab-api

# Web
docker run -p 3000:3000 contextlab-web

Docker Compose (Recommended)

# docker-compose.yml
version: '3.8'
services:
  api:
    build:
      context: .
      dockerfile: docker/api.Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env
    volumes:
      - ./data:/app/.contextlab

  web:
    build:
      context: .
      dockerfile: docker/web.Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - api
    environment:
      - API_URL=http://api:8000

# Start all services
docker-compose up

Production Deployment

Environment Variables

Required:

  • OPENAI_API_KEY: OpenAI API key for embeddings/summarization
  • CONTEXTLAB_STORAGE_PATH: Path for storing analysis runs

Optional:

  • CONTEXTLAB_EMBEDDING_MODEL: Embedding model (default: text-embedding-3-small)
  • API_HOST: API host (default: 0.0.0.0)
  • API_PORT: API port (default: 8000)
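Putting these together, a complete .env might look like the following (all values are placeholders; substitute your own):

```
# .env — example values only
OPENAI_API_KEY=your-key-here
CONTEXTLAB_STORAGE_PATH=/var/lib/contextlab
CONTEXTLAB_EMBEDDING_MODEL=text-embedding-3-small
API_HOST=0.0.0.0
API_PORT=8000
```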

Security Considerations

  1. API Keys: Never commit API keys to git
  2. CORS: Configure CORS for production in api/main.py
  3. Authentication: Enable token-based auth if exposing publicly
  4. Rate Limiting: Add rate limiting middleware
  5. HTTPS: Use HTTPS in production

Deployment Options

Option 1: VM/VPS

# Install dependencies
sudo apt update
sudo apt install python3.11 python3-pip nodejs npm

# Clone and setup
git clone https://github.com/siddhant-k-code/contextlab.git
cd contextlab
./scripts/dev_env_setup.sh

# Run with systemd
sudo cp deployment/contextlab-api.service /etc/systemd/system/
sudo systemctl enable contextlab-api
sudo systemctl start contextlab-api

Option 2: Kubernetes

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contextlab-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: contextlab-api
  template:
    metadata:
      labels:
        app: contextlab-api
    spec:
      containers:
      - name: api
        image: ghcr.io/siddhant-k-code/contextlab/api:latest
        ports:
        - containerPort: 8000
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: contextlab-secrets
              key: openai-api-key
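Since the API exposes a /health endpoint (see Monitoring below), it is worth wiring it into the pod spec too. A sketch, to be merged into the container definition above:

```yaml
        # Add under the 'api' container spec:
        readinessProbe:
          httpGet:
            path: /health
            port: 8000
        livenessProbe:
          httpGet:
            path: /health
            port: 8000
          initialDelaySeconds: 10
```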

Option 3: Cloud Run / Lambda

See cloud-specific deployment guides in docs/deployment/.

Monitoring

Health Checks

# API health
curl http://localhost:8000/health

# Check metrics
curl http://localhost:8000/metrics

Logging

Configure logging level:

import logging
logging.basicConfig(level=logging.INFO)
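In a deployment it is convenient to drive the level from the environment rather than hard-coding it. A minimal sketch; LOG_LEVEL is an illustrative variable name, not a documented ContextLab setting:

```python
import logging
import os

# Read the desired level from the environment, defaulting to INFO.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logging.getLogger("contextlab").info("logging configured at %s", level_name)
```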

Metrics

Key metrics to monitor:

  • Request latency (p50, p95, p99)
  • Error rate
  • Token usage
  • Storage size
  • Embedding API calls

Backup and Recovery

Backup Storage

# Backup analysis runs
tar -czf contextlab-backup-$(date +%Y%m%d).tar.gz .contextlab/

# Backup to S3
aws s3 cp contextlab-backup-*.tar.gz s3://your-bucket/backups/

Restore

# Restore from backup
tar -xzf contextlab-backup-20250115.tar.gz

Troubleshooting

Common Issues

Issue: Import errors

# Solution: Reinstall package
pip install -e .

Issue: API key not found

# Solution: Check .env file
grep OPENAI_API_KEY .env

Issue: Port already in use

# Solution: Use different port
uvicorn api.main:app --port 8001

Issue: Database locked

# Solution: stop the API and any open SQLite connections first,
# then remove the stale WAL/shared-memory files
rm .contextlab/contextlab.db-shm .contextlab/contextlab.db-wal

Debug Mode

# Enable debug logging
export CONTEXTLAB_DEV_MODE=true

# Run with verbose output
contextlab analyze docs/*.md --verbose

Scaling

Horizontal Scaling

  • Deploy multiple API instances behind load balancer
  • Use shared storage (S3/NFS) for analysis runs
  • Use Redis for session/cache management

Database Scaling

  • PostgreSQL instead of SQLite for production
  • Read replicas for visualization queries
  • Regular vacuum and optimization

Caching

  • Cache embeddings for frequently analyzed documents
  • Cache compression results
  • Use CDN for web UI assets

Updates and Maintenance

Update ContextLab

# Pull latest
git pull origin main

# Update dependencies
pip install -e ".[dev,docs,optimization]"

# Run migrations (if any)
python scripts/migrate.py

# Restart services
sudo systemctl restart contextlab-api

Database Maintenance

# Clean up old runs
contextlab cleanup --days 30

# Optimize database
sqlite3 .contextlab/contextlab.db "VACUUM;"

Support