Learn how to visualize your analysis results using ContextLab's web UI and API.
ContextLab provides multiple ways to visualize your results:
- Web Dashboard: Interactive SvelteKit UI
- CLI Visualizations: Text-based tables and charts
- REST API: Programmatic access to visualization data
## Getting Started

```bash
# Terminal 1: Start API
make api
# or
uvicorn api.main:app --reload
```

```bash
# Terminal 2: Start Web UI
cd web
npm install
npm run dev
```

Visit http://localhost:5173.
```bash
# Build and run
docker-compose up

# Or individually
make docker-api
make docker-web
```

## Web Dashboard

The dashboard home shows:
- Stats Cards: Total runs, chunks, and tokens
- Recent Runs: Quick access to latest analyses
- Feature Overview: Links to tutorials
### Runs List

View all analysis runs at `/runs`:
- Sortable table of runs
- Filter by model, date, or tokens
- Quick actions: View details, Delete
### Run Detail

View detailed analysis at `/runs/{run_id}`:

#### Embedding Map

An interactive scatter plot showing:
- 2D projection of embedding space (UMAP)
- Points colored by redundancy score
- Hover for chunk details
Interpretation:
- Clusters indicate similar content
- Isolated points are unique chunks
- Color gradient shows redundancy (purple = high, yellow = low)
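The embedding scatter plot described above can be approximated locally with matplotlib. This is a minimal sketch: the coordinates and redundancy scores below are synthetic stand-ins for a real UMAP projection, not ContextLab output, and the reversed viridis colormap puts purple at high redundancy and yellow at low, matching the dashboard's convention:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for a 2D UMAP projection and per-chunk redundancy scores
coords = rng.normal(size=(45, 2))
redundancy = rng.uniform(0.0, 1.0, size=45)

fig, ax = plt.subplots()
# viridis_r maps high values to purple, low values to yellow
sc = ax.scatter(coords[:, 0], coords[:, 1], c=redundancy, cmap="viridis_r")
fig.colorbar(sc, label="Redundancy")
ax.set_title("Embedding Map (UMAP)")
fig.savefig("embedding_map.png")
```

Clusters of nearby points would correspond to similar chunks, exactly as in the dashboard view.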
#### Tokens and Salience

A dual-axis chart showing:
- Bar chart: Token distribution per chunk
- Line chart: Salience scores over time
Interpretation:
- Peaks in salience indicate important sections
- Consistent token counts suggest uniform chunking
- Dips may indicate section boundaries
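The dual-axis chart described above combines a bar series and a line series on shared x-coordinates. A sketch of the same layout with matplotlib's `twinx` (the token counts and salience values here are synthetic, not real analysis output):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for per-chunk token counts and salience scores
tokens = rng.integers(100, 300, size=20)
salience = rng.uniform(0.0, 1.0, size=20)
x = np.arange(len(tokens))

fig, ax1 = plt.subplots()
ax1.bar(x, tokens, color="tab:blue", alpha=0.6)
ax1.set_xlabel("Chunk index")
ax1.set_ylabel("Tokens")

# Second y-axis sharing the same x-axis for the salience line
ax2 = ax1.twinx()
ax2.plot(x, salience, color="tab:orange", marker="o")
ax2.set_ylabel("Salience")
fig.savefig("tokens_salience.png")
```

With real data, peaks in the orange line mark the important sections the interpretation notes above refer to.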
#### Chunk Table

A searchable, sortable table of all chunks:
- Click column headers to sort
- Search by ID, text, or source
- Color-coded salience/redundancy badges
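The table's sort and search operations map directly onto pandas. A small sketch with made-up rows (the column names mirror the table's columns but are not a guaranteed schema):

```python
import pandas as pd

# Illustrative stand-in for the chunk table's rows
df = pd.DataFrame([
    {"id": "chunk_0", "text": "Machine learning algorithms require...",
     "salience": 0.89, "redundancy": 0.12},
    {"id": "chunk_1", "text": "Deep neural networks consist of...",
     "salience": 0.77, "redundancy": 0.35},
    {"id": "chunk_2", "text": "The history of computing began...",
     "salience": 0.41, "redundancy": 0.60},
])

# Sort by a column, like clicking a column header
by_salience = df.sort_values("salience", ascending=False)

# Search by text, like the table's search box
hits = df[df["text"].str.contains("neural", case=False)]
print(by_salience[["id", "salience"]])
print(hits["id"].tolist())  # ['chunk_1']
```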
## CLI Visualizations

For headless environments or quick checks:

```bash
contextlab viz
```

Output:

```
┌─────────┬───────────────┬────────┬────────┬──────────────────┐
│ Run ID  │ Model         │ Chunks │ Tokens │ Timestamp        │
├─────────┼───────────────┼────────┼────────┼──────────────────┤
│ a1b2c3d │ gpt-4o-mini   │ 45     │ 8234   │ 2025-01-15 10:30 │
│ e4f5g6h │ gpt-4         │ 23     │ 12450  │ 2025-01-14 15:20 │
└─────────┴───────────────┴────────┴────────┴──────────────────┘
```
```bash
contextlab viz <run_id> --headless
```

Output:

```
Run: a1b2c3d
├─ Model: gpt-4o-mini
├─ Chunks: 45
├─ Total Tokens: 8234
└─ Timestamp: 2025-01-15 10:30

Redundancy Distribution:
  0.0-0.2: ████████████████ (20)
  0.2-0.4: ██████████ (12)
  0.4-0.6: █████ (7)
  0.6-0.8: ███ (4)
  0.8-1.0: ██ (2)

Top 5 Most Salient Chunks:
  1. chunk_12: salience=0.892
     "Machine learning algorithms require..."
  2. chunk_7: salience=0.765
     "Deep neural networks consist of..."
```
## REST API

### List Runs

```bash
curl "http://localhost:8000/api/viz/runs?limit=10"
```

Response:

```json
{
  "runs": [
    {
      "run_id": "a1b2c3d",
      "model": "gpt-4o-mini",
      "num_chunks": 45,
      "total_tokens": 8234,
      "timestamp": "2025-01-15T10:30:00"
    }
  ]
}
```

### Get Run Details

```bash
curl http://localhost:8000/api/viz/runs/a1b2c3d
```

### Get Embeddings

```bash
curl http://localhost:8000/api/viz/runs/a1b2c3d/embeddings
```

Response:

```json
{
  "run_id": "a1b2c3d",
  "embeddings": [[0.1, 0.2, ...], [0.3, 0.4, ...]],
  "metadata": [
    {
      "id": "chunk_0",
      "tokens": 150,
      "salience": 0.65,
      "redundancy": 0.23
    }
  ]
}
```

## Custom Visualizations in Python

### Salience Distribution

```python
import matplotlib.pyplot as plt

from contextlab.io.ds import DataStore

store = DataStore()
_, chunks = store.load_run("a1b2c3d")

# Salience distribution
saliences = [c.salience for c in chunks]
plt.hist(saliences, bins=20)
plt.xlabel("Salience")
plt.ylabel("Frequency")
plt.title("Salience Distribution")
plt.show()
```

### Analyze with pandas

```python
import pandas as pd

from contextlab.io.ds import DataStore

store = DataStore()
_, chunks = store.load_run("a1b2c3d")

# Create DataFrame
df = pd.DataFrame([
    {
        "id": c.id,
        "tokens": c.tokens,
        "salience": c.salience,
        "redundancy": c.redundancy,
        "source": c.source,
    }
    for c in chunks
])

# Analyze
print(df.describe())
print("\nTop 10 by salience:")
print(df.nlargest(10, "salience"))
```

### Export Results

```python
# Export to CSV
df.to_csv("analysis_results.csv", index=False)

# Export to JSON
import json

with open("analysis_results.json", "w") as f:
    json.dump([c.model_dump() for c in chunks], f, indent=2)
```

### Live Monitoring

Create a monitoring dashboard that updates as new runs complete:

```python
import time

from contextlab.io.ds import DataStore

store = DataStore()

def monitor_runs(interval=5):
    seen_ids = set()
    while True:
        runs = store.list_runs(limit=10)
        new_runs = [r for r in runs if r.run_id not in seen_ids]
        for run in new_runs:
            print(f"New run detected: {run.run_id}")
            print(f"  Chunks: {run.num_chunks}, Tokens: {run.total_tokens}")
            seen_ids.add(run.run_id)
        time.sleep(interval)

# Runs until interrupted (Ctrl+C)
monitor_runs()
```

## Troubleshooting

### Dashboard not loading

- Check the API is running: `curl http://localhost:8000/health`
- Check the web dev server: `cd web && npm run dev`
- Check the browser console for errors

### No data in visualizations

- Verify the run exists: `contextlab viz`
- Check that embeddings were generated: `curl http://localhost:8000/api/viz/runs/{run_id}/embeddings`
- Re-run the analysis with the `--mock` flag to test

### Connection errors

- Check the CORS settings in `api/main.py`
- Verify the proxy config in `web/vite.config.ts`
- Check firewall settings
## Best Practices

- **Use the web UI for exploration**: interactive charts are best for understanding patterns
- **Use the CLI for automation**: integrate into scripts and pipelines
- **Use the API for integration**: build custom dashboards or export data
- **Export important runs**: save visualizations as PNG/PDF before cleanup
- **Monitor storage**: clean up old runs periodically
## Next Steps

- Explore the API Reference
- Build custom compression strategies
- Integrate ContextLab into your LLM application