Local-First AI Workspace powered by MLX, DrawThings, and the MCP Protocol

A modular, full-stack platform for creativity tools, academic assistants, and multi-agent workflows, designed to run entirely local-first with optional cloud provider integrations.
- **Local-First AI**: Image generation (DrawThings), LLMs (MLX-RAG-Lab), and embeddings run entirely offline
- **Modular Apps**: 13 independent micro-apps: Idea Lab, Image Booth, HugginPapers, Kanban, Planner, Character Lab, Calendar AI, Workflows, and more
- **MCP Protocol**: Backend executes tools via the Model Context Protocol
- **Design System**: Style Dictionary tokens with semantic spacing and consistent theming
- **Cloud-Optional**: Boots with zero API keys; cloud providers are opt-in enhancements
```bash
git clone https://github.com/KBLLR/gen-idea-lab
cd gen-idea-lab
npm install
npm run dev
```

URLs:
- Frontend: http://localhost:3000
- Backend: http://localhost:8081
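With the frontend and backend on separate ports, the Vite dev server presumably proxies `/api` calls through to Express. The entry below is an assumed sketch for illustration, not this repo's verified config:

```javascript
// vite.config.js (sketch; the repo's actual config may differ)
export default {
  server: {
    port: 3000,
    proxy: {
      // Forward API calls from the Vite dev server to the Express backend
      '/api': 'http://localhost:8081',
    },
  },
};
```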
Create `.env` with:

```bash
# Required for OAuth
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret

# Required for session/encryption
SESSION_SECRET=random_secret
ENCRYPTION_KEY=$(openssl rand -hex 32)

# Optional: Cloud AI providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
HUME_API_KEY=...

# Figma OAuth (server uses FIGMA_CLIENT_SECRET from env; client ID can be public)
FIGMA_CLIENT_ID=gD6UQSwun8TeikH2ZONT56
FIGMA_CLIENT_SECRET=your_figma_client_secret
```

Development Mode (Auth Bypass):

```bash
# Require login by default; set to false to allow the bypass flags below
REQUIRE_AUTH=true
# AUTH_BYPASS=1        # Backend: bypass requireAuth middleware
# VITE_AUTH_BYPASS=1   # Frontend: auto-authenticate as demo user
```

Secrets check (optional preflight):
```bash
python .secretsbank/check_required_secrets.py --house tier2-orchestrator
```
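If Python isn't handy, the same preflight idea fits in a few lines of Node. The required-variable list here is an assumption mirroring the `.env` section above:

```javascript
// Sketch: report which required secrets are missing from an env object.
function findMissingSecrets(env, required) {
  return required.filter((key) => !env[key]);
}

// Example with a process.env-like input:
const missing = findMissingSecrets(
  { SESSION_SECRET: 'x' },
  ['GOOGLE_CLIENT_ID', 'SESSION_SECRET', 'ENCRYPTION_KEY']
);
console.log(missing); // [ 'GOOGLE_CLIENT_ID', 'ENCRYPTION_KEY' ]
```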
Each app is independent and communicates only through the centralized Zustand store:
```
src/apps/
├── home/          # Dashboard
├── ideaLab/       # Multi-agent academic assistant
├── imageBooth/    # AI image transformations
├── hugginPapers/  # Research paper explorer
├── kanban/        # Task management
├── planner/       # Graph-based planning
├── workflows/     # Reusable AI workflows
└── [10 more apps] # Character Lab, Calendar AI, Archiva, etc.
```
Core Principles:
- ✅ Apps declare UI via layout slots (left/right panes)
- ✅ Zero prop-passing between apps
- ✅ All state read via `useStore.use.sliceName()` selectors
- ✅ All mutations via `useStore.use.actions().actionName()`
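To make that discipline concrete, here is a minimal hand-rolled sketch of the pattern. This is not the real zustand API (the actual store exposes `useStore.use.sliceName()` and `useStore.use.actions()`); it only illustrates "read via selectors, write via actions":

```javascript
// Illustrative only: a tiny observable store -- the real app uses zustand.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState: (updater) => {
      state = updater(state);
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => { listeners.add(fn); return () => listeners.delete(fn); },
  };
}

const store = createStore({ kanban: { tasks: [] } });

// All mutations live in a single actions object, never in components.
const actions = {
  addTask: (title) =>
    store.setState((s) => ({
      ...s,
      kanban: { ...s.kanban, tasks: [...s.kanban.tasks, { title, done: false }] },
    })),
};

actions.addTask('Wire up MCP tools');
console.log(store.getState().kanban.tasks.length); // 1
```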
```
server/
├── routes/
│   ├── mcpTools.js      # MCP backend execution
│   ├── auth.js          # Google OAuth
│   ├── models.js        # AI model discovery
│   └── [12 more routes] # Services, kanban, rigging, etc.
├── lib/
│   ├── authMiddleware.js
│   └── encryptionUtils.js
└── index.js             # Express server
```
API Conventions:
- All routes prefixed with `/api/`
- Protected routes use `requireAuth` middleware
- Errors return `{ error: message }` with proper status codes
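Given those conventions, a client-side helper can normalize any route's outcome. This is a sketch under the assumption that error bodies always carry a string `error` field:

```javascript
// Sketch: normalize a backend response per the { error: message } convention.
function parseApiResponse(status, body) {
  if (status >= 200 && status < 300) {
    return { ok: true, data: body };
  }
  // Fall back to the status code when the body lacks an error message.
  return { ok: false, error: (body && body.error) || `HTTP ${status}` };
}

console.log(parseApiResponse(200, { items: [] }).ok);       // true
console.log(parseApiResponse(404, { error: 'Not found' })); // { ok: false, error: 'Not found' }
```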
```
┌───────────────────────────────┐
│        FRONTEND (Vite)        │
│  React Apps (IdeaLab, etc.)   │
│    Zustand Store + Actions    │
│               │               │
│       LLM/Image Actions       │
│               │               │
│               ▼               │
│         FE MCP Client         │
│     POST /api/mcp/execute     │
└───────────────┬───────────────┘
                │
                ▼
┌────────────────────────────────────────┐
│             BACKEND (Node)             │
│    apiRouter.js → /api/mcp/execute     │
│----------------------------------------│
│               MCP LAYER                │
│      mcpTools.js + tool registry       │
│                                        │
│  Tools:                                │
│   • llm_chat                           │
│   • image_generate                     │
│   • rag_query                          │
│   • rag_upsert                         │
│                                        │
│  (Legacy cloud routes remain but       │
│   return 501 intentionally)            │
└───────────────┬────────────────────────┘
                │
                ▼
┌───────────────────────────────────────┐
│           LOCAL AI RUNTIMES           │
│---------------------------------------│
│  DrawThings Server  ←  image_generate │
│  MLX LLM Runtime    ←  llm_chat       │
│  MLX-RAG-Lab        ←  rag_query      │
│                        rag_upsert     │
└───────────────────────────────────────┘
```
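From the frontend's side, every tool call funnels through one POST. The request envelope below is an assumption for illustration; the authoritative contract lives in `server/routes/mcpTools.js`:

```javascript
// Hypothetical MCP request envelope: { tool, args }.
function buildMcpRequest(tool, args) {
  return { tool, args };
}

// Sketch of the FE MCP client (not invoked here; requires a running backend).
async function executeTool(tool, args) {
  const res = await fetch('/api/mcp/execute', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildMcpRequest(tool, args)),
  });
  const body = await res.json();
  if (!res.ok) throw new Error(body.error); // { error: message } convention
  return body;
}

console.log(JSON.stringify(buildMcpRequest('llm_chat', { messages: [] })));
```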
Current Implementation Status:
- ✅ Frontend MCP client complete
- ✅ Backend MCP layer with tool registry complete
- ✅ MCP tools return stub responses (Phase 2)
- ✅ Phase-4 Orchestrator complete (Smart Campus integration)
- 🚧 Local runtime connections in Phase 4 (in progress)
The Phase-4 orchestrator provides Smart Campus-aware AI by fusing RAG, LLM, and Smart Campus providers into unified endpoints.
```
Tier-1 UIs (Smart Campus, CLIs)
              │
              ▼
┌───────────────────────────────┐
│     Tier-2: Orchestrator      │  ← Phase-4 Layer
│  • Room-aware query fusion    │
│  • Structured context[]       │
│  • HTDI metadata              │
│  • Health aggregation         │
└──────────┬────────────────────┘
           │
    ┌──────┼──────┐
    ▼      ▼      ▼
┌────────┬────────┬─────────┐
│Tier-3A │Tier-3B │ Tier-3C │
│  MLX   │  RAG   │  Smart  │
│  LLM   │ Engine │ Campus  │
└────────┴────────┴─────────┘
```
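The fusion step in Tier-2 can be pictured as merging provider outputs into one ordered `context[]`. The field names below are assumptions based on the contract described here, not the actual implementation:

```javascript
// Sketch: fuse RAG chunks and Smart Campus entities into structured context[].
function fuseContext({ ragChunks = [], roomEntities = [] }) {
  return [
    ...ragChunks.map((chunk) => ({ type: 'rag', content: chunk })),
    ...roomEntities.map((entity) => ({ type: 'entity', content: entity })),
  ];
}

const context = fuseContext({
  ragChunks: ['Peace room sensor guide'],
  roomEntities: [{ id: 'light.peace_main', state: 'on' }],
});
console.log(context.map((c) => c.type)); // [ 'rag', 'entity' ]
```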
Queries AI with Smart Campus room and entity context:
```bash
curl -X POST http://localhost:8081/orchestrate/room_query \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_001",
    "source": "smart-campus",
    "timestamp": "2025-11-20T12:00:00Z",
    "room": "peace",
    "query": "What is the current state of this room?",
    "includeRag": true,
    "includeEntities": true
  }'
```

Response: a structured payload with `answer`, `ragContext`, `roomContext`, and `htdi` metadata (an example appears at the end of this README).
Standard chat with optional RAG:
```bash
curl -X POST http://localhost:8081/orchestrate/chat \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_002",
    "source": "web-ui",
    "timestamp": "2025-11-20T12:05:00Z",
    "messages": [
      {"role": "user", "content": "How do I set up MLX?"}
    ],
    "useRag": true,
    "ragCollection": "documentation"
  }'
```

Check the health of all providers:
```bash
curl http://localhost:8081/health
```

Response:

```json
{
  "ok": true,
  "status": "healthy",
  "providers": {
    "mlx": {"ok": true, "models_healthy": true, "latencyMs": 12.3},
    "rag": {"ok": true, "latencyMs": 8.7},
    "smartCampus": {"ok": true, "latencyMs": 15.2}
  }
}
```

Add to `.env`:
```bash
# Tier-3 Provider URLs
MLX_URL=http://localhost:8000           # MLX LLM server
RAG_URL=http://localhost:5100           # RAG engine
SMART_CAMPUS_URL=http://localhost:5200  # Smart Campus service

# Defaults
DEFAULT_LLM_MODEL=mlx-qwen2.5-7b
DEFAULT_RAG_COLLECTION=smart-campus-docs
```

1. Start MLX LLM Server (Tier-3A):
```bash
cd ../mlx-openai-server-lab
python server.py --port 8000
```

2. Start RAG Engine (Tier-3B):
```bash
cd ../mlx-rag-lab
python app.py --port 5100
```

3. Start Smart Campus Service (Tier-3C), if available:
```bash
cd ../smart-campus-service
# Follow service-specific instructions
```

- PHASE4_ORCHESTRATOR_CONTRACT.md – Complete API specification
- Phase-4 Protocol Types – TypeScript/JSDoc types
- Provider Implementations – MLX, RAG, Smart Campus, Orchestrator
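The orchestrator's health aggregation (reflected in the `/health` response shown earlier) amounts to an every-provider check. This is a minimal sketch; the `degraded` label and exact rollup rule are assumptions:

```javascript
// Sketch: roll per-provider health into one overall /health status.
function aggregateHealth(providers) {
  const ok = Object.values(providers).every((p) => p.ok);
  return { ok, status: ok ? 'healthy' : 'degraded', providers };
}

const summary = aggregateHealth({
  mlx: { ok: true, latencyMs: 12.3 },
  rag: { ok: true, latencyMs: 8.7 },
  smartCampus: { ok: false, latencyMs: 0 },
});
console.log(summary.status); // degraded
```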
| File | Purpose |
|---|---|
| `src/shared/lib/store.js` | Zustand store (single source of truth) |
| `src/shared/lib/routes.js` | React Router configuration |
| `src/shared/data/appManifests.js` | App metadata for dashboard |
| `src/shared/data/serviceConfigs.js` | Service registry (icons, colors, configs) |
| `server/apiRouter.js` | Route aggregator |
| `CLAUDE.md` | Complete architectural guide for AI assistants |
```bash
# Development
npm run dev            # Full stack (Vite + Express)
npm run dev:client     # Frontend only
npm run dev:server     # Backend only

# Testing
npm test               # Jest tests
npm run test:ui        # Vitest UI tests
npm run test:ui:watch  # UI tests (watch mode)

# Build
npm run build          # Production build
npm run preview        # Preview production build

# Design System
npm run tokens:build   # Generate CSS tokens
npm run tokens:watch   # Watch token changes
npm run ds:check       # Audit for hardcoded pixels

# Utilities
npm run storybook      # Component library
```

✅ Phase 0-3 Complete:
- Frontend defaults to local providers (DrawThings, MCP)
- Backend MCP stubs in place
- Service registry centralized
- Cloud providers fully optional
- All apps crash-proof and loading correctly
🚧 Phase 4 In Progress:
- Connect MCP → MLX-RAG-Lab runtime
- Connect MCP → DrawThings server
- Add streaming support
- Remove legacy cloud dependencies
Branch Naming: `feature/*`, `fix/*`, `claude/*`
Before Committing:

```bash
npm run build  # Must pass
npm test       # Must pass
```

Key Rules:
- Follow the local-first principle (no required cloud dependencies)
- Update `CHANGELOG.md` for significant changes
- Use Zustand store patterns (see `CLAUDE.md`)
- No direct prop-passing between apps
- CLAUDE.md – Complete architectural guide
- DATA_FLOW_ARCHITECTURE.md – Data contracts & patterns
- OAUTH_SETUP.md – Service integration guide
- .gemini/project-overview.md – 700+ line deep-dive
Apache-2.0
Local-First AI Workspace for Creativity & Learning
Example `/orchestrate/room_query` response:

```jsonc
{
  "ok": true,
  "answer": "The Peace room currently has...",
  "ragContext": [...],   // RAG documentation chunks
  "roomContext": {       // Smart Campus context
    "id": "peace",
    "entities": [...]
  },
  "htdi": {              // Phase-4 metadata
    "providersUsed": {...},
    "contextUsage": {...}
  },
  "latencyMs": 1024.5
}
```