Gen-Idea-Lab

Local-First AI Workspace powered by MLX, DrawThings, and the MCP Protocol

A modular, full-stack platform for creativity tools, academic assistants, and multi-agent workflows. Designed to run entirely local-first with optional cloud provider integrations.


✨ Features

  • 🤖 Local-First AI — Image generation (DrawThings), LLM (MLX-RAG-Lab), and embeddings run entirely offline
  • 🧩 Modular Apps — 13 independent micro-apps: Idea Lab, Image Booth, HugginPapers, Kanban, Planner, Character Lab, Calendar AI, Workflows, and more
  • 🔌 MCP Protocol — Backend executes via Model Context Protocol for tool orchestration
  • 🎨 Design System — Style Dictionary tokens with semantic spacing and consistent theming
  • ☁️ Cloud-Optional — Boots with zero API keys; cloud providers are opt-in enhancements

🚀 Quick Start

git clone https://github.com/KBLLR/gen-idea-lab
cd gen-idea-lab
npm install
npm run dev

Environment Setup

Create .env with:

# Required for OAuth
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret

# Required for session/encryption
SESSION_SECRET=random_secret
ENCRYPTION_KEY=$(openssl rand -hex 32)

# Optional: Cloud AI providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
HUME_API_KEY=...

# Figma OAuth (server uses FIGMA_CLIENT_SECRET from env; client ID can be public)
FIGMA_CLIENT_ID=gD6UQSwun8TeikH2ZONT56
FIGMA_CLIENT_SECRET=your_figma_client_secret

Development Mode (Auth Bypass):

# Require login by default; set to false to allow the bypass flags below
REQUIRE_AUTH=true
# AUTH_BYPASS=1      # Backend: bypass requireAuth middleware
# VITE_AUTH_BYPASS=1 # Frontend: auto-authenticate as demo user

Secrets check (optional preflight):

python .secretsbank/check_required_secrets.py --house tier2-orchestrator

πŸ—οΈ Architecture

Micro-App System

Each app is independent and communicates only through the centralized Zustand store:

src/apps/
├── home/              # Dashboard
├── ideaLab/           # Multi-agent academic assistant
├── imageBooth/        # AI image transformations
├── hugginPapers/      # Research paper explorer
├── kanban/            # Task management
├── planner/           # Graph-based planning
├── workflows/         # Reusable AI workflows
└── [10 more apps]     # Character Lab, Calendar AI, Archiva, etc.

Core Principles:

  • ✅ Apps declare UI via layout slots (left/right panes)
  • ✅ Zero prop-passing between apps
  • ✅ All state via useStore.use.sliceName() selectors
  • ✅ All mutations via useStore.use.actions().actionName()
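
The selector and action conventions above can be sketched in plain JavaScript. This is an illustration of the pattern only, not the app's actual store: the real code uses Zustand, and the `kanban` slice and `addTask` action here are hypothetical names.

```javascript
// Minimal, dependency-free sketch of the store pattern described above.
// Slice and action names are hypothetical; the real app uses Zustand.
function createStore(initialState, buildActions) {
  let state = { ...initialState };
  const setState = (patch) => { state = { ...state, ...patch }; };
  const store = { getState: () => state, use: {} };

  // Mirrors the useStore.use.sliceName() selector convention:
  for (const key of Object.keys(initialState)) {
    store.use[key] = () => state[key];
  }

  // Mirrors the useStore.use.actions().actionName() convention:
  const actions = buildActions(setState, () => state);
  store.use.actions = () => actions;
  return store;
}

// Example: a hypothetical "kanban" slice with one action.
const store = createStore(
  { kanban: { tasks: [] } },
  (set, get) => ({
    addTask(title) {
      const { tasks } = get().kanban;
      set({ kanban: { tasks: [...tasks, { title }] } });
    },
  })
);

store.use.actions().addTask('Write docs');
console.log(store.use.kanban().tasks.length); // 1
```

Because every mutation funnels through the actions object and every read goes through a selector, apps never need to pass props to one another.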

Backend Structure

server/
├── routes/
│   ├── mcpTools.js       # MCP backend execution
│   ├── auth.js           # Google OAuth
│   ├── models.js         # AI model discovery
│   └── [12 more routes]  # Services, kanban, rigging, etc.
├── lib/
│   ├── authMiddleware.js
│   └── encryptionUtils.js
└── index.js              # Express server

API Conventions:

  • All routes prefixed /api/
  • Protected routes use requireAuth middleware
  • Errors return { error: message } with proper status codes
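
A framework-free sketch of these conventions, assuming a simple handler-wrapping style. The real `requireAuth` lives in server/lib/authMiddleware.js and works with Express; this mock shows only the contract (a 401 with an `{ error }` body for unauthenticated requests):

```javascript
// Illustrative mock, not the actual middleware: a wrapper that enforces
// authentication before delegating to the route handler.
function requireAuthSketch(handler) {
  return (req) => {
    if (!req.user) {
      // API convention: errors return { error: message } with a proper status.
      return { status: 401, body: { error: 'Authentication required' } };
    }
    return handler(req);
  };
}

// Hypothetical protected route.
const getProfile = requireAuthSketch((req) => ({
  status: 200,
  body: { name: req.user.name },
}));

console.log(getProfile({}).status);                         // 401
console.log(getProfile({ user: { name: 'demo' } }).status); // 200
```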

Data Flow Architecture

┌────────────────────────────────┐
│        FRONTEND (Vite)         │
│  React Apps (IdeaLab, etc.)    │
│  Zustand Store + Actions       │
│                                │
│  LLM/Image Actions             │
│        │                       │
│        ▼                       │
│  FE MCP Client                 │
│  POST /api/mcp/execute         │
└───────────────┬────────────────┘
                │
                ▼
┌────────────────────────────────────────┐
│             BACKEND (Node)             │
│      apiRouter.js → /api/mcp/execute   │
│────────────────────────────────────────│
│              MCP LAYER                 │
│   mcpTools.js + tool registry          │
│                                        │
│   Tools:                               │
│     • llm_chat                         │
│     • image_generate                   │
│     • rag_query                        │
│     • rag_upsert                       │
│                                        │
│ (Legacy cloud routes remain but return │
│ 501 intentionally)                     │
└───────────────┬────────────────────────┘
                │
                ▼
┌───────────────────────────────────────┐
│          LOCAL AI RUNTIMES            │
│───────────────────────────────────────│
│ DrawThings Server  → image_generate   │
│ MLX LLM Runtime    → llm_chat         │
│ MLX-RAG-Lab        → rag_query        │
│                      rag_upsert       │
└───────────────────────────────────────┘
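
The MCP layer above can be sketched as a tool registry plus a dispatcher. The tool names come from the diagram and the handler bodies are stubs, matching the Phase-2 status; this is an illustration, not the actual mcpTools.js:

```javascript
// Illustrative MCP-style tool registry. Handlers are stubs; in the real
// backend they would call the local runtimes shown in the diagram.
const toolRegistry = {
  llm_chat: ({ messages }) => ({ ok: true, answer: `stub reply to ${messages.length} message(s)` }),
  image_generate: ({ prompt }) => ({ ok: true, imageId: 'stub-image', prompt }),
  rag_query: ({ query }) => ({ ok: true, chunks: [], query }),
  rag_upsert: ({ docs }) => ({ ok: true, upserted: docs.length }),
};

// Dispatcher, i.e. what a POST /api/mcp/execute handler would delegate to.
function executeTool(name, args) {
  const tool = toolRegistry[name];
  if (!tool) {
    // Matches the API convention: errors as { error: message }.
    return { ok: false, error: `Unknown tool: ${name}` };
  }
  return tool(args);
}

console.log(executeTool('rag_upsert', { docs: ['a', 'b'] }).upserted); // 2
```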

Current Implementation Status:

  • ✅ Frontend MCP client complete
  • ✅ Backend MCP layer with tool registry complete
  • ✅ MCP tools return stub responses (Phase 2)
  • ✅ Phase-4 Orchestrator complete (Smart Campus integration)
  • 🚧 Local runtime connections in Phase 4 (in progress)

🏢 Phase-4 Smart Campus Orchestration

The Phase-4 orchestrator provides Smart Campus-aware AI by fusing RAG, LLM, and Smart Campus providers into unified endpoints.

Architecture

Tier-1 UIs (Smart Campus, CLIs)
        │
        ▼
┌───────────────────────────────┐
│   Tier-2: Orchestrator        │ ← Phase-4 Layer
│   • Room-aware query fusion   │
│   • Structured context[]      │
│   • HTDI metadata             │
│   • Health aggregation        │
└──────────┬────────────────────┘
           │
    ┌──────┼──────┐
    ▼      ▼      ▼
┌────────┬────────┬─────────┐
│Tier-3A │Tier-3B │ Tier-3C │
│  MLX   │  RAG   │ Smart   │
│  LLM   │ Engine │ Campus  │
└────────┴────────┴─────────┘

Endpoints

1. Room-Aware Query (POST /orchestrate/room_query)

Queries AI with Smart Campus room and entity context:

curl -X POST http://localhost:8081/orchestrate/room_query \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_001",
    "source": "smart-campus",
    "timestamp": "2025-11-20T12:00:00Z",
    "room": "peace",
    "query": "What is the current state of this room?",
    "includeRag": true,
    "includeEntities": true
  }'

Response:

{
  "ok": true,
  "answer": "The Peace room currently has...",
  "ragContext": [...],     // RAG documentation chunks
  "roomContext": {         // Smart Campus context
    "id": "peace",
    "entities": [...]
  },
  "htdi": {                // Phase-4 metadata
    "providersUsed": {...},
    "contextUsage": {...}
  },
  "latencyMs": 1024.5
}
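
Inside Tier-2, the orchestrator fuses the Tier-3 results into this shape. The sketch below is an assumption about that fusion step, with stubbed providers; only the field names (ragContext, roomContext, htdi.providersUsed) are taken from the sample request and response:

```javascript
// Assumed fusion logic for room_query: call only the providers the
// request asks for, record which ones ran, and pass the gathered
// context to the LLM. Provider functions here are stubs.
function orchestrateRoomQuery({ room, query, includeRag, includeEntities }, providers) {
  const response = { ok: true, htdi: { providersUsed: {} } };

  if (includeRag) {
    response.ragContext = providers.rag(query);
    response.htdi.providersUsed.rag = true;
  }
  if (includeEntities) {
    response.roomContext = providers.smartCampus(room);
    response.htdi.providersUsed.smartCampus = true;
  }
  response.answer = providers.mlx(query, response.ragContext, response.roomContext);
  response.htdi.providersUsed.mlx = true;
  return response;
}

// Stub Tier-3 providers for illustration.
const providers = {
  rag: (q) => [`doc chunk about: ${q}`],
  smartCampus: (room) => ({ id: room, entities: [] }),
  mlx: (q) => `stub answer for: ${q}`,
};

const res = orchestrateRoomQuery(
  { room: 'peace', query: 'What is the current state of this room?', includeRag: true, includeEntities: true },
  providers
);
console.log(Object.keys(res.htdi.providersUsed)); // [ 'rag', 'smartCampus', 'mlx' ]
```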

2. Generic Chat (POST /orchestrate/chat)

Standard chat with optional RAG:

curl -X POST http://localhost:8081/orchestrate/chat \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_002",
    "source": "web-ui",
    "timestamp": "2025-11-20T12:05:00Z",
    "messages": [
      {"role": "user", "content": "How do I set up MLX?"}
    ],
    "useRag": true,
    "ragCollection": "documentation"
  }'

3. Aggregate Health (GET /health)

Check health of all providers:

curl http://localhost:8081/health

Response:

{
  "ok": true,
  "status": "healthy",
  "providers": {
    "mlx": {"ok": true, "models_healthy": true, "latencyMs": 12.3},
    "rag": {"ok": true, "latencyMs": 8.7},
    "smartCampus": {"ok": true, "latencyMs": 15.2}
  }
}
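
One plausible way to derive the aggregate status from the per-provider checks. The healthy/degraded/down thresholds here are an assumption, not the actual implementation; only the field names come from the sample response:

```javascript
// Illustrative health aggregation: "healthy" if every provider is ok,
// "degraded" if some are, "down" if none are.
function aggregateHealth(providerResults) {
  const results = Object.values(providerResults);
  const allOk = results.every((p) => p.ok);
  const anyOk = results.some((p) => p.ok);
  return {
    ok: anyOk,
    status: allOk ? 'healthy' : anyOk ? 'degraded' : 'down',
    providers: providerResults,
  };
}

const health = aggregateHealth({
  mlx: { ok: true, latencyMs: 12.3 },
  rag: { ok: true, latencyMs: 8.7 },
  smartCampus: { ok: false, latencyMs: 15.2 },
});
console.log(health.status); // degraded
```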

Configuration

Add to .env:

# Tier-3 Provider URLs
MLX_URL=http://localhost:8000        # MLX LLM server
RAG_URL=http://localhost:5100        # RAG engine
SMART_CAMPUS_URL=http://localhost:5200  # Smart Campus service

# Defaults
DEFAULT_LLM_MODEL=mlx-qwen2.5-7b
DEFAULT_RAG_COLLECTION=smart-campus-docs
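
For illustration, these variables could be read with fallbacks to the values shown above. The helper below is a sketch; only the variable names and default values are taken from the .env sample:

```javascript
// Hypothetical config loader: each field falls back to the documented
// default when the environment variable is unset.
function loadOrchestratorConfig(env = process.env) {
  return {
    mlxUrl: env.MLX_URL || 'http://localhost:8000',
    ragUrl: env.RAG_URL || 'http://localhost:5100',
    smartCampusUrl: env.SMART_CAMPUS_URL || 'http://localhost:5200',
    model: env.DEFAULT_LLM_MODEL || 'mlx-qwen2.5-7b',
    ragCollection: env.DEFAULT_RAG_COLLECTION || 'smart-campus-docs',
  };
}

console.log(loadOrchestratorConfig({}).ragUrl); // http://localhost:5100
```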

Provider Setup

1. Start MLX LLM Server (Tier-3A):

cd ../mlx-openai-server-lab
python server.py --port 8000

2. Start RAG Engine (Tier-3B):

cd ../mlx-rag-lab
python app.py --port 5100

3. Start Smart Campus Service (Tier-3C) (if available):

cd ../smart-campus-service
# Follow service-specific instructions


📦 Key Files

File                                 Purpose
src/shared/lib/store.js              Zustand store (single source of truth)
src/shared/lib/routes.js             React Router configuration
src/shared/data/appManifests.js      App metadata for dashboard
src/shared/data/serviceConfigs.js    Service registry (icons, colors, configs)
server/apiRouter.js                  Route aggregator
CLAUDE.md                            Complete architectural guide for AI assistants

πŸ› οΈ Development Commands

# Development
npm run dev              # Full stack (Vite + Express)
npm run dev:client       # Frontend only
npm run dev:server       # Backend only

# Testing
npm test                 # Jest tests
npm run test:ui          # Vitest UI tests
npm run test:ui:watch    # UI tests (watch mode)

# Build
npm run build            # Production build
npm run preview          # Preview production build

# Design System
npm run tokens:build     # Generate CSS tokens
npm run tokens:watch     # Watch token changes
npm run ds:check         # Audit for hardcoded pixels

# Utilities
npm run storybook        # Component library

🎯 Current Status

✅ Phase 0-3 Complete:

  • Frontend defaults to local providers (DrawThings, MCP)
  • Backend MCP stubs in place
  • Service registry centralized
  • Cloud providers fully optional
  • All apps crash-proof and loading correctly

🚧 Phase 4 In Progress:

  • Connect MCP β†’ MLX-RAG-Lab runtime
  • Connect MCP β†’ DrawThings server
  • Add streaming support
  • Remove legacy cloud dependencies

🤝 Contributing

Branch Naming: feature/*, fix/*, claude/*

Before Committing:

npm run build    # Must pass
npm test         # Must pass

Key Rules:

  • Follow local-first principle (no required cloud dependencies)
  • Update CHANGELOG.md for significant changes
  • Use Zustand store patterns (see CLAUDE.md)
  • No direct prop-passing between apps

📚 Documentation


📄 License

Apache-2.0


Local-First AI Workspace for Creativity & Learning
