A complete Streamlit-based chatbot that demonstrates MemMachine's persistent memory capabilities with support for multiple LLM providers.
This example provides a simple chatbot interface that showcases:
- Persistent Memory: Conversations are stored and retrieved using MemMachine's memory system
- Multi-Provider LLM Support: Works with OpenAI, Anthropic (via AWS Bedrock), and Google Gemini
- Persona Management: Support for multiple user personas with isolated memory profiles
- Side-by-Side Comparison: Compare MemMachine-enhanced responses vs control persona (no memory)
- Memory Import: Import conversation history from external sources (ChatGPT, etc.)
- Session Management: Create, rename, and manage multiple conversation sessions
- Automatic memory storage for all conversations
- Context-aware responses using retrieved memories
- Profile-based personalization
- Create multiple independent conversation sessions
- Rename and delete sessions
- Switch between sessions seamlessly
- Sessions are not yet persistent: they disappear when you refresh or restart the app. The memories created in those sessions, however, persist in MemMachine.
- Support for multiple user personas
- Custom persona names
- Isolated memory profiles per persona
- Side-by-side comparison of MemMachine vs Control persona
- Visual distinction between memory-enhanced and baseline responses
- Toggle to enable/disable comparison
- Import conversation history from external sources
- Support for text files, JSON, and markdown formats
- Preview before ingesting into MemMachine
- OpenAI: GPT-4.1 Mini, GPT-5, GPT-5 Mini, GPT-5 Nano
- Anthropic (Through AWS Bedrock): Claude Haiku 4.5, Claude Sonnet 4.5, Claude Opus 4
- Google Gemini: Gemini 3 Pro (Preview), Gemini 2.5 Pro
```
simple_chatbot/
├── app.py             # Main Streamlit application
├── llm.py             # LLM provider integration (OpenAI, Anthropic, Google, etc.)
├── gateway_client.py  # MemMachine API client
├── model_config.py    # Model configuration and provider mappings
├── requirements.txt   # Python dependencies
├── styles.css         # Custom styling (optional)
├── assets/            # Logo and image assets
│   ├── memmachine_logo.png
│   └── memverge_logo.png
└── README.md          # This file
```
- Python 3.12+
- MemMachine Backend Running (see main README)
- LLM API Keys (at least one):
- OpenAI API key (for OpenAI models)
- AWS credentials (for Bedrock models)
- Google API key (for Gemini models)
- Install dependencies:

  ```shell
  cd examples/simple_chatbot
  pip install -r requirements.txt
  ```

- Set up environment variables:
  You can set environment variables either via:

  - Environment variables (`export`/`set`)
  - A `.env` file in the `examples/simple_chatbot/` directory (automatically loaded via `python-dotenv`)
  ```shell
  # MemMachine backend URL
  export MEMORY_SERVER_URL="http://localhost:8080"

  # MemMachine organization ID (required for v2 API)
  export ORG_ID="default-org"  # Your organization ID
  # Note: Project ID is automatically set per user as "project_{user_id}"

  # LLM Provider API Keys (choose based on which models you want to use)
  export OPENAI_API_KEY="your-openai-api-key"    # Required for OpenAI models
  export AWS_ACCESS_KEY_ID="your-aws-key"        # Required for Bedrock models
  export AWS_SECRET_ACCESS_KEY="your-aws-secret"
  export AWS_DEFAULT_REGION="us-east-1"          # Your AWS region (default: us-east-1)
  export GOOGLE_API_KEY="your-google-api-key"    # Required for Gemini models

  # Optional: Provisioned throughput ARNs for Bedrock (if using)
  export BEDROCK_HAIKU_4_5_ARN="arn:aws:bedrock:..."
  export BEDROCK_SONNET_4_5_ARN="arn:aws:bedrock:..."
  export BEDROCK_OPUS_4_ARN="arn:aws:bedrock:..."
  ```
  Example `.env` file (create `examples/simple_chatbot/.env`):

  ```
  MEMORY_SERVER_URL=http://localhost:8080
  ORG_ID=default-org
  OPENAI_API_KEY=sk-...
  AWS_ACCESS_KEY_ID=AKIA...
  AWS_SECRET_ACCESS_KEY=...
  AWS_DEFAULT_REGION=us-east-1
  GOOGLE_API_KEY=...
  ```
- Start the MemMachine backend (if not already running):

  ```shell
  # See main README for instructions on starting MemMachine
  ```

- Run the app:

  ```shell
  cd examples/simple_chatbot
  streamlit run app.py
  ```

The application will be available at http://localhost:8501.
- Select a Model: Choose your preferred LLM from the sidebar
- Choose Persona: Select or enter a persona name
- Enable MemMachine: Toggle "Enable MemMachine" to use persistent memory
- Start Chatting: Type messages and receive context-aware responses
- Create Session: Use "Create session" form in sidebar
- Switch Sessions: Click on session name in sidebar
- Rename Session: Click ⋯ menu next to session name
- Delete Session: Use delete option in session menu
- Enable "Enable MemMachine" checkbox
- Enable "🔄 Compare with control persona" checkbox
- Send a message to see side-by-side comparison:
- Left: MemMachine-enhanced response (with memory)
- Right: Control persona response (no memory)
- Expand "📋 Load Previous Memories" section
- Paste conversation history or upload a file
- Click "👁️ Preview" to review
- Click "💉 Ingest into MemMachine" to import
- Delete Profile: Removes all memories for the current persona
- Clear Chat: Clears current conversation history (keeps memories)
Edit `model_config.py` to:
- Add new models
- Change provider mappings
- Update display names
- Configure inference profile ARNs for provisioned throughput
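As a hedged sketch of what such an edit might look like (the actual structure of `model_config.py` may differ; the field names and model IDs below are illustrative assumptions, not the file's real contents), a model registry could map sidebar display names to provider metadata:

```python
# Hypothetical model registry; field names and model IDs are
# illustrative, not the actual contents of model_config.py.
MODELS = {
    "GPT-5 Mini": {"provider": "openai", "model_id": "gpt-5-mini"},
    "Gemini 2.5 Pro": {"provider": "google", "model_id": "gemini-2.5-pro"},
}

def add_model(display_name, provider, model_id):
    """Register a new model under its sidebar display name."""
    MODELS[display_name] = {"provider": provider, "model_id": model_id}

def lookup(display_name):
    """Return (provider, model_id) for a given display name."""
    cfg = MODELS[display_name]
    return cfg["provider"], cfg["model_id"]
```

Adding a model is then a one-line registry entry plus, if needed, a new provider branch in `llm.py`.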
- Styling: Add CSS to `styles.css`
- Prompt: Modify the prompt template in `gateway_client.py`
- UI: Customize Streamlit components in `app.py`
The application loads environment variables from:
- System environment variables
- `.env` file in the `examples/simple_chatbot/` directory (via `python-dotenv`)
| Variable | Description | Required | Default | Used By |
|---|---|---|---|---|
| `MEMORY_SERVER_URL` | MemMachine backend URL | Yes | `http://localhost:8080` | `gateway_client.py` |
| `ORG_ID` | Organization ID for v2 API | No | `default-org` | `gateway_client.py` |
| `OPENAI_API_KEY` | OpenAI API key | Yes* | - | `llm.py` |
| `AWS_ACCESS_KEY_ID` | AWS access key for Bedrock | Yes* | - | `llm.py` |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key for Bedrock | Yes* | - | `llm.py` |
| `AWS_DEFAULT_REGION` | AWS region for Bedrock | No | `us-east-1` | `llm.py` |
| `GOOGLE_API_KEY` | Google API key for Gemini | Yes* | - | `llm.py` |
| `BEDROCK_HAIKU_4_5_ARN` | Provisioned throughput ARN for Haiku 4.5 | No | - | `model_config.py` |
| `BEDROCK_SONNET_4_5_ARN` | Provisioned throughput ARN for Sonnet 4.5 | No | - | `model_config.py` |
| `BEDROCK_OPUS_4_ARN` | Provisioned throughput ARN for Opus 4 | No | - | `model_config.py` |
*At least one LLM provider API key is required (OpenAI, AWS Bedrock, or Google)
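The "at least one provider" requirement can be checked at startup with a small sketch like the following (the app's actual validation logic may differ):

```python
import os

# Env vars each provider needs before its models can be used.
PROVIDER_KEYS = {
    "OpenAI": ("OPENAI_API_KEY",),
    "AWS Bedrock": ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"),
    "Google": ("GOOGLE_API_KEY",),
}

def available_providers(env=None):
    """Return the providers whose required keys are all set."""
    env = os.environ if env is None else env
    return [name for name, keys in PROVIDER_KEYS.items()
            if all(env.get(k) for k in keys)]
```

If `available_providers()` returns an empty list, no model in the sidebar will work.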
Problem: "Failed to connect to MemMachine backend"
- Solution: Verify `MEMORY_SERVER_URL` is correct and the backend is running
- Check firewall/network settings
Problem: "Invalid token" or authentication failures
- OpenAI: Verify `OPENAI_API_KEY` is correct and has credits
- AWS Bedrock: Check that `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are correct and have Bedrock access permissions
- Google: Verify `GOOGLE_API_KEY` is valid and has Gemini API access
Problem: Responses don't seem personalized
- Verify "Enable MemMachine" checkbox is enabled
- Check that memories are being stored (use memory import/preview)
- Verify backend is processing requests correctly
- Check that `MEMORY_SERVER_URL` is correct and the backend is accessible
- Verify `ORG_ID` is set (defaults to "default-org" if not set)
Problem: Selected model not working
- Verify API keys for that provider are set
- Check model ID spelling in `model_config.py`
- For Bedrock models: verify the model is available in your AWS region
```
User Input → Gateway Client → MemMachine Backend
                                      ↓
                          Memory Search & Storage
                                      ↓
                          Context-Enhanced Query
                                      ↓
               LLM Provider (OpenAI/Bedrock/Gemini)
                                      ↓
                           Personalized Response
```
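This flow can be sketched in Python as a single chat turn; note that `search_memories`, `add_memory`, and `complete` below are hypothetical method names used for illustration, not the actual APIs in `gateway_client.py` or `llm.py`:

```python
def chat_turn(gateway, llm, user_id, message):
    """One request/response cycle: search memory, query the LLM, store the result.

    `gateway.search_memories`, `gateway.add_memory`, and `llm.complete`
    are hypothetical method names, not the app's real API.
    """
    # 1. Retrieve memories relevant to the new message.
    memories = gateway.search_memories(user_id=user_id, query=message)

    # 2. Build a context-enhanced prompt from what was retrieved.
    context = "\n".join(m["content"] for m in memories)
    prompt = f"Known about this user:\n{context}\n\nUser: {message}"

    # 3. Send the enriched prompt to the selected LLM provider.
    reply = llm.complete(prompt)

    # 4. Store the new exchange so later turns can recall it.
    gateway.add_memory(user_id=user_id,
                       content=f"user: {message}\nassistant: {reply}")
    return reply
```

The control persona in comparison mode simply skips steps 1 and 4.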
- `app.py`: Main Streamlit UI and session management
- `llm.py`: Handles communication with different LLM providers
- `gateway_client.py`: MemMachine API integration
- `model_config.py`: Model and provider configuration
This example uses MemMachine's v2 API:
- User messages are automatically ingested via the `/api/v2/memories` endpoint
- Context is retrieved via the `/api/v2/memories/search` endpoint
- Episodic and semantic memory types are supported
- Uses `org_id` for organization scoping (set via the `ORG_ID` env var)
- Project ID is dynamically generated per user, not set via environment variable
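To illustrate, the request shapes might look like the following sketch. The endpoint paths and the `project_{user_id}` convention come from the notes above, but the HTTP methods and payload field names are assumptions, not the confirmed v2 API schema:

```python
import os

BASE_URL = os.environ.get("MEMORY_SERVER_URL", "http://localhost:8080")
ORG_ID = os.environ.get("ORG_ID", "default-org")

def ingest_request(user_id, content):
    """Build a request for storing a message (payload fields assumed)."""
    return ("POST", f"{BASE_URL}/api/v2/memories", {
        "org_id": ORG_ID,
        "project_id": f"project_{user_id}",  # per-user project, per the note above
        "content": content,
    })

def search_request(user_id, query, limit=5):
    """Build a request for retrieving context (payload fields assumed)."""
    return ("POST", f"{BASE_URL}/api/v2/memories/search", {
        "org_id": ORG_ID,
        "project_id": f"project_{user_id}",
        "query": query,
        "limit": limit,
    })
```

See `gateway_client.py` for how the app actually constructs these calls.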
For models with provisioned throughput, set the inference profile ARN via environment variables:
```shell
export BEDROCK_HAIKU_4_5_ARN="arn:aws:bedrock:us-east-1:..."
export BEDROCK_SONNET_4_5_ARN="arn:aws:bedrock:us-east-1:..."
export BEDROCK_OPUS_4_ARN="arn:aws:bedrock:us-east-1:..."
```

Or in your `.env` file:
```
BEDROCK_HAIKU_4_5_ARN=arn:aws:bedrock:us-east-1:...
BEDROCK_SONNET_4_5_ARN=arn:aws:bedrock:us-east-1:...
BEDROCK_OPUS_4_ARN=arn:aws:bedrock:us-east-1:...
```

The app will automatically use the ARN instead of the model ID when available.
Create a custom persona by entering a name in the "Or enter your name" field:
- Each persona maintains its own memory profile, completely isolated from all others
- Switch between personas to see different memory contexts
Supported formats for memory import:
- Plain text conversations
- Markdown files
- JSON files
- ChatGPT export formats
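A minimal sketch of normalizing such an import into role/content pairs (the app's actual parser and the ChatGPT export schema may differ; this assumes a flat list of message objects, while real ChatGPT exports are more deeply nested):

```python
import json

def normalize_messages(raw):
    """Flatten a JSON conversation export into (role, text) pairs.

    Assumes a flat list of {"role": ..., "content": ...} objects —
    a simplification of real export formats.
    """
    messages = json.loads(raw) if isinstance(raw, str) else raw
    return [(m.get("role", "user"), m["content"].strip())
            for m in messages if m.get("content")]
```

Once normalized, each pair can be ingested into MemMachine as a separate memory.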
When improving this example:
- Maintain backward compatibility with existing configurations
- Add error handling for new features
- Update this README with new features
- Test with multiple LLM providers
See main project LICENSE file.
For issues or questions:
- GitHub Issues: MemMachine Repository
- Discord: MemMachine Community
- Documentation: MemMachine Docs