This directory contains the integration between MemMachine and LangChain, providing persistent memory capabilities for LangChain applications.
The MemMachine integration for LangChain implements LangChain's `BaseMemory` interface, allowing you to use MemMachine as a memory backend for LangChain chains and agents. This enables:
- **Persistent Memory**: Conversations and context persist across sessions
- **Semantic Search**: Retrieve relevant memories based on semantic similarity
- **User Context**: Automatic filtering by `user_id`, `agent_id`, and `session_id`
- **Episodic & Semantic Memory**: Access to both conversation history and extracted knowledge
- MemMachine server running (default: http://localhost:8080)
- Python 3.10+
- Required packages:

  ```bash
  pip install langchain memmachine
  ```
For use with OpenAI LLMs:

```bash
pip install openai
```

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

from integrations.langchain.memory import MemMachineMemory

# Initialize MemMachine memory
memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="my_org",
    project_id="my_project",
    user_id="user123",
    session_id="session456",
)

# Create LLM
llm = OpenAI(temperature=0)

# Create conversation chain with MemMachine memory
chain = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)

# Use the chain
response = chain.run("Hello, my name is Alice")
response = chain.run("What's my name?")  # Will remember the previous interaction
```
```python
from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="my_org",
    project_id="my_project",
    user_id="user123",
    agent_id="agent456",
    session_id="session789",
    group_id="group1",
    search_limit=10,        # Number of memories to retrieve
    return_messages=False,  # Set to True to return LangChain message objects
)
```

The integration can be configured via environment variables or constructor parameters:
| Parameter | Environment Variable | Default | Description |
|---|---|---|---|
| `base_url` | `MEMORY_BACKEND_URL` | `http://localhost:8080` | MemMachine server URL |
| `org_id` | `LANGCHAIN_ORG_ID` | `langchain_org` | Organization ID |
| `project_id` | `LANGCHAIN_PROJECT_ID` | `langchain_project` | Project ID |
| `user_id` | `LANGCHAIN_USER_ID` | `None` | User identifier |
| `agent_id` | `LANGCHAIN_AGENT_ID` | `None` | Agent identifier |
| `session_id` | `LANGCHAIN_SESSION_ID` | `None` | Session identifier |
| `group_id` | `LANGCHAIN_GROUP_ID` | `None` | Group identifier |
| `search_limit` | - | `10` | Max memories to retrieve |
| `return_messages` | - | `False` | Return LangChain message objects |
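The precedence implied by the table (explicit constructor argument, then environment variable, then built-in default) can be sketched as follows. `resolve` is a hypothetical helper for illustration, not part of the integration's API:

```python
import os

def resolve(value, env_var, default):
    """Sketch of configuration precedence: an explicit argument wins,
    then the environment variable, then the built-in default."""
    if value is not None:
        return value
    return os.environ.get(env_var, default)

# Example: no argument given, environment variable set
os.environ["LANGCHAIN_ORG_ID"] = "acme"
org_id = resolve(None, "LANGCHAIN_ORG_ID", "langchain_org")  # "acme"
```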
When `save_context()` is called, the integration:

- Extracts the user input and AI output from the `inputs`/`outputs` dictionaries
- Stores the user message in MemMachine with `role="user"`
- Stores the AI response in MemMachine with `role="assistant"`
- Filters messages automatically by the memory instance's context (`user_id`, `agent_id`, `session_id`)
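A minimal sketch of that mapping, assuming the default `input`/`output` dictionary keys; `to_episodes` is a hypothetical illustration of the idea, not the integration's actual internals:

```python
def to_episodes(inputs: dict, outputs: dict) -> list[dict]:
    """Sketch: turn one chain turn into the two messages stored in MemMachine."""
    return [
        {"role": "user", "content": inputs.get("input", "")},
        {"role": "assistant", "content": outputs.get("output", "")},
    ]

episodes = to_episodes(
    {"input": "Hello, my name is Alice"},
    {"output": "Hi Alice!"},
)
```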
When `load_memory_variables()` is called, the integration:
- Builds a search query from the input (or uses a default query)
- Searches MemMachine for relevant episodic and semantic memories
- Formats episodic memories as conversation history
- Formats semantic memories as context facts
- Returns both as memory variables
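Building the search query from `inputs` can be sketched as below. `build_query` and the fallback text are hypothetical, though the candidate keys match those documented for `load_memory_variables()`:

```python
def build_query(inputs: dict, default: str = "recent conversation") -> str:
    """Sketch: use the first recognized input key, else a default query."""
    for key in ("input", "question", "query"):
        if key in inputs and inputs[key]:
            return str(inputs[key])
    return default

query = build_query({"question": "What's my name?"})  # "What's my name?"
```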
The memory provides two variables:

- `history`: Conversation history from episodic memory (formatted as "Human: ..." / "AI: ...")
- `memmachine_context`: Extracted facts from semantic memory (formatted as "feature: value")
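The "Human: ..." / "AI: ..." formatting can be sketched as follows (a hypothetical formatter, assuming stored roles of `user` and `assistant`):

```python
def format_history(messages: list[dict]) -> str:
    """Sketch: render stored messages as a LangChain-style transcript."""
    prefixes = {"user": "Human", "assistant": "AI"}
    return "\n".join(
        f"{prefixes.get(m['role'], m['role'])}: {m['content']}" for m in messages
    )

history = format_history([
    {"role": "user", "content": "My name is Bob"},
    {"role": "assistant", "content": "Nice to meet you, Bob!"},
])
```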
**Basic conversation:**

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="demo_org",
    project_id="demo_project",
    user_id="user123",
)

llm = OpenAI(temperature=0)
chain = ConversationChain(llm=llm, memory=memory)

# First interaction
chain.run("My name is Bob and I like Python")

# Second interaction - will remember the name
chain.run("What's my name?")
```

**Custom prompt with memory context:**

```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="demo_org",
    project_id="demo_project",
    user_id="user123",
)

prompt = PromptTemplate(
    input_variables=["history", "memmachine_context", "input"],
    template="""You are a helpful assistant with access to the user's memory.

Relevant context from memory:
{memmachine_context}

Conversation history:
{history}

User: {input}
Assistant:""",
)

llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

response = chain.run("What do I like?")
```

**Direct memory access:**

```python
from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="demo_org",
    project_id="demo_project",
    user_id="user123",
)

# Add memory directly
memory._memory.add(
    content="I prefer working in the morning",
    role="user",
)

# Search memories
results = memory.load_memory_variables({"input": "What are my preferences?"})
print(results["history"])
print(results["memmachine_context"])
```

**Running the bundled example:**

```bash
# Set environment variables (optional)
export MEMORY_BACKEND_URL="http://localhost:8080"
export LANGCHAIN_ORG_ID="my_org"
export LANGCHAIN_PROJECT_ID="my_project"
export LANGCHAIN_USER_ID="user123"

# Run the example
cd integrations/langchain
python example.py
```

**`MemMachineMemory(...)`**

Initialize MemMachine memory for LangChain.
Parameters:

- `base_url` (str): Base URL for the MemMachine server
- `org_id` (str): Organization ID
- `project_id` (str): Project ID
- `user_id` (str, optional): User identifier
- `agent_id` (str, optional): Agent identifier
- `session_id` (str, optional): Session identifier
- `group_id` (str, optional): Group identifier
- `search_limit` (int): Maximum number of memories to retrieve (default: 10)
- `client` (MemMachineClient, optional): Pre-initialized client
- `return_messages` (bool): Return LangChain message objects (default: False)
**`load_memory_variables(inputs)`**

Load memory variables from MemMachine.

Parameters:

- `inputs`: Input dictionary (may contain "input", "question", "query", or "messages")

Returns:

- Dictionary with keys:
  - `history`: Conversation history string or list of messages
  - `memmachine_context`: Semantic memory context string
**`save_context(inputs, outputs)`**

Save conversation context to MemMachine.

Parameters:

- `inputs`: Input dictionary (typically contains the user message)
- `outputs`: Output dictionary (typically contains the AI response)

**`clear()`**

Clear memory (note: this does not delete data from MemMachine; it only resets local state).
With `ConversationChain`:

```python
from langchain.chains import ConversationChain

from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(...)
chain = ConversationChain(llm=llm, memory=memory)
```

With `LLMChain` and a custom prompt:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(...)
prompt = PromptTemplate(
    input_variables=["history", "memmachine_context", "input"],
    template="...",
)
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
```

With an agent:

```python
from langchain.agents import initialize_agent

from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(...)
agent = initialize_agent(
    tools=[],
    llm=llm,
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)
```
- **Connection Error**: Ensure the MemMachine server is running at the specified URL:

  ```bash
  curl http://localhost:8080/health
  ```

- **Import Error**: Install the required dependencies:

  ```bash
  pip install langchain memmachine
  ```

- **Memory Not Persisting**: Check that `user_id` and `session_id` are set correctly
- **No Search Results**:
  - Ensure memories have been added first
  - Check that `search_limit` is appropriate
  - Verify that the context filters (`user_id`, `agent_id`, `session_id`) match the stored memories
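As a scriptable alternative to `curl` for the connection check, a small Python sketch (assuming the `/health` endpoint shown above):

```python
import urllib.request

def server_is_up(base_url: str = "http://localhost:8080") -> bool:
    """Return True if the MemMachine /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failures, timeouts, and HTTP errors
        return False
```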
- MemMachine Documentation
- LangChain Memory Documentation
- LangGraph Integration - Similar integration for LangGraph
This integration follows the same license as MemMachine.