Overview
The MemMachine LangChain integration implements the `BaseMemory` interface, allowing you to use MemMachine as a persistent memory backend. This enables:
- Persistent Memory: Conversations and context persist across multiple sessions.
- Semantic Search: Retrieve relevant memories based on semantic similarity.
- Context Scoping: Automatic filtering by `user_id`, `agent_id`, and `session_id`.
- Episodic & Semantic Memory: Access to both conversation history and extracted knowledge.
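To illustrate the `BaseMemory` shape the integration implements, here is a minimal, self-contained sketch of a memory class exposing the same two-method surface (`load_memory_variables` / `save_context`), with an in-memory list standing in for the MemMachine backend. The class and field names here are illustrative, not the integration's actual API.

```python
# Minimal sketch of a BaseMemory-style class (illustrative only):
# a real integration would persist turns to the MemMachine server
# instead of an in-process list.

class SketchMemory:
    """Stores conversation turns; stands in for a MemMachine-backed store."""

    def __init__(self, session_id=None):
        self.session_id = session_id  # scoping key, as in the real integration
        self._turns = []              # stand-in for server-side persistence

    def load_memory_variables(self, inputs):
        # Return prior turns under the "history" key, as LangChain memory does.
        history = "\n".join(f"{role}: {text}" for role, text in self._turns)
        return {"history": history}

    def save_context(self, inputs, outputs):
        # Record the latest human/AI exchange.
        self._turns.append(("Human", inputs["input"]))
        self._turns.append(("AI", outputs["output"]))

memory = SketchMemory(session_id="demo")
memory.save_context({"input": "Hi, I'm Ada."}, {"output": "Hello Ada!"})
print(memory.load_memory_variables({})["history"])
```

Because the real backend is a server, `load_memory_variables` in the actual integration returns memories that survive process restarts, which is the "persistent memory" property listed above.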
Configuration
The integration can be managed via environment variables or constructor parameters.

| Parameter | Environment Variable | Default | Description |
|---|---|---|---|
| `base_url` | `MEMORY_BACKEND_URL` | `http://localhost:8080` | MemMachine server URL |
| `org_id` | `LANGCHAIN_ORG_ID` | `langchain_org` | Organization identifier |
| `project_id` | `LANGCHAIN_PROJECT_ID` | `langchain_project` | Project identifier |
| `user_id` | `LANGCHAIN_USER_ID` | `None` | Scopes memory to a specific user |
| `agent_id` | `LANGCHAIN_AGENT_ID` | `None` | Scopes memory to a specific agent |
| `session_id` | `LANGCHAIN_SESSION_ID` | `None` | Scopes memory to a specific session |
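When a setting can come from either source, a common convention is: explicit constructor argument wins, then the environment variable, then the default. A small sketch of that resolution logic (the helper name is ours, not part of the integration, and the precedence shown is an assumption):

```python
import os

def resolve_setting(explicit, env_var, default):
    """Constructor argument wins, then the environment variable, then the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

# No explicit value and the env var is set -> the env var wins.
os.environ["MEMORY_BACKEND_URL"] = "http://memmachine.internal:8080"
print(resolve_setting(None, "MEMORY_BACKEND_URL", "http://localhost:8080"))

# An explicit constructor value always wins.
print(resolve_setting("http://override:9000", "MEMORY_BACKEND_URL",
                      "http://localhost:8080"))
```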
Advanced Usage
Custom Prompt with Memory Context
You can use a `PromptTemplate` to explicitly include both conversation history and semantic facts (extracted context).
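For illustration, here is what such a template can look like; we render it with plain `str.format` so the example is self-contained, where a real LangChain app would wrap the same string in a `PromptTemplate`. The variable names `semantic_facts`, `history`, and `input` are our own choices, not names mandated by the integration.

```python
# Illustrative prompt combining both memory channels: extracted semantic
# facts and the raw conversation history. In LangChain you would wrap this
# string in a PromptTemplate; here we render it with str.format.
TEMPLATE = """You are a helpful assistant.

Known facts about the user:
{semantic_facts}

Conversation so far:
{history}

Human: {input}
AI:"""

prompt = TEMPLATE.format(
    semantic_facts="- Prefers concise answers",
    history="Human: Hi\nAI: Hello!",
    input="What did I just say?",
)
print(prompt)
```

Keeping facts and history in separate template slots lets the LLM distinguish durable knowledge about the user from the immediate dialogue context.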
Direct Memory Operations
For granular control, you can interface with the underlying memory storage directly.

Pro Tip: Use the `search_limit` parameter (default: 10) in the constructor to control the number of memory fragments retrieved during each interaction.

Requirements
- MemMachine Server: Must be reachable at the `MEMORY_BACKEND_URL`.
- LLM API Key: An `OPENAI_API_KEY` (or equivalent) must be configured in your environment or within the MemMachine `configuration.yml`.
- Python: 3.10 or higher.
- Framework: `langchain` and `memmachine`.

