For the full source code and advanced implementation details, see the LangChain integration section of the official repository.

Overview

The MemMachine LangChain integration implements LangChain's BaseMemory interface, allowing you to use MemMachine as a persistent memory backend. This enables:
  • Persistent Memory: Conversations and context persist across multiple sessions.
  • Semantic Search: Retrieve relevant memories based on semantic similarity.
  • Context Scoping: Automatic filtering by user_id, agent_id, and session_id.
  • Episodic & Semantic Memory: Access to both conversation history and extracted knowledge.
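
Under the hood, LangChain's BaseMemory interface reduces to three members: memory_variables (the prompt keys the memory supplies), load_memory_variables (called before the model runs), and save_context (called after). Here is a minimal in-memory sketch of that contract; only the three member names mirror the real interface, and the class itself is illustrative, not MemMachine's implementation:

```python
# Pure-Python sketch of LangChain's BaseMemory contract; no LangChain or
# MemMachine dependency. Only the three member names mirror the real
# interface -- the class itself is illustrative.

class InMemorySketch:
    def __init__(self):
        self._turns = []

    @property
    def memory_variables(self):
        # Keys this memory injects into the prompt.
        return ["history"]

    def load_memory_variables(self, inputs):
        # Called before the LLM runs; returns values for those keys.
        return {"history": "\n".join(self._turns)}

    def save_context(self, inputs, outputs):
        # Called after the LLM runs; records the exchange for next time.
        self._turns.append(f"User: {inputs['input']}")
        self._turns.append(f"Assistant: {outputs['output']}")

mem = InMemorySketch()
mem.save_context({"input": "Hello, my name is Alice"}, {"output": "Hi Alice!"})
print(mem.load_memory_variables({})["history"])
```

MemMachine's implementation fills these same hooks with calls to the memory server, which is what makes the stored context persistent and searchable.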

Configuration

The integration can be configured via environment variables or constructor parameters.

Parameter  | Environment Variable | Default               | Description
-----------|----------------------|-----------------------|-------------------------------------
base_url   | MEMORY_BACKEND_URL   | http://localhost:8080 | MemMachine server URL
org_id     | LANGCHAIN_ORG_ID     | langchain_org         | Organization identifier
project_id | LANGCHAIN_PROJECT_ID | langchain_project     | Project identifier
user_id    | LANGCHAIN_USER_ID    | None                  | Scopes memory to a specific user
agent_id   | LANGCHAIN_AGENT_ID   | None                  | Scopes memory to a specific agent
session_id | LANGCHAIN_SESSION_ID | None                  | Scopes memory to a specific session
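
Assuming the usual precedence (an explicit constructor argument wins, then the environment variable, then the documented default), the lookup can be sketched as follows; the resolve helper is illustrative, not part of the integration's API:

```python
import os

# Illustrative helper (not part of the integration's API): an explicit
# constructor argument wins, then the environment variable, then the default.
def resolve(param, env_var, default):
    if param is not None:
        return param
    return os.environ.get(env_var, default)

os.environ["LANGCHAIN_ORG_ID"] = "acme_org"  # simulate a configured env

org_id = resolve(None, "LANGCHAIN_ORG_ID", "langchain_org")      # env var wins
user_id = resolve("user123", "LANGCHAIN_USER_ID", None)          # explicit arg wins

print(org_id, user_id)  # acme_org user123
```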

1. Install Dependencies

Install the core LangChain framework along with the MemMachine client:
pip install langchain memmachine-client
2. Initialize MemMachine Memory

Initialize the memory class with your project configuration.
from integrations.langchain.memory import MemMachineMemory

memory = MemMachineMemory(
    base_url="http://localhost:8080",
    org_id="my_org",
    project_id="my_project",
    user_id="user123",
    session_id="session456",
)
3. Integrate with a Chain

Pass the memory instance to your LangChain ConversationChain or LLMChain.
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)

# Create conversation chain with MemMachine memory
chain = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)

# The chain will now remember context across runs
chain.run("Hello, my name is Alice")
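
On every run, the chain drives its memory through the same cycle: load context, build the prompt, call the model, save the exchange. The following sketch makes that cycle concrete with a stand-in model instead of OpenAI (fake_llm, SketchMemory, and run are illustrative names, not real APIs):

```python
# Minimal sketch of what a LangChain chain does with its memory on each
# run: load context, build the prompt, call the model, save the exchange.
# fake_llm, SketchMemory, and run are illustrative stand-ins, not real APIs.

def fake_llm(prompt):
    # Stand-in for OpenAI; answers based only on what is in the prompt.
    if "my name is Alice" in prompt:
        return "Your name is Alice."
    return "I don't know your name yet."

class SketchMemory:
    def __init__(self):
        self.turns = []

    def load_memory_variables(self, inputs):
        return {"history": "\n".join(self.turns)}

    def save_context(self, inputs, outputs):
        self.turns.append(f"Human: {inputs['input']}")
        self.turns.append(f"AI: {outputs['output']}")

def run(memory, user_input):
    # 1. Pull prior context from memory before calling the model.
    history = memory.load_memory_variables({"input": user_input})["history"]
    prompt = f"{history}\nHuman: {user_input}\nAI:"
    # 2. Call the model with the memory-augmented prompt.
    answer = fake_llm(prompt)
    # 3. Persist the exchange so the next run can see it.
    memory.save_context({"input": user_input}, {"output": answer})
    return answer

memory = SketchMemory()
run(memory, "Hello, my name is Alice")
print(run(memory, "What is my name?"))  # Your name is Alice.
```

The second run answers correctly only because the first exchange was saved; with MemMachine as the backend, that saved context also survives process restarts.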

Advanced Usage

Custom Prompt with Memory Context

You can use a PromptTemplate to explicitly include both conversation history and semantic facts (extracted context).
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["history", "memmachine_context", "input"],
    template="""You are a helpful assistant with access to the user's memory.

Relevant context from memory:
{memmachine_context}

Conversation history:
{history}

User: {input}
Assistant:""",
)

chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
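
One easy mistake with custom prompts: the keys returned by load_memory_variables (here history and memmachine_context) plus the chain's input key must match the template's placeholders, or the chain cannot fill the prompt. The stdlib string.Formatter can list the placeholders a template actually uses:

```python
from string import Formatter

template = """Relevant context from memory:
{memmachine_context}

Conversation history:
{history}

User: {input}
Assistant:"""

# Extract the placeholder names the template actually references.
expected = {name for _, name, _, _ in Formatter().parse(template) if name}
print(sorted(expected))  # ['history', 'input', 'memmachine_context']
```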

Direct Memory Operations

For granular control, you can work with the underlying memory store directly. Note that _memory is a private attribute, so this interface may change between versions.
# Manually add a specific fact
memory._memory.add(
    content="I prefer working in the morning",
    role="user",
)

# Search memories manually
results = memory.load_memory_variables({"input": "What are my preferences?"})
print(results["memmachine_context"])
Pro Tip: Use the search_limit parameter (default: 10) in the constructor to control the number of memory fragments retrieved during each interaction.

Requirements

  • MemMachine Server: Must be reachable at the MEMORY_BACKEND_URL.
  • LLM API Key: An OPENAI_API_KEY (or equivalent) must be configured in your environment or within the MemMachine configuration.yml.
  • Python: 3.10 or higher.
  • Framework: The langchain and memmachine-client packages installed above.