MemMachine offers flexible installation options to meet diverse development needs. Choose the method that best fits your environment, whether you are starting fresh or integrating with specific AI backends.

Standard Installation

Access the main guides for installing MemMachine using your preferred method:

Using pip or Source

Quickest Start for Most Developers. The simplest, one-line setup using Python’s package manager. Recommended for all users integrating MemMachine into an existing project.
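As a sketch, the one-line setup might look like this (the PyPI package name is assumed here; check the install guide for the exact name):

```shell
# Install MemMachine from PyPI (package name assumed; see the install guide)
pip install memmachine
```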

Configuration Wizard Guide

Guided Manual Installation. A detailed guide to using the configuration wizard to assist with setting up MemMachine by hand. Ideal for contributors and active development.

Cloud Deployments

Deploy MemMachine to your preferred infrastructure using our verified orchestration templates and containerized workflows:

AWS CloudFormation

Automate Your AWS Infrastructure. Deploy a full MemMachine stack into your VPC using our CloudFormation template. This setup orchestrates EC2, networking, and storage for a production-ready environment.

Helm Chart (Kubernetes)

Container Orchestration at Scale. Use our official Helm chart to deploy MemMachine, PostgreSQL, and Neo4j into any Kubernetes cluster. Supports external database integration and fine-grained resource limits.
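A typical Helm-based install follows the usual add-repo-then-install flow. The repository URL, chart name, and values keys below are placeholders, not the official ones; the Helm guide has the real values:

```shell
# Add the MemMachine chart repository and install into its own namespace.
# Repo URL, chart name, and --set keys are placeholders -- see the Helm guide.
helm repo add memmachine https://example.com/charts
helm repo update
helm install memmachine memmachine/memmachine \
  --namespace memmachine --create-namespace \
  --set postgresql.enabled=true \
  --set neo4j.enabled=true
```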

Configurations

Find quickstart guides and specific details for setting up MemMachine’s backend with popular AI platforms:

Using Ollama Models

Self-Host Your AI Backend. Instructions for configuring MemMachine to use Ollama for local, open-source LLM and embedding models, maximizing privacy and control.
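In outline, a local Ollama backend means pulling a chat model and an embedding model, then pointing MemMachine at Ollama's default endpoint. The model names below are illustrative; see the guide for MemMachine's actual configuration keys:

```shell
# Pull an open-source LLM and an embedding model (example model names),
# then start the Ollama server, which listens on http://localhost:11434 by default.
ollama pull llama3
ollama pull nomic-embed-text
ollama serve
```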

Using AWS Bedrock

Scale with AWS Managed Services. Learn how to leverage AWS Bedrock’s enterprise-ready models for reliable, high-volume memory management and processing.

Integrations

Find specific configuration and integration details for using MemMachine in popular AI applications and workflows:

OpenAI GPTStore

Persistent Memory for Custom GPTs. Our most popular integration. Learn how to set up MemMachine as a Custom Action to furnish your GPTs with reliable, long-term memory and accurate user history.

OpenClaw

Automated Agent Recall. Ground your OpenClaw agents with persistent memory, auto-capture of exchanges, and CLI-based memory management.

LangChain

Standard Framework Support. Use MemMachine as a persistent BaseMemory backend for your chains, agents, and custom document loaders.
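To illustrate the BaseMemory contract MemMachine plugs into, here is a minimal stdlib-only sketch of the four members a memory backend implements (`memory_variables`, `load_memory_variables`, `save_context`, `clear`). The `MemMachineClient` class is hypothetical, a stand-in for MemMachine's real client; an actual integration would subclass `langchain_core.memory.BaseMemory` and call MemMachine's API instead of an in-process list:

```python
# Sketch of a BaseMemory-shaped adapter. MemMachineClient is a hypothetical
# stand-in for MemMachine's real client; here it just keeps turns in memory.
from typing import Any


class MemMachineClient:
    """Hypothetical client; a real one would call the MemMachine service."""

    def __init__(self) -> None:
        self._turns: list[tuple[str, str]] = []

    def add_turn(self, user: str, assistant: str) -> None:
        self._turns.append((user, assistant))

    def search(self) -> str:
        # Render stored exchanges as a transcript for prompt injection.
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self._turns)


class MemMachineMemory:
    """Implements the BaseMemory contract: memory_variables,
    load_memory_variables, save_context, and clear."""

    memory_key = "history"

    def __init__(self, client: MemMachineClient) -> None:
        self.client = client

    @property
    def memory_variables(self) -> list[str]:
        return [self.memory_key]

    def load_memory_variables(self, inputs: dict[str, Any]) -> dict[str, str]:
        # Return prior exchanges so the chain can inject them into its prompt.
        return {self.memory_key: self.client.search()}

    def save_context(self, inputs: dict[str, Any], outputs: dict[str, Any]) -> None:
        self.client.add_turn(inputs.get("input", ""), outputs.get("output", ""))

    def clear(self) -> None:
        self.client._turns.clear()


memory = MemMachineMemory(MemMachineClient())
memory.save_context({"input": "My name is Ada."}, {"output": "Nice to meet you, Ada."})
print(memory.load_memory_variables({})["history"])
```

The chain calls `load_memory_variables` before each turn and `save_context` after it, so persistence lives entirely behind the client.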

Claude Code and MCP

Give Claude Real-Time Recall. Configure MemMachine’s Model Context Protocol (MCP) server as a Custom Tool to provide Claude with persistent user profiles.
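Registering an MCP server with Claude generally means adding an entry under `mcpServers` in your MCP configuration. The `command` and `args` below are placeholders, not MemMachine's actual server invocation; see the integration guide for the real one:

```json
{
  "mcpServers": {
    "memmachine": {
      "command": "memmachine-mcp",
      "args": []
    }
  }
}
```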

LangGraph

Master State and Workflow. A practical guide to integrating MemMachine into your graph workflows for managing complex agent handoffs.

CrewAI

Empower Agent Squads. Enable cross-agent knowledge sharing, user preference tracking, and context retention across multi-task workflows.

n8n

No-Code Memory for n8n. Install our community nodes to give your n8n AI agents persistent recall and deep context windowing.

Dify

Build State-of-the-Art LLM Apps. Add MemMachine as a Custom Tool Provider in Dify for your self-hosted or cloud agents.

Google ADK

Extend Google GenAI. Integrate MemMachine as a BaseMemoryService to empower Gemini agents to recall past interactions.

LlamaIndex

Data-Driven Memory. Add persistent context to your LlamaIndex chat engines, enabling agents to recall user preferences across diverse data sources.