Prerequisites
Before you begin the installation and configuration of MemMachine, make sure your local environment is ready: Ollama must be installed and the necessary models downloaded.

1. Ollama Service
MemMachine connects directly to Ollama using its local API, which must be running in the background.

- Installation: If you do not yet have Ollama installed, please follow the official setup guide.
  - Resource: Download Ollama
- Start the Service: Once installed, start the Ollama service to make the local API available. Open your terminal or command prompt and run:
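```bash
ollama serve   # starts the local Ollama API server (default: http://localhost:11434)
```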
- Verification: You can confirm the service is running successfully by opening your web browser and navigating to http://localhost:11434. You should see a confirmation message ("Ollama is running") if the service is active.
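You can also check from the terminal; Ollama answers with the same short confirmation message:

```bash
curl http://localhost:11434   # expected response: "Ollama is running"
```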
2. Required Ollama Models
MemMachine requires two types of models to function: a Large Language Model (LLM) for generative tasks and an embedding model for converting text into vectors (e.g., for retrieval-augmented generation). You must download these models to your local Ollama repository before starting MemMachine.

- Download Models: Use the `ollama pull` command to download the models you want.
| Model Type | Example Model ID | Command to Run |
|---|---|---|
| Large Language Model (LLM) | Llama 3 | ollama pull llama3 |
| Embedding Model | Nomic Embed Text | ollama pull nomic-embed-text |
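For example, to download both recommended models from the table above:

```bash
ollama pull llama3            # LLM for generative tasks
ollama pull nomic-embed-text  # embedding model for vectorizing text
```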
You can choose any compatible LLM (such as `mixtral`, `gemma`, etc.) and any embedding model available on Ollama, but the examples above are recommended starting points.

- View Downloaded Models: To see a list of all models currently available in your local Ollama repository, run:
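```bash
ollama list   # lists locally downloaded models and their sizes
```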
Installation: QuickStart Configuration
The installation script will automatically guide you through setting up your Large Language Model (LLM) provider. When prompted to choose a provider, select Ollama. If you are unsure about model selection, simply press Enter at the respective prompts to accept the recommended defaults.
Manually Configuring MemMachine to use Ollama
To manually configure MemMachine to use your local Ollama instance, define your resources in the `resources` section of your `cfg.yml` file, pointing them to your local Ollama API endpoint (usually http://host.docker.internal:11434/v1 if MemMachine is running in Docker).
1. Define Ollama Resources
Add or update the `resources` block in your `cfg.yml`. We will use the `openai-chat-completions` provider for the LLM and the `openai` provider for embeddings, since Ollama exposes an OpenAI-compatible API.
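The exact schema depends on your MemMachine version; the sketch below is an illustration only. The resource IDs (`ollama-llm`, `ollama-embedder`) and key names such as `config`, `base_url`, `api_key`, and `model` are assumptions, not taken from the MemMachine reference:

```yaml
resources:
  ollama-llm:                     # hypothetical resource ID; choose your own
    provider: openai-chat-completions
    config:
      base_url: http://host.docker.internal:11434/v1
      api_key: ollama             # Ollama ignores the key, but a placeholder is often required
      model: llama3
  ollama-embedder:                # hypothetical resource ID
    provider: openai
    config:
      base_url: http://host.docker.internal:11434/v1
      api_key: ollama
      model: nomic-embed-text
```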
2. Update Memory Configuration
Now, reference these resource IDs in your `episodic_memory` and `semantic_memory` sections.
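As a minimal sketch (the field names `llm` and `embedder` under each memory section are assumptions; adapt them to your version's schema), the references could look like this:

```yaml
episodic_memory:
  llm: ollama-llm             # must match the resource ID defined above
  embedder: ollama-embedder

semantic_memory:
  llm: ollama-llm
  embedder: ollama-embedder
```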
Make sure to restart the MemMachine server for these changes to take effect.
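For example, if you run MemMachine with Docker Compose (the service name `memmachine` below is an assumption), a restart might look like:

```bash
docker compose restart memmachine
```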

