Test MemMachine in your Environment
Ready to see MemMachine in action? Click any link below to go directly to the MemMachine Examples Repo. You can download and deploy the following agents straight into your own environment to quickly test MemMachine and explore how it shines in different use cases!

- Default Agent: General-purpose AI assistant for any chatbot
- CRM Agent: An agent targeting Customer Relationship Management
- Financial Analyst Agent: Financial analysis and reporting
- Healthcare Agent: Healthcare assistant for patient management
- Streamlit Frontend: Web-based testing interface for all agents
Advice received from using these examples is AI-generated, and should not be considered the same as advice received from an expert in any given field.
Installation
Getting started is a breeze! We’ll just need to set up your local environment and grab a few necessary libraries.

1. Create a Virtual Environment: We highly recommend using a virtual environment to keep your project dependencies tidy. You can create one with Python’s built-in `venv` module.
2. Activate Your Environment: Once created, you’ll need to activate it so we can install the necessary packages (the activation command differs between macOS/Linux and Windows).
3. Install the Libraries: Now, let’s install the core dependencies for our agents. We’ll grab `uvicorn`, `requests`, `fastapi`, and `streamlit` so you’re ready to go!
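A minimal sketch of those three steps, assuming a Unix-like shell (the environment name `.venv` is an arbitrary choice):

```shell
# 1. Create a virtual environment (the name .venv is an arbitrary choice)
python3 -m venv .venv

# 2. Activate it
source .venv/bin/activate      # macOS/Linux
# .venv\Scripts\activate       # Windows equivalent

# 3. Install the core dependencies for the example agents
python -m pip install uvicorn requests fastapi streamlit
```

On Windows, run the commented activation command instead of `source`.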
Connecting to MemMachine
Start MemMachine by either running the Python file or the Docker container. These example agents all use the REST API from MemMachine’s `app.py`, but you can also integrate using the MCP server.

Be sure to check out the README file in the examples folder for more details on how to make the most of our Example Agents.
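As a quick connectivity check (a sketch, not part of the examples): if the backend is served by FastAPI, its auto-generated docs page should respond once it is running. The URL below assumes the `MEMORY_BACKEND_URL` default of `http://localhost:8080`; adjust it to your deployment.

```shell
# Print the HTTP status of the backend's auto-generated docs page.
# The host and port are assumptions based on the MEMORY_BACKEND_URL default.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/docs
```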
Complete .env file example
For ease of management, instead of setting each environment variable individually in your terminal, you can create a `.env` file in the root directory of your project. This file contains all the environment variables needed to run the agents; see the environment-variable table below for each variable’s purpose and default.

Available Agents
We offer four available agents to choose from, each tailored to different use cases and functionalities. You can run any of these agents by executing their respective Python files. Each agent is designed to interact with the MemMachine backend for memory storage and retrieval.

Default Agent
- File Name: example_server.py
- Purpose: General-purpose AI assistant for any chatbot
- Port: 8000 (configurable via EXAMPLE_SERVER_PORT)
- Features: Basic memory storage and retrieval
- Use Case: General conversations and information management
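For instance, the Default Agent can be launched directly; the port override shown is optional and uses the `EXAMPLE_SERVER_PORT` variable from the table below:

```shell
# Run the Default Agent on its default port (8000)
python example_server.py

# Or pick a different port via the environment variable
EXAMPLE_SERVER_PORT=8001 python example_server.py
```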
Agent Quick Start

Prerequisites
- Python 3.12+
- FastAPI
- Requests library
- MemMachine backend running
- Environment variables configured

Agent endpoints:
- Default: http://localhost:8000
- CRM: http://localhost:8000 (when running the CRM server)
- Financial: http://localhost:8000 (when running the Financial server)
- Frontend: http://localhost:8502 (when running the Streamlit app)
Using the CRM Agent in Slack

To enable Slack integration, you will need to install the `slack_sdk` library (`pip install slack_sdk`). You will also need to run the Python Slack server file found in our examples directory.

Using the Streamlit Frontend for Testing
The Streamlit frontend provides an interactive web interface for testing all agents and their memory capabilities.

Starting the Frontend

Prerequisites:
- MemMachine backend running (see main README)
- At least one agent server running (CRM, Financial, or Default)
- Required environment variables set

Once running, the frontend is available at http://localhost:8502.
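The frontend’s entry point is the `app.py` listed among the frontend files later in this section; a typical launch, with the port chosen to match the default URL above, looks like:

```shell
# Launch the Streamlit frontend on port 8502
streamlit run app.py --server.port 8502
```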
Frontend Features
Model Configuration
- Model Selection: Choose from various LLM providers (OpenAI, Anthropic, DeepSeek, Meta, Mistral)
- API Key Management: Configure API keys for different providers
- Model Parameters: Adjust temperature, max tokens, and other settings
- Persona Management: Create and manage different user personas
- Memory Storage: Test memory storage and retrieval
- Context Search: Search through stored memories
- Profile Management: View and manage user profiles
- Real-time Chat: Test conversations with different agents
- Memory Integration: See how agents use stored memories
- Response Analysis: Compare responses with and without memory context
- Rationale Display: View how personas influence responses
Testing Workflow

1. Start Services:
   - Terminal 1: Start MemMachine backend
   - Terminal 2: Start an agent (e.g., CRM)
   - Terminal 3: Start the frontend
2. Configure the Frontend:
   - Set the CRM Server URL (default: http://localhost:8000)
   - Select your preferred model and provider
   - Enter your API key
3. Test Memory Operations:
   - Create a new persona or use existing ones
   - Send messages to test memory storage
   - Use search functionality to retrieve memories
   - Test different conversation patterns
4. Analyze Results:
   - View memory storage logs
   - Compare responses with/without memory context
   - Check persona influence on responses
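Step 1 of the workflow can be sketched as three shell commands. The agent file name comes from this page (`example_server.py`, used here in place of a CRM server); the backend is started by running MemMachine’s `app.py` (or its Docker container), and the frontend’s `app.py` is a separate file in the frontend directory:

```shell
# Terminal 1: start the MemMachine backend (run from the MemMachine directory)
python app.py

# Terminal 2: start an agent server, e.g. the Default Agent
python example_server.py

# Terminal 3: start the Streamlit frontend (run from the frontend directory)
streamlit run app.py --server.port 8502
```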
Troubleshooting
- Connection Refused: Ensure the agent server is running
- API Key Errors: Verify your API keys are correct
- Memory Not Storing: Check that the MemMachine backend is running
- Model Not Responding: Verify model selection and API key
For deeper troubleshooting, run with debug logging enabled.
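Using the `LOG_LEVEL` variable from the environment-variable table below, for example:

```shell
# Enable verbose logging for a single run of an agent
LOG_LEVEL=DEBUG python example_server.py
```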
Frontend Files
- app.py: Main Streamlit application
- llm.py: LLM integration and chat functionality
- gateway_client.py: API client for agent communication
- model_config.py: Model configuration and provider mapping
- styles.css: Custom styling for the interface
| Variable | Description | Default |
|---|---|---|
| MEMORY_BACKEND_URL | URL of the MemMachine backend service | http://localhost:8080 |
| OPENAI_API_KEY | OpenAI API key for LLM access | Required |
| EXAMPLE_SERVER_PORT | Port for the example server | 8000 |
| CRM_PORT | Port for the CRM server | 8000 |
| FINANCIAL_PORT | Port for the financial analyst server | 8000 |
| HEALTH_PORT | Port for the health check endpoint | 8000 |
| LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO |
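Collected into a `.env` file, the variables above might look like this (all values are placeholders to adjust for your deployment; the API key shown is not real):

```shell
# .env — example values only
MEMORY_BACKEND_URL=http://localhost:8080
OPENAI_API_KEY=sk-your-key-here
EXAMPLE_SERVER_PORT=8000
CRM_PORT=8000
FINANCIAL_PORT=8000
HEALTH_PORT=8000
LOG_LEVEL=INFO
```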
Each example agent exercises MemMachine’s core memory capabilities:
- Storing conversation episodes as memories
- Retrieving relevant context for queries
- Using profile information for personalized responses
- Maintaining conversation history and context