Test MemMachine in your Environment

Ready to see MemMachine in action? Click any link below to go directly to the MemMachine Examples Repo. You can download and deploy the following agents straight into your own environment to quickly test MemMachine and explore how it shines in different use cases!
Advice received from these examples is AI-generated and should not be considered equivalent to advice received from an expert in any given field.
These agents are designed to showcase how to integrate with MemMachine’s memory services. Follow these simple steps to get an agent up and running in no time.

Installation

Getting started is a breeze! We’ll just need to set up your local environment and grab a few necessary libraries.
  1. Create a Virtual Environment: We highly recommend using a virtual environment to keep your project dependencies tidy. You can create one with this simple command:
    python -m venv .venv
    
  2. Activate Your Environment: Once created, you’ll need to activate it so we can install the necessary packages.
    • On macOS/Linux:
      source .venv/bin/activate
      
    • On Windows:
      .venv\Scripts\activate
      
  3. Install the Libraries: Now, let’s install the core dependencies for our agents. We’ll grab uvicorn, requests, fastapi, and streamlit so you’re ready to go!
    pip install "uvicorn[standard]" requests fastapi streamlit
    
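Once the install finishes, a quick sanity check (not part of the official setup) is to confirm that the four libraries are importable. The helper below uses only the standard library:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that are not importable."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# The agents rely on these four libraries from the install step above.
print(missing_packages(["uvicorn", "requests", "fastapi", "streamlit"]))
```

An empty list means everything installed correctly; any names printed should be re-installed with pip.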

Connecting to MemMachine

Start MemMachine by either running the Python file or the Docker container. These example agents all use the REST API from MemMachine’s app.py, but you can also integrate using the MCP server. Be sure to check out the README file in the examples folder for more details on how to make the most of our Example Agents.
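Before wiring up an agent, it can help to verify that the backend is actually reachable. This sketch assumes only that MemMachine is listening at MEMORY_BACKEND_URL (default http://localhost:8080); it does not assume any particular route, since any HTTP response at all proves the server is up:

```python
import os
import requests

BACKEND_URL = os.environ.get("MEMORY_BACKEND_URL", "http://localhost:8080")

def backend_reachable(url: str = BACKEND_URL) -> bool:
    """Return True if the MemMachine backend answers at all."""
    try:
        # Any HTTP response (even a 404) proves the server is listening.
        requests.get(url, timeout=3)
        return True
    except requests.RequestException:
        return False
```

If this returns False, start MemMachine (Python file or Docker container) before running any of the example agents.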

Available Agents

We offer four agents, each tailored to a different use case. You can run any of them by executing its Python file; every agent interacts with the MemMachine backend for memory storage and retrieval.
  • File Name: example_server.py
  • Purpose: General-purpose AI assistant for any chatbot
  • Port: 8000 (configurable via EXAMPLE_SERVER_PORT)
  • Features: Basic memory storage and retrieval
  • Use Case: General conversations and information management

Agent Quick Start

Prerequisites
  • Python 3.12+
  • FastAPI
  • Requests library
  • MemMachine backend running
  • Environment variables configured
Running an Agent

Set up the required environment variables:
MEMORY_BACKEND_URL="http://localhost:8080"
OPENAI_API_KEY="your-openai-api-key"
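A small guard at the top of an agent script can fail fast when these variables are missing. This is a convenience sketch, not code from the examples themselves:

```python
import os

def check_env(required, env=None):
    """Return the names of required environment variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

missing = check_env(["MEMORY_BACKEND_URL", "OPENAI_API_KEY"])
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Running the check before starting a server gives a clear error message instead of a confusing connection failure later.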
Run a specific agent:
# Default agent
python example_server.py

# CRM agent
cd crm
python crm_server.py

# Financial analyst agent
cd financial_analyst
python financial_server.py
Access the API:
  • Default: http://localhost:8000
  • CRM: http://localhost:8000 (when running CRM server)
  • Financial: http://localhost:8000 (when running Financial server)
  • Frontend: http://localhost:8502 (when running Streamlit app)
Using the CRM Agent in Slack

To enable Slack integration, you will need to install the slack_sdk library. You can do this by running the following command:
pip install slack_sdk
You will also need to run the Python Slack server file found in our examples directory:
python slack_server.py

Using the Streamlit Frontend for Testing

The Streamlit frontend provides an interactive web interface for testing all agents and their memory capabilities.

Starting the Frontend

Prerequisites:
  • MemMachine backend running (see main README)
  • At least one agent server running (CRM, Financial, or Default)
  • Required environment variables set
Run the frontend:
cd agents/frontend
streamlit run app.py
Access the interface: Open your browser to http://localhost:8502

Frontend Features

Model Configuration
  • Model Selection: Choose from various LLM providers (OpenAI, Anthropic, DeepSeek, Meta, Mistral)
  • API Key Management: Configure API keys for different providers
  • Model Parameters: Adjust temperature, max tokens, and other settings
Memory Testing
  • Persona Management: Create and manage different user personas
  • Memory Storage: Test memory storage and retrieval
  • Context Search: Search through stored memories
  • Profile Management: View and manage user profiles
Agent Testing
  • Real-time Chat: Test conversations with different agents
  • Memory Integration: See how agents use stored memories
  • Response Analysis: Compare responses with and without memory context
  • Rationale Display: View how personas influence responses
Testing Workflow
  1. Start Services:
    • Terminal 1: Start MemMachine backend
      cd memmachine/src
      python -m server.app
      
    • Terminal 2: Start an agent (e.g., CRM)
      cd agents/crm
      python crm_server.py
      
    • Terminal 3: Start the frontend
      cd agents/frontend
      streamlit run app.py
      
  2. Configure the Frontend:
    • Set the CRM Server URL (default: http://localhost:8000)
    • Select your preferred model and provider
    • Enter your API key
  3. Test Memory Operations:
    • Create a new persona or use existing ones
    • Send messages to test memory storage
    • Use search functionality to retrieve memories
    • Test different conversation patterns
  4. Analyze Results:
    • View memory storage logs
    • Compare responses with/without memory context
    • Check persona influence on responses
Environment Variables for Frontend
# Required for frontend functionality
CRM_SERVER_URL=http://localhost:8000
MODEL_API_KEY=your-openai-api-key
OPENAI_API_KEY=your-openai-api-key

# Optional: For other providers
ANTHROPIC_API_KEY=your-anthropic-key
AWS_ACCESS_KEY_ID=your-aws-key
AWS_SECRET_ACCESS_KEY=your-aws-secret
Troubleshooting Frontend Issues

Common Issues:
  • Connection Refused: Ensure the agent server is running
  • API Key Errors: Verify your API keys are correct
  • Memory Not Storing: Check MemMachine backend is running
  • Model Not Responding: Verify model selection and API key
Debug Mode:
  • Run with debug logging
    LOG_LEVEL=DEBUG streamlit run app.py
    
Frontend Architecture

The frontend consists of:
  • app.py: Main Streamlit application
  • llm.py: LLM integration and chat functionality
  • gateway_client.py: API client for agent communication
  • model_config.py: Model configuration and provider mapping
  • styles.css: Custom styling for the interface
Configuration

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| MEMORY_BACKEND_URL | URL of the MemMachine backend service | http://localhost:8080 |
| OPENAI_API_KEY | OpenAI API key for LLM access | Required |
| EXAMPLE_SERVER_PORT | Port for example server | 8000 |
| CRM_PORT | Port for CRM server | 8000 |
| FINANCIAL_PORT | Port for financial analyst server | 8000 |
| HEALTH_PORT | Port for health check endpoint | 8000 |
| LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO |
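The defaults in the table above can be applied in one place when a script starts. This is a hedged sketch of how an agent might read them; the examples' actual configuration code may differ:

```python
import os

# Defaults taken directly from the environment variable table above.
DEFAULTS = {
    "MEMORY_BACKEND_URL": "http://localhost:8080",
    "EXAMPLE_SERVER_PORT": "8000",
    "CRM_PORT": "8000",
    "FINANCIAL_PORT": "8000",
    "HEALTH_PORT": "8000",
    "LOG_LEVEL": "INFO",
}

def load_config(env=None):
    """Merge the documented defaults with the process environment.

    OPENAI_API_KEY has no default (it is required), so it is only
    passed through when actually set.
    """
    env = os.environ if env is None else env
    config = {key: env.get(key, default) for key, default in DEFAULTS.items()}
    if "OPENAI_API_KEY" in env:
        config["OPENAI_API_KEY"] = env["OPENAI_API_KEY"]
    return config
```

Centralizing the defaults this way keeps the table and the code from drifting apart when a port changes.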
MemMachine Integration

All agents integrate with the MemMachine backend by:
  • Storing conversation episodes as memories
  • Retrieving relevant context for queries
  • Using profile information for personalized responses
  • Maintaining conversation history and context
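The cycle above can be sketched as a store-then-retrieve loop. Note that the endpoint paths (/memories, /memories/search) and payload fields below are illustrative assumptions, not MemMachine's documented API; check app.py in the MemMachine repo for the real routes:

```python
import requests

BACKEND_URL = "http://localhost:8080"  # MEMORY_BACKEND_URL

def store_episode(user_id, text, url=BACKEND_URL):
    """Store one conversation turn as a memory (hypothetical endpoint)."""
    payload = {"user_id": user_id, "content": text}
    return requests.post(f"{url}/memories", json=payload, timeout=5)

def retrieve_context(user_id, query, url=BACKEND_URL):
    """Fetch memories relevant to a query (hypothetical endpoint)."""
    payload = {"user_id": user_id, "query": query}
    return requests.post(f"{url}/memories/search", json=payload, timeout=5)

def build_prompt(query, memories):
    """Prepend retrieved context so the LLM can personalize its answer."""
    context = "\n".join(f"- {m}" for m in memories)
    return f"Relevant memories:\n{context}\n\nUser: {query}"
```

Each agent stores the user's turn, searches for relevant context, and folds that context into the prompt before calling the LLM.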