Get your AI to remember your project context in 5 minutes.
tinyMem is a local tool that sits between your code and your AI assistant. It creates a "memory brain" for your specific project so you don't have to keep repeating context.
First, get the single executable file. No complex installers or dependencies required.
- Download the latest release (`tinymem-windows-amd64.exe`).
- Create a folder `C:\Tools` (or use an existing one) and put the file there.
- Rename it to `tinymem.exe` for convenience.
- Important: Add this folder to your PATH so you can run it from anywhere.
  - Search "Edit the system environment variables" > "Environment Variables" > Select "Path" in User variables > "Edit" > "New" > Paste `C:\Tools` > OK > OK.
- Open your terminal.
- Run this command to download and install to `/usr/local/bin`:

```shell
os="$(uname -s | tr '[:upper:]' '[:lower:]')"
arch="$(uname -m)"
case "$arch" in
  x86_64|amd64) arch="amd64" ;;
  aarch64|arm64) arch="arm64" ;;
  *) echo "Unsupported arch: $arch" >&2; exit 1 ;;
esac
curl -L "https://github.com/daverage/tinyMem/releases/latest/download/tinymem-${os}-${arch}" -o tinymem
chmod +x tinymem
sudo mv tinymem /usr/local/bin/
```
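Once installed, a quick way to confirm the binary is reachable is a plain shell PATH check (nothing tinyMem-specific here):

```shell
# Prints the binary's location if it's on your PATH, otherwise a reminder
command -v tinymem || echo "tinymem not found on PATH"
```

If you see the "not found" message, double-check that `/usr/local/bin` is on your PATH and that the file is executable.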
tinyMem creates a memory database inside your project folder (in a hidden .tinyMem directory). You need to tell it which project to manage.
- Open your terminal.
- Navigate to your project's root folder: `cd /path/to/my-cool-app`
- Initialize the memory system: `tinymem health`

(You should see "✅ System is healthy" and a new `.tinyMem` folder created.)
Choose the method that matches how you work.
Best for: Coding assistants and chat interfaces.
You need to tell your IDE to run tinyMem as a "Model Context Protocol" (MCP) server.
For Claude Desktop:
Edit your config file (usually `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS or `%APPDATA%\Claude\claude_desktop_config.json` on Windows):
```json
{
  "mcpServers": {
    "tinymem": {
      "command": "tinymem",
      "args": ["mcp"]
    }
  }
}
```

Restart Claude Desktop. The 🔌 icon should appear, indicating tinyMem is connected.
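If the 🔌 icon doesn't appear, a malformed config file is the most common culprit. You can sanity-check the JSON with a stock tool before digging further (this uses Python's built-in `json.tool` module; nothing here is tinyMem-specific):

```shell
# Pretty-prints the config if it's valid JSON; errors out otherwise
python3 -m json.tool <<'EOF'
{
  "mcpServers": {
    "tinymem": {
      "command": "tinymem",
      "args": ["mcp"]
    }
  }
}
EOF
```

Paste your actual file contents between the `EOF` markers (or run `python3 -m json.tool < claude_desktop_config.json` from the config directory).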
Best for: Running Python scripts, Aider, generic OpenAI clients, or terminal tools.
- Configure for Local LLMs (Optional): If you use LM Studio or Ollama, create a file at `.tinyMem/config.toml`:

  ```toml
  [proxy]
  base_url = "http://localhost:1234/v1" # Point to LM Studio
  ```

- Start the proxy in a separate terminal window:

  ```shell
  cd /path/to/my-cool-app
  tinymem proxy
  ```

- In your main terminal, set the environment variable to route requests through tinyMem:

  ```shell
  export OPENAI_API_BASE_URL=http://localhost:8080/v1
  ```

  For Aider:

  ```shell
  aider --openai-api-base http://localhost:8080/v1 --model openai/qwen2.5-coder-7b-instruct
  ```

- Run your tool or script as usual. tinyMem will transparently intercept and inject memory.
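To confirm requests are actually flowing through the proxy, you can send a minimal chat request with `curl`. This is only a sketch: the `/v1/chat/completions` path follows the standard OpenAI API shape the proxy mimics, and the model name is a placeholder for whatever your backend actually serves:

```shell
# Minimal smoke test against the proxy (assumes it is listening on localhost:8080)
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder-7b-instruct",
       "messages": [{"role": "user", "content": "Say hello"}]}' \
  || echo "proxy not reachable on localhost:8080"
```

A JSON response means the proxy is up and forwarding to your backend; the fallback message means `tinymem proxy` isn't running (or is on a different port).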
- Ask your AI something about your project.
- Check the tinyMem status: `tinymem stats`
- See your memories visually: `tinymem dashboard`
If things aren't working as expected, you can enable full debug logging:
- Environment Variable: `export TINYMEM_LOG_LEVEL=debug`
- Config File: Set `level = "debug"` in the `[logging]` section of `.tinyMem/config.toml`.

Logs are stored by default in `.tinyMem/logs/tinymem-<date>.log`.
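Putting the settings this guide mentions together, a minimal `.tinyMem/config.toml` might look like the following (a sketch using only the `[proxy]` and `[logging]` keys shown above; consult the full README for everything else):

```toml
[proxy]
base_url = "http://localhost:1234/v1" # local LLM endpoint (e.g. LM Studio)

[logging]
level = "debug" # full debug logging, as described above
```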
- Read the full README for advanced configuration.
- Learn about Chain-of-Verification (CoVe) to understand how tinyMem filters hallucinations automatically (enabled by default).
- Learn about Memory Types to understand the difference between a `fact` (verified) and a `claim` (unverified).
- Check `tinymem doctor` if you run into any issues.