prih/mcp-graph-memory
MCP server that builds a semantic graph memory from your project — indexes docs, code, and files, exposes 57 tools for search, knowledge, tasks, and skills.
An MCP server that builds a semantic graph memory from a project directory. It indexes markdown documentation and TypeScript/JavaScript source code into graph structures, then exposes them as MCP tools that any AI assistant can use to navigate and search the project.
Create `graph-memory.yaml` (paths must be relative to the container filesystem):

```yaml
server:
  host: "0.0.0.0"
  port: 3000
  modelsDir: "/data/models"

projects:
  my-app:
    projectDir: "/data/projects/my-app"
    docsPattern: "docs/**/*.md"
    codePattern: "src/**/*.{ts,tsx}"
    excludePattern: "node_modules/**,dist/**"
```

Then run the container:

```shell
docker run -d \
  --name graph-memory \
  -p 3000:3000 \
  -v $(pwd)/graph-memory.yaml:/data/config/graph-memory.yaml:ro \
  -v /path/to/my-app:/data/projects/my-app:ro \
  -v graph-memory-models:/data/models \
  ghcr.io/prih/mcp-graph-memory
```

Three mounts:

| Mount | Container path | Description |
|----------|-----------------------------------|-------------|
| Config | `/data/config/graph-memory.yaml` | Your config file (read-only) |
| Projects | `/data/projects/` | Project directories to index (read-only, unless you use knowledge/tasks/skills; in that case remove `:ro`) |
| Models | `/data/models/` | Embedding model cache; use a named volume so models persist across container restarts |
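The `projects` map is keyed by project name, so a single server instance can index several projects at once. A sketch extending the config above, assuming the map accepts multiple keys (the second project's name and paths are illustrative, not from this README):

```yaml
projects:
  my-app:
    projectDir: "/data/projects/my-app"
    docsPattern: "docs/**/*.md"
    codePattern: "src/**/*.{ts,tsx}"
  api-service:
    projectDir: "/data/projects/api-service"
    docsPattern: "docs/**/*.md"
    codePattern: "src/**/*.ts"
```

Each additional project then needs its own mount under `/data/projects/` in the `docker run` command (read-only unless you use the knowledge/tasks/skills tools).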
The embedding model (Xenova/all-MiniLM-L6-v2, ~90MB) is downloaded on first startup. Subsequent starts use the cached model from the volume.
```yaml
# docker-compose.yaml
services:
  graph-memory:
    image: ghcr.io/prih/mcp-graph-memory
    ports:
      - "3000:3000"
    volumes:
      - ./graph-memory.yaml:/data/config/graph-memory.yaml:ro
      - /path/to/my-app:/data/projects/my-app
      - models:/data/models
    restart: unless-stopped

volumes:
  models:
```

Start it:

```shell
docker compose up -d
```

The server is then available at `http://localhost:3000`.
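With the container running, point an MCP client at the server. Many MCP clients take an `mcpServers` JSON config of roughly this shape; this is a sketch only, since the `type`/`url` field names and any endpoint path vary by client and are not specified by this README (the base URL is taken from the step above):

```json
{
  "mcpServers": {
    "graph-memory": {
      "type": "http",
      "url": "http://localhost:3000"
    }
  }
}
```

Check your client's documentation for the exact config location and transport settings.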