PromptExecution/just-mcp
MCP server for Just
Platform-specific configuration (Claude Code): add the following to `.claude/settings.json` under the `mcpServers` key:

```json
{
  "mcpServers": {
    "just-mcp": {
      "command": "npx",
      "args": ["-y", "just-mcp"]
    }
  }
}
```
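For context, a minimal justfile the server could then discover might look like this (the recipe names and commands are illustrative, not part of just-mcp itself; in Just, the comment above a recipe becomes its documentation string):

```just
# Build the project in release mode
build:
    cargo build --release

# Run tests, optionally filtered by name
test filter="":
    cargo test {{filter}}
```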
[CI](https://github.com/PromptExecution/just-mcp/actions/workflows/ci.yml) · [Release](https://github.com/PromptExecution/just-mcp/actions/workflows/release.yml) · [crates.io](https://crates.io/crates/just-mcp) · [License: MIT](https://opensource.org/licenses/MIT)
[MCP Catalog](https://archestra.ai/mcp-catalog/promptexecution__just-mcp)
A way to let LLMs speak Just
A production-ready MCP server that integrates with the Just command runner, enabling AI assistants to discover, execute, and introspect justfile recipes through the standardized MCP protocol.
The benefit of having LLMs use Just (via MCP) rather than raw bash is a context-saving abstraction: they don't need to waste context opening and reading bash files, Python scripts, or other build artifacts. Via MCP, the LLM simply gets each command's name, parameters, and hints; the whole surface sits in its memory as "these are the commands available to you."
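Concretely, an MCP server can surface each recipe as a tool in its `tools/list` response. A sketch of what the LLM might see for the `test` recipe above (the tool name and schema fields here are illustrative assumptions, not just-mcp's actual output):

```json
{
  "tools": [
    {
      "name": "just_test",
      "description": "Run tests, optionally filtered by name",
      "inputSchema": {
        "type": "object",
        "properties": {
          "filter": { "type": "string", "default": "" }
        }
      }
    }
  ]
}
```

This is all the model needs to call the recipe correctly; it never has to read the justfile itself.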
No more watching LLMs run `just -l` to get a command list, inevitably start reading the justfile, then try to write justfile syntax (as if it were a Makefile), corrupt the justfile, and create a bad experience. Just's evolving syntax simply doesn't have a large enough corpus in frontier models today; we need more popular repos with justfiles in the training data.
Just-mcp is also fundamentally safer than handing an LLM a raw shell: the model invokes only the recipes the justfile defines, not arbitrary bash commands.