maxim-saplin/mcp_safe_local_python_executor
Stdio MCP server wrapping the custom Python runtime (LocalPythonExecutor) from Hugging Face's `smolagents` framework. The runtime combines ease of setup (compared to Docker, VMs, or cloud runtimes) with safeguards that limit the operations and imports allowed inside the runtime.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "mcp_safe_local_python_executor": {
      "command": "npx",
      "args": [
        "-y",
        "mcp_safe_local_python_executor"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
An MCP server (stdio transport) that wraps Hugging Face's `LocalPythonExecutor` (from the `smolagents` framework). It is a custom Python runtime that provides basic isolation/security when running LLM-generated Python code locally, without requiring Docker or a VM. This package exposes the Python executor via MCP (Model Context Protocol) as a tool for LLM apps such as Claude Desktop, Cursor, or any other MCP-compatible client. For Claude Desktop, this tool is an easy way to add the missing Code Interpreter (available as a plugin in ChatGPT for quite a while already).
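For illustration, here is a minimal sketch of the kind of executor the server wraps, assuming the `LocalPythonExecutor` API from `smolagents`; the import path, constructor arguments, and the shape of the returned result may differ between versions:

```python
# Minimal sketch of the runtime this server wraps, assuming the LocalPythonExecutor
# API from smolagents; details may differ between package versions.
from smolagents.local_python_executor import LocalPythonExecutor

# First argument: additional imports to authorize on top of the default safe set.
executor = LocalPythonExecutor(["math"])

safe_code = "import math\nprint(math.sqrt(2))"
blocked_code = "import os\nprint(os.listdir('/'))"  # 'os' is not authorized

print(executor(safe_code))  # runs inside the restricted interpreter

try:
    executor(blocked_code)
except Exception as err:  # unauthorized imports/operations raise an error
    print("Blocked:", err)
```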
The server exposes a `run_python` tool. Be careful with executing code produced by an LLM on your machine: stay away from MCP servers that run Python via the command line or eval(). The safest option is a VM or a Docker container, though that takes some effort to set up and consumes more resources / runs slower. There are also 3rd-party services providing a Python runtime, though they require registration, API keys, etc.
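As an illustration of how an MCP client could invoke the tool, here is a hedged sketch using the official `mcp` Python SDK. The tool name `run_python` comes from this README; the `code` argument name and the launch command are assumptions for illustration:

```python
# Sketch: calling the run_python tool over stdio with the official MCP Python SDK.
# The launch command and the "code" argument name are assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "mcp_safe_local_python_executor"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect run_python to be listed

            # Assumed argument schema: a single "code" string with the Python to run.
            result = await session.call_tool("run_python", {"code": "print(2 + 2)"})
            print(result.content)

asyncio.run(main())
```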
LocalPythonExecutor provides a good balance between direct use of the local Python environment (easier to set up) and remote execution in a Docker container, a VM, or a 3rd-party service (safer). The Hugging Face team has invested time into creating a quick and safe option for running LLM-generated code used by their code agents. This MCP server builds upon it:
> To add