# Devatva24/LLM-Router-MCP
MCP server that intelligently routes prompts across Claude, Gemini, and GPT-4o based on task type — minimising token cost without sacrificing quality.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "LLM-Router-MCP": {
      "command": "npx",
      "args": [
        "-y",
        "LLM-Router-MCP"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
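The same server entry also works in other MCP clients that read an `mcpServers` map, for example Claude Desktop's `claude_desktop_config.json` (a sketch; the file's location depends on your OS and client):

```json
{
  "mcpServers": {
    "LLM-Router-MCP": {
      "command": "npx",
      "args": ["-y", "LLM-Router-MCP"]
    }
  }
}
```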
> Route prompts intelligently across Claude, Gemini, and GPT-4o — automatically picking the best model for every task while minimising token cost.
[TypeScript](https://www.typescriptlang.org/) [Model Context Protocol](https://modelcontextprotocol.io/) [Node.js](https://nodejs.org/) [License](LICENSE)
---
llm-router-mcp is a Model Context Protocol (MCP) server that acts as an intelligent dispatcher for your AI workloads. Instead of hardcoding a single LLM into your workflow, the router analyses the intent of each prompt and automatically selects the most cost-effective and capable model for that specific task type.
```
Your prompt ──► LLM Router ──► PLANNING → Gemini
                               ├── CODEGEN → Claude
                               ├── TESTING → GPT-4o
                               └── REVIEW  → Claude
```

---
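The routing above can be pictured as a keyword classifier feeding a task-to-model map. A minimal illustrative sketch, where the regexes, function names, and model identifiers are assumptions rather than the server's actual source:

```typescript
// Illustrative sketch only: names below are assumptions, not the server's code.
type TaskType = "PLANNING" | "CODEGEN" | "TESTING" | "REVIEW";

// Model mapping taken from the routing diagram above.
const MODEL_FOR_TASK: Record<TaskType, string> = {
  PLANNING: "gemini",
  CODEGEN: "claude",
  TESTING: "gpt-4o",
  REVIEW: "claude",
};

// Naive keyword classifier standing in for the server's intent analysis.
function classify(prompt: string): TaskType {
  const p = prompt.toLowerCase();
  if (/\b(plan|roadmap|design|architect)/.test(p)) return "PLANNING";
  if (/\b(test|coverage|assert)/.test(p)) return "TESTING";
  if (/\b(review|critique|audit)/.test(p)) return "REVIEW";
  return "CODEGEN"; // default: code-writing tasks
}

function route(prompt: string): string {
  return MODEL_FOR_TASK[classify(prompt)];
}
```

With this sketch, `route("plan the Q3 roadmap")` selects Gemini, while a prompt matching no category falls through to Claude as a code-generation task.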
The server exposes `plan_workflow` and `implement_feature` tools.

---
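These are standard MCP tools, invoked by the client with a JSON-RPC `tools/call` request. A sketch of the request shape, assuming a hypothetical `goal` argument (check each tool's advertised input schema for its real parameters):

```typescript
// Sketch of the JSON-RPC request an MCP client sends for a tool call.
// The argument name "goal" is illustrative, not taken from the tool's schema.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  name: string,
  args: Record<string, unknown>,
  id = 1
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = makeToolCall("plan_workflow", { goal: "add OAuth login" });
```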
| Task Category | Trigger Keywords |
| --- | --- |