# KryptosAI/mcp-observatory
Regression intelligence for MCP targets: detect, diff, and explain interoperability drift over time.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "mcp-observatory": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-observatory"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
MCP Observatory exists because real MCP servers drift in ways conformance does not explain.
> Maintainer note, March 19, 2026: I started this repo after running a small real-server matrix and seeing two different truths at once. @modelcontextprotocol/server-filesystem, @modelcontextprotocol/server-everything, ref-tools-mcp, @upstash/context7-mcp, and puppeteer-mcp-server all worked, but they exposed meaningfully different capability shapes. At the same time, packages like @modelcontextprotocol/server-map and @modelcontextprotocol/server-pdf still timed out or closed early when treated as plain local-process stdio targets. Official conformance still matters. This repo exists because field evidence matters too.
If the project ever turns into generic MCP theater, that is a regression.
The product surface is deliberately small:
- `run`: execute checks against one target and always persist a run artifact
- `diff`: compare two runs and classify regressions and recoveries
- `report`: turn a saved run artifact into readable terminal, JSON, or Markdown output

That is enough to answer a practical question: what changed, what regressed, what recovered, and what artifact proves it?
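The `diff` step above boils down to comparing two persisted runs check by check. A minimal sketch of that classification, assuming a hypothetical artifact shape of check name mapped to pass/fail (not mcp-observatory's actual on-disk format):

```typescript
// Hypothetical artifact shape for illustration: check name -> outcome.
type RunArtifact = Record<string, "pass" | "fail">;

type DriftKind = "regression" | "recovery" | "unchanged" | "new" | "removed";

// Classify each check's drift between a before-run and an after-run.
function classifyDrift(
  before: RunArtifact,
  after: RunArtifact
): Record<string, DriftKind> {
  const result: Record<string, DriftKind> = {};
  const names = new Set([...Object.keys(before), ...Object.keys(after)]);
  for (const name of names) {
    const a = before[name];
    const b = after[name];
    if (a === undefined) result[name] = "new";          // only in after-run
    else if (b === undefined) result[name] = "removed"; // only in before-run
    else if (a === "pass" && b === "fail") result[name] = "regression";
    else if (a === "fail" && b === "pass") result[name] = "recovery";
    else result[name] = "unchanged";
  }
  return result;
}

// resources/read flipped pass -> fail (regression);
// prompts/get flipped fail -> pass (recovery).
const drift = classifyDrift(
  { "tools/list": "pass", "resources/read": "pass", "prompts/get": "fail" },
  { "tools/list": "pass", "resources/read": "fail", "prompts/get": "pass" }
);
console.log(drift);
```

The symmetry here is the point: a `recovery` is as much a signal as a `regression`, since a check that silently starts passing again can hide an upstream behavior change.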
This repo is not trying to be: