You're missing the point on MCP

May 29, 2025

Six months into the MCP boom, the industry is making the same mistake we've seen in every platform rush: prioritizing "look what's possible" over "here's what actually works".

The MCP ecosystem today reminds me of the early days of mobile apps: thousands of clever demos, very few solving real problems. Scroll through mcp.so or pulsemcp.com and you'll find an endless stream of "X MCP server" projects that wrap APIs in the thinnest possible layer and call it integration.

This wouldn't matter if MCP was just another developer toy, but the stakes are higher. As Shrivu points out in "Everything Wrong with MCP", we're essentially handing AI assistants the keys to our most sensitive systems while the protocol itself barely acknowledges the security implications. The famous quip "the S in MCP stands for Security" has become a sarcastic shorthand for this fundamental oversight. Raz's "Critical Look at MCP" challenges the protocol's transport layer, calling out the "descent into madness" of trying to recreate WebSocket functionality on top of Server-Sent Events.

But the real problem isn't technical; it's conceptual. Most MCP servers today are solving the wrong problem entirely. MCP is a product problem, not an engineering one.

The API Wrapper Fallacy

Here's what I see happening everywhere: teams look at MCP as a way to "make our API AI-accessible" and ship thin wrappers around existing endpoints. The most common approach is literally converting OpenAPI specifications to MCP tools, essentially code-generating their way to "AI integration". This thinking is fundamentally flawed: it creates the worst possible user experience and grossly misunderstands how AI assistants actually work.

The core issue is that LLMs are terrible at the exact things that naive API-to-MCP conversions force them to do. They struggle with tool selection when presented with many options, they're poor at handling complex JSON structures, and their budget for processing tool descriptions is severely limited, barely enough to explain a single complex endpoint.

Consider a user asking Claude to "find failed logins from suspicious IP addresses". A traditional API wrapper exposes list_logs() and get_log() endpoints, forcing the AI to figure out the query syntax, pagination, filtering, and correlation logic. Moreover, the API responses are designed for programmatic consumption, not AI agent access, leaving the LLM with no context about what the data means or what to do next. Even state-of-the-art models like Claude 3.7 Sonnet fail at similar multi-step tasks.
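To make the contrast concrete, here is a minimal sketch of the workflow-first alternative. The tool name, sample data, and threshold are all hypothetical; the point is that the filtering and correlation logic lives in the tool, and the return value is a summary the model can act on rather than raw records it must reassemble.

```python
from collections import Counter
from typing import TypedDict


class LogEntry(TypedDict):
    event: str
    ip: str
    user: str


# Hypothetical raw records, as a thin list_logs() wrapper would return
# them: the LLM would have to filter, paginate, and correlate these itself.
SAMPLE_LOGS: list[LogEntry] = [
    {"event": "login_failure", "ip": "203.0.113.7", "user": "alice"},
    {"event": "login_failure", "ip": "203.0.113.7", "user": "bob"},
    {"event": "login_failure", "ip": "203.0.113.7", "user": "carol"},
    {"event": "login_success", "ip": "198.51.100.2", "user": "alice"},
    {"event": "login_failure", "ip": "198.51.100.2", "user": "alice"},
]


def find_suspicious_failed_logins(logs: list[LogEntry], threshold: int = 3) -> str:
    """One purpose-built tool: encodes the filtering and correlation
    logic, then returns a summary with suggested next steps."""
    failures = Counter(e["ip"] for e in logs if e["event"] == "login_failure")
    suspects = {ip: n for ip, n in failures.items() if n >= threshold}
    if not suspects:
        return "No IPs exceeded the failed-login threshold."
    lines = [f"{ip}: {n} failed logins" for ip, n in suspects.items()]
    return (
        "Suspicious IPs (next: check geo/ASN, consider blocking):\n"
        + "\n".join(lines)
    )


print(find_suspicious_failed_logins(SAMPLE_LOGS))
```

The user's question maps to a single tool call, and the response already carries the context ("what does this mean, what next?") that a raw log payload lacks.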

This gets exponentially worse as you add more tools. Even with just 2 out of 10 endpoints being slightly similar, AI assistants routinely fail to select the right tool unless users employ "magical" prompting techniques that are extremely unnatural for an average user. The promise of natural language interaction crumbles under the weight of poorly designed abstractions.

Meanwhile, the security vulnerabilities are very real and happening now. Even when an agent has the same permissions as a user, the LLM's ability to intelligently aggregate data can surface insights the user was never meant to access. A sales rep with normal Salesforce access suddenly becomes capable of generating detailed revenue projections by having Claude analyze all accessible accounts. The recent discovery by Invariant Labs of a prompt injection attack against the GitHub MCP server demonstrates how easily agents can be manipulated into leaking private repository data through malicious issues. The attack worked against Claude 4 Opus, one of the most aligned models available, showing that model-level safety isn't sufficient.

POC to Product

The industry is treating MCP as an infrastructure play when it's actually a product design challenge.

David Cramer's recent deep-dive into Sentry's MCP implementation reveals a different approach. Instead of exposing raw API endpoints, Sentry built curated tool sets that encode domain expertise and solve specific workflows. Their use of "evals as tests" demonstrates something crucial: successful MCP servers require the same product thinking as user-facing applications.

This shift from API-first to workflow-first thinking addresses the core limitation of current MCP servers. Users don't want to "access GitHub APIs". They want to "investigate this security incident" or "prepare this feature branch for review". These are complex, multi-step workflows that require domain knowledge, not just API access.

The constraint of MCP's tool-based design actually improves this process by forcing product teams to think clearly about which workflows and use-cases matter and how to encode best practices into discrete, well-documented tools. But a successful MCP server implementation goes beyond just tools. The protocol's support for resources and prompts creates opportunities to provide context and guidance that make AI interactions more effective.
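What "well-documented" means in practice is that the tool's description tells the model when to reach for it, not just what it does. Here is a sketch of a workflow-first tool definition in the MCP-style JSON Schema shape; the server, tool name, and parameters are illustrative assumptions, not from any real implementation.

```python
import json

# Hypothetical workflow-first tool definition for an incident-investigation
# server. Note the description guides tool *selection*, and the input
# schema constrains choices instead of exposing a raw query language.
INVESTIGATE_INCIDENT_TOOL = {
    "name": "investigate_security_incident",
    "description": (
        "Use when the user reports suspicious activity (failed logins, "
        "unusual IPs). Runs the full triage workflow: pulls relevant "
        "logs, correlates by IP and user, and returns a summary with "
        "recommended next steps. Prefer this over raw log queries."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "time_window_hours": {
                "type": "integer",
                "description": "How far back to look (default 24).",
                "default": 24,
            },
            "focus": {
                "type": "string",
                "enum": ["failed_logins", "privilege_changes", "all"],
                "description": "Which signal to triage first.",
            },
        },
        "required": ["focus"],
    },
}

print(json.dumps(INVESTIGATE_INCIDENT_TOOL, indent=2))
```

The enum and defaults do real work here: they shrink the space of ways the model can call the tool wrong, which is exactly the product decision a thin OpenAPI conversion never makes.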

When your MCP server can pass evals that simulate actual user scenarios (e.g., "investigate this security incident"), you've moved beyond "demo-ready POC" into "product" territory.
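A minimal sketch of what "evals as tests" can look like: each eval pairs a simulated user prompt with a check on which tools the assistant actually invoked. The run_agent stub below stands in for a real LLM plus MCP client; in a real harness you would replay the prompts against the live server and record the tool calls.

```python
from typing import Callable

# (simulated user prompt, pass-check over the recorded tool calls)
Eval = tuple[str, Callable[[list[str]], bool]]

EVALS: list[Eval] = [
    ("Investigate this security incident",
     lambda calls: "investigate_security_incident" in calls),
    # Also assert the agent did NOT fall back to a raw endpoint.
    ("Find failed logins from suspicious IP addresses",
     lambda calls: "investigate_security_incident" in calls
                   and "list_logs" not in calls),
]


def run_agent(prompt: str) -> list[str]:
    # Stub: a real harness sends the prompt to the model with the
    # server's tools attached and records which tools it invoked.
    return ["investigate_security_incident"]


def run_evals(evals: list[Eval]) -> float:
    """Return the fraction of scenarios where the agent picked the
    intended workflow tool."""
    passed = sum(check(run_agent(prompt)) for prompt, check in evals)
    return passed / len(evals)


print(f"pass rate: {run_evals(EVALS):.0%}")
```

Checking tool selection (and the absence of wrong tools) is the cheap, deterministic layer; scoring the quality of final answers sits on top of it.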

My experience launching the first version of Auth0 MCP Server reinforced these observations. We initially thought our job was wrapping the Auth0 Management API. But real users don't want generic API access. They want "set up authentication for my new project" or "investigate this suspicious login pattern". The difference between auth0_call_api() and auth0_create_application_with_secure_defaults() isn't just naming; it's the difference between making AI work harder and making it work smarter. Each tool needed to be self-contained and contextual.
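To illustrate what "secure defaults" encoding means, here is a hedged sketch of such a tool. The settings below are plausible hardening choices, not the actual Auth0 MCP Server implementation; the point is that expertise the user would otherwise have to know to ask for is baked into the tool itself.

```python
def auth0_create_application_with_secure_defaults(
    name: str, app_type: str = "regular_web"
) -> dict:
    """Build an application payload with opinionated, secure settings,
    instead of passing the user's raw request through to the API."""
    return {
        "name": name,
        "app_type": app_type,
        # Encoded expertise (illustrative): code flow only, no implicit
        # grant, OIDC-conformant, rotating and expiring refresh tokens.
        "grant_types": ["authorization_code", "refresh_token"],
        "oidc_conformant": True,
        "refresh_token": {
            "rotation_type": "rotating",
            "expiration_type": "expiring",
        },
    }


payload = auth0_create_application_with_secure_defaults("my-new-project")
print(payload["grant_types"])
```

A generic auth0_call_api(method, path, body) tool forces the model to reinvent these decisions on every call; the workflow tool makes the safe path the only path.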

The MCP servers that will drive real adoption in 2025 won't be the ones with the most endpoints or the cleverest technical implementations. They'll be the ones that solve real problems with purpose-built workflows. The companies that succeed will be those that resist the temptation to ship fast and instead invest in understanding the workflows that matter, encoding domain expertise, and building the security guardrails that make AI workflows genuinely safe and useful.

As the ecosystem matures beyond its current demo-heavy phase, this product lens will separate successful MCP servers from impressive technical artifacts. The question isn't whether your MCP server can call APIs. It's whether it helps users accomplish work they couldn't do before.

Bharath Natarajan