All Articles
✦

Don’t Sideload Your AI: Why MCP Is a Dead-End Abstraction

Insight
June 30, 2025   —
Author
Pandelis Zembashis CTO @PandelisZ

MCP: The Android Sideloading of AI Integration

MCP (Model-Command Pattern or Modern Context Protocol) has been touted as a way to attach new tools to AI systems, but it’s more of a sideload than a true integration. Like sideloading apps on Android, it forces features in through unofficial channels, bypassing the quality, security, and coherence of a native setup. Apple’s security team once warned that sideloading would cripple the privacy and security protections of iOS—and the same logic applies here.

Using MCP as a primary interface is a patchwork fix at best: functional but brittle. It mimics integration without delivering its reliability. A polished system shouldn’t require users to act as handymen. Duct-tape solutions may work temporarily, but they signal deeper design flaws. Real integration should be seamless, not cobbled together like a leaf blower cleaning a desk—messy and counterproductive.

Instead of this sideload-style chaos, we need integrations that are tidy and built-in from the start.

Insecure by Design: A Half-Baked Solution

MCP’s biggest flaw is security. It relies on third-party “adapter” servers—often cobbled together open-source projects—to bridge AI models to external APIs. These are frequently installed via tools like npx, piping unvetted code straight from GitHub into your system.

It’s reckless by default. MCP servers can access your data and environment, with no guarantees of sandboxing or permissions model. Each MCP is so vastly different from one another, it’s impossible to know what each is doing.  A plugin meant to access GitHub could just as easily steal your private repos, and you’d never know.

Until real safeguards exist, running MCP plugins is like installing unsigned apps on a jailbroken phone. It’s not clever—it’s dangerous.

Brittle Glue and Band-Aid Development Experience

Even setting security aside, MCP is a fragile mess with a clunky developer experience. It turns simple API calls into Rube Goldberg machines—manifest files, local servers, and prompt-based commands instead of clean SDKs.

There’s no standardisation or versioning. Each plugin is a snowflake, prone to breakage if its author changes anything. You’re not just integrating a tool—you’re managing a mini distributed system.

Debugging is a nightmare: cryptic logs, YAML configs, and vague model errors. Even developers struggle. For users? Forget it. Worse, large toolsets confuse LLMs, leading to hallucinations or misuse. More tools = more bugs.

In short, MCP replaces stable code with probabilistic glue—a design that breaks exactly when you need it to work most.

Architecture & Maintainability Nightmare

From an engineering standpoint, MCP is poor architecture. It adds complexity, obscures control flow, and creates long-term maintenance liabilities. Each MCP adapter is another fragile link—if an API evolves and the open-source server isn’t updated, you’re stuck fixing or forking it yourself. There’s no centralised upkeep—just a sprawl of micro-integrations you now own.

LangChain’s Harrison Chase aptly noted that while it provides a handy protocol, MCP is essentially just plumbing—it doesn’t handle real integration needs like auth, edge cases, or validation. You still need to prompt around errors and quirks that proper SDKs abstract away. And because MCP strips out nuance, models often fumble through tasks they don’t fully understand.

Worse, the ecosystem is in flux. The MCP spec and best practices shift constantly. What works with one model or schema version may break in the next. Unlike stable SDKs with clear deprecation paths, MCP leaves you chasing community updates just to stay functional.

In the end, what seems like “no-code integration” is a mirage. The cost is just deferred into a pile of brittle, AI-mediated scripts you’ll end up debugging and rewriting. Direct APIs remain faster, safer, and far more maintainable.

And it bears repeating: faster, safer, more maintainable––pretty much everything we want in good software engineering––are all in favour of first-party, direct integration over MCP’s convoluted detour.

First-Party Integrations: Polished and Powerful

What’s the alternative to MCP chaos? Simple: build proper, first-party integrations. Instead of routing through fragile LLM prompts, connect directly to APIs using official SDKs or backend code. It’s more work upfront, but the payoff is huge—better reliability, security, and maintainability.

Native integrations offer tested libraries, stable interfaces, and clear error handling. You can unit-test functions like get_messages_from_slack() directly—no prompts, no guesswork. With MCP, you’re stuck hoping the model behaves the same way twice.

It’s also more efficient. If your AI needs access to a few services, hardwiring them is faster and cleaner than spinning up adapter servers. And with AI coding assistants, generating boilerplate is easier than ever—why over-engineer when the simpler path works better?

Finally, great user experience demands invisibility. Users shouldn’t need to edit YAML or run background processes—they just want features that work. Polished products abstract complexity; MCP exposes it. If we want AI to scale beyond power users, we need real integrations.

Toward a Future of Clean, Native Model Collaboration

MCP was a start, but the future is structured, secure, and native—not prompt-based hacks. Protocols like Google’s A2A offer a better model: specialised agents communicating through defined APIs, like microservices, not guesswork.

Instead of running unvetted scripts, AI systems should use audited, first-party connectors— reliable, testable, and safe. This mirrors how iOS apps access system features through trusted APIs.

The industry is shifting toward standards for a reason: we need AI systems that integrate cleanly, not stitched together. Let’s build for clarity, modularity, and trust—not another generation of brittle glue.

Demand Better Than Band-Aids

MCP was born from a real need, but it’s a stopgap, not a solution. It pushes devs and users into sideload-style hacks instead of offering clean, reliable integrations.

We’ve seen this before in tech: quick fixes give way to native, polished systems. AI is at that turning point. It’s time to demand secure, maintainable, first-party tools—not fragile plugins and prompt gymnastics.

A smart system shouldn’t feel like a Frankenstein of YAML and guesswork. It should just work. Ditch the duct tape and build it right—from the start.

At Cosine, we believe in building 1st party tooling and integrations. This lets us keep tight control over how the model interacts with data. It also means we can build systems that are incredibly good at their specific niche. Just see our eval results.

Or try Genie yourself.

Have you heard of our AutoPM product?