Your Zero-Trust Architecture Has a Blind Spot. It's Called MCP.
You spent years building zero-trust. Verified every user. Locked down every device. Inspected every packet. Then you connected an AI agent to your systems via the Model Context Protocol and implicitly trusted everything the agent was told.
That contradiction just became the most talked-about topic in AI security.
In the past 72 hours, Dark Reading previewed an RSAC 2026 session where Netskope researcher Gianpietro Cutolo will argue that MCP's security risks are architectural — not the kind you can address via patching or configuration changes. SC Media published an essay calling MCP "the backdoor your zero-trust architecture forgot to close". And an independent researcher scanned more than 900 MCP configurations on GitHub and found that 75% had security problems.
The industry is converging on an uncomfortable conclusion: the protocol that connects your AI agents to everything isn't covered by any of the security layers you already have.
The Numbers
The data from independent research groups paints a consistent picture.
Security researchers catalogued approximately 7,000 internet-exposed MCP servers — roughly half of all known deployments. Many operate with no authorization controls whatsoever.
Knostic researchers scanned approximately 2,000 publicly accessible MCP servers and found that every single verified instance granted access to internal tool listings without any authentication.
A comprehensive scan of 2,614 MCP implementations found that 82% of those handling file operations were vulnerable to path traversal attacks, and 67% carried code injection risk. Between 38% and 41% of 518 officially registered MCP servers offered no meaningful authentication at all.
And the Orchesis scan of 900+ MCP configurations committed to public GitHub repositories found that three out of four failed basic security checks.
These aren't theoretical vulnerabilities. They're the current state of production deployments.
The Attack Surface Nobody Named
Here's the conceptual problem: the cybersecurity industry has mature defences for network-layer attacks, compromised credentials, and device posture. We have names for these threats, frameworks to address them, and tools to enforce policy.
But MCP introduces something different. SC Media's Sunil Gentyala describes what researchers now call the "context-layer attack surface": the capacity for malicious or manipulated content flowing into an AI agent's reasoning process to induce it to perform unauthorised operations — without any underlying model compromise.
This is not a network attack. It's not credential theft. It's not even prompt injection in the traditional sense. It's the ability to manipulate what an agent believes about the world, and then watch it act on those manipulated beliefs using real tools with real permissions.
Your zero-trust architecture verified the user. It verified the device. It inspected the network traffic. But the MCP connection sits between your agent and its tools, carrying context that nobody is inspecting, authenticating, or rate-limiting.
As Security Boulevard put it in their comprehensive MCP vulnerability guide: "Unlike static APIs that process predictable, human-driven requests, MCP involves agent-driven decision-making, shifting contexts, and evolving chains of tools. Every interaction creates new risk vectors. Every context switch opens new paths for exploitation."
Why You Can't Patch This
Netskope's Gianpietro Cutolo, whose RSAC 2026 session is scheduled for next week, makes a specific claim: MCP's security risks exist at the architectural level in both LLMs and in MCP itself. They're not implementation bugs. They're design decisions.
The protocol was designed for interoperability. It succeeded. Every major AI platform adopted it — Anthropic, OpenAI, Google, Microsoft, LangChain, Vercel, Pydantic AI. The standardisation worked exactly as intended.
But that interoperability came with an implicit trust model: the agent trusts the server to return honest tool descriptions. The server trusts the agent to make reasonable requests. Neither party verifies the other's identity, integrity, or intent.
You can patch individual CVEs. You can fix specific server implementations. But you can't patch away the fact that the protocol itself has no authentication, no message signing, no tamper detection, and no way to verify that the tools an agent sees are the tools the administrator intended.
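The tamper-detection gap, at least, has a client-side workaround that some gateways are experimenting with: pin a cryptographic fingerprint of every tool description at approval time, and refuse any tool whose definition later changes. A minimal sketch in Python — the `ToolPinStore` class and the tool dict shape are illustrative assumptions, not part of any MCP SDK:

```python
import hashlib
import json

def tool_fingerprint(tool: dict) -> str:
    """Hash a tool's name, description, and schema into a stable fingerprint.

    Canonical JSON (sorted keys, no whitespace) so the same definition
    always produces the same digest.
    """
    canonical = json.dumps(tool, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class ToolPinStore:
    """Records the fingerprint of each tool the operator approved, and
    flags any tool whose definition later changes (a "rug pull")."""

    def __init__(self) -> None:
        self._pins: dict[str, str] = {}

    def approve(self, tool: dict) -> None:
        # Called once, when a human reviews and accepts the tool.
        self._pins[tool["name"]] = tool_fingerprint(tool)

    def verify(self, tool: dict) -> bool:
        # Called on every session: unknown or altered tools fail.
        pinned = self._pins.get(tool["name"])
        return pinned is not None and pinned == tool_fingerprint(tool)
```

The client approves tools at install time and verifies them on every subsequent listing; a server that quietly rewrites a description to smuggle in new instructions fails verification.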
This is what Cutolo means by "architectural." The attack surface isn't in the bugs — it's in the blueprint.
What Zero-Trust for the Context Layer Looks Like
If the security industry spent a decade extending zero-trust from networks to identities to devices, the next extension is to the context layer. Here's what that means in practice:
Treat MCP connections as privileged access pathways. Every connection between an agent and an MCP server is a pathway to sensitive data and operations. Inventory them. Classify them. Govern them with the same rigour as admin access.
Inspect every tool call. The agent doesn't just send requests — it sends context. Tool names, parameters, embedded content. Every tool call is a potential exfiltration channel, and every response is a potential injection point. If you wouldn't let unaudited HTTP requests reach your database, you shouldn't let unaudited tool calls reach your tools.
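As a sketch of what inspection at the gateway could look like, the fragment below screens tool responses for crude injection markers before they reach the agent's context. The patterns and function names are illustrative assumptions; a production gateway would use far richer detection than three regexes:

```python
import re

# Crude patterns that often indicate an injection attempt riding in tool
# output. Illustrative only; real gateways use classifiers, not regex lists.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"<\s*system\s*>", re.IGNORECASE),
]

def inspect_tool_response(text: str) -> list[str]:
    """Return the patterns that matched in a tool response, if any."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]

def gate_response(text: str) -> str:
    """Block a tool response before it reaches the agent's context."""
    hits = inspect_tool_response(text)
    if hits:
        raise PermissionError(f"tool response blocked, matched: {hits}")
    return text
```

The same gate runs in the other direction on outbound requests; the point is that there is a chokepoint at all, which stock MCP plumbing does not give you.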
Enforce least privilege per tool, not per server. Most MCP servers expose a bundle of tools with a single set of permissions. An agent that needs read access to a calendar shouldn't automatically get write access to email. Tool-level authorisation is the MCP equivalent of role-based access control.
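A per-tool policy table might look like the following sketch. The tool names and policy shape are hypothetical; the point is that authorisation is decided per tool, with unlisted tools denied by default:

```python
# Hypothetical policy: permissions attach to individual tools,
# not to the server that happens to expose them.
POLICY = {
    "calendar.read_events":  {"scope": "read"},
    "calendar.create_event": {"scope": "write", "requires_approval": True},
    # "email.send" is deliberately absent: unlisted means denied.
}

def authorize(tool_name: str, granted_scopes: set[str]) -> bool:
    """Allow a tool call only if the tool is listed and its required
    scope is among the scopes granted to this agent."""
    entry = POLICY.get(tool_name)
    if entry is None:  # default-deny for anything not explicitly listed
        return False
    return entry["scope"] in granted_scopes
```

Under this model, an agent granted only `read` can list calendar events but cannot create them, and cannot touch email at all, regardless of what the server bundle exposes.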
Scan for data in transit. Traditional DLP catches files leaving the perimeter. MCP DLP has to catch data leaving through tool call parameters — API keys in arguments, PII in prompts, credentials in responses. The exfiltration channel is the tool call itself.
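A first cut at parameter-level scanning can be as simple as flattening each tool call's arguments to text and matching secret-shaped strings. The patterns below are illustrative assumptions, not an exhaustive DLP ruleset:

```python
import json
import re

# Secret-shaped patterns commonly used in DLP rules. Illustrative only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "bearer_token":   re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}"),
}

def scan_arguments(arguments: dict) -> list[str]:
    """Flatten tool-call arguments to a JSON blob and report any
    secret-like matches found in them."""
    blob = json.dumps(arguments)
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(blob)]
```

A gateway would run this over every outbound tool call and block or redact on a hit, because in the MCP world the tool-call parameters themselves are the channel data leaves through.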
Log everything. If your agent made a decision, you need to know what context it saw when it made it. Without audit logging at the MCP layer, you can't investigate incidents, prove compliance, or even know that something went wrong.
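A minimal audit record for the MCP layer might capture, per tool call, which session asked, which tool, with what arguments, and a digest of what came back. The record shape here is an assumption for illustration, not a standard:

```python
import hashlib
import json
import time

def audit_record(session_id: str, tool: str,
                 arguments: dict, response: str) -> dict:
    """One audit record per tool call: what the agent asked for, and a
    digest of the context it received in return."""
    return {
        "ts": time.time(),
        "session": session_id,
        "tool": tool,
        "arguments": arguments,
        # Hash rather than store the response: enough to prove what the
        # agent saw without retaining sensitive content in the log.
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_bytes": len(response.encode()),
    }

def append_audit(path: str, record: dict) -> None:
    """Append-only JSON Lines log, one record per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

With records like these, an incident responder can reconstruct which tool responses were in the agent's context at the moment it made a bad decision, which is exactly the question current deployments cannot answer.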
Where This Is Going
The products are starting to appear. PointGuard AI shipped an MCP Security Gateway on March 18. Aurascape announced a Zero-Bypass MCP Gateway on March 17. Open-source projects like AgentSign are building cryptographic identity layers for AI agents.
The message from RSAC is clear: this won't be solved by a patch to MCP v2. It'll be solved by building the same kind of security infrastructure around the context layer that we built around the network, the identity, and the device.
The zero-trust architecture you already have isn't wrong. It's just incomplete. The context layer is the next perimeter to close.
This post draws on reporting from Dark Reading (March 19, 2026), SC Media (March 18, 2026), Security Boulevard (March 19, 2026), Orchesis (March 18, 2026), Descope (February 25, 2026), and Agent Wars (March 13, 2026). All statistics are from the cited sources.