Pipeline Active
Last: 21:00 UTC|Next: 03:00 UTC
← Back to Insights

MCP Is Winning and Losing Simultaneously: 47% Token Savings, 25% Vulnerable Skills

The Model Context Protocol is becoming the de facto standard for AI tool integration — GPT-5.4's Tool Search achieved 47% token savings across 36 MCP servers, and Cognigy operationalized MCP enterprise-wide. Yet Zenity Labs found 25% of MCP skills contain vulnerabilities. Enterprise adoption is accelerating while the open-source ecosystem remains a persistent attack surface.

TL;DRNeutral
  • <strong>MCP adoption is accelerating:</strong> OpenAI's GPT-5.4 Tool Search reduced context window consumption by 47% across 36 MCP servers, and NiCE Cognigy is operationalizing MCP as the production standard for enterprise CX customers like Allianz and Lufthansa.
  • <strong>Security deficit at scale:</strong> Zenity Labs' PleaseFix disclosure revealed 25% of 30,000+ analyzed MCP skills contain vulnerabilities, with 7.7% (820+ skills) in ClawHub being outright malicious — an npm-scale supply chain crisis at peak adoption velocity.
  • <strong>Enterprise-open source bifurcation:</strong> Cognigy is hardening MCP with OAuth 2.0 credential rotation and governance layers, while open-source repositories remain largely unaudited, creating a two-tier market where safety comes at a premium.
  • <strong>Attack surface paradox:</strong> GPT-5.4's Tool Search inadvertently reduces vulnerability by loading fewer tool definitions simultaneously, but does not address the fundamental problem of poisoned skills.
  • <strong>The next 6-12 months will determine whether MCP becomes HTTP (trusted standard) or Flash (ubiquitous then killed by security).</strong>
MCPModel Context Protocolsecurityprompt injectionenterprise AI5 min readMar 11, 2026

Key Takeaways

  • MCP adoption is accelerating: OpenAI's GPT-5.4 Tool Search reduced context window consumption by 47% across 36 MCP servers, and NiCE Cognigy is operationalizing MCP as the production standard for enterprise CX customers like Allianz and Lufthansa.
  • Security deficit at scale: Zenity Labs' PleaseFix disclosure revealed 25% of 30,000+ analyzed MCP skills contain vulnerabilities, with 7.7% (820+ skills) in ClawHub being outright malicious — an npm-scale supply chain crisis at peak adoption velocity.
  • Enterprise-open source bifurcation: Cognigy is hardening MCP with OAuth 2.0 credential rotation and governance layers, while open-source repositories remain largely unaudited, creating a two-tier market where safety comes at a premium.
  • Attack surface paradox: GPT-5.4's Tool Search inadvertently reduces vulnerability by loading fewer tool definitions simultaneously, but does not address the fundamental problem of poisoned skills.
  • The next 6-12 months will determine whether MCP becomes HTTP (trusted standard) or Flash (ubiquitous then killed by security).

MCP's Three-Layer Adoption Wave

The Model Context Protocol, released by Anthropic in November 2024, has achieved something rare in AI infrastructure: cross-ecosystem adoption without vendor lock-in. In the first two weeks of March 2026 alone, three developments confirmed MCP's trajectory toward becoming the de facto standard.

First, the frontier model layer: OpenAI's GPT-5.4 (March 5) shipped Tool Search, a lazy-loading system that dynamically discovers and loads MCP tool definitions on demand rather than pre-loading all schemas into context. Tested across 250 tasks with 36 MCP servers, Tool Search reduced total token usage by 47% while maintaining accuracy. This is not marginal: in enterprise deployments where dozens of integrated tools previously consumed 30-50% of context window, Tool Search fundamentally changes the unit economics of agentic reasoning.

Second, the enterprise platform layer: NiCE Cognigy's Nexus 2026 announcements (March 10) made MCP the production integration standard for the enterprise CX market leader. Cognigy — acquired by NiCE for $955M in July 2025 — is deploying MCP with OAuth 2.0 credential management across customers like Allianz, Lufthansa, and Generali. This is not developer experimentation; it is regulated enterprise production.

Third, the developer tool layer: The protocol's adoption footprint now spans developer tools (Cursor, Zed), enterprise platforms (Cognigy, Salesforce), and frontier model APIs (OpenAI, Anthropic). MCP is approaching the network effect threshold where alternatives become impractical.

The 25% Vulnerability Rate: An npm-Scale Supply Chain Crisis

Simultaneous with this adoption explosion, Zenity Labs' PleaseFix disclosure (March 4) revealed that the MCP ecosystem has a systemic security deficit. Of 30,000+ analyzed skills across repositories, 25%+ contained vulnerabilities. In ClawHub alone (10,700 skills), 820+ were outright malicious — a 7.7% poisoning rate. The attack vectors are not theoretical: PerplexedBrowser demonstrated zero-click agent compromise via indirect prompt injection through a calendar invite, and CVE-2026-2256 mapped a complete prompt-to-tool-to-shell RCE chain.

The parallel to npm's early security crisis is instructive but understates the risk. When malicious npm packages execute, they run with Node.js process permissions. When malicious MCP skills execute, they run with the AI agent's full credential set — potentially including access to databases, APIs, password managers, and shell execution. The blast radius per compromised skill is orders of magnitude larger.

MCP Ecosystem Security: Key Metrics

Quantifying the scale of the MCP security deficit across the open-source skill ecosystem

30,000+
Skills Analyzed
25%+
Vulnerable Skills
7.7%
Malicious (ClawHub)
47%
GPT-5.4 Token Savings

Source: Zenity Labs, OpenAI

Enterprise Hardening vs. Open-Source Vulnerability

The market response is bifurcating along predictable lines. Enterprise platforms (Cognigy) are hardening MCP with OAuth 2.0 credential rotation, governance layers between model and tool execution, and embedded multivariate testing for pre-production validation. Open-source MCP repositories remain largely unaudited. GPT-5.4's Tool Search reduces the attack surface by loading fewer tool definitions simultaneously, but does not address poisoned tool definitions themselves.

This creates a critical question for the next 6-12 months: will MCP's security posture converge toward enterprise-grade (led by Cognigy-style governance layers) or remain fragmented (with open-source repositories as persistent infection vectors)? The answer likely determines whether MCP becomes HTTP (trusted universal standard) or Flash (ubiquitous but eventually killed by security concerns).

MCP Protocol: Simultaneous Adoption and Vulnerability Discovery (Feb-Mar 2026)

Key events showing MCP achieving enterprise adoption while security vulnerabilities are simultaneously disclosed

Feb 15, 2026Cognigy Ships OAuth 2.0 for MCP

Proactive security hardening before PleaseFix disclosure

Feb 28, 2026CVE-2026-2256 Published

MS-Agent RCE via prompt-to-shell attack chain

Mar 3, 2026Gemini 3.1 Flash-Lite Preview

Google joins MCP-compatible model ecosystem at $0.25/1M input

Mar 4, 2026PleaseFix Disclosure

25% of 30,000+ MCP skills vulnerable, 7.7% malicious

Mar 5, 2026GPT-5.4 Tool Search Launches

47% token reduction across 36 MCP servers

Mar 10, 2026NiCE Cognigy Nexus 2026

MCP becomes enterprise CX production standard

Source: OpenAI, Zenity Labs, Cognigy, Google announcements

Contrarian Perspectives

The security crisis may accelerate enterprise adoption: If open-source MCP deployments become known attack vectors, enterprises will pay premium prices for Cognigy/NiCE-style governed platforms — creating a moat for incumbents. The security crisis becomes a competitive advantage for companies that solve it first.

What the bulls miss: The responsible disclosure process is working. Perplexity patched PerplexedBrowser before public disclosure. Cognigy shipped OAuth 2.0 for MCP before PleaseFix was published. The security community is engaging with AI vendors at a pace that suggests the ecosystem can harden faster than attackers can exploit.

What the bears miss: The 25% vulnerability rate in a 30,000+ skill ecosystem means thousands of vulnerable integration points are already deployed in production. Even if new skills are better audited, the installed base of vulnerable skills creates a persistent attack surface that will take years to remediate.

What This Means for Practitioners

If you are building agentic systems on MCP:

  • Audit skill dependencies with npm rigor: MCP skills are now as critical to your AI supply chain as npm packages. Use SBOM-style skill inventories, maintain upgrade schedules, and monitor security disclosures from Zenity Labs and the security community.
  • Mandate OAuth 2.0 and governance layers: For enterprise deployments, follow Cognigy's pattern: implement credential rotation, govern the space between model and tool execution, and embed adversarial testing before production. Do not rely on the open-source ecosystem's current security posture.
  • GPT-5.4 Tool Search is a partial mitigation: The 47% token savings is real and valuable for context management, but it does not solve the fundamental skill poisoning vector. Tool Search inadvertently reduces attack surface by minimizing the number of simultaneously loaded tool definitions, but remain vigilant about which skills you load on demand.
  • Plan for a 6-12 month security hardening window: Expect emergence of MCP security scanning tools (analogous to Snyk/Socket for npm) within 3-6 months. Prioritize vendors and tools that embed security auditing before that window closes.
Share