Key Takeaways
- Apple pays Google ~$1B/year for Gemini powering Siri on 500M+ iOS devices with hybrid on-device/cloud architecture
- Microsoft integrates Claude Sonnet into M365 Copilot, making Claude available across all three major clouds (AWS, GCP, Azure)
- Anthropic launches $100M Claude Partner Network, positioning certification and partnership as the primary moat
- Legora raises $550M at $5.55B valuation built entirely on Claude, validating vertical AI as the value capture layer
- Three-tier market structure emerging: intelligence providers (Anthropic, Google, OpenAI), distribution platforms (Apple, Microsoft, Salesforce), and vertical specialists (Legora, Harvey)
The Platform-as-Intelligence Era
January through March 2026 witnessed the crystallization of a new business model that will define AI economics for the next decade: platform-as-intelligence (PaaI). Instead of every company building its own AI capability (the 2023-2024 thesis), the market is bifurcating into intelligence providers (frontier model labs) and intelligence consumers (platforms with distribution).
The Defining Deal: Apple and Google
Apple pays Google approximately $1B/year for a custom Gemini model powering Siri on 500M+ active iOS devices. Apple lacks a competitive frontier model despite billions in AI R&D spending. Rather than spend an estimated $5-10B to train one, Apple chose to license the capability and focus on the privacy architecture (60% of queries handled on-device, with sub-200ms latency) and the user experience layer.
Google gets guaranteed distribution to half a billion devices—a scale no enterprise sales team could achieve. Apple gets frontier-class reasoning embedded into its flagship product without the $5-10B capex. Both win.
Anthropic's Tri-Cloud Distribution
Three days after the Apple-Google announcement, on March 9, Microsoft integrated Claude Sonnet into M365 Copilot. Three days after that, Anthropic launched the Claude Partner Network with a $100M commitment.
Claude is now the only frontier model available across all three major clouds (AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure). No other frontier model has achieved this distribution breadth. For enterprise buyers, this eliminates the cloud lock-in objection that has historically limited AI model standardization.
The Three-Tier Market Structure
The market is crystallizing into three distinct tiers:
Tier 1: Intelligence Providers
Anthropic (Claude), Google (Gemini), OpenAI (GPT). These companies sell reasoning capability through API access, licensing deals, and cloud marketplace availability. Their moat is model quality plus ecosystem stickiness (certifications, workflow integration).
Tier 2: Distribution Platforms
Apple (500M iOS devices), Microsoft (400M+ M365 users), Samsung (Galaxy AI), Salesforce, ServiceNow. These companies buy intelligence and embed it in their existing user base. Their moat is distribution reach and the customer relationship.
Tier 3: Vertical Specialists
Legora ($5.55B, legal), Harvey ($8B, legal), and equivalents in healthcare, finance, and scientific research. These companies buy intelligence, wrap it in domain expertise, and sell domain-specific workflow automation. Their moat is vertical knowledge and integration depth.
The Vertical AI Thesis: Legora's Success
Legora illustrates all three tiers operating simultaneously. Anthropic provides the Claude backbone (Tier 1). Legora builds legal-specific workflow automation on top (Tier 3). Legora's 800+ law firm customers buy through Legora, not directly from Anthropic.
The 65-point adoption gap in legal (80% capability reach vs 15% actual usage) is being closed by Tier 3 specialists, not by Tier 1 providers directly. Legora's customers are not buying Claude—they are buying deposition review compressed from 20 hours to 2 hours, and $1,200/hr outside counsel replaced by internal AI review.
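The time-and-rate figures above imply a simple per-matter saving. A minimal back-of-envelope sketch; the assumption that AI review displaces outside-counsel hours one-for-one is illustrative, not Legora's actual pricing model:

```python
# Illustrative savings from the deposition-review figures cited above.
# Assumes AI review displaces outside-counsel hours one-for-one, a
# simplifying assumption for illustration, not Legora's pricing.

hours_before = 20             # manual deposition review (hours)
hours_after = 2               # AI-assisted review (hours)
outside_counsel_rate = 1_200  # $/hr outside-counsel rate cited above

hours_saved = hours_before - hours_after
cost_avoided = hours_saved * outside_counsel_rate
print(f"{hours_saved} hours saved ≈ ${cost_avoided:,} avoided per deposition")
# → 18 hours saved ≈ $21,600 avoided per deposition
```

At that rate, even a modest monthly volume of depositions covers a per-seat SaaS subscription many times over, which is why value capture concentrates at Tier 3.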
With $4.08B in legaltech VC funding in 2025 (+77% YoY) and Harvey and Legora racing to $8-11B valuations, the vertical specialists capture end-customer value. Anthropic's role is to be the preferred model backbone.
The Economics of PaaI
The economic structure of PaaI favors distribution platforms. Apple's $1B/year Gemini license works out to roughly $2/device/year for transformative AI capability across 500M devices. Google earns the ~$1B/year licensing fee but shares none of the consumer relationship.
Anthropic's Claude in M365 Copilot reaches hundreds of millions of users, but Microsoft controls the customer relationship and the pricing. Tier 1 providers risk becoming commoditized suppliers to Tier 2 platforms.
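The per-device arithmetic above is worth making explicit. A minimal sketch using the article's public estimates (reported figures, not confirmed contract terms):

```python
# Back-of-envelope PaaI licensing economics using the figures cited
# above (public estimates, not confirmed contract terms).

def per_device_cost(annual_license_usd: float, active_devices: int) -> float:
    """Annual licensing cost spread across the active device base."""
    return annual_license_usd / active_devices

apple_license = 1_000_000_000  # ~$1B/year reported Gemini license fee
ios_devices = 500_000_000      # 500M+ active iOS devices

print(f"${per_device_cost(apple_license, ios_devices):.2f}/device/year")
# → $2.00/device/year
```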
The Open-Source Wildcard
MiniMax M2.5 at $0.30/1M tokens means Tier 2 and Tier 3 companies have a credible option to self-host frontier-quality AI rather than license from Tier 1 providers. If open-source models continue closing the quality gap, the PaaI market could bifurcate: premium proprietary for high-stakes reasoning, open-source for routine intelligence.
This would compress Tier 1 provider margins on commodity workloads while maintaining premium pricing for differentiated capability.
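To see how the $0.30/1M-token open-source price changes the calculus, here is a hedged cost sketch; the $3.00/1M proprietary-API rate and the 10B-token monthly workload are hypothetical placeholders, not quoted figures:

```python
# Rough monthly inference spend at a given per-million-token price.
# The open-source rate is the MiniMax M2.5 figure cited above; the
# proprietary rate and workload size are hypothetical assumptions.

def monthly_cost(tokens_per_month: int, usd_per_million_tokens: float) -> float:
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

workload = 10_000_000_000  # 10B tokens/month (hypothetical workload)

open_source = monthly_cost(workload, 0.30)   # MiniMax M2.5 rate
proprietary = monthly_cost(workload, 3.00)   # assumed frontier API rate

print(f"open-source: ${open_source:,.0f}/mo vs proprietary: ${proprietary:,.0f}/mo")
# → open-source: $3,000/mo vs proprietary: $30,000/mo
```

Note this compares token prices only; self-hosting also carries infrastructure and operations costs that the sketch omits.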
The Privacy Play
Apple's architecture routes 60% of queries on-device with zero data egress, but roughly 10% of queries still reach Google servers. For enterprises and regulated industries, this privacy ambiguity may favor Anthropic's approach (dedicated cloud environments, known data handling) over consumer-facing arrangements where the intelligence provider is hidden from the end user.
Open-source self-hosting becomes the privacy play, not just the cost play.
The Convergence Risk
PaaI assumes model quality differentiation persists. If all frontier models converge to commodity quality (as MiniMax and DeepSeek suggest for coding tasks), then the licensing premium collapses. The counter-argument is that frontier capabilities (multimodal reasoning, million-token context, agentic planning) remain differentiated even as specific benchmarks converge.
What This Means for Practitioners
For startups: build vertical AI (Tier 3), not foundation models. The adoption gap (65 points in legal, likely 50+ in healthcare and finance) is where value accrues. The companies that close this gap win, regardless of which foundation model powers them.
For enterprises evaluating AI: negotiate multi-model licensing agreements now. Tri-cloud availability of Claude and increasing parity of open-source alternatives mean you have leverage. Lock in preferred vendor terms before model quality fully commoditizes.
For Tier 1 providers: the PaaI market rewards distribution breadth and ecosystem stickiness more than pure model quality. Invest in certification programs, partner co-development, and cloud marketplace presence. The API margin is shrinking; ecosystem margin is where long-term value concentrates.
The platform-as-intelligence market is operationalizing now. The companies that understand which tier they occupy and execute accordingly will thrive. Those caught in between—trying to compete as both intelligence provider and distribution platform—will struggle.
Platform-as-Intelligence: The Three-Tier Market Structure
AI market bifurcating into intelligence providers, distribution platforms, and vertical specialists—each with distinct moats and value capture.
| Tier | Example | Distribution | Revenue Model | Moat | Risk |
|---|---|---|---|---|---|
| Intelligence Provider | Anthropic (Claude) | All 3 clouds | API + licensing | Model quality + ecosystem | Open-source parity |
| Distribution Platform | Apple (Siri + Gemini) | iOS ecosystem | Device/subscription | 500M+ users | Privacy tension |
| Vertical Specialist | Legora ($5.55B) | 800+ law firms | SaaS per-seat | Domain expertise | Platform encroachment |
Source: Cross-dossier synthesis (Apple, Anthropic, Legora announcements)