
Google's AI Triple Moat: Powering Apple's Siri, Training Open-Source Competitors, and Collecting $20B in Search Revenue

Google licenses Gemini to Apple at $1B/year, trained LTX-2.3 for open-source competitors of its own Veo, and collects $20B from Apple for search defaults. It wins every distribution channel.

TL;DR · Breakthrough 🟢
  • Google occupies three simultaneous roles in the AI value chain: paid model provider (Gemini to Apple, $1B/year), open-source infrastructure enabler (trained LTX-2.3, which competes with Google's own Veo), and search default revenue collector ($20B/year from Apple). No other AI company occupies more than two.
  • The Apple-Google relationship has inverted: Google previously paid Apple for distribution (search default). Now Apple also pays Google for capability (Gemini). Mutual dependency makes the relationship structurally difficult to terminate for either party.
  • Google providing training infrastructure for LTX-2.3 — an open-source video model that directly threatens Veo — follows the same strategic logic as NVIDIA's Nemotron: seed open-source to drive infrastructure demand, even at the cost of short-term proprietary revenue in a secondary market segment.
  • OpenAI occupies one role (paid model provider). NVIDIA occupies two (hardware + model). Anthropic occupies one (safety-differentiated model). Google's three-layer positioning makes it the most structurally resilient AI company regardless of which distribution model wins.
  • Primary vulnerability: DOJ antitrust scrutiny of the entangled Apple-Google financial relationship, and Apple's stated intention to eventually build its own frontier model (closing the 8x parameter gap is a multi-year, multi-billion-dollar effort).
Tags: Google AI · Gemini · Apple Siri · LTX-2.3 · competitive strategy | 5 min read | Mar 23, 2026
High Impact · Medium-term

Developers building AI products should consider which layer of the value chain they are operating in. Building on Google's infrastructure (GCP for training, Gemini API for inference) reduces risk because Google profits from your success at multiple layers. Building exclusively on OpenAI's API creates single-vendor dependency without infrastructure-layer benefits.

Adoption: Apple-Google Siri launches April 2026 (iOS 26.4). The full impact of Google's multi-layer positioning will be visible by Q3 2026, when Siri AI usage data reveals adoption patterns. Google's infrastructure enablement of open-source (the LTX-2 pattern) will likely expand to other modalities in 2026-2027.

Cross-Domain Connections

  • Apple licenses the 1.2T-parameter Gemini from Google at $1B/year while Google pays Apple ~$20B/year for search defaults.
  • Google provided training infrastructure for LTX-2.3, an open-source model competing with Google's own Veo.

Google profits from all three distribution channels: paid model licensing (Gemini to Apple), open-source enablement (LTX-2 training infrastructure), and legacy distribution (search defaults). This multi-role positioning means Google's revenue is resilient to changes in which distribution channel wins the consumer AI market.

  • OpenAI raised $110B to scale Transformer models but has no infrastructure or distribution layer.
  • NVIDIA invests in Nscale ($2B Series C) and releases open-weight Nemotron but has no consumer distribution.

Neither OpenAI (model-only) nor NVIDIA (hardware + model) can replicate Google's multi-layer positioning. OpenAI needs distribution partners; NVIDIA needs cloud partners. Google is its own distribution partner (Android + search), its own cloud provider (GCP), and its own model developer (Gemini) — vertical integration across all layers.

  • LTX-2.3, trained on Google infrastructure, achieves production-grade 4K video, threatening Runway and Synthesia.
  • The Apple-Google deal validates Gemini as a frontier model, making Gemini the default AI engine across both Android and iOS.

Google's open-source strategy (enabling LTX-2.3) and proprietary strategy (licensing Gemini to Apple) operate at different layers and do not conflict. Open-source video generation commoditizes a modality Google does not prioritize; proprietary model licensing captures a modality Google does prioritize.


Role 1: Paid Model Provider to the Most Demanding Customer

The Apple-Google Gemini deal makes Google the invisible AI brain behind billions of iOS devices starting April 2026. Users see 'Siri' — Google is entirely white-labeled. The economics favor Google structurally: $1B/year in guaranteed licensing revenue from a single customer, with expansion potential as Apple Intelligence capabilities grow.

The strategic significance goes beyond revenue: Apple — with a $3T market cap, 200K engineers, and an effectively unlimited R&D budget — evaluated all frontier model options and chose Gemini. This is the strongest possible commercial validation that Gemini is production-grade at consumer scale. Apple's in-house 150B-parameter model was 8x smaller than the 1.2T-parameter Gemini it licensed, and the solution Apple chose was a bigger Transformer, not a different architecture or a different provider.

Gemini now has distribution reach across both Android (native) and iOS (licensed) — no other model family has this combined reach. OpenAI's ChatGPT-Siri integration exists but has not expanded, and Apple's stated preference for the Google arrangement suggests ChatGPT's role may contract as Gemini's expands.

Role 2: Open-Source Infrastructure Enabler (Even for Competitors)

Google trained LTX-2.3 on its own infrastructure, a decision that appears paradoxical given that LTX-2.3 directly competes with Google's Veo video generation model. The strategic logic becomes clear through the infrastructure lens: Google Cloud sells compute, and training a 22B-parameter model generates revenue while seeding an open-source ecosystem that increases long-term demand for cloud compute.

This mirrors NVIDIA's Nemotron strategy (give away the model, sell the hardware) but at the cloud layer: training is one-time revenue; the open-source ecosystem it enables drives ongoing hardware demand that benefits NVIDIA (and indirectly Google Cloud, as NVIDIA and hyperscalers are interdependent). By enabling LTX-2.3, Google pressures proprietary video competitors (Runway, Synthesia) without cannibalizing Gemini's core text/reasoning business — the open-source commoditization hits a secondary market Google does not prioritize while maintaining Gemini's premium positioning in general intelligence.

Role 3: The $20B Search Default That Funds Everything

The most remarkable aspect of the Apple-Google relationship is its financial inversion. Google previously paid Apple ~$20B/year for search defaults, an arrangement under DOJ scrutiny as part of the Google antitrust case. The Gemini licensing deal adds a reverse flow: Apple now pays Google $1B/year for AI capability. Net cash flow remains Google-to-Apple, but money now moves in both directions.

This creates mutual dependency that makes the relationship structurally hard to terminate. Apple cannot easily replace Google's search payments (which fund much of Apple Services revenue) or Gemini's capability (which requires 8x its own model capacity to replicate). Google cannot easily replace Apple's distribution reach (40%+ of global smartphones). The relationship has become co-dependent in both financial and technical dimensions simultaneously.

The Contrast: Single-Role vs Multi-Role Positioning

Cross-referencing the March 2026 AI landscape, Google's multi-role positioning stands out structurally. OpenAI occupies one role (paid model provider via API): it has no cloud infrastructure layer (it runs on Azure) and no consumer distribution beyond ChatGPT, and its $110B raise gives it scaling capital but not structural positioning across value-chain layers. NVIDIA occupies two roles (hardware seller and open-source model provider) but has no consumer distribution and does not operate cloud infrastructure directly. Anthropic occupies one role (safety-differentiated model provider) with no infrastructure, distribution, or hardware strategy.

Google is its own distribution partner (Android + search), its own cloud provider (GCP), and its own model developer (Gemini) — vertical integration across all three value chain layers. In a market where distribution, infrastructure, and model quality are all independently valuable, this positioning provides resilience that single-role competitors cannot match regardless of model quality.

AI Company Multi-Role Positioning: Only Google Occupies All Three Value Chain Layers

Comparison of how major AI companies are positioned across model provision, infrastructure, and consumer distribution

| Company   | Infrastructure                  | Model Provider            | Revenue Layers | Consumer Distro                 |
|-----------|---------------------------------|---------------------------|----------------|---------------------------------|
| Google    | Google Cloud (LTX-2 training)   | Gemini (1.2T to Apple)    | 3              | Android + iOS via Siri + Search |
| OpenAI    | None (uses Azure)               | GPT-5.4 ($110B funding)   | 1              | ChatGPT app only                |
| NVIDIA    | GPU sales + Nscale investment   | Nemotron 3 Super (free)   | 2              | None                            |
| Anthropic | None (uses AWS/GCP)             | Claude (safety moat)      | 1              | None                            |
| Apple     | Private Cloud Compute           | None (licenses Gemini)    | 1              | iOS (billions)                  |

Source: Synthesized from Apple-Google deal, LTX-2 release, NVIDIA GTC, company filings

Google's Structural Vulnerabilities

The primary risk is antitrust. The search default payments are already under DOJ scrutiny. Adding a Gemini licensing relationship with the same counterparty (Apple) creates further entanglement that regulators may view as anti-competitive in both search distribution and AI distribution markets simultaneously. If the DOJ forces separation of search payments and AI licensing, Google's multi-layer strategy faces structural disruption at its most financially significant layer.

The second vulnerability is Apple's stated ambition. Apple explicitly calls the Gemini arrangement 'temporary', a signal that it recognizes the strategic risk of the dependency. But Apple's track record on foundation model development does not match its engineering reputation: its 150B model was 8x too small. Closing a factor-of-8 parameter gap against a company with Google's infrastructure and training expertise is a multi-year, multi-billion-dollar effort that requires capital allocation Apple has historically directed toward hardware, not model training.

What This Means for Practitioners

For developers building AI products: Building on Google's infrastructure (GCP for training, Gemini API for inference) reduces single-vendor risk because Google profits from your success at multiple layers. Building exclusively on OpenAI's API creates single-vendor dependency without infrastructure layer benefits — if OpenAI's pricing changes or capability leadership shifts, migration costs are high. Google's multi-layer positioning makes it the most stable long-term infrastructure bet in the current AI landscape.

For competitive analysis: Google's triple-role positioning means it cannot 'lose' the AI platform war in the near term regardless of which distribution channel wins. If platform white-label succeeds, Google wins through Gemini licensing. If open-source local succeeds, Google wins through cloud infrastructure demand. If hardware-coupled models succeed, Google Cloud wins through GPU infrastructure. The only scenario where Google's positioning collapses is paradigm disruption (JEPA/world models replacing Transformers) combined with a new hardware architecture making GPU compute obsolete — both required simultaneously.

For AI infrastructure investment: Nscale's EU-jurisdiction positioning captures the compliance infrastructure layer that EU AI Act enforcement will require. For European enterprises, the combination of Nscale compute (data sovereignty) + Gemini/Claude safety certification could become a standard enterprise AI stack. The $14.6B Nscale valuation reflects the market's anticipation of this compliance-driven demand before enforcement begins.
