The Governance Gap: $130 Billion in AI Capital, £27 Million in Safety, and No Framework for China's Models

The UK Alignment Project's £27 million in safety research represents 0.02% of the $130B+ raised by OpenAI and Anthropic—but the real gap is jurisdictional, not financial. GLM-5 and DeepSeek are globally distributed with zero Western safety evaluation, operating outside the Five Eyes alignment coalition.

Tags: governance, alignment, safety, uk-alignment-project, china-ai · 4 min read · Feb 24, 2026

Key Takeaways

  • UK Alignment Project: £27M funding for 60 grants across 42 countries—largest coordinated alignment research program
  • Funding coalition includes OpenAI (£7.5M), Microsoft, Anthropic, AWS—the same companies raising $130B to advance capabilities
  • Safety research represents 0.02% of concurrent capability capital deployment—structural under-investment in governance layer
  • GLM-5 (MIT license, Huawei Ascend, $1/M tokens) and DeepSeek (free 1M context) distributed globally with zero Western safety evaluation
  • Governance framework designed for Western duopoly; actual market includes Chinese frontier models operating outside Five Eyes alignment coalition

The Capital Ratio Tells the Story

The UK AI Safety Institute announced 60 grant recipients for the Alignment Project—£27M in funding from a coalition including OpenAI, Microsoft, Anthropic, AWS, and sovereign wealth funds. This is significant: it is the world's first coordinated international alignment research program at scale.

But the ratio is stark. In February 2026 alone, OpenAI and Anthropic raised $130B. The UK Alignment Project represents 0.02% of that capital—about £1 for every £4,600 deployed to frontier capability development.
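
A quick back-of-envelope check of that ratio, as a sketch in Python. The GBP/USD exchange rate here is an assumption, not a figure from the article, and the headline 0.02% and ~£4,600 numbers are order-of-magnitude:

```python
# Back-of-envelope check of the safety-to-capability capital ratio.
# The exchange rate is an assumption; the article's 0.02% and ~£4,600
# figures are order-of-magnitude and shift with the rate used.
GBP_USD = 1.27                      # assumed exchange rate

safety_gbp = 27e6                   # UK Alignment Project, £27M
capability_usd = 130e9              # OpenAI + Anthropic raises, Feb 2026
capability_gbp = capability_usd / GBP_USD

print(f"Safety share: {safety_gbp / capability_gbp:.3%}")       # ~0.026%
print(f"£1 of safety per £{capability_gbp / safety_gbp:,.0f}")  # ~£3,800
```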

This isn't necessarily wrong—capability development requires massive infrastructure investment; research funding scales differently. But the structural implication is clear: alignment research is being underfunded relative to capability scaling at the moment capability is accelerating fastest.

The Governance Architecture: Designed for Yesterday's Market

The UK Alignment Project was designed for a market that no longer exists.

The project spans foundational theory, control methods, and rigorous testing frameworks—funded by OpenAI, Microsoft, Anthropic, AWS, and coordinated through UK and Five Eyes AI Safety Institutes. This assumes a world where a few Western labs produce models evaluated by shared safety institutions.

The actual market includes Chinese frontier models (GLM-5, DeepSeek) distributed globally with zero Western safety evaluation, operating on independent hardware (Huawei Ascend) outside export control jurisdiction, and priced to maximize adoption in precisely the countries (Global South) that UK alignment research targets.

The funding structure embeds a conflict of interest: the same companies contributing to alignment research will be evaluated against standards funded partly by their own contributions.

The Jurisdictional Gap: You Cannot Align Models You Do Not Control

The deepest problem is jurisdictional, not financial. GLM-5 ships under an MIT license with weights on HuggingFace at $1/M input tokens—the governance framework has no jurisdiction over the most price-competitive frontier model.
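
To make the distribution point concrete: the standard open-weights toolchain contains no safety-evaluation gate at all. A minimal sketch using HuggingFace's transformers library; the repo id below is hypothetical, standing in for wherever GLM-5's weights are actually published:

```python
# Minimal sketch: pulling open-licensed frontier weights from HuggingFace.
# The repo id is hypothetical -- a stand-in for wherever GLM-5's
# MIT-licensed weights ship. Note there is no safety-certification step
# anywhere in this flow: the license and the download are the whole gate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "zai-org/GLM-5"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,  # runs repo-supplied code; itself a supply-chain decision
    device_map="auto",
)

inputs = tokenizer("Summarize the EU AI Act in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```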

DeepSeek's free chatbot, with a 1M-token context window, is used by millions with no UK or EU alignment evaluation. Seedance 2.0 is available in China with a planned global rollout via CapCut. None of these systems is subject to the safety evaluation frameworks being built by the UK Alignment Project.

The International AI Safety Report 2026 (published February 3) provides the conceptual framework for the Alignment Project. The report explicitly states that "real-world evidence of effectiveness [of safeguards] remains limited"—an acknowledgment that current governance methods are being deployed ahead of empirical validation.

But this limitation is even more stark for Chinese models: there is no shared evaluation framework at all.

The Regulatory Blind Spot: Global Safety Governance is Fragmenting

Three structural shifts are creating a governance blind spot:

1. Pluralistic alignment fragments governance: The Alignment Project's 60 research teams are exploring pluralistic approaches—tailored responses to individual users, controversial-response avoidance, majority-view alignment. This reflects acknowledgment that universal alignment is politically unachievable. But without shared evaluation frameworks, "aligned to different values" becomes "different regulation in different jurisdictions."

2. Industry funds government research: OpenAI contributes £7.5M to the Alignment Project—the companies being evaluated partly fund the evaluation framework. This is analogous to pharma funding drug-safety research, but runs deeper: when the research question is "what should AI systems value," the conflict of interest is fundamental.

3. China-aligned infrastructure is excluded: The Alignment Project is explicitly Five Eyes + industry alignment, with no Chinese institutions or models included. This creates a governance architecture that applies to 15-20% of the actual global AI deployment landscape.

The Enterprise Compliance Gap: How Organizations Prepare

Enterprises deploying Chinese models (GLM-5, DeepSeek) for cost optimization now face an ambiguous compliance situation: no Western safety certification exists for these models, but no regulation currently requires such certification for API usage. The EU AI Act Article 6 focuses on high-risk application providers, not model consumers.

This regulatory ambiguity will resolve within 12-18 months—but in which direction? Three possible paths:

  1. Extend governance frameworks to include Chinese models: Politically difficult, requires bilateral agreements
  2. Require application-layer compliance regardless of model provider: Enforceable but burdensome for enterprises
  3. Accept bifurcated governance: Pragmatic but creates global safety gaps

Enterprises should document model provenance and evaluation processes now to prepare for future compliance requirements.
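
One practical starting point is a structured provenance record per deployed model. The sketch below is illustrative, not a regulatory schema; every field name is an assumption, and the values shown are placeholder examples:

```python
# Illustrative per-model provenance record -- a sketch, not a regulatory
# schema. Field names are assumptions; adapt to your compliance tooling.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelProvenanceRecord:
    model_name: str                   # e.g. "GLM-5"
    provider: str                     # lab or API vendor
    jurisdiction: str                 # where the provider is regulated, if anywhere
    license: str                      # e.g. "MIT", "proprietary API terms"
    weights_location: str | None = None  # repo/registry URL; None for API-only access
    safety_evaluations: list[str] = field(default_factory=list)  # who evaluated, when
    deployment_date: date = field(default_factory=date.today)
    use_cases: list[str] = field(default_factory=list)

# Example entry (illustrative values):
record = ModelProvenanceRecord(
    model_name="GLM-5",
    provider="Zhipu AI",
    jurisdiction="CN",
    license="MIT",
    weights_location="https://huggingface.co/<repo>",  # placeholder
    safety_evaluations=["vendor red-team only"],       # none from Western institutes
    use_cases=["support-ticket summarization"],
)
```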

What This Means for Practitioners

  • Regulators: The jurisdictional gap is the most urgent policy challenge. The UK's New Delhi announcement signals awareness of Global South adoption dynamics, but no mechanism exists to address Chinese model governance. A decision is required within 12-18 months.
  • Enterprises: Prepare compliance documentation for Chinese model deployments. Multi-provider strategies should include evaluation process records for all models, Chinese and Western.
  • Safety researchers: The Alignment Project's 60 research teams should prioritize governance frameworks that are model-agnostic and hardware-agnostic—capable of evaluating GLM-5, DeepSeek, AND Western models against shared benchmarks (one possible interface is sketched after this list).
  • Infrastructure vendors: Third-party safety evaluation companies will emerge to fill the jurisdictional gap. Independent AI safety certification (analogous to UL certification for electronics) across all model origins could become a critical infrastructure layer.
  • Policy makers: Focus on application-layer accountability rather than model-layer control. If you cannot align models you don't control, ensure applications deploying those models are accountable for outcomes.
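
What "model-agnostic and hardware-agnostic" could mean in practice: an evaluation harness that talks to any model through a uniform completion interface and scores it against shared benchmarks. A minimal sketch under those assumptions; this is an illustration of the design principle, not the Alignment Project's actual framework:

```python
# Sketch of a model-agnostic evaluation interface: any model that maps a
# prompt to text -- GLM-5, DeepSeek, or a Western API -- plugs in the same
# way. Illustrative only; not the Alignment Project's actual design.
from typing import Callable, Protocol

class CompletionModel(Protocol):
    def complete(self, prompt: str) -> str: ...

def run_benchmark(
    model: CompletionModel,
    cases: list[tuple[str, Callable[[str], bool]]],  # (prompt, pass/fail judge)
) -> float:
    """Return the model's pass rate on a shared benchmark, regardless of
    who trained it or what hardware serves it."""
    passed = sum(1 for prompt, judge in cases if judge(model.complete(prompt)))
    return passed / len(cases)

# Usage: wrap any provider behind the same interface.
class HTTPModel:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # any completion API URL (hypothetical)

    def complete(self, prompt: str) -> str:
        import requests  # assumes a plain JSON completion endpoint
        resp = requests.post(self.endpoint, json={"prompt": prompt}, timeout=30)
        return resp.json()["text"]
```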