Key Takeaways
- Meta launched Muse Spark (April 8) with zero public weights — the first closed-source Meta AI model, ending its open-source commitment
- Qwen holds 69% of Hugging Face derivative model share versus Llama's 11%, down from Llama's 25% peak in November 2023
- China now leads cumulative Hugging Face downloads (1.15B) versus the US (723M), completing an ecosystem power reversal
- Google released Gemma 4 under Apache 2.0 (the most permissive license in Gemma history) the same week Meta retreated
- Chinese open-weight models (GLM-5 at 85, Qwen3.5 at 81 on the Arena leaderboard) still outperform the leading Western open-weight alternative (Gemma 4 at 78), leaving the US "safe alternative" in second place
Meta's Closed-Source Pivot Signals Defeat
On April 8, 2026, Meta released Muse Spark with zero public weights and an API-only distribution model. This is not a minor product release — it is a fundamental reversal of Meta's identity as an open-source AI company.
Meta's Llama series defined the 2023-2024 narrative: open-source AI could compete with proprietary models, democratization was viable, and the most powerful technology could be released freely without destroying competitive advantage. Llama 2 (July 2023) set the tone, followed by Llama 3 (April 2024). Both became the foundation for the global open-source AI ecosystem.
Muse Spark changes this calculation. Worse, it underperforms: ranked 4th on Artificial Analysis (score 52 versus 57 for the top models), the closed-source pivot did not even deliver superior capability. Meta invested heavily in going closed and produced a below-frontier model. The explanation is straightforward: Meta lost the open-source ecosystem war.
Qwen Dominance: 69% of the Derivative Ecosystem
The quantitative evidence of Meta's loss is stark. As of February 2026, Qwen holds 69% of Hugging Face derivative model share, meaning the vast majority of new models being built on top of open-weight foundations are building on Chinese architecture, not Meta's.
This represents a complete reversal from November 2023, when Llama held a 25% peak share. Llama's decline from 25% to 11% is not gradual market share loss — it is ecosystem abandonment. Developers chose to build on Qwen because Qwen models became more capable, received more frequent updates, and benefited from coordinated optimization by Chinese companies with aligned financial incentives.
The scale is enormous. Qwen hit 700M cumulative downloads by January 2026. In December 2025 alone, Qwen downloads exceeded the next eight model families combined. This is not niche adoption — this is the dominant foundation for open-source AI development globally.
| Model Family | Hugging Face Derivative Share | Cumulative Downloads | Monthly Peak |
|---|---|---|---|
| Qwen | 69% | 700M (Jan 2026) | Dec 2025 (exceeded next 8 families combined) |
| Llama | 11% | Not disclosed | Not disclosed |
| Mistral | 12% | Not disclosed | Not disclosed |
| Others | 8% | Not disclosed | Not disclosed |
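The share figures in the table follow mechanically from per-family derivative counts. A minimal sketch, with the raw counts themselves assumed purely for illustration (the article reports the resulting shares, not the underlying counts):

```python
# Hypothetical derivative-model counts per family (illustrative only;
# chosen so the resulting shares match the article's reported figures).
derivative_counts = {
    "Qwen": 690_000,
    "Mistral": 120_000,
    "Llama": 110_000,
    "Others": 80_000,
}

total = sum(derivative_counts.values())

# Each family's share of the derivative ecosystem, as a rounded percentage.
shares = {family: round(100 * n / total) for family, n in derivative_counts.items()}
print(shares)  # {'Qwen': 69, 'Mistral': 12, 'Llama': 11, 'Others': 8}
```

The same calculation applied to any future snapshot of Hugging Face metadata would show whether the 69/12/11/8 split is holding or shifting.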
Compounding the ecosystem data: China has overtaken the US in cumulative Hugging Face downloads. China's 1.15B cumulative downloads versus the US's 723M show that the developer base itself has shifted geographically. This is not temporary noise; it is structural.
Google's Strategic Response: The Safe Western Alternative
Google released Gemma 4 under Apache 2.0 on April 2, 2026 — six days before Meta's closed-source announcement. The timing is not accidental. Google is explicitly positioning Gemma 4 as the Western alternative to Chinese dominance in open-source.
Gemma 4's Apache 2.0 license is significant: it is the most permissive license Google has ever used for Gemma, removing all commercial restrictions. This is a direct signal to enterprises that Gemma is the open-source choice that does not carry Chinese geopolitical risk.
But the data reveals Google's true position: Gemma 4 is playing for second place. The Arena AI open-model leaderboard ranking shows Chinese models still lead frontier open-weight performance:
- GLM-5: 85 Arena rating (Chinese, Zhipu AI)
- Qwen3.5 397B: 81 Arena rating (Chinese, Alibaba)
- Gemma 4: 78 Arena rating (Western, Apache 2.0)
Gemma 4's 400M cumulative downloads and 100K+ community-derived models show healthy adoption, but adoption is not dominance. Google has created a viable Western open-source option while implicitly acknowledging that the best open-weight models are Chinese.
The Four-Tier Equilibrium: US Export Controls Accelerated Chinese Dominance
The deepest implication of the open-source inversion is a geopolitical paradox: US export controls designed to slow Chinese AI capabilities may have accelerated Chinese dominance in open-source by forcing architectural efficiency innovations that made Chinese models more globally deployable.
US export controls constrained compute availability for Chinese AI labs. This constraint forced innovation toward sparse architectures (Mixture-of-Experts), quantization techniques, and neuro-symbolic hybrids that reduced compute requirements. These same efficiency innovations made Chinese models deployable in resource-constrained environments across the Global South and Southeast Asia — regions where compute cost is the binding constraint.
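The compute advantage of sparse Mixture-of-Experts routing reduces to simple arithmetic: only the routed experts run for each token, so active parameters are a small fraction of total parameters. A back-of-the-envelope sketch, with the expert layout entirely assumed for illustration (the article gives only the 397B headline figure, not Qwen3.5's actual configuration):

```python
def moe_active_params(shared_b: float, expert_b: float,
                      num_experts: int, top_k: int) -> tuple[float, float]:
    """Return (total, active) parameter counts in billions for an
    MoE model with top-k routing: all experts are stored, but only
    top_k of them execute per token alongside the shared layers."""
    total = shared_b + num_experts * expert_b
    active = shared_b + top_k * expert_b
    return total, active

# Illustrative configuration (NOT the real Qwen3.5 397B architecture):
# 13B shared parameters, 64 experts of 6B each, top-2 routing.
total, active = moe_active_params(shared_b=13.0, expert_b=6.0,
                                  num_experts=64, top_k=2)
print(total, active)  # 397.0 25.0
```

Under these assumed numbers, a 397B-parameter model pays per-token compute closer to a 25B dense model, which is exactly the property that makes such models viable on constrained hardware.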
The structural dynamic is now a stable four-tier equilibrium:
- Chinese frontier (closed): domestic equivalents of GPT-5.4 and Gemini 3.1 Pro, available via API within China
- Chinese open-weight (leading): GLM-5, Qwen3.5 — dominate performance on open-weight leaderboards, globally deployable
- Western frontier (closed): OpenAI, Anthropic, Google Gemini — expensive APIs, controlled deployment
- Western open-weight (secondary): Gemma 4 — Apache 2.0, governance-friendly, but not performance-leading
Meta's retreat from open-source leaves no Western competitor in the performance-leading tier of open-weight models.
What This Means for Practitioners
If you are a developer building on open-weight models, Qwen and GLM are now the dominant technical choices. Their higher download counts mean larger community ecosystems, more third-party tools, and more documentation. Dismissing them on geopolitical grounds imposes a real productivity penalty.
If you are an enterprise buyer facing supply-chain risk around Chinese dependencies, Gemma 4 under Apache 2.0 is now your primary open-source option. It is governance-friendly, Google-supported, and Western. Accept that it is not performance-leading, and evaluate whether the governance benefit justifies the capability gap.
If you are a researcher at a Western AI lab, the lesson from the data should be clear: architectural efficiency (MoE, quantization, neuro-symbolic hybrids) is now the competitive axis that matters. Brute-force scale produced Muse Spark's mediocre 4th-place ranking despite Meta's engineering resources; efficiency-first architecture produced Qwen's dominance despite compute constraints. This is the frontier where Western labs still have room to compete.
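Quantization's deployment leverage is equally arithmetic: weight memory scales linearly with bits per parameter, which is what determines whether a model fits on commodity hardware in compute-constrained markets. A sketch using a hypothetical 70B-parameter model (not any specific model from the article):

```python
def weight_memory_gb(params_b: float, bits_per_param: int) -> float:
    """Approximate weight storage in decimal GB for a model with
    params_b billion parameters at the given quantization width.
    Ignores activation memory, KV cache, and quantization overhead."""
    bytes_total = params_b * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A hypothetical 70B model at common precisions: fp16, int8, int4.
for bits in (16, 8, 4):
    print(bits, weight_memory_gb(70, bits))  # 140.0 GB, 70.0 GB, 35.0 GB
```

The fp16-to-int4 step takes the same weights from multi-GPU territory to a single accelerator, which is the mechanism behind the Global South deployment argument above.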