
Physical AI's Full Stack Is Now In Production: Apptronik, Gemini Robotics, NVIDIA Jetson

Apptronik's $935M Series A, with Google DeepMind's Gemini Robotics 1.5 powering Apollo humanoids at Mercedes-Benz, marks the first commercial deployment of the complete physical AI stack: foundation model + edge compute + robot body.

Tags: apptronik apollo, gemini robotics, humanoid robot 2026, physical ai stack, nvidia jetson agx orin | 6 min read | Feb 17, 2026

Key Takeaways

  • Apptronik raised $520M Series A extension (Feb 11, 2026) at $5.5B valuation — total Series A now $935M — with Google, Mercedes-Benz, John Deere, AT&T Ventures, and Qatar Investment Authority as investors
  • Apollo humanoid robots are in production (not pilot) at Mercedes-Benz, GXO Logistics, and Jabil — running on Google DeepMind Gemini Robotics 1.5 AI models and NVIDIA Jetson AGX Orin edge compute
  • Gemini Robotics 1.5 enables cross-robot learning transfer across different hardware configurations, solving the per-deployment retraining bottleneck that has blocked humanoid scaling
  • The humanoid sector raised $6.1B in 2025 (300% increase from $1.5B in 2024); Tesla is spending $20B capex on Optimus in 2026
  • The complete physical AI stack is now a production-deployable architecture: foundation model (Gemini Robotics) + edge compute (NVIDIA Jetson) + hardware platform (Apollo) — not a research abstraction

The Stack Is Assembled

Physical AI — robots with general-purpose AI intelligence rather than task-specific programming — has required three components to converge simultaneously: a foundation model capable of physical reasoning, edge compute powerful enough to run inference onboard, and robot hardware with the mechanical sophistication to execute AI decisions safely. In February 2026, all three are deployed together in production manufacturing environments for the first time.

On February 11, 2026, Apptronik announced a $520M Series A extension bringing its total Series A to $935M at a $5.5B valuation — three times its valuation from one year earlier. The strategic investors (Google, Mercedes-Benz, John Deere, AT&T Ventures, Qatar Investment Authority) are not venture bets: they are industrial companies deploying Apollo in production environments.

The Physical AI Stack in Detail

Layer 1: Foundation Model — Google DeepMind Gemini Robotics 1.5

Apptronik's strategic partnership with Google DeepMind integrates Gemini Robotics 1.5 and Gemini Robotics-ER 1.5 as Apollo's cognitive intelligence layer. These models enable Apollo to follow natural language instructions, handle unfamiliar objects, and adapt to environment changes without retraining.

The key innovation is cross-robot learning transfer: learnings from one Apollo instance transfer to others with different hardware configurations. This breaks the traditional robotics paradigm where each deployment required custom training from scratch — the economic bottleneck that has prevented humanoid scaling. With Gemini Robotics, deploying a new Apollo instance at a new facility is closer to provisioning a new cloud server than retraining a neural network.
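The provisioning analogy can be made concrete with a toy sketch. Nothing below is Apptronik's or Google DeepMind's actual API; the class, version label, robot IDs, and config fields are hypothetical, invented only to illustrate "one shared policy, many robot bodies":

```python
# Toy illustration of the deployment model described above: one shared policy,
# many robot bodies. Hypothetical API -- not Apptronik's or DeepMind's.
from dataclasses import dataclass, field

@dataclass
class SharedPolicy:
    """Stand-in for a cross-embodiment foundation model."""
    version: str
    embodiments: dict = field(default_factory=dict)

    def register(self, robot_id: str, hardware_config: dict) -> str:
        # Deploying to new hardware is a registration step, not a training run.
        self.embodiments[robot_id] = hardware_config
        return f"{robot_id} provisioned on policy {self.version}"

policy = SharedPolicy(version="gr-1.5")  # hypothetical version label
print(policy.register("apollo-site-a", {"arm_dof": 7, "payload_lb": 55}))
print(policy.register("apollo-site-b", {"arm_dof": 7, "gripper": "vacuum"}))
print(f"{len(policy.embodiments)} embodiments share one set of weights")
```

The point of the sketch is the asymmetry: adding a second embodiment is one `register` call against existing weights, whereas the pre-transfer paradigm would require a full training pipeline per site.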

Layer 2: Edge Compute — NVIDIA Jetson AGX Orin

Apollo's onboard compute runs NVIDIA Jetson AGX Orin and Jetson Orin NX modules. Physical AI decisions — manipulation, balance, obstacle avoidance — require sub-50ms latency that cloud round-trips cannot guarantee. The Jetson platform provides the edge inference capability for real-time physical AI. NVIDIA's February 2026 release of the Cosmos physical AI model family directly targets this Jetson hardware ecosystem.
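The latency argument is simple arithmetic. A sketch with hypothetical but representative component numbers (only the 50 ms budget comes from the text; the RTT, inference, and overhead figures are assumptions):

```python
# Illustrative latency-budget arithmetic for real-time physical AI control.
# Only the 50 ms budget is from the article; component numbers are assumed.
CONTROL_BUDGET_MS = 50  # stated bound for manipulation/balance decisions

def total_latency_ms(network_rtt_ms: float, inference_ms: float,
                     overhead_ms: float) -> float:
    """Sum the latency components of one perception-to-action cycle."""
    return network_rtt_ms + inference_ms + overhead_ms

# Cloud round-trip: even a good regional RTT plus inference overshoots the budget.
cloud = total_latency_ms(network_rtt_ms=40.0, inference_ms=20.0, overhead_ms=5.0)

# Onboard Jetson: no network hop; only local inference and I/O overhead remain.
edge = total_latency_ms(network_rtt_ms=0.0, inference_ms=20.0, overhead_ms=5.0)

print(f"cloud: {cloud:.0f} ms (within budget: {cloud <= CONTROL_BUDGET_MS})")
print(f"edge:  {edge:.0f} ms (within budget: {edge <= CONTROL_BUDGET_MS})")
```

Under these assumed numbers the cloud path misses the budget before jitter is even considered, while the edge path clears it with margin; the network hop, not model size, is the disqualifying term.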

The implication: NVIDIA captures the edge compute layer of the physical AI stack the same way it captured the datacenter training layer — by providing the reference hardware for the AI models (Cosmos) and the production deployment platform (Jetson) simultaneously.

Layer 3: Robot Hardware — Apollo Force Control

Apollo's defining hardware characteristic is force control rather than position control. Traditional industrial robots execute precise positional commands and fail on unexpected contact. Apollo's force control allows it to detect and adapt to unexpected resistance — a dropped object, a human bump, surface texture variation — making it safe for human-shared environments without full human exclusion.
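The contrast can be sketched in a few lines of Python. This is a toy admittance-style loop, not Apollo's actual controller; the wall position, stiffness, gains, and compliance values are invented for illustration:

```python
# Toy contrast between position control and force-aware (admittance) control.
# Not Apollo's controller; all numbers below are invented for illustration.
WALL = 0.3          # m: position of an unexpected obstacle
STIFFNESS = 400.0   # N/m: hypothetical contact stiffness

def contact_force(x: float) -> float:
    """Force the obstacle exerts once the end effector presses past WALL."""
    return STIFFNESS * max(0.0, x - WALL)

def position_step(x: float, target: float = 1.0, gain: float = 0.05) -> float:
    """Pure position control: drive toward the target regardless of contact."""
    return x + gain * (target - x)

def admittance_step(x: float, force: float, target: float = 1.0,
                    gain: float = 0.05, compliance: float = 0.05) -> float:
    """Force-aware control: shift the effective target away from the contact."""
    return x + gain * ((target - compliance * force) - x)

pos = adm = 0.0
for _ in range(200):
    pos = position_step(pos)
    adm = admittance_step(adm, contact_force(adm))

print(f"position control ends at {pos:.3f} m, pressing with {contact_force(pos):.0f} N")
print(f"admittance control settles at {adm:.3f} m, pressing with {contact_force(adm):.0f} N")
```

The position controller drives to its target and keeps pressing into the obstacle with hundreds of newtons; the admittance controller yields in proportion to sensed force and settles just past the contact point with a light touch. That yielding behavior is what makes shared workspaces plausible.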

Physical specs: 5'8", 160 lbs, 55 lb payload, a 4-hour battery that hot-swaps without shutdown, and an OLED display for status communication with human co-workers. Apollo is deployed at Mercedes-Benz for automotive assembly tasks, at GXO Logistics for warehousing, and at Jabil for electronics manufacturing.

The Investment Signal: Industrial Capital, Not Venture Speculation

The composition of Apptronik's investor base reveals the nature of the commitment. Mercedes-Benz is both an investor and a production deployment customer: it is not placing a venture bet; it is securing supply-chain access to humanoid robots before the market inflects. John Deere's investment signals planned expansion from manufacturing to agriculture. AT&T Ventures signals logistics and connectivity infrastructure buildout. Qatar Investment Authority signals sovereign capability investment.

The broader market signal: the humanoid sector raised $6.1B in 2025 — a 300% increase from $1.5B in 2024 across 139 deals. Tesla is spending $20B capex on Optimus in 2026. Citi projects 648 million humanoid robots by 2050, with a $209B TAM by 2035 and $7T by 2050.

These are not speculative projections from AI enthusiasts. They are capital allocation decisions from companies with 100-year planning horizons, evaluated against production data from current deployments.

Physical AI Stack: Investment & Deployment Signals (Feb 2026)

Key metrics from the first complete physical AI stack commercial deployment

  • $935M: Apptronik total Series A (3x valuation in one year)
  • $6.1B: humanoid sector 2025 funding (+300% from 2024)
  • 55 lbs: Apollo payload capacity (4-hour hot-swap battery)
  • $7T: Citi 2050 TAM projection (648M units projected)

Source: Apptronik press release, Citi Research, industry data Feb 2026

Google's Physical AI Strategy: Foundation Model Leverage

Google's dual position — as both an Apptronik investor and the AI model provider (Gemini Robotics) — reflects a deliberate platform strategy. Gemini Robotics is not a robot-specific model; it's a general multimodal foundation model fine-tuned for physical world interaction. This mirrors how GPT-4 (a general language model) became the foundation for hundreds of domain-specific applications.

The strategic bet: whoever controls the foundation model layer for physical AI controls the intelligence for all robot hardware — regardless of who makes the mechanical components. Google's investment in Apptronik ensures production deployment data flows back into Gemini Robotics training, while Apptronik's hardware improvement benefits Google's AI model performance.

NVIDIA plays a complementary role: it invested in Runway (video/world models) while open-sourcing Cosmos and selling Jetson hardware. The physical AI ecosystem is not winner-take-all; it is a stack in which Google captures intelligence, NVIDIA captures compute, and hardware companies capture the robot body layer.

What's Not Yet Solved

Current Apollo deployments operate in controlled factory environments with light curtains and human-exclusion zones — not the 'collaborative safety' future where robots share unstructured spaces with humans. The force control hardware is necessary but not sufficient for safe deployment at population scale. AI-specific safety challenges — cross-robot learning transfer changing behavior unpredictably after model updates, adversarial inputs exploiting foundation model vulnerabilities in physical robots — lack regulatory frameworks.

The governance gap identified in prior analysis (67-page voluntary NIST extension vs. $1.3B+ investment velocity in physical AI) is widening. Production deployments are proceeding under traditional industrial safety frameworks designed for position-control robots, not AI-adaptive systems.

What This Means for Practitioners

For robotics engineers: The reference production stack is now Gemini Robotics 1.5 + NVIDIA Jetson AGX Orin + force-control hardware. Start with this stack for new humanoid projects rather than building AI integration from scratch. The Gemini Robotics cross-robot transfer capability is the critical differentiator — evaluate it as a deployment cost multiplier, not just a capability claim.
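One way to treat transfer as a cost multiplier rather than a capability claim is a back-of-envelope fleet model. All dollar figures below are hypothetical placeholders; substitute your own estimates:

```python
# Back-of-envelope model for treating cross-robot transfer as a cost multiplier.
# All dollar figures are hypothetical placeholders, not vendor pricing.
def fleet_cost(sites: int, training: float, integration: float,
               transfer: bool) -> float:
    """Total AI-deployment cost across N sites.

    Without transfer, every site pays full training; with transfer, only the
    first site pays training and the rest pay integration only.
    """
    if transfer:
        return training + sites * integration
    return sites * (training + integration)

SITES = 20
without = fleet_cost(SITES, training=500_000, integration=50_000, transfer=False)
with_t = fleet_cost(SITES, training=500_000, integration=50_000, transfer=True)
print(f"without transfer: ${without:,.0f}")
print(f"with transfer:    ${with_t:,.0f}")
print(f"cost multiplier:  {without / with_t:.1f}x")
```

Under these placeholder numbers the multiplier is several-fold and grows with fleet size, because training cost is amortized once instead of paid per site; the exercise is sensitive to the training/integration ratio, which is exactly what an RFP should probe.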

For enterprise architects planning automation: Apollo's 'smartphone platform' model (robot body + AI layer + application ecosystem) is the emerging deployment paradigm. Mercedes-Benz, GXO, and Jabil are the production reference customers. The 2026 timeline for commercial unit production makes serious RFP processes viable now.

For AI researchers working on physical intelligence: The open questions are (1) AI-specific safety validation for cross-robot learning transfers, (2) adversarial robustness for physical AI in production environments, and (3) governance frameworks for AI models that directly control physical actions. These are unsolved problems with production urgency.
