Nvidia used CES 2026 to make a simple pitch: it wants to be for robots what Android became for smartphones.
Not just a chip vendor. The default stack for everything from humanoids to warehouse arms.
From cloud AI to "physical AI"
Nvidia calls this push its "physical AI" ecosystem: AI that doesn't just live in the cloud, but runs on machines that can learn, plan and act in the real world.
That shift is being driven by three trends the company is leaning hard into:
- Cheaper, better sensors on robots
- Advanced simulation so training can move off the factory floor and into virtual worlds
- Foundation models that can generalize across many tasks instead of being locked into a single job
At CES, Nvidia stitched those pieces into a full-stack robotics platform: open models on Hugging Face, an open source simulator, orchestration software, and new edge hardware.
A stack of robot foundation models
Nvidia released several new open foundation models designed to help robots see, reason and act across a wide range of environments and tasks. All are available on Hugging Face.
The set includes:
- Cosmos Transfer 2.5: a world model for generating synthetic data and helping robots learn in simulation
- Cosmos Predict 2.5: another world model, optimized for evaluating robot policies before they ever touch real hardware
- Cosmos Reason 2: a reasoning vision-language model (VLM) that lets AI systems "see," understand context and decide what to do next in the physical world
- Isaac GR00T N1.6: Nvidia's next-generation vision-language-action (VLA) model, purpose-built for humanoid robots
GR00T leans on Cosmos Reason as its "brain" and is tuned for whole-body control. In practice, that means a humanoid can move and manipulate objects at the same time, instead of treating walking and handling as separate, brittle skills.
All of this is the opposite of the traditional industrial robot model, where each machine is custom-programmed to do one repeatable thing and nothing else.
Isaac Lab-Arena: a common playground for robot training
Training and validating robots in the real world is expensive, slow and often risky, especially for tasks like precision assembly or cable installation. Nvidia's answer is Isaac Lab-Arena, a new open source simulation framework hosted on GitHub.
Lab-Arena is meant to be a kind of standard gym for robots:
- A single place to host task scenarios and benchmarks
- Shared training tools and datasets
- Support for existing benchmarks like Libero, RoboCasa and RoboTwin
Instead of every lab or startup building its own bespoke simulator, Nvidia is betting the ecosystem will consolidate around a common environment, conveniently one that is tightly integrated with its own models and hardware.
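To make the sim-first workflow concrete, here is a deliberately tiny sketch of what "evaluate a policy in simulation before it touches hardware" means: roll the policy out over many randomized episodes and measure a success rate. Everything here is illustrative; `ToyPickEnv` and `greedy_policy` are made-up stand-ins, not part of Isaac Lab-Arena's actual API.

```python
import random

class ToyPickEnv:
    """Minimal stand-in for a simulated manipulation task.

    Illustrative toy only; the real framework provides GPU-accelerated
    physics, standard benchmarks and shared datasets.
    """
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def rollout(self, policy, steps=20):
        """Run one episode; success if the policy grasps while the
        (randomly placed) object is within reach of the end effector."""
        obj = self.rng.uniform(0.0, 1.0)  # object position on a 1D track
        pos = 0.0                         # end-effector position
        for _ in range(steps):
            move, grasp = policy(pos, obj)
            pos = max(0.0, min(1.0, pos + move))
            if grasp and abs(pos - obj) < 0.05:
                return True
        return False

def greedy_policy(pos, obj):
    """Move toward the object at bounded speed, grasp when close."""
    delta = max(-0.1, min(0.1, obj - pos))
    return delta, abs(pos - obj) < 0.05

# Evaluate over many randomized episodes before touching real hardware.
successes = sum(ToyPickEnv(seed=s).rollout(greedy_policy) for s in range(100))
env_rate = successes / 100
print(f"sim success rate: {env_rate:.2f}")
```

The point of the pattern, at any scale, is that the success metric comes from thousands of cheap, randomized virtual episodes rather than from risking a physical arm on a factory floor.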
OSMO: command center for physical AI
To glue all these pieces together, Nvidia introduced OSMO, an open source command center that connects the full workflow from data generation through training.
OSMO is designed to:
- Coordinate experiments across both desktop and cloud environments
- Streamline the loop between simulation, model training and deployment on edge devices
If Nvidia's platform play works, OSMO becomes the control room operators live in while their robots learn and update in the background.
Jetson T4000: Blackwell at the edge
Underneath all of this is new hardware. Nvidia unveiled the Jetson T4000, a Blackwell-powered compute module and the latest member of its Thor family, aimed squarely at on-device AI for robots.
Nvidia's key numbers:
- 1200 teraflops of AI compute
- 64 GB of memory
- Power draw in the 40 to 70 watt range
The pitch: it's a cost-effective upgrade path for robots that need serious on-board inference without jumping to data-center-class GPUs.
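Taking Nvidia's quoted figures at face value, a quick back-of-envelope calculation shows the efficiency range being claimed (the article does not say at what precision or sparsity the 1200-teraflop figure is measured, so treat it as a headline number):

```python
# Back-of-envelope efficiency from Nvidia's quoted Jetson T4000 numbers.
tflops = 1200                     # claimed AI compute (precision unspecified)
power_low, power_high = 40, 70    # quoted power draw range, in watts

eff_best = tflops / power_low     # TFLOPS per watt at the low end of the range
eff_worst = tflops / power_high   # TFLOPS per watt at the high end
print(f"{eff_worst:.1f} to {eff_best:.1f} TFLOPS/W")
```

That works out to roughly 17 to 30 TFLOPS per watt, which is the kind of ratio that matters for a battery-powered robot doing inference on board.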
Hugging Face partnership: 2M roboticists meet 13M AI builders
The platform story doesn't stop at Nvidia's own ecosystem. The company is deepening its partnership with Hugging Face to lower the barrier to entry for robotics.
The collaboration plugs Nvidia's Isaac and GR00T technologies directly into Hugging Face's LeRobot framework. On paper, that connects:
- Nvidia's claimed 2 million robotics developers
- With Hugging Face's 13 million AI builders
One tangible outcome: the open source Reachy 2 humanoid on Hugging Face now works directly with Nvidia's Jetson Thor chip. Developers can swap different AI models in and out on the same physical robot instead of being locked into a single proprietary stack.
That's a key part of Nvidia's Android-style narrative: open source tools, widely available models, and hardware that many vendors can build around.
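The model-swapping idea rests on one simple pattern: a stable observation-to-action interface between the robot runtime and whatever model sits behind it. This toy sketch shows the pattern in plain Python; the `Policy` protocol and the two dummy policies are hypothetical illustrations, not LeRobot's real API.

```python
from typing import Protocol, Sequence

class Policy(Protocol):
    """Any model that maps an observation to an action fits this interface."""
    def act(self, observation: Sequence[float]) -> list[float]: ...

class ZeroPolicy:
    """Dummy model: always outputs zero actions (a safe no-op baseline)."""
    def act(self, observation):
        return [0.0] * len(observation)

class MirrorPolicy:
    """Dummy model: negates each observation component."""
    def act(self, observation):
        return [-x for x in observation]

def control_step(policy: Policy, observation):
    """One tick of the control loop. The robot code never needs to know
    which model is behind the interface, so models can be swapped freely."""
    return policy.act(observation)

obs = [0.1, -0.2, 0.3]
print(control_step(ZeroPolicy(), obs))
print(control_step(MirrorPolicy(), obs))
```

Because `control_step` only depends on the interface, dropping in a different model is a one-line change; that is the property that keeps a robot from being locked to a single vendor's stack.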
Early traction for Nvidia's Android-of-robots play
There are signs the strategy is gaining momentum.
On Hugging Face, robotics is currently the fastest-growing category, with Nvidia's models leading the download charts. And on the commercial side, a range of robotics companies, including Boston Dynamics, Caterpillar, Franka Robots and NEURA Robotics, are already using Nvidia technology.
Individually, none of these moves is surprising for a company that already dominates AI training. Together, they amount to a clear bid: if you are building a general-purpose robot, Nvidia wants to be the default option for your models, your simulator, your orchestration tools and your edge compute.
If Android defined the modern smartphone era by becoming the default OS for everyone not named Apple, Nvidia is betting it can do the same at the intersection of AI and robotics.
Source tweet: https://twitter.com/TechCrunch/status/2008311157950468454



