Nvidia wants to be the Android of generalist robotics

January 6, 2026
5 min read
[Image: Nvidia robotics demo of a humanoid robot using Jetson hardware at CES 2026]

Nvidia used CES 2026 to make a simple pitch: it wants to be for robots what Android became for smartphones.

Not just a chip vendor. The default stack for everything from humanoids to warehouse arms.

From cloud AI to "physical AI"

Nvidia calls this push its "physical AI" ecosystem — AI that doesn’t just live in the cloud, but runs on machines that can learn, plan and act in the real world.

That shift is being driven by three trends the company is leaning hard into:

  • Cheaper, better sensors on robots
  • Advanced simulation so training can move off the factory floor and into virtual worlds
  • Foundation models that can generalize across many tasks instead of being locked into a single job

At CES, Nvidia stitched those pieces into a full-stack robotics platform: open models on Hugging Face, an open source simulator, orchestration software, and new edge hardware.

A stack of robot foundation models

Nvidia released several new open foundation models designed to help robots see, reason and act across a wide range of environments and tasks. All are available on Hugging Face.

The set includes:

  • Cosmos Transfer 2.5 – a world model for generating synthetic data and helping robots learn in simulation
  • Cosmos Predict 2.5 – another world model optimized for evaluating robot policies before they ever touch real hardware
  • Cosmos Reason 2 – a reasoning vision-language model (VLM) that lets AI systems "see," understand context and decide what to do next in the physical world
  • Isaac GR00T N1.6 – Nvidia’s next‑generation vision-language-action (VLA) model, purpose‑built for humanoid robots

GR00T leans on Cosmos Reason as its "brain" and is tuned for whole‑body control. In practice, that means a humanoid can move and manipulate objects at the same time, instead of treating walking and handling as separate, brittle skills.
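
Since all of these ship through Hugging Face, pulling the weights down is a one-liner with the huggingface_hub client. A minimal sketch, assuming a hypothetical repo ID (check Nvidia's Hugging Face organization for the exact names of these releases):

```python
# Minimal sketch: fetch one of the open robotics models from Hugging Face.
# The repo ID is a placeholder, not confirmed by the announcement; browse
# huggingface.co/nvidia for the actual GR00T N1.6 / Cosmos release names.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/GR00T-N1.6")  # hypothetical repo ID
print(f"Model weights downloaded to: {local_dir}")
```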

All of this is the opposite of the traditional industrial robot model, where each machine is custom‑programmed to do one repeatable thing and nothing else.

Isaac Lab-Arena: a common playground for robot training

Training and validating robots in the real world is expensive, slow and often risky, especially for tasks like precision assembly or cable installation. Nvidia’s answer is Isaac Lab-Arena, a new open source simulation framework hosted on GitHub.

Lab-Arena is meant to be a kind of standard gym for robots:

  • A single place to host task scenarios and benchmarks
  • Shared training tools and datasets
  • Support for existing benchmarks like Libero, RoboCasa and RoboTwin

Instead of every lab or startup building its own bespoke simulator, Nvidia is betting the ecosystem will consolidate around a common environment — conveniently one that is tightly integrated with its own models and hardware.
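
Nvidia hasn't published Lab-Arena code samples in this announcement, but the "standard gym" framing maps onto the reset/step loop that benchmarks like Libero and RoboCasa already follow. A rough sketch of that pattern using the Gymnasium library, with a stock environment standing in for a Lab-Arena task:

```python
# Illustrative only: the standard reset/step loop that "gym for robots"
# frameworks build on. CartPole is a stock Gymnasium environment standing
# in for a Lab-Arena manipulation task; the loop shape is what matters.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for _ in range(100):
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```

The appeal of a shared harness like this is that the same policy can be scored against multiple benchmarks without rewriting the evaluation loop each time.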

OSMO: command center for physical AI

To glue all these pieces together, Nvidia introduced OSMO, an open source command center that connects the full workflow from data generation through training.

OSMO is designed to:

  • Coordinate experiments across both desktop and cloud environments
  • Streamline the loop between simulation, model training and deployment on edge devices

If Nvidia’s platform play works, OSMO becomes the control room operators live in while their robots learn and update in the background.
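
Nvidia hasn't detailed OSMO's interface here, so the sketch below is plain Python illustrating the loop OSMO is meant to manage, not OSMO's actual API; every name in it is invented for the example:

```python
# Hypothetical illustration only: none of these names come from OSMO.
# The shape of the loop is the point: generate data in simulation, train,
# evaluate, and push improved checkpoints out to edge devices.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    version: int
    eval_score: float

def generate_synthetic_data(version: int) -> str:
    return f"sim-data-v{version}"  # placeholder dataset handle

def train_and_evaluate(dataset: str, version: int) -> Checkpoint:
    return Checkpoint(version=version, eval_score=0.80 + 0.01 * version)

def deploy_to_edge(ckpt: Checkpoint) -> None:
    print(f"Deploying v{ckpt.version} (score {ckpt.eval_score:.2f}) to the fleet")

best = None
for version in range(3):
    data = generate_synthetic_data(version)
    ckpt = train_and_evaluate(data, version)
    if best is None or ckpt.eval_score > best.eval_score:
        best = ckpt
        deploy_to_edge(best)  # only ship checkpoints that improve
```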

Jetson T4000: Blackwell at the edge

Underneath all of this is new hardware. Nvidia unveiled the Jetson T4000, a Blackwell‑powered compute module and the latest member of its Thor family, aimed squarely at on‑device AI for robots.

Nvidia’s key numbers:

  • 1200 teraflops of AI compute
  • 64 GB of memory
  • Power draw in the 40 to 70 watt range

The pitch: it’s a cost‑effective upgrade path for robots that need serious on‑board inference without jumping to data‑center‑class GPUs.
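
Taken at face value, those specs pencil out to a strong performance-per-watt story. A quick back-of-the-envelope check using only the numbers above:

```python
# Back-of-the-envelope efficiency from the stated Jetson T4000 specs.
tflops = 1200             # claimed AI compute, in teraflops
power_range_w = (40, 70)  # stated power envelope, in watts

for watts in power_range_w:
    print(f"At {watts} W: {tflops / watts:.0f} TFLOPS per watt")
# At 40 W: 30 TFLOPS per watt
# At 70 W: 17 TFLOPS per watt
```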

Hugging Face partnership: 2M roboticists meet 13M AI builders

The platform story doesn’t stop at Nvidia’s own ecosystem. The company is deepening its partnership with Hugging Face to lower the barrier to entry for robotics.

The collaboration plugs Nvidia’s Isaac and GR00T technologies directly into Hugging Face’s LeRobot framework. On paper, that connects:

  • Nvidia’s claimed 2 million robotics developers
  • With Hugging Face’s 13 million AI builders

One tangible outcome: the open source Reachy 2 humanoid on Hugging Face now works directly with Nvidia’s Jetson Thor chip. Developers can swap different AI models in and out on the same physical robot instead of being locked into a single proprietary stack.
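
LeRobot's Python API is the likely on-ramp for that kind of model swapping. A minimal sketch of loading a public dataset through LeRobot; note that the import path has moved between LeRobot releases, so check the version you have installed, and that the dataset ID is a small public example rather than one of the new Nvidia drops:

```python
# Minimal LeRobot sketch: load a public dataset from the Hugging Face Hub.
# The import path below matches recent releases but has shifted between
# versions of LeRobot, so adjust it to the version you have installed.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")  # small public demo dataset
sample = dataset[0]                        # one timestep of observations and action
print(list(sample.keys()))
```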

That’s a key part of Nvidia’s Android‑style narrative: open source tools, widely available models, and hardware that many vendors can build around.

Early traction for Nvidia’s Android-of-robots play

There are signs the strategy is gaining momentum.

On Hugging Face, robotics is currently the fastest‑growing category, with Nvidia’s models leading the download charts. And on the commercial side, a range of robotics companies — including Boston Dynamics, Caterpillar, Franka Robots and NEURA Robotics — are already using Nvidia technology.

Individually, none of these moves is surprising for a company that already dominates AI training. Together, they amount to a clear bid: if you are building a general‑purpose robot, Nvidia wants to be the default option for your models, your simulator, your orchestration tools and your edge compute.

If Android defined the modern smartphone era by becoming the default OS for everyone not named Apple, Nvidia is betting it can do the same at the intersection of AI and robotics.


Source tweet: https://twitter.com/TechCrunch/status/2008311157950468454
