AI's Shift from Language Models to Real World Models

Last week at CES 2026 in Las Vegas, several keynotes highlighted AI's shift from large language models (which generate text) to world models (which simulate physical reality and guide machine actions).

  • Jensen Huang, CEO, Nvidia: "The ChatGPT moment for physical AI is here—when machines begin to understand, reason, and act in the real world."

  • Roland Busch, President and CEO, Siemens AG: "There was a world before electricity; today electricity is ubiquitous. There was a world before AI—right now we are transitioning to a world that makes full use of it, including in factories, buildings, grids, and transportation. Industrial AI is no longer a feature; it's a force that will reshape the next century. In the industrial world, hallucinations are not acceptable."

  • Carolina Parada, Senior Director of Robotics, Google DeepMind: "We think robots should understand the physical world the same way we do. They should be able to learn from their experience, generalize to new situations, and get better over time. Whether it's assembling a new car part or tying your shoelaces, robots should learn the way we do—from a handful of examples, then get better quickly with practice."

OUR TAKE

  • Physical-world AI has near-zero tolerance for error, so systems must be engineered for high reliability, durability, and accuracy.

  • Real-world AI shifts the development constraint from collecting data to building accurate simulations that cover a full range of operational scenarios.

  • A shift to AI that controls machines will require industrial deployments designed to last decades, demanding a fundamental rethink of how solutions are built, deployed, and maintained.

