Humanoid robots are no longer science fiction. Imagine a world where robots not only work with us in factories, but also welcome us in shops, help in doctor’s offices and take care of our loved ones. With Tesla planning to deploy thousands of Optimus robots by 2026, the age of humanoid robots is closer than we think.
This vision is becoming increasingly tangible as more companies demonstrate disruptive innovation. The 2025 Consumer Electronics Show (CES) showed several examples of how robotics is advancing in both functionality and human-centered design. These included Richtech Robotics’ ADAM robot bartender, which mixes more than 50 types of drinks and interacts with customers, and Tombot’s robotic puppies, which wag their tails and make sounds designed to soothe older adults with dementia. While there may be a market for these and other robots at the show, it is still early days for this type of robotic technology to be widely deployed.
Nevertheless, there is real technological progress in the field. Increasingly, this includes “humanoid” robots that use generative artificial intelligence to create human-like capabilities – enabling robots to learn, feel and act in complex environments. From Optimus by Tesla to Aria by Realbotix, the next decade will see a proliferation of humanoid robots.
Despite these promising advances, some experts warn that fully human capabilities remain a distant goal. Yann LeCun, one of the “godfathers of AI,” recently argued that AI systems “do not have the capacity to plan, reason… or understand the physical world,” citing shortcomings in current technology. In his view, the reason we don’t have capable robots today is that “we can’t make them intelligent enough.”
LeCun may be right, though that doesn’t mean we won’t see more humanoid robots soon. Elon Musk recently said that Tesla will produce several thousand Optimus units in 2025 and expects to ship 50,000 to 100,000 in 2026. That’s a dramatic increase from the handful that exist today and perform limited functions. Of course, Musk has been known to get his timelines wrong, such as when he said in 2016 that fully autonomous driving would be achieved within two years.
Still, it seems clear that significant progress is being made with humanoid robots. Tesla is not alone in pursuing this goal, as other companies including Agility Robotics, Boston Dynamics and Figure AI are among the leaders in the field of humanoid robots.
Business Insider recently interviewed Agility Robotics CEO Peggy Johnson, who said it will soon be “very normal” for humanoid robots to work as human collaborators in a variety of workplaces. Last month, Figure announced in a LinkedIn post that it had delivered its Figure 02 humanoid robots to a commercial client, where they are now hard at work. With backing from major investors including Microsoft and Nvidia, Figure will be a fierce competitor in the humanoid robot market.
Creating a world view
Still, LeCun has a point: robots will need further advances before they approach fully human capabilities. Moving parts around a factory floor is far easier than navigating dynamic, complex environments.
The current generation of robots faces three key challenges: processing visual information fast enough to respond in real time; understanding subtle cues in human behavior; and adapting to unexpected changes in their environment. Most humanoid robots today depend on cloud computing, and the resulting network latency can make even simple tasks, such as grasping an object, difficult.
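To see why that latency matters, consider a rough back-of-the-envelope control-loop budget. The numbers below are hypothetical assumptions chosen for illustration, not measurements from any specific robot:

```python
# Hypothetical figures to illustrate why cloud round-trips break real-time control.
# None of these numbers come from a real system; they are illustrative assumptions.

CONTROL_LOOP_MS = 20       # assumed target: refresh motor commands 50x per second
CLOUD_ROUND_TRIP_MS = 80   # assumed network round-trip to a cloud-hosted model
EDGE_INFERENCE_MS = 8      # assumed on-board (edge) inference time

def fits_in_loop(latency_ms: float, budget_ms: float = CONTROL_LOOP_MS) -> bool:
    """Can one perception-to-action step finish within a single control cycle?"""
    return latency_ms <= budget_ms

print(fits_in_loop(CLOUD_ROUND_TRIP_MS))  # False: the object has moved before the reply arrives
print(fits_in_loop(EDGE_INFERENCE_MS))    # True: on-board compute keeps pace
```

Under these assumptions, a single cloud round-trip blows through several control cycles, which is why much of the industry is pushing inference onto the robot itself.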
One company trying to overcome the current limitations of robotics is startup World Labs, founded by “AI godmother” Fei-Fei Li. Speaking with Wired, Li said: “The physical world for computers is seen through cameras, and the computer brain behind the cameras. Turning this vision into thinking, generation and eventually interaction requires understanding the physical structure, the physical dynamics of the physical world. That technology is called spatial intelligence.”
Gen AI powers spatial intelligence by helping robots map their surroundings in real time, much like humans do, and predict how objects might move or change. Such advances are critical to creating autonomous humanoid robots capable of navigating complex real-world scenarios with the adaptability and decision-making capabilities needed for success.
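One of the simplest building blocks behind mapping surroundings in real time is an occupancy grid: the robot marks which cells of a map its sensors have seen as blocked, then consults the map before moving. The sketch below is a toy illustration of that idea only; production spatial-intelligence systems are learned, three-dimensional and predictive, and the grid size and coordinates here are made up:

```python
# Toy occupancy grid: one simple way a robot can map its surroundings from
# range-sensor hits. Purely illustrative; grid size and coordinates are invented.

GRID = 10  # assumed 10x10 map of the robot's local workspace

occupied = [[False] * GRID for _ in range(GRID)]

def observe(hits):
    """Mark cells where a (hypothetical) depth sensor detected an obstacle."""
    for x, y in hits:
        occupied[y][x] = True

def is_free(x, y):
    """Check the map before planning a step into cell (x, y)."""
    return 0 <= x < GRID and 0 <= y < GRID and not occupied[y][x]

observe([(2, 3), (2, 4)])   # e.g. a table edge picked up by the sensor
print(is_free(2, 3))        # False: blocked by the observed obstacle
print(is_free(5, 5))        # True: safe to move into
```

Real systems continuously fuse new observations into such a map and, as Li describes, go further by predicting how the mapped objects will move or change.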
While spatial intelligence relies on real-time data to create mental maps of the environment, another approach is to help a humanoid robot infer the real world from a single static image. As explained in a preprint paper, the Generative World Explorer (GenEx) uses AI to create a detailed virtual world from a single image, mimicking how humans make inferences about their surroundings. While still in the research phase, this capability could help robots make split-second decisions and navigate new environments with limited sensor data, allowing them to quickly understand and adapt to spaces they have never encountered before.
ChatGPT’s moment for robotics is coming
While World Labs and GenEx are pushing the boundaries of AI reasoning, Nvidia’s Cosmos and GR00T are tackling the challenge of equipping humanoid robots with real-world adaptability and interactive capabilities. Cosmos is a family of AI “world foundation models” that help robots understand physics and spatial relationships, while GR00T (Generalist Robot 00 Technology) lets robots learn by observing humans, much as an apprentice learns from a master. Together, these technologies help robots understand both what to do and how to do it naturally.
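At its core, learning by observing humans is imitation learning: record (observation, action) pairs from a demonstrator, then fit a policy that reproduces the demonstrated actions in similar situations. The sketch below shows the idea in its most stripped-down form, a nearest-neighbor “behavior cloning” policy; it is a generic illustration of the concept, not GR00T code, and all the data and names are invented:

```python
# Minimal imitation learning: fit a policy to human demonstrations.
# Generic illustration of "learning by watching"; not Nvidia's GR00T API.

demos = [  # (observation, action) pairs from a hypothetical human demonstrator
    ((0.0,), "reach"), ((0.5,), "grasp"), ((1.0,), "lift"),
    ((0.1,), "reach"), ((0.6,), "grasp"), ((0.9,), "lift"),
]

def train_nearest_neighbor(demos):
    """Simplest possible policy: copy the action from the closest demonstration."""
    def policy(obs):
        nearest = min(demos, key=lambda d: abs(d[0][0] - obs[0]))
        return nearest[1]
    return policy

policy = train_nearest_neighbor(demos)
print(policy((0.55,)))  # "grasp": mimics what the human did in a similar state
```

Production systems replace the lookup with a large neural network trained on video and teleoperation data, but the apprenticeship principle is the same.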
These innovations reflect a broader push in the robotics industry to equip humanoid robots with both cognitive and physical adaptability. GR00T could allow humanoid robots to assist in healthcare by observing and imitating medical professionals, while GenEx could allow robots to navigate disaster areas by inferring the environment from limited visual input. As reported by Investor’s Business Daily, Nvidia CEO Jensen Huang said: “ChatGPT’s moment for robotics is coming.”
Another company working on AI models of the physical world is Google DeepMind. Tim Brooks, a research scientist there, posted on X this month about the company’s plans to build large-scale generative models that simulate the physical world.
These emerging models of the physical world will better predict, plan, and learn from experience, all essential capabilities of future humanoid robots.
The robots are coming
In early 2025, humanoid robots are mostly prototypes. In the near future, they will focus on specific tasks such as manufacturing, logistics and disaster response, where automation provides immediate value. Broader applications such as caregiving or retail interactions will come later as the technology matures. Still, advances in AI and mechanical engineering are accelerating the development of these robots.
Consulting firm Accenture recently noted the evolving stack of robotic hardware, software and AI models being built to bring machine autonomy into the human world. In its “Technology Vision 2025” report, the company states: “Over the next decade, we will begin to see robots regularly and routinely interact with humans, reason their way through unplanned tasks, and independently perform actions in any environment.”
Wall Street firm Morgan Stanley estimates that the number of humanoid robots in the US could reach 8 million by 2040 and 63 million by 2050. Beyond technological advances, the firm said, long-term demographic shifts that are causing labor shortages may accelerate development and adoption.
Building trusted bots
In addition to the purely technical hurdles, there are potential social objections to overcome. Without addressing these concerns, public skepticism could hinder the adoption of humanoid robots, even in industries where they offer clear benefits. For humanoid robots to succeed, they would need to be seen as trustworthy, and people would need to believe they are helping society. As MIT Technology Review noted, “Few people would feel warm and cozy with such a robot walking into their living room right now.”
To address trust issues, researchers are investigating how to make robots appear more relatable. For example, engineers in Japan created a face mask from living human skin cells and attached it to robots. According to the study, published last summer and covered by The New York Times, the lead researcher said: “Human faces and expressions improve communication and empathy in human-robot interactions, making robots more effective in healthcare, service, and companion roles.” In other words, a more human-like appearance can build trust.
In addition to appearing trustworthy, humanoid robots will need to consistently behave ethically and responsibly to win acceptance. For example, in public spaces, humanoid robots equipped with cameras may inadvertently collect sensitive data such as conversations or facial details, raising concerns about surveillance. Policies ensuring transparent data practices will be critical to mitigating these risks.
The next decade
In the near term, humanoid robots will remain focused on tasks such as manufacturing, logistics and disaster response, where automation delivers immediate value. These specialized roles play to their current strengths in structured environments, while broader applications such as healthcare, nursing and retail operations will emerge as the technology matures.
As humanoid robots become more visible in everyday life, their presence will profoundly affect, and potentially change, human interactions and social norms. Beyond performing tasks, these machines will become embedded in the social fabric, requiring humans to navigate new relationships with technology. Their adoption could ease labor shortages in aging societies and improve efficiency in service industries, but it may also spark debates about job displacement, privacy and human identity in an increasingly automated world. Preparing for these shifts will require not only technological progress, but also thoughtful social adaptation.
By addressing the challenges and harnessing the efficiency and adaptability of humanoid robots, we can ensure that these technologies serve as tools for progress. Shaping that future isn’t just the responsibility of policymakers and technology leaders—it’s a conversation for everyone. Public participation will be essential to ensure that humanoid robots empower society and address real human needs.
Gary Grossman is EVP of the Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.