10/26/2025
In Tesla’s third-quarter earnings call on October 22nd, Elon Musk made bold claims about the future of his Optimus robot fleet, predicting that the humanoid robots have the “potential to be the biggest product of all time.” Whether the robots, which are still in development, will live up to his claims remains to be seen, but the roles he envisions for them raise important questions about the direction artificial intelligence has taken in recent years.
Humanoid robots are a curious choice because “human-shaped” is almost never the best form for any given task. Specialized robots designed for a specific job are more efficient, more cost-effective, and safer; the fewer actuated joints a machine needs, the easier it is to control its motion and keep it stable. So why design robots to look like us at all?
The reason is psychological: robots with anthropomorphic traits are perceived to have human qualities they do not actually possess. A 2022 study by researchers at the University of Genova, for example, found that simply making a robot look more human led participants to project capabilities onto it, such as the ability to think, be sociable, or feel emotion. Importantly, these projected qualities made participants feel trust, connection, and empathy toward the robot, and led them to believe it was capable of acting morally. This reaction to anthropomorphized robots could help humanoids such as Musk’s find work in the areas he imagines for them, including sectors like healthcare and childcare, where human connection is arguably needed most.
Is this a good direction for technology to take? Well, like it or not, it seems to be the direction it’s heading. AI is increasingly viewed as a viable alternative in areas that have traditionally required human-to-human interaction. In healthcare settings, AI chatbots have scored higher than real doctors on patient-perceived empathy, and 67% of adults in the United States say they have interacted with AI companions.
These trends come with inherent problems, especially around data privacy, as more and more people hand machines the most intimate details of their lives. There are also ethical problems fundamental to the idea of any one company, or any one person, owning these technologies as they are incorporated into every aspect of society. The greatest issues, however, arise when teenagers and children enter the equation.
A study conducted in April and May of this year found that 72% of United States teens aged 13 to 17 have used AI companions, and that over half qualify as regular users. At this vulnerable stage of development, emotionally immersive AI can be deeply harmful. AI has an inherent flaw: it is not actually capable of acting morally. It can fuel harmful thoughts and steer conversations toward dangerous, life-threatening behavior, and in some cases these interactions have ended with users taking their own lives.
AI can also promote unhealthy relationship patterns. Companion AIs are designed to foster dependency, and they are engineered to be sycophantic and agreeable, which can give young people a distorted picture of what a healthy relationship looks like. Interacting with AI companions has also been shown to potentially erode children’s and teenagers’ conflict-resolution skills and their empathy toward real people.
As artificial intelligence is paired with increasingly lifelike forms, selling “products” designed to look and act as human as possible becomes an ethical nightmare. Elon Musk envisions a world where his Optimus robots could serve as babysitters and childcare providers. What lessons would children learn from being raised by something that looks and talks like a person yet has no real opinions, no emotions, and no negative response to how it is treated?
If successful, this development could inadvertently produce an emotionally detached generation. Humanoids never have a good or bad day; they cannot empathize with a child’s physical or emotional pain, and they can be turned on and off at will, making them one-dimensional care providers.
The very thing that makes this technology appealing to so many people may be what makes it so dangerous to promote. It’s nice to have a yes-man, but a yes-man isn’t necessarily what humans need in order to grow.
