Roboticists at Columbia University and Duke University recently created a self-aware robotic arm that taught itself how to move.

Hod Lipson shows an audience the self-aware robotic arm he helped create.

“The idea is that robots need to take care of themselves,” says Boyuan Chen, a roboticist at Duke University in North Carolina and an author of a study on the self-aware robot published in Science Robotics. “In order to do that, we want a robot to understand their body.”

Graphic shows a progression of the self-aware robot's range of motion.

“We humans clearly have a notion of self,” he adds. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

In addition to Chen, the team for this project included Robert Kwiatkowski and Carl Vondrick, both with the Department of Computer Science at Columbia University, and Hod Lipson from Columbia’s Mechanical Engineering Department. Together with their assistants, they created a simple robotic arm and gave it access to multiple camera feeds so it could essentially see itself from several angles.

Graphic outlines specific instructions the robot was given while teaching itself to move.

“We were really curious to see how the robot imagined itself,” says Lipson. “But you can’t just peek into a neural network, it’s a black box.”

Mounted to a table, the robot wiggled and rotated its arm back and forth, teaching itself the cause and effect of each movement. Using that neural network, it built a composite picture of its own shape and size, even using a marker to draw a self-portrait on paper for the researchers. When the team commanded it to pick up a red sphere on the surface of the table, the robot was able to touch the ball consistently after just three hours.
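The learning loop described here, random “babbling” followed by fitting a neural network to predict the consequences of each action, can be sketched in a few lines of code. The sketch below is purely illustrative: the names, network shape, and stand-in babble() function are assumptions, and the actual study learned from camera images rather than the simplified fingertip positions used here.

```python
# Illustrative sketch only: the real robot learned from multiple camera
# feeds; here a stand-in babble() function fakes the data-collection step.
import torch
import torch.nn as nn

N_JOINTS = 4  # the arm's four degrees of freedom


class SelfModel(nn.Module):
    """Hypothetical self-model: predicts the fingertip's next position
    from the current joint angles and a candidate action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_JOINTS * 2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),  # predicted (x, y, z) of the end effector
        )

    def forward(self, angles, action):
        return self.net(torch.cat([angles, action], dim=-1))


def babble(n_samples):
    """Stand-in for the robot's random wiggling: sample joint states and
    small random actions, and record the observed outcome (random noise
    here; on the real robot, positions measured from the camera feeds)."""
    angles = torch.rand(n_samples, N_JOINTS) * 3.14
    action = torch.randn(n_samples, N_JOINTS) * 0.1
    observed_tip = torch.randn(n_samples, 3)
    return angles, action, observed_tip


model = SelfModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1_000):
    angles, action, observed_tip = babble(256)
    loss = nn.functional.mse_loss(model(angles, action), observed_tip)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once the model’s predictions match what the cameras actually observe, the robot can “imagine” the result of a movement before making it, which is what lets it reach the ball reliably.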

While past robots have been self-aware only in the sense that engineers gave them models of themselves, this experiment is novel: the robot came to understand itself much the way an animal or human would, by looking in a mirror, flailing its limbs about, and trying out new motions. A robot that can model itself could be much more effective and long-lasting.

Graphic breaks down the self-aware robotic arm's learning process while teaching itself to move.

“Self-modeling is a primitive form of self-awareness,” Chen explains. “If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”

This self-awareness could help robots on assembly lines diagnose their own problems and learn to fix them. It could also be extremely useful in situations where humans cannot be on hand to solve mechanical errors, such as on deep-sea dives or in space.

The robotic arm currently has four degrees of freedom, or types of motion. The researchers are now trying to work it up to 12 degrees of freedom. By comparison, the human body has hundreds.

“The more complex you are, the more you need this self-model to make predictions. You can’t just guess your way through the future,” notes Lipson. “We’ll have to figure out how to do this with increasingly complex systems.”
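One common way to cash out that prediction ability (assumed here for illustration, not taken from the paper) is a random-shooting planner: sample many candidate actions, let the self-model imagine each outcome, and execute the action predicted to land closest to the target, such as the red ball.

```python
# Hypothetical planner built on the SelfModel sketched earlier (reuses
# model and N_JOINTS from that snippet); not the study's actual controller.
import torch


def plan_step(model, angles, target, n_candidates=512):
    """Pick the candidate action whose predicted fingertip position
    lands closest to the target."""
    cands = torch.randn(n_candidates, N_JOINTS) * 0.1      # candidate wiggles
    preds = model(angles.expand(n_candidates, -1), cands)  # imagined outcomes
    best = torch.linalg.norm(preds - target, dim=-1).argmin()
    return cands[best]


target = torch.tensor([0.30, 0.10, 0.05])  # illustrative ball position
angles = torch.zeros(N_JOINTS)             # current joint configuration
action = plan_step(model, angles, target)  # action the model expects to work
```

As the arm gains joints, the space of possible actions balloons, which is exactly why a richer self-model becomes necessary: blind trial and error stops scaling long before the model does.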

The group’s work is groundbreaking in another way, too: researchers typically build virtual simulations to model how a robot will respond before deploying it, but those simulations can be expensive and time-consuming to produce. Letting a robot teach itself about its own nature could save vast amounts of money and resources.