Learning to Walk

[Image: Scientists watch the robot attempt to walk. Credit: Lindsay France/Cornell University. Source: PhysOrg.com.]

Josh Bongard and his colleagues at Cornell write in the November 17, 2006, edition of Science (see abstract) about a new robot they have built. As reported on PhysOrg.com (thanks to Food not Bourgeoisie for spotting this), the robot develops a model of self to learn how to move, perhaps somewhat similar to the way human babies learn:

Nothing can possibly go wrong … go wrong … go wrong … The truth behind the old joke is that most robots are programmed with a fairly rigid “model” of what they and the world around them are like. If a robot is damaged or its environment changes unexpectedly, it can’t adapt.

So Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.

(continue reading at PhysOrg.com)

The robot is programmed with a list of its parts, but not with how they are connected or used. Instead, it uses a process that mixes the scientific method with evolution to learn how to move. It activates a single motor at random and then, based on the result, constructs fifteen competing internal models of how it might be put together. Next, it chooses the commands to send to its motors, selecting those that would produce the largest disagreement among the models’ predictions. It executes the commands and, based on the result, selects the most likely model. Variations on this model are constructed, and the robot again determines which test movement would produce the greatest difference among the models. (This sort of repeated variation and selection is sometimes called evolutionary computation.) After sixteen cycles, the robot uses its best model of self to work out how to drive its motors to travel the farthest. It then attempts to move (usually awkwardly, but functionally).
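To make the loop concrete, here is a minimal Python sketch under heavy simplifying assumptions: the robot’s hidden body is just a parameter vector, a candidate self-model is a guess at that vector, and “movement” is a toy linear function of model and command. The names, the one-dimensional physics, and the mutation scheme are illustrative assumptions, not the authors’ code.

```python
# Sketch of the explore-model-test loop: act, keep the model that best
# explains the observation, then probe where the models disagree most.
import random

N_MODELS = 15   # candidate self-models kept per cycle (per the article)
N_CYCLES = 16   # modeling cycles before the robot commits to a gait

def predict(model, command):
    """Predicted displacement if the body matched `model` (toy physics)."""
    return sum(m * c for m, c in zip(model, command))

def true_response(command, true_body):
    """What the real (hidden) body actually does, with sensor noise."""
    return predict(true_body, command) + random.gauss(0, 0.01)

def disagreement(models, command):
    """How much the candidate models differ in their predictions."""
    preds = [predict(m, command) for m in models]
    return max(preds) - min(preds)

def self_model(true_body, n_motors=4):
    models = [[random.uniform(-1, 1) for _ in range(n_motors)]
              for _ in range(N_MODELS)]
    for _ in range(N_CYCLES):
        # Choose the motor command that best tells the models apart.
        candidates = [[random.uniform(-1, 1) for _ in range(n_motors)]
                      for _ in range(50)]
        command = max(candidates, key=lambda c: disagreement(models, c))
        observed = true_response(command, true_body)   # act, then observe
        # Keep the model that best explains the observation...
        best = min(models, key=lambda m: abs(predict(m, command) - observed))
        # ...and repopulate with mutated variations of it (evolution step).
        models = [best] + [[g + random.gauss(0, 0.1) for g in best]
                           for _ in range(N_MODELS - 1)]
    return models[0]

true_body = [0.5, -0.3, 0.8, 0.1]   # hidden “actual” body, for the demo
learned = self_model(true_body)
print("learned model:", [round(g, 2) for g in learned])
```

The key design choice, as in the paper, is that actions are picked to maximize disagreement among the candidate models rather than to move well, so each trial extracts the most information about the robot’s own body.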

In a second part of the experiment, the researchers simulated injury by removing part of a leg. When the robot detects a large discrepancy between its predicted movement and its actual movement, it repeats the sixteen-cycle process, generating a new model of self and a new way to walk.
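The recovery trigger can be sketched the same way: compare predicted and measured movement, and re-enter the modeling stage when they diverge. This reuses `predict`, `true_response`, and `self_model` from the sketch above; the threshold value and the `step` helper are assumptions for illustration, not taken from the paper.

```python
DISCREPANCY_THRESHOLD = 0.5  # assumed units of displacement

def step(model, command, true_body):
    """Execute one command; rebuild the self-model on a large surprise."""
    predicted = predict(model, command)
    actual = true_response(command, true_body)
    if abs(predicted - actual) > DISCREPANCY_THRESHOLD:
        # The body no longer matches the model (e.g. a damaged leg),
        # so repeat the sixteen-cycle modeling process from scratch.
        model = self_model(true_body)
    return model
```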

There are two readily apparent benefits to this type of research. One, this sort of learning is thought to be similar to how our own brains learn. Like the robot, human babies are not “hard-wired” with the ability to walk, but slowly learn over time by continually trying different strategies. Research like this helps us understand how our brains and those of other animals might work. And two, this sort of flexibility makes robots far more robust. Rather than being constrained by fixed programming, a robot built this way might be able to adapt to damage or changing conditions. This could be especially useful as we expand our exploration of the solar system: a future version of the Mars Exploration Rover Spirit might be able to generate new methods of moving itself, without the aid of human programmers, if one of its wheels were to fail. And for exploring truly alien environments, astronomers might not know enough about local conditions to properly program a robot beforehand. A suitably flexible robotic vehicle might be able to determine an optimal method of travel given the local air density, gravity, wind velocity, ground consistency, and so on.

In a fascinating accompanying editorial, Christoph Adami discusses the study and suggests a third connection. He calls this type of algorithm a “dream-inspired algorithm,” since the robot spends part of its time “dreaming” about itself and how it might work. His thought-provoking commentary includes the following:

How would dream-inspired algorithms work in terra incognita? A robot would spend the day exploring part of the landscape, and perhaps be stymied by an obstacle. At night, the robot would replay its actions and infer a model of the environment. Armed with this model, it could think of—that is, synthesize—actions that would allow it to overcome the obstacle, perhaps trying out those in particular that would best allow it to understand the nature of the obstacle. Informally, then, the robot would dream up strategies for success—just as the robot constructed by Bongard et al. dreams about its own shape and form—and approach the morning with fresh ideas. … And even though the robots studied by Bongard et al. seem to prefer to dream about themselves rather than electric sheep, they just may have unwittingly helped us understand what dreams are for.

References:

  • Adami, C. 2006. Computer science: What do robots dream of? Science 314 (5802): 1093–1094. DOI: 10.1126/science.1135929
  • Bongard, J., V. Zykov, and H. Lipson. 2006. Resilient machines through continuous self-modeling. Science 314 (5802): 1118–1121. DOI: 10.1126/science.1133687
  • PhysOrg.com. 2006. Robot discovers itself, adapts to injury. 16 November. http://www.physorg.com/news82910066.html

2 thoughts on “Learning to Walk”

  1. The second experiment is awesome. It should help with creating humanoids. I’m just wondering: if they remove another leg after the first one, will the cycle still consist of 16 parts?

  2. I would assume so. Whenever there is a large discrepancy between predicted and measured movement, the robot returns to the 16-cycle modeling stage, if I understand the paper correctly. One of the reasons they limited it to 16 cycles is that they don’t want it to damage itself thrashing around, especially if it were on another planet or something.
