Winning the Nobel Prize is one of the highest honors one can achieve, and winners bring prestige to their institutions and their countries. I’d like to highlight this year’s prizewinners.
The Nobel Prize in Physics this year was awarded to French scientist Albert Fert and German scientist Peter Grünberg. They were recognized for their independent discovery of giant magnetoresistance. The concept’s a bit esoteric, but the Nobel Prize site, nobelprize.org, has some nice introductory material. In fact, it’s really well put together, and I encourage you to browse it for more information about any aspect of the Nobel Prizes.
I especially like their “speed read” summaries. The Physics entry is quite easy to understand and begins as follows:
The Giant within Small Devices
Lying at the heart of the computer which you are using to read this article is a memory retrieval system based on the discoveries for which the 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg. They discovered, quite independently, a new way of using magnetism to control the flow of electrical current through sandwiches of metals built at the nanotechnology scale.
And if you have time, you should definitely read a nice 7-page PDF explaining the concept for the layperson, with illustrations and plain language. I won’t bother going into detail here since the site does such a nice job. There’s no excuse not to know the basics of this discovery!
You can also see videos of the announcement, or read the press release.
Scientific American has a neat piece of news in its February 2007 issue (“Chipping In” by Anna Griffin; subscription required for full text). For some time, we have had technology that can pick up signals from neurons (brain and nerve cells), for instance, allowing paralyzed patients rudimentary control over a computer or prosthesis.
But a team at the University of Southern California, led by Theodore W. Berger, has taken this a step further. For twenty years he and his team studied the brains of rats; specifically, how neurons communicate in the hippocampus, a region of the brain involved in memory. They developed a model of how the neurons responded to various inputs and built it into a chip. They then took slices of hippocampal tissue, removed part of each slice, and replaced it with the chip, “[restoring] function by processing incoming neural signals into appropriate output with 90 percent accuracy,” according to the Scientific American article.
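The basic idea of recording a tissue segment’s input-output behavior, fitting a model to it, and then letting the model stand in for the tissue can be sketched in miniature. This is a hypothetical toy of my own, nothing like the team’s actual (far more sophisticated and nonlinear) model of hippocampal dynamics:

```python
def record_pairs(tissue, inputs):
    """Record input -> output pairs from the intact tissue segment."""
    return [(x, tissue(x)) for x in inputs]

def fit_chip(pairs):
    """Fit a stand-in model (here just a single gain) by least squares."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    gain = num / den
    return lambda x: gain * x  # the "chip": reproduces the tissue's response

# Stand-in for the biological response we want to replace.
tissue = lambda x: 1.5 * x

# Observe the tissue, fit the model, then use the model in its place.
chip = fit_chip(record_pairs(tissue, [0.5, 1.0, 2.0]))
```

In this toy, the “chip” matches the tissue exactly because the tissue really is linear; the hard part of the real work was that actual neural responses are nothing like this simple.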
I find this to be very exciting. This sort of research could one day lead to devices to help humans with brain damage or memory problems, for instance, though of course that is still far away. Even at this stage, it took some interesting engineering work to figure out how to make a silicon chip interact with brain tissue. The next step will be to design a chip to work with a living brain, instead of tissue slices.
But what really fascinates me is that they were able to model the function of that brain tissue mathematically, to calculate how the section of neurons would respond to various inputs. This brings us closer to understanding just how brain functions such as memory and consciousness arise from the biology and chemistry of the brain.
It does suggest some future ethical and philosophical puzzles, though. Will we eventually be able to reproduce the functioning of the entire rat brain? How about that of a human? Might we one day be able to calculate the functioning of a human mind, to reproduce a mind as software?
My brain looks forward to future advances.
[Image credit: Lindsay France/Cornell University. Source: PhysOrg.com.]
Josh Bongard and his colleagues at Cornell write in the November 17, 2006, edition of Science (see abstract) about a new robot they have built. As reported on PhysOrg.com (thanks to Food not Bourgeoisie for spotting this), the robot develops a model of self to learn how to move, perhaps somewhat similar to the way human babies learn:
Nothing can possibly go wrong … go wrong … go wrong … The truth behind the old joke is that most robots are programmed with a fairly rigid “model” of what they and the world around them are like. If a robot is damaged or its environment changes unexpectedly, it can’t adapt.
So Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.
(continue reading at PhysOrg.com)
The robot is programmed with a list of its parts, but not with how they are connected or used. Instead, it uses a process that mixes the scientific method with evolution to learn how to move. It activates a single random motor and then, based on the result, constructs fifteen differing internal models of how it might be put together. Next, it chooses commands to send to its motors, selecting those that will produce the largest disagreement among the models’ predictions. It activates its motors and, based on the results, selects the most likely model. It then constructs variations on that model, and again determines which test movement will produce the largest difference in predicted movement between models. (This sort of repeated variation and selection is sometimes called evolutionary computation.) After sixteen cycles, the robot uses its best model of self to determine how to move its motors to travel the farthest. It then attempts to move (usually awkwardly, but functionally).
In a second part of the experiment, the researchers simulated injury by removing part of a leg. When the robot detects a large discrepancy between its predicted movement and its actual movement, it repeats the sixteen-cycle process, generating a new model of self and a new way to walk.
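The loop described above can be sketched in miniature. In this hypothetical toy (my own simplification, not the Cornell team’s actual code), the robot’s “body” is a single unknown number, its candidate self-models are guesses at that number, and “injury” is a sudden change in it:

```python
import random

def learn_self_model(true_param, cycles=16, n_models=15,
                     actions=(-2.0, -1.0, 1.0, 2.0), seed=0):
    """Toy model-and-act loop: keep candidate self-models, test the
    action they disagree on most, and evolve variations of the winner."""
    rng = random.Random(seed)
    models = [rng.uniform(-5.0, 5.0) for _ in range(n_models)]
    best = models[0]
    for _ in range(cycles):
        # Choose the motor command that maximizes disagreement
        # among the candidate models' predicted movements.
        def spread(a):
            preds = [m * a for m in models]
            return max(preds) - min(preds)
        action = max(actions, key=spread)
        observed = true_param * action  # "activate the motor" and watch
        # Keep the model whose prediction best matches what happened.
        best = min(models, key=lambda m: abs(m * action - observed))
        # Evolutionary step: generate variations on the current best.
        models = [best] + [best + rng.gauss(0.0, 1.0)
                           for _ in range(n_models - 1)]
    return best

def act_or_adapt(model, true_param, action=1.0, tol=0.5):
    """If prediction and reality diverge too much (an 'injury'),
    throw out the self-model and relearn from scratch."""
    observed = true_param * action
    if abs(model * action - observed) > tol:
        model = learn_self_model(true_param)
    return model
```

The essential trick carries over from the real robot: actions are chosen not to move well but to be maximally informative, and because the best model is always retained, each cycle can only improve the robot’s picture of itself.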