Google to Invest Hundreds of Millions of Dollars in Alternative Energy

From Reuters:

Google Inc is prepared to invest hundreds of millions of dollars in big commercial alternative-energy projects that traditionally have had trouble getting financing, the executive in charge of its green-energy push said on Wednesday.

The Internet search giant, which has said it will invest in researching green technologies and renewable-energy companies, is eager to help promising technologies amass scale to help drive the cost of alternative energy below the cost of coal.

(continued)

It’s nice to see private groups getting involved like this.

Motorized Wheelchair Guided by Thoughts

A company called Ambient is developing a new wheelchair that is controlled by words the user thinks of. The system, called Audeo, uses a neckband to pick up signals in the nerves that control the larynx, or voice box. This requires that the operator still have control of those nerves, though he need not have control of the other muscles, or the coordination, required for speech. The system thus has the potential to restore some mobility to those who have very little strength or coordination for purposeful movement. And as the technology is refined, the potential uses are many: users could control other devices, such as a computer or television. If the system’s “vocabulary” is increased, it could also function as an artificial speech synthesizer, sensing the words the user is trying to say and constructing them directly. See New Scientist for more.
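Ambient has not published how Audeo turns nerve signals into commands, so the following Python sketch is purely illustrative: a small trained “vocabulary” of commands recognized by nearest-centroid matching on simple signal features. The feature set, classifier, and command list are all assumptions, not Ambient’s design.

```python
# Hypothetical sketch: classifying neckband nerve-signal windows into a
# small vocabulary of wheelchair commands. Purely illustrative.
import numpy as np

COMMANDS = ["forward", "back", "left", "right", "stop"]

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Reduce a window of raw samples to a tiny feature vector
    (simple energy statistics; a real system would do far more)."""
    return np.array([signal.mean(), signal.std(),
                     np.abs(np.diff(signal)).mean()])

class CommandClassifier:
    def __init__(self):
        self.centroids = {}  # command -> mean feature vector

    def train(self, examples: dict):
        """examples: command -> list of recorded signal windows."""
        for cmd, windows in examples.items():
            feats = np.array([extract_features(w) for w in windows])
            self.centroids[cmd] = feats.mean(axis=0)

    def classify(self, signal: np.ndarray) -> str:
        """Return the command whose centroid is nearest in feature space."""
        feats = extract_features(signal)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feats - self.centroids[c]))

# Demo with synthetic "recordings": one noisy sine per command.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    examples = {cmd: [np.sin(np.linspace(0, i + 1, 200))
                      + rng.normal(scale=0.05, size=200) for _ in range(10)]
                for i, cmd in enumerate(COMMANDS)}
    clf = CommandClassifier()
    clf.train(examples)
    print(clf.classify(examples["left"][0]))  # -> "left" (usually)
```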

Below is a video demonstrating the system.

Google Earth Adds…the Universe!

I already thought the free Google Earth program was one of the coolest programs out there. A sort of “digital globe,” it lets you zoom from a view of the entire Earth right down to your house or favorite location, change viewing angles, and fly to other places. That alone can occupy me to no end, but there is so much more you can do with the program. A good side benefit of Google Earth, as I’ve remarked before, is that to some degree it helps promote interest in geography. As one explores places, it is difficult not to appreciate their geographical relationships, and eventually one starts exploring other parts of the world as well.

But now Google’s taken this a step further. In their newest version, they’ve added the ability to explore the sky as well. Complete with Hubble imagery and loads of astronomical tidbits, this is a great new feature and one I hope will stimulate interest in astronomy.

Below is a video demonstration Google has created. There has also been a significant amount of media coverage—see, for instance, articles in New Scientist, SPACE.com, PC World, or other media.

I should note that the astronomical view is displayed as one might see it from Earth: the inside of a dome, not unlike a planetarium view. You cannot travel out into space. For that, I strongly recommend the excellent, free, and easy-to-use Celestia (wp). It is beautiful, has an elegant interface, and is quite powerful. Google Earth plays a rather different role, and the two programs complement each other nicely. I urge everyone to download and explore both!

Update: Scientific American has a nice article, as well.

Retinal Implant Helps Restore Vision

Diagram of visual prosthesis
The major components of the new prosthesis. The small wearable computer is not included. Credit: Mark Humayun/AAAS. Source: New Scientist.

An article by Gaia Vince in New Scientist reports on a retinal prosthesis designed to help restore vision to blind people. After a prototype was successfully used in six people, further trials are set to begin. While cochlear implants are used to give deaf people some ability to hear, there has been no comparable, practical system for those who cannot see.

The system has several components. The user wears a pair of glasses with a built-in camera. The visual information is transmitted to a wireless computer about the size of a mobile telephone that the user must carry. This computer processes the data, then transmits it to a receiver implanted in the user’s head, which is connected to a chip on the retina. All of this occurs extremely quickly, as any discrepancy between the user’s movement and the visual scene would cause nausea and dizziness.
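The article does not describe the image processing in detail, but the core step, reducing each camera frame to a coarse grid of stimulation levels (one per electrode), is easy to sketch. This is a minimal illustration under assumptions; in particular, the 4×4 grid size is invented, not the device’s actual specification.

```python
# Illustrative sketch only: averaging a grayscale camera frame down to a
# small grid of electrode stimulation intensities. Grid size is assumed.
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Average a grayscale frame (values 0-255) into grid cells, one per
    electrode, and scale each cell to a 0-1 stimulation intensity."""
    h, w = frame.shape
    gh, gw = grid
    levels = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            levels[i, j] = cell.mean() / 255.0
    return levels

# e.g. frame_to_stimulation(np.full((480, 640), 128.0)) -> 4x4 array of ~0.5.
# Each processed frame would then be radioed to the implanted receiver;
# keeping this loop fast matters, since lag between head movement and
# percept change could cause the nausea mentioned above.
```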

The device is still preliminary; the resolution is quite limited, naturally. But it is interesting that the brains of the patients seem to adapt to the limited visual input, and their vision improved over time. The article notes one patient’s observation:

“At the beginning, it was like seeing assembled dots — now it’s much more than that,” says Terry Bryant, aged 58, who received the implant in 2002 after 13 years of blindness. “I can go into any room and see the light coming in through the window. When I am walking along the street I can avoid low hanging branches and I can cross a busy street.”

As with the cochlear implant, an intact nervous system is required. This prosthesis links with the ganglion cells at the back of the eye, and the signals travel along the optic nerve to the brain. Damage anywhere along this pathway, whether to the ganglion cells, the optic nerve, or the brain itself (as in stroke), will result in blindness that this prosthesis cannot correct. For that, we’ll have to wait for new technology.

Virtual Touch

There was an interesting article in New Scientist today about research toward developing a “haptic” glove. Such a glove would simulate tactile information, analogous to the way a television screen simulates visual information or speakers simulate auditory information. Simulating touch, however, is much more difficult, for several reasons.

One of the main ways we determine the texture of something is through vibration. As we run our fingers over it, different textures have different patterns of high and low points, and vibration sensors in our fingertips are stimulated differently. Touch is complex, though, since we may also pick up and manipulate an object. As Tom Simonite writes in New Scientist,

“Virtual fabric” that feels just like the real thing is being developed by a group of European researchers. Detailed models of the way fabrics behave are combined with new touch stimulating hardware to realistically simulate a texture’s physical properties.

Detailed measurements of a fabric’s stress, strain and deformation properties are fed into a computer, recreating it virtually. Two new physical interfaces then allow users to interact with these virtual fabrics – an exoskeleton glove with a powered mechanical control system attached to the back and an array of moving pins under each finger. The “haptic” glove exerts a force on the wearer’s fingers to provide the sensation of manipulating the fabric, while the “touching” pins convey a tactile sense of the material’s texture.

(continue reading at New Scientist)
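To make the vibration idea concrete, here is a rough Python sketch of how a pin array might replay a texture as a finger slides across it. The height-map representation, pin spacing, and update rate are all assumptions for illustration; the researchers’ actual fabric models capture much more (stress, strain, and deformation, as quoted above).

```python
# Assumed-for-illustration sketch: a texture stored as a height map, swept
# under the fingertip at some speed, drives a row of tactile pins each tick.
import numpy as np

def pin_heights_over_time(height_map: np.ndarray, speed_mm_s: float,
                          pin_spacing_mm: float = 1.0, rate_hz: float = 500.0):
    """Yield successive pin-height rows as a finger slides across a texture.
    height_map: 2-D array of surface heights (rows = pins, cols = positions)."""
    dt = 1.0 / rate_hz
    pos_mm = 0.0
    length_mm = height_map.shape[1] * pin_spacing_mm
    while pos_mm < length_mm:
        col = int(pos_mm / pin_spacing_mm)
        yield height_map[:, col]      # heights for one row of pins
        pos_mm += speed_mm_s * dt     # faster strokes -> faster pin motion

# Example: a coarse corduroy-like ridge pattern, stroked at 50 mm/s.
texture = np.tile([0.0, 0.0, 1.0, 1.0], 25)[None, :].repeat(4, axis=0)
for heights in pin_heights_over_time(texture, speed_mm_s=50.0):
    pass  # send `heights` to the pin actuators each tick
```

The key relationship is visible in the loop: sweeping the same surface faster drives the pins at a higher temporal frequency, which is just how real textures produce vibration in the fingertips.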

Of course, the benefits to virtual-reality games are obvious. But there are many possible medical and industrial applications as well, such as handling toxic substances, working in dangerous environments, or perhaps performing remote or robotic surgery.

There does not seem to be any olfactory or gustatory simulation on the horizon, though.

Prosthetic Arm

New Scientist reports on an article in this week’s Lancet. Prosthetic limbs are getting quite advanced! The article discusses a prosthetic arm fitted to a 26-year-old woman. Her motor (movement) nerves have been reattached in a way that allows more intuitive control of the limb. She is able to achieve remarkable control and accomplish activities of daily living, such as cooking and dressing, albeit a bit more slowly. Below is a video of this remarkable woman demonstrating the use of her new arm.

Take a look at the advantage this prosthesis offers over previous ones.

They also attached her sensory nerves to her chest, so that when she is touched there, she feels the sensation as if it were coming from her arm. The next step will be to develop a sensory mechanism for the arm and relay its signals to those nerves.
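The control scheme rests on reading the electrical activity (EMG) of the reinnervated muscles: thinking about moving the arm contracts a patch of chest muscle, and electrodes over that patch pick up the signal. As a hedged sketch only (the channel layout, thresholds, and joint mapping are invented for illustration, not taken from the Lancet paper), the signal-to-command step might look like this:

```python
# Hypothetical sketch: mapping EMG amplitudes from reinnervated chest-muscle
# patches to proportional joint commands. Channels and thresholds invented.
import numpy as np

CHANNEL_TO_JOINT = {0: "elbow_flex", 1: "elbow_extend",
                    2: "hand_close", 3: "hand_open"}

def emg_amplitude(window: np.ndarray) -> float:
    """Root-mean-square amplitude of one electrode's recent samples."""
    return float(np.sqrt(np.mean(window ** 2)))

def joint_commands(windows, threshold: float = 0.1) -> dict:
    """Map each sufficiently active channel to a 0-1 joint velocity."""
    cmds = {}
    for ch, window in enumerate(windows):
        amp = emg_amplitude(window)
        if amp > threshold:
            cmds[CHANNEL_TO_JOINT[ch]] = min(amp, 1.0)
    return cmds
```

The appeal of the approach is that the user simply thinks about moving the missing arm; the rerouted nerves do the translation, and the electronics only have to measure which muscle patch responded.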

Exploring Mars, Part 1: Mars Global Surveyor

The hunt for the missing Mars Global Surveyor continues

Mars Global Surveyor has been orbiting Mars since 1997, the first of a fleet of probes now exploring the Red Planet. Well past its intended lifespan, it has provided a wealth of data. Unfortunately, it went silent several weeks ago, and so far neither controllers on Earth nor the other probes have been able to detect or contact it. This is a good opportunity to take a brief look at the many craft busy examining our neighbor in space. There are too many to cover in a single post; subsequent posts will continue the series. In the meantime, you may read the New Scientist article discussing the search for Mars Global Surveyor.

Artist’s conception of MGS orbiting Mars
Artist’s concept of MGS orbiting Mars. Artwork Credit: Corby Waste. Courtesy NASA/JPL-Caltech.

Mars Global Surveyor

The Mars Global Surveyor (MGS) was launched by NASA on 7 November 1996; it reached Mars ten months later, on 11 September 1997. It was the first successful U.S. mission launched to Mars in twenty years (the Soviet Union’s Phobos 2 briefly explored Mars in 1989 before prematurely malfunctioning; the United States’ Mars Observer, launched in 1992, lost contact just before arriving in 1993). MGS has performed well beyond expectations; it completed its primary mission in 2001 and has had its mission extended several times since. It has been a highly successful spacecraft, studying Mars extensively and providing more information than all previous missions combined, according to New Scientist. Its observations include mapping local magnetic fields (Mars, unlike Earth, does not have a global magnetic field) and discovering repeating weather patterns. More recently, it had been serving as a communications relay for the other craft exploring the planet while complementing their observations.

Continue reading “Exploring Mars, Part 1: Mars Global Surveyor”

Video Games as an Anti-Obesity Tool?

The prevalence of obesity has been increasing at a tremendous rate in industrialized countries. In the United States, where the problem is most pronounced, three-fifths of adults are overweight, and almost a quarter are obese (source: CDC):

In 2005, among the total U.S. adult population surveyed, 60.5% were overweight, 23.9% were obese, and 3.0% were extremely obese. Obesity prevalence was 24.2% among men and 23.5% among women and ranged from 17.7% among adults aged 18–29 years to 29.5% among adults aged 50–59 years…. Among racial/ethnic populations, the greatest obesity prevalence was 33.9% for non-Hispanic blacks. Overall, age-adjusted obesity rates were 15.6%, 19.8%, and 23.7% for the 1995, 2000, and 2005 surveys, respectively.

Children are affected as well. Also according to the CDC,

The most recent data indicate that in the United States about 16% of children ages 6–19 years are overweight. Since the 1970s, overweight has doubled among young children aged 2–5 years and tripled among school-aged children aged 6–19 years.

Continue reading “Video Games as an Anti-Obesity Tool?”

New Horizons Sights Pluto

Probe spots the dwarf planet for the first time


False-color images from New Horizons, animated to show Pluto’s movement. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

New Horizons, NASA’s Pluto-bound craft, just passed an important milestone (kilometerstone?). As reported by NASA,

The New Horizons team got a faint glimpse of the mission’s distant, main target when one of the spacecraft’s telescopic cameras spotted Pluto for the first time.

The Long Range Reconnaissance Imager (LORRI) took the pictures during an optical navigation test on Sept. 21–24, and stored them on the spacecraft’s data recorder until their recent transmission back to Earth. Seen at a distance of about 4.2 billion kilometers (2.6 billion miles) from the spacecraft, Pluto is little more than a faint point of light among a dense field of stars. But the images prove that the spacecraft can find and track long-range targets, a critical capability the team will use to navigate New Horizons toward 2,500-kilometer wide Pluto and, later, one or more 50-kilometer sized Kuiper Belt objects.

(continue reading at NASA’s web site)

Continue reading “New Horizons Sights Pluto”
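The figures in that passage show why Pluto registers as a mere point. Using the quoted numbers, its angular size from the spacecraft works out to roughly a tenth of an arcsecond:

```latex
% Angular size of Pluto as seen from New Horizons, from the figures above
\theta \approx \frac{d}{D}
       = \frac{2.5 \times 10^{3}\ \mathrm{km}}{4.2 \times 10^{9}\ \mathrm{km}}
       \approx 6 \times 10^{-7}\ \mathrm{rad}
       \approx 0.12\ \mathrm{arcsec}
```

That is far too small for the camera to resolve as a disc; what the test demonstrates is simply that the spacecraft can find and track a faint long-range target among the background stars.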

Learning to Walk

Scientists watch the robot attempt to walk
Credit: Lindsay France/Cornell University. Source: PhysOrg.com.

Josh Bongard and his colleagues at Cornell write in the November 17, 2006, edition of Science (see abstract) about a new robot they have built. As reported on PhysOrg.com (thanks to Food not Bourgeoisie for spotting this), the robot develops a model of self to learn how to move, perhaps somewhat similar to the way human babies learn:

Nothing can possibly go wrong … go wrong … go wrong … The truth behind the old joke is that most robots are programmed with a fairly rigid “model” of what they and the world around them are like. If a robot is damaged or its environment changes unexpectedly, it can’t adapt.

So Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.

(continue reading at PhysOrg.com)

The robot is programmed with a list of its parts, but not with how they are connected or used. Instead, it uses a process that mixes the scientific method with evolution to learn how to move. It activates a single motor at random, then, based on the result, constructs fifteen different internal models of how it might be put together. Next, it chooses the motor command that will produce the largest disagreement among the models’ predictions, executes it, and keeps the models that best match the outcome. Variations on these models are then constructed, and the robot again determines which test movement will best discriminate among them. (This sort of repeated variation and selection is sometimes called evolutionary computation.) After sixteen cycles, the robot uses its best model of itself to work out how to drive its motors to travel the farthest. It then attempts to move; the result is usually awkward, but functional.

In a second part of the experiment, the researchers simulated injury by removing part of a leg. When the robot detects a large discrepancy between its predicted movement and its actual movement, it repeats the sixteen-cycle process, generating a new model of itself and a new way to walk.
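The loop is easier to see in code. Below is a heavily compressed, purely illustrative Python sketch of the cycle described above; the real robot infers three-dimensional body structures from sensor data, whereas here a “model” is just a parameter vector and the physics is a stand-in function, so only the logic of the loop is faithful.

```python
# Illustrative estimation loop: candidate self-models compete to explain
# the robot's observed movement; actions are chosen to maximize their
# disagreement. All physics here is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
TRUE_PARAMS = rng.normal(size=4)           # the robot's actual (unknown) body

def outcome(params, action):
    """Stand-in physics: displacement produced by a given body and action."""
    return float(np.tanh(params @ action))

def robot(action):                         # what the physical robot reports
    return outcome(TRUE_PARAMS, action)

models = [rng.normal(size=4) for _ in range(15)]   # 15 candidate self-models

for cycle in range(16):                    # sixteen act/observe/update cycles
    # Pick the test action the candidate models disagree about most...
    candidates = [rng.normal(size=4) for _ in range(50)]
    action = max(candidates,
                 key=lambda a: np.var([outcome(m, a) for m in models]))
    observed = robot(action)               # ...try it on the real robot...
    # ...then keep the best-matching models and mutate them
    # (variation and selection, i.e. evolutionary computation).
    models.sort(key=lambda m: abs(outcome(m, action) - observed))
    elite = models[:5]
    models = elite + [e + rng.normal(scale=0.1, size=4)
                      for e in elite for _ in range(2)]

best = models[0]                           # best self-model after 16 cycles
# The robot would now plan its gait against `best`; a large gap between
# predicted and actual movement (e.g. after losing part of a leg) would
# trigger this whole loop again, yielding a new model and a new gait.
```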

Continue reading “Learning to Walk”