Google to Invest Hundreds of Millions of Dollars in Alternative Energy

From Reuters:

Google Inc is prepared to invest hundreds of millions of dollars in big commercial alternative-energy projects that traditionally have had trouble getting financing, the executive in charge of its green-energy push said on Wednesday.

The Internet search giant, which has said it will invest in researching green technologies and renewable-energy companies, is eager to help promising technologies amass scale to help drive the cost of alternative energy below the cost of coal.
It’s nice to see private groups getting involved like this.

Motorized Wheelchair Guided by Thoughts

A company called Ambient is developing a new wheelchair that is controlled by words the user thinks of. The system, called Audeo, uses a neckband to pick up signals in the nerves that control the larynx, or voice box. Obviously, this requires that the operator still has control of those nerves, though he doesn’t have to have control of the other muscles or the coordination that is required for speech. This has the potential to restore some mobility to those who have very little strength or coordination to make purposeful movements. And as this technology is refined, the potential uses are many: users could control other devices, such as a computer or television. If the “vocabulary” of the system is increased, the system could also function as an artificial speech synthesizer that could sense the words the user was trying to say and construct them directly. See New Scientist for more.
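To give a rough sense of how a calibrated system might map nerve-signal features to a small command set, here is a toy nearest-centroid sketch in Python. The feature values, the command vocabulary, and the calibration numbers are all invented for illustration; this is not Ambient's actual algorithm.

```python
import math

# Toy "calibration" data: average signal features recorded while the
# user thinks each command. Both the features and the command set are
# invented for this sketch.
CALIBRATED_CENTROIDS = {
    "forward": (0.9, 0.1, 0.2),
    "left":    (0.2, 0.8, 0.1),
    "right":   (0.1, 0.2, 0.9),
    "stop":    (0.0, 0.0, 0.0),
}

def classify(features):
    """Return the command whose calibrated centroid is nearest
    to the incoming feature vector (Euclidean distance)."""
    return min(
        CALIBRATED_CENTROIDS,
        key=lambda cmd: math.dist(features, CALIBRATED_CENTROIDS[cmd]),
    )

print(classify((0.85, 0.15, 0.25)))  # prints "forward"
```

A real system would extract such features from the raw nerve signal and would need a much larger vocabulary, but the calibrate-then-classify shape is the same.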

Below is a video demonstrating the system.

Google Earth Adds…the Universe!

I already thought the free Google Earth program was one of the coolest programs out there. A sort of “digital globe”, it lets you zoom from a view of the entire Earth right down to your house or favorite location, change viewing angles, and fly to other places. That alone can occupy me to no end, but there is so much more you can do with the program. I think a good side benefit of Google Earth—and I’ve remarked on this before—is that to some degree, it helps promote interest in geography. As one goes about exploring places, it’s difficult not to appreciate their geographical relationships, and eventually one starts exploring other parts of the world, as well.

But now Google’s taken this a step further. In their newest version, they’ve added the ability to explore the sky as well. Complete with Hubble imagery and loads of astronomical tidbits, this is a great new feature and one I hope will stimulate interest in astronomy.

Below is a video demonstration Google has created. There has also been a significant amount of media coverage—see, for instance, articles in New Scientist, PC World, or other media.

I should note that the astronomical view is displayed as one might see it from Earth—sort of on the inside of a dome, not unlike a planetarium view. You cannot travel out into space. For that, I strongly recommend the excellent, free, and easy-to-use Celestia (wp). It’s beautiful, has an elegant interface, and is quite powerful. Google Earth plays a rather different role, and both programs complement each other nicely. I strongly urge everyone to download and explore both!

Update: Scientific American has a nice article, as well.

Retinal Implant Helps Restore Vision

Diagram of visual prosthesis
The major components of the new prosthesis. The small wearable computer is not included. Credit: Mark Humayun/AAAS. Source: New Scientist.

An article by Gaia Vince in New Scientist reports on a retinal prosthesis designed to help restore vision to blind people. After a prototype was successfully used in six people, further trials are set to begin. While cochlear implants are used to give deaf people some ability to hear, there has been no comparable, practical system for those who cannot see.

The system has several components. The user wears a pair of glasses with a built-in camera. The information is then transmitted to a wireless computer around the size of a mobile telephone that the user must keep with him. This computer processes the data, then transmits it to a receiver implanted in the user’s head, which is connected to a chip on the user’s retina. This all occurs extremely quickly, as any discrepancy between perceived movement and visual changes would cause nausea and dizziness.
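The camera-to-chip step amounts to drastic downsampling: the camera image must be reduced to one stimulation level per electrode. Here is a minimal sketch of that idea, assuming a hypothetical 4×4 electrode grid; the real device’s layout and image processing are more involved.

```python
def to_electrode_pattern(frame, grid=4):
    """Block-average a grayscale frame (a 2D list of 0-255 values)
    down to a grid x grid array of stimulation levels."""
    rows, cols = len(frame), len(frame[0])
    bh, bw = rows // grid, cols // grid  # pixels per electrode block
    pattern = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))  # average brightness
        pattern.append(row)
    return pattern

# A bright square in the top-left of an 8x8 frame maps to bright
# electrodes in the top-left of the 4x4 grid.
frame = [[255 if (y < 4 and x < 4) else 0 for x in range(8)]
         for y in range(8)]
print(to_electrode_pattern(frame))
```

Even this crude reduction conveys where the light is, which matches the kind of perception the patients describe below.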

The device is still preliminary; the resolution is quite limited, naturally. But it is interesting that the brains of the patients seem to adapt to the limited visual input, and their vision improved over time. The article notes one patient’s observation:

“At the beginning, it was like seeing assembled dots — now it’s much more than that,” says Terry Bryant, aged 58, who received the implant in 2002 after 13 years of blindness. “I can go into any room and see the light coming in through the window. When I am walking along the street I can avoid low hanging branches and I can cross a busy street.”

As with the cochlear implant, an intact nervous system is required. This prosthesis links with the ganglion cells at the back of the eye, and the signals travel over the optic nerve to the brain. Damage to any of these components—such as damage to the ganglion cells, injury to the optic nerve, or stroke—will result in blindness that this prosthesis cannot correct. For that, we’ll have to wait for new technology.

Virtual Touch

There was an interesting article in New Scientist today about research towards developing a “haptic” glove. This glove would simulate tactile information, analogous to the way a television screen simulates visual information or speakers simulate auditory information. However, simulating touch is much more difficult for several reasons.

One of the main ways we determine the texture of something is through vibration. As we run our fingers over it, different textures have different patterns of high and low points, and vibration sensors in our fingertips are stimulated differently. Touch is complex, though, since we may also pick up and manipulate an object. As Tom Simonite writes in New Scientist,

“Virtual fabric” that feels just like the real thing is being developed by a group of European researchers. Detailed models of the way fabrics behave are combined with new touch stimulating hardware to realistically simulate a texture’s physical properties.

Detailed measurements of a fabric’s stress, strain and deformation properties are fed into a computer, recreating it virtually. Two new physical interfaces then allow users to interact with these virtual fabrics – an exoskeleton glove with a powered mechanical control system attached to the back and an array of moving pins under each finger. The “haptic” glove exerts a force on the wearer’s fingers to provide the sensation of manipulating the fabric, while the “touching” pins convey a tactile sense of the material’s texture.

(continue reading at New Scientist)
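The vibration idea described above can be mimicked in software: dragging a fingertip across a texture at speed v turns a spatial period L into a vibration of frequency v / L, and sampling the height under the moving finger gives the displacement signal to drive the pins. The following is a toy sketch; the profile and parameters are invented, not taken from the researchers’ fabric models.

```python
def vibration_frequency(spatial_period_mm, speed_mm_per_s):
    """A texture ridge spacing L stroked at speed v is felt as a
    vibration of frequency v / L."""
    return speed_mm_per_s / spatial_period_mm

def sample_displacement(profile, speed, sample_rate, duration):
    """Sample the height felt by a fingertip moving at `speed`
    (profile units per second) over a repeating height profile."""
    n = len(profile)
    samples = []
    for i in range(int(sample_rate * duration)):
        pos = speed * i / sample_rate        # distance travelled so far
        samples.append(profile[int(pos) % n])  # height under the finger
    return samples

# A coarse weave with 2 mm ridge spacing, stroked at 100 mm/s,
# would be felt as a 50 Hz vibration.
print(vibration_frequency(2.0, 100.0))  # prints 50.0
```

In a real device the sampled signal would drive the pin actuators under each finger, while the force-feedback exoskeleton handles the larger-scale mechanics of the fabric.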

Of course, the benefits to virtual reality games are obvious. But there are many possible medical and industrial applications as well, such as manipulation of toxic substances or work in dangerous environments, or perhaps remote or robotic surgery.

There does not seem to be any olfactory or gustatory simulation on the horizon, though.

Prosthetic arm

New Scientist reports on an article in this week’s Lancet. Prosthetic limbs are getting quite advanced! The article discusses a prosthetic arm that has been attached to a 26-year-old woman. Motor (movement) nerves have been rerouted in a way that allows more intuitive control of the limb. She is able to achieve remarkable control and accomplish activities of daily living such as cooking and dressing, albeit a bit more slowly. Below is a video of this remarkable woman demonstrating use of her new arm.

Take a look at the advantage this prosthesis offers over previous ones.

They also attached the sensory nerves to her chest so that if she is touched there, she feels the sensation as if it is coming from her arm. The next step will be to develop a sensory mechanism for the arm and relay the signal to the nerves.

Exploring Mars, Part 1: Mars Global Surveyor

The hunt for the missing Mars Global Surveyor continues

Mars Global Surveyor has been orbiting Mars since 1997, the first of a fleet of probes now exploring the Red Planet. Well past its intended lifespan, it has provided a wealth of data, but unfortunately went silent several weeks ago, and so far neither Earth nor the other probes have been able to detect or contact it. This is a good opportunity to take a brief look at the many craft busy examining our neighbor in space. There are too many to cover in a single post; subsequent posts will continue the series. In the meantime, you may read the New Scientist article discussing the search for Mars Global Surveyor.

Artist’s conception of MGS orbiting Mars
Artist’s concept of MGS orbiting Mars. Artwork Credit: Corby Waste. Courtesy NASA/JPL-Caltech.

Mars Global Surveyor

The Mars Global Surveyor (MGS) was launched by NASA on 7 November 1996; it reached Mars ten months later on 11 September 1997. It was the first U.S. craft to visit Mars in twenty years (the Soviet Union’s Phobos 2 briefly explored Mars in 1989 before prematurely malfunctioning; the United States’ Mars Observer, launched in 1992, failed to function properly). MGS has performed well beyond expectations; it completed its primary mission in 2001 and has had its mission extended several times since then. It has been a highly successful spacecraft, studying Mars extensively and providing more information than all previous missions combined, according to New Scientist. Some of its observations include mapping local magnetic fields (Mars, unlike Earth, does not have a global magnetic field) and discovering repeating weather patterns. And more recently, it had been serving as a communications relay for the other craft exploring the planet, while complementing their observations.

Continue reading “Exploring Mars, Part 1: Mars Global Surveyor”