Wednesday, January 10, 2007

Scientist says pulsar may have four poles

SEATTLE, Jan. 10 (UPI) -- The neutron star in the Crab Nebula may have four magnetic poles, which would be a cosmic first, a scientist working at Puerto Rico's Arecibo Observatory said.

A neutron star, the remnant of a star after a supernova explosion, is sometimes called a pulsar because it emits beams of radio waves that sweep past Earth like the beam of a lighthouse. The profile of the Crab Nebula star's pulses suggests the magnetic field that drives its emission is unusual, the BBC said.

The Crab Nebula pulsar has two pulses that can be identified, Tim Hankins, from the Arecibo Observatory in Puerto Rico, said during the American Astronomical Society meeting in Seattle. The profiles of pulses from the north and south poles of a neutron star should be identical.

The profiles of these pulses were not identical, which Hankins said is the first time such a mismatch has been noted in a pulsar.

"What we think is that there is another pole, possibly with a partner, that is influencing and distorting the magnetic field," he said, explaining that magnetic poles always come in pairs, so the fourth pole is distinctly likely.

Copyright 2007 by United Press International. All Rights Reserved.

(c) www.physorg.com

A New Reflection in the Mirror

A scanning electron microscope image of some of the magnetic mirror's “fish scale”-shaped aluminum nanowires. Credit: Alexander Schwanecke

A research group has devised a new type of mirror that reverses the magnetic field of a light wave upon reflection, rather than its electric field, as regular mirrors do. Seems like a minor difference? It's not.

“Our mirror's ability to reverse the magnetic field of a light wave but not its electric field is extremely unusual,” physicist Alexander Schwanecke, the study's corresponding scientist, said to PhysOrg.com. Schwanecke is a researcher at the NanoPhotonics Portfolio Centre at the University of Southampton in the United Kingdom. “It is the first demonstration of an entirely new type of optical tool.”
A typical household mirror works like this: Photons (particles of light) bounce off an object or person, hit the mirror, and are absorbed by electrons on the surface of its metal backing. The electrons almost instantly emit “reflected” photons (not the same photons that came in, as those are absorbed and gone), which travel to our eyes, allowing us to see our image. Photons that strike the mirror head-on are reflected squarely back, and those hitting at an angle are reflected at the same angle in the other direction, forming a V-shaped path. This is the law of reflection.
To understand the work by Schwanecke and his colleagues, however, we must remember that light is both a particle and a wave, and that, as a wave, it consists of an electric-field component and a magnetic-field component. After reflection from an ordinary mirror, the direction of the light wave's electric field is reversed (one type of “phase change”), but the magnetic component is not.
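In textbook terms (standard electromagnetics, not taken from the Southampton paper itself), the two cases at normal incidence can be summarized as:

```latex
\text{ordinary (electric) mirror:}\quad E_r = -E_i, \qquad B_r = +B_i
\text{magnetic mirror:}\quad E_r = +E_i, \qquad B_r = -B_i
```

Here $E_i, B_i$ are the incident fields and $E_r, B_r$ the reflected ones; the minus sign is the half-wave phase flip described above.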
This magnetic mirror produces the opposite scenario: a flipped magnetic field and an unchanged electric field. The mirror has three layers: a layer of aluminum, a layer of silicon dioxide, and finally a layer of carefully arranged aluminum nanowires, shaped into a wavy pattern that the researchers call “fish scales.” The fish-scale shape is important because it allows the light to interact with the nanowires in a particular way, due to the spacing between each “scale.” As a result, the scales resonate with the light much like molecules would.
The mirror is tiny and square, about 500 micrometers (millionths of a meter) on each side, and contains about one million fish-scale-shaped elements. It works best for visible light, but the group expects that, with some tweaks to the fish-scale pattern, near-infrared light would work, too.

The nanowire layer is the key to the mirror's function. The curved nanowire “fish scales,” like molecules, have dimensions that are smaller than the wavelength of visible light. This means that they can interact with the light to influence or directly produce the material's overall optical response, in this case, a reversal of the light's magnetic field.
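A rough sanity check on the sub-wavelength claim (simple arithmetic, not a figure from the researchers): if the roughly one million elements tile the 500-micrometer-square mirror in a regular grid, each “scale” occupies a pitch of about half a micrometer, comparable to or smaller than visible wavelengths of roughly 400-700 nanometers.

```python
# Back-of-the-envelope estimate from the numbers quoted in the article;
# the square-grid layout is an assumption for illustration only.
side_um = 500.0          # mirror side length, from the article
n_elements = 1_000_000   # approximate element count, from the article

elements_per_side = n_elements ** 0.5            # assumes a square array
pitch_nm = side_um * 1000.0 / elements_per_side  # convert um to nm

print(f"approximate element pitch: {pitch_nm:.0f} nm")  # ~500 nm
```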
The researchers discovered the mirror's ability by observing a reflection using an interferometer, a device that can detect the difference in behavior of two light waves by recording what happens when they “interfere,” or cross paths.
“One characteristic of our mirror is that it is very sensitive to energy losses at the surface,” said Schwanecke. “This property could make it very useful for improving devices that work by detecting light, such as photodetectors.”
The mirror could also be useful, he says, in the detection of tiny particles or molecules near the mirror’s surface. If a particle or molecule was nearby and emitted a photon, the mirror would reflect the photon’s electric component without reversing it. A “normal” mirror would reverse it, thus weakening the signal and making it harder to detect the photon and, by extension, the particle or molecule.
The mirror’s potential to work with near-infrared light (light close to the visible range but still in the infrared) could make it advantageous to the telecommunications industry, in which near-infrared light is commonly used.

(c) www.physorg.com

Saturday, January 6, 2007

What’s Making Us Sick Is an Epidemic of Diagnoses

by H. Gilbert Welch, Lisa Schwartz and Steven Woloshin

For most Americans, the biggest health threat is not avian flu, West Nile or mad cow disease. It’s our health-care system.

You might think this is because doctors make mistakes (we do make mistakes). But you can’t be a victim of medical error if you are not in the system. The larger threat posed by American medicine is that more and more of us are being drawn into the system not because of an epidemic of disease, but because of an epidemic of diagnoses. 
Illustration: Harry Campbell

Americans live longer than ever, yet more of us are told we are sick.

How can this be? One reason is that we devote more resources to medical care than any other country. Some of this investment is productive, curing disease and alleviating suffering. But it also leads to more diagnoses, a trend that has become an epidemic.

This epidemic is a threat to your health. It has two distinct sources. One is the medicalization of everyday life. Most of us experience physical or emotional sensations we don’t like, and in the past, this was considered a part of life. Increasingly, however, such sensations are considered symptoms of disease. Everyday experiences like insomnia, sadness, twitchy legs and impaired sex drive now become diagnoses: sleep disorder, depression, restless leg syndrome and sexual dysfunction.

Perhaps most worrisome is the medicalization of childhood. If children cough after exercising, they have asthma; if they have trouble reading, they are dyslexic; if they are unhappy, they are depressed; and if they alternate between unhappiness and liveliness, they have bipolar disorder. While these diagnoses may benefit the few with severe symptoms, one has to wonder about the effect on the many whose symptoms are mild, intermittent or transient.

The other source is the drive to find disease early. While diagnoses used to be reserved for serious illness, we now diagnose illness in people who have no symptoms at all, those with so-called predisease or those “at risk.”

Two developments accelerate this process. First, advanced technology allows doctors to look really hard for things to be wrong. We can detect trace molecules in the blood. We can direct fiber-optic devices into every orifice. And CT scans, ultrasounds, M.R.I. and PET scans let doctors define subtle structural defects deep inside the body. These technologies make it possible to give a diagnosis to just about everybody: arthritis in people without joint pain, stomach damage in people without heartburn and prostate cancer in over a million people who, but for testing, would have lived as long without being a cancer patient.

Second, the rules are changing. Expert panels constantly expand what constitutes disease: thresholds for diagnosing diabetes, hypertension, osteoporosis and obesity have all fallen in the last few years. The criterion for normal cholesterol has dropped multiple times. With these changes, disease can now be diagnosed in more than half the population.

Most of us assume that all this additional diagnosis can only be beneficial. And some of it is. But at the extreme, the logic of early detection is absurd. If more than half of us are sick, what does it mean to be normal? Many more of us harbor “pre-disease” than will ever get disease, and all of us are “at risk.” The medicalization of everyday life is no less problematic. Exactly what are we doing to our children when 40 percent of summer campers are on one or more chronic prescription medications?

No one should take the process of making people into patients lightly. There are real drawbacks. Simply labeling people as diseased can make them feel anxious and vulnerable — a particular concern in children.

But the real problem with the epidemic of diagnoses is that it leads to an epidemic of treatments. Not all treatments have important benefits, but almost all can have harms. Sometimes the harms are known, but often the harms of new therapies take years to emerge — after many have been exposed. For the severely ill, these harms generally pale relative to the potential benefits. But for those experiencing mild symptoms, the harms become much more relevant. And for the many labeled as having predisease or as being “at risk” but destined to remain healthy, treatment can only cause harm.

The epidemic of diagnoses has many causes. More diagnoses mean more money for drug manufacturers, hospitals, physicians and disease advocacy groups. Researchers, and even the disease-based organization of the National Institutes of Health, secure their stature (and financing) by promoting the detection of “their” disease. Medico-legal concerns also drive the epidemic. While failing to make a diagnosis can result in lawsuits, there are no corresponding penalties for overdiagnosis. Thus, the path of least resistance for clinicians is to diagnose liberally — even when we wonder if doing so really helps our patients.

As more of us are being told we are sick, fewer of us are being told we are well. People need to think hard about the benefits and risks of increased diagnosis: the fundamental question they face is whether or not to become a patient. And doctors need to remember the value of reassuring people that they are not sick. Perhaps someone should start monitoring a new health metric: the proportion of the population not requiring medical care. And the National Institutes of Health could propose a new goal for medical researchers: reduce the need for medical services, not increase it.

Dr. Welch is the author of “Should I Be Tested for Cancer? Maybe Not and Here’s Why” (University of California Press). Dr. Schwartz and Dr. Woloshin are senior research associates at the VA Outcomes Group in White River Junction, Vt.

(c) www.nytimes.com

Super Slurper: From Laboratory Bench To Library Shelf

Science Daily — Super Slurper, a cornstarch-based superabsorbent polymer invented by Agricultural Research Service (ARS) scientists over 30 years ago, continues to fan the entrepreneurial spirit.


A flake of Super Slurper absorbs nearly 2,000 times its own weight in moisture. (Photo by George Robinson)

Take, for example, Nicholas Yeager, president of Artifex Equipment, Inc., a Penngrove, Calif., company specializing in book and document restoration. This fall, Yeager's company began mass-producing Zorbix, a sheetlike product based on Super Slurper that can dry out waterlogged library materials before destructive molds take hold.

Zorbix's commercialization is the latest chapter in a storied history of Super Slurper spinoffs that followed an ARS patent on the starch polymer in 1976. Among those spinoffs were disposable diapers, wound dressings, fuel filters and seed coatings.

The Zorbix story began in 2003, when Yeager was contacted by Kate Hayes, an information specialist with the Technology Transfer Information Center at the ARS National Agricultural Library in Beltsville, Md. Hayes, now retired, proposed using Super Slurper as a fast, new way of drying books exposed to flooding, leaky pipes and other watery disasters.

Intrigued, Yeager ran a simple test: He pressed Super Slurper onto the pages of a paperback novel that he had wetted, and the polymer drew the moisture out. Yeager informed Hayes that her idea had worked and that he wanted to explore the polymer's potential further.

In August 2003, NAL and Artifex entered into a material-transfer cooperative research and development agreement to both expedite and formalize Hayes and Yeager's bicoastal collaboration. In February 2004, the U.S. Department of Agriculture (USDA) also awarded a Small Business Innovation Research grant to Artifex.

In the studies that followed, Yeager converted Super Slurper from its flake form into one that allowed the creation of thin, flexible sheets, which he named Zorbix. In his tests and in independent studies, the sheets worked as well as or better than other drying methods, including vacuum drying, poultices and blotters.

To better meet demand since debuting Zorbix in March, Artifex has obtained automated equipment capable of making thousands of the sheets per hour.

ARS is USDA's chief scientific research agency.

(c) www.sciencedaily.com

X-ray Evidence Supports Possible New Class Of Supernova

Science Daily — Recent observations have uncovered evidence that helps to confirm the identification of the remains of one of the earliest stellar explosions recorded by humans.


The combined image from the Chandra and XMM-Newton X-ray observatories of RCW 86 shows the expanding ring of debris that was created after a massive star in the Milky Way collapsed onto itself and exploded. (Chandra: NASA/CXC/Univ. of Utrecht/J.Vink et al. XMM-Newton: ESA/Univ. of Utrecht/J.Vink et al.)

The new study shows that the supernova remnant RCW 86 is much younger than previously thought. As such, the formation of the remnant appears to coincide with a supernova observed by Chinese astronomers in 185 A.D. The study used data from NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton Observatory.

"There have been previous suggestions that RCW 86 is the remains of the supernova from 185 A.D.," said Jacco Vink of University of Utrecht, the Netherlands, and lead author of the study. "These new X-ray data greatly strengthen the case."

When a massive star runs out of fuel, it collapses on itself, creating a supernova that can outshine an entire galaxy. The intense explosion hurls the outer layers of the star into space and produces powerful shock waves. The remains of the star and the material it encounters are heated to millions of degrees and can emit intense X-ray radiation for thousands of years.

In their stellar forensic work, Vink and colleagues studied the debris in RCW 86 to estimate when its progenitor star originally exploded. They calculated how quickly the shocked, or energized, shell is moving in RCW 86, by studying one part of the remnant. They combined this expansion velocity with the size of the remnant and a basic understanding of how supernovas expand to estimate the age of RCW 86.
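
A minimal sketch of that kind of age estimate, using assumed, illustrative numbers rather than the team's measured values (which this article does not give): divide the remnant's radius by its shock velocity, with a correction factor for how much the shock has decelerated.

```python
# Illustrative only -- radius and velocity below are assumptions,
# not the values reported by Vink and colleagues.
PC_IN_KM = 3.086e13           # kilometers per parsec
SECONDS_PER_YEAR = 3.156e7

radius_pc = 15.0              # assumed remnant radius
shock_velocity_km_s = 3000.0  # assumed shock expansion velocity

radius_km = radius_pc * PC_IN_KM
free_expansion_age_yr = radius_km / shock_velocity_km_s / SECONDS_PER_YEAR

# If the shock has been decelerating (Sedov phase, R ~ t**0.4),
# the age is roughly (2/5) * R / v rather than R / v.
sedov_age_yr = 0.4 * free_expansion_age_yr

print(f"free-expansion age: {free_expansion_age_yr:,.0f} years")  # ~4,900
print(f"Sedov-phase age:    {sedov_age_yr:,.0f} years")           # ~2,000
```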

"Our new calculations tell us the remnant is about 2,000 years old," said Aya Bamba, a coauthor from the Institute of Physical and Chemical Research (RIKEN), Japan. "Previously astronomers had estimated an age of 10,000 years."

The younger age for RCW 86 may explain an astronomical event observed almost 2,000 years ago. In 185 A.D., Chinese astronomers (and possibly the Romans) recorded the appearance of a new bright star. The Chinese noted that it sparkled like a star and did not appear to move in the sky, arguing against it being a comet. Also, the observers noticed that the star took about eight months to fade, consistent with modern observations of supernovas.

RCW 86 had previously been suggested as the remnant of the 185 A.D. event, based on historical records of the object's position. However, uncertainties about the age cast significant doubt on the association.

"Before this work I had doubts myself about the link, but our study indicates that the age of RCW 86 matches that of the oldest known supernova explosion in recorded history," said Vink. "Astronomers are used to referencing results from 5 or 10 years ago, so it's remarkable that we can build upon work from nearly 2000 years ago."

The smaller age estimate for the remnant follows directly from a higher expansion velocity. By examining the energy distribution of the X-rays, a technique known as spectroscopy, the team found most of the X-ray emission was caused by high-energy electrons moving through a magnetic field. This is a well-known process that normally gives rise to low-energy radio emission. However, only very high shock velocities can accelerate the electrons to such high energies that X-ray radiation is emitted.

"The energies reached in this supernova remnant are extremely high," said Andrei Bykov, another team member from the Ioffe Institute, St. Peterburg, Russia. "In fact, the particle energies are greater than what can be achieved by the most modern particle accelerators."

The difference in age estimates for RCW 86 is due to differences in expansion velocities measured for the supernova remnant. The authors speculate that these variations arise because RCW 86 is expanding into an irregular bubble blown by a wind from the progenitor star before it exploded. In some directions, the shock wave has encountered a dense region outside the bubble and slowed down, whereas in other regions the shock remains inside the bubble and is still moving rapidly. These regions give the most accurate estimate of the age.

The study describing these results appeared in the September 1 issue of The Astrophysical Journal Letters. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory, Cambridge, Mass., controls science and flight operations from the Chandra X-ray Center, Cambridge, Mass. XMM-Newton is a European Space Agency science mission managed at the European Space Research and Technology Centre, Noordwijk, the Netherlands, for the Directorate of the Scientific Programme.

(c) www.sciencedaily.com

Catching Cosmic Clues

by Joanne Baker

The author Henry James wrote that "experience is … a kind of huge spider-web of the finest silken threads suspended in the chamber of consciousness, and catching every airborne particle in its tissue." Particle astrophysicists are trying to weave their own webs by building vast detectors on Earth and in space that will ensnare cosmic particles and so teach us about the building blocks of the universe.

Thanks to enormous progress in cosmology in recent years, astrophysicists are both pleased and perplexed. On the one hand, they have succeeded in nailing down the universe's mass, geometry, and expansion rate. But on the other, they have discovered that 95% of the stuff of the universe is in two unknown forms that they have named "dark matter" and "dark energy." Only 5% is normal matter: electrons, protons, and neutrons. Pinning down the nature of this missing mass and energy is difficult, because dark matter does not absorb light or interact with normal atoms; the dark energy driving accelerated cosmic expansion is even more intangible. Particle physicists may, however, have the tools to test some ideas. In this special issue devoted to particle astrophysics, a rapidly developing interdisciplinary area, six Perspectives cover not only candidates for dark matter but also the physics of the Big Bang fireball, neutrinos, cosmic rays, and sources of extreme-energy gamma rays such as black holes.

Neutrino physics has leapt ahead in recent years, with measurements of neutrino mass and oscillations between different types, or flavors. The next frontier is neutrino astronomy, capturing neutrinos from sources more distant than the Sun, and vast arrays of detectors are being built under the ice in Antarctica and under the Mediterranean Sea to do this. Neutrinos hardly interact with normal matter at all, but occasionally they do and produce ghostly flashes of light that detectors can catch. If the universe's hidden mass takes the form of other particles, then axions and WIMPs (weakly interacting massive particles) are the prime suspects. Experiments, many hidden below ground to isolate the detectors from other stray particles, have been designed and are being implemented to spot these exotic particles via the tiny recoils they impart to nuclei in the detector. Currently, these detectors are modest in size, but detectors now on the drawing board could weigh as much as a ton.

High-energy particles can also be used for astronomy. Cosmic-ray observatories are nearing the sensitivities required to detect individual sources in the sky, thus testing acceleration physics. Cosmic rays are created by extreme astrophysical sources such as supernova shock waves, gamma-ray bursts, and the environments near black holes. Very-high-energy gamma-ray emission from these sources is already detectable with new telescope arrays and has constrained the physics of particle jets emanating from compact stars and black holes.

Particle astrophysics is an exciting area brimming with promise. As scientists come together to combine their know-how, maybe in the next decade we will find the missing matter, and crown the already remarkable achievements of cosmology.

(c) www.sciencemag.org

What Landed in New Jersey? It Came From Outer Space

by Kareem Fahim

It was not from the neighborhood.

The object that tore through the roof of a house in the New Jersey suburbs this week was an iron meteorite, perhaps billions of years old and maybe ripped from the belly of an asteroid, experts who examined it said yesterday.

Tentatively named “Freehold Township” for the place where it landed — and ruined a second-floor bathroom — the meteorite is only the second found in New Jersey, said Jeremy S. Delaney, a Rutgers University expert who examined it.

“It’s a pretty exciting find,” said Dr. Delaney, who has examined thousands of meteorites. He said that the first New Jersey meteorite was found in 1829, in the seaside town of Deal.

The meteorite now belongs to the family whose house it ended up in, said Lt. Robert Brightman of the Freehold Township Police Department, adding that they had asked not to be identified.

The family has not yet given permission for physical testing of the meteorite, but from looking at it, Dr. Delaney and other experts were able to tell that the object it had been part of — perhaps an asteroid — cooled relatively fast.

It is magnetic, and reasonably dense, they determined. The leading edge — the one that faced forward as it traveled through the earth’s atmosphere — was much smoother, while the so-called trailing edge seemed to have caught pieces of molten metal.

In fact, Dr. Delaney said, it seemed very similar to another meteorite fragment, the Ahnighito, now on display at the American Museum of Natural History.

“This little guy is a lot like it,” he said. “It’s a good candidate for the core of an asteroid.”

And the scientists are hoping that the owners of the “Freehold Township” will make it available for testing and public viewing, like the Ahnighito, a 34-ton chunk of the Cape York meteorite found in Greenland.

Or, they could sell it.

“The worth of a meteorite like this is almost completely determined by where it fell,” said Eric Twelker, a geologist and a dealer in meteorites, who buys and sells perhaps a hundred of them a month on meteoritemarket.com, his Web site. He was speaking of the premium placed on meteorites with a compelling back story, like the football-size rock that crashed into a parked Chevrolet in Peekskill, N.Y., in 1992.

(c) www.nytimes.com

Cancer-killing Invention Also Harvests Stem Cells

Science Daily — Associate Professor Michael King of the University of Rochester Biomedical Engineering Department has invented a device that filters the blood for cancer and stem cells.  When he captures cancer cells, he kills them.  When he captures stem cells, he harvests them for later use in tissue engineering, bone marrow transplants, and other applications that treat human disease and improve health.


Bone marrow cells that have been purified in a StemCapture device. (Image courtesy of School of Engineering and Applied Sciences, University of Rochester)

With Nichola Charles, Jared Kanofsky, and Jane L. Liesveld of the University of Rochester, King wrote about his discoveries in "Using Protein-Functionalized Microchannels for Stem Cell Separation," Paper No. ICNMM2006-96228, Proceedings of the ASME, June 2006.  King’s team includes scientists at StemCapture, Inc., a Rochester company that bought the University patent for King’s technique in November 2005 to build the cancer-killing and stem cell-harvesting devices.  The technique can be used in vivo, meaning a device is inserted in the body, or in vitro, in which case the device resides outside of the body – either way, the device kills cancer cells and captures stem cells, which grow into blood cells, bone, cartilage, and fat.

When King was working at the University of Pennsylvania from 1999 to 2001, one of his labmates discovered that bone marrow stem cells stick to adhesive proteins called selectins more strongly than other cells -- including blood cells -- stick to selectins.  When King came to the University of Rochester in early 2002, he started studying the adhesion of blood cells to the vascular wall, the inner lining of the blood vessels.  During inflammation, the vascular wall presents surface selectins that adhere specifically to white blood cells.  These selectins cause the white blood cells to roll slowly along the vascular wall, seeking signals that tell them to crawl out of the bloodstream.  This is how white blood cells migrate to bacterial infections and tissue injuries.  King set out to find a way to duplicate this natural process. 

First, he noted that the selectins form bonds with the white blood cells within fractions of a second, then immediately release the cells back into the bloodstream.  He also realized that selectin is the adhesive mechanism by which bone marrow stem cells leave the bloodstream and find their way back into bone marrow.  This is how bone marrow transplantation works.  Finally, he learned that when a cancer cell breaks free of a primary tumor and enters circulation, it flows through the bloodstream to a remote organ, then leaves the bloodstream and forms a secondary tumor.  This is how cancer spreads.  He put these facts together with one more, very important fact:  the selectins grab onto a specific carbohydrate on the surfaces of white blood cells, stem cells, and cancer cells.  Associate Professor King decided to capture stem and cancer cells before the selectins release them. 

Harvesting Stem Cells

Because bone marrow stem cells stick to selectin surfaces more strongly than other cells, King’s group coated a slender plastic tube with selectin.  They then did a series of lab experiments, both in vitro and in vivo using rats, with this selectin-coated tube to filter the bloodstream for stem cells.  It worked, and the King Lab discovered that they could attract a large number of cells to the wall of their selectin-coated device, and that 38% of these captured cells were stem cells.  King envisioned a system by which doctors could remove stem cells from the bloodstream by flowing the cells through a device, and make a more concentrated mixture containing, say, 20-40 percent stem cells.  These stem cells could then be used for tissue engineering or bone marrow transplantation. 

This is a non-controversial way of obtaining stem cells that can be differentiated into other, useful cells.

King’s team can capture significant amounts of cells of the lymphatic and circulatory systems, and potentially mesenchymal stem cells, which are unspecialized cells that form tissue, bone, and cartilage.  Current procedures enable the specific capture of hematopoietic stem cells, which grow (or differentiate) over time into all of the different blood cells, and the specific capture of stem cells that differentiate into bone marrow cells.  The device itself uses a combination of microfluidics, or fluid flow properties, and specialized selectin coatings. 

Killing Cancer Cells

Another exciting application of King’s invention is filtering the blood for cancer cells and triggering their death, an innovative method to prevent the spread of cancer.  When someone has a primary cancer tumor, a small number of cancer cells circulate through the bloodstream.  In a process called metastasis, these cells are transmitted from the primary tumor to other locations in the body, where they form secondary, cancerous growths. 

As a cancer cell flows along the implanted surface, King’s device captures it and delivers an apoptosis signal, a biochemical way of telling the cancer cell to kill itself.  Within two days, that cancer cell is dead.  Normal cells are left totally unharmed because the device selectively targets cancer cells. 

The apoptosis signal is delivered by a molecule called TRAIL that coats the cancer-killing device.  Cancer cells have five types of proteins that recognize and bind to TRAIL, but only two trigger cell death.  The other three are called decoy receptors.  Healthy cells contain a lot of decoy receptors, giving them a natural protection against TRAIL, whereas cancer cells mainly express the two receptors that signal cell death. 

During the death of the cancer cells, TRAIL is not depleted or used up in any way, and in fact, it stays active for many weeks or months.  The same TRAIL molecules can kill enormous numbers of cancer cells. 

A possible way to use the cancer-killing invention is to implant the device in the body before primary tumor surgery or chemotherapy.  When doctors remove a primary tumor, the procedure itself can release cancer cells into the bloodstream.  King’s device would grab those cancer cells and kill them, greatly reducing the possibility of metastasis.

Associate Professor King envisions that the device would use a shunt similar to the type used in hospitals today.  This shunt would reside on the exterior of the arm or be implanted beneath the skin.  Some of the blood flow would bypass the capillary bed and instead go into the shunt, which could remain implanted for many weeks, continually removing and killing cancer cells.  King’s first targets are colorectal cancer and blood malignancies such as leukemia.

Note: This story has been adapted from a news release issued by University of Rochester, School of Engineering and Applied Sciences.

(c) www.sciencedaily.com

Sea Snakes Conquered by Salt

by Elizabeth Pennisi

PHOENIX, ARIZONA--Shipwrecked sailors shouldn't drink ocean water no matter how thirsty they get. And neither should sea snakes. Contrary to current dogma, at least some of these serpentine mariners must have fresh water to survive. Without it, at least one group of sea snakes--and likely others--will gradually waste away, researchers reported here yesterday at the annual meeting of the Society for Integrative and Comparative Biology. The need for access to fresh water may limit where these snakes can live, explaining their patchy distribution along certain coastlines.

All organisms must work to keep dehydration in check. Kidneys concentrate urine to conserve water, and many marine animals have special adaptations for getting rid of the excess salt taken in from the surrounding environment. Sea snakes--dozens of species of which live in the open ocean, while a few others hang out inshore--have a gland under their tongues for this purpose. Researchers have long assumed that this gland worked so well that the snakes could get away with sipping salt water whenever they needed a drink.

But Harvey Lillywhite, an ecological physiologist at the University of Florida in Gainesville, began to suspect otherwise when he had trouble keeping file snakes, which live almost full time in the ocean, alive in his lab. He discovered the snakes did fine once he put them in fresh water and began to wonder whether the same was true of other marine snakes.

With the help of Ming Tu from the National Taiwan Normal University in Taipei, Lillywhite and his colleagues collected three species of sea kraits, snakes that live in the coastal waters of islands off Taiwan but, at the very least, come ashore to lay their eggs, usually in rocky caves close to the intertidal zone. Two of the species also visit land occasionally. All have the brine-secreting gland, suggesting they are well adapted to constant immersion in salt water.

For their experiments, the researchers first took the snakes out of water long enough to allow them to dehydrate. They then put the snakes in different concentrations of seawater. None of the dehydrated snakes tried to drink anything that was 50% or more salt water (they live in full-strength seawater). But they did gulp down fresh water and imbibed 25% saltwater concentrations, Lillywhite reported.

In a second study, Lillywhite's team tracked the weight of the snakes for 10 days. For the experiment, they kept the snakes in the seawater without food. The researchers placed half of the snakes in fresh water every other day for an hour. All the snakes experienced dehydration and lost weight, but the ones exposed to fresh water lost significantly less, says Lillywhite.

The results help explain the demographics of these Taiwanese snakes, Lillywhite says. They tend to be most plentiful along the shore where there are springs or other sources of fresh water nearby. Furthermore, there are more sea snake species in areas with higher mean annual rainfall, notes Lillywhite. Under calm conditions, thin layers of rainwater float on top of the salt water, apparently providing ample supplies for the snakes.

It's a "major finding," says Harold Heatwole, an ecologist at North Carolina State University in Raleigh. Physiologist Lisa Hazard from Montclair State University in Upper Montclair, New Jersey, agrees. "He shows pretty clearly that [sea snakes] have to have access to fresh water," she says.

(c) sciencenow.sciencemag.org

With Mild Winter, the City Revisits Fall Fashion and the Record Books

by Anthony Ramirez

Karen Flagg with her 3-year-old son, James, on the swings at the Bleecker Street Playground in Greenwich Village. New York City, basking in warm weather, hasn’t gone this long without snow since 1878.

The last recorded time the season’s first snowfall in Central Park came so late, the date was Jan. 4, 1878.

Rutherford B. Hayes was president, the tallest building in the city was Trinity Church (281 feet), and there was no Statue of Liberty. (It was erected in 1886.)

In a sense, there was no New York, either. The boroughs consolidated in 1898. Before then, the Bronx was called the Annexed District, Queens was farms, Staten Island was nearly empty, and Brooklyn was the nation’s third-largest city.

Yesterday, with parts of the nation shivering and the Rockies and the Midwest pummeled by another snowstorm, the record for the latest appearance of snow in New York City was broken with little fanfare.

For now, not even a flurry is in the immediate forecast. Indeed, today the temperature might reach 71 degrees, which would be another record. According to the National Weather Service, it will not even come close to freezing until Tuesday night, when the temperature could go down to 30 degrees.

For many people interviewed yesterday — a warm day of mist and gray skies — the city without snow was both a bewilderment and a delight.

There was scarcely a fedora, a knit cap or a hoodie to be seen. Therese Kahn, an interior decorator on the Upper East Side, was wearing what she described as “comfortable” Stuart Weitzman patent-leather boots, rather than Gore-Tex snow boots.

“It’s amazing that it’s so nice,” said Ms. Kahn, 50, who also had on a thin white parka, unzipped. “I have two teenage daughters and I’m always worried that they’re not dressed warmly enough, so this lifts the pressure.”

Jan Khan, 53, has been a doorman at 88 Central Park West for 21 years. “This is the first year I see no snow coming down,” he said. “I don’t like it. It’s not normal.”

Mr. Khan, originally from Mansehra, in northern Pakistan, said winter was invading usually warmer countries of Asia. On Thursday, more than 30 people were reported dead in Madhya Pradesh, in central India, and at least 20 in Bihar, in northeastern India, because of a severe cold snap.

“In Pakistan that is the problem now,” Mr. Khan said. “Two feet, three feet of snow. The Arctic is happening in my country, and India and Bangladesh and Nepal and China, all under snow.”

In East Harlem, at the Three Kings Day Parade, which commemorates the arrival of the Magi in Bethlehem, Carlos Canales, 36, from Glendale, Queens, worried about the weather.

“People aren’t really ready for the winter anymore,” he said. “We’re going to get caught off guard when winter finally hits us and a lot of people are going to get sick.”

Nearby, in Central Park, Patrick Denehan, 36, a furniture mover from Washington Heights, sipped coffee and watched geese waddle near an ice-free Harlem Meer.

“It feels,” Mr. Denehan said, “like the Twilight Zone.”

There is one positive aspect to the warm weather: the pothole situation. The city’s Department of Transportation said that work crews paved 17,357 potholes last month, about a quarter fewer than the 22,685 during the much snowier December of 2005.

In December 1877, when The New York Times took note of the snowless Christmas, the day was described as crisp and sunny.

The headline said, “A MILD CHRISTMAS DAY — THOUSANDS OF PERSONS IN THE CENTRAL PARK.”

The Times account read, “It is estimated, and the estimate is thought to be moderate, that fully 50,000 persons were in the Park during the afternoon, nearly all of whom visited the new Museum, opened by the President on Saturday.”

The Times noted, however, that the weather did hurt certain businesses. “Dry-goods houses, clothiers and coal dealers have been the heaviest sufferers,” the newspaper said. “They have seen their Winter’s supplies lie on their hands almost undiminished.”

When snow finally fell for the first time that winter in Central Park on Jan. 4, 1878, The Times did not report it. The newspaper did say that Poughkeepsie had four inches of snow.

The National Weather Service was cautious yesterday about how snowless is snowless. Jeffrey Tongue, science and operations officer at the service’s Upton office on Long Island, said the Jan. 4, 1878, date is based on the best available records.

“When we’re talking about a snow flurry that might last 10 minutes,” Mr. Tongue said, “there’s a question whether those were fully documented. We believe the 1878 date is accurate, but of course there’s nobody alive to actually ask about it.”

Stephen Fybish, an amateur weather historian, contends that the record for late snow in Central Park occurred far later than 1878, indeed nearly a century later, on Jan. 29, 1973.

“This is also true,” said Mr. Tongue of the weather service. “The 1878 date is for a trace of snow, which doesn’t stick to the ground, and the 1973 date is for measurable snow, which was 1.8 inches.”

So, whether the start date for the snowless record should be Jan. 5 or Jan. 30 is a matter of keen scholarly interest.

But, please, the weather service urges, no wagering.

(c) www.nytimes.com

Chemistry Of Volcanic Fallout Reveals Secrets Of Past Eruptions

Science Daily — A team of American and French scientists has developed a method to determine the influence of past volcanic eruptions on climate and the chemistry of the upper atmosphere, and significantly reduce uncertainty in models of future climate change.


Joel Savarino collecting snow samples at Dome C. (Credit: Joel Savarino, CNRS)

In the January 5 issue of the journal Science, the researchers from the University of California, San Diego, the National Center for Scientific Research (CNRS) and the University of Grenoble in France report that the chemical fingerprint of fallout from past eruptions reveals how high the volcanic material reached, and what chemical reactions occurred while it was in the atmosphere. The work is particularly relevant because the effect of atmospheric particles, or aerosols, is a large uncertainty in models of climate, according to Mark Thiemens, Dean of UCSD’s Division of Physical Sciences and professor of chemistry and biochemistry.

“In predictions about global warming, the greatest amount of error is associated with atmospheric aerosols,” explained Thiemens, in whose laboratory the method, which is based on the measurement of isotopes—or forms of sulfur—was developed. “Now for the first time, we can account for all of the chemistry involving sulfates, which removes uncertainties in how these particles are made and transported. That’s a big deal with climate change.”

Determining the height of a past volcanic eruption provides important information about its impact on climate. If volcanic material only reaches the lower atmosphere, the effects are relatively local and short term because the material is washed out by rain. Eruptions that reach higher, up to the stratosphere, have a greater influence on climate.

“In the stratosphere, sulfur dioxide that was originally in the magma gets oxidized and forms droplets of sulfuric acid,” said Joël Savarino, a researcher at the CNRS and the University of Grenoble, who led the study. “This layer of acid can stay for years in the stratosphere because no liquid water is present in this part of the atmosphere. The layer thus acts as a blanket, reflecting the sunlight and therefore reducing the temperature at ground level, significantly and for many years.”

To distinguish eruptions that made it to the stratosphere from those that did not, the researchers examined the isotopes of sulfur in fallout preserved in the ice in Antarctica. The volcanic material is carried there by air currents. Thiemens, Savarino and two of their students traveled to Antarctica and recovered the samples by digging snow pits near the South Pole and Dome C, the new French/Italian inland station.

Sulfur that rises as high as the stratosphere, above the ozone layer, is exposed to short wavelength ultraviolet light. UV exposure creates a unique ratio of sulfur isotopes. Therefore the sulfur isotope signature in fallout reveals whether or not an eruption was stratospheric.
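
The article does not give the notation, but in the sulfur-isotope literature this UV fingerprint is usually reported as a mass-independent anomaly, Δ33S: the excess of 33S over what ordinary mass-dependent chemistry would predict from 34S. A common definition (our assumption about the convention, not stated in the article) is:

```latex
\Delta^{33}\mathrm{S} \;=\; \delta^{33}\mathrm{S} \;-\; 1000\left[\left(1 + \frac{\delta^{34}\mathrm{S}}{1000}\right)^{0.515} - 1\right]
```

Mass-dependent processes give Δ33S ≈ 0, so a nonzero Δ33S in the fallout marks sulfur that was processed by short-wavelength UV above the ozone layer.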

To develop the method, the team, which also included Mélanie Baroni, the first author on the paper who is a postdoctoral fellow working with Savarino, and Robert Delmas, a research director at the CNRS, focused on two volcanic eruptions. Both eruptions, the 1963 eruption of Mount Agung in Bali and the 1991 eruption of Mount Pinatubo in the Philippines, were stratospheric according to the isotope measurements.

“Young volcanoes have the advantage of having been documented by modern instruments, such as satellites or aircraft,” said Savarino, who began his investigations into sulfur isotope measurements when he was a postdoctoral fellow working with Thiemens. “We could therefore compare our measurements on volcanic fallout stored in snow with atmospheric observations.”

Not only did their isotope measurements match the atmospheric observations, they were also able to distinguish the Pinatubo eruption from the eruption of Cerro Hudson that occurred the same year. Cerro Hudson did not send material as high as the stratosphere and the fallout had a different sulfur isotope fingerprint than the fallout from Pinatubo.

Volcanic material from more ancient eruptions is preserved in Antarctica, but the older, deeper seasonal layers of ice are extremely thin as a result of the pressure from the overlying ice. Therefore, it is not currently feasible to extract enough fallout from the ice to apply the isotope method to all past volcanoes. However, data from eruptions in the recent past reveal what chemical reactions of sulfates occur in the upper atmosphere.

Some scientists have proposed that if global warming becomes severe, sulfates could be injected into the stratosphere in order to block some of the incoming solar radiation and reduce the temperature. Thiemens explained that understanding the chemical reactions of sulfates in the stratosphere is critical to determining if this approach would be effective.

“Sulfates can cause warming or cooling depending on how they are made,” he said. “They are usually white particles, which tend to reflect sunlight, but if they are made on dark particles like soot, they can absorb heat and worsen warming.”

The study was funded by the French Polar Institute (IPEV) and the National Science Foundation Office of Polar Programs.

Note: This story has been adapted from a news release issued by University of California - San Diego.

(c) www.sciencedaily.com

Wednesday, January 3, 2007

Scientists discover new class of polymers

Since the late 1990s, Lauterbach and Snively have been developing a method to make extremely thin polymer layers on surfaces. The film covering the surface of these metal samples is at least 1,000 times thinner than a human hair. Photo by Kathy F. Atkinson

For years, polymer chemistry textbooks have stated that a whole class of little molecules called 1,2-disubstituted ethylenes could not be transformed into polymers--the stuff of which plastics and other materials are made.


This photo shows an ultra-thin polymer film of fumaronitrile, which formerly was believed to be 'unpolymerizable,' on a tantalum foil. The film is circular due to the shape of the window where ultraviolet light is emitted into the vacuum chamber during the film deposition process. Photo courtesy of Lauterbach Laboratory, University of Delaware

However, the UD scientists were determined to prove the textbooks wrong. As a result of their persistence, the researchers have discovered a new class of ultra-thin polymer films with potential applications ranging from coating tiny microelectronic devices to plastic solar cells.
The discovery was reported as a "communication to the editor" in the Nov. 28 edition of Macromolecules, a scientific journal published by the American Chemical Society.
The research, which also involved doctoral student Seth Washburn, focused on ethylenes formerly believed to be nonpolymerizable. Among them are several compounds derived from natural sources, such as cinnamon, that are FDA-approved for use in fragrances and foods. One of the compounds is found in milkshakes, according to the scientists.
"There's been a rule that these molecules wouldn't polymerize," Snively, who is a research associate in Lauterbach's laboratory group, noted. "When I first saw that in a textbook when I was in graduate school, I said to myself, 'Don't tell me I can't do this.'"
And thus, the quest to disprove a widely accepted scientific rule of thumb began.
Polymerization is a chemical reaction in which monomers--small molecules that become the repeating structural units--join together to form a long, chain-like molecule: a polymer. Each polymer typically consists of 1,000 or more of these monomer "building blocks."
There are lots of natural polymers in the world, ranging from the DNA in our bodies to chewing gum. Plastics, of course, are one of the most common groups of manmade polymers. These synthetic materials first came on the scene in the mid-1800s and are found today in a wide range of applications, including foam drinking cups, carpet fibers, epoxy and PVC pipes.
Since the late 1990s, Lauterbach and Snively have been developing a method to make extremely thin polymer layers on surfaces. These nanofilms--at least 1,000 times thinner than a human hair--are becoming increasingly important as coatings for optics, solar cells, electrical insulators, advanced sensors and numerous other applications.
Traditionally, to make a pound of polymer, scientists would combine a monomer with a solvent and subject the mixture to heat or light. Recently, Lauterbach and Snively developed a new polymer-making technique that eliminates the need for a solvent.
Their deposition-polymerization (DP) process takes place in a vacuum chamber, where the air is pumped out and the pressure is similar to outer space. The material to be coated, such as a piece of metal, is placed in the chamber, and the metal is cooled below the monomer's freezing point, which causes the monomer vapor to condense on the metal. Then the resulting film is exposed to ultraviolet light to initiate polymerization. The two-step process allows for the formation of uniform, defect-free films with thicknesses that can be controlled to within billionths of a meter.
The process is fairly "green": not only are no solvents used, but the method also consumes very little energy, according to Lauterbach.
"You also can do photolithography with it," he said, meaning that the polymer will appear only where the light hits the monomer film.
While their polymerization technique was reported a few years ago, the class of materials the UD scientists have applied it to lately is new and unique.
"We can make nanometer-thick films, but we can't make a gram of the material yet," Snively noted. "We're working on ways to scale up the process."
The scientists also want to find out if the materials may be stronger, tougher or possess unique properties compared to other polymers.
"It's exciting because you don't really know what all their properties are yet," Snively said.
As for all the potential applications, Lauterbach said, "we're kind of in the discovery phase, looking to see where all these materials could be used."
The scientists say their collaboration has been so productive not just because their personalities mesh but because knowledge in each of their respective disciplines is essential to solving the scientific questions they seek to answer.
"We get more done together than either of us could alone," Lauterbach says.
Lauterbach has a doctorate in physical chemistry from the Free University of Berlin and the Fritz-Haber Institute of the Max Planck Society, and a bachelor's degree in physics from the University of Bayreuth, Germany.
Snively, a research assistant professor in the materials science and engineering department, has a doctorate in macromolecular science from Case Western Reserve University.
The two scientists came to UD in 2002 from Purdue University, where Lauterbach won the National Science Foundation's prestigious Faculty Early Career Development Award, as well as Union Carbide's Innovation Recognition Award.
Soon, the UD researchers may be applying the new class of polymers they've discovered to plastic solar cells through collaborative research in UD's new Sustainable Energy from Solar Hydrogen program.
This effort, which focuses on transforming UD graduate students into "energy experts" through interdisciplinary, problem-based research, is supported by a five-year, $3.1 million grant from the National Science Foundation's Integrative Graduate Education and Research Training (IGERT) program.
UD's program, which began this fall, includes students and faculty from electrical and computer engineering, mechanical engineering, chemical engineering, materials science, chemistry, physics, economics and policy.
"We're looking forward to participating with our students in that program," Lauterbach said.
And until the current polymer textbooks are revised, Lauterbach and Snively also will take great relish when they ask their students to make note of a change in the margins.
"Right now, the excitement for us is that we've proven textbooks wrong," Lauterbach said.

(c) www.physorg.com

Cheaper LEDs from breakthrough in ZnO nanowire research

P-type ZnO Nanowires

SEM image of p-type ZnO nanowires created by electrical engineering professor Deli Wang at UC San Diego. Note: the blue color was added in Photoshop. Credit: Deli Wang/UCSD

Engineers at UC San Diego have synthesized a long-sought semiconducting material that may pave the way for an inexpensive new kind of light emitting diode (LED) that could compete with today's widely used gallium nitride LEDs, according to a new paper in the journal Nano Letters.

To build an LED, you need both positively and negatively charged semiconducting materials; the engineers synthesized zinc oxide (ZnO) nanoscale cylinders that transport positive charges, or "holes" – so-called "p-type ZnO nanowires." These wires are endowed with a supply of positive charge-carrying holes, which for years had been the missing ingredient preventing engineers from building LEDs from ZnO nanowires. In contrast, making "n-type" ZnO nanowires that carry negative charges (electrons) has not been a problem. In an LED, when an electron meets a hole, it falls into a lower energy level and releases energy in the form of a photon of light.
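The wavelength of that photon is set by the band gap of the material. For ZnO the gap is about 3.3 eV (a textbook value, not quoted in this article), which puts the band-edge emission in the near-ultraviolet:

```latex
\lambda \;=\; \frac{hc}{E_g} \;\approx\; \frac{1240\ \mathrm{eV\cdot nm}}{3.3\ \mathrm{eV}} \;\approx\; 375\ \mathrm{nm}
```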
Deli Wang, an electrical and computer engineering professor from UCSD's Jacobs School of Engineering, and colleagues at UCSD and Peking University, report synthesis of high quality p-type zinc oxide nanowires in a paper published online by the journal Nano Letters.
"Zinc oxide nanostructures are incredibly well studied because they are so easy to make. Now that we have p-type zinc oxide nanowires, the opportunities for LEDs and beyond are endless," said Wang.
Wang has filed a provisional patent for p-type ZnO nanowires and his lab at UCSD is currently working on a variety of nanoscale applications.
"Zinc oxide is a very good light emitter. Electrically driven zinc oxide single nanowire lasers could serve as high efficiency nanoscale light sources for optical data storage, imaging, and biological and chemical sensing," said Wang.
To make the p-type ZnO nanowires, the engineers doped ZnO crystals with phosphorus using a simple chemical vapor deposition technique that is less expensive than the metal organic chemical vapor deposition (MOCVD) technique often used to synthesize the building blocks of gallium nitride LEDs. Adding phosphorus atoms to the ZnO crystal structure leads to p-type semiconducting materials through the formation of a defect complex that increases the number of holes relative to the number of free electrons.

"Zinc oxide is wide band gap semiconductor and generating p-type doping impurities that provide free holes is very difficult – particularly in nanowires. Bin Xiang in my group worked day and night for more than a year to accomplish this goal," said Wang.
The starting materials and manufacturing costs for ZnO LEDs are far less expensive than those for gallium nitride LEDs. In the future, Wang expects to cut costs even further by making p-type and n-type ZnO nanowires from solution.
For years, researchers have been making electron-abundant n-type ZnO nanowire crystals from zinc and oxygen. Missing oxygen atoms within the regular ZnO crystal structure create relative overabundances of zinc atoms and give the semiconductors their n-type, conductive properties. The lack of accompanying p-type ZnO nanowires, however, has prevented development of a wide range of ZnO nanodevices.
While high quality p-type ZnO nanowires have not previously been reported, groups have demonstrated p-type conduction in ZnO thin films and made ZnO thin film LEDs. Using ZnO nanowires rather than thin films to make LEDs would be less expensive and could lead to more efficient LEDs, Wang explained.
Having both n- and p-type ZnO nanowires – complementary nanowires – could also be useful in a variety of applications including transistors, spintronics, UV detectors, nanogenerators, and microscopy. In spintronics applications, researchers could use p-type ZnO nanowires to make dilute magnetic semiconductors by doping ZnO with magnetic atoms, such as manganese and cobalt, Wang explained.
Transistors that rely on the semiconducting properties of ZnO are also now on the horizon. "P-type doping in nanowires would make complementary ZnO nanowire transistors possible," said Wang.

(c) www.physorg.com

Lost lakes of Titan are found at last

Titan Has Liquid Lakes, Scientists Report

This colorized radar view from Cassini shows lakes on Titan. Color intensity is proportional to how much radar brightness is returned. The colors are not a representation of what the human eye would see. Image credit: NASA/JPL/USGS

Lakes of methane have been spotted on Saturn's largest moon, Titan, boosting the theory that this strange, distant world bears beguiling similarities to Earth, according to a new study.

Titan has long intrigued space scientists, as it is the only moon in the Solar System to have a dense atmosphere -- and its atmosphere, like Earth's, mainly comprises nitrogen.

Titan's atmosphere is also rich in methane, although the source for this vast store of hydrocarbons is unclear.

Methane, on the geological scale, has a relatively limited life. A molecule of the compound lasts several tens of millions of years before it is broken up by sunlight.

Given that Titan is billions of years old, the question is how this atmospheric methane gets to be renewed. Without replenishment, it should have disappeared long ago.

A popular hypothesis is that it comes from a vast ocean of hydrocarbons.

But when the US spacecraft Cassini sent down a European lander, Huygens, to Titan in 2005, the images sent back were of a rugged landscape veiled in an orange haze.

There were indeed signs of methane flows and methane precipitation, but nothing at all that pointed to any sea of the stuff.

But a Cassini flyby on July 22 last year revealed, in a radar scan, 75 large, smooth, dark patches between three and 70 kilometers (two and 42 miles) across that appear to be lakes of liquid methane, scientists reported on Thursday.

They believe the lakes prove that Titan has a "methane cycle" -- a system that is like the water cycle on Earth, in which the liquid evaporates, cools and condenses and then falls as rain, replenishing the surface liquid.

As on Earth, Titan's surface methane may well be supplemented by a "table" of liquid methane that seeps through the rock, the paper suggests.
Some of the methane lakes seem only partly filled, and other depressions are dry, which suggests that, given the high northerly latitudes where they were spotted, the methane cycle follows Titan's seasons.
In winter, the lakes expand, while in summer, they shrink or dry up completely -- again, another parallel with Earth's hydrological cycle.
The study, which appears on Thursday in the British weekly journal Nature, is headed by Ellen Stofan of Proxemy Research in Virginia and University College London.
Titan and Earth are of course very different, especially in their potential for nurturing life. Titan is frigid, dark and, as far as is known, waterless, whereas Earth is warm, light and has lots of liquid water.
But French astrophysicist Christophe Sotin says both our planet and Titan have been sculpted by processes that, fundamentally, are quite similar.
The findings "add to the weight of evidence that Titan is a complex world in which the interaction between the inner and outer layers is controlled by processes similar to those that must have dominated the evolution of any Earth-like planet," Sotin said in a commentary.
"Indeed, as far as we know," Sotin added, "there is only one planetary body that displays more dynamism than Titan. Its name is Earth."

(c) www.physorg.com

Tuesday, January 2, 2007

A Robot in Every Home

by Bill Gates

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialized devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when--or even if--this industry will achieve critical mass. If it does, though, it may well change the world.

A Robot in Every Home
AMERICAN ROBOTIC: Although a few of the domestic robots of tomorrow may resemble the anthropomorphic machines of science fiction, a greater number are likely to be mobile peripheral devices that perform specific household tasks.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for.

But what I really have in mind is something much more contemporary: the emergence of the robotics industry, which is developing in much the same way that the computer business did 30 years ago. Think of the manufacturing robots currently used on automobile assembly lines as the equivalent of yesterday's mainframes. The industry's niche products include robotic arms that perform surgery, surveillance robots deployed in Iraq and Afghanistan that dispose of roadside bombs, and domestic robots that vacuum the floor. Electronics companies have made robotic toys that can imitate people or dogs or dinosaurs, and hobbyists are anxious to get their hands on the latest version of the Lego robotics system.

Meanwhile some of the world's best minds are trying to solve the toughest problems of robotics, such as visual recognition, navigation and machine learning. And they are succeeding. At the 2004 Defense Advanced Research Projects Agency (DARPA) Grand Challenge, a competition to produce the first robotic vehicle capable of navigating autonomously over a rugged 142-mile course through the Mojave Desert, the top competitor managed to travel just 7.4 miles before breaking down. In 2005, though, five vehicles covered the complete distance, and the race's winner did it at an average speed of 19.1 miles an hour. (In another intriguing parallel between the robotics and computer industries, DARPA also funded the work that led to the creation of Arpanet, the precursor to the Internet.)

What is more, the challenges facing the robotics industry are similar to those we tackled in computing three decades ago. Robotics companies have no standard operating software that could allow popular application programs to run in a variety of devices. The standardization of robotic processors and other hardware is limited, and very little of the programming code used in one machine can be applied to another. Whenever somebody wants to build a new robot, they usually have to start from square one.

Despite these difficulties, when I talk to people involved in robotics--from university researchers to entrepreneurs, hobbyists and high school students--the level of excitement and expectation reminds me so much of that time when Paul Allen and I looked at the convergence of new technologies and dreamed of the day when a computer would be on every desk and in every home. And as I look at the trends that are now starting to converge, I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that enable computers to perform tasks in the physical world on our behalf. We may be on the verge of a new era, when the PC will get up off the desktop and allow us to see, hear, touch and manipulate objects in places where we are not physically present.

From Science Fiction to Reality
The word "robot" was popularized in 1921 by Czech playwright Karel Capek, but people have envisioned creating robotlike devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made from gold. In the first century A.D., Heron of Alexandria--the great engineer credited with inventing the first steam engine--designed intriguing automatons, including one said to have the ability to talk. Leonardo da Vinci's 1495 sketch of a mechanical knight, which could sit up and move its arms and legs, is considered to be the first plan for a humanoid robot.

Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov's I, Robot, movies such as Star Wars and television shows such as Star Trek. The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions. Nevertheless, although robots play a vital role in industries such as automobile manufacturing--where there is about one robot for every 10 workers--the fact is that we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately. It has proved extremely difficult to give robots the capabilities that humans take for granted--for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.

But researchers are starting to find the answers. One trend that has helped them is the increasing availability of tremendous amounts of computer power. One megahertz of processing power, which cost more than $7,000 in 1970, can now be purchased for just pennies. The price of a megabit of storage has seen a similar decline. The access to cheap computing power has permitted scientists to work on many of the hard problems that are fundamental to making robots practical. Today, for example, voice-recognition programs can identify words quite well, but a far greater challenge will be building machines that can understand what those words mean in context. As computing capacity continues to expand, robot designers will have the processing power they need to tackle issues of ever greater complexity.

A Robot in Every Home
COMPUTER TEST-DRIVE of a mobile device in a three-dimensional virtual environment helps robot builders analyze and adjust the capabilities of their designs before trying them out in the real world. Part of the Microsoft Robotics Studio software development kit, this tool simulates the effects of forces such as gravity and friction.

Another barrier to the development of robots has been the high cost of hardware, such as sensors that enable a robot to determine the distance to an object as well as motors and servos that allow the robot to manipulate an object with both strength and delicacy. But prices are dropping fast. Laser range finders that are used in robotics to measure distance with precision cost about $10,000 a few years ago; today they can be purchased for about $2,000. And new, more accurate sensors based on ultrawideband radar are available for even less.

Now robot builders can also add Global Positioning System chips, video cameras, array microphones (which are better than conventional microphones at distinguishing a voice from background noise) and a host of additional sensors for a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today's robots to do things such as vacuum a room or help to defuse a roadside bomb--tasks that would have been impossible for commercially produced machines just a few years ago.

A BASIC Approach
In February 2004 I visited a number of leading universities, including Carnegie Mellon University, the Massachusetts Institute of Technology, Harvard University, Cornell University and the University of Illinois, to talk about the powerful role that computers can play in solving some of society's most pressing problems. My goal was to help students understand how exciting and important computer science can be, and I hoped to encourage a few of them to think about careers in technology. At each university, after delivering my speech, I had the opportunity to get a firsthand look at some of the most interesting research projects in the school's computer science department. Almost without exception, I was shown at least one project that involved robotics.

At that time, my colleagues at Microsoft were also hearing from people in academia and at commercial robotics firms who wondered if our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to take a closer look. I asked Tandy Trower, a member of my strategic staff and a 25-year Microsoft veteran, to go on an extended fact-finding mission and to speak with people across the robotics community. What he found was universal enthusiasm for the potential of robotics, along with an industry-wide desire for tools that would make development easier. "Many see the robotics industry at a technological turning point where a move to PC architecture makes more and more sense," Tandy wrote in his report to me after his fact-finding mission. "As Red Whittaker, leader of [Carnegie Mellon's] entry in the DARPA Grand Challenge, recently indicated, the hardware capability is mostly there; now the issue is getting the software right."

Back in the early days of the personal computer, we realized that we needed an ingredient that would allow all of the pioneering work to achieve critical mass, to coalesce into a real industry capable of producing truly useful products on a commercial scale. What was needed, it turned out, was Microsoft BASIC. When we created this programming language in the 1970s, we provided the common foundation that enabled programs developed for one set of hardware to run on another. BASIC also made computer programming much easier, which brought more and more people into the industry. Although a great many individuals made essential contributions to the development of the personal computer, Microsoft BASIC was one of the key catalysts for the software and hardware innovations that made the PC revolution possible.

After reading Tandy's report, it seemed clear to me that before the robotics industry could make the same kind of quantum leap that the PC industry made 30 years ago, it, too, needed to find that missing ingredient. So I asked him to assemble a small team that would work with people in the robotics field to create a set of programming tools that would provide the essential plumbing so that anybody interested in robots with even the most basic understanding of computer programming could easily write robotic applications that would work with different kinds of hardware. The goal was to see if it was possible to provide the same kind of common, low-level foundation for integrating hardware and software into robot designs that Microsoft BASIC provided for computer programmers.

A Robot in Every Home
BIRTH OF AN INDUSTRY: iRobot, a company based in Burlington, Mass., manufactures the Packbot EOD, which assists with bomb disposal in Iraq.


Tandy's robotics group has been able to draw on a number of advanced technologies developed by a team working under the direction of Craig Mundie, Microsoft's chief research and strategy officer. One such technology will help solve one of the most difficult problems facing robot designers: how to simultaneously handle all the data coming in from multiple sensors and send the appropriate commands to the robot's motors, a challenge known as concurrency. A conventional approach is to write a traditional, single-threaded program--a long loop that first reads all the data from the sensors, then processes this input and finally delivers output that determines the robot's behavior, before starting the loop all over again. The shortcomings are obvious: if your robot has fresh sensor data indicating that the machine is at the edge of a precipice, but the program is still at the bottom of the loop calculating trajectory and telling the wheels to turn faster based on previous sensor input, there is a good chance the robot will fall down the stairs before it can process the new information.
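
The hazard is easier to see in a toy sketch. The snippet below is ordinary Python with hypothetical stand-in functions, not any real robot API: the sensor is read only once per trip around the loop, so a ledge that appears while the planner is busy goes unnoticed until the next pass.

import time

def read_cliff_sensor(elapsed: float) -> bool:
    return elapsed > 0.7                  # pretend a ledge appears 0.7 s in

def plan_trajectory() -> str:
    time.sleep(0.5)                       # heavy computation; nothing else runs meanwhile
    return "speed up"

start = time.monotonic()
for _ in range(5):
    at_ledge = read_cliff_sensor(time.monotonic() - start)   # fresh data read here...
    command = "stop" if at_ledge else plan_trajectory()      # ...is stale by the time
    print(f"{time.monotonic() - start:4.1f}s  {command}")    # the command is issued
# The ledge appears at 0.7 s, but the loop keeps commanding "speed up" until ~1.0 s.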

Concurrency is a challenge that extends beyond robotics. Today as more and more applications are written for distributed networks of computers, programmers have struggled to figure out how to efficiently orchestrate code running on many different servers at the same time. And as computers with a single processor are replaced by machines with multiple processors and "multicore" processors--integrated circuits with two or more processors joined together for enhanced performance--software designers will need a new way to program desktop applications and operating systems. To fully exploit the power of processors working in parallel, the new software must deal with the problem of concurrency.

One approach to handling concurrency is to write multithreaded programs that allow data to travel along many paths. But as any developer who has written multithreaded code can tell you, this is one of the hardest tasks in programming. The answer that Craig's team has devised to the concurrency problem is something called the concurrency and coordination runtime (CCR). The CCR is a library of functions--sequences of software code that perform specific tasks--that makes it easy to write multithreaded applications that can coordinate a number of simultaneous activities. Designed to help programmers take advantage of the power of multicore and multiprocessor systems, the CCR turns out to be ideal for robotics as well. By drawing on this library to write their programs, robot designers can dramatically reduce the chances that one of their creations will run into a wall because its software is too busy sending output to its wheels to read input from its sensors.
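
The article does not show the CCR's own API, but the underlying idea, letting the sensor watcher and the planner run at the same time so an emergency reading can cut in immediately, can be sketched with plain Python threads (again with made-up function names):

import threading, time

stop = threading.Event()                  # shared flag between the two activities

def watch_cliff_sensor():
    time.sleep(0.7)                       # pretend the ledge shows up 0.7 s in
    stop.set()                            # raise the flag the instant it is seen

def plan_and_drive():
    for _ in range(20):                   # long-running planning/driving work
        if stop.is_set():                 # checked between every small slice of work
            print("halt: ledge detected")
            return
        time.sleep(0.1)
    print("route completed")

threading.Thread(target=watch_cliff_sensor).start()
plan_and_drive()                          # halts within ~0.1 s of the sensor firing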

In addition to tackling the problem of concurrency, the work that Craig's team has done will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS enables developers to create applications in which the services--the parts of the program that read a sensor, say, or control a motor--operate as separate processes that can be orchestrated in much the same way that text, images and information from several servers are aggregated on a Web page. Because DSS allows software components to run in isolation from one another, if an individual component of a robot fails, it can be shut down and restarted--or even replaced--without having to reboot the machine. Combined with broadband wireless technology, this architecture makes it easy to monitor and adjust a robot from a remote location using a Web browser.
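
The article does not spell out DSS's interfaces, but the isolation it describes can be sketched with ordinary operating-system processes: each service lives in its own process, and a crashed one is restarted without disturbing the others. The service names below are made up for illustration.

import multiprocessing as mp
import time

def motor_service():
    while True:
        time.sleep(0.1)                   # keeps driving, whatever happens elsewhere

def camera_service():
    time.sleep(0.3)
    raise SystemExit("camera service failed")   # simulate one component crashing

if __name__ == "__main__":
    motors = mp.Process(target=motor_service, daemon=True)
    camera = mp.Process(target=camera_service, daemon=True)
    motors.start()
    camera.start()

    camera.join()                                 # notice the failure...
    camera = mp.Process(target=camera_service, daemon=True)
    camera.start()                                # ...and restart only that service
    print("motor service still running:", motors.is_alive())   # True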

What is more, a DSS application controlling a robotic device does not have to reside entirely on the robot itself but can be distributed across more than one computer. As a result, the robot can be a relatively inexpensive device that delegates complex processing tasks to the high-performance hardware found on today's home PCs. I believe this advance will pave the way for an entirely new class of robots that are essentially mobile, wireless peripheral devices that tap into the power of desktop PCs to handle processing-intensive tasks such as visual recognition and navigation. And because these devices can be networked together, we can expect to see the emergence of groups of robots that can work in concert to achieve goals such as mapping the seafloor or planting crops.
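
That "remote brain" arrangement can also be sketched in a few lines: the robot ships a sensor reading over the network to a desktop PC, which does the heavy thinking and returns a command. The protocol and message format here are invented purely for the example.

import socket, threading

def pc_brain(server: socket.socket):
    conn, _ = server.accept()
    with conn:
        reading = conn.recv(64).decode()              # e.g. "obstacle:left"
        command = "turn right" if "left" in reading else "go straight"
        conn.sendall(command.encode())

server = socket.create_server(("127.0.0.1", 0))       # the desktop PC's side
port = server.getsockname()[1]
threading.Thread(target=pc_brain, args=(server,)).start()

with socket.create_connection(("127.0.0.1", port)) as robot:   # the robot's side
    robot.sendall(b"obstacle:left")
    print("robot executes:", robot.recv(64).decode())          # -> turn right
server.close()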

These technologies are a key part of Microsoft Robotics Studio, a new software development kit built by Tandy's team. Microsoft Robotics Studio also includes tools that make it easier to create robotic applications using a wide range of programming languages. One example is a simulation tool that lets robot builders test their applications in a three-dimensional virtual environment before trying them out in the real world. Our goal for this release is to create an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs.

Should We Call Them Robots?
How soon will robots become part of our day-to-day lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by 2008. In South Korea the Ministry of Information and Communication hopes to put a robot in every home there by 2013. The Japanese Robot Association predicts that by 2025, the personal robot industry will be worth more than $50 billion a year worldwide, compared with about $5 billion today.

As with the PC industry in the 1970s, it is impossible to predict exactly what applications will drive this new industry. It seems quite likely, however, that robots will play an important role in providing physical assistance and even companionship for the elderly. Robotic devices will probably help people with disabilities get around and extend the strength and endurance of soldiers, construction workers and medical professionals. Robots will maintain dangerous industrial machines, handle hazardous materials and monitor remote oil pipelines. They will enable health care workers to diagnose and treat patients who may be thousands of miles away, and they will be a central feature of security systems and search-and-rescue operations.

Although a few of the robots of tomorrow may resemble the anthropomorphic devices seen in Star Wars, most will look nothing like the humanoid C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly difficult to say exactly what a robot is. Because the new machines will be so specialized and ubiquitous--and look so little like the two-legged automatons of science fiction--we probably will not even call them robots. But as these devices become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and entertain ourselves as the PC has had over the past 30 years.

---

BILL GATES is co-founder and chairman of Microsoft, the world's largest software company. While attending Harvard University in the 1970s, Gates developed a version of the programming language BASIC for the first microcomputer, the MITS Altair. In his junior year, Gates left Harvard to devote his energies to Microsoft, the company he had begun in 1975 with his childhood friend Paul Allen. In 2000 Gates and his wife, Melinda, established the Bill & Melinda Gates Foundation, which focuses on improving health, reducing poverty and increasing access to technology around the world.

(c) www.sciam.com

Monday, January 1, 2007

India to test space capsule as part of moon mission

India to test space capsule as part of moon mission

India plans to launch a capsule into orbit early next year and bring it back to Earth, an initial step towards an unmanned mission to the moon by 2010, a news report said Sunday.

The Indian Space Research Organisation (ISRO), which hopes to send an unmanned probe to the moon within the next three years, said it needs to test its re-entry and recovery technology, the Indian Express reported.

The agency will launch a 50-kilogram (110 pound) capsule and then have it re-enter and splash into the Bay of Bengal after 15 to 30 days of orbit around the Earth, the newspaper said.

The announcement is the latest by India's space agency to show an expansion in policy from projects meant to aid national development to a growing interest in space exploration, the report said.

The space agency also said last month that it planned to send an unmanned mission to Mars by 2013 to look for evidence of life.

The six-to-eight-month mission would cost three billion rupees (67 million dollars), the Hindustan Times reported.

© 2006 AFP

(c) www.physorg.com

Molecular Anatomy Of Influenza Virus Detailed

Science Daily — Scientists at the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), part of the National Institutes of Health in Bethesda, Md., and colleagues at the University of Virginia in Charlottesville have succeeded in imaging, in unprecedented detail, the virus that causes influenza.

Molecular Anatomy Of Influenza Virus Detailed
The three-dimensional structure of influenza virus from electron tomography. The viruses are about 120 nanometers -- about one ten thousandth of a millimeter -- in diameter.

A team of researchers led by NIAMS' Alasdair Steven, Ph.D., working with a version of the seasonal H3N2 strain of influenza A virus, has been able to distinguish five different kinds of influenza virus particles in the same isolate (sample) and map the distribution of molecules in each of them. This breakthrough has the potential to identify particular features of highly virulent strains, and to provide insight into how antibodies inactivate the virus, and how viruses recognize susceptible cells and enter them in the act of infection.

“Being able to visualize influenza virus particles should boost our efforts to prepare for a possible pandemic flu attack,” says NIAMS Director Stephen I. Katz, M.D., Ph.D. “This work will allow us to ‘know our enemy’ much better.”

One of the difficulties that has hampered structural studies of influenza virus is that no two virus particles are the same. In this fundamental respect, it differs from other viruses; poliovirus, for example, has a coat that is identical in each virus particle, allowing it to be studied by crystallography.

The research team used electron tomography (ET) to make its discovery. ET is a novel, three-dimensional imaging method based on the same principle as the well-known clinical imaging technique called computerized axial tomography, but it is performed in an electron microscope on a microminiaturized scale.

The mission of the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), a part of the Department of Health and Human Services' National Institutes of Health, is to support research into the causes, treatment, and prevention of arthritis and musculoskeletal and skin diseases; the training of basic and clinical scientists to carry out this research; and the dissemination of information on research progress in these diseases. For more information about NIAMS, call the Information Clearinghouse at (301) 495-4484 or (877) 22-NIAMS (free call) or visit the NIAMS Web site at http://www.niams.nih.gov.

The National Institutes of Health (NIH) — The Nation's Medical Research Agency — includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. It is the primary federal agency for conducting and supporting basic, clinical and translational medical research, and it investigates the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

Reference: Harris A, et al . Influenza virus pleiomorphy characterized by cryoelectron tomography. PNAS 2006;103(50):19123-19127.

(c) www.sciencedaily.com

Gene-engineered cattle resist mad cow disease: study

Gene-engineered cattle resist mad cow disease: study

WASHINGTON (Reuters) - U.S. and Japanese scientists reported on Sunday that they had used genetic engineering to produce cattle that resist mad cow disease.

They hope the cattle can be the source of herds that can provide dairy products, gelatin and other products free of the brain-destroying disease, also known as bovine spongiform encephalopathy or BSE.

Writing in the journal Nature Biotechnology, the researchers said their cattle were healthy at the age of 20 months, and sperm from the males made normal embryos that were used to impregnate cows, although it is not certain yet that they could breed normally.

The cattle lack the prion protein, a nervous-system protein that, when misfolded, causes BSE and other related diseases such as scrapie in sheep and Creutzfeldt-Jakob disease, known as CJD, in humans, the researchers said.

"(Prion-protein-negative) cattle could be a preferred source of a wide variety of bovine-derived products that have been extensively used in biotechnology, such as milk, gelatin, collagen, serum and plasma," they wrote in their report.

Yoshimi Kuroiwa of Kirin Brewery Co. in Tokyo, Japan and colleagues made the cattle, known as knockouts because a specific gene has been "knocked" out of them, using a method they call gene targeting.

"By knocking out the prion protein gene and producing healthy calves, our team has successfully demonstrated that normal cellular prion protein is not necessary for the normal development and survival of cattle. The cows are now nearly 2 years old and are completely healthy," said James Robl of Hematech, a South Dakota subsidiary of Kirin.

"We anticipate that prion protein-free cows will be useful models to study prion disease processes in both animals and humans," Robl, an expert in cloning technology, said in a statement.

Misfolded prion proteins are blamed for BSE and other, similar brain diseases. It is known that certain genetic variations make animals more susceptible to the diseases.

BSE swept through British herds in the 1980s and people began developing an odd, early-onset form of CJD called variant CJD or vCJD a few years later. CJD normally affects one in a million people globally, usually the elderly, as it has a long incubation period.

There is no cure and it is always fatal.

As of November 2006, 200 vCJD patients were reported worldwide, including 164 patients in Britain, 21 in France, 4 in the Republic of Ireland, 3 in the United States, 2 in the Netherlands and 1 each in Canada, Italy, Japan, Portugal, Saudi Arabia and Spain.

The disease may have first started to infect cattle when they were fed improperly processed remains of sheep, possibly sheep infected with scrapie. Although people are not known to have ever caught scrapie from eating sheep, BSE can be transmitted to humans.

BSE occasionally occurs in cattle outside Britain although it is now rare.

(c) www.sciam.com
