
Tuesday, March 20, 2007

248-dimension maths puzzle solved

An international team of mathematicians has detailed a vast complex numerical "structure" which was invented more than a century ago.

Mapping the 248-dimensional structure, called E8, took four years of work and produced more data than the Human Genome Project, researchers said.

Part of the E8 matrix. Image: David Vogan / MIT
The structure is described in the form of a vast matrix

E8 is a "Lie group", a means of describing symmetrical objects.

The team said their findings may assist fields of physics which use more than four dimensions, such as string theory.

Lie groups were invented by the 19th Century Norwegian mathematician Sophus Lie (pronounced "Lee").

Familiar structures such as balls and cones have symmetry in three dimensions, and there are Lie groups to describe them. E8 is much bigger.

"What's attractive about studying E8 is that it's as complicated as symmetry can get", observed David Vogan from the Massachussetts Institute of Technology (MIT) in the US.

"Mathematics can almost always offer another example that's harder than the one you're looking at now, but for Lie groups, E8 is the hardest one."

Professor Vogan is presenting the results at MIT in a lecture entitled The Character Table for E8, or How We Wrote Down a 453,060 x 453,060 Matrix and Found Happiness.

Fundamental force

Conceptualising, designing and running the calculations took a team of 19 mathematicians four years. The final computation took more than three days' solid processing time on a Sage supercomputer.

Sophus Lie. Image: Science Photo Library
Lie groups were invented by the Norwegian Sophus Lie

What came out was a matrix of linked numbers, which together describe the structure of E8. It contains more than 60 times as much data as the human genome sequence.

Each of the 205,263,363,600 entries on the matrix is far more complicated than a straightforward number; some are complex equations.

The team calculated that if all the numbers were written out in small type, they would cover an area the size of Manhattan.
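
The headline figures are easy to sanity-check with a few lines of Python. This is only a rough back-of-the-envelope comparison; the human genome length used below (about three billion base pairs) is an assumed figure, not one taken from the article.

    # Rough check of the sizes quoted above (genome length is an assumption).
    matrix_side = 453_060
    entries = matrix_side ** 2
    print(f"{entries:,}")                # 205,263,363,600 -- matches the article

    genome_base_pairs = 3_000_000_000    # assumed approximate human genome length
    print(entries / genome_base_pairs)   # roughly 68, i.e. "more than 60 times" as many items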

In addition to facilitating further understanding of symmetry and related areas of mathematics, the team hopes its work will contribute to areas of physics, such as string theory, which involve structures possessing more than the conventional four dimensions of space and time.

"While mathematicians have known for a long time about the beauty and the uniqueness of E8, we physicists have come to appreciate its exceptional role only more recently," commented Hermann Nicolai, director of the Max Planck Institute for Gravitational Physics (the Albert Einstein Institute) in Germany.

"Yet, in our attempts to unify gravity with the other fundamental forces into a consistent theory of quantum gravity, we now encounter it at almost every corner."

(c) http://news.bbc.co.uk

Tuesday, January 2, 2007

A Robot in Every Home

by Bill Gates

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialized devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when--or even if--this industry will achieve critical mass. If it does, though, it may well change the world.

A Robot in Every Home
AMERICAN ROBOTIC: Although a few of the domestic robots of tomorrow may resemble the anthropomorphic machines of science fiction, a greater number are likely to be mobile peripheral devices that perform specific household tasks.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for.

But what I really have in mind is something much more contemporary: the emergence of the robotics industry, which is developing in much the same way that the computer business did 30 years ago. Think of the manufacturing robots currently used on automobile assembly lines as the equivalent of yesterday's mainframes. The industry's niche products include robotic arms that perform surgery, surveillance robots deployed in Iraq and Afghanistan that dispose of roadside bombs, and domestic robots that vacuum the floor. Electronics companies have made robotic toys that can imitate people or dogs or dinosaurs, and hobbyists are anxious to get their hands on the latest version of the Lego robotics system.

Meanwhile some of the world's best minds are trying to solve the toughest problems of robotics, such as visual recognition, navigation and machine learning. And they are succeeding. At the 2004 Defense Advanced Research Projects Agency (DARPA) Grand Challenge, a competition to produce the first robotic vehicle capable of navigating autonomously over a rugged 142-mile course through the Mojave Desert, the top competitor managed to travel just 7.4 miles before breaking down. In 2005, though, five vehicles covered the complete distance, and the race's winner did it at an average speed of 19.1 miles an hour. (In another intriguing parallel between the robotics and computer industries, DARPA also funded the work that led to the creation of Arpanet, the precursor to the Internet.)

What is more, the challenges facing the robotics industry are similar to those we tackled in computing three decades ago. Robotics companies have no standard operating software that could allow popular application programs to run in a variety of devices. The standardization of robotic processors and other hardware is limited, and very little of the programming code used in one machine can be applied to another. Whenever somebody wants to build a new robot, they usually have to start from square one.

Despite these difficulties, when I talk to people involved in robotics--from university researchers to entrepreneurs, hobbyists and high school students--the level of excitement and expectation reminds me so much of that time when Paul Allen and I looked at the convergence of new technologies and dreamed of the day when a computer would be on every desk and in every home. And as I look at the trends that are now starting to converge, I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that enable computers to perform tasks in the physical world on our behalf. We may be on the verge of a new era, when the PC will get up off the desktop and allow us to see, hear, touch and manipulate objects in places where we are not physically present.

From Science Fiction to Reality
The word "robot" was popularized in 1921 by Czech playwright Karel Capek, but people have envisioned creating robotlike devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made from gold. In the first century A.D., Heron of Alexandria--the great engineer credited with inventing the first steam engine--designed intriguing automatons, including one said to have the ability to talk. Leonardo da Vinci's 1495 sketch of a mechanical knight, which could sit up and move its arms and legs, is considered to be the first plan for a humanoid robot.

Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov's I, Robot, movies such as Star Wars and television shows such as Star Trek. The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions. Nevertheless, although robots play a vital role in industries such as automobile manufacturing--where there is about one robot for every 10 workers--the fact is that we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately. It has proved extremely difficult to give robots the capabilities that humans take for granted--for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.

But researchers are starting to find the answers. One trend that has helped them is the increasing availability of tremendous amounts of computer power. One megahertz of processing power, which cost more than $7,000 in 1970, can now be purchased for just pennies. The price of a megabit of storage has seen a similar decline. The access to cheap computing power has permitted scientists to work on many of the hard problems that are fundamental to making robots practical. Today, for example, voice-recognition programs can identify words quite well, but a far greater challenge will be building machines that can understand what those words mean in context. As computing capacity continues to expand, robot designers will have the processing power they need to tackle issues of ever greater complexity.

A Robot in Every Home
COMPUTER TEST-DRIVE of a mobile device in a three-dimensional virtual environment helps robot builders analyze and adjust the capabilities of their designs before trying them out in the real world. Part of the Microsoft Robotics Studio software development kit, this tool simulates the effects of forces such as gravity and friction.

Another barrier to the development of robots has been the high cost of hardware, such as sensors that enable a robot to determine the distance to an object as well as motors and servos that allow the robot to manipulate an object with both strength and delicacy. But prices are dropping fast. Laser range finders that are used in robotics to measure distance with precision cost about $10,000 a few years ago; today they can be purchased for about $2,000. And new, more accurate sensors based on ultrawideband radar are available for even less.

Now robot builders can also add Global Positioning System chips, video cameras, array microphones (which are better than conventional microphones at distinguishing a voice from background noise) and a host of additional sensors for a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today's robots to do things such as vacuum a room or help to defuse a roadside bomb--tasks that would have been impossible for commercially produced machines just a few years ago.

A BASIC Approach
In February 2004 I visited a number of leading universities, including Carnegie Mellon University, the Massachusetts Institute of Technology, Harvard University, Cornell University and the University of Illinois, to talk about the powerful role that computers can play in solving some of society's most pressing problems. My goal was to help students understand how exciting and important computer science can be, and I hoped to encourage a few of them to think about careers in technology. At each university, after delivering my speech, I had the opportunity to get a firsthand look at some of the most interesting research projects in the school's computer science department. Almost without exception, I was shown at least one project that involved robotics.

At that time, my colleagues at Microsoft were also hearing from people in academia and at commercial robotics firms who wondered if our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to take a closer look. I asked Tandy Trower, a member of my strategic staff and a 25-year Microsoft veteran, to go on an extended fact-finding mission and to speak with people across the robotics community. What he found was universal enthusiasm for the potential of robotics, along with an industry-wide desire for tools that would make development easier. "Many see the robotics industry at a technological turning point where a move to PC architecture makes more and more sense," Tandy wrote in his report to me after his fact-finding mission. "As Red Whittaker, leader of [Carnegie Mellon's] entry in the DARPA Grand Challenge, recently indicated, the hardware capability is mostly there; now the issue is getting the software right."

Back in the early days of the personal computer, we realized that we needed an ingredient that would allow all of the pioneering work to achieve critical mass, to coalesce into a real industry capable of producing truly useful products on a commercial scale. What was needed, it turned out, was Microsoft BASIC. When we created this programming language in the 1970s, we provided the common foundation that enabled programs developed for one set of hardware to run on another. BASIC also made computer programming much easier, which brought more and more people into the industry. Although a great many individuals made essential contributions to the development of the personal computer, Microsoft BASIC was one of the key catalysts for the software and hardware innovations that made the PC revolution possible.

After reading Tandy's report, it seemed clear to me that before the robotics industry could make the same kind of quantum leap that the PC industry made 30 years ago, it, too, needed to find that missing ingredient. So I asked him to assemble a small team that would work with people in the robotics field to create a set of programming tools that would provide the essential plumbing so that anybody interested in robots with even the most basic understanding of computer programming could easily write robotic applications that would work with different kinds of hardware. The goal was to see if it was possible to provide the same kind of common, low-level foundation for integrating hardware and software into robot designs that Microsoft BASIC provided for computer programmers.

A Robot in Every Home
BIRTH OF AN INDUSTRY: iRobot, a company based in Burlington, Mass., manufactures the Packbot EOD, which assists with bomb disposal in Iraq


Tandy's robotics group has been able to draw on a number of advanced technologies developed by a team working under the direction of Craig Mundie, Microsoft's chief research and strategy officer. One such technology will help solve one of the most difficult problems facing robot designers: how to simultaneously handle all the data coming in from multiple sensors and send the appropriate commands to the robot's motors, a challenge known as concurrency. A conventional approach is to write a traditional, single-threaded program--a long loop that first reads all the data from the sensors, then processes this input and finally delivers output that determines the robot's behavior, before starting the loop all over again. The shortcomings are obvious: if your robot has fresh sensor data indicating that the machine is at the edge of a precipice, but the program is still at the bottom of the loop calculating trajectory and telling the wheels to turn faster based on previous sensor input, there is a good chance the robot will fall down the stairs before it can process the new information.
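
The single-threaded loop described above is easy to sketch in code. The Python below is purely illustrative -- the sensor, planning and motor functions are hypothetical placeholders, not any real robot API -- but it shows why the data goes stale: nothing sensed after the top of the loop can influence the robot until the next pass.

    import time

    def naive_control_loop(read_sensors, plan_motion, drive_motors):
        # read_sensors, plan_motion and drive_motors are hypothetical callables.
        while True:
            readings = read_sensors()        # snapshot of the world, instantly stale
            command = plan_motion(readings)  # possibly slow trajectory calculation
            drive_motors(command)            # acts on data gathered before planning began
            time.sleep(0.05)                 # fixed cycle; a cliff seen mid-cycle is missed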

Concurrency is a challenge that extends beyond robotics. Today as more and more applications are written for distributed networks of computers, programmers have struggled to figure out how to efficiently orchestrate code running on many different servers at the same time. And as computers with a single processor are replaced by machines with multiple processors and "multicore" processors--integrated circuits with two or more processors joined together for enhanced performance--software designers will need a new way to program desktop applications and operating systems. To fully exploit the power of processors working in parallel, the new software must deal with the problem of concurrency.

One approach to handling concurrency is to write multi-threaded programs that allow data to travel along many paths. But as any developer who has written multithreaded code can tell you, this is one of the hardest tasks in programming. The answer that Craig's team has devised to the concurrency problem is something called the concurrency and coordination runtime (CCR). The CCR is a library of functions--sequences of software code that perform specific tasks--that makes it easy to write multithreaded applications that can coordinate a number of simultaneous activities. Designed to help programmers take advantage of the power of multicore and multiprocessor systems, the CCR turns out to be ideal for robotics as well. By drawing on this library to write their programs, robot designers can dramatically reduce the chances that one of their creations will run into a wall because its software is too busy sending output to its wheels to read input from its sensors.
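
The CCR itself is a .NET library, and its actual API is not shown here. The sketch below only illustrates the underlying idea using Python's standard threading module: let the safety-critical sensor, the slow planner and the motor driver run as independent, coordinated tasks so that no single loop can starve the others. All function names are illustrative stand-ins, not part of any Microsoft interface.

    import queue
    import random
    import threading
    import time

    commands = queue.Queue()

    # Hypothetical stand-ins for real hardware calls.
    def read_cliff_detector():
        return random.random() < 0.01       # pretend a cliff shows up occasionally

    def plan_next_move():
        time.sleep(0.2)                     # pretend planning is slow
        return "forward"

    def drive_motors(command):
        print("motor command:", command)

    def cliff_sensor(stop):
        # Fast safety task: checks every 10 ms, never waits on the planner.
        while not stop.is_set():
            if read_cliff_detector():
                commands.put("EMERGENCY_STOP")
            time.sleep(0.01)

    def planner(stop):
        # Slow deliberative task: its latency no longer delays the safety check.
        while not stop.is_set():
            commands.put(plan_next_move())

    def motor_driver(stop):
        while not stop.is_set():
            drive_motors(commands.get())

    stop = threading.Event()
    for task in (cliff_sensor, planner, motor_driver):
        threading.Thread(target=task, args=(stop,), daemon=True).start()
    time.sleep(1)
    stop.set()                              # let the sketch run briefly, then halt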

In addition to tackling the problem of concurrency, the work that Craig's team has done will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS enables developers to create applications in which the services--the parts of the program that read a sensor, say, or control a motor-- operate as separate processes that can be orchestrated in much the same way that text, images and information from several servers are aggregated on a Web page. Because DSS allows software components to run in isolation from one another, if an individual component of a robot fails, it can be shut down and restarted--or even replaced--without having to reboot the machine. Combined with broadband wireless technology, this architecture makes it easy to monitor and adjust a robot from a remote location using a Web browser.
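
DSS is Microsoft-specific technology, and its real interfaces are not reproduced here. The short Python sketch below, built on the standard multiprocessing module, only illustrates the isolate-and-restart idea the article describes: each service runs in its own process, and a supervisor restarts any service that dies without rebooting the rest. The service functions are hypothetical placeholders.

    import multiprocessing as mp
    import time

    def sensor_service():
        # Hypothetical service; a real one would publish sensor readings.
        while True:
            time.sleep(0.1)

    def motor_service():
        # Hypothetical service; a real one would accept motor commands.
        while True:
            time.sleep(0.1)

    def supervise(services):
        # Keep every service running; restart only the ones that fail.
        procs = {name: mp.Process(target=fn, daemon=True) for name, fn in services.items()}
        for proc in procs.values():
            proc.start()
        while True:
            for name, fn in services.items():
                if not procs[name].is_alive():          # a component crashed...
                    procs[name] = mp.Process(target=fn, daemon=True)
                    procs[name].start()                 # ...so restart just that one
            time.sleep(0.5)

    if __name__ == "__main__":
        supervise({"sensor": sensor_service, "motor": motor_service})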

What is more, a DSS application controlling a robotic device does not have to reside entirely on the robot itself but can be distributed across more than one computer. As a result, the robot can be a relatively inexpensive device that delegates complex processing tasks to the high-performance hardware found on today's home PCs. I believe this advance will pave the way for an entirely new class of robots that are essentially mobile, wireless peripheral devices that tap into the power of desktop PCs to handle processing-intensive tasks such as visual recognition and navigation. And because these devices can be networked together, we can expect to see the emergence of groups of robots that can work in concert to achieve goals such as mapping the seafloor or planting crops.

These technologies are a key part of Microsoft Robotics Studio, a new software development kit built by Tandy's team. Microsoft Robotics Studio also includes tools that make it easier to create robotic applications using a wide range of programming languages. One example is a simulation tool that lets robot builders test their applications in a three-dimensional virtual environment before trying them out in the real world. Our goal for this release is to create an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs.

Should We Call Them Robots?
How soon will robots become part of our day-to-day lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by 2008. In South Korea the Ministry of Information and Communication hopes to put a robot in every home there by 2013. The Japanese Robot Association predicts that by 2025, the personal robot industry will be worth more than $50 billion a year worldwide, compared with about $5 billion today.

As with the PC industry in the 1970s, it is impossible to predict exactly what applications will drive this new industry. It seems quite likely, however, that robots will play an important role in providing physical assistance and even companionship for the elderly. Robotic devices will probably help people with disabilities get around and extend the strength and endurance of soldiers, construction workers and medical professionals. Robots will maintain dangerous industrial machines, handle hazardous materials and monitor remote oil pipelines. They will enable health care workers to diagnose and treat patients who may be thousands of miles away, and they will be a central feature of security systems and search-and-rescue operations.

Although a few of the robots of tomorrow may resemble the anthropomorphic devices seen in Star Wars, most will look nothing like the humanoid C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly difficult to say exactly what a robot is. Because the new machines will be so specialized and ubiquitous--and look so little like the two-legged automatons of science fiction--we probably will not even call them robots. But as these devices become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and entertain ourselves as the PC has had over the past 30 years.

---

BILL GATES is co-founder and chairman of Microsoft, the world's largest software company. While attending Harvard University in the 1970s, Gates developed a version of the programming language BASIC for the first microcomputer, the MITS Altair. In his junior year, Gates left Harvard to devote his energies to Microsoft, the company he had begun in 1975 with his childhood friend Paul Allen. In 2000 Gates and his wife, Melinda, established the Bill & Melinda Gates Foundation, which focuses on improving health, reducing poverty and increasing access to technology around the world.

(c) www.sciam.com

Wednesday, December 6, 2006

Danger? Nanotube-Infested Waters Created in the Lab

 

Carbon nanomaterials can mix in water despite being hydrophobic, raising the possibility of a spreading spill in the future.

Carbon nanotubes--and their spherical cousins known as buckyballs--are proving to have myriad uses, finding employ in improved solar cells, electronics and medical probes. But the production volume of the tricky nanomaterials remains nanoscale when compared with the production volume of other industrial components. Nevertheless, environmental engineers have begun investigating how such materials might interact with natural environments if accidentally released and have discovered that at least some of the hydrophobic (water fearing) materials persist quite readily in natural waters.
Jae-Hong Kim of the Georgia Institute of Technology and his colleagues investigated how so-called multiwalled carbon nanotubes--layered, straws-within-straws of carbon atoms--interacted with natural water, in this case samples taken from the nearby Suwannee River. To their surprise, the carbon nanomaterial did not clump together to avoid water molecules; rather, it interacted with the negatively charged natural organic matter in the river water. This organic matter seemed to shield the nanotubes and allow them to disperse throughout the water after an hour of mixing, instead of clumping and settling. "At the beginning, the solution is very black and, over time, it becomes grayish," Kim says. "What is interesting is that it is still grayish after a month." In other words, the nanotubes do not settle even after this time period.
This monthlong suspension means that Suwannee River water was actually better at promoting the dispersal of carbon nanotubes than chemical surfactants, which can maintain nanotubes in solution for roughly four days, according to the paper presenting the finding published online in Environmental Science and Technology. Similar studies with buckyballs--stable balls of 60 carbon atoms, also known as C60--had required copious organic solvents in order to maintain suspensions.

Because of the presence of such solvents, toxicity tests on C60 have been open to question as to whether the buckyballs or the solvents caused the damaging effects. Environmental engineers Volodymyr Tarabara and Syed Hashsham of Michigan State University and their colleagues tested the toxic effects of such buckyballs in water--without solvents--on lymphocytes, human immune cells. The researchers created solutions of C60 and water using ethanol at levels previously proven to have no toxic impact and using weeks worth of magnetically powered stirring.
At concentrations as low as 2.2 micrograms per liter, the clumps of C60 damaged the DNA of the immune cells, according to microscopic analysis presented in the December 1 issue of Environmental Science and Technology. The exact mechanism by which C60 causes the DNA damage remains unclear, particularly because imaging could not detect the smallest of the buckyball clumps, but its DNA-damaging effect was dose dependent. "We are not sure if very very small particles exist, one or two nanometers big," Tarabara says. "They may be very important as far as cellular damage."
Regardless, such nanopollution is unlikely to occur anytime soon: "The fact of the matter is that it takes weeks of mixing to generate appreciable concentrations in the size range where the particles are small enough not to settle," Tarabara notes. "It's not something that we can expect to be out there loose." But the environmental engineers argue that such research should be carried out before any widespread adoption of the new carbon nanomaterials takes place, especially because they seem to have a few surprises in store. "One thing is definite," Kim says, "these materials were not traditionally considered an aqueous-based contaminant." He adds: "I am saying, 'Well, it seems possible.'" --David Biello

[source: www.sciam.com]

Saturday, January 6, 2007

With Mild Winter, the City Revisits Fall Fashion and the Record Books

by Anthony Ramirez
Karen Flagg with her 3-year-old son, James, on the swings at the Bleecker Street Playground in Greenwich Village. New York City, basking in warm weather, hasn’t gone this long without snow since 1878.

The last recorded time the snowfall in Central Park came so late in the season, the date was Jan. 4, 1878.

Rutherford B. Hayes was president, the tallest building in the city was Trinity Church (281 feet), and there was no Statue of Liberty. (It was erected in 1886.)

In a sense, there was no New York, either. The boroughs consolidated in 1898. Before then, the Bronx was called the Annexed District, Queens was farms, Staten Island was nearly empty, and Brooklyn was the nation’s third-largest city.

Yesterday, with parts of the nation shivering and the Rockies and the Midwest pummeled by another snowstorm, the record for the latest appearance of snow in New York City was broken with little fanfare.

For now, not even a flurry is in the immediate forecast. Indeed, today the temperature might reach 71 degrees, which would be another record. According to the National Weather Service, it will not even come close to freezing until Tuesday night, when the temperature could go down to 30 degrees.

For many people interviewed yesterday — a warm day of mist and gray skies — the city without snow was both a bewilderment and a delight.

There was scarcely a fedora, a knit cap or a hoodie to be seen. Therese Kahn, an interior decorator on the Upper East Side, was wearing what she described as “comfortable” Stuart Weitzman patent-leather boots, rather than Gore-Tex snow boots.

“It’s amazing that it’s so nice,” said Ms. Kahn, 50, who also had on a thin white parka, unzipped. “I have two teenage daughters and I’m always worried that they’re not dressed warmly enough, so this lifts the pressure.”

Jan Khan, 53, has been a doorman at 88 Central Park West for 21 years. “This is the first year I see no snow coming down,” he said. “I don’t like it. It’s not normal.”

Mr. Khan, originally from Mansehra, in northern Pakistan, said winter was invading usually warmer countries of Asia. On Thursday, more than 30 people were reported dead in Madhya Pradesh, in central India, and at least 20 in Bihar, in northeastern India, because of a severe cold snap.

“In Pakistan that is the problem now,” Mr. Khan said. “Two feet, three feet of snow. The Arctic is happening in my country, and India and Bangladesh and Nepal and China, all under snow.”

In East Harlem, at the Three Kings Day Parade, which commemorates the arrival of the Magi in Bethlehem, Carlos Canales, 36, from Glendale, Queens, worried about the weather.

“People aren’t really ready for the winter anymore,” he said. “We’re going to get caught off guard when winter finally hits us and a lot of people are going to get sick.”

Nearby, in Central Park, Patrick Denehan, 36, a furniture mover from Washington Heights, sipped coffee and watched geese waddle near an ice-free Harlem Meer.

“It feels,” Mr. Denehan said, “like the Twilight Zone.”

There is one positive aspect to the warm weather: the pothole situation. The city’s Department of Transportation said that work crews paved 17,357 potholes last month, about a quarter fewer than the 22,685 during the much snowier December of 2005.

In December 1877, when The New York Times took note of the snowless Christmas, the day was described as crisp and sunny.

The headline said, “A MILD CHRISTMAS DAY — THOUSANDS OF PERSONS IN THE CENTRAL PARK.”

The Times account read, “It is estimated, and the estimate is thought to be moderate, that fully 50,000 persons were in the Park during the afternoon, nearly all of whom visited the new Museum, opened by the President on Saturday.”

The Times noted, however, that the weather did hurt certain businesses. “Dry-goods houses, clothiers and coal dealers have been the heaviest sufferers,” the newspaper said. “They have seen their Winter’s supplies lie on their hands almost undiminished.”

When snow finally fell for the first time that winter in Central Park on Jan. 4, 1878, The Times did not report it. The newspaper did say that Poughkeepsie had four inches of snow.

The National Weather Service was cautious yesterday about how snowless is snowless. Jeffrey Tongue, science and operations officer at the service’s Upton office on Long Island, said the Jan. 4, 1878, date is based on the best available records.

“When we’re talking about a snow flurry that might last 10 minutes,” Mr. Tongue said, “there’s a question whether those were fully documented. We believe the 1878 date is accurate, but of course there’s nobody alive to actually ask about it.”

Stephen Fybish, an amateur weather historian, contends that the record for late snow in Central Park occurred far later than 1878, indeed nearly a century later, on Jan. 29, 1973.

“This is also true,” said Mr. Tongue of the weather service. “The 1878 date is for a trace of snow, which doesn’t stick to the ground, and the 1973 date is for measurable snow, which was 1.8 inches.”

So, whether the start date for the snowless record should be Jan. 5 or Jan. 30 is a matter of keen scholarly interest.

But, please, the weather service urges, no wagering.

(c) www.nytimes.com

Tuesday, December 19, 2006

Planets Were Formed From A Giant Mix, Suggests New Analysis

 

Our Solar System may have been created in a gigantic mixing process far more extensive than previously imagined, according to research published today.

NASA's dust collector, which brought samples of Comet Wild-2 to Earth. (Image courtesy of Imperial College London)

The findings, reported in the journal Science, come from the first analysis of dust fragments from Comet Wild-2, captured by NASA's Stardust spacecraft and brought to Earth in January 2006. Because comets are among the oldest objects in the Solar System, the team, which includes researchers from Imperial College London and the Natural History Museum, believes their sample of dust can provide insights into how Earth and other planets came to be formed.

Using spectroscopy technology which does not damage the mineral content of the particles, the team found that the comet dust is made up of many different mineral compositions rather than a single dominant one. This implies that the dust was formed in many different environments before coming together to make the comet, indicating a great deal of mixing in the early Solar System prior to the formation of planets.

Particularly significant was the discovery of calcium aluminium inclusions, which are amongst the oldest solids in the Solar System and are thought to have formed close to the young Sun. This discovery suggests that components of the comet came from all over the early Solar System, with some dust having formed close to the Sun and other material coming from the asteroid belt between Mars and Jupiter. Since Wild-2 originally formed in the outer Solar System, this means that some of its composite material has travelled great distances. Dr Phil Bland of Imperial's Department of Earth Science and Engineering says:

"We weren't expecting to find such widely-spread material in the sample of dust we were given to examine. The composition of minerals is all over the place, which tells us that the components that built this comet weren't formed in one place at one time by one event. It seems that the Solar System was born in much more turbulent conditions than we previously thought."

The researchers have also found evidence of surprising variety in cometary composition. NASA's 2005 Deep Impact mission, which provided images of material blasted from the nucleus of the comet Tempel 1, revealed evidence of aqueous activity within the comet. However the dust from Wild-2 has none of those characteristics and apparently has not interacted with water at all. Anton Kearsley of the Natural History Museum says:

"This is a very interesting mismatch, and it seems that comets are not all the same. Perhaps they vary as much in their evolution as in the composition of the dust from which they are made."

This is the first time scientists have had the opportunity to study samples from a comet, having previously relied on studying comets from afar or analysing interplanetary dust particles of uncertain origin. Dr Bland adds:

"Comets are likely to be the oldest objects in our Solar System and their components have remained largely unchanged, so discovering more about what they have experienced gives us a snapshot of the processes that formed the planets over four and a half billion years ago. Fundamentally we still don't know how you make planets from a cloud of dust and gas. Hopefully the Wild-2 samples will help us towards an answer."

The analysis was carried out by the Impacts and Astromaterials Research Centre, a joint Imperial-Natural History Museum research group funded by the Particle Physics and Astronomy Research Council.

[Thanks goes to www.sciencedaily.com for this article]

Wednesday, January 10, 2007

Scientist says pulsar may have four poles

SEATTLE, Jan. 10 (UPI) -- The neutron star in the Crab Nebula may have four magnetic poles, which would be a cosmic first, a Puerto Rican scientist said.

A neutron star, the remnant of a star after a supernova explosion, sometimes is called a pulsar because it emits beams of radio waves that sweep past like the beam of a lighthouse. The profile of the Crab Nebula star's pulse suggests the magnetic field that drives its emission is different, the BBC said.

The Crab Nebula pulsar has two pulses that can be identified, Tim Hankins, from the Arecibo Observatory in Puerto Rico, said during the American Astronomical Society meeting in Seattle. The profiles of pulses from the north and south poles of a neutron star should be identical.

Profiles for these pulses weren't, which Hankins said is the first time this has been noted in a pulsar.

"What we think is that there is another pole, possibly with a partner, that is influencing and distorting the magnetic field," he said, explaining that magnetic poles always come in pairs, so the fourth pole is distinctly likely.

Copyright 2007 by United Press International. All Rights Reserved.

(c) www.physorg.com

Monday, February 26, 2007

Searching for Signs of Life on Mars

by Guy Webster, Dwayne Brown

 

Urey Instrument. Credit: NASA/JPL

NASA-funded researchers are refining a tool that could not only check for the faintest traces of life's molecular building blocks on Mars, but could also determine whether they have been produced by anything alive.

The instrument, called Urey: Mars Organic and Oxidant Detector, has already shown its capabilities in one of the most barren climes on Earth, the Atacama Desert in Chile. The European Space Agency has chosen this tool from the United States as part of the science payload for the ExoMars rover planned for launch in 2013. Last month, NASA selected Urey for an instrument-development investment of $750,000.


Artist's concept of ExoMars. Credit: European Space Agency

The European Space Agency plans for the ExoMars rover to grind samples of Martian soil to fine powder and deliver them to a suite of analytical instruments, including Urey, that will search for signs of life. Each sample will be a spoonful of material dug from underground by a robotic drill.
"Urey will be able to detect key molecules associated with life at a sensitivity roughly a million times greater than previous instrumentation," said Dr. Jeffrey Bada of Scripps Institution of Oceanography at the University of California, San Diego. Bada is the principal investigator for an international team of scientists and engineers working on various components of the device.
To aid in interpreting that information, part of the tool would assess how rapidly the environmental conditions on Mars erase those molecular clues.
Dr. Pascale Ehrenfreund of the University of Leiden in the Netherlands, said, "The main objective of ExoMars is to search for life. Urey will be a key instrument for that because it is the one with the highest sensitivity for organic chemicals." Ehrenfreund, one of two deputy principal investigators for Urey, coordinates efforts of team members from five other European countries.
Urey can detect several types of organic molecules, such as amino acids, at concentrations as low as a few parts per trillion.
All life on Earth assembles chains of amino acids to make proteins. However, amino acids can be made either by a living organism or by non-biological means. This means it is possible that Mars has amino acids and other chemical precursors of life but has never had life. To distinguish between that situation and evidence for past or present life on Mars, the Urey instrument team will make use of the knowledge that most types of amino acids can exist in two different forms. One form is referred to as "left-handed" and the other as "right-handed." Just as the right hand on a human mirrors the left, these two forms of an amino acid mirror each other.
Amino acids from a non-biological source come in a roughly 50-50 mix of right-handed and left-handed forms. Life on Earth, from the simplest microbes to the largest plants and animals, makes and uses only left-handed amino acids, with rare exceptions. Comparable uniformity -- either all left or all right -- is expected in any extraterrestrial life using building blocks that have mirror-image versions because a mixture would complicate biochemistry.
"The Urey instrument will be able to distinguish between left-handed amino acids and right-handed ones," said Allen Farrington, Urey project manager at NASA's Jet Propulsion Laboratory, which will build the instrument to be sent to Mars.
If Urey were to find an even mix of the mirror-image molecules on Mars, that would suggest life as we know it never began there. All-left or all-right would be strong evidence that life now exists on Mars, with all-right dramatically implying an origin separate from Earth life. Something between 50-50 and uniformity could result if Martian life once existed, because amino acids created biologically gradually change toward an even mixture in the absence of life.
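The interpretation described in the last two paragraphs boils down to looking at the left/right balance of the amino acids detected. Here is a toy Python sketch of that decision logic; the thresholds and the function name are illustrative choices made for this example, not Urey's actual criteria.

    def interpret_chirality(left_count, right_count):
        # Classify a sample by its amino-acid handedness balance.
        # The 10% band used below is an arbitrary illustration.
        total = left_count + right_count
        if total == 0:
            return "no amino acids detected"
        left_fraction = left_count / total
        if abs(left_fraction - 0.5) < 0.10:
            return "roughly 50-50 mix: consistent with non-biological chemistry"
        if left_fraction > 0.90:
            return "nearly all left-handed: suggests life, perhaps related to Earth life"
        if left_fraction < 0.10:
            return "nearly all right-handed: suggests life of independent origin"
        return ("intermediate excess: could be a remnant of past life, since biological "
                "amino acids drift back toward a 50-50 mix once life is gone")

    print(interpret_chirality(480, 520))   # example: near-racemic sample
    print(interpret_chirality(990, 10))    # example: strong left-handed excess
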
The 1976 NASA Viking mission discovered that strongly oxidizing conditions at the Martian surface complicate experiments to search for life. The Urey instrument has a component, called the Mars oxidant instrument, for examining those conditions.
The oxidant instrument has microsensors coated with various chemical films. "By measuring the reaction of the sensor films with chemicals present in the Martian soil and atmosphere, we can establish if organisms could survive and if evidence of past life would be preserved," said Dr. Richard Quinn, a co-investigator on Urey from the SETI Institute, Mountain View, Calif., who also works at NASA Ames Research Center, Moffett Field, Calif.

"In order to improve our chances of finding chemical evidence of life on Mars, and designing human habitats and other equipment that will function well on Mars' surface, we need to improve our understanding of oxidants in the planet's surface environment," said Dr. Aaron Zent, a Urey co-investigator at NASA Ames.
A Urey component called the sub-critical water extractor handles the task of getting any organic compounds out of each powdered sample the ExoMars rover delivers to the instrument. "It's like an espresso maker," explained JPL's Dr. Frank Grunthaner, a deputy principal investigator for Urey. "We bring the water with us. It is added to the sample, and different types of organic compounds dissolve into the liquid as the temperature increases. We keep it under pressure the whole time."
The dissolved compounds are highly concentrated by stripping away water in a tiny oven. Then a detector checks for fluorescent glowing, which would indicate the presence of amino acids, some components of DNA and RNA, or other organic compounds that bind to a fluorescing chemical added by the instrument.
A Urey component called the micro-capillary electrophoresis unit has the critical job of separating different types of organic compounds from one another for identification, including separation of mirror-image amino acids from each other. "We have essentially put a laboratory onto a single wafer," said Dr. Richard Mathies of the University of California, Berkeley, a Urey co-investigator. The device for sending to Mars will be a small version incorporating this detection technology, which is already in use for biomedical procedures such as law-enforcement DNA tests and checking for hazardous microbes.
Switzerland will provide electronics design and packaging expertise for Urey. Micro-Cameras and Space Exploration S.A., Neuchatel, will collaborate with JPL and the European Space Agency to accomplish this significant contribution to the heart of the instrument. Dr. Jean-Luc Josset, Urey co-investigator at the University of Neuchatel, will coordinate this effort and help provide detector selection and support.

(c) www.physorg.com

Saturday, January 6, 2007

Chemistry Of Volcanic Fallout Reveals Secrets Of Past Eruptions

Science Daily — A team of American and French scientists has developed a method to determine the influence of past volcanic eruptions on climate and the chemistry of the upper atmosphere, and significantly reduce uncertainty in models of future climate change.


Joel Savarino collecting snow samples at Dome C. (Credit: Joel Savarino, CNRS)

In the January 5 issue of the journal Science, the researchers from the University of California, San Diego, the National Center for Scientific Research (CNRS) and the University of Grenoble in France report that the chemical fingerprint of fallout from past eruptions reveals how high the volcanic material reached, and what chemical reactions occurred while it was in the atmosphere. The work is particularly relevant because the effect of atmospheric particles, or aerosols, is a large uncertainty in models of climate, according to Mark Thiemens, Dean of UCSD’s Division of Physical Sciences and professor of chemistry and biochemistry.

“In predictions about global warming, the greatest amount of error is associated with atmospheric aerosols,” explained Thiemens, in whose laboratory the method, which is based on the measurement of isotopes—or forms of sulfur—was developed. “Now for the first time, we can account for all of the chemistry involving sulfates, which removes uncertainties in how these particles are made and transported. That’s a big deal with climate change.”

Determining the height of a past volcanic eruption provides important information about its impact on climate. If volcanic material only reaches the lower atmosphere, the effects are relatively local and short term because the material is washed out by rain. Eruptions that reach higher, up to the stratosphere, have a greater influence on climate.

“In the stratosphere, sulfur dioxide that was originally in the magma gets oxidized and forms droplets of sulfuric acid,” said Joël Savarino, a researcher at the CNRS and the University of Grenoble, who led the study. “This layer of acid can stay for years in the stratosphere because no liquid water is present in this part of the atmosphere. The layer thus acts as a blanket, reflecting the sunlight and therefore reducing the temperature at ground level, significantly and for many years.”

To distinguish eruptions that made it to the stratosphere from those that did not, the researchers examined the isotopes of sulfur in fallout preserved in the ice in Antarctica. The volcanic material is carried there by air currents. Thiemens, Savarino and two of their students traveled to Antarctica and recovered the samples by digging snow pits near the South Pole and Dome C, the new French/Italian inland station.

Sulfur that rises as high as the stratosphere, above the ozone layer, is exposed to short wavelength ultraviolet light. UV exposure creates a unique ratio of sulfur isotopes. Therefore the sulfur isotope signature in fallout reveals whether or not an eruption was stratospheric.
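
In the sulfur-isotope literature this UV-produced signature is usually quantified as a deviation from the ordinary mass-dependent relationship between the 33S and 34S isotope ratios, commonly written as Delta-33S. The Python sketch below uses that standard definition as general context; the formula and the example numbers are not taken from this study.

    def capital_delta_33s(delta_33s, delta_34s):
        # Mass-independent sulfur anomaly, in permil, per the standard definition:
        # D33S = d33S - 1000 * ((1 + d34S/1000)**0.515 - 1).
        # A value near zero means ordinary mass-dependent chemistry; a clearly
        # nonzero value is the kind of UV-produced stratospheric mark described above.
        return delta_33s - 1000.0 * ((1.0 + delta_34s / 1000.0) ** 0.515 - 1.0)

    # Illustrative numbers only, not measurements from the paper:
    print(capital_delta_33s(5.15, 10.0))   # ~0: no mass-independent anomaly
    print(capital_delta_33s(7.00, 10.0))   # ~ +1.9 permil: anomalous, UV-processed sulfur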

To develop the method, the team, which also included Mélanie Baroni, the first author on the paper who is a postdoctoral fellow working with Savarino, and Robert Delmas, a research director at the CNRS, focused on two volcanic eruptions. Both eruptions, the 1963 eruption of Mount Agung in Bali and the 1991 eruption of Mount Pinatubo in the Philippines, were stratospheric according to the isotope measurements.

“Young volcanoes have the advantage of having been documented by modern instruments, such as satellites or aircraft,” said Savarino, who began his investigations into sulfur isotope measurements when he was a postdoctoral fellow working with Thiemens. “We could therefore compare our measurements on volcanic fallout stored in snow with atmospheric observations.”

Not only did their isotope measurements match the atmospheric observations, they were also able to distinguish the Pinatubo eruption from the eruption of Cerro Hudson that occurred the same year. Cerro Hudson did not send material as high as the stratosphere and the fallout had a different sulfur isotope fingerprint than the fallout from Pinatubo.

Volcanic material from more ancient eruptions is preserved in Antarctica, but the older, deeper seasonal layers of ice are extremely thin as a result of the pressure from the overlying ice. Therefore, it is not currently feasible to extract enough fallout from the ice to apply the isotope method to all past volcanoes. However, data from eruptions in the recent past reveal what chemical reactions of sulfates occur in the upper atmosphere.

Some scientists have proposed that if global warming becomes severe, sulfates could be injected into the stratosphere in order to block some of the incoming solar radiation and reduce the temperature. Thiemens explained that understanding the chemical reactions of sulfates in the stratosphere is critical to determining if this approach would be effective.

“Sulfates can cause warming or cooling depending on how they are made,” he said. “They are usually white particles, which tend to reflect sunlight, but if they are made on dark particles like soot, they can absorb heat and worsen warming.”

The study was funded by the French Polar Institute (IPEV) and the National Science Foundation Office of Polar Programs.

Note: This story has been adapted from a news release issued by University of California - San Diego.

(c) www.sciencedaily.com

Tuesday, March 20, 2007

No Sex For 40 Million Years? No Problem

Science Daily — A group of organisms that has never had sex in over 40 million years of existence has nevertheless managed to evolve into distinct species, says new research published today. The study challenges the assumption that sex is necessary for organisms to diversify and provides scientists with new insight into why species evolve in the first place.


Scanning electron micrographs showing morphological variation of bdelloid rotifers and their jaws. Have these asexual animals really diversified into evolutionary species? (Credit: Diego Fontaneto / Courtesy of PLoS Biology)

The research, published in PLoS Biology, focuses on the study of bdelloid rotifers, microscopic aquatic animals that live in watery or occasionally wet habitats including ponds, rivers, soils, and on mosses and lichens. These tiny asexual creatures multiply by producing eggs that are genetic clones of the mother -- there are no males. Fossil records and molecular data show that bdelloid rotifers have been around for over 40 million years without sexually reproducing, and yet this new study has shown that they have evolved into distinct species.

Using a combination of DNA sequencing and jaw measurements taken using a scanning electron microscope, the research team examined bdelloid rotifers living in different aquatic environments across the UK, Italy and other parts of the world. They found genetic and jaw-shape evidence that the rotifers had evolved into distinct species by adapting to differences in their environment.

Dr Tim Barraclough from Imperial College London's Division of Biology explained: "We found evidence that different populations of these creatures have diverged into distinct species, not just because they become isolated in different places, but because of the differing selection pressures in different environments.

"One remarkable example is of two species living in close proximity on the body of another animal, a water louse. One lives around its legs, the other on its chest, yet they have diverged in body size and jaw shape to occupy these distinct ecological niches. Our results show that, over millions of years, natural selection has caused divergence into distinct entities equivalent to the species found in sexual organisms."

Previously, many scientists had thought that sexual reproduction was necessary for speciation because of the importance of interbreeding in explaining speciation in sexual organisms. Asexual creatures like the bdelloid rotifers were known not to be all identical, but it had been argued that the differences might arise solely through the chance build-up of random mutations that occur in the 'cloning' process when a new rotifer is born. The new study proves that these differences are not random and are the result of so-called 'divergent selection', a process well known to cause the origin of species in sexual organisms.

Dr Barraclough adds: "These really are amazing creatures, whose very existence calls into question scientific understanding, because it is generally thought that asexual creatures die out quickly, but these have been around for millions of years.

"Our proof that natural selection has driven their divergence into distinct species is another example of these miniscule creatures surprising scientists -- and their ability to survive and adapt to change certainly raises interesting questions about our understanding of evolutionary processes."

Note: This story has been adapted from a news release issued by Imperial College London.

(c) www.sciencedaily.com


Saturday, January 6, 2007

Cancer-killing Invention Also Harvests Stem Cells

Science Daily — Associate Professor Michael King of the University of Rochester Biomedical Engineering Department has invented a device that filters the blood for cancer and stem cells.  When he captures cancer cells, he kills them.  When he captures stem cells, he harvests them for later use in tissue engineering, bone marrow transplants, and other applications that treat human disease and improve health.


Bone marrow cells that have been purified in a StemCapture device. (Image courtesy of School of Engineering and Applied Sciences, University of Rochester)

With Nichola Charles, Jared Kanofsky, and Jane L. Liesveld of the University of Rochester, King wrote about his discoveries in "Using Protein-Functionalized Microchannels for Stem Cell Separation," Paper No. ICNMM2006-96228, Proceedings of the ASME, June 2006.  King’s team includes scientists at StemCapture, Inc., a Rochester company that bought the University patent for King’s technique in November 2005 to build the cancer-killing and stem cell-harvesting devices.  The technique can be used in vivo, meaning a device is inserted in the body, or in vitro, in which case the device resides outside of the body – either way, the device kills cancer cells and captures stem cells, which grow into blood cells, bone, cartilage, and fat.

When King was working at the University of Pennsylvania from 1999 to 2001, one of his labmates discovered that bone marrow stem cells stick to adhesive proteins called selectins more strongly than other cells -- including blood cells -- stick to selectins.  When King came to the University of Rochester in early 2002, he started studying the adhesion of blood cells to the vascular wall, the inner lining of the blood vessels.  During inflammation, the vascular wall presents surface selectins that adhere specifically to white blood cells.  These selectins cause the white blood cells to roll slowly along the vascular wall, seeking signals that tell them to crawl out of the bloodstream.  This is how white blood cells migrate to bacterial infections and tissue injuries.  King set out to find a way to duplicate this natural process. 

First, he noted that the selectins form bonds with the white blood cells within fractions of a second, then immediately release the cells back into the bloodstream.  He also realized that selectin is the adhesive mechanism by which bone marrow stem cells leave the bloodstream and find their way back into bone marrow.  This is how bone marrow transplantation works.  Finally, he learned that when a cancer cell breaks free of a primary tumor and enters circulation, it flows through the bloodstream to a remote organ, then leaves the bloodstream and forms a secondary tumor.  This is how cancer spreads.  He put these facts together with one more, very important fact:  the selectins grab onto a specific carbohydrate on the surfaces of white blood cells, stem cells, and cancer cells.  Associate Professor King decided to capture stem and cancer cells before the selectins release them.

Harvesting Stem Cells

Because bone marrow stem cells stick to selectin surfaces more strongly than other cells, King’s group coated a slender plastic tube with selectin.  They then did a series of lab experiments, both in vitro and in vivo using rats, with this selectin-coated tube to filter the bloodstream for stem cells.  It worked, and the King Lab discovered that they could attract a large number of cells to the wall of their selectin-coated device, and that 38% of these captured cells were stem cells.  King envisioned a system by which doctors could remove stem cells from the bloodstream by flowing the cells through a device, and make a more concentrated mixture containing, say, 20-40 percent stem cells.  These stem cells could then be used for tissue engineering or bone marrow transplantation. 

This is a non-controversial way of obtaining stem cells that can be differentiated into other, useful cells.

King’s team can capture significant numbers of cells of the lymphatic and circulatory systems, and potentially mesenchymal stem cells, which are unspecialized cells that form tissue, bone, and cartilage.  Current procedures enable the specific capture of hematopoietic stem cells, which grow (or differentiate) over time into all of the different blood cells, and the specific capture of stem cells that differentiate into bone marrow cells.  The device itself uses a combination of microfluidics, or fluid flow properties, and specialized selectin coatings.
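
To put the capture figures above in perspective, here is a minimal back-of-envelope sketch in Python. Only the 38% capture purity and the envisioned 20-40% target range come from the article; the baseline stem-cell fraction of the input sample is a hypothetical assumption used purely for illustration.

```python
# Back-of-envelope look at the stem-cell enrichment reported for the
# selectin-coated device. The baseline fraction is hypothetical.

def enrichment_factor(purity: float, baseline_fraction: float) -> float:
    """Fold-enrichment of stem cells relative to the starting sample."""
    return purity / baseline_fraction

baseline_fraction = 0.01   # assumed: ~1% stem cells in the unprocessed sample (hypothetical)
captured_purity = 0.38     # reported: 38% of captured cells were stem cells

print(f"Capture purity of 38% -> "
      f"{enrichment_factor(captured_purity, baseline_fraction):.0f}x enrichment")

# The envisioned clinical mixture of 20-40% stem cells:
for target in (0.20, 0.40):
    fold = enrichment_factor(target, baseline_fraction)
    print(f"Target purity {target:.0%} -> {fold:.0f}x over the assumed baseline")
```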

Killing Cancer Cells

Another exciting application of King’s invention is filtering the blood for cancer cells and triggering their death, a new way to prevent the spread of cancer.  When someone has a primary cancer tumor, a small number of cancer cells circulate through the bloodstream.  In a process called metastasis, these cells travel from the primary tumor to other locations in the body, where they form secondary, cancerous growths.

As a cancer cell flows along the implanted surface, King’s device captures it and delivers an apoptosis signal, a biochemical way of telling the cancer cell to kill itself.  Within two days, that cancer cell is dead.  Normal cells are left totally unharmed because the device selectively targets cancer cells. 

The apoptosis signal is delivered by a molecule called TRAIL that coats the cancer-killing device.  Cancer cells have five types of proteins that recognize and bind to TRAIL, but only two trigger cell death.  The other three are called decoy receptors.  Healthy cells contain a lot of decoy receptors, giving them a natural protection against TRAIL, whereas cancer cells mainly express the two receptors that signal cell death. 
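
A toy calculation can illustrate why the decoy receptors matter: if TRAIL binding events are spread across a cell's receptors, the fraction that hits a death-signaling receptor determines how vulnerable the cell is. The receptor counts below are purely hypothetical, chosen only to illustrate the reasoning, not measured expression levels.

```python
# Toy illustration of TRAIL selectivity. Receptor counts are hypothetical,
# chosen only to show how a decoy-rich surface dilutes the death signal.

def death_signal_fraction(death_receptors: int, decoy_receptors: int) -> float:
    """Fraction of TRAIL binding events that hit a death-signaling receptor."""
    total = death_receptors + decoy_receptors
    return death_receptors / total if total else 0.0

cells = {
    "healthy cell (decoy-rich, assumed)": (10, 90),
    "cancer cell (death-receptor-rich, assumed)": (90, 10),
}

for name, (death, decoy) in cells.items():
    frac = death_signal_fraction(death, decoy)
    print(f"{name}: {frac:.0%} of binding events signal apoptosis")
```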

During the death of the cancer cells, TRAIL is not depleted or used up in any way, and in fact, it stays active for many weeks or months.  The same TRAIL molecules can kill enormous numbers of cancer cells. 

A possible way to use the cancer-killing invention is to implant the device in the body before primary tumor surgery or chemotherapy.  When doctors remove a primary tumor, the procedure itself can release cancer cells into the bloodstream.  King’s device would grab those cancer cells and kill them, greatly reducing the possibility of metastasis.

Associate Professor King envisions that the device would use a shunt similar to the type used in hospitals today.  This shunt would reside on the exterior of the arm or be implanted beneath the skin.  Some of the blood flow would bypass the capillary bed and instead go into the shunt, which could remain implanted for many weeks, continually removing and killing cancer cells.  King’s first targets are colorectal cancer and blood malignancies such as leukemia.

Note: This story has been adapted from a news release issued by University of Rochester, School of Engineering and Applied Sciences.

(c) www.sciencedaily.com

Thursday, December 7, 2006

A terabyte of data on a regular DVD?

This is the promise of the 3-D Optical Data Storage system developed at the University of Central Florida (UCF). This technology makes it possible to record and store at least 1,000 GB of data on multiple layers of a single disc. The system uses lasers to compact large amounts of information onto a DVD; by using several layers, this technique will increase the storage capacity of a standard DVD to more than a terabyte.
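
As a rough sanity check on the terabyte claim, the sketch below estimates how many recording layers such a disc would need. The per-layer capacity is an assumption (roughly that of a conventional single-layer DVD), not a figure from the article.

```python
import math

# Rough layer-count estimate for a ~1 TB multi-layer disc.
# Assumption (not from the article): each layer holds about as much
# data as a conventional single-layer DVD.
TARGET_CAPACITY_GB = 1000.0   # "at least 1,000 GB" reported
PER_LAYER_GB = 4.7            # assumed per-layer capacity (hypothetical)

layers_needed = math.ceil(TARGET_CAPACITY_GB / PER_LAYER_GB)
print(f"Layers needed at {PER_LAYER_GB} GB per layer: {layers_needed}")  # about 213 layers
```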

This technology has been developed by Kevin D. Belfield, Department Chair and Professor of Chemistry at UCF, and his colleagues in the Belfield Research Group. So how does this work?

The process involves shooting two different wavelengths of light onto the recording surface. The use of two lasers creates a very specific image that is sharper than what current techniques can render. Depending on the color (wavelength) of the light, information is written onto the disc. The information is highly compacted, so the disc isn't much thicker than a typical DVD.

The challenge scientists faced for years was that light is also used to read the information. The light couldn't distinguish between reading and writing, so it would destroy the recorded information. Belfield's team developed a way to use light tuned to specific colors, or wavelengths, so that the information a user wants to keep stays intact.

Below is a picture showing how this two-photon 3D optical system reads the data. "This 3D image was reconstructed from successive two-photon fluorescence imaging (readout) of 33 XY data planes along the axial direction (1 micron distance between each image). The principle for this novel two-photon 3D optical storage device was based on a bichromophoric mixture consisting of diarylethene and fluorene derivative, suitable for recording data in thick storage media." (Credit: Dr. Zhen-Li Huang, UCF)


This research work has been published by Advanced Materials under the title "Two-Photon 3D Optical Data Storage via Fluorescence Modulation of an Efficient Fluorene Dye by a Photochromic Diarylethene" (Volume 18, Issue 21, Pages 2910-2914, Published online on October 30, 2006). Here is a link to the abstract.

This work has also been reviewed by Rachel Pei Chin Won in Nature Photonics under the title "Two photons are better than one" (November 16, 2006). Here are more details about this "Two-Photon 3-D Optical Data Storage" system.

[The researchers] have fabricated a two-photon three-dimensional optical data system using a photochromic polymer. They show that the system is suitable for recording data in thick storage media and for providing a readout method that does not erase existing stored information — they perform 10,000 readout cycles with only a small reduction in contrast. Also, contrary to other techniques, this method allows reading and writing of data at the same wavelength, which is achieved by changing the intensity of the laser light.

Nature Photonics also describes what kind of lasers were used by Belfield and his team.

Although the authors used a relatively expensive femtosecond Ti-sapphire laser to both read and write the information, they suggest that the data could be read using cheaper nanosecond laser diodes with comparable laser intensity, making this high density data-storage system more cost effective.

But when will we be able to use DVDs with a terabyte capacity? Not for several years. In fact, the researchers just received a $270,000, three-year grant from the National Science Foundation to continue their work.

In the meantime, you can still visit — virtually — Belfield's lab. In particular, you should take a look at this page about High-Density Optical Data Storage, from which the above illustration has been extracted, and a photo gallery about One vs Two-photon Excitation.

[original post: http://blogs.zdnet.com]

Friday, December 15, 2006

Global Warming Affects Space Station Orbit

By Robert Roy Britt
LiveScience Managing Editor

As the climate warms near Earth's surface, the upper atmosphere is getting less dense, a change that will mean less drag on satellites, scientists announced today.

Carbon dioxide emissions from the burning of fossil fuels will cause a 3 percent reduction in air density in the outermost layer of the atmosphere by 2017, the researchers predict.

Among the affected satellites: The International Space Station and the Hubble Space Telescope.

"We're seeing climate change manifest itself in the upper as well as lower atmosphere," said Stan Solomon of the National Center for Atmospheric Research (NCAR). "This shows the far-ranging impacts of greenhouse gas emissions."

The finding was presented today at a meeting of the American Geophysical Union in San Francisco.

The thermosphere extends from about 60 miles above Earth to 400 miles. The air is incredibly thin, but still causes drag on satellites in low Earth orbit. NASA routinely boosts the orbit of the space station because its orbit constantly decays. Other satellites have limited life spans in part because the thin air up there eventually drags them down.

A thinning thermosphere means satellites can stay aloft longer.

Carbon dioxide molecules absorb radiation. Near Earth's surface, the molecules collide frequently with other molecules and the energy is released as heat, warming the air, the scientists explained. In the much thinner thermosphere, a carbon dioxide molecule has ample time to radiate energy to space because collisions are infrequent. The result is a cooling effect. And as it cools, the thermosphere settles, so that the density at a given height is reduced.
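
Because aerodynamic drag scales linearly with air density, the predicted 3 percent density drop maps directly onto a roughly 3 percent drop in drag force. The sketch below works this through the standard drag equation; the ISS-like parameters (density, speed, drag coefficient, area) are rough, illustrative assumptions rather than values from the study.

```python
# Drag on a satellite: F = 0.5 * rho * v^2 * Cd * A  (standard drag equation).
# The ISS-like parameters below are rough, illustrative assumptions.

def drag_force(rho: float, v: float, cd: float, area: float) -> float:
    """Aerodynamic drag force in newtons."""
    return 0.5 * rho * v**2 * cd * area

rho_now = 3e-12      # kg/m^3, assumed thermospheric density at station altitude
v_orbit = 7660.0     # m/s, approximate orbital speed in low Earth orbit
cd = 2.2             # assumed drag coefficient
area = 1000.0        # m^2, assumed cross-sectional area

f_now = drag_force(rho_now, v_orbit, cd, area)
f_later = drag_force(rho_now * 0.97, v_orbit, cd, area)   # 3% lower density

print(f"Drag at current density:  {f_now:.3f} N")
print(f"Drag at 3% lower density: {f_later:.3f} N "
      f"({(1 - f_later / f_now):.0%} reduction)")
```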

The effect varies with changes in the 11-year cycle of the Sun's activity, too. The ability to predict these changes will help satellite operators plan better, the researchers said.

"Satellite operators noticed the solar cycle changes in density at the very beginning of the Space Age," Solomon said. "We are now able to reproduce the changes using the NCAR models and extend them into the next solar cycle."

The findings are also detailed in the journal Geophysical Research Letters and explained in a video presentation.

[original post: www.livescience.com]
