Tuesday, August 3, 2010



Can light-colored rooftops and roads really curb carbon emissions and combat global climate change? The idea has been around for years, but now a new study by researchers at Lawrence Berkeley National Laboratory, the first to use a global model to examine the question, has found that implementing cool roofs and cool pavements in cities around the world can not only help those cities stay cooler but also cool the planet, with the potential to cancel the heating effect of up to two years of worldwide carbon dioxide emissions.

Because white roofs reflect far more of the sun’s heat than black ones, buildings with white roofs stay cooler. If the building is air conditioned, less air conditioning is required, saving energy. Even without air conditioning, the heat absorbed by a black roof both warms the space below, making it less comfortable, and is carried into the city air by wind, raising the ambient temperature in what is known as the urban heat island effect. There is also a third, less familiar way in which a black roof heats the world: it radiates energy directly into the atmosphere, where some of it is absorbed by clouds and trapped by the greenhouse effect, contributing to global warming.

Today, U.S. Energy Secretary Steven Chu announced a series of initiatives at the Department of Energy to more broadly implement cool roof technologies on DOE facilities and buildings across the federal government. As part of the effort to make the federal government more energy efficient, Chu has directed all DOE offices to install cool roofs, whenever cost effective over the lifetime of the roof, when constructing new roofs or replacing old ones at DOE facilities. Additionally, the Secretary has also issued a letter to the heads of other federal agencies, encouraging them to take similar steps at their facilities.

“Cool roofs are one of the quickest and lowest cost ways we can reduce our global carbon emissions and begin the hard work of slowing climate change,” said Chu. “By demonstrating the benefits of cool roofs on our facilities, the federal government can lead the nation toward more sustainable building practices, while reducing the federal carbon footprint and saving money for taxpayers.”

In the latest study, the Berkeley Lab researchers and their collaborators used a detailed global land surface model from NASA Goddard Space Flight Center, which contained regional information on surface variables such as topography, evaporation, radiation and temperature, as well as on cloud cover. For the northern hemisphere summer, they found that increasing the reflectivity of roof and pavement materials in cities with populations greater than 1 million would achieve a one-time offset of 57 gigatons (1 gigaton equals 1 billion metric tons) of CO2 emissions: 31 Gt from roofs and 26 Gt from pavements. That is roughly double the worldwide CO2 emissions in 2006 of 28 gigatons. Their results were published online in the journal Environmental Research Letters.
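As a rough sanity check of the figures quoted above (not part of the study itself), the arithmetic behind the "two years of emissions" comparison can be tallied in a few lines:

```python
# Back-of-envelope check of the offsets quoted in the article;
# all numbers come from the text, the script is just illustrative arithmetic.
roof_offset_gt = 31       # one-time offset from cool roofs, in gigatons
pavement_offset_gt = 26   # one-time offset from cool pavements
total_offset_gt = roof_offset_gt + pavement_offset_gt

world_emissions_2006_gt = 28  # worldwide CO2 emissions in 2006

print(total_offset_gt)                            # 57
print(total_offset_gt / world_emissions_2006_gt)  # ~2.0, i.e. about two years of emissions
```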

“These offsets help delay warming that would otherwise take place if actual CO2 emissions are not reduced,” says Surabi Menon, staff scientist at Berkeley Lab and lead author of the paper.

Co-author Hashem Akbari emphasizes that cool roofs and pavements are only a part of the solution: “Two years worth of emissions is huge, but compared to what we need to do, it’s just a dent in the problem,” says Akbari, the former head of the Berkeley Lab Heat Island Group and now Hydro-Quebec Industrial Research Professor at Concordia University in Montreal. “We’ve been dumping CO2 into the atmosphere for the last 200 years as if there’s no future.”

This study is a follow-up to a 2008 paper published in the journal Climatic Change, which calculated the CO2 offset from cool surfaces using a simplified model that assumed a global average for cloud cover. The earlier paper, co-authored by Akbari, Menon and Art Rosenfeld, a Berkeley Lab physicist who was then a member of the California Energy Commission, found that implementing cool roofs and pavements worldwide could offset 44 gigatons of CO2 (24 Gt from roofs and 20 Gt from pavements).

“If all eligible urban flat roofs in the tropics and temperate regions were gradually converted to white (and sloped roofs to cool colors), they would offset the heating effect of the emission of roughly 24 Gt of CO2, but one time only,” says Rosenfeld, who returned to Berkeley Lab this year. “However, if we assume that roofs have a service life of 20 years, we can think of an equivalent annual rate of 1.2 Gt per year. That offsets the emissions of roughly 300 million cars (about the number of cars in the world) for 20 years!”
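Rosenfeld's equivalent annual rate, and the per-car emission figure it implies, follow from simple arithmetic (the tons-per-car number is derived here for illustration, not stated in the article):

```python
roof_offset_gt = 24     # one-time offset from eligible roofs, in gigatons
roof_life_years = 20    # assumed roof service life

annual_rate_gt = roof_offset_gt / roof_life_years
print(annual_rate_gt)   # 1.2 Gt of CO2 per year

# Implied per-car emission rate if 1.2 Gt/yr offsets ~300 million cars
cars = 300e6
tons_per_car_per_year = annual_rate_gt * 1e9 / cars
print(tons_per_car_per_year)  # 4.0 metric tons of CO2 per car per year (implied)
```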

In both studies, the researchers used a conservative assumption of increasing the average albedo (solar reflectance) of all roofs by 0.25 and of pavements by 0.15. That means a black roof (which has an albedo of 0) would not have to be replaced by a pure white roof (which has an albedo of 1), but just a roof of a cooler color, a scenario that is more plausible to implement.

Roofs and pavements cover 50 to 65 percent of urban areas. Because they absorb so much heat, dark-colored roofs and roadways create what is called the urban heat island effect, where a city is significantly warmer than its surrounding rural areas. This additional heat also eventually contributes to global warming. More than half of the world’s population now lives in cities; by 2040 the proportion of urbanites is expected to reach 70 percent, adding urgency to the urban heat island problem.

The Berkeley Lab study found that global land surface temperature decreased by a modest amount, an average of roughly 0.01 degrees Celsius, based on an albedo increase of 0.003 averaged over all global land surfaces. This relatively small temperature reduction is an indication that implementing cool surfaces can be only part of the solution to the global climate change problem, the researchers say. To put the number in context, consider that global temperatures are estimated to increase about 3 degrees Celsius in the next 40 to 60 years if CO2 emissions continue rising as they have. Offsetting that warming would require a 0.05 degree Celsius annual decrease in temperature between now and 2070.
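The 0.05-degree figure follows directly from the projected warming and the time window quoted above; a sketch of the arithmetic, assuming the 60-year end of the range:

```python
projected_warming_c = 3.0   # estimated warming if emissions keep rising
years_to_2070 = 60          # upper end of the 40-to-60-year window

annual_decrease_needed = projected_warming_c / years_to_2070
print(annual_decrease_needed)  # 0.05 degrees Celsius per year
```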

Thus, even modest changes should not be dismissed. “Simply put, a cool roof will save money for homeowners and businesses through reduced air conditioning costs. The real question is not whether we should move toward cool roof technology: it’s why we haven’t done it sooner,” says Rosenfeld.

Another recent study on cool roofs, led by Keith Oleson at the National Center for Atmospheric Research (NCAR) and published in Geophysical Research Letters, found that if every roof were painted entirely white, the CO2 emission offsets would be approximately 32 Gt for summer and about 30 Gt annually. While the NCAR study used a different model, the calculated CO2 emission offsets are similar to the results from the Berkeley Lab study and provide a useful and independent verification of the expected CO2 emission offsets from increasing the reflectivity of roofs.

Some observers have pointed out that cool roofs may not make sense in cooler climates because of “winter penalties,” since cooler buildings require more energy to heat. However, the energy savings from reduced cooling usually outweigh any increase in heating costs. Furthermore, in winter there tends to be more cloud cover; also, the sun is lower and the days are shorter, so a flat roof’s exposure to the sun is significantly reduced.

“Cool roofs have worked for thousands of years in Mediterranean and Middle Eastern cities, where demand for air conditioning is low,” says Akbari. “If you have a cool roof on your house, that will reduce your energy use from air conditioning, and it’s a gift that keeps on giving for many, many years, for the life of the roof.”

(Photo: ASU National Center of Excellence for SMART Innovations)

Lawrence Berkeley National Laboratory



That dry, dusty moon overhead? Seems it isn't quite as dry as it's long been thought to be. Although you won't find oceans, lakes, or even a shallow puddle on its surface, a team of geologists at the California Institute of Technology (Caltech), working with colleagues at the University of Tennessee, has found structurally bound hydroxyl groups (i.e., water) in a mineral in a lunar rock returned to Earth by the Apollo program.

Their findings are detailed in the journal Nature.

"The moon, which has generally been thought to be devoid of hydrous materials, has water," says John Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry at Caltech, and a coauthor on the paper.

"The fact that we were able to quantitatively measure significant amounts of water in a lunar mineral is truly surprising," adds lead author Jeremy Boyce, a visitor in geochemistry at Caltech, and a research scientist at the University of California, Los Angeles.

The team found the water in a calcium phosphate mineral, apatite, within a basalt collected from the moon's surface by the Apollo 14 astronauts.

To be precise, they didn't find "water," the molecule H2O. Rather, they found hydrogen in the form of a hydroxyl anion, OH-, bound in the apatite mineral lattice.

"Hydroxide is a close chemical relative of water," explains coauthor George Rossman, Caltech's Eleanor and John R. McMillan Professor of Mineralogy. "If you heat up the apatite, the hydroxyl ions will 'decompose' and come out as water."

The lunar basalt sample in which the hydrogen was found had been collected by the Apollo 14 moon mission in 1971; the idea to focus the search for water on this particular sample was promoted by Larry Taylor, a professor at the University of Tennessee in Knoxville, who sent the samples to the Caltech scientists last year.

"The moon has been considered to be bone dry ever since the return of the first Apollo rocks," Taylor notes. However, there are lunar volcanic deposits interpreted as having been erupted by expanding vapor. Although carbon dioxide and sulfur gases have generally been thought to dominate that expanding vapor, recent evidence from the study of these deposits has suggested that water could also play a role in powering lunar volcanic eruptions. The discovery of hydroxyl in apatite from lunar volcanic rocks is consistent with this suggestion.

The idea of looking for water in lunar apatite isn't new, Boyce notes. "Charles B. Sclar and Jon F. Bauer, geoscientists at Lehigh University, first noted that something was missing from the results of chemical analyses of apatite in 1975," he says. "Now, 35 years later, we have quantitative measurements, and it turns out they were right. The missing piece was OH."

The Caltech team analyzed the lunar apatite for hydrogen, sulfur, and chlorine using an ion microprobe, which is capable of analyzing mineral grains with sizes much smaller than the width of a human hair. This instrument fires a focused beam of high-energy ions at the sample surface, sputtering away target atoms that are collected and then analyzed in a mass spectrometer. Ion microprobe measurements demonstrated that in terms of its hydrogen, sulfur, and chlorine contents, the lunar apatite in this sample is indistinguishable from apatites from terrestrial volcanic rocks.

"We realized that the moon and the earth were able to make the same kind of apatite, relatively rich in hydrogen, sulfur and chlorine," Boyce says.

Does that mean the moon is as awash in water as our planet? Almost certainly not, say the scientists. In fact, the amount of water the moon must contain to be capable of generating hydroxyl-rich apatite remains an open question. After all, it's hard to scale up the amount of water found in the apatite (1,600 parts per million, or 0.16 percent by weight) to determine just how much water there is on the lunar landscape. The apatite that was studied is not abundant, and it forms by processes that tend to concentrate hydrogen to much higher levels than are present in its host rocks or the moon as a whole.
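The parts-per-million and weight-percent figures above are the same measurement expressed in two units; the conversion is simply:

```python
water_ppm = 1600                    # water content of the apatite, by weight
weight_percent = water_ppm / 10000  # 1 percent = 10,000 ppm
print(weight_percent)               # 0.16
```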

"There's more water on the moon than people suspected," says Eiler, "but there's still likely orders of magnitude less than there is on the earth."

Nonetheless, the finding is significant for what it implies about our moon's composition and its history. "These findings tell us that the geological processes on the moon are capable of creating at least one hydrous mineral," Eiler says. "Recent spectroscopic observations of the moon showed that hydrogen is present on its surface, maybe even as water ice. But that could be a thin veneer, possibly hydrogen brought to the moon's surface by comets or solar wind. Our findings show that hydrogen is also part of the rock record of the moon, and has been since early in its history."

Beyond that, Eiler continues, "it's all a great big question mark. We don't know whether these were igneous processes" (in which rocks are formed by solidification of molten lava) "or metamorphic" (in which minerals recrystallize or change in chemistry without melting). "They're both on the table as possible players."

(Photo: Larry Taylor/University of Tennessee)

California Institute of Technology (Caltech)



Everybody enjoys a laugh, but new research from an international team shows it's not as simple as you might think.

Most people consider laughter a sign of happiness, but scientists from Newcastle and Germany have now shown it can convey a range of emotions, each processed by a different part of the brain.

And the information could be used to revolutionise the way patients with neuro-degenerative diseases are able to communicate. This could have an increasing benefit as the effects of an ageing population continue to be felt.

The latest part of the project, which has been running since 2003, is published in the journal Neuroimage. In it, scientists from Newcastle University and the University of Tuebingen, in Germany, show that a group of volunteers were able to recognise three different positive and negative forms of laughter (joy, taunting and tickling) simply by listening to it, and that different networks and pathways in the brain decode different types of laughter.

The aim of the unique project is to investigate how emotions are expressed and perceived in non-verbal communication, such as laughter.

This could potentially be of great benefit in the future to people who have difficulty in recognising and expressing their feelings and emotions, for example those with Parkinson’s disease and Alzheimer’s.

Laughter is an essential part of human communication, but before recent studies, little was known about how the brain processes different forms of laughter or how this could help with communication.

Scientist Kai Alter, senior lecturer in the Institute of Neuroscience at Newcastle University, one of the researchers involved with the project, said: “We have investigated three different types of laughter, two positive, one negative. They are joy, tickling and taunting. We are the first group to investigate different types of laughter, and the basic brain mechanisms during recognition.

“For our experiment we recorded actors performing each of these laughs in different ways and then got volunteers to listen to the tapes. They were able to identify the type of laughter just by listening to the tapes, which showed us that human emotions are passed on in laughter in a clear way.

“Because of this we were then able to try to find the regions of the brain which process those emotions by using an MRI scanner on the volunteers while they were listening to laughter. Different regions of the brain were in use when different types of laughter were being processed.

“Further study is needed but we can now investigate the networks in the brain which become damaged when people suffer from these neuro-degenerative illnesses, by studying how people with these conditions react to different types of laughter and other non-verbal communication, such as exclamations.

“Therapists and relatives of patients, as well as doctors could be trained to spot what they are trying to communicate, and new ways of communicating at a lower cognitive level could be developed.”

(Photo: Newcastle U.)

Newcastle University



A comet may have hit the planet Neptune about two centuries ago. This is indicated by the distribution of carbon monoxide in the atmosphere of the gas giant, which researchers from the French laboratory LESIA in Paris, the Max Planck Institute for Solar System Research (MPS) in Katlenburg-Lindau, Germany, and the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, have now studied. The scientists analyzed data taken by the Herschel space observatory, which has been orbiting the Sun at a distance of approximately 1.5 million kilometers from Earth since May 2009.

When the comet Shoemaker-Levy 9 hit Jupiter sixteen years ago, scientists all over the world were prepared: instruments on board the space probes Voyager 2, Galileo and Ulysses documented every detail of this rare event. Today, these data help scientists detect cometary impacts that happened many years ago. The "dusty snowballs" leave traces in the atmospheres of the gas giants: water, carbon dioxide, carbon monoxide, hydrocyanic acid and carbon sulfide. These molecules can be detected in the radiation the planet emits into space.

In February 2010 scientists from Max Planck Institute for Solar System Research discovered strong evidence for a cometary impact on Saturn about 230 years ago (see Astronomy and Astrophysics, Volume 510, February 2010). Now new measurements performed by the instrument PACS (Photodetector Array Camera and Spectrometer) on board the Herschel space observatory indicate that Neptune experienced a similar event. For the first time, PACS allows researchers to analyze the long-wave infrared radiation of Neptune.

The atmosphere of the outermost planet of our solar system consists mainly of hydrogen and helium, with traces of water, carbon dioxide and carbon monoxide. The scientists have now detected an unusual distribution of carbon monoxide: in the upper layer of the atmosphere, the so-called stratosphere, they found a higher concentration than in the layer beneath, the troposphere. "The higher concentration of carbon monoxide in the stratosphere can only be explained by an external origin," says MPS scientist Paul Hartogh, principal investigator of the Herschel science program "Water and related chemistry in the solar system." "Normally, the concentration of carbon monoxide in the troposphere and stratosphere should be the same or decrease with increasing height," he adds.

The only explanation for these results is a cometary impact. In such a collision the comet breaks apart, and the carbon monoxide trapped in its ice is released and, over the years, distributed throughout the stratosphere. "From the distribution of carbon monoxide we can therefore derive the approximate time at which the impact took place," explains Thibault Cavalié from MPS. The measurements thus support the earlier assumption that a comet hit Neptune about two hundred years ago. A competing theory, according to which a constant flux of tiny dust particles from space introduces carbon monoxide into Neptune's atmosphere, does not agree with the measurements.

In Neptune’s stratosphere the scientists also found a higher concentration of methane than expected. On Neptune, methane plays the same role as water vapor on Earth: the temperature of the so-called tropopause, a barrier of colder air separating the troposphere and stratosphere, determines how much of the gas can rise into the stratosphere. If this barrier is a little bit warmer, more gas can pass through. But while on Earth the temperature of the tropopause never falls below minus 80 degrees Celsius, on Neptune the tropopause's mean temperature is minus 219 degrees.

A gap in this barrier therefore seems to be responsible for the elevated concentration of methane on Neptune. At minus 213 degrees Celsius, the tropopause above Neptune’s south pole is six degrees warmer than elsewhere, allowing gas to pass more easily from the troposphere into the stratosphere. The methane, which scientists believe originates from the planet itself, can therefore spread throughout the stratosphere.
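The six-degree polar warm spot is just the difference between the two tropopause temperatures quoted above:

```python
mean_tropopause_c = -219   # mean temperature of Neptune's tropopause
south_pole_c = -213        # tropopause temperature above the south pole

print(south_pole_c - mean_tropopause_c)  # 6 (degrees warmer at the pole)
```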

The instrument PACS was developed at the Max Planck Institute for Extraterrestrial Physics. It analyzes the long-wave infrared radiation, also known as heat radiation, that cold bodies in space such as Neptune emit. In addition, the Herschel research satellite carries the largest telescope ever to have been operated in space.

(Photo: NASA)

Max Planck Institute



Understanding the processes that cause volcanic eruptions can help scientists predict how often and how violently a volcano will erupt. Although scientists have a general idea of how these processes work — the melting of magma below the volcano causes liquid magma and gases to force their way to Earth’s surface — eruptions happen so rarely, and often with little warning, that it can be difficult to study them in detail.

One volcano that volcanologists believe they understand fairly well is Italy’s Stromboli, which has been erupting every five to 20 minutes for thousands of years, spewing fountains of ash and magma several meters into the sky. For several decades, scientists have pretty much used one theory to explain what is causing huge amounts of gas to erupt so frequently: swimming-pool-sized bubbles that travel through a few hundred meters of molten magma before popping at the surface.

But they may be wrong, according to new research by Jenny Suckale, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who has developed a sophisticated computer model to simulate Stromboli’s magma flow. In a two-paper series published July 20 in The Journal of Geophysical Research, Suckale suggests that giant gas bubbles can’t be driving the Stromboli eruptions because such bubbles aren’t compatible with the basic laws of fluid dynamics, or the science of how fluids move. Instead of large bubbles that pop at the top of Stromboli’s conduits — pipelike openings that connect the volcano’s magma chamber to the Earth’s surface — Suckale thinks that the eruptions are caused by a spongelike plug located within the conduit, similar to a cork in a champagne bottle, that fractures every few minutes as a result of pressure created by significantly smaller bubbles.

Although all volcanoes are different — some are driven by gas while others are driven by rising magma or interactions with water — Suckale says that figuring out Stromboli would be “an important step forward for volcanology” because scientists don’t really know the details of how most volcanoes function. Rethinking how Stromboli works could also shed light on the processes of volcanoes that appear to be driven by similar mechanisms as Stromboli, such as Mount Erebus in Antarctica, which has been continuously active since the 1970s.

Despite having a wealth of data about Stromboli, volcanologists have really only applied one model to explain Stromboli’s continuous eruptions, Suckale says. According to the so-called “big bubble paradigm,” as magma rises to Stromboli’s surface, pressure drops, and this creates gas bubbles that merge together and can become several meters wide. Eventually, these bubbles explode at the top of the conduit.

But the problem with this theory, according to Suckale, is that it conflicts with the basic principles of fluid dynamics. Specifically, magma doesn’t have enough surface tension (the force created where two fluids meet) or viscosity (a measure of a fluid’s resistance to flow) to maintain bubbles larger than a few dozen centimeters. She thinks that many researchers have assumed that bubbles inside Stromboli behave similarly to bubbles in a tank of water. “People take lab models as an analog for the volcano, but the scale is so different, and fluid dynamics is so dependent on scale,” she explains.

To test the theory, Suckale and co-authors Brad Hager and Lindy Elkins-Tanton, both EAPS professors, along with Jean-Christophe Nave, a lecturer in MIT’s Department of Mathematics, developed a computer model of the mixture of gas and magma inside the volcano, including bubbles that can rupture or merge. By changing certain parameters, such as scale, she discovered that it would be physically impossible for massive gas bubbles in Stromboli to survive for longer than a second, because of the lack of stabilizing forces such as surface tension and viscosity.

Suckale still believes there are gas bubbles inside Stromboli that are created by some unknown source located underneath the volcano. But she thinks these bubbles are significantly smaller — perhaps only several centimeters thick — and accumulate beneath a porous plug that covers part of the volcano. As the bubbles exert greater pressure on the plug, it eventually fractures, causing gas, rocks and liquid to scatter into the sky. This could explain why samples of Stromboli rock contain many tiny crystals — because the top of Stromboli is a spongelike plug of crystals and gas bubbles that releases lots of gas every few minutes.

Kathy Cashman, a geologist at the University of Oregon, says Suckale’s modeling work “greatly advances” volcanologists’ understanding of the bubbles inside Stromboli and may also shed light on noneruptive processes in volcanoes that could also be transferring gas to the atmosphere. “Jenny’s work sits at the boundary of these two types of gas transfer, and her modeling may help to address very fundamental issues related to volatile budgets of both the magma and the atmosphere,” Cashman says. But she cautions that Suckale’s work represents a “first step” toward modeling a very complex system, and that future modeling efforts should address the effect that crystals may have on bubble behavior.

Suckale agrees, but for now, she is working to develop a new model to explain how she thinks the theorized Stromboli plug works, why it could cause such constant eruptions and what this might say about other volcanoes that erupt frequently.

(Photo: U.S. Geological Survey)




Steering clear of crocodiles and navigating around massive submerged trees, a team of divers began mapping some of the 25 freshwater pools of Cara Blanca, Belize, which were important to the ancient Maya. In three weeks this May, the divers found fossilized animal remains, bits of pottery and – in the largest pool explored – an enormous underwater cave.

This project, led by University of Illinois anthropology professor Lisa Lucero and funded by the National Geographic Society and an Arnold O. Beckman Award, was the first of what Lucero hopes will be a series of dives into the pools of the southern Maya lowlands in central Belize. The divers will return this summer to assess whether archaeological excavation is even possible at the bottom of the pools, some of which are more than 60 meters deep.

“We don’t know if it’s going to be feasible to conduct archaeology 200 feet below the surface,” Lucero said. “But they are going to try.”

The Maya believed that openings in the earth, including caves and water-filled sinkholes, called cenotes (sen-OH-tays), were portals to the underworld, and often left offerings there. Ceremonial artifacts of the Maya have been found in pools and lakes in Mexico, but not yet in Belize.

Maya structures have been found near two of the eight pools the team surveyed.

“The pools with the most substantial and most obvious settlement at the edge also turn out to be the deepest that we know,” Lucero said. The divers so far have explored eight of the 25 known pools of Cara Blanca.

The use of these pools at the end of the Late Classic period (roughly A.D. 800-900) corresponds to an enduring drought that deforested parts of Central America and – some believe – ultimately drove the Maya from the area.

The need for fresh water could have drawn the Maya to the pools, Lucero said. No vessels other than water jars were found in the structures built near the pools.

“They could have been making offerings to the rain god and other supernatural forces to bring an end to the drought,” she said.

Patricia Beddows, one of the divers and a hydrologist and geochemist at Northwestern University, found that the chemistry of the water in each of the pools was distinct. She also found that the water in Pool 1, the pool with the huge cave and a Maya structure at its edge, held the freshest water of the pools surveyed. But the water contained a lot of soluble minerals, Lucero said, making it problematic for anyone who used it as their primary water supply. Those who drank the water over an extended period would have been at risk of developing kidney stones, she said.

The divers extracted core samples of the sediment at the bottoms of two of the pools. An analysis of the soil, debris and pollen in the cores will offer insight into the natural history of the cenotes and the surrounding region.

Lucero recruited expert cave exploration divers for the expedition. She provided food, lodging and other basics, but the divers donated their time and expertise. The dive team included Robbie Schmittner, Kim Davidsson (an independent cave dive instructor), Bil Phillips, and videographer Marty O’Farrell, who produced the video.

The research team also included archaeologist Andrew Kinkella, of Moorpark College. In Pool 1, Kinkella and diver Edward Mallon recovered ceramic jar shards in the wall of the pool just below the Maya structure.

Three more divers will join the project this summer: Steve Bogaerts, James “Chip” Petersen and still photographer Tony Rath.

Lucero has studied Maya settlements and sacred sites in Belize for more than 20 years, and works under the auspices of the Institute of Archaeology, which is part of the National Institute of Culture and History, Government of Belize.

(Photo: VOPA)

University of Illinois



As a teen in his native Taiwan, Bo-wen Shen observed helplessly as typhoon after typhoon pummeled the small island country. Without advanced forecasting systems, the storms left a trail of human loss and property destruction in their wake. Determined to find ways to stem the devastation, Shen chose a career studying tropical weather and atmospheric science.

Now a NASA-funded research scientist at the University of Maryland-College Park, Shen has employed NASA's Pleiades supercomputer and atmospheric data to simulate tropical cyclone Nargis, which devastated Myanmar in 2008. The result is the first model to replicate the formation of the tropical cyclone five days in advance.

To save lives from the high winds, flooding and storm surges of tropical cyclones (also known as hurricanes and typhoons), forecasters need to give as much advance warning as possible, with the greatest possible accuracy about when and where a storm will occur. In his retrospective simulation, Shen was able to anticipate the storm five days in advance of its birth, a critical forewarning in a region where the meteorology and monitoring of cyclones are hampered by a lack of data.

At the heart of Shen's work is an advanced computer model that could improve our understanding of the predictability of tropical cyclones. The research team uses the model to run millions of numbers -- atmospheric conditions like wind speed, temperature, and moisture -- through a series of equations. This results in digital data of the cyclone's location and atmospheric conditions that are plotted on geographical maps.

Scientists study the maps and data from the model and compare them against real observations of a past storm (like Nargis) to evaluate the model's accuracy. The more the model reflects the actual storm results, the greater confidence researchers have that a particular model can be used to paint a picture of what the future might look like.

"To do hurricane forecasting, what's really needed is a model that can represent the initial weather conditions – air movements and temperatures, and precipitation – and simulate how they evolve and interact globally and locally to set a cyclone in motion," said Shen, whose study appeared online last week in the Journal of Geophysical Research – Atmospheres.

"We know what's happening across very large areas. So, we need really good, high-resolution simulations with the ability to detail conditions across the smallest possible areas. We've marked several forecasting milestones since 2004, and we can now compute a storm's fine-scale details with 10 times the level of detail we could achieve with traditional climate models."

The cyclone's birth prediction is possible because the supercomputer at NASA's Ames Research Center in Moffett Field, Calif., can process atmospheric data for global and regional conditions, as well as fine-scale measurements like those around the eye of a storm. NASA built the Pleiades supercomputer in 2008 and has incrementally boosted its processing "brain power" since then to the capacity of 81,920 desktop CPUs. The upgrades laid the groundwork for Shen and others to gradually improve simulations of different aspects of a storm – first its path, then its intensity, and now its actual genesis.

The improved simulations can translate into greater accuracy and less guesswork in assessing when a storm is forming.

"There is a tendency to over-warn beyond the actual impact area of a storm, leading people to lose confidence in the warning system and to ignore warnings that can save their lives," said study co-author Robert Atlas, director of the National Oceanic and Atmospheric Administration's (NOAA) Atlantic Oceanographic and Meteorological Laboratory in Miami, Fla., and former chief meteorologist at NASA's Goddard Space Flight Center in Greenbelt, Md.

"Although we've seen tremendous forecasting advances in the past 10 years – with potential to improve predictions of a cyclone's path and intensity -- they're still not good enough for all of the life-and-death decisions that forecasters have to make. Tropical cyclones have killed nearly two million people in the last 200 years, so this remaining 'cone of uncertainty' in our predictions is unacceptable."

As promising as the new model may be, Atlas cautions that "Shen's model worked for one cyclone, but it doesn't mean it'll work in real-time for future storms. The research model Shen and predecessors at NASA have developed sets the stage for NOAA's researchers to hone and test the new capability with their own models."

Shen's use of real data from Nargis – one of the 10 deadliest cyclones on record – with the new global model also yields insights into the dynamics of weather conditions over time and across different areas that generate such storms.

"In the last few years, high-resolution global modeling has evolved our understanding of the physics behind storms and their interaction with atmospheric conditions more rapidly than in the past several decades combined," explained Shen, who presented the study last month before peers at the American Geophysical Union's Western Pacific Geophysics Meeting in Taipei, Taiwan. "We can 'see' a storm's physical processes with this advanced global model – like both the release of heat associated with rainfall and changes in environmental atmospheric flow, which was very difficult until now."

(Photo: NASA)




Rui Costa, Principal Investigator of the Champalimaud Neuroscience Programme at the Instituto Gulbenkian de Ciência (Portugal), and Xin Jin, of the National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health (USA), describe, in the latest issue of the journal Nature, that the activity of certain neurons in the brain can signal the initiation and termination of newly learned behavioural sequences.

Furthermore, they found that this brain activity is essential for learning and executing novel action sequences, an ability often compromised in patients suffering from disorders such as Parkinson's or Huntington's disease.

Animal behaviour, including our own, is very complex and is often seen as a sequence of particular actions or movements, each with a precise start and stop. This is evident in a wide range of abilities, from escaping a predator to playing the piano: in each there is a first step and a final one that signals the end. In this latest work, the researchers explored the role of certain brain circuits located in the basal ganglia in this process. They looked at the striatum, its dopaminergic input (dopamine-producing neurons that project into the striatum) and its output to the substantia nigra, another area in the basal ganglia, and found that both play an essential role in the initiation and termination of newly learnt behavioural sequences.

Rui Costa and Xin Jin show that when mice are learning to perform a particular behavioural sequence, a specific neuronal activity emerges in those brain circuits and signals the initiation and termination steps. Interestingly, these are the circuits that degenerate in patients suffering from Parkinson's and Huntington's diseases, who also display impairments both in sequence learning and in the initiation and termination of voluntary movements. Furthermore, the researchers were able to genetically manipulate those circuits in mice, and showed that this leads to deficits in sequence learning - again, a feature shared with human patients affected by basal ganglia disorders.

Rui Costa explains the implications of these results: "For the execution of learned skills, like playing a piano or driving a car, it is essential to know when to start and stop each particular sequence of movements, and we found the neuronal circuits that are involved in the initiation and termination of action sequences that are learnt. This can be of particular relevance for patients suffering from Huntington's and Parkinson's disease, but also for people suffering from other disorders like compulsivity".

Xin Jin adds: "This start/stop activity appears during learning, and disrupting it genetically severely impairs the learning of new action sequences. These findings may provide a possible insight into the mechanism underlying the sequence learning and execution impairments observed in Parkinson's and Huntington's patients, who have lost basal ganglia neurons that may be important in generating initiation and termination activity in their brain".

(Photo: Rui Costa)

Instituto Gulbenkian de Ciência


Maintaining the correct time is no longer just a matter of keeping your watch wound -- especially when it comes to computers, telecommunications, and other complex systems. The clocks in these devices must stay accurate to within nanoseconds because their oscillators -- objects, like quartz crystals, which repeat the same motion over and over again -- are synchronized to agree with the clocks on board Global Positioning System (GPS) satellites.

In the journal Review of Scientific Instruments, which is published by the American Institute of Physics, researchers report on a new way to accurately synchronize clocks. The new method uses both GPS and the Internet to set clocks within 10 nanoseconds of a reference clock located anywhere on Earth.

The method makes use of a common-view disciplined oscillator (CVDO) -- a device "whose frequency and time are tightly controlled to agree with a reference clock at another location, if both clocks are connected to the Internet and if both clocks are being compared to GPS satellites," says Michael Lombardi, a metrology engineer with the National Institute of Standards and Technology (NIST), and coauthor of the paper along with Aaron Dahlen of the United States Coast Guard.

The significance of the CVDO, says Lombardi, "is simply that you don't have to depend on GPS time." While there is no shortage of GPS disciplined oscillators -- "the telecommunications industry in North America probably owns several hundred thousand of them," Lombardi says -- "a CVDO potentially provides more versatility. It would allow a telecommunications network to synchronize all of its clocks to a different reference than GPS, such as the NIST standard" -- the atomic clock that keeps the official time for the United States. "If GPS time is wrong, the CVDO will still be correct as long as its reference clock is right."
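The arithmetic behind common view is simple enough to sketch. In the toy example below (all numbers invented), two clocks each measure themselves against the same GPS signal; differencing the two measurements cancels whatever error GPS itself carries:

```python
# Common-view sketch: two stations each measure (local clock - GPS time).
# Differencing the measurements cancels the shared GPS error, leaving only
# the offset between the two clocks. Numbers are made up for illustration.
gps_error_ns = 35.0        # unknown error in the GPS broadcast time
clock_a_offset_ns = 120.0  # clock A vs. an ideal reference
clock_b_offset_ns = 110.0  # clock B (the remote reference) vs. ideal

# What each station actually measures against the same satellite:
meas_a = clock_a_offset_ns - gps_error_ns
meas_b = clock_b_offset_ns - gps_error_ns

# Exchanged over the Internet and differenced: the GPS term drops out.
a_minus_b = meas_a - meas_b  # 10.0 ns, independent of gps_error_ns
```

Because the GPS term appears in both measurements with the same sign, it subtracts out exactly, which is why a CVDO can stay correct even when GPS time is wrong, so long as its reference clock is right.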

American Institute of Physics




Selected Science News. Copyright 2008 All Rights Reserved