Tuesday, February 8, 2011



There’s a common misconception that prior to European contact in the 15th century, the Americas were a pristine, untouched wilderness, inhabited by people who lived in complete harmony with their environment. In reality, humans have been affecting and influencing their surroundings as long as we’ve existed.

Legacies on the Landscape, a collaborative project among archeologists and ecologists from Arizona State University, is an ongoing effort to determine the degree to which ancient peoples’ actions continue to affect landscapes today.

The focus of the study is the Agua Fria National Monument, a 71,000-acre area of mesas and grasslands located approximately 40 miles north of Phoenix. Within the monument, numerous prehistoric sites allow researchers to examine the impact inhabitants of the mesa had on their environment. In particular, the ASU researchers are studying Perry Mesa, a high expanse of desert landscape, cut by canyons.

“The monument is a giant laboratory,” says Katherine Spielmann, the archeology lead on the project and professor in the School of Human Evolution and Social Change within the College of Liberal Arts and Sciences. “It allows us to work on the entire landscape of a prehistoric community, which contains hundreds of habitation, agricultural and special-purpose sites.”

Within this giant laboratory are some prehistoric artifacts vital to determining humanity’s environmental impact: rocks.

Specifically, 700-year-old rock terraces installed on the sloping hills of the mesa, which were used to slow down and direct water flow to crops.

“Because these terraces exist, we can see where prehistoric people actually farmed,” Spielmann says. “In most parts of the Southwest, although we know people farmed, they didn’t modify the landscape so we can’t say for sure where. At Agua Fria, we can see exactly where they farmed and look at what transformation farming caused.”

Determining exactly what effects these terraces, and the people who made them, had on the landscape is where the ecological half of the project comes in.

“The people of Perry Mesa manipulated their surroundings in numerous ways,” says ecologist Sharon Hall, a professor in the Schools of Life Sciences and Sustainability. “They planted and cultivated agave, which are still growing near abandoned settlements today. They removed rocks from soils to build rooms and houses, which altered patterns of fire and the distribution of woody plants. The rock-terraced hill slopes still perform their intended function of slowing water runoff and have led to altered soil properties and plant communities.”

“The detailed soil information that the ecologists have brought to the table makes us archeologists think about the productivity of the landscape in the past,” Spielmann says.

One archeology doctoral student, Melissa Kruse-Peeples, is looking at the soil’s productivity in relation to ancient farming.

With the use of buried probes that measure soil moisture and a local rain gauge, Kruse-Peeples is monitoring water and nutrient dynamics within the prehistoric agricultural fields.

“By examining the modern conditions I can estimate what conditions would have been like in the ancient past,” Kruse-Peeples says.

Along with the program’s ecologists, Kruse-Peeples is studying the productivity of the soil by looking at runoff and nutrient value of the water that is deposited on the terraced fields during rainstorms. The study will answer whether or not rainfall replenished the fields sufficiently for sustainable corn production.

“Nutrient-rich runoff is higher in terraced areas, indicating that the stone terraces were significant in terms of nutrient replenishment and therefore prolonged the use life of a field,” Kruse-Peeples says. “But this strategy of terracing may have ultimately been unsustainable.”

By exploring the extent to which the Agua Fria residents affected the environment through agricultural production, researchers may also answer why the settlement was abandoned so soon after its establishment.

“These people came in the late 1200s and left in the early 1400s,” Spielmann says. “They were only there 100, maybe 125 years.”

Tree-ring data document a period of extreme drought that swept through much of the Southwest in the late 1200s, forcing many people to relocate from ancestral homelands to better-watered places, such as Agua Fria. Because some villages were built in what appear to be defensive locations, the people there may have been concerned about external threats. Why would they abandon such an ideal set-up?

“Did they farm out all the nutrients? Did they deplete all their resources?” Hall asks.

Our society today is not as dependent on the immediate environment as the people of Perry Mesa were. Even so, the story that the researchers are uncovering could have modern applications.

“The cautionary tale about using up local resources is out there already,” Spielmann says. “What we’re coming to understand about prehistoric agriculture will help us interpret the archeology of the Southwest. There are a lot of assumptions about what runoff agriculture does, which we are examining through the Legacies project.”

In addition to expanding knowledge of the desert Southwest, the findings of Agua Fria may have uses for semi-arid environments elsewhere in the world, such as Sub-Saharan Africa and parts of Asia.

“There’s potential that some of the things we learn might be helpful to farmers of the same scale, households that are dry-farming,” Spielmann says.

Whether or not the research will benefit modern small-scale farmers remains to be seen, but the project's central finding, that ancient peoples left a lasting imprint on our world, stands regardless.

“What we’re trying to understand is how people modified the environment, maybe in subtle ways, and if these changes have lasting impacts today,” Hall says.

(Photo: Melissa Kruse-Peeples)

Arizona State University



Clinical researchers have discovered what may be a surgical alternative for controlling persistent high blood pressure in patients who do not respond to medication.

Reporting this month in Neurology, the medical journal of the American Academy of Neurology, the team from the University of Bristol and Frenchay Hospital describe the case of a 55-year-old man who had been diagnosed with high blood pressure when he suffered a stroke. His blood pressure had remained high despite attempts to control it using drugs.

Using a surgical implant similar to a cardiac pacemaker, the team were able to control the man’s condition by sending electrical pulses to the brain, in a procedure known as deep brain stimulation.

While the electrical stimulation did not permanently alleviate the pain it was implanted to treat, researchers did find that it decreased his blood pressure enough for him to be able to stop taking the blood pressure drugs.

“This is an exciting finding as high blood pressure affects millions of people and can lead to heart attack and stroke, but for about one in ten people, high blood pressure can’t be controlled with medication or they cannot tolerate the medication,” said study author Dr Nikunj K. Patel from the Department of Neurosurgery at Bristol’s Frenchay Hospital and Senior Clinical Lecturer at the University of Bristol.

Dr Patel noted that the decrease in blood pressure was a response to the deep brain stimulation, and not a result of changes to the patient’s other conditions.

The man’s blood pressure gradually decreased after the deep brain stimulator was implanted in the periaqueductal-periventricular grey region of the brain, which is involved in regulating pain. His blood pressure was controlled for three years of follow-up; at one point he went back on an anti-hypertension drug for a slight increase in blood pressure, but that drug was withdrawn when the blood pressure was again reduced.

At one point the researchers tested turning off the stimulator, which led to an average increase in blood pressure of 18/5 mmHg. When the stimulator was turned back on, blood pressure dropped by an average of 32/12 mmHg. Repeating the tests produced the same results.

“More research is needed to confirm these results in larger numbers of people, but this suggests that stimulation can produce a large, sustained lowering of blood pressure,” added Dr Patel. “With so many people not responding to blood pressure medications, we are in need of alternative strategies such as this one.”

(Photo: Bristol U.)

University of Bristol



Australian, Belgian and New Zealand scientists have expanded our understanding of the way phytoplankton take up scarce iron in the ocean – a process that regulates ocean food chains from the bottom up and helps remove up to 40 per cent of carbon dioxide (CO2) from the atmosphere.

Research published recently in the Proceedings of the National Academy of Sciences explores the relationship between iron, which limits primary productivity in vast regions of the ocean, and its uptake by phytoplankton species.

It identified how natural organic compounds in the Southern Ocean can control iron availability to phytoplankton in iron-deficient waters, particularly in high-nutrient, low-chlorophyll regions.

"The anaemic regions of the oceans are a ‘battlefield’ for iron, which drives photosynthesis and enhances growth of phytoplankton and other microbes in the oceanic food chain for the benefit of all marine species," said the project team’s leader, former CSIRO Postdoctoral Fellow and now University of Technology Sydney researcher, Dr Christel Hassler.

"Ocean chemists are eager to know more about this process, which mediates uptake of CO2 and its subsequent storage in the ocean interior, as species die and sink.

“This research reveals the significance of a newly identified organic mechanism controlling the bio-availability of iron to oceanic life," Dr Hassler said.

According to co-author, CSIRO’s Dr Carol Nichols, given that marine phytoplankton contribute up to 40 per cent of global biological carbon fixation, it is important to understand what features control the availability of iron to these organisms.

“Phytoplankton and other microbes living in the oceans produce long sugar polymers, or polysaccharides, as a survival strategy,” Dr Nichols said. “Polysaccharides help the microbial community stick to each other and to nutrients that may otherwise be difficult to access from the surrounding ocean.

“Working with laboratory cultures of Southern Ocean phytoplankton, the study shows that biologically-produced polysaccharides help keep iron accessible to phytoplankton by increasing its solubility in the upper layers of the oceans where photosynthesis occurs.

"In the ocean, most of the iron is bound to organic materials whose nature is largely unknown. Most marine microorganisms release saccharides or sugars, resulting in high levels being reported. These observations originally motivated our study," Dr Nichols said.

Iron fertilisation of the ocean has long been mooted as an option to engineer change in the amount of CO2 in the ocean.

"If we can understand the processes behind cycling of iron and other micronutrients in the oceans more fully, then we will be in a better position to counsel on proposals for such environmental engineering," Dr Nichols said.

(Photo: Dr Andrew Bowie, ACE CRC, UTAS)

The Commonwealth Scientific and Industrial Research Organisation (CSIRO)



The fresh scent of pine has helped atmospheric scientists find missing sources of organic molecules in the air — which, it could well turn out, aren't missing after all. In work appearing in this week's Proceedings of the National Academy of Sciences Early Edition Online, researchers examined what particles containing compounds such as those given off by pine trees look like and how quickly they evaporate. They found the particles evaporate more than 100 times slower than expected by current air-quality models.

"This work could resolve the discrepancy between field observations and models," said atmospheric chemist Alla Zelenyuk. "The results will affect how we represent organics in climate and air quality models, and could have profound implications for the science and policy governing control of submicron particulate matter levels in the atmosphere."

Zelenyuk and colleagues at the Department of Energy's Pacific Northwest National Laboratory were able to measure evaporation from atmospheric particles in a much more realistic manner than ever before. This allowed them to show that the particles are not liquids, as had been assumed for two decades, and to get an accurate read on how fast they evaporate. What researchers previously thought takes seconds actually takes days.

Secondary organic aerosols are tiny bits of chemically modified organic compounds floating in the air. They absorb, scatter or reflect sunlight, and serve as cloud nuclei, making them an important component of the atmosphere.

For a couple of decades, researchers have interpreted laboratory and field measurements under the assumption that these particles are liquid droplets that evaporate fast, which is central to the way these particles are modeled. However, to this day researchers have failed to explain the high amounts observed in the real atmosphere. The never-ending search for extra sources of organics has been frustrating for scientists studying these aerosols.

To re-examine the assumption, researchers at PNNL used equipment that could study the particles under realistic conditions. Zelenyuk developed a sensitive and high-precision instrument called SPLAT II that can count, size and measure the evaporation characteristics of these particles at room temperature. Research and development for SPLAT II occurred partly in EMSL, DOE's Environmental Molecular Sciences Laboratory at PNNL.

First, the researchers created secondary organic aerosol particles in the lab by oxidizing alpha-pinene, the molecule that makes pine trees smell like pine. Oxidation is the same thing that happens to iron when it rusts, and happens a lot in the atmosphere when aerosols come into contact with gases such as ozone, which is a pollutant when it is low in the atmosphere.

For comparison, the researchers also made particles from other, well-understood organic molecules that are known to form solids or liquid droplets, such as one called DOP. Lastly, they allowed these other organic molecules and the pine-scented SOA particles to mingle to simulate what likely happens in the outdoors.

Monitoring the various particles with SPLAT II for up to 24 hours, the research team found that DOP particles behaved as expected. Organics evaporated from the particles quickly, and faster if the particle was smaller, which is how liquid particles evaporate.

But the pinene-based particles did not. About 50 percent of their volume evaporated away within the first 100 minutes. Then they clammed up, and only another 25 percent of their volume dissipated in the next 23 hours. In addition, this fast-slow evaporation occurred similarly whether the particle was big or small, indicating the particles were not behaving like a liquid.
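The size signature the team used to diagnose liquid behavior can be sketched with a toy model. For a liquid droplet evaporating at a rate proportional to its surface area, the diameter shrinks at a roughly constant rate, so a smaller droplet loses a larger fraction of its volume in the same time. The numbers below are purely illustrative, not the study's measurements:

```python
def fraction_evaporated(d0_nm, dt_min, shrink_rate_nm_per_min):
    """Volume fraction lost by a liquid droplet whose diameter shrinks
    at a constant rate (surface-area-limited evaporation)."""
    d = max(0.0, d0_nm - shrink_rate_nm_per_min * dt_min)
    return 1.0 - (d / d0_nm) ** 3

# Hypothetical numbers: after the same elapsed time, a 100 nm droplet
# has lost a much larger volume fraction than a 300 nm one.
small = fraction_evaporated(d0_nm=100, dt_min=30, shrink_rate_nm_per_min=1.0)
large = fraction_evaporated(d0_nm=300, dt_min=30, shrink_rate_nm_per_min=1.0)
print(f"100 nm droplet: {small:.0%} gone; 300 nm droplet: {large:.0%} gone")
```

The pinene-based particles showed no such size dependence, which is what pointed the researchers away from the liquid picture.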

This lack of evaporation could account for the inability of scientists to find other sources of atmospheric organics. "Our findings indicate that there may, in fact, be no missing SOA," said Zelenyuk.

In the open atmosphere, the SOAs from pinene co-exist with other organic molecules, some of which slam onto the particles and coat them. Experiments with the co-mingled SOAs and organic compounds showed the researchers that coated particles evaporate even more slowly than single-source SOA.

Zelenyuk then tested how close to reality their lab-based SOAs were. Using air samples gathered in Sacramento, Calif., the team found the behavior of atmospheric SOAs (whether from trees and shrubs or pollution) paralleled that of the co-mingled pinene-derived SOAs in the lab and did not behave like liquids.

The results suggest that in the real atmosphere, SOA evaporation is so slow that scientists do not need to include the evaporation in certain models. The researchers hope that incorporating this information into atmospheric models will improve the understanding of aerosols' role in the climate.

(Photo: PNNL)

Pacific Northwest National Laboratory



A major driving force of evolution comes from mistakes made by cells and how organisms cope with the consequences, UA biologists have found. Their discoveries offer lessons for creating innovation in economics and society.

Charles Darwin based his groundbreaking theory of natural selection on the realization that genetic variation among organisms is the key to evolution.

Some individuals are better adapted to a given environment than others, making them more likely to survive and pass on their genes to future generations. But exactly how nature creates variation in the first place still poses somewhat of a puzzle to evolutionary biologists.

Now, Joanna Masel, associate professor in the UA's department of ecology and evolutionary biology, and postdoctoral fellow Etienne Rajon have discovered that the way organisms deal with mistakes that occur while the genetic code in their cells is being interpreted greatly influences their ability to adapt to new environmental conditions – in other words, their ability to evolve.

"Evolution needs a playground in order to try things out," Masel said. "It's like in competitive business: New products and ideas have to be tested to see whether they can live up to the challenge."

The finding is reported in a paper published in the journal Proceedings of the National Academy of Sciences.

In nature, it turns out, many new traits, such as those enabling their bearers to conquer new habitats, start out as blunders: mistakes made by cells that result in altered proteins with changed properties, or with functions that are altogether new, even when there is nothing wrong with the gene itself. Sometime later, one of these mistakes can get into the gene and become more permanent.

"If the mechanisms interpreting genetic information were completely flawless, organisms would stay the same all the time and be unable to adapt to new situations or changes in their environment," said Masel, who is also a member of the UA's BIO5 Institute.

Living beings face two options of handling the dangers posed by errors, Masel and Rajon wrote. One is to avoid making errors in the first place, for example by having a proofreading mechanism to spot and fix errors as they arise. The authors call this a global solution, since it is not specific to any particular mistake, but instead watches over the entire process.

The alternative is to allow errors to happen, but evolve robustness to the effects of each of them. Masel and Rajon call this strategy a local solution, because in the absence of a global proofreading mechanism, it requires an organism to be resilient to each and every mistake that pops up.

"We discovered that extremely small populations will evolve global solutions, while very large populations will evolve local solutions," Masel said. "Most realistically sized populations can go either direction but will gravitate toward one or the other. But once they do, they rarely switch, even over the course of evolutionary time."

Using what is known about yeast, a popular model organism in basic biological research, Masel and Rajon formulated a mathematical model and ran computer simulations of genetic change in populations.

Avoiding or fixing errors comes at a cost, they pointed out. If it didn't, organisms would have evolved nearly error-free accuracy in translating genetic information into proteins. Instead, there is a trade-off between the cost of keeping proteins free of errors and the risk of allowing potentially deleterious mistakes.
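That trade-off can be caricatured with a deliberately simple back-of-the-envelope model. Every number below is hypothetical and the model is far cruder than Masel and Rajon's actual simulations: a global proofreading strategy pays a fixed fitness cost no matter how many error-prone sites there are, while the expected cost of tolerating errors grows with the number of sites.

```python
def fitness_global(proofreading_cost=0.05):
    """Proofreading suppresses errors everywhere but always costs fitness."""
    return 1.0 - proofreading_cost

def fitness_local(n_sites, per_site_expected_cost=0.0005):
    """Each error-prone site independently risks a small expected fitness loss."""
    return (1.0 - per_site_expected_cost) ** n_sites

# With few error-prone sites, tolerating errors beats paying for proofreading;
# with many sites, the accumulated risk makes global proofreading the better deal.
print(fitness_local(5), fitness_global())     # local strategy wins
print(fitness_local(2000), fitness_global())  # global strategy wins
```

The crossover point is exactly the kind of balance the authors describe: accuracy is worth paying for only when the cumulative risk of errors outweighs its cost.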

In previous publications, Masel's group introduced the idea of variation within a population producing "hopeful and hopeless monsters" – organisms with genetic changes whose consequences can be either mostly harmless or deadly, but rarely in between.

In the present paper, Masel and Rajon report that natural variation comes in two flavors: regular variation, which is generally bad most of the time, since the odds of a genetic mutation leading to something useful or even better are pretty slim, and what they call cryptic variation, which is less likely to be deadly, and more likely to be mostly harmless.

So how does cryptic variation work and why is it so important for understanding evolution?

By allowing for a certain amount of mistakes to occur instead of quenching them with global proofreading machinery, organisms gain the advantage of allowing for what Masel calls pre-selection: It provides an opportunity for natural selection to act on sequences even before mutations occur.

"There is evidence that cryptic gene sequences still get translated into protein," Masel explained, "at least occasionally."

"When those proteins are bad enough, the sequences that produce them can be selected against. For example, if we imagine a protein with an altered amino acid sequence causing it to not fold correctly and pile up inside the cell, that would be very toxic to the organism."

"In this case of a misfolded protein, selection would favor mutations causing that genetic sequence to not be translated into protein or it would favor sequences in which there is a change so that even if that protein is made by accident, the altered sequence would be harmless."

"Pre-selection puts that cryptic variation in a state of readiness," Masel said. "One could think of local solutions as natural selection going on behind the scenes, weeding out variations that are going to be catastrophic, and enriching others that are only slightly bad or even harmless."

"Whatever is left after this process of pre-selection has to be better," she pointed out. "Therefore, populations relying on this strategy have a greater capability to evolve in response to new challenges. With too much proofreading, that pre-selection can't happen."

"Most populations are fairly well adapted and from an evolutionary perspective get no benefit from lots of variation. Having variation in a cryptic form gets around that because the organism doesn't pay a large cost for it, but it's still there if it needs it."

According to Masel, studying how nature creates innovation holds clues for human society as well.

"We find that biology has a clever solution. It lets lots of ideas flourish, but only in a cryptic form and even while it's cryptic, it weeds out the worst ideas. This is an extremely powerful and successful strategy. I think companies, governments, economics in general can learn a lot on how to foster innovation from understanding how biological innovation works."

(Photo: Beatriz Verdugo/UANews)



If someone told you there was a way you could save 2.5 million to 3 million lives a year and simultaneously halt global warming, reduce air and water pollution and develop secure, reliable energy sources – nearly all with existing technology and at costs comparable with what we spend on energy today – why wouldn't you do it?

According to a new study coauthored by Stanford researcher Mark Z. Jacobson, we could accomplish all that by converting the world to clean, renewable energy sources and forgoing fossil fuels.

"Based on our findings, there are no technological or economic barriers to converting the entire world to clean, renewable energy sources," said Jacobson, a professor of civil and environmental engineering. "It is a question of whether we have the societal and political will."

He and Mark Delucchi, of the University of California-Davis, have written a two-part paper in Energy Policy in which they assess the costs, technology and material requirements of converting the planet, using a plan they developed.

The world they envision would run largely on electricity. Their plan calls for using wind, water and solar energy to generate power, with wind and solar power contributing 90 percent of the needed energy.

Geothermal and hydroelectric sources would each contribute about 4 percent in their plan (70 percent of the hydroelectric is already in place), with the remaining 2 percent from wave and tidal power.

Vehicles, ships and trains would be powered by electricity and hydrogen fuel cells. Aircraft would run on liquid hydrogen. Homes would be cooled and warmed with electric heaters – no more natural gas or coal – and water would be preheated by the sun.

Commercial processes would be powered by electricity and hydrogen. In all cases, the hydrogen would be produced from electricity. Thus, wind, water and sun would power the world.

The researchers approached the conversion with the goal that by 2030, all new energy generation would come from wind, water and solar, and by 2050, all pre-existing energy production would be converted as well.

"We wanted to quantify what is necessary in order to replace all the current energy infrastructure – for all purposes – with a really clean and sustainable energy infrastructure within 20 to 40 years," said Jacobson.

One of the benefits of the plan is that it results in a 30 percent reduction in world energy demand since it involves converting combustion processes to electrical or hydrogen fuel cell processes. Electricity is much more efficient than combustion.

That reduction in the amount of power needed, along with the millions of lives saved by the reduction in air pollution from elimination of fossil fuels, would help keep the costs of the conversion down.

"When you actually account for all the costs to society – including medical costs – of the current fuel structure, the costs of our plan are relatively similar to what we have today," Jacobson said.

One of the biggest hurdles with wind and solar energy is that both can be highly variable, which has raised doubts about whether either source is reliable enough to provide "base load" energy, the minimum amount of energy that must be available to customers at any given hour of the day.

Jacobson said that the variability can be overcome.

"The most important thing is to combine renewable energy sources into a bundle," he said. "If you combine them as one commodity and use hydroelectric to fill in gaps, it is a lot easier to match demand."

Wind and solar are complementary, Jacobson said, as wind often peaks at night and sunlight peaks during the day. Using hydroelectric power to fill in the gaps, as it does in our current infrastructure, allows demand to be precisely met by supply in most cases. Other renewable sources such as geothermal and tidal power can also be used to supplement the power from wind and solar sources.
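The bundling idea is easy to see in a toy 24-hour profile. The shapes below are invented for illustration, not real generation data: wind that peaks at night and solar that peaks at midday partly cancel each other's swings, so the combined output is steadier than either source alone.

```python
from math import cos, pi, sin
from statistics import pstdev

hours = range(24)
# Hypothetical normalized profiles: wind peaks at midnight, solar at noon.
wind = [0.5 + 0.4 * cos(pi * h / 12) for h in hours]
solar = [max(0.0, sin(pi * (h - 6) / 12)) for h in hours]
combined = [w + s for w, s in zip(wind, solar)]

print(f"wind variability:     {pstdev(wind):.2f}")
print(f"solar variability:    {pstdev(solar):.2f}")
print(f"combined variability: {pstdev(combined):.2f}")  # lower than either alone
```

Hydroelectric then only needs to fill the much smaller residual gap between this flatter combined profile and demand.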

“One of the most promising methods of ensuring that supply matches demand is using long-distance transmission to connect widely dispersed sites,” said Delucchi. Even if conditions are poor for wind or solar energy generation in one area on a given day, a few hundred miles away the winds could be blowing steadily and the sun shining.

"With a system that is 100 percent wind, water and solar, you can't use normal methods for matching supply and demand. You have to have what people call a supergrid, with long-distance transmission and really good management," he said.

Another method of meeting demand could entail building a bigger renewable-energy infrastructure to match peak hourly demand and use the off-hours excess electricity to produce hydrogen for the industrial and transportation sectors.

Using pricing to control peak demands, a tool that is used today, would also help.

Jacobson and Delucchi assessed whether their plan might run into problems with the amounts of material needed to build all the turbines, solar collectors and other devices.

They found that even materials such as platinum and the rare earth metals, the most obvious potential supply bottlenecks, are available in sufficient amounts. And recycling could effectively extend the supply.

"For solar cells there are different materials, but there are so many choices that if one becomes short, you can switch," Jacobson said. "Major materials for wind energy are concrete and steel and there is no shortage of those."

Jacobson and Delucchi calculated the number of wind turbines needed to implement their plan, as well as the number of solar plants, rooftop photovoltaic cells, geothermal, hydroelectric, tidal and wave-energy installations.

They found that to power 100 percent of the world for all purposes from wind, water and solar resources, the footprint needed is about 0.4 percent of the world's land (mostly solar footprint) and the spacing between installations is another 0.6 percent of the world's land (mostly wind-turbine spacing), Jacobson said.

One of the criticisms of wind power is that wind farms require large amounts of land, due to the spacing required between the windmills to prevent interference of turbulence from one turbine on another.

“Most of the land between wind turbines is available for other uses, such as pasture or farming,” Jacobson said. “The actual footprint required by wind turbines to supply half the world's energy is less than the area of Manhattan.” If half the wind farms were located offshore, a single Manhattan would suffice.

Jacobson said that about 1 percent of the wind turbines required are already in place, and a lesser percentage for solar power.

"This really involves a large scale transformation," he said. "It would require an effort comparable to the Apollo moon project or constructing the interstate highway system."

"But it is possible, without even having to go to new technologies," Jacobson said. "We really need to just decide collectively that this is the direction we want to head as a society."

(Photo: Stanford U.)

Stanford University


Solar radiation management projects, also known as sun dimming, seek to reduce the amount of sunlight hitting the Earth to counteract the effects of climate change. Global dimming can occur as a side-effect of burning fossil fuels or as a result of volcanic eruptions, but the consequences of deliberate sun dimming as a geoengineering tool are unknown.

A new study by Dr Peter Braesicke, from the Centre for Atmospheric Science at Cambridge University, seeks to fill this gap by focusing on the possible impacts of a dimming sun on atmospheric teleconnections.

Teleconnections, which are important for the predictability of weather regimes, are climate anomalies that are related to each other across large distances, such as the link between sea-level pressure at Tahiti and Darwin, Australia, which defines the Southern Oscillation.
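The Tahiti–Darwin link is usually condensed into the Southern Oscillation Index, essentially the standardized difference in sea-level pressure between the two stations. A minimal sketch, with invented pressure values standing in for real station records (published SOI definitions add station-specific scale factors this sketch omits):

```python
from statistics import mean, pstdev

def soi(tahiti_slp, darwin_slp):
    """Standardized Tahiti-minus-Darwin sea-level-pressure difference,
    the essence of the Southern Oscillation Index."""
    diff = [t - d for t, d in zip(tahiti_slp, darwin_slp)]
    m, s = mean(diff), pstdev(diff)
    return [(x - m) / s for x in diff]

# Hypothetical monthly sea-level pressures in hPa, not real observations.
tahiti = [1013.2, 1011.8, 1010.5, 1012.9, 1014.1, 1012.0]
darwin = [1009.0, 1010.5, 1011.9, 1008.8, 1007.6, 1010.1]
index = soi(tahiti, darwin)
# Strongly positive months (Tahiti anomalously high relative to Darwin)
# lean toward La Nina conditions; strongly negative months toward El Nino.
```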

“It is important that we look for unintended consequences of any sun dimming schemes,” said Braesicke. “We have to test our models continuously against observations to make sure that they are ‘fit-for-purpose’, and it’s important that we should not only look at highly averaged ‘global’ quantities.”

Dr Braesicke’s team believes that the link between tropical temperatures and extra-tropical circulation is well captured for the recent past and that the link changes when the sun is dimmed.

“This could have consequences for prevailing weather regimes,” said Braesicke, “particularly for the El Nino/Southern Oscillation (ENSO) teleconnection. Our research allows us to assess how forced atmospheric variability, exemplified by the northern polar region, might change in a geoengineered world with a dimmed sun.”

A dimmed sun will change the temperature structure of the atmosphere with a cooling throughout the atmosphere. In the troposphere, temperatures drop because less solar radiation reaches the ground and therefore less can be converted into heat. In the stratosphere, less shortwave radiation is available for absorption by ozone and, therefore, heating rates in the stratosphere are lower.

“We have shown that important teleconnections are likely to change in such a geoengineered future, due to chemistry-climate interactions and in particular, due to changing stratospheric ozone,” concluded Braesicke. “In our model, the forced variability of northern high latitude temperatures changes spatially, from a pole-centred pattern to a pattern over the Pacific region when the solar irradiance is reduced. Future geoengineering studies need to consider the full evolution of the stratosphere, including its chemical behaviour.”

In an accompanying paper, Ben Kravitz of Rutgers University reviews the new project to coordinate and compare experiments in aerosol geoengineering, and evaluates the effects of stratospheric geoengineering with sulfate aerosols.

Since the idea of geoengineering was thrust back into the scientific arena, many have wondered whether it could reduce global warming as a mitigation measure. Kravitz’s team argues that one of the most feasible methods is through stratospheric sulfate aerosols. While geoengineering projects are not yet favored by policy makers, this method is inexpensive compared with other such projects and so may prove more attractive.

However, stratospheric geoengineering with sulfate aerosols may have unintended consequences. Research indicates that stratospheric geoengineering could, by compensating for increased greenhouse gas concentrations, reduce summer monsoon rainfall in Asia and Africa, potentially threatening the food supply for billions of people.

“Some unanswered questions include whether a continuous stratospheric aerosol cloud would have the same effect as a transient one, such as that from a volcano, and to what extent regional changes in precipitation would be compensated by regional changes in evapotranspiration,” said Kravitz.

A consensus has yet to be reached on these and other important issues. To answer these questions, the team proposes a suite of standardised climate modelling experiments, as well as a coordinating framework for performing them, known as the Geoengineering Model Intercomparison Project (GeoMIP).




Can't help molding some snow into a ball and hurling it or tossing a stone as far into a lake as you can? New research from Indiana University and the University of Wyoming shows how humans, unlike any other species on Earth, readily learn to throw long distances. This research also suggests that this unique evolutionary trait is entangled with language development in a way critical to our very existence.

The study, appearing online Jan. 14 in the journal "Evolution and Human Behavior," suggests that the well-established size-weight illusion, in which a person holding two objects of equal weight perceives the larger object as much lighter, is more than a curiosity: it is a necessary precursor to humans' ability to learn to throw, and to throw far.

Just as young children unknowingly experience certain perceptual auditory biases that help prepare them for language development, the researchers assert that the size-weight illusion primes children to learn to throw. It unwittingly gives them an edge -- helping them choose an object of size and weight most effective for throwing.

"These days we celebrate our unique throwing abilities on the football or baseball field or basketball court, but these abilities are a large part of what made us successful as a species," said Geoffrey Bingham, professor in IU's Department of Psychological and Brain Sciences. "It was not just language. It was language and throwing that led to the survival of Homo sapiens, and we are now beginning to gain some understanding of how these abilities are rapidly acquired by members of our species."

Why is throwing so important from an evolutionary standpoint? Bingham said Homo sapiens have been so successful as a species because of three factors: social organization and cooperation; language, which supports that cooperation; and the ability to throw long distances. This trio allowed Homo sapiens to "take down all the potential competition," Bingham said. It brought us through the ice ages because Homo sapiens could hunt the only major food sources available, big game such as mammoths and giant sloths.

Bingham and Qin Zhu, lead author of the study and assistant professor at the University of Wyoming, Laramie, consider throwing and language in concert, because both require extremely well-coordinated timing and motor skills, which are facilitated by two uniquely developed brain structures -- the cerebellum and posterior parietal cortex.

"The idea here is that our speech and throwing capabilities came as a package," said Bingham, director of the Perception/Action Lab at IU. Language is special, and we acquire it very rapidly when young. Recent theories and evidence suggest that perceptual biases in auditory perception channel auditory development, so that we become attuned to the relevant acoustic units for speech. Our work on the size-weight illusion is now suggesting that a similar bias exists in object perception that corresponds to human readiness to acquire throwing skills."

Bingham and Zhu, who completed his doctorate in the Department of Kinesiology at IU's School of Health, Physical Education and Recreation, put their theory to the test, recruiting 12 adult men and women to perform various tests related to perception, the size-weight illusion and throwing prowess.

Another way of stating the size-weight illusion is that for someone to perceive two objects, one larger than the other, as weighing the same, the larger object must weigh significantly more than the smaller one. The study findings show that skilled throwers use this perception of equal felt heaviness to select the objects they can throw the farthest. This, says Bingham, suggests the phenomenon is not actually an illusion but instead a "highly useful and accurate perception."
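This relationship can be sketched with a toy model of felt heaviness (a common simplification in the size-weight literature, not the authors' model), in which perceived heaviness grows with mass but shrinks with apparent size; the exponent here is purely illustrative:

```python
def felt_heaviness(mass_kg, volume_l, size_exponent=0.33):
    """Toy model: perceived heaviness increases with mass and
    decreases with apparent size. The exponent is illustrative."""
    return mass_kg / (volume_l ** size_exponent)

def mass_for_equal_feel(reference_mass, reference_volume, larger_volume,
                        size_exponent=0.33):
    """Mass the larger object must have to *feel* as heavy as the
    smaller reference object under the toy model."""
    target = felt_heaviness(reference_mass, reference_volume, size_exponent)
    return target * (larger_volume ** size_exponent)

# A 1 kg, 1 L reference object versus an object 8x its volume:
needed = mass_for_equal_feel(1.0, 1.0, 8.0)
# needed > 1.0: under the model, the larger object must weigh more
# than the smaller one to feel equally heavy.
```

Under any such model, scanning a set of objects for "equal felt heaviness" implicitly selects heavier masses at larger sizes, which is consistent with the authors' claim that the percept guides projectile choice.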

Neanderthals, who coexisted with Homo sapiens long ago, lacked the more developed cerebellum and posterior parietal cortex.

"These brain structures have recently been found to distinguish Homo sapiens from Neanderthals," Bingham said. "It is possible that this is what enabled us to beat out Neanderthals, who otherwise had the larger brains."

(Photo: Indiana U.)

Indiana University



Do mountain tops act as sky islands for species that live at high elevations? Are plant populations on these mountain tops isolated from one another because the valleys between them act as barriers, or can pollinators act as bridges allowing genes to flow among distant populations?

Dr. Andrea Kramer and colleagues from the Chicago Botanic Garden and the University of Illinois at Chicago were interested in pursuing these questions, particularly for a genus of plants, Penstemon (Plantaginaceae), endemic to the Great Basin region of the Western United States. They published their findings in the January issue of the American Journal of Botany (http://www.amjbot.org/cgi/content/full/98/1/109).

The flow of genetic material between populations is what maintains a species' cohesion. In plants, if populations are separated by a landscape barrier that neither pollen nor seeds can cross, the populations will begin to diverge over time through mutation or genetic drift. However, habitat fragmentation and distance are not always barriers; whether they are depends on the species and its modes of dispersal. And sometimes studies surprise us with their findings.

"These questions become increasingly important in places like the Great Basin as we consider the effects of climate change on native plant communities and the wildlife that depend upon them," Kramer commented. "The majority of the Great Basin region's species diversity is located on mountain tops, and as a generally warming climate drives species to higher elevations, the distance between mountaintop plant populations increases and more is required of the pollinators in order to traverse the arid valleys between them."

Kramer and her colleagues chose Penstemon because these plants provide critical resources for pollinators and other wildlife. But, as Kramer notes, "Until recently they haven't been used in large-scale restoration projects because key research needed to guide restoration of these species hasn't been available. Without an understanding of the population genetic structure of natural populations it is very difficult to determine what seed sources should be used where to ensure maximum success of a restoration."

The authors selected three Penstemon species with similar dispersal modes, a key element to the study design. For all three, seeds are dispersed by gravity and are not likely to be moved very far—thus, any long-distance movement of genes (or gene flow) should primarily be due to movement of pollen. This critical aspect is where the species differ: Penstemon deustus and P. pachyphyllus are both pollinated primarily by bees, while P. rostriflorus is pollinated primarily by hummingbirds.

For each species, the authors sampled individuals from 6 to 8 populations on 4 to 6 mountain ranges. By extracting genomic DNA from leaf tissue and using molecular analytical tools, they identified up to 8 polymorphic microsatellite loci and used these data to determine patterns of gene diversity both within each mountaintop population and between more distant populations on other mountaintops. They then pieced this information together to assess the degree of genetic relatedness among the different populations.
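Differentiation of this kind is commonly summarized with statistics such as Wright's F_ST, which compares heterozygosity within populations to the total. A minimal sketch for a single biallelic locus with equal population sizes (the study itself used multilocus microsatellite data and clustering analyses, so this is only the underlying idea):

```python
def expected_heterozygosity(p):
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

def fst_biallelic(pop_freqs):
    """Wright's F_ST for one biallelic locus across populations,
    assuming equal population sizes: (H_T - H_S) / H_T."""
    p_bar = sum(pop_freqs) / len(pop_freqs)
    h_t = expected_heterozygosity(p_bar)   # heterozygosity of pooled total
    h_s = sum(expected_heterozygosity(p) for p in pop_freqs) / len(pop_freqs)
    return (h_t - h_s) / h_t

# Illustrative allele frequencies for populations on two mountain ranges
isolated = fst_biallelic([0.9, 0.1])      # strong differentiation (bee-like)
connected = fst_biallelic([0.55, 0.45])   # weak differentiation (bird-like)
# isolated >> connected: the more pollen moves between ranges,
# the more allele frequencies converge and the lower F_ST falls.
```

The contrast mirrors the study's qualitative finding: limited pollinator movement yields mountain-range-specific genetic clusters, while long-distance pollination erodes that structure.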

The authors found interesting differences between the bee- and bird-pollinated Penstemon. Although all three species had significant genetic differentiation among the populations, the two bee-pollinated species were found to have genetic clusters that were distinct for each mountain range, with little or no mixing between mountain ranges; the bird-pollinated species had far less genetic structure across all the ranges sampled. Thus, hummingbirds seem to be more effective at crossing large distances and pollinating flowers from distinct mountains than bees. Bees either do not cross the arid valley floors to visit populations on neighboring mountains or, if they do, they may be ineffective at transferring pollen across these long distances. In contrast, hummingbirds may be transferring pollen across very large distances—additional analysis indicated that hummingbirds may be visiting populations 19 km apart within a mountain range and over 100 km apart on different mountain ranges.

One of the take-home messages from this is that the interaction between pollinators and their landscape differs for different species, and yet this very interaction can have a significant impact on the genetic structure of a plant species' populations. In addition, their results suggest that pollination syndromes do not just capture the morphology and likely pollinators of flowers, but may also impact the population structure and genetic isolation of populations.

There are evolutionary implications as well. Hummingbird pollination has arisen independently in Penstemon at least 10 times, and possibly as many as 20 times, yet a shift back to bee pollination has never been reported. The results from this study may shed light on why that might be. Because populations of a bird-pollinated plant experience greater gene flow among distant populations, there is greater connectivity among these populations. Any changes that might arise due to the local presence of bees would be negated by the gene flow facilitated by birds, which would constrain any adaptations at the local level and prevent plants from shifting back to a more confined bee-pollination syndrome.

Furthermore, the results from Kramer's work can be used directly in restoration efforts. "The Bureau of Land Management is working to increase the species diversity and success of large-scale restoration work on public land in the western United States," Kramer notes. "The BLM supported this work and will be able to put the results of this research on Penstemon to use in developing seed transfer zones for these and similar animal-pollinated species."

Kramer's next step is to determine whether the patterns of neutral genetic diversity they identified translate to similar patterns in adaptive genetic diversity. "The Great Basin is an amazing place full of incredible climatic and geological extremes, and we are very interested in understanding how these extremes drive population divergence due to, or in spite of, the differences in gene flow we observed," she said.

(Photo: Courtesy of Andrea Kramer, Chicago Botanic Garden)

American Journal of Botany



Scientists at Imperial College London have uncovered an unusual process that helps a transmissible dog cancer survive: it steals tiny DNA-containing 'powerhouses' (known as mitochondria) from the cells of the infected animal and incorporates them as its own. They say this may be because genes in the tumour's own mitochondria have a tendency to mutate and degenerate. The results are surprising because mitochondria and their genes are usually passed only from a mother to her offspring.

The findings may have broad implications for halting the spread of similar diseases in other animals and for understanding cancer progression across species. Mitochondrial transfer between genetically distinct cells has previously been observed in the laboratory, but this is the first time it has been demonstrated to occur in nature.

Canine Transmissible Venereal Tumour or CTVT is a very unusual form of cancer that is typically transmitted by mating, though it can also be spread by licking, biting or sniffing tumour-affected areas. The cancer cells themselves move directly from dog to dog, acting like a parasite on each infected animal. Found in most canine breeds throughout the world, the scientists think CTVT is very similar to the transmissible but more fatal cancer seen in the endangered Tasmanian devils of Australia.

Dr Clare Rebbeck, formerly a PhD student on the project at Imperial College London and now working at Cold Spring Harbor Laboratory in the USA, originally set out to explore how the cancers found in different parts of the world were related to one another, using computer analysis of DNA samples. However, her analysis showed that the pattern of relationships for the mitochondrial genes was different to that of the nuclear DNA. In some cases the cancers were even more closely related to some dogs than to other cancers. This finding suggested that the cancers sometimes acquired mitochondria from their hosts.

Professor Austin Burt from the Department of Life Sciences at Imperial College London, who led the research, said: "Our study has revealed that this type of cancer works in a really unexpected way. It raises some really important questions about the progression of other cancers, such as how they repair their own DNA."

The researchers believe that the cancer does not take up new mitochondria from every new host; rather, this functions as an occasional repair mechanism to replace faulty mitochondria. The naturally high rate of genetic mutation in cancers regularly produces non-functional genes in the CTVT mitochondria, causing them to lose function.

In an earlier study, Imperial's scientists estimated that the earliest CTVT tumour originated from an ancient dog or wolf approximately 10,000 years ago, perhaps when dogs were first domesticated through intensive inbreeding of the more social wolves. Today’s results suggest that over this time, the cancer must have evolved the unusual ability to capture mitochondria from its host animal.

The scientists hope their work can be built upon by medical researchers to advance our knowledge of cancer progression in humans and other animal species.

(Photo: Joel Mills)

Imperial College London



Scientists working on the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) have now carried out the first full run of experiments that smash protons together at almost the speed of light. When these sub-atomic particles collide at the heart of the CMS detector, the resultant energies and densities are similar to those present in the first instants of the Universe, immediately after the Big Bang some 13.7 billion years ago. The unique conditions created by these collisions can lead to the production of new particles that would have existed in those early instants and have since disappeared.

The researchers say they are well on their way to being able to either confirm or rule out one of the primary theories that could solve many of the outstanding questions of particle physics, known as Supersymmetry (SUSY). Many hope it could be a valid extension for the Standard Model of particle physics, which describes the interactions of known subatomic particles with astonishing precision but fails to incorporate general relativity, dark matter and dark energy.

Dark matter is an invisible substance that we cannot detect directly but whose presence is inferred from the rotation of galaxies. Physicists believe that it makes up about a quarter of the mass of the Universe, while ordinary, visible matter makes up only about 5%. Its composition is a mystery, leading to intriguing possibilities of hitherto undiscovered physics.

Professor Geoff Hall from the Department of Physics at Imperial College London, who works on the CMS experiment, said: "We have made an important step forward in the hunt for dark matter, although no discovery has yet been made. These results have come faster than we expected because the LHC and CMS ran better last year than we dared hope and we are now very optimistic about the prospects of pinning down Supersymmetry in the next few years."

The energy released in proton-proton collisions in CMS manifests itself as particles that fly away in all directions. Most collisions produce known particles but, on rare occasions, new ones may be produced, including those predicted by SUSY, known as supersymmetric particles, or 'sparticles'. The lightest sparticle is a natural candidate for dark matter because it is stable, and CMS would 'see' such objects only through the absence of their signal in the detector, leading to an imbalance of energy and momentum.

In order to search for sparticles, CMS looks for collisions that produce two or more high-energy 'jets' (bunches of particles travelling in approximately the same direction) and significant missing energy.

Dr Oliver Buchmueller, also from the Department of Physics at Imperial College London, but who is based at CERN, explained: "We need a good understanding of the ordinary collisions so that we can recognise the unusual ones when they happen. Such collisions are rare but can be produced by known physics. We examined some 3 trillion proton-proton collisions and found 13 'SUSY-like' ones, around the number that we expected. Although no evidence for sparticles was found, this measurement narrows down the area for the search for dark matter significantly."
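The statement that 13 observed events are "around the number that we expected" can be made quantitative with simple Poisson counting statistics (an illustration of the reasoning, not the collaboration's full statistical treatment):

```python
import math

def poisson_pmf(k, mu):
    """Probability of observing exactly k events given expectation mu."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

def p_value_at_least(k_obs, mu):
    """Probability of observing k_obs or more events under background mu."""
    return 1.0 - sum(poisson_pmf(k, mu) for k in range(k_obs))

# Illustrative background expectation close to the observed count;
# the actual CMS background estimate is not quoted in this article.
expected_background = 13.0
p = p_value_at_least(13, expected_background)
# p is large (around one half), so 13 observed events are fully
# consistent with Standard Model background: no excess, no sparticle signal.
```

A genuine signal would show up as an observed count far in the upper tail of the background-only Poisson distribution, i.e. a very small p-value.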

The physicists are now looking forward to the 2011 run of the LHC and CMS, which is expected to bring in data that could confirm Supersymmetry as an explanation for dark matter.

The CMS experiment is one of two general purpose experiments designed to collect data from the LHC, along with ATLAS (A Toroidal LHC ApparatuS). Imperial’s High Energy Physics Group has played a major role in the design and construction of CMS and now many of the members are working on the mission to find new particles, including the elusive Higgs boson particle (if it exists), and solve some of the mysteries of nature, such as where mass comes from, why there is no anti-matter in our Universe and whether there are more than three spatial dimensions.

(Photo: ICL)

Imperial College London


Despite what you might have heard, genetic sequencing alone is not enough to understand human disease.

Researchers at Duke University Medical Center have shown that functional tests are necessary to understand the biological relevance of sequencing results as they relate to disease. They demonstrated this using a suite of diseases known as the ciliopathies, which can cause patients to have many different traits.

“Right now the paradigm is to sequence a number of patients and see what may be there in terms of variants,” said Nicholas Katsanis, PhD, of Duke Pediatrics and Cell Biology. “The key finding of this study says that this approach is important, but not sufficient."

"If you really want to be able to penetrate, you must have a robust way to test the functional relevance of mutations you find in patients. For a person at risk of type two diabetes, schizophrenia, or atherosclerosis, getting their genome sequenced is not enough -- you have to functionally interpret the data to get a sense of what might happen to the particular patient.”

“This is the message to people doing medical genomics,” said lead author Erica Davis, PhD, assistant professor in the Duke Department of Pediatrics, who works in the Duke Center for Human Disease Modeling.

“We have to know the extent to which gene variants in question are detrimental -- how do they affect individual cells or organs and what is the result on human development or disease? Every patient has his or her own set of genetic variants, and most of these will not be found at sufficient frequency in the general population so that anyone could make a clear medical statement about their case.”

Davis, working in the lab of Katsanis and in collaboration with many ciliopathy labs worldwide, sequenced a gene, TTC21B, which encodes a critical component of the primary cilium, an antenna-like projection critical to cell function (and the cell structure whose defects give rise to the ciliopathies).

While a few of the mutations could readily be shown to cause two main human disorders, a kidney disease and an asphyxiating thoracic condition, the significance of the majority of DNA variants could not be determined. Davis then tested these variants in a zebrafish model, in which many genes are similar to those of humans, and showed that TTC21B appears to contribute disease-related mutations to about 5 percent of human ciliopathy cases.

The study, which appears in Nature Genetics online on Jan. 23, shows how genetic variations both can cause ciliopathies and also interact with other disease-causing genes to yield very different sets of patient problems.

Katsanis, the Jean and George Brumley Jr., MD, Professor of Pediatrics and Cell Biology, and Director of the Duke Center for Human Disease Modeling, is a world expert in ciliopathies such as Bardet-Biedl Syndrome, in which the primary cilium of cells is abnormal and leads to a host of problems. About one child in 1,000 live births will have a ciliopathy, an incidence that is in the range of Down's syndrome, said Katsanis.

“By sequencing genes to identify genetic variation, followed by functional studies with a good experimental model, we can get a much better idea of the architecture of complex, inherited disorders,” Katsanis said.

“Each individual with a disease is unique,” Davis said. “If you can overlay gene sequencing with functional information, then you will be able to increase the fidelity of your findings and it will become more meaningful for patients and families.”

It will take more laboratories doing more pointed studies like this one to get a fuller picture of the ciliopathies and other diseases, Davis said.

Katsanis noted that it will take true collaboration within many scientific disciplines as well as scientific finesse to get at the true roots of complex diseases.

“Brute force alone -- sequencing -- will not help,” he said. “Technology is of finite resolution. You must have synthesis of physiology, cell biology, biochemistry, and other fields to get true penetration into medically relevant information.”

Duke University Medical Center




Selected Science News. Copyright 2008 All Rights Reserved