Friday, January 29, 2010



'Grid cells' that act like a spatial map in the brain have been identified for the first time in humans, according to new research by UCL scientists which may help to explain how we create internal maps of new environments.

The study is by a team from the UCL Institute of Cognitive Neuroscience and was funded by the Medical Research Council and the European Union. Published today in Nature, it uses brain imaging and virtual reality techniques to try to identify grid cells in the human brain. These specialised neurons are thought to be involved in spatial memory and have previously been identified in rodent brains, but evidence of them in humans has not been documented until now.

Grid cells represent where an animal is located within its environment, which the researchers liken to having a satnav in the brain. They fire in patterns that show up as geometrically regular, triangular grids when plotted on a map of a navigated surface. They were discovered by a Norwegian lab in 2005 whose research suggested that rats create virtual grids to help them orient themselves in their surroundings, and remember new locations in unfamiliar territory.

Study co-author Dr Caswell Barry said: "It is as if grid cells provide a cognitive map of space. In fact, these cells are very much like the longitude and latitude lines we're all familiar with on normal maps, but instead of using square grid lines it seems the brain uses triangles."

Lead author Dr Christian Doeller added: "Although we can't see the grid cells directly in the brain scanner, we can pick up the regular six-fold symmetry that is a signature of this type of firing pattern. Interestingly, the study participants with the clearest signs of grid cells were those who performed best in the virtual reality spatial memory task, suggesting that the grid cells help us to remember the locations of objects."

Professor Neil Burgess, who leads the team, commented: "The parts of the brain which show signs of grid cells - the hippocampal formation and associated brain areas - are already known to help us navigate our environment and are also critical for autobiographical memory. This means that grid cells may help us to find our way to the right memory as well as finding our way through our environment. These brain areas are also amongst the first to be affected by Alzheimer's disease which may explain why getting lost is one of the most common early symptoms of this disease."
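The six-fold signature Doeller describes can be illustrated with a toy analysis: if grid cells share an (unknown) orientation phi, a direction-dependent signal should vary as cos(6(theta - phi)) with movement direction theta. Here is a minimal sketch with synthetic data; the signal amplitude and noise level are illustrative assumptions, not values from the study:

```python
import math
import random

random.seed(0)

# Synthetic "runs": movement directions and a signal modulated with
# 60-degree (six-fold) periodicity around a hidden grid orientation phi.
phi = math.radians(20.0)                          # hidden grid orientation
thetas = [random.uniform(0, 2 * math.pi) for _ in range(2000)]
signal = [math.cos(6 * (t - phi)) + random.gauss(0, 0.5) for t in thetas]

# Regress the signal on cos(6*theta) and sin(6*theta); over uniformly
# sampled directions the two regressors are nearly orthogonal, so the
# coefficients are just scaled correlations.
n = len(thetas)
a = sum(s * math.cos(6 * t) for s, t in zip(signal, thetas)) * 2 / n
b = sum(s * math.sin(6 * t) for s, t in zip(signal, thetas)) * 2 / n

amplitude = math.hypot(a, b)                  # ~1.0: six-fold modulation depth
phi_hat = math.degrees(math.atan2(b, a) / 6)  # ~20: recovered orientation

print(f"six-fold amplitude: {amplitude:.2f}")
print(f"recovered orientation: {phi_hat:.1f} deg")
```

Analyses of this kind typically also fit four-, five-, seven- and eight-fold models as controls, since only a genuine grid signal should peak at six-fold periodicity.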

(Photo: UCL)

University College London



How did the lemurs, flying foxes and narrow-striped mongooses get to the large, isolated island of Madagascar sometime after 65 million years ago?

A pair of scientists say their research confirms the longstanding idea that the animals hitched rides on natural rafts blown out to sea.

Professors Matthew Huber of Purdue and Jason Ali of the University of Hong Kong say that the prevailing flow of ocean currents between Africa and Madagascar millions of years ago would have made such a trip not only possible, but fast, too. The findings, based on a three-year computer simulation of ancient ocean currents, will be published in the journal Nature on Feb. 4 and were posted on Nature’s Web site Jan. 20.

The idea that animals rafted to the island is not new. Since at least 1915, scientists have used it as an alternative theory to the notion that the animals arrived on Madagascar via a land bridge that was later obliterated by shifting continents. Rafting would have involved animals being washed out to sea during storms, either on trees or large vegetation mats, and floating to the mini-continent, perhaps while in a state of seasonal torpor or hibernation.

Huber and Ali's work supports a 1940 paper by George Gaylord Simpson, one of the most influential paleontologists and evolution theorists of the 20th century. Simpson introduced the concept of a "sweepstakes" process to explain the chance of raft colonization events taking place through vast stretches of geological time. Once the migrants arrived on the world’s fourth largest island, their descendants evolved into the distinctive and sometimes bizarre forms seen today.

"What we've really done is prove the physical plausibility of Simpson's argument," Huber said.

Anthropologists and paleontologists have good reason to be interested in Madagascar's animals. The island is located in the Indian Ocean roughly 300 miles east of Africa across the Mozambique Channel and is otherwise isolated from significant land masses. Its isolation and varied terrain make it a living laboratory for scientists studying evolution and the impact of geography on the evolutionary process.

Madagascar has more unique species of animals than any location except Australia, which is 13 times larger. The island's population includes 70 kinds of lemurs found nowhere else, and about 90 percent of its other mammals, amphibians and reptiles are unique to its 226,656 square miles.

The question has always been how the animals arrived there in the first place. Madagascar appears to have been an island for at least 120 million years, and its animal population began arriving much later, sometime after 65 million years ago.

The raft hypothesis, which scientists refer to as “dispersal,” has always presented one big problem, however. Currents and prevailing winds between Madagascar and Africa flow south and southwest, away from, not toward, the island.

Yet the land bridge hypothesis also is problematic in that there is no geologic evidence that such a bridge existed during the time in question. Also, there are no large mammals, such as apes, giraffes, lions or elephants, indigenous to Madagascar. Only small species such as lemurs, the island's signature species; hedgehog-like tenrecs; rodents; mongoose-like carnivores; and similar animals populate the island.

The animals of Madagascar also appear to have arrived in occasional bursts of immigration by species rather than in a continuous, mixed migration. They likewise appear to have evolved from single ancestors, and their closest relatives are in Africa, scientists say. All of which suggests Simpson's theory was correct.

Ali, who has a research focus in plate tectonics -- the large-scale motions of the Earth's outer shell -- kept running across the land bridge hypothesis in the course of his work. The question intrigued him because the notion of a bridge between Madagascar and Africa appeared to break rules of plate tectonic theory. A background in oceanography also made him think ocean currents between Africa and Madagascar might have changed over time.

"Critically, Madagascar and Africa have together drifted more than 1,600 kilometers northwards and could thus have disrupted a major surface water current running across the tropical Indian Ocean, and hence modified flow around eastern Africa and Madagascar," says Ali, an earth sciences professor.

That led Ali to contact Huber, a paleoclimatologist who reconstructs and models the climate millions of years in the past. Huber, a Purdue earth and atmospheric sciences professor, has a particular interest and expertise in ocean currents, which have a significant impact on climate.

Huber models ancient conditions at a time when the planet was much warmer than it is today, and he specializes in lengthy, highly detailed simulations. He uses the modeling of a warmer Earth in the past -- warm enough for crocodiles to live in an ice-free Arctic -- to help understand conditions generated by today's global warming and to project what the warming trend may hold for the future.

When Ali contacted him about the Madagascar question, Huber had just finished running a three-year simulation on a supercomputer operated by Information Technology at Purdue (ITaP), Purdue's central information technology organization. The modeling produced 100 terabytes of output -- data with potential uses for a variety of purposes, including a study of ancient ocean currents around Madagascar.

The Purdue professor was able to show that 20 million to 60 million years ago, when scientists have determined ancestors of present-day animals likely arrived on Madagascar, currents flowed east, toward the island. Climate modeling showed that currents were strong enough -- like a liquid jet stream in peak periods -- to get the animals to the island without dying of thirst. The trip appears to have been well within the realm of possibility for small animals whose naturally low metabolic rates may have been even lower if they were in torpor or hibernating.
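A back-of-the-envelope drift-time estimate shows why a fast current matters; the channel width and current speed below are illustrative assumptions, not figures from the paper:

```python
# Rough drift-time estimate for a raft crossing the Mozambique Channel.
# Width and current speed are illustrative assumptions, not study values.
channel_width_km = 430      # approximate minimum channel width
current_speed_ms = 0.2      # assumed sustained eastward surface current

seconds = channel_width_km * 1000 / current_speed_ms
days = seconds / 86400
print(f"crossing time: {days:.0f} days")   # about 25 days at these values
```

At these assumed values the crossing takes under a month, short enough that a small, torpid animal need not die of thirst en route.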

Huber's computer modeling also indicates that the area was a hotspot at the time, just as it is today, for powerful tropical cyclones capable of regularly washing trees and tree islands into the ocean.

"It seems likely that rafting was a distinct possibility," the study concludes. "All signs point to the Simpson sweepstakes model as being correct: Ocean currents could have transported rafts of animals to Madagascar from Africa during the Eocene."

The raft hypothesis has always been the most plausible, says Anne Yoder, director of the Duke University Lemur Center. She specializes in using molecular biogenetic techniques and geospatial analysis to examine the evolutionary history of Madagascar. But Ali and Huber's study now puts hard data behind it, says the Duke professor of biology, biological anthropology and anatomy.

"I was very excited to see this paper," says Yoder, whom Nature asked to review the study prior to publication. "Dispersal has been a hypothesis about a mechanism without any actual data. This takes it out of the realm of storytelling and makes it science."

Ali says the study also is relevant to the movement of animal species elsewhere on the planet, lending support to dispersal over the competing idea that animals reached their present positions on the drifting land masses of the continents as the Earth took its current form.

Moreover, the Madagascar study provided a test case confirming scientists' ability to model ocean and atmosphere interactions in a past greenhouse climate, Huber said. The National Science Foundation recently funded Huber to further simulate ocean currents in the Eocene epoch, roughly 39 million to 56 million years ago, using the methodology he applied to Madagascar.

(Photo: Purdue University)

Purdue University



For decades, astronomers have analyzed the impact that asteroids could have on Earth. New research by MIT Professor of Planetary Science Richard Binzel examines the opposite scenario: that Earth has considerable influence on asteroids — and from a distance much larger than previously thought. The finding helps answer an elusive, decades-long question about where most meteorites come from before they fall to Earth and also opens the door to a new field of study: asteroid seismology.

By analyzing telescopic measurements of near-Earth asteroids (NEAs), or asteroids that come within 30 million miles of Earth, Binzel has determined that if an NEA travels within a certain range of Earth, roughly one-quarter of the distance between Earth and the moon, it can experience a "seismic shake" strong enough to bring fresh material called "regolith" to its surface. These rarely seen "fresh asteroids" have long interested astronomers because their spectral fingerprints, or how they reflect different wavelengths of light, match 80 percent of all meteorites that fall to Earth, according to a paper by Binzel appearing in the Jan. 21 issue of Nature. The paper suggests that Earth's gravitational pull and tidal forces create these seismic tremors.

By hypothesizing about the cause of the fresh surfaces of some NEAs, Binzel and his colleagues have tried to solve a decades-long conundrum about why these fresh asteroids are not seen in the main asteroid belt, which is between Mars and Jupiter. They believe this is because the fresh surfaces are the result of a close encounter with Earth, which obviously wouldn't be the case with an object in the main asteroid belt. Only those few objects that have ventured recently inside the moon's orbital distance and have experienced a "fresh shake" match freshly fallen meteorites measured in the laboratory, Binzel said.

Clark Chapman, a planetary scientist at the Southwest Research Institute in Colorado, believes Binzel's work is part of a "revolution in asteroid science" over the past five years that considers the possibility that something other than collisions can affect asteroid surfaces.

How they did it: Binzel's team used a large NASA telescope in Hawaii to collect information on NEAs, including a huge amount of spectral fingerprint data. Analyzing this data, the group examined where a sample of 95 NEAs had been during the past 500,000 years, tracing their orbits to see how close they'd come to Earth. They discovered that 75 NEAs in the sample had passed well inside the moon's distance within the past 500,000 years, including all 20 fresh asteroids in the sample.

Binzel next determined that an asteroid traveling within a distance equal to 16 times the Earth's radius (about one-quarter of the distance to the moon) appears to experience vibrations strong enough to create fresh surface material. He reached that figure based on his finding that about one-quarter of NEAs are fresh, as well as two known facts — that the space weathering process that ages regolith can happen in less than one million years, and that about one-quarter of NEAs come within 16 Earth radii in one million years.
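The logic of that estimate can be sketched as a fraction-matching exercise: find the approach distance at which the fraction of NEAs passing that close per million years equals the observed fraction with fresh surfaces. A toy version with synthetic closest-approach distances (the real analysis used the traced orbital histories of the 95-object sample):

```python
# Sketch of the fraction-matching logic behind the 16-Earth-radii figure:
# find the smallest distance d such that the fraction of objects coming
# within d matches the observed fraction of fresh surfaces.
# The closest-approach list is a synthetic stand-in, not Binzel's data.

closest_approach = [0.8 * i for i in range(1, 96)]   # 95 objects, Earth radii
fresh_count = 20                                     # fresh asteroids in sample

target = fresh_count / len(closest_approach)
for d in range(1, 121):
    within = sum(1 for x in closest_approach if x <= d) / len(closest_approach)
    if within >= target:
        break

print(f"matched trigger distance: {d} Earth radii")  # 16 with this toy data
```

The synthetic list here is tuned so the toy answer matches the paper's figure; the substance of the method is equating the two observed fractions.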

Before now, people thought an asteroid had to come within one to two Earth radii to undergo significant physical change.

Next steps: Many details about the shaking process remain unknown, including what exactly it is about Earth that shakes the asteroids, and why this happens from a distance as far away as 16 Earth radii. What is certain is that the conditions depend on complex factors such as the velocity and duration of the encounter, the asteroid's shape and the nature of the preexisting regolith. "The exact trigger distance depends on all those seismology factors that are the totally new and interesting area for cutting edge research," Binzel said.

Further research might include computer simulations, ground observations and sending probes to look at the surfaces of asteroids. Binzel's next steps will be to try to discover counterexamples to his findings or additional examples to support them. He may also investigate whether other planets, such as Venus or Mars, affect asteroids that venture close to them.

(Photo: Christine Daniloff)

Massachusetts Institute of Technology


We have more in common with Dead Sea-dwelling microbes than previously thought. University of Florida researchers have found that one of the most common proteins in complex life forms may have evolved from proteins found in microbes that live in deadly salty environments.

The protein ubiquitin is so-called because it is ubiquitously active in all higher life forms on Earth. The protein is essential to the life cycle of nearly all eukaryotic cells — those that are complex enough to have a nucleus and other membrane-bound structures.

Haloferax volcanii microbes, on the other hand, are unique creatures. One of the most ancient species on the planet, they long ago adapted to conditions far too salty for other organisms — even surviving for thousands of years in dried-out salt lakes.

As they report in the Jan. 7 issue of the journal Nature, researchers for UF’s Institute of Food and Agricultural Sciences have found that two proteins in Haloferax are likely the simple evolutionary precursors of ubiquitin.

These two proteins, dubbed SAMP1 and SAMP2, seem to perform similar functions to ubiquitin without some of the enzymes that are needed for ubiquitin to function in eukaryotes, said Julie Maupin-Furlow, the study’s lead researcher and professor in UF’s department of microbiology and cell science.

The finding not only lends insight into how ubiquitin evolved, but it also reveals that this seemingly complex protein network may have some simple mechanisms that can be examined for use as potential medical treatments, Maupin-Furlow said.

Researchers are currently investigating ubiquitin’s role in a broad range of diseases such as cancer, viral infections, neurodegenerative disorders, muscle wasting, diabetes and various inflammatory conditions.

“This opens the door to a new avenue of study for this very important protein,” Maupin-Furlow said. “And it gives us a broader picture of some of the common aspects of life on Earth.”

University of Florida


A surprising rise in atmospheric methane concentrations millennia ago, identified in studies of continental glaciers, has long puzzled researchers. One prominent theory holds that it resulted from the commencement of rice cultivation in East Asia. However, a study conducted at the University of Helsinki's Department of Environmental Sciences and Department of Geosciences and Geography shows that a massive expansion of the northern peatlands occurred around 5,000 years ago, coincident with the rising atmospheric methane levels.

After water vapour and carbon dioxide, methane is the most significant greenhouse gas, accounting for about one fifth of the atmospheric warming caused by humans. Methane emissions come mainly from peatlands, animal husbandry, rice cultivation, landfill sites, fossil fuel production and biomass combustion.

Northern peatlands are immense sources of methane, but previous studies argued that they were established almost immediately after the Ice Age ended. On that timeline they could not explain the increase in methane, dated to have begun thousands of years later, since the methane emissions of peatlands decrease as they age.

William Ruddiman, Professor Emeritus in environmental sciences at the University of Virginia, has presented a widely publicized theory according to which humanity started to affect the climate thousands of years ago, not just since the start of the industrial revolution. According to the theory, rice cultivation, which began in East Asia more than 5,000 years ago, reversed the decline in atmospheric methane, which in turn helped prevent the next ice age.

The new study, conducted under the supervision of Professor Atte Korhola, explains the emergence of peatlands in the northern hemisphere, and their development history, in a new way. The researchers compiled an extensive database of radiocarbon dates for the basal peat of peatlands. Their statistical and geospatial analysis of more than 3,000 dates showed that the expansion of northern peatlands accelerated significantly about 5,000 years ago. At the same time, the methane content of the atmosphere started to increase.
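The core of such an analysis can be sketched simply: bin basal (bottom-peat) radiocarbon ages by millennium and look for where initiation counts jump. A toy version with synthetic ages (the actual study used more than 3,000 dates plus their location information):

```python
# Toy version of the basal-date analysis: bin peatland initiation ages
# (years before present) by millennium and look for where initiation
# accelerates. The ages below are synthetic, not the study's dates.
from collections import Counter

basal_ages_bp = [11500, 10200, 9800, 8700, 8300, 7600, 7100, 6400,
                 5200, 4900, 4800, 4600, 4500, 4300, 4100, 3900,
                 3800, 3600, 3400, 3200, 2900, 2700, 2500, 2300]

bins = Counter(age // 1000 for age in basal_ages_bp)   # millennium bins
for millennium in sorted(bins, reverse=True):
    count = bins[millennium]
    print(f"{millennium}-{millennium + 1} ka BP: {'#' * count}")
```

In this synthetic series the histogram jumps in the 4-5 ka bin, mimicking the acceleration the study dates to about 5,000 years ago.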

Peatland expansion produced millions of square kilometres of young peatlands of the minerotrophic fen type, which released large amounts of methane into the air as their organic matter decomposed. According to the study, the early increase in methane levels was mainly caused by natural processes, and human activity is not necessarily required to explain it.

The expansion of peatlands was triggered by the climate turning moister and cooler, which caused groundwater levels to rise while accelerating peat build-up and growth. A similar methane peak may emerge again in the future if precipitation in the Arctic increases as forecast.

University of Helsinki


New research shows that migraine and depression may share a strong genetic component. The research is published in the January 13, 2010, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"Understanding the genetic factors that contribute to these disabling disorders could one day lead to better strategies to manage the course of these diseases when they occur together," said Andrew Ahn, MD, PhD, of the University of Florida in Gainesville, who wrote an editorial accompanying the study and is a member of the American Academy of Neurology. "In the meantime, people with migraine or depression should tell their doctors about any family history of either disease to help us better understand the link between the two."

The study involved 2,652 people who took part in the larger Erasmus Rucphen Family study. All of the participants are descendants of 22 couples who lived in Rucphen between the 1850s and 1900s.

"Genealogical information has shown them all to be part of a large extended family, which makes this type of genetic study possible," said study author Gisela M. Terwindt, MD, PhD, of Leiden University Medical Center in the Netherlands.

Of the participants, 360 had migraine. Of those, 151 had migraine with aura, which is when headaches are preceded by sensations that affect vision, such as seeing flashing lights, and 209 had migraine with no aura. A total of 977 people had depression, with 25 percent of those with migraine also having depression, compared to 13 percent of those without migraine.

The researchers then estimated the relative contribution of genetic factors for both of the disorders. They found that for both types of migraine combined, the heritability was estimated at 56 percent, meaning that 56 percent of the variation in the trait is explained by genetic effects. For migraine with aura, the estimate was 96 percent. "This finding shows that migraine with aura may be a promising avenue to search for migraine genes," Terwindt said.

Comparing the heritability scores for depression between those with migraine and those without showed a shared genetic component in the two disorders, particularly with migraine with aura. "This suggests that common genetic pathways may, at least partly, underlie both of these disorders, rather than that one is the consequence of the other," Terwindt said.

American Academy of Neurology



Deep underground aquifers in the American Southwest contain gases that tell of the region's ancient climate, and support a growing consensus that the jet stream over North America was once split in two.

The discoveries were made with a new paleohydrogeology tool, developed by Indiana University Bloomington geologist Chen Zhu and Swiss Federal Institute of Technology geologist Rolf Kipfer, that depends on the curious properties of noble gases as they seep through natural underground aquifers. Noble gases (neon and helium, for example) are elements that resist chemical reactions, and therefore have the potential to record information from Earth's past. In the January issue of Geology, the scientists report the results of their tool's first serious test amid the Navajo sandstone aquifers of northeast Arizona.

"Getting to the point where we understand the interaction between these aquifers and the atmosphere above is going to open up many new ways to ask questions about the relationship between climate changes and water resources," said Zhu, the report's lead author. "We have shown our approach can work extremely well. It confirms that the aquifer was recharged mostly during the Earth's most recent ice age, and particularly related to the fact that the jet stream was actually two jet streams many thousands of years ago, with the lower one descending far to the south of North America."

An exhaustive and methodical sampling of ground water from the Arizona sandstone aquifers shows significant changes in noble gas infusion rates and concentration (particularly neon) at key times in Earth's Quaternary period (as far back as 40,000 years ago). The scientists saw both an excess of neon about 25,000 to 40,000 years ago, coinciding with the last Pleistocene ice age, and an extraordinary peak of neon associated with a flood of groundwater 14,000 to 17,000 years ago, at which time the southern jet stream hovered above northern Arizona.

"We admit, the conditions in Northeast Arizona were ideal," Zhu said. "Navajo sandstone is a rock formed from ancient sand dunes. The system there is free of other complications, and that is not an accident. We chose it for that reason. It is possible results won't be so clear if we employ our method in other, similar places around the world. But we are ready to try." Zhu has conducted substantial studies of the Navajo sandstone in Arizona in the past 15 years.

When water from rains and streams first enters the ground and seeps through sedimentary rock, it carries some gases with it. The water moves slowly. In sideways-oriented aquifers, water at points more distant from where it entered is thought to carry the indelible stamp of ancient atmospheric and hydrological conditions. Groundwater closer to the point of entry, by contrast, would be recognizably newer.

The amount of noble gases carried into the aquifers will depend on a variety of environmental factors, Zhu said. Lower temperatures, lower salinity, and higher pressure can all lead to an excess of noble gases in groundwater.

Much like the analyses of ancient air pockets deep inside glaciers and polar ice, the new tool Zhu and Kipfer have developed can't be used just anywhere. To make sense of the noble gas seepage, the aquifers must be protected by an impermeable layer of rock that stops newer water above from mixing with the older water below. As long as noble gases in the groundwater do not mix with large quantities of noble gases from other sources, the noble gases' initial mix will be left alone to change over time.

Despite restrictions on where Zhu and Kipfer's tool can be used, there are rock formations all over the world that are suitable for its deployment. Best candidates are the more arid regions of the northern hemisphere, such as the southwestern U.S., the northern Sahara, the Middle East, and parts of western China.

The availability of noble gases such as neon, xenon, and helium is believed to have remained relatively unchanged over the last 100,000 years. Zhu said their concentrations in groundwater would be controlled primarily by temperature at a given location. Zhu and Kipfer's noble gas data show that soil temperatures were 5 to 6 degrees Celsius (9 to 11 degrees Fahrenheit) cooler during the peak of Earth's most recent glacial activity, about 25,000 years ago.
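The thermometer logic behind that temperature estimate is simple: gas solubility in water rises as temperature falls, so the dissolved noble gas load at recharge records the soil temperature and can be inverted back out. A deliberately simplified sketch; the linear solubility law and its coefficients are made up for illustration, whereas real work uses measured, gas-specific solubility functions:

```python
# The noble-gas thermometer in miniature: colder water dissolves more
# gas, so concentration at recharge encodes soil temperature.
# This linear solubility law is a toy stand-in, not a real calibration.

def solubility(temp_c):
    """Illustrative equilibrium neon concentration (arbitrary units)."""
    return 2.0 - 0.04 * temp_c          # colder water -> more dissolved gas

def recharge_temperature(concentration):
    """Invert the toy solubility law to recover recharge temperature."""
    return (2.0 - concentration) / 0.04

modern = solubility(15.0)               # water recharged at 15 C today
glacial = solubility(9.0)               # water recharged 6 C cooler
cooling = recharge_temperature(modern) - recharge_temperature(glacial)
print(f"inferred cooling at recharge: {cooling:.1f} C")   # 6.0 C
```

Because several noble gases with different solubility curves are measured together, the real method can separate temperature from other effects such as excess trapped air.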

Kipfer was the first to suggest neon might be employed as an indicator of ancient hydraulic effects. The present Geology paper with Zhu represents the approach's first empirical test.

Zhu says he hopes his work with Kipfer will inform policymakers as they grapple with climate change in areas of the world where populations depend on underground aquifers for their water supply. By understanding how these aquifers have waxed and waned in response to changing temperatures, Zhu says experts will get a better idea of how a warmer Earth may influence the availability of water to human populations in the future.

"The more we understand about the interaction between the atmosphere and the water table in the recent geological past, the easier it will be to develop policies that help us adapt to a global warming environment," Zhu said. "We need geologic ground truth for policies based on model forecasts."

(Photo: Ruth Droppo)

Indiana University



While governments around the world continue to explore strategies for reducing greenhouse gas emissions, a new study suggests policymakers should focus on what needs to be achieved in the next 40 years in order to keep long-term options viable for avoiding dangerous levels of warming.

The study is the first of its kind to use a detailed energy system model to analyze the relationship between mid-century targets and the likelihood of achieving long-term outcomes.

"Setting mid-century targets can help preserve long-term policy options while managing the risks and costs that come with long-term goals," says co-lead author Brian O'Neill, a scientist at the National Center for Atmospheric Research (NCAR).

The study, conducted with co-authors at the International Institute for Applied Systems Analysis (IIASA) in Austria and the Energy Research Centre of the Netherlands, is being published today in the Proceedings of the National Academy of Sciences. It was funded by IIASA, a European Young Investigator Award to O'Neill, and the National Science Foundation, NCAR's sponsor.

The researchers used a computer simulation known as an integrated assessment model to represent interactions between the energy sector and the climate system. They began with "business as usual" scenarios, developed for the Intergovernmental Panel on Climate Change's 2000 report, that project future greenhouse gas emissions in the absence of climate policy. They then analyzed the implications of restricting emissions in 2050, using a range of levels.

The team focused on how emissions levels in 2050 would affect the feasibility of meeting end-of-century temperature targets of either 2 or 3 degrees Celsius (about 3.6 or 5.4 degrees Fahrenheit, respectively) above the pre-industrial average.

The study identifies critical mid-century thresholds that, if surpassed, would make particular long-term goals unachievable with current energy technologies.

For example, the scientists examined what would need to be done by 2050 in order to preserve the possibility of better-than-even odds of meeting the end-of-century temperature target of 2 degrees Celsius of warming advocated by many governments.

One "business as usual" scenario showed that global emissions would need to be reduced by about 20 percent below 2000 levels by mid-century to preserve the option of hitting the target. In a second case, in which demand for energy and land grow more rapidly, the reductions by 2050 would need to be much steeper: 50 percent. The researchers concluded that achieving such reductions is barely feasible with known energy sources.
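Those 2050 targets imply sustained annual reduction rates, which a quick calculation makes concrete. The percentages come from the text; the constant-annual-rate assumption is a simplification of the scenarios:

```python
# Convert the study's 2050 targets into implied constant annual rates of
# emissions change relative to 2000 (a 50-year span). Percentages are
# from the text; the constant-rate path is a simplifying assumption.
years = 50
results = {}

for label, target_fraction in [("20% below 2000", 0.80),
                               ("50% below 2000", 0.50)]:
    rate = target_fraction ** (1 / years) - 1
    results[label] = rate
    print(f"{label}: {rate * 100:+.2f}% per year")
```

Even the steeper 50 percent target corresponds to under 1.4 percent per year when spread evenly, which shows why the timing of cuts, not just the end point, drives feasibility.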

"Our simulations show that in some cases, even if we do everything possible to reduce emissions between now and 2050, we'd only have even odds of hitting the 2 degree target, and then only if we also did everything possible over the second half of the century too," says co-author and IIASA scientist Keywan Riahi.

The research team made a number of assumptions about the energy sector, such as how quickly the world could switch to low- or zero-carbon sources to achieve emission targets. Only current technologies that have proven themselves at least in the demonstration stage, such as nuclear fission, biomass, wind power, and carbon capture and storage, were considered. Geoengineering, nuclear fusion, and other technologies that have not been demonstrated as viable ways to produce energy or reduce emissions were excluded from the study.

Research shows that average global temperatures have warmed by close to 1 degree C (almost 1.8 degrees F) since the pre-industrial era. Much of the warming is due to increased emissions of greenhouse gases, predominantly carbon dioxide, due to human activities. Many governments have advocated limiting global temperature to no more than 1 additional degree Celsius in order to avoid more serious effects of climate change.

During the recent international negotiations in Copenhagen, many nations recognized the case for limiting long-term warming to 2 degrees Celsius above pre-industrial levels, but they did not agree to a mid-century emissions target.

"Even if you agree on a long-term goal, without limiting emissions sufficiently over the next several decades, you may find you're unable to achieve it. There's a risk that potentially desirable options will no longer be technologically feasible, or will be prohibitively expensive to achieve," O'Neill says.

On the other hand, "Our research suggests that, provided we adopt an effective long-term strategy, our emissions can be higher in 2050 than some proposals have advocated while still holding to 2 degrees Celsius in the long run," he adds.

The researchers caution that this is just one study looking at the technological feasibility of mid- and end-of-century emissions targets. O'Neill says that more feasibility studies should be undertaken to start "bounding the problem" of emissions mitigation.

"We need to know whether our current and planned actions for the coming decades will produce long-term climate change we can live with," he says. "Mid-century targets are a good way to do that."

(Photo: ©UCAR, Photo by Carlye Calvin)

The University Corporation for Atmospheric Research



A cancer epidemic under way in southeast China may have been initiated by a string of Siberian volcanoes that spewed ash across the Earth 250 million years ago, according to a study published in the journal Environmental Science and Technology.

Nonsmoking women in Xuan Wei County, Yunnan Province, China, suffer from the world’s highest known rate of lung cancer, and Geosciences Research Professor Robert Finkelman, one of the study’s co-authors, said researchers believe the answer is in the coal that women in the province use for heating and cooking.

“Peak lung cancer mortality in women in one specific area of China—Xuan Wei—has been reported at 400 deaths per 100,000 people, which is nearly 20 times the mortality levels in the rest of China,” Finkelman said.
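As a quick arithmetic check on these figures (the implied baseline rate is our inference from the quoted ratio, not a number reported by the researchers):

```python
# Back-of-the-envelope check of the reported lung cancer mortality figures.
# The implied baseline for the rest of China is inferred, not taken from
# the study itself.
xuan_wei_rate = 400      # deaths per 100,000 (reported peak in Xuan Wei)
ratio = 20               # "nearly 20 times" the rest of China

implied_baseline = xuan_wei_rate / ratio
print(implied_baseline)  # -> 20.0 deaths per 100,000
```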

The extraordinarily high rate of lung cancer and the constant use of coal by women for heating and cooking led geoscientists to study the native coal from area mines.

“We discovered that the regional coal that formed during Permo-Triassic times, about 250 million years ago, was very high in silicon dioxide, which has been linked to cancer in recent studies,” Finkelman said.

The team concluded that volcanoes in Siberia erupted for 5 million years, blasting acidic gases and particulates into the atmosphere, which cooked into a toxic soup of acid rain. The acid rain decimated life on Earth and eroded area rocks, freeing up silica, which washed into surrounding peat bogs. Over millions of years, the Xuan Wei peat bogs converted into coal fields, becoming the source of the tainted coal.

“We think the cancer risk comes from burning the coal, not from harvesting it,” Finkelman said. “There is probably a linkage between the gases being mobilized by the burning coal and the very fine-grained silica particulates that are rafted up by these gases.”

(Photo: UTD)

The University of Texas at Dallas


A new model for primate origins is presented in Zoologica Scripta, published by the Norwegian Academy of Science and Letters and The Royal Swedish Academy of Sciences. The paper argues that the distributions of the major primate groups are correlated with Mesozoic tectonic features and that their respective ranges are congruent with each evolving locally from a widespread ancestor on the supercontinent of Pangea about 185 million years ago.

Michael Heads, a Research Associate of the Buffalo Museum of Science, arrived at these conclusions by incorporating, for the first time, spatial patterns of primate diversity and distribution as historical evidence for primate evolution. Models had previously been limited to interpretations of the fossil record and molecular clocks.

"According to prevailing theories, primates are supposed to have originated in a geographically small area (a center of origin) from where they dispersed to other regions and continents," said Heads, who also noted that widespread misinterpretation of fossil and molecular clock estimates as maximum or actual dates of origin has led to a popular theory that primates somehow crossed the globe and even rafted across oceans to reach America and Madagascar.

In this new approach to molecular phylogenetics, vicariance, and plate tectonics, Heads shows that the distribution ranges of primates and their nearest relatives, the tree shrews and the flying lemurs, conform to a pattern that would be expected from their having evolved from a widespread ancestor. This ancestor could have evolved into the extinct Plesiadapiformes in North America and Eurasia, the primates in Central and South America, Africa, India and Southeast Asia, and the tree shrews and flying lemurs in Southeast Asia.

Divergence between strepsirrhines (lemurs and lorises) and haplorhines (tarsiers and anthropoids) is correlated with intense volcanic activity on the Lebombo Monocline in Africa about 180 million years ago. The lemurs of Madagascar diverged from their African relatives with the opening of the Mozambique Channel (160 million years ago), while New and Old World monkeys diverged with the opening of the Atlantic about 120 million years ago.

"This model avoids the confusion created by center-of-origin theories and the assumption of a recent origin for major primate groups due to a misrepresentation of the fossil record and molecular clock divergence estimates," said Heads from his New Zealand office. "These models have resulted in all sorts of contradictory centers of origin and imaginary migrations for primates that are biogeographically unnecessary and incompatible with ecological evidence."

The tectonic model also addresses an otherwise insoluble problem with dispersal theories, which require primates to have crossed the Atlantic to America and the Mozambique Channel to Madagascar even though they have never managed the 25 km crossing from Sulawesi to the Moluccan islands, from which they could have traveled on to New Guinea and Australia.

Heads acknowledged that the phylogenetic relationships of some groups, such as tarsiers, are controversial, but the various alternatives do not obscure the patterns of diversity and distribution identified in this study.

Biogeographic evidence for a Jurassic origin of primates, and a pre-Cretaceous origin of the major primate groups, places their divergences well before the earliest fossils, but Heads notes that fossils provide only minimum dates for the existence of particular groups, and there are many examples of the fossil record being extended by tens of millions of years through new fossil discoveries.

The article notes that increasing numbers of primatologists and paleontologists recognize that the fossil record cannot be used to impose strict limits on primate origins, and that some molecular clock estimates also predict divergence dates pre-dating the earliest fossils. These considerations indicate that there is no necessary objection to the biogeographic evidence for divergence of primates beginning in the Jurassic with the origin of all major groups being correlated with plate tectonics.

Buffalo Museum of Science


When consumers talk to each other about products, they generally respond more favorably to abstract language than to concrete descriptions, according to a new study in the Journal of Consumer Research.

"In a series of experiments, we explored when and why consumers use abstract language in word-of-mouth messages, and how these differences in language use affect the receiver," write authors Gaby A. C. Schellekens, Peeter W. J. Verlegh, and Ale Smidts (Erasmus University, The Netherlands).

In the course of their studies, the authors found that consumers who described a positive experience with a product (like a smooth shave with a new razor) used more abstract language when they had a positive opinion about the brand before they tried the product. "When consumers were told that the product was a brand they did not like, they used more concrete language to describe a positive experience. Thus, consumers use different ways of describing the exact same experience, depending on whether they use a liked or disliked brand," the authors write.

For a disliked brand, favorable experiences are seen as exceptions, and concrete language helps consumers to frame the experience as a one-time event, the authors explain.

On the receiver end, the studies showed that consumers responded differently to abstract and concrete language. "In our study of receivers, we gave consumers a description of a positive product experience, and asked them to estimate the sender's opinion about the products," the authors write. "We found that perceived opinion of the sender was more positive when the description was cast in more abstract terms." For descriptions of negative experiences, the perceived opinion of the sender was more negative when the description used abstract language.

"Our finding that abstract messages have a stronger impact on buying intentions can be translated straightforwardly into the recommendation to use abstract language if you try to convince someone of the (positive or negative) consequences of buying a product, or of following your advice," the authors conclude.

Chicago Journals


Plasma jets capable of obliterating tooth decay-causing bacteria could be an effective and less painful alternative to the dentist's drill, according to a new study published in the February issue of the Journal of Medical Microbiology.

Firing low temperature plasma beams at dentin – the fibrous tooth structure underneath the enamel coating – was found to reduce the amount of dental bacteria by up to 10,000-fold. The findings could mean plasma technology is used to remove infected tissue in tooth cavities – a practice that conventionally involves drilling into the tooth.

Scientists at the Leibniz-Institute of Surface Modifications, Leipzig, and dentists from Saarland University, Homburg, Germany, tested the effectiveness of plasma against common oral pathogens including Streptococcus mutans and Lactobacillus casei. These bacteria form films on the surface of teeth and are capable of eroding tooth enamel and the dentin below it to cause cavities. If left untreated, decay can lead to pain, tooth loss and sometimes severe gum infections. In this study, the researchers infected dentin from extracted human molars with four strains of bacteria and then exposed it to plasma jets for 6, 12 or 18 seconds. The longer the dentin was exposed to the plasma, the more bacteria were eliminated.
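The reported reduction can be restated in the log-reduction terms microbiologists commonly use; the first-order kill kinetics sketched below are an illustrative assumption, not a finding of the study:

```python
import math

# A 10,000-fold drop in viable bacteria is a 4-log10 reduction.
fold_reduction = 10_000
log_reduction = math.log10(fold_reduction)   # -> 4.0

# If the kill-off were first-order in time (an assumption for illustration
# only), the implied rate constant over the longest 18 s exposure would be:
k = math.log(fold_reduction) / 18            # ~0.512 per second
print(log_reduction, round(k, 3))
```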

Plasmas are known as the fourth state of matter after solids, liquids and gases and have an increasing number of technical and medical applications. Plasmas are common everywhere in the cosmos, and are produced when high-energy processes strip atoms of one or more of their electrons. This forms high-temperature reactive oxygen species that are capable of destroying microbes. These hot plasmas are already used to disinfect surgical instruments.

Dr Stefan Rupf from Saarland University who led the research said that the recent development of cold plasmas that have temperatures of around 40 degrees Celsius showed great promise for use in dentistry. "The low temperature means they can kill the microbes while preserving the tooth. The dental pulp at the centre of the tooth, underneath the dentin, is linked to the blood supply and nerves and heat damage to it must be avoided at all costs."

Dr Rupf said using plasma technology to disinfect tooth cavities would be welcomed by patients as well as dentists. "Drilling is a very uncomfortable and sometimes painful experience. Cold plasma, in contrast, is a completely contact-free method that is highly effective. Presently, there is huge progress being made in the field of plasma medicine and a clinical treatment for dental cavities can be expected within 3 to 5 years."

Society for General Microbiology

Thursday, January 28, 2010



Enzyme malfunctions are at the root of many serious health problems, but rarely do scientists come up with a way to repair them.

New research from the Stanford University School of Medicine and Indiana University shows how a particular molecule, known as Alda-1, repairs a common enzyme mutation that leads to a debilitating reaction to alcohol, increases the risk of some types of cancer and might also promote some neurodegenerative diseases.

What makes this so remarkable is the way the drug-like molecule Alda-1 works. While the typical drug development strategy is to find a molecule that binds to a key site on an enzyme and blocks the disease process, in this instance, the molecule actually changes the shape of a broken enzyme and revives it.

“We show how this small molecule fixes the structure of a dead enzyme and makes it function again,” said Daria Mochly-Rosen, PhD, professor of chemical and systems biology at Stanford, a co-author of the findings published online Jan. 10 in Nature Structural & Molecular Biology. She and researcher Che-Hong Chen, PhD, collaborated on the work with senior author Thomas Hurley, PhD, professor of biochemistry and molecular biology at Indiana University, and members of his laboratory.

Hurley’s team used an X-ray crystallography imaging technique to reveal how Alda-1 props up the broken enzyme’s flailing phalanges.

The discovery opens the door to the possibility of an entirely new class of drugs. Until now, using drugs to fix enzymes was widely viewed as impossible. “It’s one of the holy grails of drug development,” said Steve Schow, PhD, who has worked in drug discovery and development for 35 years and is vice president of research at a biotechnology company not involved in the Alda-1 study. “I talked to a friend of mine about this earlier and he nearly jumped out of his seat. This approach points the way to solving problems that heretofore were not addressable.”

While drug developers have strategies for fixing the other major types of proteins responsible for disease or discomfort, none exists for enzymes. Yet enzyme malfunctions are associated with a greater risk from a number of diseases including Alzheimer’s disease, diabetic complications and certain types of oral and esophageal cancers, as well as an adverse reaction to alcohol.

Here’s the background: A year ago, Mochly-Rosen’s team reported that it had identified a small drug-like molecule, which they named Alda-1, that repairs abnormal alcohol processing when the alcohol-processing enzyme aldehyde dehydrogenase 2 (ALDH2) malfunctions. Typically, ALDH2 metabolizes aldehyde molecules, toxic byproducts of alcohol. But when the enzyme doesn’t work, just one drink can cause unpleasant reactions: flushing, headaches, heart palpitations and nausea, among other symptoms. The mutation that leads to this condition is common among people of Asian descent, appearing in about 40 percent of this population, or about 1 billion people.
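A quick consistency check on those prevalence figures (the implied reference population is an inference from the quoted numbers, not a figure from the study):

```python
# If ~40% of people of Asian descent carry the ALDH2 mutation, and that
# corresponds to ~1 billion carriers, the implied reference population is
# about 2.5 billion. This is an inference for illustration only.
carriers = 1_000_000_000
prevalence = 0.40

implied_population = carriers / prevalence   # 2.5 billion people (implied)
print(implied_population)
```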

What’s more, the mutation in drinkers can raise the risk of certain illnesses, including esophagus and liver cancers.

Perhaps most intriguing was the mutation’s connection to heart disease. In fact, Mochly-Rosen first searched for Alda-1 because she was investigating why alcohol (in moderation) prevents damage from heart attacks. Her experiments showed that the alcohol-processing enzyme plays an important role in mopping up damaging molecules that accumulate during heart attacks. In experiments in rats, she and her colleagues found that increases in the enzyme’s activity decreased damage from heart attacks. A two-fold increase in activity led to a 60 percent drop in associated damage, she reported in 2008 in Science.

This suggests that people who lack normal activity of this enzyme would be more prone to heart failure and that a drug that fixes the enzyme could reduce that risk. For these reasons, drug developers are interested in the fix-it molecule. Mochly-Rosen is also considering creating a company to attempt to bring the drug to market herself.

The news now is that Hurley and his team have used X-ray crystallography to create 3-D images showing how Alda-1 fixes the enzyme. The first step in the project was the hardest: growing crystals made of the molecule formed by Alda-1 binding to the mutant enzyme. Hurley’s team struggled with this for more than a year. “The enzyme is difficult to work with,” said Hurley. “It’s floppy, so it’s hard to get into a crystal.”

Once crystallized, the researchers bombarded the compound with X-rays and used the diffraction patterns to discover the compound’s shape.

Hurley’s team had previously revealed the shape of the normal enzyme, which resembles a four-leaf clover in basic structure. They had also shown that in the mutant version, the “leaves” droop and flap about.

The team’s new work shows that Alda-1 molecules latch onto four identical binding sites near the center of the structure, one on each leaf, and buttress the waggling edges. “They come in, bind behind the disrupted structures, and pull them in,” said Hurley.

And because of the way Alda-1 interacts with the broken enzyme, it opens drug developers’ eyes to possibilities most hadn’t dreamed of. Alda-1 falls into a new class of therapeutic agents that work not by blocking the disease process but by changing the shape of broken molecules and, as a result, improving their function. More traditional drugs work by binding to a protein’s “active site,” which essentially acts like a keyhole, waiting for the right molecular key to come along and unlock it. By binding to this site, drugs block the access of other molecules, essentially keeping the switch that starts the chain of molecular events turned off.

The beauty of the new approach is that it allows drugs to not only switch molecular interactions off, but to switch them on—which it accomplishes by fixing mutant versions of molecules. Mochly-Rosen is aware of only one other group that has reported success at using a drug to repair a broken enzyme: Yoshiyuki Suzuki, MD, PhD, and his colleagues at the International University of Health and Welfare Graduate School and Keio University in Japan. Their findings, published in May in Perspectives in Medicinal Medicine, show they used a similar approach to compensate for an enzyme malfunction that leads to Gaucher’s disease and related conditions.

Now that Mochly-Rosen and Hurley have a clear picture of the interaction between the faulty enzyme and its helper, they plan to work together to make an even more effective molecular partnership. Ultimately, they hope to see the development of a drug to not only correct alcohol intolerance but also to decrease vulnerability to heart disease.

(Photo: Stanford U.)

Stanford University School of Medicine



A group of Dartmouth researchers has discovered a new role for an important plant gene. Dartmouth Biology Professor Tom Jack and his colleagues have learned that a gene regulator called miR319a (micro RNA 319a) is important for proper flower development, particularly the development of petals.

“In my lab, we are particularly interested in genes that are necessary for the petals and stamens in the flower to develop properly,” says Jack. “We isolated a mutant plant that had defects in petal development, and we then went on to identify the gene that was defective in the mutant plant, which turned out to be miR319a, which had previously, through the work of others, been implicated in leaf development.”

The researchers then found that one of the targets of miR319a is a gene called TCP4, one of many similar genes that function in leaves and flowers to control cell growth and proliferation.

The study was published Dec. 29, 2009, in the journal Proceedings of the National Academy of Sciences, and Jack’s co-authors on the study are Anwesha Nag and Stacey King, a graduate student and research technician, respectively, at Dartmouth.

When a plant’s petals or flowers don’t develop properly, it risks being unable to reproduce, because malformed flowers fail to attract pollinators. “Flowers are very important to humans because most plant food products are derived from fruits and seeds, which are produced by the flower,” says Jack.

Jack’s lab will continue to investigate the genetic pathways controlled by miR319a and TCP4. “Addressing how these genes control petal size and shape is one of the goals of our future research,” says Jack.

(Photo: Dartmouth C.)

Dartmouth College Office of Public Affairs


A pioneering scheme to build a giant central heating system that will harness heat from deep underground is being developed by university scientists.

For the first time in the UK, a team of scientists and engineers, led by Newcastle University, plan to complete a twin borehole system that will allow warm groundwater to be continually cycled through rocks as deep as 1,000m.

Water at a temperature of around 30C will be brought up to the surface where it will pass through a heat exchanger before being sent back underground to be re-heated.
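As a rough sketch of the physics involved, the 30 C figure can be checked against a typical geothermal gradient, and the heat delivered by the loop estimated. The surface temperature, flow rate, and heat-exchanger temperature drop below are illustrative assumptions, not figures from the project:

```python
# Rough consistency check of the 30 C figure, assuming a mean surface
# temperature of 10 C (an assumption) and the 995 m borehole depth cited
# elsewhere in the article.
surface_temp_c = 10.0
depth_km = 0.995
gradient_c_per_km = (30.0 - surface_temp_c) / depth_km   # ~20 C per km

# Thermal power delivered by the loop, for an assumed flow rate of 10 kg/s
# and an assumed 15 C temperature drop across the heat exchanger:
c_p = 4184.0          # J/(kg*K), specific heat of pure water (brine differs)
flow_kg_s = 10.0
delta_t = 15.0
power_mw = flow_kg_s * c_p * delta_t / 1e6               # ~0.63 MW thermal
print(round(gradient_c_per_km, 1), round(power_mw, 2))
```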

The project will provide renewable, clean energy for homes and businesses in the planned Eastgate eco-village in Weardale, County Durham, complementing four other forms of renewable energy which are to be harnessed there.

Some of the natural hot water will also be used in a spa - the first such development in the UK since the Romans tapped the hot springs at Bath.

Project lead Professor Paul Younger, of Newcastle University, says that using a twin set of boreholes solves problems which have hindered other attempts to use deep-seated hot water.

“Water from such depths is twice as salty as seawater, so unless you happen to be on the coast, you can’t let the spent water flow away at surface,” he explained.

“By re-injecting water using a second borehole we are able to maintain the natural water pressures in the rocks and allow pumping to continue for many decades to come.”

Funding of £461,000 from the Department of Energy and Climate Change will be used to drill a reinjection borehole to complement the 995m deep exploration borehole which was originally drilled three years ago. There are also plans to prepare the existing borehole for long-term pumping service.

Used water will be reintroduced to the granite at about 420m depth, and will heat up again as it flows through a complicated maze of fractures on its way back to the pumping borehole.

“By recycling the hot water through what is essentially a huge central heating system deep underground, we can produce an almost carbon-neutral source of energy,” added Professor Younger.

Newcastle University’s Professor David Manning said the plan was to build a geothermal prototype that could be used at other ‘hotspots’ across the UK.

He explained: “Water deep underground gets heated by the naturally-occurring low-level radiation that is found in all rocks.

“Some rocks are far better at producing heat than others – especially granite of the kind we drilled into at Eastgate. This makes it one of the country’s ‘hotspots’ – where water starts warming up quite close to the surface.”

The new twin borehole system is to be analysed by a team of experts which also includes Professor Jon Gluyas from Durham University, and the world-leading engineering consultancy Parsons Brinckerhoff.

“There is every reason to suppose that if we drill even deeper here in future we will find water at boiling point, which is hot enough to generate electricity,” says Professor Younger. “Once the twin set of boreholes is complete in March this year, we will be in a position to explore this possibility.”

Newcastle University

Wednesday, January 27, 2010


To all those who have made deals with the tooth fairy in the past: you probably sold your teeth below their fair value.

Dr. George Huang, Chair of Endodontics at the Boston University Henry M. Goldman School of Dental Medicine (GSDM), says those baby teeth and extracted third molars we are throwing away hold valuable dental stem cells.

“Our team found for the first time that we can reprogram dental stem cells into human embryonic-like cells called induced pluripotent stem (iPS) cells, which may be an unlimited source of cells for tissue regeneration,” Dr. Huang says.

So far, scientists have had good success creating iPS cells from various mouse cell types, but until recently this has proved much harder with human cells. All three types of human dental stem cells the GSDM team tested are easier to reprogram than fibroblasts, which had previously seemed to be the best starting material for making human iPS cells.

In a related study, Dr. Huang regenerated two major human tooth components—dental pulp and dentin—for the first time in a mouse experimental model. The mouse was used to supply nutrition for human tissue regeneration.

Using tissue engineering, researchers saw empty root canal space fill with pulp-like tissue with ample blood supplies. Dentin-like tissue regrew on the dentinal wall.

“The finding will revolutionize endodontic and dental clinical practice by helping to preserve teeth,” Dr. Huang says.

The studies, “iPS cells reprogrammed from mesenchymal-like stem/progenitor cells of dental tissue origin” and “Stem/progenitor cell–mediated de novo regeneration of dental pulp with newly deposited continuous layer of dentin in an in vivo model,” appear in Stem Cells and Development and Tissue Engineering, respectively.

The mission of Boston University Henry M. Goldman School of Dental Medicine is to provide excellent education to dental professionals throughout their careers; to shape the future of dental medicine and dental education through research; to offer excellent health care services to the community; to participate in community activities; and to foster a respectful and supportive environment.

Boston University


University of Manchester scientists have discovered exactly how plants make the chlorophyll that lets them obtain energy from sunlight, in a study that also helps to explain the design and activity of enzymes in general.

Professor Nigel Scrutton and his team at the Faculty of Life Sciences have not only gained a more detailed understanding of the production of the most abundant and life-sustaining chemical on Earth; they also expect to apply their findings to all enzymes, allowing the design of novel clinical and industrial processes.

The study, published in the latest edition of the Journal of Biological Chemistry (JBC), also examines quantum tunnelling, a recently recognized enzyme mechanism in which particles pass through a chemical reaction's energy barrier rather than climbing over it.

Professor Scrutton says: 'Chlorophyll is the most abundant and arguably the most important chemical on planet Earth. Without it, there would be no life on Earth as it allows plants to convert light into chemical energy.

'The enzyme that produces it, POR, is light-driven. Now we have a detailed understanding of how the light "switches on" the chemistry.

'This could be applied to other enzymes with a host of implications - we could design our own light-activated enzymes and use these to target disease, or we could harness light - a very cheap form of energy - to drive biocatalysis in general.'

His colleague Dr Derren Heyes adds: 'There is only one other light-driven enzyme, DNA photolyase, which repairs genetic mutations. The insights we have gained into POR could be applied to this enzyme, although we also hope to apply our findings to other enzymes.'

"We were surprised to find that very small changes in the structure of the enzyme were crucial for the light-driven reaction - so we know we need to be very precise when engineering enzymes."

Professor Scrutton says: "The fact that the atoms involved in the enzyme reaction use unusual behaviour (quantum tunneling) found only in the 'quantum world' is creating a lot of interest in the scientific community.

"Ultimately this is very exciting research. We are developing a theory of how enzymes work. Enzymes drive all of life so the research potential is huge."

University of Manchester



For decades, omega-3 fatty acids have been praised for their myriad health benefits. Credited with helping treat or prevent degenerative illnesses such as heart disease, rheumatoid arthritis, diabetes, and even Alzheimer's disease, they also play a key role in brain development and cognitive function.

However, the benefits of omega-3s –– and DHA in particular –– also come at an inevitable cost, according to researchers at UC Santa Barbara. Over a lifetime, they can lead to cellular disease and a significant decrease in cognitive function. The scientists, the father and son team of Raymond C. and David L. Valentine, have compiled their work in a new book titled "Omega-3 Fatty Acids and the DHA Principle" (CRC Press, 2009).

In humans, omega-3s are essential fatty acids that are necessary for health, but cannot be manufactured from scratch by the body. They are obtained largely from fish and other marine organisms, such as algae and krill. The body uses several types of omega-3s, including two "fish oils": eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA). "DHA and EPA sit atop a hierarchy of omega-3s and provide the clearest health benefits," said David Valentine, associate professor of earth science at UCSB. His father, Raymond Valentine, is a professor emeritus of plant sciences at UC Davis, and a visiting scholar at the Marine Science Institute at UCSB.

The elder Valentine's most recent research explores ways in which genes from ocean organisms can be engineered into crop plants so that omega-3s can be harvested and made as easily accessible as other oils, such as olive and canola. David Valentine is studying the energetics of organisms, specifically the roles of omega-3s and other molecules in cellular membranes. Inside organisms, these oils are incorporated into the membranes that surround cells, which serve as the host phase for many critical functions, including respiration and photosynthesis.

The physical and chemical properties of DHA enable it to rapidly facilitate biochemical processes in the cell membrane. This effect provides numerous benefits, including those involved in the growth of bacteria, rapid energy generation, vision, brain impulse, and photosynthesis. The Valentines focus on the roles of omega-3 fatty acids in cellular membranes ranging from human neurons and swimming sperm to deep-sea bacteria, and they develop a principle by which to assess their benefits and risks.

While other oils are used in membranes all over the body, omega-3s are far more specific in their location. They are found most predominantly in the brain, in the eyes, in certain other brain-like cells, and in sperm, according to David Valentine. "They are precisely targeted," he said. "And the reasons for that are twofold. Their physical and chemical properties enable specialized cellular functions, such as the rapid firing of neurons and the rhythmic pulsing of sperm tails. However, omega-3s are so chemically unstable that they oxidize and go rancid very quickly, and are therefore excluded from all other cells."

That instability, he added, is their down side. "The brain is absolutely packed full of omega-3s. However, there's also oxygen in the brain, and that's going to cause a reaction. They're going to oxidize. And that gradual oxidation is, in essence, a losing battle against brain damage. After a lifetime, the damage takes its toll," he said.

Among the many ideas the Valentines explore in the book are the effects that the oxidation of omega-3s have on the brain. "One of the causes of brain aging is the oxidation of these compounds, which compose a large percentage of the brain. It's not too much of a stretch to think they're part of the gradual progression of brain aging and, possibly, some of the brain diseases that come later in life," David Valentine said.

Despite the hazards posed by a lifetime of omega-3s, Valentine does not encourage people to toss out their bottles of fish oil. In fact, the contrary is true. "I consume plenty of DHA because I think the benefits to human health far outweigh the down sides," he said. "There's no question, for example, that omega-3s help brain development in children. The brain relies heavily on DHA, and will perform better when it is plentiful," he said. "But it's an inevitable catch-22. Human intelligence is derived in part from DHA, but DHA itself is inherently unstable."

The effects of omega-3 oxidation are, in some ways, a function of the ever-increasing human life span. "The dangers have never been an issue in human evolution because humans didn't live to be 70 or 80 years old," said Valentine. "There was never this wall that people would hit, because they didn't live that long."

(Photo: UCSB)

University of California



A U of A engineering professor researching carbon storage has been awarded funding to set up a one-of-a-kind lab that will simulate the harsh conditions that exist two kilometres beneath the Earth's surface.

Rick Chalaturnyk has been awarded $1.6 million in funding from both the Alberta Science and Research Investments Program and the federal Canada Foundation for Innovation for a $4-million research lab that will be the only one of its kind in Canada. The remainder of funding will come from the university and industry supporters.

Researchers at the lab, called the Geomechanical Reservoir Experimental Facility, will investigate carbon storage and enhanced oil recovery using leading-edge equipment that will give them a better understanding of how carbon dioxide behaves underground.

Using a large, high-powered centrifuge, Chalaturnyk and his colleagues will be able to increase their standard test samples from small cores to half a cubic metre, apply up to 5,800 pounds per square inch of pressure to a sample, and use highly sensitive acoustic imaging techniques to "see" how carbon dioxide injected into the ground behaves in its liquid, gas and supercritical phases under different pressures and temperatures.

They will also be able to simulate steam-injection oil recovery methods, heating the samples to temperatures as high as 350 C, an experiment that is "not for the faint of heart," according to Chalaturnyk.

"At 350 degrees metal starts to change," he said. "It's hard to get sensors to measure what's going on. There aren't many plastics that can withstand that temperature. You can't stick electronics in there; even using fibre optics at that temperature is a very difficult thing to do. We'll have these tiny sensors, barely 3 millimetres in size, which we will be using to record what is going on in the sample."

The lab will also enable researchers to study the way cement behaves at certain depths and temperatures. Because carbon dioxide can be injected into the ground at abandoned oil and gas wells, it is essential to understand how well the cement borehole casings hold up if the storage site is to maintain its integrity.

While helping educate the next generation of geo-engineers, the lab will also advance Alberta's commitment to become a leader in carbon capture and storage, to reduce greenhouse gas emissions while allowing development of unconventional energy sources such as oilsands, shale gas, bitumen carbonates and coal-bed methane.

(Photo: Jason, University of Alberta)

University of Alberta


An international team of scientists, led by University of Maryland astronomer Stacy McGaugh, has found that individual galactic objects have less ordinary matter, relative to dark matter, than does the Universe as a whole.

Scientists believe that all ordinary matter, the protons and neutrons that make up people, planets, stars and all that we can see, is a mere fraction, some 17 percent, of the total matter in the Universe. In particle physics and cosmology, the protons and neutrons of ordinary matter are referred to as baryons.

The remaining 83 percent apparently is the mysterious "dark matter," the existence of which is inferred largely from its gravitational pull on visible matter. Dark matter, explains McGaugh, "is presumed to be some new form of non-baryonic particle - the stuff scientists hope the Large Hadron Collider at CERN will create in high energy collisions between protons."

McGaugh and his colleagues posed the question of whether the "universal" ratio of baryonic matter to dark matter holds on the scales of individual structures like galaxies.

"One would expect galaxies and clusters of galaxies to be made of the same stuff as the universe as a whole, so if you make an accounting of the normal matter in each object, and its total mass, you ought to get the same 17 percent fraction," he says. "However, our work shows that individual objects have less ordinary matter, relative to dark matter, than you would expect from the cosmic mix; sometimes a lot less!"

Just how much less depends systematically on scale, according to the researchers. The smaller an object, the further its ratio of ordinary matter to dark matter is from the cosmic mix. McGaugh says their work indicates that the largest bound structures, rich clusters of galaxies, have 14 percent ordinary baryonic matter, close to the expected 17 percent.

"As we looked at smaller objects - individual galaxies and satellite galaxies - the normal matter content gets steadily less," he says. "By the time we reach the smallest dwarf satellite galaxies, the content of normal matter is only ~1 percent of what it should be. (Such galaxies' baryon content is ~0.2 percent instead of 17 percent.) The variation of the baryon content is very systematic with scale. The smaller the galaxy, the smaller is its ratio of normal matter to dark matter. Put another way, the smallest galaxies are very dark matter dominated.

"This raises an obvious question," McGaugh says, "where are all these missing baryons? The short answer is, we don't know. There are various lines of speculation, most of which are either easily dismissed or are un-testable. So for now this is a problem without an obvious solution."
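The scaling McGaugh describes is easy to check with a line of arithmetic. The sketch below (plain Python; the function name is our own) takes the figures quoted above and expresses each object's detected baryon fraction as a share of the cosmic 17 percent.

```python
# Baryon fractions quoted in the article, expressed as a share of the
# "universal" 17 percent expected from cosmology.
COSMIC_BARYON_PERCENT = 17.0

def share_of_expected(detected_percent: float) -> float:
    """Detected baryon fraction as a percentage of the cosmic expectation."""
    return detected_percent / COSMIC_BARYON_PERCENT * 100

# Rich galaxy clusters: 14 percent detected, close to the full expectation.
print(f"clusters: {share_of_expected(14.0):.0f}% of expected")  # ~82%
# Smallest dwarf satellites: ~0.2 percent detected, i.e. roughly 1 percent.
print(f"dwarfs:   {share_of_expected(0.2):.1f}% of expected")   # ~1.2%
```

The dwarf-galaxy figure matches the "~1 percent of what it should be" in the quote above; the cluster figure shows how mild the shortfall is at the largest scales.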

University of Maryland


The less you use your brain's frontal lobes, the more you see yourself through rose-colored glasses, a University of Texas at Austin researcher says.

Those findings are being published in the February edition of the journal NeuroImage.

"In healthy people, the more you activate a portion of your frontal lobes, the more accurate your view of yourself is," says Jennifer Beer, an assistant professor of psychology, who conducted the research with graduate student Brent L. Hughes. "And the more you view yourself as desirable or better than your peers, the less you use those lobes."

The natural human tendency to see oneself in a positive light can be helpful and motivating in some situations but detrimental in others, Beer says.

Her research, conducted at the university's Imaging Research Center, gives new insight into the relationships among brain function, human emotion and perception.

It may help scientists better understand brain functions in seniors or people who suffer from depression or other mental illnesses. It could also have implications for recovering methamphetamine addicts whose frontal lobes are often damaged by drug use and who can overestimate their ability to stay clean.

As part of the study, 20 subjects answered questions about how they compared to their peers on such positive traits as tact, modesty, likability and maturity and such negative traits as materialism, messiness, unreliability and narrow-mindedness. As the subjects answered those questions, a magnetic resonance imaging (MRI) machine scanned their brains.

The subjects who viewed themselves in a very positive light across those disparate traits used their orbitofrontal cortex less than the other subjects. This region of the frontal lobe is generally associated with reasoning, planning, decision-making and problem-solving.

Some subjects who had accurate views of themselves showed four times more frontal lobe activation than the most extreme "rose-colored glasses" wearer in the study.

Among a separate set of subjects who were asked the same questions, those who were required to answer quickly saw themselves in a far more positive light than those who had unlimited time to answer. Those findings suggest that processing information in a more deliberate manner may be the way in which frontal lobe activation permits people to come to more realistic conclusions.

"Subjects made unrealistically positive judgments about themselves more quickly, suggesting these judgments require fewer mental resources," Beer says. "Perhaps, like the visual system, the social judgment system is designed to give us a quick 'good enough' perception for the sake of efficiency."

Beer is a leader in the emerging field of social neuroscience, which studies people's emotions, how they think about themselves and what's going on in the brain when they do.

University of Texas



From the Dachshund's stubby legs to the Shar-Pei's wrinkly skin, breeding for certain characteristics has left its mark on the dog genome. Researchers have identified 155 regions on the canine genome that appear to have been influenced by selective breeding.

With more than 400 distinct breeds, dogs come in a wide range of shapes, sizes, fur styles, and temperaments. The curly-haired toy poodle, small enough to sit in a teacup, barely looks or acts like the smooth-coated Great Dane, tall enough to peer like a periscope out of a car's sunroof. Not so apparent are breed differences in how the dogs' bodies function and in their susceptibility to various diseases.

Although domestication of dogs began over 14,000 years ago, according to Dr. Joshua Akey, University of Washington (UW) assistant professor of genome sciences, the spectacular diversity among breeds is thought to have originated during the past few centuries through intense artificial selection and strict breeding for desired characteristics. Akey is the lead author of the effort to map canine genome regions that show signs of recent selection and that contain genes that are prime candidates for further investigation. Those genes are being examined for their possible roles in the most conspicuous variations among dog breeds: size, coat color and texture, behavior, physiology, and skeletal structure.

The researchers performed the largest genome-wide scan to date for targets of selection in purebred dogs. The genomes came from 275 unrelated dogs representing 10 breeds that were very unlike each other. The breeds were: Beagle, Border Collie, Brittany, Dachshund, German Shepherd, Greyhound, Jack Russell Terrier, Labrador Retriever, Shar-Pei, and Standard Poodle.

The study was conducted, the researchers said, because the canine genome, the product of centuries of strong selection, contains many important lessons about the genetic architecture of physical and behavioral variations and the mechanisms of rapid, short-term evolution. The findings, the researchers said, "provide a detailed glimpse into the genetic legacy of centuries of breeding practices."

Their results were published Jan. 11 in the Proceedings of the National Academy of Sciences, in the article "Tracking footprints of artificial selection in the dog genome."

The researchers catalogued more than 21,000 tiny variations in the genome. In investigating the relationships among the 10 breeds, they found that, genetically, the German Shepherd, Shar-Pei, Beagle, and Greyhound were especially distinct.

Their list of most differentiated regions of the dog genome included five genes already linked to hallmark traits of certain breeds: one for small size, one for short limbs like those in Dachshunds and other stubby-legged dogs, and three for coats.

In calculating the overlap of the signatures marking selection in the genome, the researchers found that approximately 66 percent occurred in only one or two breeds. They noted it was likely that these genome regions contain genes that confer qualities that distinguish a breed, such as skin wrinkling in the Shar-Pei. In contrast, signatures of selection found in five or more breeds tended to sort the dogs into classes, and include, for example, a gene that governs the miniature size of breeds in the toy group.

A gene associated with dwarfism in mice, the study reports, appears to mediate variations in dog breed size and weight. Small breeds, like the Dachshund, Beagle, Jack Russell Terrier, and Brittany, show enormous differentiation in this gene compared to larger breeds. Another region of peak differentiation in the dog genome, in an area thought to regulate muscle cell formation in embryos, seems to separate the German Shepherd, Jack Russell Terrier, Border Collie and Greyhound from the Dachshund, Beagle, Brittany, and Shar-Pei.

The 155 regions of the genome that appear to have been influenced by selective breeding contain 1,630 known or predicted protein-coding genes. The researchers tried to obtain a broad overview of the molecular functions of these genes. They were surprised to discover that genes involved in immunity and defense were overrepresented in the 155 regions, a phenomenon also discovered in genome analysis of selection in natural populations. Natural and artificial selection were not expected to act on similar classes of genes, the researchers noted, but immune-related genes may be frequent targets of selection because of their critical role in defending against ever-changing infections.

The researchers homed in on a particular genome region in the Shar-Pei. Many of these dogs have excessive wrinkles, but some are smooth. The degree of skin folding correlates with levels of certain molecules whose production may be governed by a gene in this region. Rare mutations in this same gene also cause severe skin wrinkling in people. Tiny genetic variations in this gene seemed linked to whether a Shar-Pei would be smooth or wrinkled, and a rare genetic mutation was found in the Shar-Pei but not in other dogs.

The researchers explained that, despite the many insights emerging from their data, there were several limitations to their study and to the interpretation of its findings. They pointed out that a pattern of variation that is unusual relative to the dog genome at large does not prove that the specific genome region is under selection.

A major impetus behind studying dog genomics, the researchers pointed out, is its potential to advance knowledge about the genetic basis of human form variations and of differences in disease susceptibility among people. In many cases, the researchers said, it may be easier to locate the genetic targets of selection in dogs, and then map these to related regions in the human genome. Scientists are intrigued by the possibility that recent selection may have affected genome regions common to both human and dog lineages.

"This research has shown that artificial selection in dogs has acted on many of the same genes as natural selection in humans, and that many of these genes are regulators of gene activity," said Dr. Irene Eckstrand, who oversees evolution grants at the National Institute of General Medical Sciences at the National Institutes of Health. "The statistical and computational approaches used in this study will be of great value in deciphering the organization of human genetic variation, and in identifying the genetic basis of human characteristics."

The researchers also said that a better understanding of artificial selection in dogs may reveal the molecular mechanisms of rapid, short-term evolution. Future work, they hope, may uncover the gene activities responsible for shaping the incredible diversity among the world's dogs.

(Photo: Earl Steele)

University of Washington


The leopard cannot change its spots, nor can the tiger change its stripes, but a new research report published in the January 2010 issue of the journal GENETICS tells us something about how cats end up with their spots and stripes. It demonstrates for the first time that at least three different genes are involved in the emergence of stripes, spots, and other markings on domestic cats. Researchers have also determined the genomic location of two of these genes, which will allow for further studies that could shine scientific light on various human skin disorders.

"We hope that the study opens up the possibility of directly investigating the genes involved in pattern formation (i.e., the establishment of stripes, spots, and other markings) on the skin of mammals, including their structure, function, and regulation," said Eduardo Eizirik, a researcher involved in the work from the Pontifical Catholic University of Rio Grande do Sul, Brazil. "From these studies, we hope to understand how the different coat patterns have evolved in different mammalian groups, and to be able to investigate their roles in adaptation to different environments, such as their importance for camouflage in wild cat species."

Scientists crossed domestic cats with different coat patterns, such as stripes and blotches, and tracked the inheritance of these patterns among their offspring. Genetic samples were collected and used to type various molecular markers. Results showed that specific markers were inherited by a kitten every time a given coat pattern appeared, suggesting that the marker and the gene causing the coat pattern were located in the same region of the genome. Using statistical procedures called linkage mapping, scientists determined the genomic location of two genes involved in these traits. By clarifying the inheritance of markings in one mammalian species, researchers hope to identify and characterize the implicated genes and then determine if they apply to other mammals, such as humans. The hope is that this discovery will shed new light on human skin diseases that appear to follow standardized patterns.

"Coat color and markings of animals are obvious traits that have long attracted the interest of geneticists," said Mark Johnston, Editor-in-Chief of the journal GENETICS, "and this study in cats may ultimately help us better understand the genetics behind hair and skin color in other mammals. In turn, this understanding could lead to new therapeutic strategies to correct skin problems in people."

The Genetics Society of America



Abandon any notion that the duck-billed platypus is a soft and cuddly creature -- maybe like Perry the Platypus in the Phineas and Ferb cartoon. This platypus, renowned as one of the few mammals that lay eggs, also is one of only a few venomous mammals. The males can deliver a mega-sting that causes immediate, excruciating pain, like hundreds of hornet stings, leaving victims incapacitated for weeks. Now scientists are reporting an advance toward deciphering the chemical composition of the venom, with the first identification of a dozen protein building blocks. Their study is in the Journal of the American Chemical Society, a weekly publication.

Masaki Kita, Daisuke Uemura, and colleagues note that spurs in the hind limb of the male platypus can deliver the venom, a cocktail of substances that cause excruciating pain. The scientists previously showed that the venom triggers certain chemical changes in cultured human nerve cells that can lead to the sensation of pain. Until now, however, scientists did not know the exact components of the venom responsible for this effect.

To unlock its secrets, the scientists collected samples of platypus venom and used high-tech analytical instruments to separate and characterize its components. They identified 11 new peptides, or protein subunits, in the venom. Studies using nerve cells suggest that one of these substances, called Heptapeptide 1, is the main agent responsible for triggering pain. The substance appears to work by interacting with certain receptors in the nerve cells, they suggest.

(Photo: Wikimedia Commons)

American Chemical Society



For the first time, two astronomers have explained the diversity of galaxy shapes seen in the universe. The scientists, Dr Andrew Benson of the California Institute of Technology (Caltech) and Dr Nick Devereux of Embry-Riddle Aeronautical University in Arizona, tracked the evolution of galaxies over thirteen billion years, from the early Universe to the present day. Their results appear in the journal Monthly Notices of the Royal Astronomical Society.

Galaxies are the collections of stars, planets, gas and dust that make up most of the visible component of the cosmos. The smallest have a few million and the largest as many as a million million (a trillion) stars.

American astronomer Edwin Hubble first developed a taxonomy for galaxies in the 1930s that has since become known as the ‘Hubble Sequence’. There are three basic shapes: spiral, where arms of material wind out in a disk from a small central bulge; barred spiral, where the arms wind out in a disk from a larger bar of material; and elliptical, where the galaxy’s stars are distributed more evenly in a bulge without arms or a disk. For comparison, the galaxy we live in, the Milky Way, has between two and four hundred thousand million stars and is classified as a barred spiral.

Explaining the Hubble Sequence is complex. The different types clearly result from different evolutionary paths but until now a detailed explanation has eluded scientists.

Benson and Devereux combined data from the infrared Two Micron All Sky Survey (2MASS) with their sophisticated GALFORM computer model to reproduce the evolutionary history of the Universe over thirteen billion years. To their surprise, their computations reproduced not only the different galaxy shapes but also their relative numbers.

“We were completely astonished that our model predicted both the abundance and diversity of galaxy types so precisely”, said Devereux. “It really boosts my confidence in the model”, added Benson.

The astronomers’ model is underpinned by and endorses the ‘Lambda Cold Dark Matter’ model of the Universe. Here ‘Lambda’ is the mysterious ‘dark energy’ component believed to make up about 72% of the cosmos, with cold dark matter making up another 23%. Just 4% of the Universe consists of the familiar visible or ‘baryonic’ matter that makes up the stars and planets of which galaxies are comprised.

Galaxies are thought to be embedded in very large haloes of dark matter and Benson and Devereux believe these to be crucial to their evolution. Their model suggests that the number of mergers between these haloes and their galaxies drives the final outcome – elliptical galaxies result from multiple mergers whereas disk galaxies have seen none at all. Our Milky Way galaxy’s barred spiral shape suggests it has seen a complex evolutionary history, with only a few minor collisions and at least one episode where the inner disk collapsed to form the large central bar.

“These new findings set a clear direction for future research. Our goal now is to compare the model predictions with observations of more distant galaxies seen in images obtained with the Hubble Space Telescope and, in future, with the soon-to-be-launched James Webb Space Telescope (JWST)”, said Devereux.

(Photo: A. Benson (University of Durham), NASA / STScI)

Royal Astronomical Society



The Y chromosome is often considered somewhat of a genetic oddball. Short and stubby, it carries hardly any genes, most of which are related to traits associated with maleness. Most of the chromosome consists of highly repetitive sequences of DNA, known as massive palindrome sequences, whose function is unknown.

Evolutionary biologists have long believed that the mammalian Y chromosome is essentially stagnant, having lost most of its genes hundreds of millions of years ago. But new research from MIT's Whitehead Institute, published in this week's issue of Nature, overturns that theory. The research team, led by Whitehead Institute director and MIT biology professor David Page, showed that the Y chromosome is actually evolving rapidly and continuously remaking itself.

For the first time, the researchers sequenced the Y chromosome of the chimpanzee, allowing for the first interspecies comparison of the chromosome. They found significant differences between the two species' Y chromosomes, suggesting that those chromosomes have evolved faster than other chromosomes during the six million years since humans and chimpanzees emerged from a common ancestor.

The findings offer the first evidence that a Y chromosome as evolutionarily old as the human Y is in fact still evolving, says Andrew Clark, a genetics professor at Cornell University who studies Y chromosome evolution in fruit flies.

"There's a dramatic amount of turnover, and it's not just degeneration — it's gain and loss of genes that do something on the Y chromosome," says Clark, adding that the new sequence comparison may also help researchers study male infertility, which is often driven by defects in the Y chromosome.

Hundreds of millions of years ago, the Y diverged from its sister X chromosome and became specialized for male-specific traits. Evolutionary biologists have theorized that it quickly lost most of its genes through a process known as degeneration, then lapsed into a fairly static state.

However, this theory was difficult to test because all of those repetitive DNA stretches make the Y chromosome very tricky to sequence, says Jennifer Hughes, a postdoctoral associate at the Whitehead Institute and lead author of the Nature paper. Most genome sequencing studies completely exclude the Y chromosome.

In 2003, Page's laboratory and collaborators at the Genome Center at Washington University (who were also involved in the new chimpanzee study) were the first team to sequence the human Y chromosome. They found that the Y carries 78 genes, more than expected, but still far fewer than the 1,000 or so located on the X chromosome.

"Having the human sequence tells us quite a bit, but to obtain information about the evolution of the Y, we needed to do a comparative analysis," says Hughes.

As its next target, the Whitehead team chose the chimpanzee, humans' closest living relative. Human and chimpanzee genomes differ very little: 98.8 percent of DNA base pairs are identical between the two species.

Page's team expected that the chimpanzee and human Y chromosomes would also be very similar. To their surprise, they found that the chimp and human Y chromosomes differ considerably — far more than the rest of their genomes. During the six million years of separation, the chimp Y has lost one-third to one-half of the genes found on the human Y chromosome. However, the chimp Y has twice as many massive palindrome sequences as the human Y.
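The reported gene loss can be put in rough numbers. Assuming the 78 human Y-chromosome genes from the 2003 sequencing (the article does not give exact chimp gene counts, so this is only a back-of-envelope sketch):

```python
# Back-of-envelope arithmetic on the figures quoted above: the human Y
# carries 78 genes, and the chimp Y is said to have lost one-third to
# one-half of them over six million years of separate evolution.
HUMAN_Y_GENES = 78

lost_low = HUMAN_Y_GENES // 3    # one-third lost  -> 26 genes
lost_high = HUMAN_Y_GENES // 2   # one-half lost   -> 39 genes

retained_low = HUMAN_Y_GENES - lost_high   # 39 genes
retained_high = HUMAN_Y_GENES - lost_low   # 52 genes

print(f"chimp Y would retain roughly {retained_low}-{retained_high} genes")
```

Set against the 1.2 percent of base pairs that differ genome-wide, a 33-50 percent gene turnover on one chromosome illustrates how exceptional the Y's evolution has been.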

Page compares the Y chromosome changes to a home undergoing continual renovation. "People are living in the house, but there's always some room that's being demolished and reconstructed," says Page, who is also a Howard Hughes Medical Institute investigator. "And this is not the norm for the genome as a whole."

The researchers suspect several factors are at play in the divergent evolution of human and chimp Y chromosomes, including differences in mating behaviors. Because a female chimpanzee may mate with many male chimpanzees around the same time, any genes on the Y chromosome that lead to enhanced sperm production offer a distinct competitive advantage.

If a Y chromosome with genes for enhanced sperm production also carries mutations that alter or eliminate a gene not related to sperm production, those less advantageous mutations also get passed on, resulting in a Y chromosome with far fewer genes than the human Y.

"The gene loss seen in chimps and the possibility that this has been driven by the influence of sperm competition in chimps but not humans is interesting, and we will be able to judge this more readily once we have functional information about the gene products, which is still sketchy in many cases," says Mark Jobling, professor of genetics at the University of Leicester, who studies the evolution of the Y chromosome.

Researchers in the Page lab and the Washington University Genome Center are now sequencing and examining the Y chromosomes of several other mammals, to investigate whether rapid evolution is occurring in species other than humans and chimpanzees.

(Photo: MIT)





Selected Science News. Copyright 2008 All Rights Reserved