Tuesday, January 26, 2010


Scientists have long puzzled over how iguanas, a group of lizards mostly found in the Americas, came to inhabit the isolated Pacific islands of Fiji and Tonga. For years, the leading explanation has been that progenitors of the island species must have rafted there, riding across the Pacific on a mat of vegetation or floating debris. But new research in the January issue of The American Naturalist suggests a more grounded explanation.

Using the latest genetic, geological and fossil data, biologists Brice Noonan of the University of Mississippi and Jack Sites of Brigham Young University have found that iguanas may have simply walked to Fiji and Tonga when the islands were still a part of an ancient southern supercontinent.

The two islands, located about 2,000 miles east of Australia, are home to several iguana species, and their presence there is "one of the most perplexing scenarios in island biogeography," Noonan says. The other islands in the region, and the closest continental landmass, Australia, have no iguanid species at all. In fact, the closest iguanids are found about 5,000 miles away in the Americas. So how did these species get to these remote islands?

Some scientists have hypothesized that they must have rafted there—a journey of around 5,000 miles from South America to the islands. There is some precedent for rafting iguanas. There are documented cases of iguanas reaching remote Caribbean islands and the Galapagos Islands on floating logs. But crossing the Pacific is another matter entirely. Noonan and Sites estimate the trip would take six months or more—a long time for an iguana to survive on a log or vegetation mat.

So Noonan and Sites tested the possibility that iguanas simply walked to the islands millions of years ago, before the islands broke off from Gondwana—the ancient southern supercontinent made up of present-day South America, Africa, Australia, Antarctica and parts of Asia. If that's the case, the island species would need to be old—very old. Using "molecular clock" analysis of living iguana DNA, Noonan and Sites found that, sure enough, the island lineages have been around for more than 60 million years—easily old enough to have been in the area when the islands were still connected via land bridges to Asia or Australia.
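The "molecular clock" logic can be sketched numerically. Under a strict clock, divergence time is the pairwise genetic distance divided by twice the per-lineage substitution rate. The numbers below are invented for illustration only; they are not the study's data.

```python
# Strict molecular clock: T = d / (2 * r), where d is pairwise genetic
# distance (substitutions per site) between two lineages and r is the
# per-lineage substitution rate (substitutions per site per million years).

def divergence_time_myr(pairwise_distance, rate_per_lineage_per_myr):
    """Estimated divergence time in millions of years under a strict clock."""
    return pairwise_distance / (2.0 * rate_per_lineage_per_myr)

# Hypothetical figures: 0.12 substitutions/site between two lineages,
# evolving at 0.001 substitutions/site per million years per lineage.
t = divergence_time_myr(0.12, 0.001)
print(f"Estimated divergence: {t:.0f} million years")  # Estimated divergence: 60 million years
```

In practice such estimates come with wide confidence intervals and require a calibrated rate, but the arithmetic shows how DNA differences between living iguanas can translate into a 60-million-year-plus timescale.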

Fossil evidence backs the finding. Fossils uncovered in Mongolia suggest that iguanid ancestors did once live in Asia. Though there's currently no fossil evidence of iguanas in Australia, that doesn't necessarily mean they were never there. "[T]he fossil record of this continent is surprisingly poor and cannot be taken as evidence of true absence," the authors write.

So if the iguanas simply migrated to Fiji and Tonga from Asia or possibly Australia, why are they not also found on the rest of the Pacific islands? Noonan and Sites say fossil evidence suggests that iguana species did once inhabit other islands, but went extinct right around the time humans colonized those islands. That's an indication that iguanas were on the menu for the early islanders. But Fiji and Tonga have a much shorter history of human presence, which may have helped the iguanas living there to escape extinction.

The molecular clock analysis combined with the fossil evidence suggests a "connection via drifting Australasian continental fragments that may have introduced [iguanas] to Fiji and Tonga," Noonan says. "The 'raft' they used may have been the land."

The researchers say that their study can't completely rule out the rafting hypothesis, but it does make the land bridge scenario "far more plausible than previously thought."

The University of Chicago



The widespread view of Neanderthals as cognitively inferior to early modern humans is challenged by new research from the University of Bristol published in Proceedings of the National Academy of Sciences.

Professor João Zilhão and colleagues examined pigment-stained and perforated marine shells, most certainly used as neck pendants, from two Neanderthal-associated sites in the Murcia province of south-east Spain (Cueva de los Aviones and Cueva Antón). The analysis of lumps of red and yellow pigments found alongside suggests they were used in cosmetics. The practice of body ornamentation is widely accepted by archaeologists as conclusive evidence for modern behaviour and symbolic thinking among early modern humans but has not been recognised in Neanderthals – until now.

Professor Zilhão said: "This is the first secure evidence that, some 50,000 years ago – ten millennia before modern humans are first recorded in Europe – the behaviour of Neanderthals was symbolically organised."

A Spondylus gaederopus shell from the same site contained residues of a reddish pigment mass made of lepidocrocite mixed with ground bits of hematite and pyrite (which, when fresh, have a brilliant black, reflective appearance), suggesting the kind of inclusion 'for effect' that one would expect in a cosmetic preparation.

The choice of a Spondylus shell as the container for such a complex recipe may relate to the attention-grabbing crimson, red, or violet colour and exuberant sculpture of these shells, which have led to their symbolic- or ritual-related collection in a variety of archaeological contexts worldwide.

A concentration of lumps of yellow colorant from Cueva de los Aviones (most certainly the contents of a purse made of skin or other perishable material) was found to be pure natrojarosite – an iron mineral used as a cosmetic in Ancient Egypt.

While functionally similar material has been found at Neanderthal-associated sites before, it has been explained by stratigraphic mixing (which can lead to confusion about the dating of particular artefacts), Neanderthal scavenging of abandoned modern human sites, or Neanderthal imitation without understanding of behaviours observed among contemporary modern human groups.

For example, controversy has surrounded the perforated and grooved teeth and decorated bone awls found in the Châtelperronian culture of France. In earlier work, Professor Zilhão and colleagues have argued they are genuine Neanderthal artefacts which demonstrate the independent evolution of advanced cognition in the Neanderthal lineage.

However, the Châtelperronian evidence dates from 40,000 to 45,000 years ago, thus overlapping with the period when anatomically modern humans began to disperse into Europe (between 40,000 and 42,000 years ago) and leaving open the possibility that these symbolic artefacts relate, in fact, to them.

Professor Zilhão said: "The evidence from the Murcian sites removes the last clouds of uncertainty concerning the modernity of the behaviour and cognition of the last Neanderthals and, by implication, shows that there is no reason any more to continue to question the Neanderthal authorship of the symbolic artefacts of the Châtelperronian culture.

"When considering the nature of the cultural and genetic exchanges that occurred between Neanderthals and modern humans at the time of contact in Europe, we should recognise that identical levels of cultural achievement had been reached by both sides."

Accurate radiocarbon dating of shell and charcoal samples from Cueva de los Aviones and Cueva Antón was crucial to the research. The dating was undertaken at the University of Oxford's Radiocarbon Accelerator Unit.

Dr Thomas Higham, Deputy Director of the Radiocarbon Unit in the School of Archaeology said: "Dating samples that approach the limit of the technique, at around 55,000 years before present, is a huge challenge. We used the most refined methods of pre-treatment chemistry to obtain accurate dates for the sites involved by removing small amounts of more modern carbon contamination to discover that the shells and charcoal samples were as early as 50,000 years ago."
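Dr Higham's point about approaching the limit of the technique is easy to quantify: with carbon-14's half-life of roughly 5,730 years, only a tiny fraction of the original isotope survives after 50,000 years, which is why removing even small amounts of modern-carbon contamination is decisive. A quick sketch (the half-life is the standard value; the sample ages are illustrative):

```python
HALF_LIFE_C14 = 5730.0  # years, the conventional carbon-14 half-life

def fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after a given time."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

# After one half-life, half the 14C remains; after ~50,000 years,
# only about 0.24% remains, so trace modern contamination can
# easily swamp the true signal and make a sample look younger.
print(f"{fraction_remaining(5_730):.1%}")
print(f"{fraction_remaining(50_000):.4%}")
```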

(Photo: Bristol U.)

University of Bristol


Couch potatoes beware: every hour of television watched per day may increase the risk of dying earlier from cardiovascular disease, according to research reported in Circulation: Journal of the American Heart Association.

Australian researchers tracked the lifestyle habits of 8,800 adults and found that each hour spent in front of the television daily was associated with:

• an 11 percent increased risk of death from all causes,
• a 9 percent increased risk of cancer death; and
• an 18 percent increased risk of cardiovascular disease (CVD)-related death.

Compared with people who watched less than two hours of television daily, those who watched more than four hours a day had a 46 percent higher risk of death from all causes and an 80 percent increased risk for CVD-related death. This association held regardless of other independent and common cardiovascular disease risk factors, including smoking, high blood pressure, high blood cholesterol, unhealthy diet, excessive waist circumference, and leisure-time exercise.
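As rough arithmetic only: if an 11 percent per-hour increase compounded multiplicatively (an assumption of this sketch, not a claim of the study, which compared viewing groups directly), a few extra daily hours would land near the reported group-level figures.

```python
# Illustrative only: compound an 11% per-hour hazard ratio over n extra
# hours of daily viewing. Real survival analyses estimate group hazards
# directly; this just shows the per-hour and group figures are consistent.

def relative_risk(hr_per_hour, extra_hours):
    return hr_per_hour ** extra_hours

# Roughly 3.5 extra hours separates the <2h and >4h viewing groups:
rr = relative_risk(1.11, 3.5)
print(f"{(rr - 1) * 100:.0f}% higher risk")  # about 44%, near the reported 46%
```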

While the study focused specifically on television watching, the findings suggest that any prolonged sedentary behavior, such as sitting at a desk or in front of a computer, may pose a risk to one’s health. The human body was designed to move, not sit for extended periods of time, said David Dunstan, Ph.D., the study’s lead author and professor and Head of the Physical Activity Laboratory in the Division of Metabolism and Obesity at the Baker IDI Heart and Diabetes Institute in Victoria, Australia.

“What has happened is that a lot of the normal activities of daily living that involved standing up and moving the muscles in the body have been converted to sitting,” Dunstan said. “Technological, social, and economic changes mean that people don’t move their muscles as much as they used to - consequently the levels of energy expenditure as people go about their lives continue to shrink. For many people, on a daily basis they simply shift from one chair to another – from the chair in the car to the chair in the office to the chair in front of the television.”

Dunstan said the findings apply not only to individuals who are overweight and obese, but also those who have a healthy weight. “Even if someone has a healthy body weight, sitting for long periods of time still has an unhealthy influence on their blood sugar and blood fats,” he said.

Although the study was conducted in Australia, Dunstan said the findings are certainly applicable to Americans. Average daily television watching is approximately three hours in Australia and the United Kingdom, and up to eight hours in the United States, where two-thirds of all adults are either overweight or obese.

The benefits of exercise have been long established, but researchers wanted to know what happens when people sit too much. Television-watching is the most common sedentary activity carried out in the home.

Researchers interviewed 3,846 men and 4,954 women age 25 and older who underwent oral glucose-tolerance tests and provided blood samples so researchers could measure biomarkers such as cholesterol and blood sugar levels. Participants were enrolled from 1999–2000 and followed through 2006. They reported their television-viewing habits for the previous seven days and were grouped into one of three categories: those who watched less than two hours per day; those who watched between two and four hours daily; and those who watched more than four hours.

People with a history of CVD were excluded from the study. During the more than six-year follow-up, there were 284 deaths — 87 due to CVD and 125 due to cancer.

The association between cancer and television viewing was only modest, researchers reported. However, there was a direct association between the amount of television watched and elevated CVD death as well as death from all causes even after accounting for typical CVD risk factors and other lifestyle factors. The implications are simple, Dunstan said. “In addition to doing regular exercise, avoid sitting for prolonged periods and keep in mind to ‘move more, more often’. Too much sitting is bad for health.”

American Heart Association



Using kitchen spoons to measure liquid medication tends to lead to significantly over- or underdosing, reports a new Cornell study.

Although spoon dosing is one of the major causes of pediatric poisonings, according to the Mayo Clinic, 70 percent of people who take liquid medicine use silverware spoons to measure doses.

In the Cornell study, almost 200 college students who had recently visited their university health center were asked to pour liquid medication into a kitchen spoon.

The researchers found that the students underdosed by more than 8 percent when using medium-size spoons and overdosed by an average of almost 12 percent -- and by as much as 20 percent -- when using larger spoons. Yet, participants were confident that they had poured correct doses in all the test cases.

"Although these educated participants had poured in a well-lit room after a practice pour, they were unaware of their biases," said consumer behavior expert Brian Wansink, who led the study.

His findings are published in the Jan. 5 issue of the Annals of Internal Medicine (152:1).

Over the course of a day -- taking medication every four to six hours -- or over days, this error in dosing adds up, he said.
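That accumulation is simple arithmetic. A hypothetical sketch, assuming a 5 mL target dose (my number, not the study's) and the study's roughly 12 percent average overdose with larger spoons:

```python
# Cumulative dosing error over a day, with assumed figures:
target_ml = 5.0       # hypothetical prescribed dose per administration
bias = 0.12           # ~12% average overdose with larger spoons (per the study)
doses_per_day = 4     # medication taken every four to six hours

poured_total = doses_per_day * target_ml * (1 + bias)
intended_total = doses_per_day * target_ml
excess = poured_total - intended_total
print(f"Excess medication per day: {excess:.1f} mL")  # Excess medication per day: 2.4 mL
```

Nearly half an extra dose per day from pouring error alone, before accounting for the larger 20 percent errors or for multi-day courses.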

"It can lead to overdosing -- or underdosing -- potentially beneath the point of effectiveness," said Wansink. He added that the college-student study participants poured the medication in midday when well rested in a well-lighted room.

"But in the middle of night, when you're fatigued, feeling miserable or in a rush because a child is crying, the probability of error is undoubtedly much greater," he said.

Since a medicine's efficacy is often linked to its dose, consumers should use a measuring cap, dosing spoon, measuring dropper or a dosing syringe "rather than assume they can rely on their pouring experience and estimation abilities with tablespoons," concluded Wansink.

(Photo: Jason Koski/Cornell University)

Cornell University


How would you analyze the contents of a million books? Or a million podcasts? Mats Rooth, Cornell professor of linguistics and computing and information sciences, will do it by using software to search for word patterns in text transcriptions of audio and video files.

Rooth is one of eight winners of an international competition, Digging into Data, that challenged scholars to devise innovative humanities and social science research projects using large-scale data analysis. His project, Harvesting Speech Datasets for Linguistic Research on the Web, is based on a pilot project Rooth conducted with graduate student Jonathan Howell. It will look at distinctions of prosody (rhythm, stress and intonation) in spoken language.

According to Rooth, native speakers easily identify what prosody is appropriate in a given sentence, but hypotheses explaining why people have this ability have been hard to test because of the difficulty of identifying enough examples of a given phenomenon. "Many of the things we study are so immediate and yet so subtle," he said.

Using the Internet to harvest hundreds or thousands of examples of spontaneous rather than lab-created use of word patterns will enable researchers to evaluate theories about the form and meaning of prosody on an unprecedented scale. Rooth expects his project to have a transformative effect on the understanding of prosody.
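As a toy illustration of the harvesting idea (my own example, not the project's actual pipeline), searching transcripts for a word pattern of interest might look like this:

```python
import re

# Tiny stand-in corpus of transcript sentences (invented for illustration).
transcripts = [
    "I only said he was late.",
    "She arrived on time.",
    "They only wanted coffee.",
]

# Find sentences where a focus-sensitive word like "only" precedes
# another word; matched sentences could then be paired with their
# audio timestamps for prosodic analysis.
pattern = re.compile(r"\bonly\b\s+\w+", re.IGNORECASE)
hits = [t for t in transcripts if pattern.search(t)]
print(hits)
```

The real research problem is scale and alignment, matching thousands of such text hits back to the audio that carries the prosody, but pattern search over transcriptions is the entry point.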

"I'm very excited," Rooth said. "It's a new methodology, and we think a lot of new information will come out."

Four leading research agencies sponsored the Digging into Data competition, with the intention of encouraging international partnerships: the National Endowment for the Humanities, the National Science Foundation, the United Kingdom's Joint Information Systems Committee, and Canada's Social Sciences and Humanities Research Council. Approximately $2 million will be divided among the eight winners.

Linguist Michael Wagner of McGill University is Rooth's international partner on the project. The Cornell team will be responsible for data retrieval and programming, while McGill researchers will focus on data analysis.

Cornell University



Spectacular satellite images suggest that Mars was warm enough to sustain lakes three billion years ago, during a period previously thought to be too cold and arid for liquid water to persist on the surface, according to research published in the journal Geology.

The research, by a team from Imperial College London and University College London (UCL), suggests that during the Hesperian Epoch, approximately 3 billion years ago, Mars had lakes made of melted ice, each around 20km wide, along parts of the equator.

Earlier research had suggested that Mars had a warm and wet early history but that between 4 billion and 3.8 billion years ago, before the Hesperian Epoch, the planet lost most of its atmosphere and became cold and dry. In the new study, the researchers analysed detailed images from NASA’s Mars Reconnaissance Orbiter, which is currently circling the red planet, and concluded that there were later episodes where Mars experienced warm and wet periods.

The researchers say that there may have been increased volcanic activity, meteorite impacts or shifts in Mars’ orbit during this period to warm Mars’ atmosphere enough to melt the ice. This would have created gases that thickened the atmosphere for a temporary period, trapping more sunlight and making it warm enough for liquid water to be sustained.

Lead author of the study, Dr Nicholas Warner, from the Department of Earth Science and Engineering at Imperial College London, says:

“Most of the research on Mars has focussed on its early history and the recent past. Scientists had largely overlooked the Hesperian Epoch as it was thought that Mars was then a frozen wasteland. Excitingly, our study now shows that this middle period in Mars’ history was much more dynamic than we previously thought.”

The researchers used the images from the Mars Reconnaissance Orbiter to analyse several flat-floored depressions located above Ares Vallis, which is a giant gorge that runs 2,000 km across the equator of Mars. Scientists have previously been unable to explain how these depressions formed, but believed that the depressions may have been created by a process known as sublimation, where ice changes directly from its solid state into a gas without becoming liquid water. The loss of ice would have created cavities between the soil particles, which would have caused the ground to collapse into a depression.

In the new study, the researchers analysed the depressions and discovered a series of small sinuous channels that connected them together. The researchers say these channels could only be formed by running water, and not by ice turning directly into gas.

The scientists were able to lend further weight to their conclusions by comparing the Mars images to images of thermokarst landscapes that are found on Earth today, in places such as Siberia and Alaska. Thermokarst landscapes are areas where permafrost is melting, creating lakes that are interconnected by the same type of drainage channels found on Mars.

The team believe the melting ice would have created lakes and that a rise in water levels may have caused some of the lakes to burst their banks, which enabled water to carve a pathway through the frozen ground from the higher lakes and drain into the lower lying lakes, creating permanent channels between them.

Professor Jan-Peter Muller, Mullard Space Science Laboratory, Department of Space Climate Physics at University College London, was responsible for mapping the 3D shape of the surface of Mars. He adds:

“We can now model the 3D shape of Mars’ surface down to sub-metre resolution, at least as good as any commercial satellite orbiting the Earth. This allows us to test our hypotheses in a much more rigorous manner than ever before.”

The researchers determined the age of the lakes by counting crater impacts, a method originally developed by NASA scientists to determine the age of geological features on the moon. More craters around a geological feature indicate that an area is older than a region with fewer meteorite impacts. In the study, the scientists counted more than 35,000 crater impacts in the region around the lakes, and determined that the lakes formed approximately three billion years ago. The scientists are unsure how long the warm and wet periods lasted during the Hesperian epoch or how long the lakes sustained liquid water in them.
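The relative-age logic of crater counting can be sketched as a density comparison. The 35,000-crater count comes from the study, but the survey areas and the second region below are assumed for illustration; real chronologies use calibrated crater production functions, not raw densities.

```python
# Toy relative dating: a higher crater density per unit area implies an
# older surface, because impacts accumulate over time.

def crater_density(crater_count, area_km2):
    return crater_count / area_km2

region_a = crater_density(35_000, 100_000)  # study's count; area assumed
region_b = crater_density(8_000, 100_000)   # hypothetical younger region
older = "A" if region_a > region_b else "B"
print(f"Region {older} has the higher crater density and is older")
```

Converting a density into an absolute age (such as the three-billion-year figure) requires the calibrated impact-rate models first developed for the Moon.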

The researchers say their study may have implications for astrobiologists who are looking for evidence of life on Mars. The team say these lake beds indicate regions on the planet where it could have been warm and wet, potentially creating habitats that may have once been suitable for microbial life. The team say these areas may be good targets for future robotic missions.

The next step will see the team extend their survey to other areas along the equator of Mars so that they can ascertain how widespread these lakes were during the Hesperian Epoch. The team will focus their surveys on a region at the mouth of Ares Vallis called Chryse Planitia, where preliminary surveys of satellite images have suggested that this area may have also supported lakes.

(Photo: NASA)

Imperial College London



The study of honeybees and their social structure can give scientists a greater understanding of how infectious disease spreads among animals and humans, says a Colorado State University professor who has embarked on a five-year study of honeybee behavior, funded by a National Science Foundation CAREER award.

Dhruba Naug, an assistant professor of biology in the College of Natural Sciences, is using the $650,000 grant to investigate how the social organization of bees has evolved in response to the threat of the parasites that infect the colony. At the same time, parasites can also be expected to adapt rapidly to these changes, making it a continuous arms race between the two parties. The grant includes some funds from the American Recovery and Reinvestment Act and also comprises an educational component to get children at the elementary school level more interested in biology and science.

“We are investigating whether we can use the honeybee colony as a model system to understand how diseases spread in a social group,” Naug said. “If you’re talking about the spread of disease in a human community, there is a social context to it. Who infects who is based on various things – what you do, how often you come in contact with other people, how you behave. Bees have a fairly sophisticated social structure, they get a lot of diseases, and they can be subjected to a variety of experimental paradigms - a golden combination for research.”

“What is it about the social structure that allows bees to be tolerant of diseases and what do parasites do to adapt to them?”

His research on diseased bees has produced some interesting results including:

• Bees that carry a disease experience starvation since the pathogens infecting them steal nutrition, leading to changes in their behavior. They are, for example, less willing to share food with other bees and more willing to leave the colony to find food, which may affect the contact structure and even affect honey production.

• Hunger in infected bees also means that they have problems maintaining their temperature; they get colder because they don’t have enough food to burn. As a result, they are likely to move toward the center of the colony for warmth, which is likely to infect more bees.

• The social structure of bees is such that the young individuals and the queen are segregated toward the center of the colony, providing them some amount of immunity from pathogens entering the colony.

“We’re not studying specific diseases as much as we are the transmission of diseases,” Naug said. “Anytime you’re looking at dense populations – such as livestock – you are talking about similar principles of how things spread.” Focusing on general principles that apply to most infectious diseases rather than any specific pathogen is more illuminating in many ways.

Naug, who joined Colorado State University in 2005, teaches undergraduate biology and a graduate course in behavioral ecology. He plans to teach an undergraduate course this fall that will explore how to apply the principles of evolutionary biology to medical science. Understanding more about host-parasite interactions can help doctors tackle such issues as antibiotic resistance.

“When we get a fever, we take a bunch of medicines to lower our temperature. But what if the body is ramping up the temperature to fight the pathogen? Maybe we need to understand the symptoms a little better rather than trying to come up with a chemical to fight every symptom of a disease.”

(Photo: Scott Camazine)

Colorado State University



A bacterial species that depends on cooperation to survive is discriminating when it comes to the company it keeps. Scientists from Indiana University Bloomington and the Netherlands' Centre for Terrestrial Ecology have learned that Myxococcus xanthus cells are able to recognize genetic differences in one another that are so subtle, even the scientists studying them must go to great lengths to tell them apart.

The scientists' report, which appears in a recent issue of Current Biology, also provides further evidence that cooperation in nature is not always a festival of peace and love. Rather, cooperation may be more of a grudging necessity, in which partners continually compete and undermine one another in a bid for evolutionary dominance.

"In some social microbes, cooperation is something that happens primarily among identical or very similar cells, as a way of competing against relatively unrelated individuals in other cooperative units," said IU Bloomington biologist Gregory Velicer, who led the research. "This is unlike humans, who are more likely to cooperate with unrelated individuals as well as with close kin. In the bacteria we study, cooperation appears to be highly restricted."

Myxococcus xanthus is a predatory bacterium that swarms through soil, killing and eating other microbes by secreting toxic and digestive compounds. When food runs out, cells aggregate and exchange chemical signals to form cooperative, multi-cellular "fruiting bodies." Some of the cells create the fruiting body's structure, while other cells are destined to become hardy spores for the purpose of surviving difficult conditions.

Previously, experiments by Velicer and Ph.D. student Francesca Fiegna showed that when different Myxococcus strains isolated from around the globe were mixed together, the number of spores produced was much reduced. This indicated that this social bacterium had diverged into many socially conflicting types. Michiel Vos, then a Ph.D. student with Velicer at the Max Planck Institute for Developmental Biology in Tübingen, Germany, set out to find whether Myxococcus bacteria sharing the same centimeter-scale soil patch were still capable of efficiently forming fruiting bodies together, or whether these close neighbors would already engage in social conflict.

As part of the experimental design for their Current Biology study, Velicer and Vos paired Myxococcus strains isolated from soil samples taken just centimeters apart to see whether they would behave cooperatively or competitively.

The scientists found that some pairs of strains, inhabiting the same patch of soil and almost identical genetically, had nevertheless diverged enough to inhibit each other's ability to make spores.

In general, however, the scientists found competition was less intense among centimeter-scale pairings than for pairings of more distantly related bacteria isolated from distant locations. These results indicate that social divergence can evolve rapidly within populations, but this divergence can be augmented by geographic isolation.

Another set of experiments revealed that different strains actively avoid each other prior to starvation-induced fruiting body formation. Velicer and Vos argue that this type of exclusion within diverse populations -- in which the probability of social conflict among neighbors is high -- may serve to direct the benefits of cooperation to close kin only.

Velicer says he plans to conduct an exhaustive search for specific genetic differences that lead to antagonism and social exclusion in pairings of closely related strains. "We've got lots of candidate genes," he said.

A long-term goal, Velicer explains, is to understand how new species of social bacteria might evolve sympatrically, that is, in a geographical area shared with a parental species.

"If strong social incompatibilities evolve rapidly, that has implications for understanding how interacting strains diverge over long periods of time," Velicer said.

(Photo: Supriya Kadam and Juergen Berger, Max Planck Institute)

Indiana University



Supernovae are spectacular events: Suddenly somewhere in the heavens a "new star" lights up and shines as bright as a whole galaxy consisting of billions of stars. The mechanisms behind these cosmic catastrophes are varied. Researchers at the Max Planck Institute for Astrophysics in Garching have now used computer simulations to confirm that some of these bright supernovae are due to the merger of two white dwarfs, compact massive stars at the end of their lifetime. As supernovae are used by astronomers to measure cosmic distances and study the expansion history of our Universe, understanding their mechanism is one of the key challenges in astrophysics.

Intermediate-mass stars such as our Sun end their lives as white dwarfs consisting of carbon and oxygen. The stellar fusion reactor in their centre is no longer active due to a lack of fuel. The stars are only about the size of the Earth, yet extremely dense: a teaspoon of their matter would weigh about as much as a car does on our planet.

In a binary system, two such exotic white dwarfs can form. As they orbit each other, they emit gravitational waves. The resulting energy loss shrinks the orbit, the stars approach each other and ultimately they merge. It has long been speculated that these events may produce Type Ia supernova explosions.
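The gravitational-wave-driven shrinking of the orbit has a well-known timescale, the Peters (1964) inspiral time for a circular binary. A sketch with illustrative numbers, two 0.9-solar-mass white dwarfs at an assumed separation, not values from the paper:

```python
# Peters (1964) merger time for a circular binary:
#   t = (5/256) * c^5 * a^4 / (G^3 * m1 * m2 * (m1 + m2))
# where a is the orbital separation.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR_S = 3.156e7    # seconds per year

def merger_time_years(m1, m2, separation_m):
    t_s = (5 / 256) * c**5 * separation_m**4 / (G**3 * m1 * m2 * (m1 + m2))
    return t_s / YEAR_S

# Two 0.9-solar-mass white dwarfs separated by one million km merge
# within a few hundred million years, comfortably within a stellar
# lifetime; the steep a^4 dependence means wider pairs never merge.
m = 0.9 * M_SUN
print(f"{merger_time_years(m, m, 1.0e9):.2e} years")
```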

The supernova research group at the Max Planck Institute for Astrophysics has now performed computer simulations of two merging white dwarfs in unprecedented detail. When the two white dwarfs have equal masses, the merger is particularly violent. Part of the material of one white dwarf crashes into the other and heats the carbon/oxygen material enough to trigger a thermonuclear explosion (see Figure). This disrupts the stars in a supernova explosion.

"With our detailed explosion simulations, we could predict observables that indeed closely match actual observations of Type Ia supernovae," explains Friedrich Röpke of the supernova team. This demonstrates that white dwarf mergers contribute to Type Ia supernovae, although the scenario probably cannot account for all of these explosions.

"Supernovae are among the brightest observed cosmic explosions," explains Wolfgang Hillebrandt, director at the Max Planck Institute for Astrophysics and co-author of the Nature article. "How they form, however, remains largely unknown. With our simulations we have now shed light on at least part of the old riddle of the progenitors of Type Ia supernovae."

Further support for the picture that Type Ia supernovae originate from mergers of white dwarfs comes from a recent paper by another supernova research group at MPA. In an upcoming Nature publication, they show that the bulk of observed supernovae cannot be due to a white dwarf gradually accreting material from a normal companion star - the standard theory so far. At present the only alternative is the merger of two white dwarf stars.

(Photo: Max Planck Institute for Astrophysics)

Max Planck Institute



On the marine microbial stage, there appears to be a vast, varied group of understudies only too ready to step in when "star" microbes "break a leg."

At least that's what happens at the Lost City hydrothermal vent field, according to research results published this week in the journal Proceedings of the National Academy of Sciences (PNAS).

The Lost City hydrothermal vent field is located in the mid-Atlantic and is the only one of its kind found thus far. It offers scientists access to microorganisms living in vents that range in age from those newly formed to those tens of thousands of years old.

A bit player among microbes, found in scant numbers in younger, more active vents, became the lead actor in a chimney more than 1,000 years old, where venting had moderated and cooled and the ecosystem had changed, according to Phillip Taylor of the National Science Foundation (NSF)'s Division of Ocean Sciences, which funded the research.

This is the first evidence that microorganisms can remain rare for such a long time before completely turning the tables to become dominant when ecosystems change, says William Brazelton, a University of Washington (UW) postdoctoral researcher. It seems logical, but until recently, scientists weren't able to detect microorganisms at such low abundance, he says.

In 2006, scientists led by Mitchell Sogin of the Marine Biological Laboratory in Woods Hole, Mass., a co-author of this week's paper, published the first results showing that microorganisms in the marine environment had been woefully undercounted.

Using the latest DNA sequencing techniques, they found that marine microorganisms could be 10 to 100 times more diverse than previously thought.

The scientists coined the term "rare biosphere" to describe a vast but unrecognized group of microorganisms--"rare" because each individual type of microorganism appeared to be present in only very low numbers, so low that they were previously undetectable.

If the new way of determining microbial diversity was accurate, scientists were left to wonder why such a large collection of low-abundance organisms existed.

"A fundamental prediction of the 'rare biosphere' model is that when environmental conditions change, some of these rare, preadapted microbes can rapidly exploit the new conditions, increase in abundance, and out-compete the once abundant organisms adapted to past conditions," Brazelton and co-authors write in their paper.

Yet, they continued, "No studies have tested this prediction by examining a shift in species composition involving extremely rare taxa occurring during a known time interval."

Until now.

Lost City, discovered by University of Washington researcher Deborah Kelley and others during an NSF expedition in 2000, was named in part for the research vessel Atlantis, from which it was discovered.

Follow-up work by Kelley and others showed that the hot springs are formed in a very different way than the metal-rich black smoker vents scientists have known about since the 1970s.

Unlike black smokers, whose fluids can reach 700 degrees Fahrenheit, the chimneys, vents and other structures at Lost City are nearly pure carbonate, the same material as limestone in caves. They are formed by serpentinization, a chemical reaction between seawater and the mantle rocks that underlie the field.

Water venting at Lost City is generally at 200 F or less. The fluids are highly alkaline and enriched in the gases methane and hydrogen--important energy sources for the microbes that inhabit Lost City.

Lost City also differs from magma-driven hydrothermal systems in that it is very long-lived.

While there have been numerous seasonal and short-term studies of microbial responses to environmental changes, lasting years at most, the Lost City hydrothermal vent field has provided a way to compare vent ecosystems 1,000 years apart in age.

Analyses by Brazelton and colleagues revealed that DNA sequences that were rare in younger vents were abundant in older ones. Because it is likely that the older Lost City chimneys released higher-temperature, higher-pH fluids when they were younger, as the ecosystem changed, the rare microorganisms came to the fore.

This round of near-disappearance and subsequent dominance could have happened repeatedly during the more than 30,000-year lifetime of the Lost City vent field. The microorganisms present today are "pre-adapted" to certain conditions, waiting for the ecosystem to change to suit them best.

"The rare biosphere of the Lost City microbial community represents a large repository of genetic memory created during a long history of past environmental changes," the authors write. "The rare organisms were able to rapidly exploit the new niches as they arose because they had been previously selected for the same conditions in the past."

(Photo: NSF, NOAA, University of Washington)

National Science Foundation



For more than two decades, the cold dark matter theory has been used by cosmologists to explain how the smooth universe born in the big bang more than 13 billion years ago evolved into the filamentary, galaxy-rich cosmic web that we see today.

There's been just one problem: the theory suggested most galaxies should have far more stars and dark matter at their cores than they actually do. The problem is most pronounced for dwarf galaxies, the most common galaxies in our own celestial neighborhood. Each contains less than 1 percent of the stars found in large galaxies such as the Milky Way.

Now an international research team, led by a University of Washington astronomer, reports Jan. 14 in Nature that it resolved the problem using millions of processor hours on supercomputers to run simulations of galaxy formation (1 million hours is more than 100 years of continuous computing). The simulations produced dwarf galaxies very much like those observed today by satellites and large telescopes around the world.
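The parenthetical arithmetic is easy to verify, assuming a 365.25-day year:

```python
hours = 1_000_000
years = hours / (24 * 365.25)  # about 114 years of round-the-clock computing
```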

"Most previous work included only a simple description of how and where stars formed within galaxies, or neglected star formation altogether," said Fabio Governato, a UW research associate professor of astronomy and lead author of the Nature paper.

"Instead we performed new computer simulations, run over several national supercomputing facilities, and included a better description of where and how star formation happens in galaxies."

The simulations showed that as the most massive new stars exploded as supernovas, the blasts generated enormous winds that swept huge amounts of gas away from the center of what would become dwarf galaxies, preventing millions of new stars from forming.

With so much mass suddenly removed from the center of the galaxy, the pull of gravity on the dark matter there is diminished and the dark matter drifts away, Governato said. It is similar to what would happen if our sun suddenly disappeared and the loss of its gravitational pull allowed the Earth to drift off into space.

The cosmic explosions proved to be the missing piece of the puzzle, and adding them to the simulations generated formation of galaxies with substantially lower densities at their cores, closely matching the observed properties of dwarf galaxies.

"The cold dark matter theory works amazingly well at telling where, when and how many galaxies should form," Governato said. "What we did was find a better description of processes that we know happen in the real universe, resulting in more accurate simulations."

The theory of cold dark matter, first advanced in the mid-1980s, holds that the vast majority of the matter in the universe -- as much as 75 percent -- is made up of "dark" material that does not interact with electrons and protons and so cannot be observed via electromagnetic radiation. The term "cold" means that immediately following the big bang these dark matter particles had speeds far lower than the speed of light.

In the cold dark matter theory, smaller structures form first, then they merge with each other to form more massive halos, and finally galaxies form within the halos.

(Photo: Katy Brooks)

University of Washington


Before patting yourself on the back for resisting that cookie or kicking yourself for giving in to temptation, look around. A new University of Georgia study has revealed that self-control—or the lack thereof—is contagious.

In a just-published series of studies involving hundreds of volunteers, researchers have found that watching or even thinking about someone with good self-control makes others more likely to exert self-control. The researchers found that the opposite holds, too: people with bad self-control influence others negatively. The effect is so powerful, in fact, that seeing the name of someone with good or bad self-control flash on a screen for just 10 milliseconds changed the behavior of volunteers.

“The take home message of this study is that picking social influences that are positive can improve your self-control,” said lead author Michelle vanDellen, a visiting assistant professor in the UGA department of psychology. “And by exhibiting self-control, you’re helping others around you do the same.”

People tend to mimic the behavior of those around them, and characteristics such as smoking, drug use and obesity tend to spread through social networks. But vanDellen's study is thought to be the first to show that self-control is contagious across behaviors. That means that thinking about someone who exercises self-control by working out regularly, for example, can make you more likely to stick with your financial goals, career goals or anything else that takes self-control on your part.

VanDellen’s findings, which are published in the early online edition of the journal Personality and Social Psychology Bulletin, are the result of five separate studies conducted over two years with study co-author Rick Hoyle at Duke University.

In the first study, the researchers randomly assigned 36 volunteers to think about a friend with either good or bad self-control. Those who thought about a friend with good self-control persisted longer on a handgrip task commonly used to measure self-control, while the opposite held true for those asked to think about a friend with bad self-control.

In the second study, 71 volunteers watched others exert self-control by choosing a carrot from a plate in front of them instead of a cookie from a nearby plate, while others watched people eat the cookies instead of the carrots. The volunteers had no interaction with the tasters other than watching them, yet their performance was altered on a later test of self-control depending on who they were randomly assigned to watch.

In the third study, 42 volunteers were randomly assigned to list friends with both good and bad self-control. As they were completing a computerized test designed to measure self-control, the computer screen would flash the names for 10 milliseconds—too fast to be read but enough to subliminally bring the names to mind. Those who were primed with the name of a friend with good self-control did better, while those primed with friends with bad self-control did worse.

In a fourth study, vanDellen randomly assigned 112 volunteers to write about a friend with good self-control, bad self-control or—for a control group—a friend who is moderately extroverted. On a later test of self-control, those who wrote about friends with good self-control did the best, while those who wrote about friends with bad self-control did the worst. The control group, those who wrote about a moderately extroverted friend, scored between the other two groups.

In the fifth study of 117 volunteers, the researchers found that those who were randomly assigned to write about friends with good self-control were faster than the other groups at identifying words related to self-control, such as achieve, discipline and effort. VanDellen said this finding suggests that self-control is contagious because being exposed to people with either good or bad self-control influences how accessible thoughts about self-control are.

VanDellen said the magnitude of the influence might be significant enough to be the difference between eating an extra cookie at a party or not, or deciding to go to the gym despite a long day at work. The effect isn’t so strong that it absolves people of accountability for their actions, she explained, but it is a nudge toward or away from temptation.

“This isn’t an excuse for blaming other people for our failures,” vanDellen said. “Yes, I’m getting nudged, but it’s not like my friend is taking the cookie and feeding it to me; the decision is ultimately mine.”

University of Georgia




Selected Science News. Copyright 2008 All Rights Reserved