Saturday, February 27, 2010

WHERE DID INSECTS COME FROM?


Since the dawn of the biological sciences, mankind has struggled to comprehend the relationships among the major groups of "jointed-legged" animals — the arthropods. Now, a team of researchers, including Dr. Joel Martin and Dr. Regina Wetzer from the Natural History Museum of Los Angeles County (NHM), has finished a completely new analysis of the evolutionary relationships among the arthropods, answering many questions that defied previous attempts to unravel how these creatures were connected.

Now, for the first time, science has a solid grasp of what those relationships are, and a framework upon which to build. The new study makes a major contribution to our understanding of the nature and origins of the planet's biodiversity. The paper's other researchers are Jerome C. Regier, Andreas Zwick and April Hussey from the University of Maryland Biotechnology Institute; Jeffrey W. Shultz of the University of Maryland's Department of Entomology; and Bernard Ball and Clifford W. Cunningham from Duke University's Department of Biology.

There are millions of distinct species of arthropods, including all the insects, crustaceans, millipedes, centipedes, spiders, and a host of other animals, all united by having a hard external shell and jointed legs. They are by far the most numerous, and most diverse, of all creatures on Earth — in terms of the sheer number of species, no other group comes close. They make up perhaps 1.6 million of the estimated 1.8 to 1.9 million described species, dominating the planet in number, biomass, and diversity.
The economic aspects of arthropods are also overwhelming. From seafood industries worth billions of dollars annually to the world's economy, to the importance of insects as pollinators of ornamental and agriculturally important crops, to the medical role played by arthropods (e.g. as disease vectors and parasites), to biological control of introduced species, to their role in every known food web, to toxicology and biopharmaceuticals, arthropods are by far the planet's most important group of animals.

"We've never really known how arthropods, the most successful animals on Earth, evolved into the diversity we see today," said research scientist and co-author Dr. Regina Wetzer. "For me, what makes this study really exciting is getting such a solid understanding of how these animals are related, so that now we can better understand how they evolved."

Because of their amazing diversity, deciphering the evolutionary history and relationships among the major subgroups of arthropods has proven difficult. Scientists have used various combinations of features, in recent years including DNA sequences, to work out which groups are related through common ancestors. To date, those attempts have been stymied by the sheer number of species and by the wild variation in body form among the groups.

One of the most important results of this new study is support for the hypothesis that the insects evolved from a group of crustaceans. So flies, honeybees, ants, and crickets all branched off the arthropod family tree from within the lineage that gave rise to today's crabs, shrimp, and lobsters. Another important finding is that the "Chelicerata" (a group that includes the spiders, scorpions, ticks, and mites) branched off very early, earlier than the millipedes, centipedes, crustaceans, and insects. That means that the spiders, for example, are more distantly related to the insects than many researchers previously thought.

This team approached the problem of illuminating the arthropod family tree by using genetic data (DNA sequences) obtained from 75 species carefully selected to sample the range of arthropod diversity. Many previous analyses were based on the sequences of a handful of genes. The researchers in this study, knowing the daunting diversity they faced, used DNA sequence information from as many genes as they could. In the end, they were able to apply data from 62 protein-coding genes to the problem, leading to an extremely well-supported analysis.
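
As an aside for readers curious about how such an analysis works in practice: the basic workflow is to concatenate aligned sequences from many genes into one "supermatrix" per species and then infer a tree from it. The toy sketch below is not the authors' pipeline (the study used far more sophisticated model-based analyses of 62 genes across 75 species); it is a minimal, distance-based stand-in using Biopython, with invented six-base "genes" for four hypothetical taxa.

# Toy illustration of a concatenated multi-gene phylogenetic analysis.
# Not the study's method; a minimal distance-based stand-in with
# invented sequences for four hypothetical taxa.
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Align import MultipleSeqAlignment
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Pretend these are two short, already-aligned "genes" for each taxon;
# the real study concatenated 62 protein-coding genes from 75 species.
gene1 = {"insect": "ATGGCC", "crustacean": "ATGGCT", "myriapod": "ATGACT", "chelicerate": "TTGACT"}
gene2 = {"insect": "GGTTAA", "crustacean": "GGTTAG", "myriapod": "GATTAG", "chelicerate": "GACTAG"}

# Concatenate the gene alignments into one supermatrix per taxon.
records = [SeqRecord(Seq(gene1[t] + gene2[t]), id=t) for t in gene1]
alignment = MultipleSeqAlignment(records)

# Build a pairwise distance matrix and a neighbor-joining tree from it.
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)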

"The Museum's collection of arthropods, and in particular its collection of crustaceans, are what made a study like this possible in the first place," says Dr. Joel W. Martin, NHM Curator of Crustacea and one of the authors who designed the study nearly eight years ago. "The wealth of stored biodiversity information contained in it, both in terms of specimens and in terms of the data, theories, and research related to those specimens, are why natural history museums exist, and why they play such a critical role in explaining the world's diversity. Studies like this confirm the incredible value, not only of existing natural history museum collections, but of continuing to add to these collections every year."

A key problem that the research team had to solve was obtaining specimens of some of the rare and obscure organisms whose DNA was needed for the analysis. With their extensive experience in field biology, the NHM scientists made a major contribution to the project here. Dr. Wetzer recalls lying on the beach with a microscope at Woods Hole, Massachusetts, hunting for specimens of a tiny, little-known crustacean that lives between grains of sand. "I got the mystacocarids we needed, but I think I also provided pretty good entertainment to the families at the beach that day," Dr. Wetzer said.

(Photo: Simon Richards)

Natural History Museum of Los Angeles County

BUDDY, CAN YOU SPARE A BANANA? STUDY FINDS THAT BONOBOS SHARE LIKE HUMANS

New research suggests that the act of voluntarily sharing something with another may not be entirely exclusive to the human experience. A study published in the March 9th issue of Current Biology, a Cell Press publication, observed that bonobos—a sister species of chimpanzees and, like chimps, our closest living relatives—consistently chose to actively share their food with others.

"It has been suggested that only humans voluntarily share their food," says lead study author Brian Hare from Duke University in North Carolina. "However, the food sharing preferences of the unusually tolerant bonobos have never been studied experimentally." Dr. Hare and Suzy Kwetuenda from the Lola ya Bonobo Sanctuary for orphaned bonobos in the Democratic Republic of the Congo conducted a study with unrelated pairs of hungry bonobos.

In the study, bonobos had to choose whether to eat some food by themselves or to give another bonobo access to it. The test subjects had the opportunity to eat the food immediately or to use a "key" to open a door to an adjacent empty room or a room that held another bonobo. The test subjects could easily see into the adjacent rooms, so they knew which one was empty and which was occupied.

"We found that the test subjects preferred to voluntarily open the recipient's door to allow them to share highly desirable food that they could have easily eaten alone—with no signs of aggression, frustration, or change in the speed or rate of sharing across trials," explains Dr. Hare. "This stable sharing pattern was particularly striking since in other, nonsharing contexts, bonobos are averse to food loss and adjust to minimize such losses."

The authors point out that it is possible that the bonobos in their study chose to share in order to obtain favors in the future. Additional studies are needed to gain further insight into why bonobos and humans share. "Given the continued debate about how to characterize the motivation underlying costly sharing in humans, it will certainly require future research to probe more precisely what psychological mechanisms motivate and maintain the preference we observe here in bonobos for voluntary, costly sharing," concludes Dr. Hare.

Cell Press

SETTING OUT TO DISCOVER NEW, LONG-LIVED ELEMENTS


Besides the 92 elements that occur naturally, scientists have been able to create 20 additional chemical elements, six of which were discovered at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt. These new elements were produced artificially with particle accelerators and are all very short-lived: they decay within a fraction of a second. However, scientists predict the existence of even heavier elements that are extremely long-lived, decaying only after years. These elements would form an island of stability. Scientists at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt have developed and applied a measuring apparatus that might allow them to discover such long-lived elements, as reported in the scientific journal Nature.

An international team of scientists headed by Michael Block has succeeded in trapping atoms of element 102, nobelium, in an ion trap. It is the first time that a so-called superheavy element has been trapped. Trapping the element allowed the research team to measure the atomic mass of nobelium with unprecedented accuracy. The atomic mass is one of the most essential characteristics of an atom; it is used to calculate the atom's binding energy, which is what holds the atom together and determines its stability. With the help of the new measuring apparatus, scientists will be able to identify long-lived elements on the so-called island of stability that can no longer be identified through their radioactive decay alone. The island of stability is predicted to lie in the vicinity of elements 114 to 120.

“Precisely measuring the mass of nobelium with our Shiptrap device was a successful first step. Now, our goal is to improve the measuring apparatus so that we can extend our method to heavier and heavier elements and, one day, may reach the island of stability”, says Michael Block, head of the research team at the GSI Helmholtz Centre.

For their measurements, Michael Block and his team built a highly complex apparatus, the ion trap “Shiptrap”, and combined it with “Ship”, the velocity filter which was already used in the discovery of six short-lived elements at GSI. To produce nobelium, the research team used the GSI accelerator to fire calcium ions onto a lead foil. With the help of Ship, they then separated the freshly produced nobelium from the projectile atoms. Inside the Shiptrap apparatus, the nobelium was first decelerated in a gas-filled cell, then the slow ions were trapped in a so-called Penning trap.

Held inside the trap by electric and magnetic fields, the nobelium ion spun on a minuscule spiral course at a specific frequency, which was used to calculate the atomic mass. With an uncertainty of merely 0.000005 percent, the new technique allows the atomic mass and binding energy to be determined with unprecedented precision and, for the first time, directly, without the help of theoretical assumptions.
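
The relation behind that step is the cyclotron equation: an ion of charge q in a magnetic field B circles at a frequency f = qB/(2πm), so measuring f gives the mass m. The short sketch below inverts that relation for an illustrative case; the field strength, charge state and frequency are assumed round numbers, not the values from the actual measurement.

# Illustrative only: recover an ion's mass from a measured cyclotron
# frequency in a Penning trap, m = q*B / (2*pi*f_c).  The field, charge
# state and frequency below are assumed round numbers, not SHIPTRAP data.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
U_KG = 1.66053906660e-27     # atomic mass unit, kg

B = 7.0                      # magnetic field in tesla (assumed)
charge_state = 1             # singly charged ion (assumed)
f_c = 4.23e5                 # measured cyclotron frequency in Hz (assumed)

mass_kg = charge_state * E_CHARGE * B / (2 * math.pi * f_c)
print(f"ion mass ~ {mass_kg / U_KG:.1f} u")   # ~254 u for these inputs

# A relative frequency uncertainty of 5e-8 (the quoted 0.000005 percent)
# maps directly onto the mass:
print(f"mass uncertainty ~ {mass_kg / U_KG * 5e-8:.2e} u")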

(Photo: G. Otto, GSI Helmholtzzentrum für Schwerionenforschung)

GSI Helmholtzzentrum für Schwerionenforschung GmbH

GENE DISCOVERY TO INCREASE BIOMASS NEEDED FOR GREEN FUEL


Manchester scientists have identified the genes that make plants grow fatter and plan to use their research to increase plant biomass in trees and other species – thus helping meet the need for renewable resources.

“The US has set the ambitious goal of generating a third of all liquid fuel from renewable sources by the year 2025. Estimates suggest that to reach that goal they would need 1 billion tonnes of biomass, which is a lot,” says Professor Simon Turner, one of the University of Manchester researchers whose BBSRC-funded study is published in Development (Wednesday 10th February 2010).

“Our work has identified the two genes that make plants grow outwards. The long, thin cells growing down the length of a plant divide outwards, giving that nice radial pattern of characteristic growth rings in trees. So you get a solid ring of wood in the centre surrounded by growing cells. Now we have identified the process by which the cells know how to grow outwards, we hope to find a way of making plants grow thicker quicker, giving us the increased wood production that could be used for biofuels or other uses.

“And there is an added benefit. There are concerns that the growing of biofuel products competes with essential food production. However, the part of the plant we have studied is the stalk – not the grain – so there will be no competition with food production.”

Professor Turner and Dr Peter Etchells, at the Faculty of Life Sciences, studied the plant Arabidopsis, which does not look like a tree but has a similar vascular system (the tissue that carries water and sugar around the plant). They investigated growth in the vascular bundles and found that the genes PXY and CLE41 directed the amount and direction of cell division. Furthermore, they found that over-expression of CLE41 caused a greater amount of growth in a well-ordered fashion, thus increasing wood production.

Professor Turner explained: “We wanted to know how the cells divided to produce this pattern, how they ‘knew’ which side to divide along, and we found that it was down to the interaction of these two genes.

“Trees are responsive to a lot of things. They stop growing in winter and start again in spring and this changes according to the amount of light and the day length. It might take a tree 150 years to grow in Finland and only ten years in Portugal.

“Now we know what genes are dictating the growth process, we can develop a system of increasing growth so that it is orientated to produce more wood – increasing the essential biomass needed for our future.”

The team are now growing poplar trees in the lab – to see if they fit the Arabidopsis model. They will use these results to develop a system of increasing wood production.

(Photo: U. Manchester)

University of Manchester

STUDY REVEALS DANGERS OF NICOTINE IN THIRD-HAND SMOKE


Nicotine in third-hand smoke, the residue from tobacco smoke that clings to virtually all surfaces long after a cigarette has been extinguished, reacts with the common indoor air pollutant nitrous acid to produce dangerous carcinogens. This new potential health hazard was revealed in a multi-institutional study led by researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab).

“The burning of tobacco releases nicotine in the form of a vapor that adsorbs strongly onto indoor surfaces, such as walls, floors, carpeting, drapes and furniture. Nicotine can persist on those materials for days, weeks and even months. Our study shows that when this residual nicotine reacts with ambient nitrous acid it forms carcinogenic tobacco-specific nitrosamines or TSNAs,” says Hugo Destaillats, a chemist with the Indoor Environment Department of Berkeley Lab’s Environmental Energy Technologies Division. “TSNAs are among the most broadly acting and potent carcinogens present in unburned tobacco and tobacco smoke.”

Destaillats is the corresponding author of a paper published in the Proceedings of the National Academy of Sciences (PNAS) titled “Formation of carcinogens indoors by surface-mediated reactions of nicotine with nitrous acid, leading to potential third-hand smoke hazards.”

Co-authoring the PNAS paper with Destaillats were Mohamad Sleiman, Lara Gundel and Brett Singer, all with Berkeley Lab’s Indoor Environment Department, plus James Pankow with Portland State University, and Peyton Jacob with the University of California, San Francisco.

The authors report that in laboratory tests using cellulose as a model indoor material exposed to smoke, levels of newly formed TSNAs detected on cellulose surfaces were 10 times higher than those originally present in the sample following exposure for three hours to a “high but reasonable” concentration of nitrous acid (60 parts per billion by volume). Unvented gas appliances are the main source of nitrous acid indoors. Since most vehicle engines emit some nitrous acid that can infiltrate the passenger compartments, tests were also conducted on surfaces inside the truck of a heavy smoker, including the surface of a stainless steel glove compartment. These measurements also showed substantial levels of TSNAs. In both cases, one of the major products found was a TSNA that is absent in freshly emitted tobacco smoke – the nitrosamine known as NNA. The potent carcinogens NNN and NNK were also formed in this reaction.

“Time-course measurements revealed fast TSNA formation, up to 0.4 percent conversion of nicotine within the first hour,” says lead author Sleiman. “Given the rapid sorption and persistence of high levels of nicotine on indoor surfaces, including clothing and human skin, our findings indicate that third-hand smoke represents an unappreciated health hazard through dermal exposure, dust inhalation and ingestion.”
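
One rough way to read that 0.4 percent figure is to assume, purely for illustration and not from the paper, that the surface reaction behaves as a pseudo-first-order process and back out an effective rate constant:

# Back-of-the-envelope reading of "0.4 percent conversion within the first
# hour", assuming (purely for illustration) pseudo-first-order kinetics:
# fraction converted = 1 - exp(-k*t).
import math

converted = 0.004   # 0.4 percent in the first hour (figure from the study)
t_hours = 1.0

k = -math.log(1.0 - converted) / t_hours   # effective rate constant, per hour
print(f"effective k ~ {k:.4f} per hour")

# Under the same assumed kinetics, conversion after a three-hour exposure:
print(f"after 3 h: {100 * (1 - math.exp(-k * 3)):.2f} % converted")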

Since the most likely human exposure to these TSNAs is through either inhalation of dust or the contact of skin with carpet or clothes, third-hand smoke would seem to pose the greatest hazard to infants and toddlers. The study’s findings indicate that opening a window or deploying a fan to ventilate the room while a cigarette burns does not eliminate the hazard of third-hand smoke. Smoking outdoors is not much of an improvement, as co-author Gundel explains.

“Smoking outside is better than smoking indoors but nicotine residues will stick to a smoker’s skin and clothing,” she says. “Those residues follow a smoker back inside and get spread everywhere. The biggest risk is to young children. Dermal uptake of the nicotine through a child’s skin is likely to occur when the smoker returns and if nitrous acid is in the air, which it usually is, then TSNAs will be formed.”

The dangers of mainstream and secondhand tobacco smoke have been well documented as a cause of cancer, cardiovascular disease and stroke, pulmonary disease and birth defects. Only recently, however, has the general public been made aware of the threats posed by third-hand smoke. The term was coined in a study that appeared in the January 2009 edition of the journal “Pediatrics,” in which it was reported that only 65 percent of non-smokers and 43 percent of smokers surveyed agreed with the statement that “Breathing air in a room today where people smoked yesterday can harm the health of infants and children.”

Anyone who has entered a confined space – a room, an elevator, a vehicle, etc. – where someone recently smoked, knows that the scent lingers for an extended period of time. Scientists have been aware for several years that tobacco smoke is adsorbed on surfaces where semi-volatile and non-volatile chemical constituents can undergo reactions, but reactions of residual smoke constituents with atmospheric molecules such as nitrous acid have been overlooked as a source of harmful pollutants. This is the first study to quantify the reactions of third-hand smoke with nitrous acid, according to the authors.

“Whereas the sidestream smoke of one cigarette contains at least 100 nanograms equivalent total TSNAs, our results indicate that several hundred nanograms per square meter of nitrosamines may be formed on indoor surfaces in the presence of nitrous acid,” says lead author Sleiman.

Co-author James Pankow points out that the results of this study should raise concerns about the purported safety of electronic cigarettes. Also known as “e-cigarettes,” electronic cigarettes claim to provide the “smoking experience,” but without the risks of cancer. A battery-powered vaporizer inside the tube of a plastic cigarette turns a solution of nicotine into a smoky mist that can be inhaled and exhaled like tobacco smoke. Since no flame is required to ignite the e-cigarette and there is no tobacco or combustion, e-cigarettes are not restricted by anti-smoking laws.

“Nicotine, the addictive substance in tobacco smoke, has until now been considered to be non-toxic in the strictest sense of the term,” says Kamlesh Asotra of the University of California’s Tobacco-Related Disease Research Program, which funded this study. “What we see in this study is that the reactions of residual nicotine with nitrous acid at surface interfaces are a potential cancer hazard, and these results may be just the tip of the iceberg.”

The Berkeley Lab researchers are now investigating the long-term stability in an indoor environment of the TSNAs produced as a result of third-hand smoke interactions with nitrous acid. The authors are also looking into the development of biomarkers to track exposures to these TSNAs. In addition, they are conducting studies to gain a better understanding of the chemistry behind the formation of these TSNAs and to find out more about other chemicals that are being produced when third-hand smoke reacts with nitrous acid.

“We know that these residual levels of nicotine may build up over time after several smoking cycles, and we know that through the process of aging, third-hand smoke can become more toxic over time,” says Destaillats. “Our work highlights the importance of third-hand smoke reactions at indoor interfaces, particularly the production of nitrosamines with potential health impacts.”

In the PNAS paper, Destaillats and his co-authors suggest various ways to limit the impact of the third-hand smoke health hazard, starting with the implementation of 100 percent smoke-free environments in public places and self-restrictions in residences and automobiles. In buildings where substantial smoking has occurred, replacing nicotine-laden furnishings, carpets and wallboard might significantly reduce exposures.

(Photo: Roy Kaltschmidt, Berkeley Lab Public Affairs)

Lawrence Berkeley National Laboratory

CALTECH NEUROSCIENTISTS DISCOVER BRAIN AREA RESPONSIBLE FOR FEAR OF LOSING MONEY

Neuroscientists at the California Institute of Technology (Caltech) and their colleagues have tied the human aversion to losing money to a specific structure in the brain—the amygdala.

The finding, described in the latest online issue of the journal Proceedings of the National Academy of Sciences (PNAS), offers insight into economic behavior, and also into the role of the brain's amygdalae, two almond-shaped clusters of tissue located in the medial temporal lobes. The amygdala registers rapid emotional reactions and is implicated in depression, anxiety, and autism.

The research team responsible for these findings consists of Benedetto de Martino, a Caltech visiting researcher from University College London and first author on the study, along with Caltech scientists Colin Camerer, the Robert Kirby Professor of Behavioral Economics, and Ralph Adolphs, the Bren Professor of Psychology and Neuroscience and professor of biology.

The study involved an examination of two patients whose amygdalae had been destroyed due to a very rare genetic disease; those patients, along with individuals without amygdala damage, volunteered to participate in a simple experimental economics task.

In the task, the subjects were asked whether or not they were willing to accept a variety of monetary gambles, each with a different possible gain or loss. For example, participants were asked whether they would take a gamble in which there was an equal probability they'd win $20 or lose $5 (a risk most people will choose to accept) and if they would take a 50/50 gamble to win $20 or lose $20 (a risk most people will not choose to accept). They were also asked if they'd take a 50/50 gamble on winning $20 or losing $15—a risk most people will reject, "even though the net expected outcome is positive," Adolphs says.
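
The arithmetic behind that remark is easy to check, and a standard loss-aversion weighting shows why most people still refuse the bet. The weighting factor used below (lambda of about 2.25) is a commonly cited estimate from the behavioral-economics literature, not a figure from this study:

# Expected value of the 50/50 gambles described above, plus a simple
# loss-averse valuation in which losses are weighted by lambda_ ~ 2.25
# (a commonly cited literature estimate, not a value from this study).
def expected_value(gain, loss, p_gain=0.5):
    return p_gain * gain - (1 - p_gain) * loss

def loss_averse_value(gain, loss, lambda_=2.25, p_gain=0.5):
    return p_gain * gain - (1 - p_gain) * lambda_ * loss

for gain, loss in [(20, 5), (20, 20), (20, 15)]:
    print(f"win ${gain} / lose ${loss}: "
          f"EV = {expected_value(gain, loss):+.2f}, "
          f"loss-averse value = {loss_averse_value(gain, loss):+.2f}")

# The win-$20/lose-$15 gamble has EV = +2.50, yet its loss-averse value is
# negative (-6.88), which is why most intact subjects turn it down.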

Both of the amygdala-damaged patients took risky gambles much more often than subjects of the same age and education who had no amygdala damage. In fact, the first group showed no aversion to monetary loss whatsoever, in sharp contrast to the control subjects.

"Monetary-loss aversion has been studied in behavioral economics for some time, but this is the first time that patients have been reported who lack it entirely," says de Martino.

"We think this shows that the amygdala is critical for triggering a sense of caution toward making gambles in which you might lose," explains Camerer. This function of the amygdala, he says, may be similar to its role in fear and anxiety.

"Loss aversion has been observed in many economics studies, from monkeys trading tokens for food to people on high-stakes game shows," he adds, "but this is the first clear evidence of a special brain structure that is responsible for fear of such losses."

California Institute of Technology (Caltech)

Friday, February 26, 2010

STRONGEST EVIDENCE TO DATE SHOWS LINK BETWEEN EXPLORATION WELL AND LUSI MUD VOLCANO


New data provides the strongest evidence to date that the world's biggest mud volcano, which killed 13 people in 2006 and displaced thirty thousand people in East Java, Indonesia, was not caused by an earthquake, according to an international scientific team that includes researchers from Durham University and the University of California, Berkeley.

Drilling firm Lapindo Brantas has denied that a nearby gas exploration well was the trigger for the volcano, instead blaming an earthquake that occurred 280 kilometers (174 miles) away. They backed up their claims in an article accepted this week for publication in the journal Marine and Petroleum Geology, by lead author Nurrochmat Sawolo, senior drilling adviser for Lapindo Brantas, and colleagues.

In response, a group of scientists from the United Kingdom, United States, Australia and Indonesia led by Richard Davies, director of the Durham Energy Institute, have written a discussion paper in which they refute the main arguments made by Nurrochmat Sawolo and document new data that provides the strongest evidence to date of a link between the well and the volcano. That paper has been accepted for publication in the same journal.

"The disaster was caused by pulling the drill string and drill bit out of the hole while the hole was unstable," Davies said. "This triggered a very large 'kick' in the well, where there is a large influx of water and gas from surrounding rock formations that could not be controlled.

"We found that one of the on-site daily drilling reports states that Lapindo Brantas pumped heavy drilling mud into the well to try to stop the mud volcano. This was partially successful and the eruption of the mud volcano slowed down. The fact that the eruption slowed provides the first conclusive evidence that the bore hole was connected to the volcano at the time of eruption."

The Lusi volcano, which first erupted on May 29, 2006, in the Porong sub-district of Sidoarjo, close to Indonesia's second city of Surabaya, East Java, now covers seven square kilometers (nearly three square miles) and is 20 meters (65 feet) thick. The mud flow has razed four villages and 25 factories. Thirteen people have died as a result of a rupture in a natural gas pipeline underneath one of the holding dams. The Lusi crater has been oozing enough mud to fill 50 Olympic-size swimming pools every day. All efforts to stem the mud flow have failed, including the construction of dams, levees, drainage channels, and even plugging the crater with concrete balls. Lusi may continue to erupt for decades, scientists believe.
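
Those figures can be cross-checked with some rough arithmetic; the Olympic-pool conversion below (roughly 2,500 cubic metres per pool) is an assumption for illustration, not a number from the article:

# Rough scale check on the Lusi figures quoted above.  An Olympic pool is
# assumed to hold about 2,500 cubic metres (50 m x 25 m x 2 m).
POOL_M3 = 2500

daily_flow_m3 = 50 * POOL_M3                  # "50 Olympic-size pools every day"
print(f"daily mud flow ~ {daily_flow_m3:,} m^3")           # ~125,000 m^3/day

covered_area_m2 = 7e6                         # seven square kilometres
thickness_m = 20                              # treated as uniform over the area
deposited_m3 = covered_area_m2 * thickness_m
print(f"mud in place ~ {deposited_m3:,.0f} m^3")           # ~140,000,000 m^3

# At the quoted flow rate, building up that deposit would take roughly
print(f"~{deposited_m3 / daily_flow_m3 / 365:.0f} years of eruption")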

Arguments over the causes of the Lusi volcano have stalled the establishment of liability for the disaster and delayed compensation to thousands of people affected by the mud. The Yogyakarta earthquake that occurred at the time of the volcano was cited by some as a possible cause of the eruption, but the research team rejected this explanation.

The Durham University-led group of scientists believe that their analysis resolves the cause beyond all reasonable doubt. According to their discussion paper, 'The pumping of heavy mud caused a reduction in the rate of flow to the surface. The reason for pumping the mud was to stop the flow by increasing the pressure exerted by the mud column in the well and slowing the rate of flux of fluid from surrounding formations.'
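
The physics invoked here is simple hydrostatics: the pressure at the bottom of the well equals the mud density times gravity times the height of the mud column, so denser mud pushes back harder against fluids trying to flow into the bore. The depth and densities in the sketch below are assumed for illustration, not taken from the well records:

# Hydrostatic pressure exerted by a column of drilling mud, P = rho * g * h.
# The depth and densities are assumed for illustration only.
G = 9.81   # m/s^2

def bottom_hole_pressure_mpa(mud_density_kg_m3, depth_m):
    return mud_density_kg_m3 * G * depth_m / 1e6

depth = 2800                    # assumed well depth, m
for rho in (1000, 1200, 1600):  # water, light mud, heavy "kill" mud, kg/m^3
    p = bottom_hole_pressure_mpa(rho, depth)
    print(f"mud density {rho} kg/m^3 -> bottom-hole pressure {p:.1f} MPa")

# Raising the mud density raises this pressure; once it exceeds the pore
# pressure of the surrounding formations, the influx (the "kick") is
# suppressed, which is why pumping heavy mud slowed the eruption.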

"An earthquake trigger can be ruled out because the earthquake was too small given its distance, and the stresses produced by the earthquake were minute – smaller than those created by tides and weather," said co-author Michael Manga, professor of earth and planetary science at the University of California, Berkeley.

The group of scientists has identified five critical drilling errors as the causes of the Lusi mud volcano eruption:

•having a significant open-hole section with no protective casing
•overestimating the pressure the well could tolerate
•deciding, after a complete loss of returns, to pull the drill string out of an extremely unstable hole
•pulling the drill bit out of the hole while losses were occurring
•not identifying the kick rapidly enough

"This is the clearest evidence uncovered so far that the Lusi mud volcano was triggered by drilling," Davies said. "We have detailed data collected over two years that show the events that led to the creation of the Lusi volcano."

"The observation that pumping mud into the hole caused a reduction in eruption rate indicates a direct link between the wellbore and the eruption," he added. "The decision was made to pull the drill bit out of the hole without verifying that a stable mud column was in place and it was done while severe circulating mud losses were in progress. This procedure caused the kick."

(Photo: Channel 9 Australia)

University of California, Berkeley

HOMEBUILDING BEYOND THE ABYSS


Evidence from the Challenger Deep – the deepest surveyed point in the world's oceans – suggests that tiny single-celled creatures called foraminifera living at extreme depths of more than ten kilometres build their homes using material that sinks down from near the ocean surface.

The Challenger Deep is located in the Mariana Trench in the western Pacific Ocean. It lies in the hadal zone beyond the abyssal zone, and plunges down to a water depth of around 11 kilometres.

"The hadal zone extends from around six kilometres to the deepest seafloor. Although the deepest parts of the deepest trenches are pretty inhospitable environments, at least for some types of organism, certain kinds of foraminifera are common in the bottom sediments," said Professor Andrew Gooday of the National Oceanography Centre, Southampton (NOCS) and member of a UK-Japanese team studying these organisms in samples collected in 2002 during a Japan-USA-Korea expedition to study life in the western depression of the Challenger Deep.

The researchers, whose findings appear in the latest issue of the journal Deep Sea Research, used the remotely operated vehicle KAIKO, operated by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to take core samples from the soft sediment of the trench floor. Among many foraminiferans with an organic shell (or 'test'), they found four undescribed specimens with agglutinated tests.

"The Challenger Deep is an extreme environment for agglutinated foraminifera, which construct their tests from a wide range of particles cemented together by calcareous or organic matter," said Gooday. "At these great depths, particles made from biologically formed calcite and silica, as well as minerals such as quartz, should dissolve, leaving only clay grains available for test building."

The researchers were therefore surprised to discover that foraminiferan tests sampled from the Challenger Deep contained calcareous components, including the dissolved remnants of coccoliths, the calcium carbonate plates of tiny algae called coccolithophores, and planktonic foraminiferan test fragments.

The organic test surface of one species was densely pitted with imprints, which the researchers interpreted as representing mineral grains of various types, including quartz, which subsequently dissolved. Agglutinated particles, presumed to be clay minerals, survived only in one specimen.

"Our observations demonstrate that coccoliths, and probably also planktonic foraminiferan tests, reach the Challenger Deep intact," said Gooday. "These particles were probably transported to these extreme depths in rapidly sinking marine snow, the aggregated remains of phytoplankton that lived in the sunlit surface ocean, or in faecal pellets from zooplankton."

It seems likely, therefore, that at least some agglutinated foraminifera living at extreme hadal depths build their homes from material that sinks down from the ocean above, rather like manna from heaven.

(Photo: National Oceanography Centre)

National Oceanography Centre

NEW PICTURE OF ANCIENT OCEAN CHEMISTRY ARGUES FOR CHEMICALLY LAYERED WATER


A research team led by biogeochemists at the University of California, Riverside has developed a detailed and dynamic three-dimensional model of Earth's early ocean chemistry that can significantly advance our understanding of how early animal life evolved on the planet.

Working on rock samples from the Doushantuo Formation of South China, one of the oldest fossil beds and long viewed by paleontologists to be a window to early animal evolution, the research team is the first to show that Earth's early ocean chemistry during a large portion of the Ediacaran Period (635-551 million years ago) was far more complex than previously imagined.

Their work is the first comprehensive geochemical study of the Doushantuo Formation to investigate the structure of the ocean going from shallow to deep water environments. It is also one of the most comprehensive studies for any Precambrian interval. (The Precambrian refers to a stretch of time spanning from the inception of the Earth approximately 4.5 billion years ago to about 540 million years ago. It was in the Precambrian when the first single-celled microbes evolved 3.5 billion years ago or earlier, followed by the first multicellular animals much later, around 700 million years ago.)

The researchers' model for the ancient ocean argues for a stratified marine basin, one with a chemically layered water column. While the surface ocean was oxygen-rich, the deep ocean was ferruginous – oxygen-deprived and iron-dominated. Further, sandwiched in this deep ocean was a dynamic wedge of sulfidic water, highly toxic to animal life, that impinged against the continental shelf.

Dominated by dissolved hydrogen sulfide, the sulfidic wedge was in a state of flux, varying in size and capable of encroaching on previously oxygenated areas of the continental shelf — killing all animal life there. The overall picture is a marine basin with co-existing oxygen-rich, sulfidic and ferruginous water layers.

Study results appeared Feb. 11 in Science Express.

In the modern sulfur-rich ocean, hydrogen sulfide in oxygen-poor waters reacts with iron to form the mineral pyrite, thus stripping the dissolved iron from the water column. But the researchers' results show that under specific geochemical conditions in the early ocean, when levels of dissolved sulfate (the source of hydrogen sulfide in the ocean) and oxygen were particularly low compared to the modern ocean, layers of sulfidic waters could coexist with ferruginous water masses, and even persist for long periods of time.
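
A simplified, textbook-style net scheme for that iron-stripping chemistry (a schematic illustration, not a reaction scheme given in the paper) is:

SO4^2- + 2 CH2O → H2S + 2 HCO3^-   (microbial sulfate reduction)
Fe^2+ + 2 HS^- → FeS2 + H2         (pyrite precipitation, simplified net reaction)

With little sulfate available, the first step yields too little sulfide to strip all of the dissolved iron as pyrite, which is consistent with the coexistence of sulfidic and ferruginous water masses described above.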

"This is an entirely new interpretation of ancient ocean chemistry," said Chao Li, a research specialist in UC Riverside's Department of Earth Sciences and the first/lead author of the research paper. "Our model provides a brand-new backdrop for the earliest evolution of animal life on the planet. We show that the sulfidic ocean wedge, along with an absence of oxygen, can hinder the colonization of early animals on the shallow seafloor and influence their evolution as they take a foothold. In other words, we cannot ignore hydrogen sulfide when piecing together how animals and other eukaryotes such as algae evolved on our planet."

The researchers posit that their robust pattern of a stratified marine basin is the best example of a new paradigm in studies of Precambrian ocean chemistry. They predict the record of much of the early ocean elsewhere will show similarities to the complex chemical layering seen in South China.

"This new world order asks that we take into account co-occurring spatial variations in water chemistry in an ocean basin, specifically when moving from near the shallow shoreline along continental shelves to progressively outwards into deeper waters, and when applying a diverse range of complementary geochemical analyses to elucidate these changes in ocean chemistry," said Gordon Love, an assistant professor of biogeochemistry, who collaborated on the study and in whose lab Li works.

Li explained that in the scientific literature the generally patchy fossil record of early animals observed through the Ediacaran has largely been attributed to poor preservation of fossils. The new research shows, however, that changes in environmental conditions, in this case variations in distribution of hydrogen sulfide, may explain gaps seen in the Ediacaran fossil record.

"Our model points to early animal life having to cope with changing chemical environments even in the shallow waters of the continental shelf," said Love, the principal investigator on the National Science Foundation (NSF) grant that funded the study. "At times, movement of toxic sulfide-rich waters into the shallow water would be calamitous to animal life. This well explains the patchy time record of animal fossils in most Ediacaran basins."

Timothy Lyons, a professor of biogeochemistry and a co-principal investigator on the NSF grant, explained that only an incomplete temporal record of animal microfossils has been unearthed in the Doushantuo Formation despite considerable efforts.

"Much of the unequivocal fossil evidence for animals is in the form of microfossil cysts found in only a few sedimentary layers, suggesting that the early animals were environmentally stressed," he said. "An explanation for this pattern is certain to lie in our model."

According to the researchers, a stratified marine basin was favored by an overall deficiency of dissolved sulfate in seawater following a long history of oxygen deficiency in the ocean. Ordinarily, sulfate gets introduced into the ocean from the weathering of continental sulfide minerals exposed to an atmosphere with photosynthetically produced oxygen. But the researchers argue that major glaciation events predating Doushantuo time exacerbated the scarcity of sulfate. They note that if glaciation was globally extensive, gas and chemical interactions between the oceans and atmosphere would be suppressed by a layer of ice cover in many areas.

"Ocean chemistry changes as the ice coverage acts like a pie crust sealing off the ocean interior from the atmosphere," Love said. "The effects of such ice coverage are a reduction of sulfate inputs into the ocean brought in by rivers and a buildup of dissolved iron in the deep ocean sourced by volcanic activity along the mid-ocean ridges. Later, as the ice cover abated, sulfate inputs from rivers localized the animal-inhibiting wedge of hydrogen sulfide along the shallow basin margins."

(Photo: Alex Sessions, Caltech)

University of California, Riverside

Thursday, February 25, 2010

SIGNS OF LIQUID WATER IN SATURNIAN MOON


Scientists working on the Cassini space mission have found negatively charged water ions in the ice plume of Enceladus. Their findings, based on analysis of data taken during plume fly-throughs in 2008 and reported in the journal Icarus, provide evidence for the presence of liquid water, suggesting that ingredients for life may exist inside the icy moon. The Cassini plasma spectrometer, used to gather these data, also found other species of negatively charged ions, including hydrocarbons.

“While it’s no surprise that there is water there, these short-lived ions are extra evidence for sub-surface water and where there’s water, carbon and energy, some of the major ingredients for life are present,” said lead author Andrew Coates from University College London’s Mullard Space Science Laboratory.

“The surprise for us was to look at the mass of these ions. There were several peaks in the spectrum, and when we analysed them we saw the effect of water molecules clustering together one after the other.” The measurements were made as Cassini plunged through Enceladus’ plume on March 12, 2008.

Enceladus thus joins Earth, Titan and comets among the places in the solar system where negatively charged ions are known to exist. Negative oxygen ions were discovered in Earth’s ionosphere at the dawn of the space age. At Earth’s surface, negative water ions are present where liquid water is in motion, such as at waterfalls or in crashing ocean waves.

The Cassini plasma spectrometer, originally designed to take data in Saturn’s magnetic environment, measures the density, flow velocity and temperature of ions and electrons that enter the instrument. But since the discovery of Enceladus’ water ice plume, the instrument has also successfully captured and analysed samples of material in the jets.

Early in its mission, Cassini discovered the plume that fountains water vapour and ice particles above Enceladus. Since then, scientists have found that these water products dominate Saturn’s magnetic environment and create Saturn’s huge E-ring.

At Titan, the same instrument detected extremely large negative hydrocarbon ions with masses up to 13,800 times that of hydrogen. A paper in Planetary and Space Science by Coates and colleagues in December 2009 showed that, at Titan, the largest hydrocarbon or nitrile ions are seen at the lowest altitudes of the atmosphere that Cassini flew (950 kilometers, or 590 miles). They suggest these large ions are the source of the smog-like haze that blocks most of Titan’s surface from view. They may be representative of the organic mix called “tholins” by Carl Sagan when he produced the reddish brew of prebiotic chemicals in the lab from gases that were known to be present in Titan’s atmosphere. Tholins that may be produced in Titan’s atmosphere could fall to the moon’s surface and may even make up the sand grains of the dunes that dominate part of Titan’s equatorial region.

The new findings add to our growing knowledge about the detailed chemistry of Enceladus’ plume and Titan’s atmosphere, giving new understanding of environments beyond Earth where prebiotic or life-sustaining environments might exist.

(Photo: NASA)

Science & Technology Facilities Council

RICE PHYSICISTS KILL CANCER WITH 'NANOBUBBLES'

Using lasers and nanoparticles, scientists at Rice University have discovered a new technique for singling out individual diseased cells and destroying them with tiny explosions. The scientists used lasers to make "nanobubbles" by zapping gold nanoparticles inside cells. In tests on cancer cells, they found they could tune the lasers to create either small, bright bubbles that were visible but harmless or large bubbles that burst the cells.

"Single-cell targeting is one of the most touted advantages of nanomedicine, and our approach delivers on that promise with a localized effect inside an individual cell," said Rice physicist Dmitri Lapotko, the lead researcher on the project. "The idea is to spot and treat unhealthy cells early, before a disease progresses to the point of making people extremely ill."

The research is available online in the journal Nanotechnology.

Nanobubbles are created when gold nanoparticles are struck by short laser pulses. The short-lived bubbles are very bright and can be made smaller or larger by varying the power of the laser. Because they are visible under a microscope, nanobubbles can be used to either diagnose sick cells or to track the explosions that are destroying them.

In laboratory studies published last year, Lapotko and colleagues at the Laboratory for Laser Cytotechnologies at the A.V. Lykov Heat and Mass Transfer Institute in Minsk, Belarus, applied nanobubbles to arterial plaque. They found that they could blast right through the deposits that block arteries.

"The bubbles work like a jackhammer," Lapotko said.

In the current study, Lapotko and Rice colleague Jason Hafner, associate professor of physics and astronomy and of chemistry, tested the approach on leukemia cells and cells from head and neck cancers. They attached antibodies to the nanoparticles so they would target only the cancer cells, and they found the technique was effective at locating and killing the cancer cells.

Lapotko said the nanobubble technology could be used for "theranostics," a single process that combines diagnosis and therapy. In addition, because the cell-bursting nanobubbles also show up on microscopes in real time, Lapotko said the technique can be used for post-therapeutic assessment, or what physicians often refer to as "guidance."

Hafner said, "The mechanical and optical properties of the bubbles offer unique advantages in localizing the biomedical applications to the individual cell level, or perhaps even to work within cells."

Rice University

HOW ALGAE MASTERED QUANTUM PHYSICS


Simple single-celled algae use highly sophisticated quantum physics to harvest and convert solar energy for their survival, a new study suggests.

The study, published in the prestigious science journal Nature, was by an international team of Canadian, Italian and Australian researchers, including two UNSW biophysicists – Professor Paul Curmi and Dr Krystyna Wilk.

It sheds new light on the process of photosynthesis used by green plants and algae to harvest energy from the sun. The findings may open up new avenues to develop organic solar cells and other electronic devices that emit or are initiated by light, such as lasers and visual displays.

The water-dwelling algae are in effect highly miniaturised quantum computers, the study suggests. They have mastered the process of photosynthesis so well that they can convert sunlight into electrical energy with near-perfect efficiency.

They do so by having their light-harvesting proteins "wired" together through a phenomenon known as quantum coherence, enabling them to transfer energy from one protein to another with lightning-fast speed and so reduce energy loss along the energy conversion pathway.

The study is part of a larger, ongoing collaboration between the biophysics lab at the UNSW School of Physics, the Centre for Applied Medical Research, St Vincent's Hospital Sydney, and the University of Toronto.

"We are working on understanding how a group of single-celled algae can thrive under low light conditions in marine and freshwater habitats," Professor Curmi said.

"To do this, they must be incredibly efficient in capturing all solar energy and converting it to chemical energy via photosynthesis. They cannot afford to let any solar energy escape, so they have evolved elaborate antenna systems that trap light."

(Photo: UNSW)

The University of New South Wales

Wednesday, February 24, 2010

STUDENTS FIND LOST OFFICE GEAR WITH TINY SENSORS


Miniature sensors being developed by CSIRO promise to provide the answers to questions that seem to arise regularly in modern office workplaces, such as: “Where’s my pen?” and “Who nicked my stapler?”

CSIRO is developing FLECK™ Nano – a miniature version of the highly successful FLECK sensor nodes that independently record environmental conditions then cooperate with each other to wirelessly send their data to a collection point.

Two students working with CSIRO's vacation scholarship scheme have been applying their research skills to bringing FLECK Nanos indoors. Doing so means things like temperature and power use can be monitored at a very refined level and small objects can be tracked unobtrusively.

“The idea of pervasive computing has been touted for some time, but is not yet available for everyday office items,” CSIRO ICT Centre researcher, Phil Valencia, says.

“We’re aiming to enable a level of ubiquitous sensing that hasn’t been experienced yet and see how it impacts on day-to-day office activities.”

Two university students have spent their summer holidays working with Mr Valencia as part of CSIRO’s vacation scholarship scheme.

A software engineering student at the Australian National University, David Kooymans, is working on reducing the energy demands of mobile FLECK Nanos.

“They communicate with a node in a known location using radio waves,” Mr Kooymans says.

“The more frequently location information is updated the more useful the other data becomes, but the transmitters consume a high proportion of energy so there’s a trade-off to be negotiated there.”
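
A back-of-the-envelope duty-cycle calculation shows the shape of that trade-off. All of the currents and the battery capacity below are assumed, order-of-magnitude values for a small wireless node, not FLECK Nano specifications:

# Rough battery-life estimate for a node that sleeps most of the time and
# wakes briefly to transmit a location beacon.  All numbers are assumed
# order-of-magnitude values, not FLECK Nano specifications.
SLEEP_CURRENT_MA = 0.005   # 5 microamps while asleep
TX_CURRENT_MA = 20.0       # radio on, transmitting
TX_BURST_S = 0.01          # 10 ms per beacon
BATTERY_MAH = 240          # small coin cell

def battery_life_days(beacon_interval_s):
    duty = TX_BURST_S / beacon_interval_s
    avg_ma = duty * TX_CURRENT_MA + (1 - duty) * SLEEP_CURRENT_MA
    return BATTERY_MAH / avg_ma / 24

for interval in (1, 10, 60, 600):   # beacon every second ... every 10 minutes
    print(f"beacon every {interval:>4} s -> ~{battery_life_days(interval):.0f} days")

# Beaconing every second drains the cell in a couple of months; beaconing
# every ten minutes stretches it to years, hence the trade-off between
# location freshness and battery life.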

An electrical engineering student at the University of Queensland, Blake Newman, is looking for ways FLECK Nanos could ‘scavenge’ energy from the environment.

“You don’t want to be changing batteries in thousands of little devices so we are designing energy scavenging circuitry that will make power from whatever source it can,” Mr Valencia says.

“If a device doesn’t need much power, it’s amazing how much energy is all around just waiting to be tapped. For example, a FLECK Nano attached to a stapler on a desk in a windowless office is able to function if there is enough light to work by.”

(Photo: Samuel Klistorner, CSIRO)

The Commonwealth Scientific and Industrial Research Organisation

CLOTHING SOLUTION FOR CHILLY OPERATING ROOM ENVIRONMENT


Hugging heated IV bags, layering undergarments and wrapping themselves in blankets, Barry Finegan and his co-workers do what they can to get warm before heading into the surgical theatre.

Hospital operating rooms are traditionally kept chilly, well below standard room temperature, for the comfort of surgeons sweating under the warm lights needed for their work. But that can leave others on the surgical team literally out in the cold.

Such is the plight for support staff in operating rooms everywhere, and Finegan is hoping bright minds at the University of Alberta can solve the dilemma.

Finegan, a professor in the U of A's Department of Anesthesiology and Pain Medicine and an OR team member at the University of Alberta Hospital, was tired of seeing his colleagues shivering as they cared for patients undergoing complex and often lengthy surgeries, and wanted to do something about it. He didn't have far to go.

"I thought, 'we have the expertise here at the university in clothing materials and design. We must be able to improve the technology we are using in our OR garments.'"

His quest turned out to be a great fit for the Department of Human Ecology in the Faculty of Agricultural, Life and Environmental Sciences. Under the guidance of human ecology professors Megan Strickfaden and Rachel McQueen, third-year students Annette Aslund and Alex Pedden are working with master's student Yin Xu to design a garment that will keep OR team members like Finegan more comfortable during surgery.

The trio is working on an improved design for a warm-up jacket that members of surgical teams can wear in the operating room.

"We have a challenging environment in the OR, in that we've got to keep the patient warm but at the same time, we have to make the environment appropriate for the people who work there," Finegan said.

While surgeons and other staff directly involved in an operation are kept warm by their scrubs and the lights over the table, those on the periphery are vulnerable to the cool temperatures that help keep the surgeons comfortable and focused.

Anesthesiologists like Finegan can sit through surgeries that can go on for up to 12 hours while they continuously monitor their patients, which often also means sitting under a vent that pushes cool air into the room. And nurses have to remove their current warm-up jackets to avoid contamination when preparing patients for surgery.

"The people who aren't patients or directly involved in the surgical process are in an uncomfortably cold environment," Finegan noted. And because the room must be kept sterile, homegrown solutions like sweaters or other garments aren't viable.

Aslund, Pedden and Xu conducted field research at the U of A hospital, monitoring room temperature and humidity in the surgical suites, observing medical teams at work and collecting textile samples from existing operating room garb. They also videotaped and photographed the donning and removal of the garments and, using body diagrams, had the staff indicate where they felt hot or cold.

High on the wish list for both groups were garments that were sterile, professional-looking, thermally functional, washable, well-fitting and approved for use by Alberta Health Services. Nurses wanted less baggy sleeves, anesthesiologists wanted vests, and both groups wanted multiple pockets.

Using their data, the students drew up a prototype design that will be the subject of a pilot study. Securing grants to continue the research is next, Strickfaden said. "We need to do more textile analysis and build on the early concepts we have for a garment design."

The project not only provided an opportunity for Strickfaden and McQueen to collaborate within their department, but also gave their team a chance to view a problem holistically and work directly with those affected – the ideal approach in human ecology. It was an "eye-opener," especially for the undergraduate students, Strickfaden added.

"The OR was a complex environment and they really got to experience a research situation, interviewing people and getting out in the field."

Aslund is studying for a science degree in clothing and textiles, with a minor in product development. She got "a real sense of satisfaction" from her field research, which involved interviewing members of the OR team. "It felt more university-like than sitting in a classroom. And we just scratched the surface of it. I would have liked to go on to the textile testing. But I did gain some knowledge about taking a holistic approach to solving a problem."

She believes her early foray into field work will serve her well, no matter what her future career brings. "If I have to do a focus group in marketing or business I'll be able to talk to a client or conduct different kinds of research."

The experience has been just as rewarding for Finegan, who looks forward to the final outcome.

"I was impressed by the detailed approach the students and the department took to assessing our environment. For me as a medical researcher, it's always illuminating to work with those in other disciplines and realize the strength of cross-disciplinary research. We forget sometimes the importance of human ecology in ensuring that the environment in which we work is optimal. Obviously for us, temperature issues are potential distractions."

(Photo: U. Alberta)

University of Alberta

SOME MORBIDLY OBESE PEOPLE ARE MISSING GENES


A small but significant proportion of morbidly obese people are missing a section of their DNA, according to research published today in Nature. The authors of the study, from Imperial College London and ten other European Centres, say that missing DNA such as that identified in this research may be having a dramatic effect on some people's weight.

According to the new findings, around seven in every thousand morbidly obese people are missing a part of their DNA, containing approximately 30 genes. The researchers did not find this kind of genetic variation in any normal weight people.

There are an estimated 700,000 morbidly obese people in England, with a Body Mass Index (BMI) of over 40. Researchers believe that the weight problems of around one in twenty morbidly obese people are due to known genetic variations, including mutations and missing DNA. Many more similar obesity-causing mutations, such as the one in this study, remain to be found, says the team.
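
For reference, BMI is weight in kilograms divided by the square of height in metres, and the study's frequency figure translates into a concrete headcount; the example height and weight below are made up for illustration:

# BMI = weight (kg) / height (m)^2; "morbidly obese" here means a BMI over 40.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# A made-up example: 1.70 m tall and 120 kg gives a BMI of about 41.5.
print(f"BMI = {bmi(120, 1.70):.1f}")

# Seven per thousand of England's estimated 700,000 morbidly obese people:
print(f"implied carriers of the deletion: ~{700_000 * 7 / 1000:,.0f}")   # ~4,900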

Previous research had identified several genetic variations that contribute to obesity, most of which are single mutations in a person's DNA that change the function of a gene. Today's research is the first to clearly demonstrate that obesity in otherwise physically healthy individuals can be caused by a rare genetic variation in which a section of a person's DNA is missing. The researchers do not yet know the function of the missing genes, but previous research has suggested that some of them may be associated with delayed development, autism and schizophrenia.

People inherit two copies of their DNA, one from their mother and one from their father. Sometimes, missing one copy of one or several genes - as in the individuals identified in this latest study - can have a drastic effect on the body.

The researchers believe there may be other genetic deletions, in addition to those identified today, that increase a person's risk of becoming obese. They hope that by identifying genetic variations causing people to be extremely obese, they can develop genetic tests to help determine the best course of treatment for these individuals.

Professor Philippe Froguel, lead author of the study from the School of Public Health at Imperial College London, said: "Although the recent rise in obesity in the developed world is down to an unhealthy environment, with an abundance of unhealthy food and many people taking very little exercise, the difference in the way people respond to this environment is often genetic. It is becoming increasingly clear that for some morbidly obese people, their weight gain has an underlying genetic cause. If we can identify these individuals through genetic testing, we can then offer them appropriate support and medical interventions, such as the option of weight loss surgery, to improve their long-term health."

The Imperial team first identified the missing genes in teenagers and adults who had learning difficulties or delayed development. They found 31 people who had nearly identical 'deletions' in one copy of their DNA. All of the adults with this genetic change had a BMI of over 30, which means they were obese.

The researchers then went on to study the genomes of 16,053 people who were either obese or normal weight (with a BMI between 18.5 and 25), from eight European cohorts. They identified 19 more people with the same genetic deletion, all of whom were severely obese, but did not find the deletion in any healthy normal-weight people. This means the genetic deletion was found in seven in every 1,000 morbidly obese people, making it the second most frequent known genetic cause of obesity.

People with the deletion tended to be normal weight toddlers, becoming overweight during childhood and then severely obese as adults. The researchers also looked at the genomes of their parents, and found that 11 people inherited the deletion from their mother and four from their father, with ten of the deletions occurring by chance. All the parents with the deletion were also obese.

The next step in this research will be to determine the function of the missing genes. Previous studies have suggested that some of the genes may be associated with delayed development, autism and schizophrenia, so the researchers also plan to investigate the possible links between these conditions and obesity.

According to first author Dr Robin Walters, from the School of Public Health at Imperial College London, there are likely to be many more variations like the deletion identified in this study that remain to be found. He said: "Although individually rare, the combined effect of several variations of this type could explain much of the genetic risk for severe obesity, which is known to run in families. Previously identified genetic influences on weight gain have a much less drastic effect - increasing weight by just one or two pounds, for example. By looking at groups of people with severe obesity, we may be more likely to find these rare genetic variations."

Professor Froguel added: "The method used in the study could also help find novel genetic variations that affect the risk of other conditions. We identified this variant by first studying very obese individuals, and then homing in on the region of interest in larger, less severely affected groups. This powerful approach could be used to identify genetic influences on other diseases that are poorly understood at present, such as Type 2 diabetes."

(Photo: ICL)

Imperial College London

RECORD-BREAKING COLLISIONS

0 comentarios

In December, the Large Hadron Collider, the world’s largest particle accelerator, shattered the world record for highest energy particle collisions.

Recently, a team led by researchers from MIT, CERN and the KFKI Research Institute for Particle and Nuclear Physics in Budapest, Hungary, completed work on the first scientific paper analyzing the results of those collisions. Its findings show that the collisions produced an unexpectedly high number of particles called mesons -- a factor that will have to be taken into account when physicists start looking for rarer particles and for the theorized Higgs boson.

“This is the very first step in a long road to performing extremely sensitive analyses that can detect particles produced only in one in a billion collisions,” says Gunther Roland, MIT associate professor of physics and an author of the new paper.

Roland and MIT professors Wit Busza and Boleslaw Wyslouch, who are members of the CMS (compact muon solenoid) collaboration, were among the study leaders. The CMS collaboration runs one of four detectors at the collider.

The Large Hadron Collider (LHC), located underground near Geneva, Switzerland, started its latest run in late November. On Dec. 8, the proton beams circulating in the 17-mile ring collided at a peak energy of 2.36 teraelectronvolts (TeV), breaking the previous record of 1.96 TeV achieved at the Fermi National Accelerator Lab. Because of Einstein’s equation, E = mc², which relates mass and energy, higher collision energies should produce heavier particles -- possibly including some never seen before.
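
As a rough back-of-the-envelope illustration of that relation -- not part of the CMS analysis -- the short Python sketch below converts collision energies in TeV into an equivalent mass scale via E = mc², using standard physical constants and the energy figures quoted above.

```python
# Illustrative only (not the CMS analysis): convert collision energy in TeV
# to the equivalent mass via E = m * c^2, using standard constants.

ELECTRON_VOLT = 1.602176634e-19   # joules per eV
SPEED_OF_LIGHT = 2.99792458e8     # metres per second
PROTON_MASS_KG = 1.67262192369e-27

def tev_to_kg(energy_tev: float) -> float:
    """Return the mass (kg) equivalent to a given energy in TeV."""
    energy_joules = energy_tev * 1e12 * ELECTRON_VOLT
    return energy_joules / SPEED_OF_LIGHT**2

for label, tev in [("Fermilab record", 1.96), ("LHC, Dec. 8", 2.36), ("LHC design", 14.0)]:
    kg = tev_to_kg(tev)
    print(f"{label}: {tev} TeV ~ {kg:.2e} kg ~ {kg / PROTON_MASS_KG:.0f} proton rest masses")
```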

In the new paper, submitted to the Journal of High Energy Physics by CMS, the physicists analyzed the number of particles produced in the aftermath of the high-energy collisions. When protons collide, their energy is predominantly transformed into particles called mesons — specifically, two types of mesons known as pions and kaons.

To their surprise, the researchers found that the number of those particles increased faster with collision energy than predicted by their models, which were based on results of lower-energy collisions.

Taking the new findings into account, the team is now tuning its predictions of how many of those mesons will be found during even higher-energy collisions. When those high-energy experiments are conducted, it will be critical to know how many such particles to expect so they can be distinguished from rarer particles.

“If we’re looking for rare particles later on, these mesons will be in the background,” says Roland. “These results show us that our expectations were not completely wrong, but we have to modify things a bit.”

Using the Large Hadron Collider, physicists hope to eventually detect the Higgs boson, a particle that is theorized to give all other particles their mass, as well as evidence for other physical phenomena such as supersymmetry, extra dimensions of space and the creation of a new form of matter called quark-gluon plasma (QGP). The new data provide an important reference point for when CMS looks for signatures of QGP creation in collisions of lead ions at the LHC later this year.

The CMS team, which includes more than 2,000 scientists around the world, has 45 members (including faculty, students and research scientists) from the MIT Laboratory for Nuclear Science’s Particle Physics Collaboration and heavy-ion research groups.

The Large Hadron Collider is capable of creating collisions up to 14 TeV, but scientists are gradually easing the machine up to that level to try to avoid safety issues that have arisen in the past. In September 2008, the collider had to be shut down for several months after a connector joining two of the collider’s magnets failed, causing an explosion and leakage of the liquid helium that cools the magnets.

During the collider’s next run in March, researchers hope to create collisions of 7 TeV, says Roland. The success of the latest effort “makes us extremely optimistic about the detector,” he says. “It performed beautifully during the run.”

(Photo: CERN)

MIT

PCS AROUND THE WORLD UNITE TO MAP THE MILKY WAY

0 comentarios

At this very moment, tens of thousands of home computers around the world are quietly working together to solve the largest and most basic mysteries of our galaxy.

Enthusiastic and inquisitive volunteers from Africa to Australia are donating the computing power of everything from decade-old desktops to sleek new netbooks to help computer scientists and astronomers at Rensselaer Polytechnic Institute map the shape of our Milky Way galaxy. Just this month, the collective computing power of these humble home computers surpassed one petaflop, a speed that exceeds that of the world’s second-fastest supercomputer.

The project, MilkyWay@Home, uses the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which is widely known for the SETI@home project used to search for signs of extraterrestrial life. Today, MilkyWay@Home has outgrown even this famous project in terms of speed, making it the fastest computing project on the BOINC platform and perhaps the second-fastest public distributed computing program ever in operation (just behind Folding@home).

The interdisciplinary team behind MilkyWay@Home, which ranges from professors to undergraduates, began formal development on the BOINC platform in July 2006 and worked tirelessly to build a volunteer base from the ground up and grow the project’s computational power.

Each user participating in the project registers a computer and dedicates a chosen percentage of the machine’s processing power to calculations for the project. For MilkyWay@Home, this means that each personal computer uses data gathered about a very small section of the galaxy to map its shape, density, and movement.

In particular, computers donating processing power to MilkyWay@Home are looking at how the different dwarf galaxies that make up the larger Milky Way galaxy have been moved and stretched following their merger with the larger galaxy millions of years ago. This is done by studying each dwarf’s stellar stream. Their calculations are providing new details on the overall shape and density of dark matter in the Milky Way galaxy, which remains largely unknown.

The galactic computing project had very humble beginnings, according to Heidi Newberg, associate professor of physics, applied physics, and astronomy at Rensselaer. Her personal research to map the three-dimensional distribution of stars and matter in the Milky Way using data from the extensive Sloan Digital Sky Survey could not find the best model to map even a small section of a single galactic star stream in any reasonable amount of time.

“I was a researcher sitting in my office with a very big computational problem to solve and very little personal computational power or time at my fingertips,” Newberg said. “Working with the MilkyWay@Home platform, I now have the opportunity to use a massive computational resource that I simply could not have as a single faculty researcher, working on a single research problem.”

Before taking the research to BOINC, Newberg worked with Malik Magdon-Ismail, associate professor of computer science, to create a stronger and faster algorithm for her project. Together they greatly increased the computational efficiency and set the groundwork for what would become the much larger MilkyWay@Home project.

“Scientists always need additional computing power,” Newberg said. “The massive amounts of data out there make it so that no amount of computational power is ever enough.” Thus, her work quickly exceeded the limits of laboratory computers and the collaboration to create MilkyWay@Home formally began in 2006 with the assistance of the Claire and Roland Schmitt Distinguished Professor of Computer Science Boleslaw Szymanski; Associate Professor of Computer Science Carlos Varela; postdoctoral research assistant Travis Desell; as well as other graduate and undergraduate students at Rensselaer.

This extensive collaboration has advanced the astrophysical goals of the project by leaps and bounds, but important discoveries have also been made along the way in computational science: new algorithms that keep the extremely distributed and diverse MilkyWay@Home system working well, even though volunteered computers can be highly unreliable.

“When you use a supercomputer, all the processors are the same and in the same location, so they are producing the same results at the same time,” Varela said. “With an extremely distributed system, like we have with MilkyWay@Home, we are working with many different operating systems that are located all over the globe. To work with such asynchronous results we developed entirely new algorithms to process work as it arrives in the system.” This makes data from even the slowest of computers still useful to the project, according to Varela. “Even the slowest computer can help if it is working on the correct problem in the search.”
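
For readers curious what "processing work as it arrives" can look like in practice, here is a minimal, hypothetical Python sketch -- not MilkyWay@Home's actual code -- in which a toy parameter search incorporates results from simulated workers of very different speeds in whatever order they finish.

```python
# Hypothetical sketch of asynchronous result handling, loosely in the spirit
# of the approach described above; this is NOT MilkyWay@Home's actual code.
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(params):
    """Stand-in for a volunteer machine scoring one candidate model.
    Slower 'machines' simply take longer, so results arrive out of order."""
    time.sleep(random.uniform(0.01, 0.2))             # unequal machine speeds
    x, y = params
    return params, -(x - 3.0) ** 2 - (y + 1.0) ** 2   # toy fitness surface

best_params, best_score = None, float("-inf")
candidates = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(200)]

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(evaluate, c) for c in candidates]
    # Incorporate each result as soon as it arrives, regardless of which
    # "machine" finished first -- even the slowest worker still contributes.
    for fut in as_completed(futures):
        params, score = fut.result()
        if score > best_score:
            best_params, best_score = params, score

print("best candidate found:", best_params, "score:", best_score)
```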

In total, nine articles have been published and multiple public talks have been given regarding the computer science discoveries made during the creation of the project, and many more are expected as the refined algorithms are applied to other scientific problems. Collaboration has already begun to develop a DNA@Home platform to find gene regulation sites on human DNA. Collaborations have also started with biophysicists and chemists on two other BOINC projects at Rensselaer to understand protein folding and to design new drugs and materials.

In addition to important discoveries in computer science and astronomy, the researchers said the project is also making important strides in efforts to include the public in scientific discovery. Since the project began, more than 45,000 individual users from 169 countries have donated computational power to the effort. Currently, approximately 17,000 users are active in the system.

“This is truly public science,” said Desell, who began working on the project as a graduate student and has seen the project through its entire evolution. “This is a really unique opportunity to get people interested in science while also allowing us to create a strong computing resource for Rensselaer research.” All of the research, results, data, and even source code are made public and regularly updated for volunteers on the main MilkyWay@Home Web site found at: http://MilkyWay.cs.rpi.edu/.

Desell cites the public nature and regular communication as important components of the project’s success. “They are not just sitting back and allowing the computer to do the work,” he says, referencing that volunteers have made donations for equipment as well as made their own improvements to the underlying algorithms that greatly increased computational speed. Varela jokes, “We may end up with a paper with 17,000 authors.”

(Photo: Sloan Digital Sky Survey)

Rensselaer Polytechnic Institute

LEAF VEINS INSPIRE A NEW MODEL FOR DISTRIBUTION NETWORKS

0 comentarios
A team of biophysicists at Rockefeller University developed a mathematical model showing that complex sets of interconnecting loops — like the netted veins that transport water in a leaf — provide the best distribution network for supplying fluctuating loads to varying parts of the system. It also shows that such a network can best handle damage. The findings could change the way engineers think about designing networks to handle a variety of challenges like the distribution of water or electricity in a city.

Operations researchers have long believed that the best distribution networks for many scenarios look like trees, with a succession of branches stemming from a central stalk and then branches from those branches and so on, to the desired destinations. But this kind of network is vulnerable: If it is severed at any place, the network is cut in two and cargo will fail to reach any point “downstream” of the break.

By contrast, in the leaves of most complex plants, evolution has devised a system to distribute water that is more supple in at least two key ways. Plants are under constant attack from bugs, diseases, animals and the weather. If a leaf’s distribution network were tree-like and damaged, the part of the leaf downstream of the damage would starve for water and die. In some of the Earth’s more ancient plants, such as the gingko, this is the case (see video, bottom). But many younger, more sophisticated plants have evolved a vein system of interconnected loops that can reroute water around any damage, providing many paths to any given point, as in the lemon leaf (see video, top). Operations researchers have appreciated that these redundancies are an effective hedge against damage. What’s most surprising in the new research, according to Marcelo O. Magnasco, head of the Laboratory of Mathematical Physics at Rockefeller University, is that the complex network also does a better job of handling fluctuating loads according to shifts in demand from different parts of the system — a common real-world need within dynamic distribution networks.

“For decades, people have believed that the tree-like network was optimal for fluctuating demand,” Magnasco says. “These findings could seriously shake things up. People will have to take another look at how they design these kinds of systems.”

In a paper published as the cover story of the January 29 Physical Review Letters, Magnasco, lead researcher Eleni Katifori, a fellow at Rockefeller’s Center for Studies in Physics and Biology, and colleagues lay out a model that assigns a cost to each section of leaf vein proportional to how much water it can carry. They looked for networks that suffered the least strain in the face of two challenges common in both leaves and human-built networks: damage to a randomly chosen segment of the network and changes in the load demanded by different parts of the network. In both scenarios, they found the most robust system was a complex, hierarchical network of nested loops, similar to the fractal-like web of veins that transport water in leaves. This loopy network design is also found in the blood vessels of the retina, the architecture of some corals and the structural veins of insect wings.
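
As a toy illustration of the damage argument only -- not the Rockefeller model itself, which also optimizes flow costs -- the sketch below uses the networkx library to compare a tree-like network with a looped grid built on the same nodes when a single randomly chosen edge is severed.

```python
# Hypothetical sketch (not the Rockefeller model): compare how a tree-like
# network and a looped grid tolerate the loss of a single random edge.
import random
import networkx as nx

def fraction_still_supplied(graph, source, trials=200):
    """Average fraction of nodes still reachable from the source
    after severing one randomly chosen edge."""
    total = 0.0
    for _ in range(trials):
        g = graph.copy()
        g.remove_edge(*random.choice(list(g.edges())))
        reachable = nx.node_connected_component(g, source)
        total += len(reachable) / graph.number_of_nodes()
    return total / trials

n = 5
grid = nx.grid_2d_graph(n, n)            # netted, loopy network (like a lemon leaf)
tree = nx.minimum_spanning_tree(grid)    # tree-like network on the same nodes
source = (0, 0)                          # where "water" enters the network

print("tree-like network :", fraction_still_supplied(tree, source))
print("looped network    :", fraction_still_supplied(grid, source))
```

With loops, cutting any single vein leaves every node reachable by another route; in the tree, every cut starves the nodes downstream of it.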

Katifori is now extending the research to delve more deeply into how distribution networks handle fluctuating loads, guided by nature’s own solution in the leaf.

“It is tempting to ignore the loops, because the central veins stand out and have a tree-like form,” Katifori says. “But they are all connected, and the loops are right there to see, if you just look at the leaf.”

Rockefeller University

Tuesday, February 23, 2010

UF RESEARCHERS FIND GENES THAT TUNE FLOWER FRAGRANCES

0 comentarios

Shakespeare famously wrote, “That which we call a rose by any other name would smell as sweet.” With all due respect to the Bard, University of Florida researchers may have to disagree: no matter what you call a flower, its scent can be changed.

A team at UF’s Institute of Food and Agricultural Sciences has uncovered some of the genes that control the complex mixture of chemicals that comprise a flower’s scent, opening new ways of “turning up” and “tuning” a flower’s aromatic compounds to produce desired fragrances.

“For a long time, breeders have mostly focused on how flowers look, their size, color and how long blooms last,” said David Clark, a professor of environmental horticulture. “But scent has gotten left behind. Go to a florist and try to smell the flowers. You probably won’t get what you expect.”

Over the years, Clark says, breeders have selected flowering plants that produce bigger, more attractive flowers with long vase lives; but in doing so, they may have inadvertently selected plants that devote fewer resources to producing fragrance.

That may change. For example, a customer may someday be able to walk into a florist and select from scented or unscented varieties of the same flower.

In work published in the January issue of The Plant Journal and the February issue of Phytochemistry, the researchers describe how various genes in petunias help regulate the amounts of the 13 major aromatic compounds in that flower’s fragrance.

The work will help researchers control the levels of these compounds, adjusting a flower’s fragrance while also producing more or less of it.

In the papers, the researchers also describe some of the more fundamental aspects of how flowers produce scent. For example, they observed that the scents are largely manufactured in the petunia flower’s petals, and that scent production is activated when the flower opens.

The studies are part of an ongoing effort to isolate the chain reaction responsible for producing scent, so that fragrances can be modified without interfering with other flower qualities, said Thomas Colquhoun, a UF environmental horticulture researcher and first author on both papers.

For more than a decade, Clark and his colleagues have combed through more than 8,000 petunia genes. The search has yielded some interesting finds.

For example, the gene that produces the compound that gives rose oil its distinctive scent also makes tomatoes taste good.

By manipulating this gene, UF researchers led by horticulture professor Harry Klee have been able to create tomatoes with more flavor. Klee, Clark and colleagues are now working with plant breeders and taste specialists to prepare the tomato for the marketplace. Better smelling roses are also in the pipeline.

“The taste of food, the smell of a flower — these are things that enrich our lives in ways we don’t fully understand yet,” Clark said. “Learning how plants interact with us and their environment brings us closer to truly appreciating what the natural world has to offer.”

(Photo: Tyler Jones, UF/IFAS)

University of Florida

SILVER NANOPARTICLES MAY ONE DAY BE KEY TO DEVICES THAT KEEP HEARTS BEATING STRONG AND STEADY

0 comentarios

Diamonds and gold may make some hearts flutter on Valentine's Day, but in a University at Buffalo laboratory, silver nanoparticles are being designed to do just the opposite.

The nanoparticles are part of a new family of materials being created in the laboratory of SUNY Distinguished Professor and Greatbatch Professor of Advanced Power Sources Esther Takeuchi, PhD, who developed the lithium/silver vanadium oxide battery. The battery was a major factor in bringing implantable cardiac defibrillators (ICDs) into production in the late 1980s. ICDs shock the heart into a normal rhythm when it goes into fibrillation.

Twenty years later, with more than 300,000 of these units being implanted every year, the majority of them are powered by the battery system developed and improved by Takeuchi and her team. For that work she has earned more than 140 patents, believed to be more than any other woman in the United States. Last fall, she was one of four recipients honored in a White House ceremony with the National Medal of Technology and Innovation.

ICD batteries, in general, now last five to seven years. But she and her husband and co-investigator, SUNY Distinguished Teaching Professor of Chemistry Kenneth Takeuchi, PhD, and Amy Marschilok, PhD, UB research assistant professor of chemistry, are exploring even better battery systems by fine-tuning bimetallic materials at the atomic level.

Their research investigating feasibility for ICD use is funded by the National Institutes of Health, while their investigation of new, bimetallic systems is funded by the U.S. Department of Energy.

So far, their results show that they can make their materials 15,000 times more conductive upon initial battery use, owing to in-situ (that is, within the original material) generation of metallic silver nanoparticles. Their new approach to material design will allow the development of higher-power, longer-life batteries than were previously possible.

These and other improvements are boosting interest in battery materials and the revolutionary devices that they may make possible.

"We may be heading toward a time when we can make batteries so tiny that they -- and the devices they power -- can simply be injected into the body," Takeuchi says.

Right now, her team is exploring how to boost the stability of the new materials they are designing for ICDs. The materials will be tested over weeks and months in laboratory ovens that mimic body temperature of 37 degrees Celsius.

"What's really exciting about this concept is that we are tuning the material at the atomic level," says Takeuchi. "So the change in its conductivity and performance is inherent to the material. We didn't add supplements to achieve that, we did it by changing the active material directly."

She explains that new and improved batteries for biomedical applications could, in a practical way, revolutionize treatments for some of the most persistent diseases by making feasible devices that would be implanted in the brain to treat stroke and mental illness, in the spine to treat chronic pain or in the vagal nerve system to treat migraines, Alzheimer's disease, anxiety, even obesity.

And even though batteries are an historic technology, they are far from mature, Takeuchi notes. This spring, she is teaching the energy storage course in UB's School of Engineering and Applied Sciences and the class is filled to capacity. "I've never seen interest in batteries as high as it is now," she says.

(Photo: U. Buffalo)

University at Buffalo

MOTHER BATS EXPERT AT SAVING ENERGY

0 comentarios
In order to regulate their body temperature as efficiently as possible, wild female bats switch between two strategies depending on both the ambient temperature and their reproductive status. During pregnancy and lactation, they profit energetically from clustering when temperatures drop. Once they have finished lactating, they use torpor to a greater extent, to slow their metabolic rate and drop their body temperature right down so that they expend as little energy as possible.

These findings by Iris Pretzlaff, from the University of Hamburg in Germany, and colleagues, were just published online in Springer’s journal Naturwissenschaften – The Science of Nature.

When energy demands are high, such as during pregnancy and lactation, female bats need to efficiently regulate their body temperature to minimize energy expenditure. In bats, energy expenditure is influenced by environmental conditions, such as ambient temperature, as well as by social thermoregulation – clustering to minimize heat and energy loss. Torpor, another common temperature regulation strategy, has disadvantages for reproductive females, such as delayed offspring development and compromised milk production.

Pretzlaff and team investigated, for the first time in the wild, the thermoregulation strategies used by communally roosting Bechstein’s bats during different periods of their reproductive cycle – pre-lactation, lactation, and post-lactation. They collected data from two maternity colonies roosting in deciduous forests near Würzburg in Germany, predominantly in bat boxes. The authors measured ambient temperature over those three periods as well as the bats’ metabolic rate by using respirometry (measuring the rate of oxygen consumption).
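
For readers unfamiliar with respirometry, the sketch below shows how an oxygen-consumption rate can be converted into energy expenditure. The factor of roughly 20.1 joules per millilitre of O2 is a commonly used approximation, and the example rates are hypothetical; neither is taken from this study.

```python
# Illustrative conversion only; ~20.1 J per mL O2 is a standard oxyjoule
# approximation, not the calibration used in the bat study, and the example
# rates below are hypothetical.
JOULES_PER_ML_O2 = 20.1

def metabolic_power_mw(o2_ml_per_hour: float) -> float:
    """Convert an oxygen consumption rate (mL O2 per hour) to milliwatts."""
    joules_per_hour = o2_ml_per_hour * JOULES_PER_ML_O2
    return joules_per_hour / 3600.0 * 1000.0   # J/h -> W -> mW

for label, rate in [("normothermic, clustered", 30.0), ("torpid", 3.0)]:
    print(f"{label}: {rate} mL O2/h ~ {metabolic_power_mw(rate):.1f} mW")
```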

They found that the bats’ metabolic rate was strongly influenced by the ambient temperature. However, by roosting in groups (social thermoregulation), the bats were able to regulate their body temperature more effectively, despite changes in daily ambient temperature.

The bats also used torpor to minimize energy expenditure, particularly post-lactation -- more than twice as often as during the other two periods. This suggests that they predominantly use torpor once they can afford to do so without compromising offspring development and milk production. They also formed much smaller groups post-lactation, when temperatures were lower, because roosting in smaller groups reduces the risk of disturbances by conspecifics. This resulted in longer torpor bouts and therefore longer periods of energy saving.

The authors conclude: “We were able to demonstrate on wild Bechstein’s bats, during different reproductive periods, the significance of behavioral and physiological flexibility for optimal thermoregulatory behavior. Our study also highlights the importance of field studies, where the animals can use their behavioural and physiological repertoire, which is often not possible under the generally more controlled regimes in laboratory studies.”

Springer Science+Business Media

STUDY FINDS SURPRISING NEW BRANCHES ON ARTHROPOD FAMILY TREE

0 comentarios

In a scientific and technological tour de force that was nearly a decade in the making, a team of scientists from Duke University, the University of Maryland and the Natural History Museum of Los Angeles County has compared genetic sequences from 75 different species to draw a new family tree that includes every major arthropod lineage. Some of the relationships are so surprising that new names had to be coined for five newly discovered groupings.

The work, which was supported by the National Science Foundation, appears early online Wednesday in the journal Nature.

A big surprise to tumble out of the new tree is that the closest living relatives of insects include a small and obscure group of creatures called remipedes, which were only discovered in the late 1970s living in a watery cave in the Bahamas. With linear, centipede-like bodies, simple legs and no eyes, this small group -- now placed with the cephalocarids in the newly named Xenocarida, or "strange shrimp" -- had been expected to sit at the base of the crustacean family tree.

Now, after analyzing 62 shared genetic sequences across all the arthropods, the researchers are putting the strange shrimp together with the six-legged insects, Hexapoda, to form a new group they dubbed Miracrustacea, or "surprising crustaceans." As a "sister clade" to hexapods, the Xenocarida likely represent the sort of creature that came onto land to start the spectacular flowering of the insect lineage, said Cliff Cunningham, a professor of biology at Duke who led the study.

Triops, a 2-inch crustacean that looks like a cross between a horseshoe crab and a mayfly, had also been thought of as an early crustacean, but it too was shown to have a relatively modern origin in the new analysis, Cunningham said.

"Taxonomists have been arguing about these things for decades, and people kept coming at this with one data set after another," Cunningham said. This latest study has created a fuller picture of the arthropod family tree by using more species and more genes, he said.

Beginning in 2001, Jeffrey Shultz, an associate professor of entomology at Maryland, led the efforts to figure out which species needed to be sequenced for a robust comparison, and then to round up suitable specimens of each. The study included nematodes, scorpions, dragonflies, barnacles, copepods and centipedes.

Remipedes, one of the two species of Xenocarida in the study, had to be fetched from partially submerged limestone caves in the Yucatan Peninsula and preserved just so. Bitty creatures called mystacocarids that live between grains of sand were captured by the Natural History Museum's Regina Wetzer, using a microscope on a Massachusetts beach.

Once assembled, the 75 species were then stripped down to their DNA for a painstaking search to find genetic sequences that would appear across all arthropods, enabling statistical comparisons.

The lab of Jerome Regier at Maryland's Center for Biosystems Research combed through 2,500 different combinations of PCR primers to find 62 protein-coding gene sequences that could be compared across all 75 species. Regier was an early proponent of using protein coding genes to sort out the arthropod tree, while most other researchers were using relatively less complex analyses from the DNA found in ribosomes and mitochondria.

The researchers ran four different statistical approaches, including two new ones invented at Maryland, "and they all came up with the same answer," Cunningham said. Earlier studies had not used as many genes or as many species, making this study about four times larger than anything done previously.
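
As a toy illustration of distance-based tree building -- emphatically not the study's data, its 62 genes, or its four statistical approaches -- the sketch below uses Biopython to infer a neighbor-joining tree from a tiny, made-up alignment.

```python
# Toy illustration only: build a neighbor-joining tree from a tiny
# hypothetical alignment with Biopython (not the study's actual methods).
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

# Made-up 21-base sequences standing in for aligned gene fragments.
alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ATGGCCATTGTAATGGGCCGC"), id="insect"),
    SeqRecord(Seq("ATGGCCATTGTAATGGGCCGA"), id="remipede"),
    SeqRecord(Seq("ATGGCTATTGTTATGGGCCGA"), id="crab"),
    SeqRecord(Seq("ATGACTATCGTAATGGGACGA"), id="centipede"),
    SeqRecord(Seq("ATAACTATCGTCATGGGACGT"), id="spider"),
])

distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)   # neighbor-joining tree
Phylo.draw_ascii(tree)
```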

The spiders, ticks and scorpions of the subgroup Chelicerata are shown to have split from the line leading to insects and crustaceans even before the millipedes and centipedes of the subphylum Myriapoda. Most recent molecular studies had grouped these arachnids in Chelicerata together with millipedes and centipedes of the Myriapoda. But the new analysis puts millipedes and centipedes together with crustaceans and insects in a group taxonomists had long ago named Mandibulata.

"The only thing people thought they knew before molecular data was available was that the Myriapods were with the insects," Shultz said. But that turned out to be wrong. Even the grouping Crustacea is no longer correct, since it includes the six-legged insects.

Within the insect group Hexapoda, the good news for taxonomists who have grouped insects according to body shape and features is that they were pretty much on the mark, Shultz added.

There are still many holes that need to be filled in, Cunningham said, but at least the shape of the tree seems right. "Now the developmental biologists can really piece things together."

(Photo: Simon Richards)

Duke University

RESEARCHERS FIND HOW BRAIN HEARS THE SOUND OF SILENCE

0 comentarios

A team of University of Oregon researchers has isolated an independent processing channel of synapses inside the brain's auditory cortex that deals specifically with shutting off sound processing at appropriate times. Such regulation is vital for hearing and for understanding speech.

The discovery, detailed in the Feb. 11 issue of the journal Neuron, goes against a long-held assumption that the signaling of a sound's appearance and its subsequent disappearance are both handled by the same pathway. The new finding, which supports an emerging theory that a separate set of synapses is responsible, could lead to new, distinctly targeted therapies such as improved hearing devices, said Michael Wehr, a professor of psychology and member of the UO Institute of Neuroscience.

"It looks like there is a whole separate channel that goes all the way from the ear up to the brain that is specialized to process sound offsets," Wehr said. The two channels finally come together in a brain region called the auditory cortex, situated in the temporal lobe.

To do the research, Wehr and two UO undergraduate students -- lead author Ben Scholl, now a graduate student at the Oregon Health and Science University in Portland, and Xiang Gao -- monitored the activity of neurons and their connecting synapses as rats were exposed to millisecond bursts of tones, looking at the responses to both the start and end of a sound. They tested varying lengths and frequencies of sounds in a series of experiments.

It became clear, the researchers found, that one set of synapses responded "very strongly at the onset of sounds," but a different set of synapses responded to the sudden disappearance of sounds. There was no overlap of the two responding sets, the researchers noted. The end of one sound did not affect the response to a new sound, thus reinforcing the idea of separate processing channels.

The UO team also noted that responses to the end of a sound involved different frequency tuning, duration and amplitude than those involved in processing the start of a sound, findings that agree with a trend cited in at least three other studies in the last decade.

"Being able to perceive when sound stops is very important for speech processing," Wehr said. "One of the really hard problems in speech is finding the boundaries between the different parts of words. It is really not well understood how the brain does that."

As an example, he noted the difficulty some people have when they are at a noisy cocktail party and are trying to follow one conversation amid competing background noises. "We think that we've discovered brain mechanisms that are important in finding the necessary boundaries between words that help to allow for successful speech recognition and hearing," he said.

The research -- funded in part by the UO's Robert and Beverly Lewis Center for Neuroimaging Fund -- aims to provide a general understanding of how areas of the brain function. The new findings, Wehr said, could also prove useful in working with children who have deficits in speech and learning, as well as in the design of hearing aids and cochlear implants. He also noted that people with dyslexia have problems defining the boundaries of sounds in speech, and tapping these processing areas in therapy could boost reading skills.

(Photo: U. Oregon)

University of Oregon
