Tuesday, November 30, 2010

ESRF REVEALS HUMAN CHILDREN OUTPACED NEANDERTHALS BY SLOWING DOWN


Human childhood is considerably longer than that of chimpanzees, our closest living ape relatives. A multinational team of specialists, led by researchers from Harvard University, the Max Planck Institute for Evolutionary Anthropology (MPI-EVA) and the ESRF, applied cutting-edge synchrotron X-ray imaging to resolve microscopic growth in 10 young Neanderthal and Homo sapiens fossils. They found that significant developmental differences exist despite some overlap, which is common in closely related species. Modern humans are the slowest to the finish line, stretching out their maturation, which may have given them a unique evolutionary advantage.

Evolutionary biology has shown that small changes during early development may lead to differences that result in new species. These changes may take the form of modifications in the sequence or timing of developmental events; therefore, understanding developmental transformation is key to reconstructing evolutionary history. Anthropologists have documented many differences in adult characteristics among closely related species, such as humans and chimpanzees. Genomic data combined with fossil evidence indicate that these two lineages split six to seven million years ago, and have since been evolving separately. However, we know much less about which changes led to the separate lineages, how these changes arose, and when they occurred.

One poorly understood change is our unique life history, or the way in which we time growth, development, and reproductive effort. Compared with that of humans, non-human primate life history is marked by a shorter gestation period, faster post-natal maturation, a younger age at first reproduction, a shorter post-reproductive period, and a shorter overall lifespan. For example, chimpanzees reach reproductive maturity several years before humans, bearing their first offspring by age 13, in contrast to the human average of 19.

It might seem that life history is invisible in the fossil record, but it turns out that many life history variables correlate strongly with dental development. “Teeth are remarkable time recorders, capturing each day of growth much like rings in trees reveal yearly progress. Even more impressive is the fact that our first molars contain a tiny ‘birth certificate,’ and finding this birth line allows us to calculate exactly how old a juvenile was when it died,” says Tanya Smith, a researcher at Harvard University and MPI-EVA.
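
As a rough illustration of the arithmetic involved (the line count below is invented for the example, not taken from the study): if enamel records one growth increment per day, then

\[ \text{age at death (years)} \approx \frac{N}{365}, \]

where N is the number of daily increments counted after the neonatal "birth line." A juvenile tooth showing roughly 1,100 daily increments beyond that line would thus correspond to an age of about three years.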

This forensic approach to the past is possible with a ‘super-microscope’: extremely powerful X-ray beams produced at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, one of the largest synchrotrons in the world. Paul Tafforeau, from the ESRF, notes: “At the ESRF, we are able to look inside invaluable fossils without damaging them by using the special properties of high energy synchrotron X-rays. We can investigate fossils at different scales and in three dimensions, ranging from studies of overall 3D shape down to microscopic daily growth lines. This is currently the only place where these studies of fossil humans are possible.” Scientists and curators have been quietly visiting the European synchrotron, often with some of the rarest hominin fossils in the world, for imaging with this state-of-the-art technique.

The study includes some of the most famous Neanderthal children, including the first hominin fossil ever discovered. This Belgian Neanderthal child, discovered in the winter of 1829-1830, was thought to be 4-5 years of age when it died. Powerful synchrotron X-rays and cutting-edge imaging software revealed that it actually died at a mere 3 years of age. Another invaluable Neanderthal, discovered in Le Moustier, France in 1908, barely survived the shelling of its German repository during the Second World War.

A remarkable finding of this five-year study is that Neanderthals grew their teeth significantly faster than members of our own species, including some of the earliest groups of modern humans to leave Africa 90,000-100,000 years ago. The Neanderthal pattern appears to be intermediate between early members of our genus (e.g., Homo erectus) and living people, suggesting that characteristically slow development and a long childhood are a recent condition unique to our own species. This extended period of maturation may facilitate additional learning and complex cognition, possibly giving early Homo sapiens a competitive advantage over their contemporaneous Neanderthal cousins.

These new results present a unique opportunity to assess the origins of a fundamentally human condition: the costly yet advantageous shift from a primitive “live fast and die young” strategy to the “live slow and grow old” strategy that has helped to make us one of the most successful organisms on the planet.

(Photo: Graham Chedd (PBS), Paul Tafforeau (ESRF), and Tanya Smith (Harvard University and MPI-EVA))

ESRF

NEW WAY OF PREDICTING DOMINANT SEASONAL FLU STRAIN

Rice University scientists have found a way to predict rapidly whether a new strain of the influenza virus should be included in the annual seasonal flu vaccine. While it sometimes takes new flu strains up to three years to become dominant worldwide, the new method can predict whether they will become dominant as little as two weeks after the sequence first appears in the GenBank database, the National Institutes of Health's collection of all publicly available DNA sequences.

"We studied a new strain of the virus that evolved in British Columbia in the middle of March 2009," said Michael Deem, co-author of a new study featured on the cover of the Dec. 12 issue of Protein Engineering Design and Selection. "By the end of March, just about two weeks after it came out, we could detect that it would become the dominant strain of H3N2 in 2009. By contrast, it wasn't detectable as a novel strain by the standard methods that the World Health Organization uses until July or the middle of August."

It takes several months to produce the millions of doses of flu vaccine needed each year, and officials at the World Health Organization (WHO) use a combination of statistical methods and animal tests to choose the following year's formula.

Just a month before the British Columbia strain was first recorded in GenBank, the WHO had made its recommendations for the annual 2009-2010 vaccine. While the biggest flu story of 2009 was the H1N1 pandemic that began in Mexico and spread rapidly worldwide, the British Columbia strain went on to become the dominant variant of H3N2 the following year. Because it was significantly different from the H3N2 strain that had been included in the seasonal vaccine for that year, the vaccine's efficacy against British Columbia was estimated at about 20 percent.

"It's not that we could have predicted that British Columbia would have emerged out of thin air, but once it had emerged, our method could detect the signature of its eventual dominance with a very limited amount of sequence data," said Deem, the John W. Cox Professor in Biochemical and Genetic Engineering and professor of physics and astronomy at Rice.

Deem and study co-author Jiankui He, a graduate student in physics and astronomy, developed a mathematical method that used freely available genetic profiles of new flu strains to predict whether a strain will become dominant. Using the method, they examined the H3N2 flu strain for the past 14 years and made their own predictions based on the available data in GenBank, where public health officials post all the latest genetic sequences of new flu strains.

Deem and He compared their predictions with the WHO's predictions from 1996 to 2010. They found their new method correctly predicted the dominant strain of H3N2 for most years, including three years -- 2002, 2003 and 2009 -- when the WHO vaccine was formulated with an H3N2 strain that turned out not to be the dominant strain that year.

The new method involves a statistical technique called multidimensional scaling that is used to create graphical plots of complex data in fields as diverse as marketing and physics. In their study, He and Deem used multidimensional scaling to create a graphical plot of amino acid sequence data for all strains of H3N2. They limited their study to a 329-amino-acid region of the virus that mutates regularly to avoid detection by the immune system.

"Using multidimensional scaling, we project from those 329 dimensions to the two dimensions that contain the most information," Deem said. "We just plot all of the points as a function of two variables instead of listing all 329, which is too much information to work with. With the two-dimensional scaling, we have a workable problem and we still have enough information to see clusters of new strains that will eventually become dominant."

Deem said the results of the study suggest that public health officials could benefit by using the new method, which is both fast and inexpensive, in addition to the well-accepted methods that are currently used to formulate vaccine strain recommendations.

Rice University

RESCUE MISSIONS UNDER WAY TO SAVE HAITI'S SPECIES FROM MASS EXTINCTION


Haiti is on the brink of an era of mass extinctions similar to the time when dinosaurs and many other species suddenly disappeared from the Earth, reports a biologist at Penn State, who announced on Nov. 16 the establishment of a species-rescue program for Haiti's threatened frogs and other species, including captive-breeding and gene-preservation efforts.

"During the next few decades, many Haitian species of plants and animals will become extinct because the forests where they live, which originally covered the entire country, are nearly gone," said Blair Hedges, a professor of biology at Penn State and leader of the rescue missions in Haiti and other countries in the Caribbean. "The decline of frogs in particular, because they are especially vulnerable, is a biological early-warning signal of a dangerously deteriorating environment, just as a dying canary is an early-warning sign of dangerously deteriorating air in a coal mine," said Hedges, who is also one of the world's foremost authorities on amphibians and reptiles. "When frogs start disappearing, other species will follow and the Haitian people will suffer, as well, from this environmental catastrophe."

Hedges recently relocated 10 critically endangered species of frogs from Haiti to a captive-breeding program at the Philadelphia Zoo. One of these species already has begun breeding, laying eggs, and producing hatchlings in Philadelphia. Hedges has discovered at least five new frog species during three expeditions to Haiti this year, but he was not able to find two species that may now be extinct because they have not been seen there for 25 years. His scientific descriptions of the new species will be published in future issues of research journals.

The rescue mission led by Hedges is part of a new effort supported by the National Science Foundation to determine which species of amphibians and reptiles currently survive in Haiti, to pinpoint their locations, to discover any new species that previously were not documented scientifically, to relocate live populations of frogs for captive breeding, and to deep-freeze cells at Penn State. "Captive breeding and cryobanking are two efforts to preserve the species in case they become extinct in Haiti," said Hedges, who is one of the few scientists worldwide who have established cryobanking programs in their labs for endangered frogs and other species.

Cryobanking involves the preservation of cells and DNA in liquid nitrogen that will permit whole-animal cloning, if necessary, in the future. "These are time-consuming and costly backup plans to save species, normally reserved for those species closest to extinction, as in Haiti," Hedges said. "The goal is to release offspring of rescued frogs in Haiti if and when their forest habitat improves." Hedges and the Philadelphia Zoo also are working with the Haitian government and non-governmental agencies to train Haitians in this conservation research so that they can develop the capacity to breed these species in Haiti.

Frog species have been disappearing worldwide during the last 10 to 20 years, and one-third of the 6,000 frog species on Earth now are threatened with extinction. But 92 percent of Haiti's 50 frog species are threatened -- the highest percentage of any country in the world. Even worse, most Haitian frog species are officially designated as "endangered" or "critically endangered," the two highest levels of concern. "We found that as many as 26 species occur together in the isolated mountain forests of southwest Haiti, greatly increasing the threat of mass extinctions when the forests there are cut down," Hedges said. Of the 50 frog species in Haiti, 30 live only in Haiti and do not occur in the neighboring Dominican Republic.

"Less than one percent of the original forest is left in Haiti, which is a lower percentage than in any other country that I know of," Hedges said. "There definitely is no other place in the western half of the world -- and some scientists would argue in the entire world -- where the extinction threat is greater than in Haiti."

Hedges explained that the forests of Haiti are disappearing because the trees are being cut down to produce charcoal for the 10 million Haitian people who have few other sources of cooking fuel. "When you have a species that lives only in one mountain-top forest, and that forest suddenly disappears because you start harvesting all the trees for charcoal, then that species is guaranteed to become extinct along with all other species of animals and plants living only in that forest," Hedges said. "Virtually every truck you see on the rural roads is loaded down with bags of charcoal coming from the mountains, where people are cutting down the trees and making charcoal to be sold in the city of Port-au-Prince. The forests from some entire mountains now have been removed completely. In places, it looks like a lunar landscape, with nearly all the soil washed away and only the rocks and some weeds left behind."

Hedges has found that trees are not being protected even in the national parks of Haiti. "The commander of the park guards in the largest national park told us that only 10 unarmed guards are working at any one time in the park but typically 200 teams of tree cutters are at work there, armed with machetes and other weapons."

Recently a park guard in Haiti was killed by tree cutters.

"The pressure for cutting down the forests is coming from a whole island nation of people needing cooking fuel -- a problem requiring economic and possibly engineering solutions and needing the help of major international conservation organizations and government agencies," said Philippe Bayard, president of the Audubon Society of Haiti and collaborator with Hedges in efforts to save Haiti's biodiversity. "Unless effective help arrives soon, it is inevitable that there will be mass extinctions, and I think they are in progress."

Robin Moore, amphibian conservation officer for Conservation International, agrees. He joined Hedges and Carlos Martinez of the Philadelphia Zoo on the latest rescue mission. He points out, optimistically, that "despite the massive deforestation, the fact that the frogs are still hanging on, though barely, means that it is not too late to protect their habitat."

The scientists emphasize that the loss of forests also is a catastrophe for the Haitian people because forests are their major source of energy and they are critical for their economy, agriculture and drinking water.

Hedges, who has studied the genomes of diverse species worldwide in his laboratory research, is focusing his efforts in Haiti on preserving the cells and genomes of the endangered species there. Besides rescuing 10 frog species for captive breeding at the Philadelphia Zoo, he has cryobanked frogs and other species in his lab. One of the rescued frog species is the smallest one known on the island -- a species whose adults are the size of a small human fingernail.

"A captive-breeding program is a huge responsibility. You have to feed the animals, breed them, and keep them going for years and years, possibly indefinitely," said Carlos Martinez, Amphibian Conservation Biologist for the Philadelphia Zoo, who accompanied Hedges on the recent frog rescue mission. "But the survival of these species may depend on this work, so it is well worth the effort." Hedges was impressed with the zoo's willingness to take on this challenge. "I am absolutely delighted that the Philadelphia Zoo generously agreed to accept all 10 species, and I consider it a huge success that so many critically endangered frog species are being captive bred and cryobanked."

Ideally, a population should have at least 30 males and females to begin successful captive breeding, but some of these 10 species at the Philadelphia Zoo have fewer individuals. "We were up on the top of a mountain that we might never get to again anytime soon, and we feared that this could be the last chance for the survival of this species, so we decided to at least try to breed them in captivity," Hedges explains. He points out that, despite their limited habitat, these small animals occur in sufficient numbers to be unaffected by the collecting efforts of the rescue team. Hedges now is looking for more zoos with the capability and willingness to host captive-breeding programs for endangered Haitian species, including reptiles, which he hopes to rescue during future expeditions.

"Haiti has suffered a terrible earthquake and it is enduring a cholera outbreak and so many other environmental and human disasters, and now it is clear that Haiti also is suffering the beginning of a mass-extinction event that likely will affect many more species in addition to its frogs," Hedges said. "I would like to have hope that the destruction is going to stop and the forests are going to come back, but I have fears about what will happen to the animals and the people of Haiti unless something major is done very soon to resolve the life-threatening problems there."

Hedges has set up a website, http://www.CaribNature.org/, where he is posting multimedia information as well as links to conservation organizations that are working to solve the problems that are causing the species extinctions in Haiti and other areas of the Caribbean.

(Photo: Claudio Contreras)

Penn State

NEW ANALYSIS EXPLAINS FORMATION OF BULGE ON FARSIDE OF MOON


A bulge of elevated topography on the farside of the moon--known as the lunar farside highlands--has defied explanation for decades. But a new study led by researchers at the University of California, Santa Cruz, shows that the highlands may be the result of tidal forces acting early in the moon's history when its solid outer crust floated on an ocean of liquid rock.

Ian Garrick-Bethell, an assistant professor of Earth and planetary sciences at UC Santa Cruz, found that the shape of the moon's bulge can be described by a surprisingly simple mathematical function. "What's interesting is that the form of the mathematical function implies that tides had something to do with the formation of that terrain," said Garrick-Bethell, who is the first author of a paper on the new findings published in the November 11 issue of Science.

The paper describes a process for formation of the lunar highlands that involves tidal heating of the moon's crust about 4.4 billion years ago. At that time, not long after the moon's formation, the crust was decoupled from the mantle below it by an intervening ocean of magma. As a result, the gravitational pull of the Earth caused tidal flexing and heating of the crust. At the polar regions, where the flexing and heating was greatest, the crust became thinner, while the thickest crust would have formed in the regions in line with the Earth.
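
The release does not reproduce the paper's function, but the textbook starting point for this kind of analysis (stated here as background, not as the authors' exact formula) is the degree-2 tidal potential raised on the Moon by the Earth:

\[ V_2(\psi) \propto \frac{G M_\oplus R_m^2}{a^3}\, P_2(\cos\psi), \qquad P_2(x) = \tfrac{1}{2}(3x^2 - 1), \]

where M⊕ is Earth's mass, R_m the lunar radius, a the Earth-Moon distance, and ψ the angle from the Earth-Moon axis. Heating driven by this degree-2 pattern varies systematically from the poles to the Earth-Moon line, matching the thin polar crust and thick sub-Earth crust described above.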

This process still does not explain why the bulge is now found only on the farside of the moon. "You would expect to see a bulge on both sides, because tides have a symmetrical effect," Garrick-Bethell said. "It may be that volcanic activity or other geological processes over the past 4.4 billion years have changed the expression of the bulge on the nearside."

The paper's coauthors include Francis Nimmo, associate professor of Earth and planetary sciences at UCSC, and Mark Wieczorek, a planetary geophysicist at the Institut de Physique du Globe in Paris. The researchers analyzed topographical data from NASA's Lunar Reconnaissance Orbiter and gravitational data from Japan's Kaguya orbiter.

A map of crustal thickness based on the gravity data showed that an especially thick region of the moon's crust underlies the lunar farside highlands. The variations in crustal thickness on the moon are similar to effects seen on Jupiter's moon Europa, which has a shell of ice over an ocean of liquid water. Nimmo has studied the effects of tidal heating on the structure of Europa, and the researchers applied the same analytical approach to the moon.

"Europa is a completely different satellite from our moon, but it gave us the idea to look at the process of tidal flexing of the crust over a liquid ocean," Garrick-Bethell said.

The mathematical function that describes the shape of the moon's bulge can account for about one-fourth of the moon's shape, he said. Although mysteries still remain, such as what made the nearside so different, the new study provides a mathematical framework for further investigations into the shape of the moon.

"It's still not completely clear yet, but we're starting to chip away at the problem," Garrick-Bethell said.

(Photo: NASA/Goddard)

University of California, Santa Cruz

ASTRONOMERS DISCOVER MERGING STAR SYSTEMS THAT MIGHT EXPLODE


Sometimes when you're looking for one thing, you find something completely different and unexpected. In the scientific endeavor, such serendipity can lead to new discoveries. Today, researchers who found the first hypervelocity stars escaping the Milky Way announced that their search also turned up a dozen double-star systems. Half of those are merging and might explode as supernovae in the astronomically near future.

All of the newfound binary stars consist of two white dwarfs. A white dwarf is the hot, dead core left over when a sun-like star gently puffs off its outer layers as it dies. A white dwarf is incredibly dense, packing as much as a sun's worth of material into a sphere the size of Earth. A teaspoon of it would weigh more than a ton.
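
A quick order-of-magnitude check of that claim (using standard constants, not figures from the release): packing one solar mass into an Earth-sized sphere gives a density of

\[ \rho \approx \frac{M_\odot}{\tfrac{4}{3}\pi R_\oplus^3} \approx \frac{2\times10^{30}\ \text{kg}}{1.1\times10^{21}\ \text{m}^3} \approx 2\times10^{9}\ \text{kg/m}^3, \]

so a 5-milliliter teaspoon would indeed hold roughly 10^4 kg -- several tons.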

"These are weird systems - objects the size of the Earth orbiting each other at a distance less than the radius of the Sun," said Smithsonian astronomer Warren Brown, lead author of the two papers reporting the find.

The white dwarfs found in this survey are lightweights among white dwarfs, holding only about one-fifth as much mass as the Sun. They are made almost entirely of helium, unlike normal white dwarfs made of carbon and oxygen.

"These white dwarfs have gone through a dramatic weight loss program," said Carlos Allende Prieto, an astronomer at the Instituto de Astrofisica de Canarias in Spain and a co-author of the study. "These stars are in such close orbits that tidal forces, like those swaying the oceans on Earth, led to huge mass losses."

Remarkably, because they whirl around so close to each other, the white dwarfs stir the space-time continuum, creating expanding ripples known as gravitational waves. Those waves carry away orbital energy, causing the stars to spiral closer together. Half of the systems are expected to merge eventually. The tightest binary, orbiting once every hour, will merge in about 100 million years.
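
The 100-million-year figure can be sanity-checked with the standard gravitational-wave inspiral time for two point masses in a circular orbit (Peters 1964) -- an illustrative estimate, not the authors' calculation:

\[ t_{\text{merge}} = \frac{5}{256}\,\frac{c^5 a^4}{G^3\, m_1 m_2\,(m_1 + m_2)}, \]

where a is the orbital separation. For two ~0.2 solar-mass white dwarfs orbiting once per hour, Kepler's third law gives a ≈ 2.6 × 10^8 m (well under the Sun's radius, as noted above), and the formula yields a merge time of order 10^8 years, consistent with the quoted value.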

"We have tripled the number of known, merging white-dwarf systems," said Smithsonian astronomer and co-author Mukremin Kilic. "Now, we can begin to understand how these systems form and what they may become in the near future."

When two white dwarfs merge, their combined mass can exceed a tipping point, causing them to detonate and explode as a Type Ia supernova. Brown and his colleagues suggest that the merging binaries they have discovered might be one source of underluminous supernovae -- a rare type of supernova explosion 100 times fainter than a normal Type Ia supernova, which ejects only one-fifth as much matter.

"The rate at which our white dwarfs are merging is the same as the rate of underluminous supernovae - about one every 2,000 years," explained Brown. "While we can't know for sure whether our merging white dwarfs will explode as underluminous supernovae, the fact that the rates are the same is highly suggestive."

(Photo: Clayton Ellis (CfA))

CfA

STUDY REWRITES THE EVOLUTIONARY HISTORY OF C4 GRASSES


According to a popular hypothesis, grasses such as maize, sugar cane, millet and sorghum got their evolutionary start as a result of a steep drop in atmospheric carbon dioxide levels during the Oligocene epoch, more than 23 million years ago. A new study overturns that hypothesis, presenting the first geological evidence that the ancestors of these and other C4 grasses emerged millions of years earlier than previously established.

The findings are published in the journal Geology.

C4 plants are more efficient than C3 plants at taking up atmospheric carbon dioxide and converting it into the starches and sugars vital to plant growth. (C3 and C4 refer to the number of carbon atoms in the first molecular product of photosynthesis.) Having evolved relatively recently, C4 plants make up 3 percent of all living species of flowering plants. But they account for about 25 percent of global plant productivity on land. They dominate grasslands in tropical, subtropical and warm temperate areas. They also are a vital food source and an important feedstock for the production of biofuels.

"C4 plants are very successful, they're economically very important, but we actually don't know when they originated in the geological history," said University of Illinois plant biology professor Feng Sheng Hu, who led the new analysis. "To me, it's one of the most profound geological and ecological questions as a paleoecologist I can tackle."

A previous study dated the oldest C4 plant remnant found, a tiny fragment called a phytolith, to about 19 million years ago. Other studies analyzed the ratios of carbon isotopes in bulk soil samples to determine the ratio of C3 to C4 plant remains at different soil horizons, which correspond to different geological time periods. (C3 and C4 plants differ in their proportions of two carbon isotopes, C-12 and C-13.) Those studies indicated that C4 grasses were present as early as the Early Miocene, about 18 million years ago.
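
Such isotope measurements are conventionally reported in delta notation; the standard definition (background, not the study's specific protocol) is

\[ \delta^{13}\text{C} = \left( \frac{(^{13}\text{C}/^{12}\text{C})_{\text{sample}}}{(^{13}\text{C}/^{12}\text{C})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}. \]

C3 plants typically cluster near -27‰ and C4 plants near -13‰, which is what makes the two photosynthetic pathways distinguishable in ancient soil carbon or, as below, in individual pollen grains.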

Rather than analyzing plant matter in bulk sediment samples, David Nelson, a postdoctoral researcher in Hu's lab at the time of the study (now a professor at the University of Maryland), analyzed the carbon isotope ratios of individual grains of grass pollen, a technique he pioneered while working with Hu in the lab of biogeochemistry professor Ann Pearson at Harvard University.

Using a spooling-wire micro-combustion device to combust the grains, and an isotope mass spectrometer to determine the relative ratio of C-12 and C-13 in the sample, Nelson and Illinois graduate student Michael Urban analyzed hundreds of individual grains of grass pollen collected from study sites in Spain and France.

"Because we analyze carbon isotopes in a material unique to grasses (pollen) we were able to detect C4 grasses at lower abundances than previous studies," Nelson said.

This analysis found "unequivocal evidence for C4 grasses in southwestern Europe by the Early Oligocene," the authors wrote. This means these grasses were present 32 to 34 million years ago, well before studies indicate atmospheric carbon dioxide levels made their precipitous decline.

"The evidence refutes the idea that low (atmospheric) CO2 was an important driver and/or precondition for the development of C4 photosynthesis," the authors wrote.

"This study challenges that hypothesis and basically says that something else was responsible for the evolution of C4 plants, probably higher temperature or drier conditions," Hu said. With atmospheric carbon dioxide levels now on the increase, he said, "there are also implications about how C3 and C4 plants will fare in the future."

(Photo: L. Brian Stauffer, U. of I. News Bureau)

University of Illinois at Urbana-Champaign

Monday, November 29, 2010

ENGINEERS TEST EFFECTS OF FIRE ON STEEL STRUCTURES


Researchers at Purdue University are studying the effects of fire on steel structures, such as buildings and bridges, using a one-of-a-kind heating system and a specialized laboratory for testing large beams and other components.

Building fires may reach temperatures of 1,000 degrees Celsius, or more than 1,800 degrees Fahrenheit, said Amit Varma, a Purdue associate professor of civil engineering who is leading the work.

"At that temperature, exposed steel would take about 25 minutes to lose about 60 percent of its strength and stiffness," he said. "As you keep increasing the temperature of the steel, it becomes softer and weaker."

One project focuses on how a building's steel-and-concrete floor and its connections to the building behave in a fire. Another project concentrates on how fire affects steel columns and a building's frame.

Such testing is customarily conducted inside large furnaces.

"However, in a furnace it is very difficult to heat a specimen while simultaneously applying loads onto the structure to simulate the forces exerted during a building's everyday use," Varma said.

To overcome this limitation, Purdue researchers designed a system made up of heating panels to simulate fire. The panels have electrical coils, like giant toaster ovens, and are placed close to the surface of the specimens. As the system is used to simulate fire, test structures are subjected to forces with hydraulic equipment.

In practice, beams and other steel components in buildings are covered with fireproofing materials to resist the effects of extreme heating.

"Because the steel in buildings is coated with a fireproofing material, the air might be at 1,000 degrees but the steel will be at 300 or 400 degrees," Varma said. "We conduct tests with and without fire protection."

The work is funded by the National Science Foundation and the U.S. Department of Commerce's National Institute of Standards and Technology.

The heating system is being used to test full-scale steel columns at Purdue's Robert L. and Terry L. Bowen Laboratory for Large-Scale Civil Engineering Research. It is believed to be the only such heating system in the world, Varma said.

Each panel is about 4 square feet, and the system contains 25 panels that together cover 100 square feet. Having separate panels enables researchers to heat certain portions of specimens, recreating "the heating and cooling path of a fire event," Varma said.

The Bowen Lab is one of a handful of facilities where testing can be performed on full-scale structures to yield more accurate data. The 66,000-square-foot laboratory is equipped with special hydraulic testing equipment and powerful overhead cranes.

The research group also has tested 10-foot-by-10-foot "composite floor systems" - made of steel beams supporting a concrete slab - inside a furnace operated by Michigan State University. The composite design is the most common type of floor system used in steel structures.

Findings from that research will be compared with floor-system testing to be conducted at the Bowen Lab. Results from both experiments will be used to test and verify computational models used to design buildings.

"Most of these experiments are showing that we have good models, and we are using data to benchmark the models and make sure the theory and experiment agree with each other," Varma said.

Models are needed to design composite floor systems, which can be heavily damaged by fire.

"When you have a floor supporting weight, the floor starts sagging from the heat," Varma said. "It expands, but it's got nowhere to go so it starts bowing down, which produces pulling forces on the building's frame. It starts pulling on the columns and then it becomes longer and permanently deformed. After the fire, it starts cooling, and then it starts pulling on the columns even harder."

(Photo: Purdue University/Mark Simons)

Purdue University

THE LIFEBLOOD OF LEAVES: VEIN NETWORKS CONTROL PLANT PATTERNS


New University of Arizona research indicates that leaf vein patterns correlate with functions such as carbon intake and water use – knowledge that could help scientists better understand the complex carbon cycle that is at the heart of global climate warming.

"Leaves have very different networks of veins. They have different shapes, different sizes, different thicknesses," said Benjamin Blonder, a doctoral student in the department of ecology and evolutionary biology. "The really interesting question is how a leaf with a certain form produces a certain function."

Blonder developed a mathematical model to predict the functions of leaves based on three properties of the vein network: density, distance between veins and number of loops, or enclosed regions of smaller veins much like capillaries in humans.

Vein density reflects how much energy and resources the leaf has invested in the network, while distance between veins shows how well the veins are supplying resources to the leaf. The number of loops is a measure of the leaf's resilience and plays a role in determining its lifespan. If the veins reconnect often and part of the leaf becomes damaged, resources can be circulated through different pathways.

"It's like in a city where there's a roadblock somewhere," said Blonder. "If the city was designed well, you can still take another road to get to where you want to be."

Blonder won the UA Graduate and Professional Student Showcase President's Award for his work, which was published this week online in the journal Ecology Letters.

The vein network inside of a leaf is like most of the important organ systems in a person, Blonder said.

"It's like the skeleton because it holds the whole leaf up and lets it capture sunlight and not get blown over in a windstorm. It's like the circulatory system because it's distributing water from the roots up to all the cells within the leaf, and it's also bringing resources from the leaf back to the rest of the plant after photosynthesis has happened. It's also like a nervous system because there are chemical signals that are transmitted to the leaves from other parts of the plant through the liquid in the veins," he said.

"This is important for the function of the leaf because when this one structure is implicated in so many different patterns, clearly there're going to be tradeoffs between being able to do all of these different functions well," said Blonder. For example, a leaf with a very loopy network of veins might live longer, but it will also cost a lot of carbon, which plants absorb from carbon dioxide in the atmosphere, to develop that vein network.

Blonder's model successfully predicted relationships among photosynthetic rate, lifespan, carbon cost and nitrogen cost for more than 2,500 species worldwide based on global data. But that doesn't mean it will work on a local scale.

To find out, the team tested leaves from 25 plant species on the UA campus. While initial results appear to show that the model will work, the team hasn't tested enough samples to know if it successfully predicts relationships in leaf function on a case-specific basis. More extensive studies will include leaves from species at the Rocky Mountain Biological Laboratory in Colorado.

"If it's successful, we hopefully have a really satisfying way of understanding why leaves look different in different environments – also a useful way of understanding how leaves are functioning in different environments that can be used for climate modeling or for reconstructing past climates from fossils of leaves," said Blonder.

So how do relationships among plant leaf functions impact global carbon levels?

"Carbon can only get into leaves through little pores on the leaf surface, and when carbon comes in, which is something good for the plant, water also comes out," said Blonder. "There's this incredibly tricky tradeoff for all plants where they need to gain carbon to make energy, but to gain that carbon they lose a lot of water in the process. So if you want to gain more carbon, you have to lose more water."

Plants with denser vein networks – veins that are closer together – are able to withstand higher levels of water loss and absorb more carbon. Unfortunately, that doesn't mean you should plant trees with dense leaf vein networks if you want to save the planet.

"It becomes a little bit more difficult to scale up beyond there because a plant is not only just its leaves: It's also the trunk and the roots and so on," said Blonder. "The important thing to think about is that other parts of the plant are going to be contributing to the carbon cycle also in terms of decomposition or other large-scale environmental effects."

"Carbon flux from plants is critical to understanding global change and the global carbon cycle," said Blonder. "What we're hoping to be able to do is understand the leaf side of the picture, but there's clearly a lot more to plants and the environment than that. So this is not the answer to every environmental question but it's a good start because leaves are the site of photosynthesis and carbon flux, and it's certainly necessary to understand those before you can understand plants in general."

Blonder hopes to use his model to develop more comprehensive climate models that take plants into account and to better understand past climates. Blonder's model could play an important role in understanding plant ecology, global carbon cycling and other environmental processes in the future.

(Photo: Tuan Cao)

University of Arizona

BODY CLOCK CONTROLS HOW BODY BURNS FAT


UC Irvine researchers have discovered that circadian rhythms — the internal body clock — regulate fat metabolism. This helps explain why people burn fat more efficiently at certain times of day and could lead to new pharmaceuticals for obesity, diabetes and energy-related illnesses.

The study was headed by Paolo Sassone-Corsi, Donald Bren Professor and chair of pharmacology. A leading expert on circadian rhythms, he discovered many of the key molecular switches governing these biological processes. He and his colleagues found that one of these, a protein called PER2, directly controls PPAR-gamma, a protein essential for lipid metabolism. Since circadian proteins are activated by 24-hour, light-dark patterns, PER2 turns on and off PPAR-gamma’s metabolic capabilities at regular intervals.

“What surprised us most, though, is that PER2 targets one specific amino acid on the surface of the PPAR-gamma molecule,” Sassone-Corsi said. “This kind of specificity is very rare in cell biology, which makes it exciting, because it presents us with a singular target for drug development.”

Daniele Piomelli, Louise Turner Arnold Chair in Neurosciences at UCI, and Todd Leff, associate professor of pathology at Wayne State University in Detroit, collaborated on the study, which appears this month in Cell Metabolism.

Twenty-four-hour circadian rhythms regulate fundamental biological and physiological processes in almost all organisms. They anticipate environmental changes and adapt certain bodily functions to the appropriate time of day. Disruption of these cycles can profoundly influence human health and has been linked to obesity, diabetes, insomnia, depression, heart disease and cancer.

Last year, Sassone-Corsi helped discover that proteins involved with circadian rhythms and metabolism are intrinsically linked and dependent upon each other to ensure that cells operate properly and remain healthy.

(Photo: Daniel A. Anderson / University Communications)

University of California, Irvine

Friday, November 26, 2010

NANOTECHNOLOGY: A DEAD END FOR PLANT CELLS?

Using particles that are 1/100,000 the width of a human hair to deliver drugs to cells or assist plants in fighting off pests may sound like something out of a science fiction movie, but these scenarios may be a common occurrence in the near future.

Carbon nanotubes, cylindrically shaped carbon molecules with a diameter of about 1 nanometer, have many potential applications in a variety of fields, such as biomedical engineering and medical chemistry. Proteins, nucleic acids, and drugs can be attached to these nanotubes and delivered to cells and organs. Carbon nanotubes can be used to recognize and fight viruses and other pathogens. However, results of studies in animals have also raised concerns about the potential toxicity of nanoparticles.

Recent research by a team of researchers from China, led by Dr. Nan Yao, explored the effects of nanoparticles on plant cells. The findings of Dr. Yao and his colleagues are published in the October issue of the American Journal of Botany (http://www.amjbot.org/cgi/reprint/97/10/1602).

Dr. Yao and his team of researchers isolated cells from rice as well as from the model plant species Arabidopsis. The researchers treated these cells with carbon nanotubes, and then assessed the cells for viability, damage to DNA, and the presence of reactive oxygen species.

The researchers found an increase in levels of the reactive oxygen species hydrogen peroxide. Reactive oxygen species cause oxidative stress to cells, and this stress can result in programmed cell death. Dr. Yao and his colleagues discovered that the effect of carbon nanotubes on cells was dosage dependent—the greater the dose, the greater the likelihood of cell death. In contrast, cells exposed to carbon particles that were not nanotubes did not suffer any ill effects, demonstrating that the size of the nanotubes is a factor in their toxicity.

"Nanotechnology has a large scope of potential applications in the agriculture industry, however, the impact of nanoparticles have rarely been studied in plants," Dr. Yao said. "We found that nanomaterials could induce programmed cell death in plant cells."

Despite the scientists' observations that carbon nanotubes had toxic effects on plant cells, the use of nanotechnology in the agriculture industry still has great promise. The scientists only observed programmed cell death as a temporary response following the injection of the nanotubes and did not observe further changes a day and a half after the nanotube treatments. Also, the researchers did not observe death at the tissue level, which indicates that injecting cells with carbon nanotubes caused only limited injury.

"The current study has provided evidence that certain carbon nanoparticles are not 100% safe and have side effects on plants, suggesting that potential risks of nanotoxicity on plants need to be assessed," Dr. Yao stated. In the future, Dr. Yao and colleagues are interested in investigating whether other types of nanoparticles may also have toxic effects on plant cells. "We would like to create a predictive toxicology model to track nanoparticles."

Only once scientists have critically examined the risks of nanoparticles can they take advantage of the tremendous potential benefits of this new technology.

American Journal of Botany

RISØ ENERGY REPORT 9: CO2-FREE ENERGY CAN MEET THE WORLD'S ENERGY NEEDS IN 2050

Risø Energy Report 9 lists a wide range of energy technologies in the market with low or no emissions of greenhouse gases, describing how several of these will be made commercially available in the next decades.

However, the world's energy supply cannot be made CO2-free as cheaply as possible through incremental technology development within current energy systems alone. There must be room for technological leaps, and there is a need for an integrated process to optimise the entire energy system, from energy production, through transformation into energy carriers, to energy transportation, distribution and efficient end use.

There is also a need for a smart grid connecting production and end use at the local level. End users should contribute to maintaining balance in the future energy system, and new technologies should be introduced to end users, including houses with low and flexible consumption, smart electronic equipment, heat pumps, energy storage and local energy supplies such as solar cells and micro CHP. Information and communication technology (ICT) will determine how successfully renewables are actually integrated into the grid.

Considering security of supply in the short and long term, there is still a need for access to fossil fuels, but they must be continuously replaced with renewable energy sources. If no efforts are made to promote renewable energy sources, coal and gas could easily prevail in the global energy supply for the rest of this century. For many countries, however, it could be advantageous to switch to renewable energy sources in order to reduce dependence on imported oil and gas. In addition, this transition can help countries achieve their environmental policy goals.

Seen in isolation, Denmark has a great chance of achieving these goals, phasing out fossil fuels at a rapid pace and thus reducing emissions of greenhouse gases at the required pace.

Danish wind and biomass resources in particular will make it possible to phase out fossil fuels from power generation and heat production before 2040. It will take a further 10 years to eliminate fossil fuels from the transport sector.

A future smart energy system requires that we start investing now. If we do not make these investments, future generations will look back on this period and wonder how we could be satisfied with an outdated energy system while failing to take advantage of opportunities we were already aware of.

Risø National Laboratory for Sustainable Energy

DOES THE WISDOM OF CROWDS PREVAIL WHEN BETTING ON FOOTBALL?

Point spreads—the number of points by which a strong team can be expected to defeat a weaker team—are supposed to reflect the "wisdom of crowds." But a new study in the Journal of Consumer Research found that crowds don't have a clue.

"Point spread betting markets seem to offer an important example of crowd wisdom, because point spreads are very accurate and are widely believed to reflect the 'crowd's' prediction of upcoming sporting events," write authors Joseph P. Simmons (Yale University), Leif D. Nelson (University of California at Berkeley), Jeff Galak (Carnegie Mellon University), and Shane Frederick (Yale University). But previous research shows that bettors are biased in their predictions; their intuitions tend to favor "favorites" over "underdogs."

The authors conducted a season-long investigation of the betting habits of enthusiastic NFL football fans from diverse regions of the United States. Participants wagered more than $20,000 on football games against point spreads that were manipulated to favor the underdog.

The authors first tested a hypothesis that crowds will wisely choose underdogs against spreads that disadvantage favorites. The bettors failed this test, predicting vastly more favorites (89 percent) than underdogs. Next, they found that even when bettors were warned that the spreads had been increased they still predicted favorites only slightly less often (83 percent).

"In this context, the temptation to rely on one's intuition is so strong as to lead people to rely on what they intuitively feel to be true (this favorite will prevail against the spread) rather than on what they generally know to be true (the favorite will usually lose against the spread)" the authors write. And it seems people have trouble learning from their mistakes: the crowd's predictions worsened over time, rather than getting better.

Finally, the researchers hit upon a method of eliciting better choices. "Asking people to predict point differentials rather than make choices against point spreads decreased reliance on faulty intuitions and produced vastly different, and vastly wiser, predictions against the spread," the authors conclude.

University of Chicago Press

DO CONSUMERS PREFER 1 PERCENT INTEREST OVER 0 PERCENT INTEREST OR IS ZERO SIMPLY CONFUSING?

Why would someone choose a credit card with a one percent interest rate over another with a zero percent rate? A new study in the Journal of Consumer Research finds that consumers are often flummoxed when it comes to zero.

"A reasonable assumption is that a product will be more attractive when it offers more of a good thing, such as free pictures (with a digital camera purchase), or less of a bad thing, like interest rates on a credit card," writes author Mauricio Palmeira (Monash University, Australia). But Palmeira's research found that consumer comparison methods tend to get confused when one of the comparison terms has a zero value.

For example, a consumer interested in a new credit card may need to choose between one with a $45 annual fee and a one percent interest rate and another with a $15 fee and a 20 percent interest rate. "One could view this decision as a choice between an extra $30 annually for a 19 percent reduction in interest rate. Alternately, it can be viewed in relative terms. In this sense, a $30 difference between $15 and $45 appears much bigger than the same difference between $115 and $145," writes Palmeira. Consumers tend to be more sensitive to relative than to absolute differences, which is why the one percent interest rate looks so good: 20 percent is 20 times larger.

But what if consumers compare a 20 percent interest rate to a zero percent one? "I argue that whereas a 20 percent interest rate may look very large compared to one percent (it is 20 times larger!), it may not look as large compared to zero percent. Zero eliminates the reference point we use to assess the size of things," Palmeira explains.

"This leads to a counterintuitive situation, in which a credit card can increase its likelihood of being selected when it has a small but non-zero interest rate," writes Palmeira. The same is true of other attributes that consumers want to minimize, like interest rates and fat content.

The inverse is true when consumers desire an attribute. For example, if a digital camera offers a promotion that adds 200 free pictures to a purchase, a competitor may be better off offering nothing rather than just a few free pictures. "This is because 200 will look larger compared to 10 or 20 than compared to zero," Palmeira writes.

Chicago Journals

T. REX'S BIG TAIL WAS ITS KEY TO SPEED AND HUNTING PROWESS

Tyrannosaurus rex was far from a plodding Cretaceous era scavenger whose long tail only served to counterbalance the up-front weight of its freakishly big head.

T. rex's athleticism (and its rear end) has been given a makeover by University of Alberta graduate student Scott Persons. His extensive research shows that powerful tail muscles made the giant carnivore one of the fastest moving hunters of its time.

As Persons says, "contrary to earlier theories, T. rex had more than just junk in its trunk."

The U of A paleontology student began his research by comparing the tails of modern-day reptiles like crocodiles and Komodo dragons to T. rex's tail. Persons found that for all animals in his study, the biggest muscles in the tail are attached to the upper leg bones. These caudofemoralis muscles provide the power stroke allowing fast forward movement.

But Persons found that T. rex had one crucial difference in its tail structure.

The tails of both T. rex and modern animals are given their shape and strength by rib bones attached to the vertebrae. Persons found that the ribs in the tail of T. rex are located much higher on the tail. That leaves much more room along the lower end of the tail for the caudofemoralis muscles to bulk up and expand. Without rib bones to limit the size of the caudofemoralis muscles, they became a robust power plant enabling T. rex to run.

Persons’ extensive measurements of T. rex bones and computer modeling show that previous estimates of the muscle mass in the dinosaur's tail were too low by as much as 45 per cent.

That led many earlier T. rex researchers to believe the animal lacked the necessary muscle mass for running, which in turn limited its hunting skills. That lack of speed cast T. rex in the role of a scavenger, only able to survive by feeding on animals killed by other predators.

As for T. rex's exact speed, researchers say that is hard to measure, but Persons says it could likely run down any other animal in its ecosystem.

University of Alberta

BREAKING THE ICE BEFORE IT BEGINS


Engineers from Harvard University have designed and demonstrated ice-free nanostructured materials that literally repel water droplets before they even have the chance to freeze.

The finding, reported online in ACS Nano on November 9th, could lead to a new way to keep airplane wings, buildings, powerlines, and even entire highways free of ice during the worst winter weather. Moreover, integrating anti-ice technology right into a material is more efficient and sustainable than conventional solutions like chemical sprays, salt, and heating.

A team led by Joanna Aizenberg, Amy Smith Berylson Professor of Materials Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Member of the Wyss Institute for Biologically Inspired Engineering at Harvard, focused on preventing rather than fighting ice buildup.

"We wanted to take a completely different tact and design materials that inherently prevent ice formation by repelling the water droplets," says Aizenberg. "From past studies, we also realized that the formation of ice is not a static event. The crucial approach was to investigate the entire dynamic process of how droplets impact and freeze on a supercooled surface."

For initial inspiration, the researchers turned to some elegant solutions seen in nature. For example, mosquitos can defog their eyes, and water striders can keep their legs dry thanks to an array of tiny bristles that repel droplets by reducing the surface area each one encounters.

"Freezing starts with droplets colliding with a surface," explains Aizenberg. "But very little is known about what happens when droplets hit surfaces at low temperatures."

To gain a detailed understanding of the process, the researchers watched high-speed videos of supercooled droplets hitting surfaces that were modeled after those found in nature. They saw that when a cold droplet hits the nanostructured surface, it first spreads out, but then the process runs in reverse: the droplet retracts to a spherical shape and bounces back off the surface before ever having a chance to freeze.

By contrast, on a smooth surface without the structured properties, a droplet remains spread out and eventually freezes.

"We fabricated surfaces with various geometries and feature sizes—bristles, blades, and interconnected patterns such as honeycombs and bricks—to test and understand parameters critical for optimization," says Lidiya Mishchenko, a graduate student in Aizenberg's lab and first author of the paper.

The use of such precisely engineered materials enabled the researchers to model the dynamic behavior of impacting droplets at an amazing level of detail, leading them to create a better design for ice-preventing materials.

Another important benefit of testing a wide variety of structures, Mishchenko adds, was that it allowed the team to optimize for pressure-stability. They discovered that the structures composed of interconnected patterns were ideally suited for stable, liquid-repelling surfaces that can withstand high-impact droplet collisions, such as those encountered in driving rain or by planes in flight.

The nanostructured materials prevent the formation of ice even down to temperatures as low as -25 to -30 degrees Celsius. Below that, due to the reduced contact area that prevents the droplets from fully wetting the surface, any ice that forms does not adhere well and is much easier to remove than the stubborn sheets that can form on flat surfaces.

"We see this approach as a radical and much needed shift in anti-ice technologies," says Aizenberg. "The concept of friction-free surfaces that deflect supercooled water droplets before ice nucleation can even occur is more than just a theory or a proof-of-principle experiments. We have begun to test this promising technology in real-world settings to provide a comprehensive framework for optimizing these robust ice-free surfaces for a wide range of applications, each of which may have a specific set of performance requirements."

In comparison with traditional ice prevention or removal methods like salting or heating, the nanostructured materials approach is efficient, non-toxic, and environmentally friendly. Further, when chemicals are used to de-ice a plane, for example, they can be washed away into the environment and their disposal must be carefully monitored. Similarly, salt on roads can lead to corrosion and run-off problems in local water sources.

The researchers anticipate that with their improved understanding of the ice forming process, a new type of coating integrated directly into a variety of materials could soon be developed and commercialized.

(Photo: Mike Oliveri, Flickr)

Harvard University

Thursday, November 25, 2010

DARWIN'S THEORY OF GRADUAL EVOLUTION NOT SUPPORTED BY GEOLOGICAL HISTORY

0 comentarios
Charles Darwin's theory of gradual evolution is not supported by geological history, New York University geologist Michael Rampino concludes in an essay in the journal Historical Biology. In fact, Rampino notes that a more accurate theory of evolution, one positing that long periods of evolutionary stability are disrupted by catastrophic mass extinctions of life, was put forth by Scottish horticulturalist Patrick Matthew prior to Darwin's published work on the topic.

"Matthew discovered and clearly stated the idea of natural selection, applied it to the origin of species, and placed it in the context of a geologic record marked by catastrophic mass extinctions followed by relatively rapid adaptations," says Rampino, whose research on catastrophic events includes studies on volcano eruptions and asteroid impacts. "In light of the recent acceptance of the importance of catastrophic mass extinctions in the history of life, it may be time to reconsider the evolutionary views of Patrick Matthew as much more in line with present ideas regarding biological evolution than the Darwin view."

Matthew (1790-1874), Rampino notes, published a statement of the law of natural selection in a little-read appendix to his 1831 book On Naval Timber and Arboriculture. Even though both Darwin and his colleague Alfred Russel Wallace acknowledged that Matthew was the first to put forth the theory of natural selection, historians have attributed the unveiling of the theory to Darwin and Wallace. Darwin's notebooks show that he arrived at the idea in 1838, and he composed an essay on natural selection as early as 1842—years after Matthew's work appeared. Darwin and Wallace's theory was formally presented in 1858 at a science society meeting in London. Darwin's Origin of Species appeared a year later.

In the appendix of On Naval Timber and Arboriculture, Matthew described the theory of natural selection in a way that Darwin later echoed: "There is a natural law universal in nature, tending to render every reproductive being the best possibly suited to its condition…As the field of existence is limited and pre-occupied, it is only the hardier, more robust, better suited to circumstance individuals, who are able to struggle forward to maturity…"

However, in explaining the forces that influenced this process, Matthew saw catastrophic events as a prime factor, maintaining that mass extinctions were crucial to the process of evolution: "...all living things must have reduced existence so much, that an unoccupied field would be formed for new diverging ramifications of life... these remnants, in the course of time moulding and accommodating ... to the change in circumstances."

When Darwin published his Origin of Species nearly three decades later, he explicitly rejected the role of catastrophic change in natural selection: "The old notion of all the inhabitants of the Earth having been swept away by catastrophes at successive periods is very generally given up," he wrote. Instead, Darwin outlined a theory of evolution based on the ongoing struggle for survival among individuals within populations of existing species. This process of natural selection, he argued, should lead to gradual changes in the characteristics of surviving organisms.

However, as Rampino notes, geological history is now commonly understood to be marked by long periods of stability punctuated by major ecological changes that occur both episodically and rapidly, casting doubt on Darwin's theory that "most evolutionary change was accomplished very gradually by competition between organisms and by becoming better adapted to a relatively stable environment."

"Matthew's contribution was largely ignored at the time, and, with few exceptions, generally merits only a footnote in modern discussions of the discovery of natural selection," Rampino concludes. "Others have said that Matthew's thesis was published in too obscure a place to be noticed by the scientific community, or that the idea was so far ahead of its time that it could not be connected to generally accepted knowledge. As a result, his discovery was consigned to the dustbin of premature and unappreciated scientific ideas."

New York University

BILINGUAL BENEFITS REACH BEYOND COMMUNICATION

0 comentarios
Speaking two languages can be handy when traveling abroad, applying for jobs, and working with international colleagues, but how does bilingualism influence the way we think?

In the current issue of Psychological Science in the Public Interest, a journal of the Association for Psychological Science, Ellen Bialystok (York University), Fergus I.M. Craik (Rotman Research Institute), David W. Green (University College London), and Tamar H. Gollan (University of California, San Diego) review the latest research on bilingualism and ways in which knowing two languages can change brain function, even affecting brain areas not directly involved in communication.

Children learning two languages from birth achieve the same basic milestones (e.g., their first word) as monolinguals do, but they may use different strategies for language acquisition. Although bilinguals tend to have smaller vocabularies in each language than do children who know one language, bilinguals may have an advantage when it comes to certain nonverbal cognitive tasks. Bilinguals tend to perform better than monolinguals on exercises that require blocking out distractions and switching between two or more different tasks. The authors note that “when a bilingual speaks two languages regularly, speaking in just one of these languages requires use of the control network to limit interference from the other language and to ensure the continued dominance of the intended language.” The bilingual advantage in attention and cognitive control may have important, long-term benefits. Preliminary evidence even suggests that their increased use of these systems may protect bilinguals against Alzheimer’s.

The differences between monolinguals and bilinguals have important clinical implications. For example, vocabulary tests are commonly used in psychologists’ offices and bilinguals’ scores may not accurately reflect their language ability. According to the authors, “Bilinguals who score below average may be inaccurately diagnosed with impairment when none is present, or could be diagnosed as ‘normal for a bilingual’ even though impairment is in fact present and treatment is needed.” Clinicians need to be aware of the potential to misinterpret bilinguals’ test scores. Developing tests that specifically target bilingual populations may result in better outcomes for these patients.

Association for Psychological Science

THE MIND USES SYNTAX TO INTERPRET ACTIONS

0 comentarios
Most people are familiar with the concept that sentences have syntax. A verb, a subject, and an object come together in predictable patterns. But actions have syntax, too; when we watch someone else do something, we assemble their actions to mean something, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.

“There are oceans and oceans of work on how we understand languages and how we interpret the things other people say,” says Matthew Botvinick of Princeton University, who cowrote the paper with his colleagues Kachina Allen, Steven Ibara, Amy Seymour, and Natalia Cordova. They thought the same principle might be applied to understanding actions. For example, if you see someone buy a ticket, give it to the attendant, and ride on the carousel, you understand that exchanging money for a piece of paper gave him the right to get on the round thing and go around in circles.

Botvinick and his colleagues focused on action sequences that followed two contrasting kinds of syntax—a linear syntax, in which action A (buying a ticket) leads to action B (giving the ticket to the attendant), which leads to outcome C (riding the carousel), and another syntax in which actions A and B both independently lead to outcome C. They were testing whether the difference in structure affected the way that people read about the actions.

The experiments were based on studies suggesting that people read a sentence faster if it comes after a sentence with the same grammatical form. But in this case, the scientists varied the relationships between actions rather than the order of parts of speech. In one experiment, volunteers read sentences that described three actions. These took one of two forms: sentences in which one action leads to the next, which leads to the outcome, such as “John purchased a carousel ticket, gave it to the attendant, and went for a ride,” or sentences like “John sliced up some tomatoes, rinsed off some lettuce, and tossed together a salad,” in which both of the first two actions lead to the result without the second depending on the first.

Indeed, people were able to read a sentence more quickly if it followed a set of actions arranged the same way than if it followed a sentence of the other type. This indicates that readers’ minds had some kind of abstract representation of the ways goals and actions relate, says Botvinick. “It’s the underlying knowledge structure that kind of glues actions together. Otherwise, you could watch somebody do something and say it’s just a random sequence of actions.”
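As a rough illustration of the comparison behind this result, here is a minimal Python sketch with made-up reading times; the study's actual data and statistical tests are not reproduced here.

import statistics

# Hypothetical per-sentence reading times in milliseconds.
# Structural priming predicts faster reading when a sentence follows one
# with the SAME action structure (linear A -> B -> C versus independent
# A, B -> C) than when the structures differ.
same_structure = [1520, 1480, 1455, 1510, 1490]
different_structure = [1610, 1585, 1630, 1575, 1600]

print("same structure:     ", statistics.mean(same_structure), "ms")
print("different structure:", statistics.mean(different_structure), "ms")
# A faster mean in the same-structure condition is the priming signature
# the researchers looked for.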

In the carousel example, a Martian might not understand why John exchanges paper for another piece of paper, why he gives the paper to the other man, why he goes around and around in circles, and what relationship there is between these actions. As humans, we’ve worked all of those things out, and Botvinick thinks he’s a step closer to understanding the process.

Association for Psychological Science

THE EMERGENCE OF HOLOGRAPHIC VIDEO

0 comentarios

Researchers at the University of Arizona (UA), Tucson, have developed a holographic system that can transmit a series of 3D images in near-real-time, a precursor to holographic videoconferencing.

The system incorporates a novel photorefractive polymer, one that can rapidly refresh holographic images and is scalable for production, coupled to a unique system for recording and transmitting 3D images of individuals and objects via Ethernet.

Lead author Pierre-Alexandre Blanche and his colleagues from the university and Nitto Denko Technical Corp. of Oceanside, Calif., describe the breakthrough in the cover story of the Nov. 4, 2010, issue of Nature.

"This advance brings us a step closer to the ultimate goal of realistic holographic telepresence with high-resolution, full-color, human-size, 3D images that can be sent at video refresh rates from one part of the world to the other," says co-author and project lead Nasser Peyghambarian of UA and the Director of NSF's multi-institution Engineering Research Center for Integrated Access Networks (CIAN).

The researchers had previously demonstrated a refreshable polymer display system, but it could refresh images only once every four minutes. The new system can refresh images every two seconds; while not yet ideal for a display, that is a 120-fold improvement (one image every 240 seconds versus one every two seconds).

Additionally, using a single-laser system for writing the images onto the photorefractive polymer, the researchers can display visuals in color. While the current refresh rate for multi-color display is even slower than for monochromatic images, the development suggests a true 3D, multicolor system may be feasible.

"This breakthrough opens new opportunities for optics as a means to transport images in real time," says Lynn Preston, director of the NSF Engineering Research Centers program that supports CIAN. "Such a system can have an important impact on telepresence, telemedicine, engineering design and manufacturing, and other applications. This is an early and tremendously important outcome from this three-year old center."

(Photo: University of Arizona)

National Science Foundation

GRAVITY EASES ITS PULL

0 comentarios
A Newcastle University scientist has provided new insights into the way that electromagnetic forces interact with gravity.

The nature of gravity has baffled scientists through the ages and has proved a major stumbling block to the creation of a single 'theory of everything'.

But a new analysis by Dr David Toms, a theoretical physicist at Newcastle University, now shows that gravity may at least make some fundamental calculations more manageable.

He has found that gravity seems to calm the electromagnetic force at high energies. The finding could make some calculations easier, and is a rare case in which gravity seems to work in harmony with quantum mechanics, the theory of small particles. His full paper is published today in Nature.

Dr Toms explains: "The basic idea is that the value of the electric charge depends on how close you are to that charge.

"The number for the electric charge that you look up in the back of a textbook assumes that you are a very large distance - on the atomic scale - from the charge. The reason that the value changes with energy has to do with quantum mechanics.

"My research shows conclusively that charge is affected by gravity, and that it tends to make the charge weaker as you proceed to smaller distances. This is unexpected because in the complete absence of gravity the charge gets larger as the distance decreases."

In Dr Toms' work, gravity seems to smooth the interaction, making the force between the electron and photon nearly zero at high energies. This weakening of the force means that theorists can calculate the behaviour of high-energy electrons and photons after all.
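Schematically, and only as a hedged reading of the result as described here rather than the paper's exact expression, the running of the electromagnetic coupling e with energy E can be written

\[ \beta(E, e) = \frac{e^{3}}{12\pi^{2}} - a\,\frac{E^{2}}{M_{P}^{2}}\,e, \qquad a > 0, \]

where the first term is the familiar QED running that makes the charge grow at shorter distances, and the second, gravitational term, utterly negligible at everyday energies, grows with energy and pulls the effective charge back toward zero as E approaches the Planck mass M_P.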

"What gravity seems to do is make things better for you but there is still a lot of work to do", he warns.

Newcastle University

THE BRAINS OF NEANDERTHALS AND MODERN HUMANS DEVELOPED DIFFERENTLY

0 comentarios

Researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, have documented species differences in the pattern of brain development after birth that are likely to contribute to cognitive differences between modern humans and Neanderthals.

Whether cognitive differences exist between modern humans and Neanderthals is the subject of contentious disputes in anthropology and archaeology. Because the brain size range of modern humans and Neanderthals overlap, many researchers previously assumed that the cognitive capabilities of these two species were similar. Among humans, however, the internal organization of the brain is more important for cognitive abilities than its absolute size is. The brain’s internal organization depends on the tempo and mode of brain development.

Based on detailed measurements of internal shape changes of the braincase during individual growth, a team of scientists from the MPI has shown that there are differences in the patterns of brain development between humans and Neanderthals during a critical phase for cognitive development.

Discussions about the cognitive abilities of fossil humans usually focus on material culture (e.g. the complexity of the stone tool production process) and endocranial volumes. "The interpretation of the archaeological evidence remains controversial, and the brain-size ranges of Neanderthals and modern humans overlap," says Jean-Jacques Hublin, director of the Department of Human Evolution at the MPI-EVA in Leipzig, where the research was conducted. Hublin adds: "Our findings show how biological differences between modern humans and Neanderthals may be linked to behavioural differences inferred from the archaeological record."

Nature of the evidence: As the brain does not fossilize, only the imprints of the brain and its surrounding structures in the bone (so-called "endocasts") can be studied in fossil skulls. The researchers used state-of-the-art statistical methods to compare shape changes of virtual endocasts extracted from computed-tomographic scans. The distinct globular shape of the braincase of adult Homo sapiens is largely the result of a brain development phase that is not present in Neanderthals.
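The release does not spell out those statistical methods, but shape comparisons of this kind are typically built on landmark-based geometric morphometrics. The following Python sketch shows one standard ingredient, Procrustes superimposition, applied to hypothetical landmark data; it illustrates the general technique, not the study's actual pipeline.

import numpy as np

# Illustrative sketch (hypothetical data, not the study's scans): Procrustes
# superimposition removes position, size, and orientation from two landmark
# configurations so that only shape differences remain.
def procrustes_distance(A, B):
    A = A - A.mean(axis=0)             # remove position (center on centroid)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)          # remove size (unit centroid size)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(A.T @ B)  # best-fit rotation (orthogonal Procrustes)
    R = U @ Vt
    return np.linalg.norm(A @ R - B)   # residual = pure shape difference

rng = np.random.default_rng(0)
endocast_a = rng.random((20, 3))                      # placeholder landmarks
endocast_b = endocast_a + 0.05 * rng.random((20, 3))  # slightly deformed copy
print(procrustes_distance(endocast_a, endocast_b))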

One of the key pieces of evidence was the skull reconstruction of a Neanderthal newborn. In 1914, a team of French archaeologists had excavated the skeleton of a Neanderthal baby at the rock shelter of Le Moustier in the Dordogne. The original bones of the skeleton had been lost to science for more than 90 years, until they were rediscovered among museum collections by Bruno Maureille and the museum staff. The restored original baby bones are now on permanent display at the Musée National de Préhistoire in Les Eyzies-de-Tayac-Sireuil. The museum’s director Jean-Jacques Cleyet-Merle made it possible to scan the delicate fragments using a high-resolution computed-tomographic scanner (µCT). Using computers at the Max Planck Institute’s virtual reality lab in Leipzig, Philipp Gunz and Simon Neubauer then reconstructed the Neanderthal baby from the digital pieces, as in a three-dimensional jigsaw puzzle. "When we compare the skulls of a Neanderthal and a modern human newborn, the Neanderthal’s face is already larger at the time of birth. However, most shape differences of the internal braincase develop after birth," explains Gunz. Both Neanderthals and modern human neonates have elongated braincases at the time of birth, but only modern human endocasts change to a more globular shape in the first year of life. Modern humans and Neanderthals therefore reach large adult brain sizes via different developmental pathways.

In a related study the same team of MPI researchers had previously shown that the developmental patterns of the brain were remarkably similar between chimpanzees and humans after the first year of life, but differed markedly directly after birth. "We interpret those aspects of development that are shared between modern humans, Neanderthals, and chimpanzees as conserved," explains Simon Neubauer. "This developmental pattern has probably not changed since the last common ancestor of chimpanzees and humans several million years ago." In the first year of life, modern humans, but not Neanderthals, depart from this ancestral pattern of brain development.

Establishing when the species differences between Neanderthal and modern human adults emerge during development was critical for understanding whether differences in the pattern of brain development might underlie potential cognitive differences. As the differences between modern humans and Neanderthals are most prominent in the period directly after birth, they likely have implications for the neuronal and synaptic organization of the developing brain.

The development of cognitive abilities during individual growth is linked to the maturation of the underlying wiring pattern of the brain; around the time of birth, the neural circuitry is sparse in humans, and clinical studies have linked even subtle alterations in early brain development to changes in the neural wiring patterns that affect behaviour and cognition. The connections between diverse brain regions that are established during this period in modern humans are important for higher-order social, emotional, and communication functions. It is therefore unlikely that Neanderthals saw the world as we do.

The new study shows that modern humans have a unique pattern of brain development after birth, which separates us from our closest relatives, the Neanderthals. This uniquely modern human pattern of early brain development is particularly interesting in light of the recent breakthroughs in the Neanderthal genome project. A comparison of Neanderthal and modern human genomes revealed several regions with strong evidence for positive selection within Homo sapiens, i.e. the selection occurred after the split between modern humans and Neanderthals. Three among these are likely to be critical for brain development, as they affect mental and cognitive development.

"Our findings have two important implications," says Philipp Gunz. "We have discovered differences in the patterns of brain development that might contribute to cognitive differences between modern humans and Neanderthals. Maybe more importantly, however, this discovery will tell us more about our own species than about Neanderthals; we hope that our findings will help to identify the function of some genes that show evidence for recent selection in modern humans."

(Photo: Max Planck Institute for Evolutionary Anthropology)

Max Planck Institute

CAVEMAN BEHAVIOURAL TRAITS MIGHT KICK IN AT DINNER TABLE BEFORE EATING

0 comentarios
Frank Kachanoff was surprised. He thought the sight of meat on the table would make people more aggressive, not less. After all, don’t football coaches feed their players big hunks of red meat before a game in hopes of pumping them up? And what about our images of a grunting or growling animal snarling at anyone who dares take their meat away from them? Wouldn’t that go for humans, too?

Kachanoff, a researcher with a special interest in evolution at McGill University’s Department of Psychology, has discovered quite the reverse. According to research presented at a recent symposium at McGill, seeing meat appears to make human beings significantly less aggressive. “I was inspired by research on priming and aggression, which has shown that just looking at an object learned to be associated with aggression, such as a gun, can make someone more likely to behave aggressively. I wanted to know if we might respond aggressively to certain stimuli in our environment not because of learned associations, but because of an innate predisposition. I wanted to know if just looking at the meat would suffice to provoke an aggressive behavior.”

The idea that meat would elicit aggressive behaviour makes sense, as it would have helped our primate ancestors with hunting, co-opting and protecting their meat resources. Kachanoff believed that humans may therefore have evolved an innate predisposition to respond aggressively towards meat, and recruited 82 males to test his theory, using long-established techniques for provoking and measuring aggression. The experiment itself was quite simple – subjects had to punish a script reader every time he made an error while sorting photos, some with pictures of meat, and others with neutral imagery. The subjects believed that they could inflict various volumes of sound, including “painful,” on the script reader, which he would hear after his performance. While the research team figured that the group sorting pictures of meat would inflict more discomfort on the reader, they were very surprised by the results.

“We used imagery of meat that was ready to eat. In terms of behaviour, with the benefit of hindsight, it would make sense that our ancestors would be calm, as they would be surrounded by friends and family at meal time,” Kachanoff explained. “I would like to run this experiment again, using hunting images. Perhaps Thanksgiving next year will be a great opportunity for a do-over!”

Evolutionary psychologists believe it is useful to look at innate reflexes in order to better understand societal trends and personal behavior. Kachanoff’s research is important because it looks at ways society may influence environmental factors to decrease the likelihood of aggressive behavior. His research was carried out under the direction of Dr. Donald Taylor and Ph.D student, Ms. Julie Caouette of McGill’s Department of Psychology, and was presented at the university’s annual undergraduate science symposium.

McGill University

FAT CELLS REACH THEIR LIMIT AND TRIGGER CHANGES LINKED TO TYPE 2 DIABETES

0 comentarios
Scientists have found that the fat cells and tissues of morbidly obese people and animals can reach a limit in their ability to store fat appropriately. Beyond this limit several biological processes conspire to prevent further expansion of fat tissue and in the process may trigger other health problems.

Research funded by the Biotechnology and Biological Sciences Research Council (BBSRC), the Medical Research Council (MRC) and the European Union Sixth Framework Programme, shows that a protein called secreted frizzled-related protein 1 (SFRP1) is produced by fat cells and may be involved in changes to our metabolism that could increase the risk of diabetes and cardiovascular disease. The work was carried out at the University of Cambridge and will be published in a future edition of the International Journal of Obesity.

Professor Antonio Vidal-Puig from the Institute of Metabolic Science, University of Cambridge, said: "We have known for some time that many obese individuals are at greater risk of developing diabetes, cardiovascular disease and also cancer. But this is not true for all obese people."

Dr Jaswinder Sethi, also from the Institute of Metabolic Science, University of Cambridge, added: "What we still do not fully understand is how the expansion of fat tissue is regulated in healthy people and how this process of regulation might be different in those obese people who have health problems such as the metabolic syndrome."

One hypothesis is that storing surplus fat in itself may not lead to metabolic syndrome but there may be a maximum limit of how much fat a person can store safely before the body's natural responses lead to the debilitating chronic health problems often associated with obesity.

Dr Sethi continued "To investigate this we have been using a combination of molecular cell biology, human gene profiling and mouse genetics as tools to understand what is happening as fat cells and tissues develop and then, in some very obese people, lose their normal process of regulation."

The researchers have found that the level of SFRP1 increases as fat cells and tissues increase in volume until it peaks at about the point of mild obesity. There is evidence that SFRP1 is involved in recruiting new fat cells, thereby facilitating the expansion of fat tissue up until this point where it peaks.

"SFRP1 seems to be very closely linked to some sort of tipping point, after which the way in which our fat tissue is regulated changes significantly and there are knock-on consequences to our wider metabolism. We think that in very obese people this may be an early event that triggers metabolic syndrome and the chronic health problems associated with it, such as diabetes and cardiovascular disease," said Dr Sethi.

The fat tissue of people who are obese and also have diabetes shows signs of not being regulated as it usually would be. In this tissue, the researchers also see the levels of SFRP1 begin to fall so as to prevent further expansion of the tissue. It is this fall in SFRP1 that has knock-on effects on metabolism that may in part explain the link between morbid obesity and metabolic syndrome.
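One way to picture the tipping point described above is a toy inverted-U model of SFRP1 level against body mass. The Python sketch below uses arbitrary units and an assumed peak near mild obesity; it illustrates the rise-and-fall pattern only and contains no numbers from the study.

def sfrp1_level(bmi, peak_bmi=32.0, width=15.0):
    # Toy inverted-U: rises toward an assumed peak near mild obesity,
    # then falls as fat-tissue regulation breaks down. All values are
    # hypothetical and in arbitrary units.
    return max(0.0, 1.0 - ((bmi - peak_bmi) / width) ** 2)

for bmi in (20, 25, 30, 32, 40, 47):
    print(bmi, round(sfrp1_level(bmi), 2))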

The researchers believe that SFRP1 works in concert with other molecules to respond to the availability, or not, of energy. Together these molecules also determine to what extent our fat tissue can continue to expand as we consume more calories than we burn.

Professor Douglas Kell, BBSRC Chief Executive said: "Research such as this leads to better understanding of the biochemistry that drives normal human physiology. In particular we can see how we usually respond to extremes brought on by the various onslaughts of our lifestyles and environments. Increasing our understanding of the fundamentals of metabolic signalling is an important part of working towards an increase in health span to match our increasing life spans."

Biotechnology and Biological Sciences Research Council

SKELETONS FROM THE 18TH CENTURY REVEAL TYPHUS EPIDEMIC FROM SPAIN

0 comentarios

By studying the dental pulp of skeletons buried in Douai (northern France), researchers from CNRS and the Université de la Méditerranée have identified the pathogenic agents responsible for trench fever and typhus. Published in the journal PLoS ONE, this work reveals for the first time the presence of typhus in Europe at the start of the 18th century and lends weight to the hypothesis that this disease could have been imported into Europe by Spanish conquistadors returning from the Americas.

Between 1710 and 1712, while Louis XIV was waging war with northern Europe over the Spanish succession, the town of Douai in northern France was besieged on several occasions. In 1981, during building construction work, mass graves were discovered in the town. The skeletons brought to light were subjected to paleomicrobiology studies directed by Didier Raoult of the Unité de Recherche sur les Maladies Infectieuses et Tropicales Emergentes (CNRS/Université de la Méditerranée/IRD), in collaboration with researchers from the Laboratoire d'Anthropologie Bioculturelle de Marseille (CNRS/Université de la Méditerranée/EFS).

The manner in which the skeletons found in the mass graves were laid out and the absence of any bodily injuries caused by weapons point to an epidemic, possibly more lethal than the battles that took place during the siege of Douai in the early 18th century. Molecular biology analyses enabled the team to identify the pathogenic agent responsible for the epidemic. Using DNA extracted from dental pulp, the scientists identified the DNA of the bacteria responsible for trench fever (Bartonella quintana) and, predominantly, typhus (Rickettsia prowazekii). This is the earliest demonstration of the presence in Europe of the agent of typhus, an infectious disease transmitted by lice.

The same team of scientists had already revealed the presence of these pathogenic agents a century later in Napoleonic armies. The genotyping of Rickettsia prowazekii shows that it is the same bacterium that later became rife in Spain, thus supporting the hypothesis that typhus was imported into Europe by Spanish conquistadors returning from the Americas.

(Photo: © Communauté d'agglomération du Douaisis, direction de l'archéologie préventive)

CNRS

Wednesday, November 24, 2010

TUNING IN TO A NEW HEARING MECHANISM

0 comentarios

More than 30 million Americans suffer from hearing loss, and about 6 million wear hearing aids. While those devices can boost the intensity of sounds coming into the ear, they are often ineffective in loud environments such as restaurants, where you need to pick out the voice of your dining companion from background noise.

To do that, you need to be able to distinguish sounds with subtle differences. The human ear is exquisitely adapted for that task, but the underlying mechanism responsible for this selectivity has remained unclear. Now, new findings from MIT researchers reveal an entirely new mechanism by which the human ear sorts sounds, a discovery that could lead to improved, next-generation assistive hearing devices.

“We’ve incorporated into hearing aids everything we know about how sounds are sorted, but they’re still not very effective in problematic environments such as restaurants, or anywhere there are competing speakers,” says Dennis Freeman, MIT professor of electrical engineering, who is leading the research team. “If we knew how the ear sorts sounds, we could build an apparatus that sorts them the same way.”

In a 2007 Proceedings of the National Academy of Sciences paper, Freeman and his associates A.J. Aranyosi and lead author Roozbeh Ghaffari showed that the tiny, gel-like tectorial membrane, located in the inner ear, coordinates with the basilar membrane to fine-tune the ear’s ability to distinguish sounds. Last month, they reported in Nature Communications that a mutation in one of the proteins of the tectorial membrane interferes with that process.

It has been known for more than 50 years that sound waves entering the ear travel along the spiral-shaped, fluid-filled cochlea in the inner ear. Hair cells lining the ribbon-like basilar membrane in the cochlea translate those sound waves into electrical impulses that are sent to the brain. As sound waves travel along the basilar membrane, they “break” at different points, much as ocean waves break on the beach. The break location helps the ear to sort sounds of different frequencies.
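A standard way to make this place coding concrete is the Greenwood function, a textbook empirical map between position along the human cochlea and the frequency detected there. It is general background, not something taken from the MIT papers.

def greenwood_frequency(x):
    # Characteristic frequency (Hz) at relative position x along the
    # human cochlea, with x = 0 at the apex and x = 1 at the base.
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_frequency(x):8.0f} Hz")

The map spans roughly 20 Hz at the apex to about 20,000 Hz at the base, matching the range of human hearing.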

Until recently, the role of the tectorial membrane in this process was not well understood.

In their 2007 paper, Freeman and Ghaffari showed that the tectorial membrane carries waves that move from side to side, while up-and-down waves travel along the basilar membrane. Together, the two membranes can work to activate enough hair cells so that individual sounds are detected, but not so many that sounds can’t be distinguished from each other.

Made of a special gel-like material not found elsewhere in the body, the entire tectorial membrane could fit inside a one-inch segment of human hair. The tectorial membrane consists of three specialized proteins, making those proteins ideal targets for genetic studies of hearing.

One of those proteins is called beta-tectorin (encoded by the TectB gene), which was the focus of Ghaffari, Aranyosi and Freeman’s recent Nature Communications paper. The researchers collaborated with biologist Guy Richardson of the University of Sussex and found that in mice lacking the TectB gene, sound waves did not travel as fast or as far along the tectorial membrane as waves in normal tectorial membranes. When the tectorial membrane is not functioning properly in these mice, sounds stimulate a smaller number of hair cells, making the ear less sensitive and overly selective.

Until the recent MIT studies on the tectorial membrane, researchers trying to come up with a model to explain the membrane’s role didn’t have a good way to test their theories, says Karl Grosh, professor of mechanical and biomedical engineering at the University of Michigan. “This is a very nice piece of work that starts to bring together the modeling and experimental results in a way that is very satisfying,” he says.

Mammalian hearing systems are extremely similar across species, which leads the MIT researchers to believe that their findings in mice are applicable to human hearing as well.

Most hearing aids consist of a microphone that receives sound waves from the environment, and a loudspeaker that amplifies them and sends them into the middle and inner ear. Over the decades, refinements have been made to the basic design, but no one has been able to overcome a fundamental problem: Instead of selectively amplifying one person’s voice, all sounds are amplified, including background noise.

Freeman believes that a new model incorporating the interactions between traveling waves on the tectorial and basilar membranes could improve our understanding of hearing mechanisms and lead to hearing aids with enhanced signal processing. Such a device could help tune in to a specific range of frequencies, for example those of the voice you want to listen to; only those sounds would be amplified.
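As a toy illustration of that selective-amplification idea, here is a hypothetical Python sketch, not Freeman's model or any real hearing-aid algorithm: it boosts only a chosen frequency band of the incoming signal while leaving everything else at its original level.

import numpy as np
from scipy.signal import butter, sosfilt

def selective_gain(signal, fs, lo_hz, hi_hz, gain=4.0):
    # Amplify only the lo_hz..hi_hz band of `signal` sampled at rate `fs`;
    # frequencies outside the band pass through unchanged.
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    voice_band = sosfilt(sos, signal)
    return signal + (gain - 1.0) * voice_band

fs = 16000                  # sample rate, Hz
t = np.arange(fs) / fs      # one second of audio
# A stand-in mixture: a low "voice" tone plus a higher "noise" tone.
mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
out = selective_gain(mix, fs, lo_hz=100, hi_hz=1000)  # boost the voice band only

A real device would have to pick that band adaptively, which is exactly where a model of how the two membranes jointly sharpen frequency selectivity could pay off.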

Freeman, who has hearing loss from working in a noisy factory as a teenager and from side effects of a medicine he was given for rheumatic fever, worked on hearing-aid designs 25 years ago. However, he was discouraged by the fact that most new ideas for hearing-aid design did not offer significant improvements. He decided to conduct basic research in this area, hoping that understanding the ear better would naturally lead to new approaches to hearing-aid design.

“We’re really trying to figure out the algorithm by which sounds are sorted, because if we could figure that out, we could put it into a machine,” says Freeman, who is a member of MIT’s Research Laboratory of Electronics and the Harvard-MIT Division of Health Sciences and Technology. His group’s recent tectorial membrane research was funded by the National Institutes of Health.

Next, the researchers are continuing their studies of tectorial membrane protein mutations to see if tectorial membrane traveling waves play similar roles in other genetic disorders of hearing.

(Photo: MIT)

Massachusetts Institute of Technology
