Monday, November 30, 2009

AFTER MASTODONS AND MAMMOTHS, A TRANSFORMED LANDSCAPE

Roughly 15,000 years ago, at the end of the last ice age, North America's vast assemblage of large animals — including such iconic creatures as mammoths, mastodons, camels, horses, ground sloths and giant beavers — began its precipitous slide to extinction.

And when their populations crashed, emptying a land whose diversity of large animals equaled or surpassed that of Africa's wildlife-rich Serengeti plains, then or now, an entirely novel ecosystem emerged as broadleaved trees once kept in check by huge numbers of big herbivores claimed the landscape. Soon after, the accumulation of woody debris sparked a dramatic increase in the prevalence of wildfire, another key shaper of landscapes.

This new picture of the ecological upheaval of the North American landscape just after the retreat of the ice sheets is detailed in a study published in the journal Science. The study, led by researchers from the University of Wisconsin-Madison, uses fossil pollen, charcoal and dung fungus spores to paint a picture of a post-ice age terrain different from anything in the world today.

The work is important because it is "the clearest evidence to date that the extinction of a broad guild of animals had effects on other parts of these ancient ecosystems," says John W. Williams, a UW-Madison professor of geography and an expert on ancient climates and ecosystems who is the study's senior author. What's more, he says, the detailing of changes on the ice age landscape following the crash of keystone animal populations can provide critical insight into the broader effects of animals disappearing from modern landscapes.

The study was led by Jacquelyn Gill, a graduate student in Williams' lab. Other co-authors are Stephen T. Jackson of the University of Wyoming, Katherine B. Lininger of UW-Madison and Guy S. Robinson of Fordham University.

The new work, says Gill, informs but does not resolve the debate over what caused the extinction of 34 genera, or groups, of large animals, including icons of the ice age such as elephant-like mastodons and ground sloths the size of sport utility vehicles. "Our data are not consistent with a rapid, 'blitzkrieg' overkill of large animals by humans," notes Gill; nor do the data support the idea that the decline was driven by a loss of habitat.

However, the work does seem to rule out a recent hypothesis that a meteor or comet impact some 12.9 thousand years ago was responsible for the extinction of ice age North America's signature large animals: by that date, the decline was already well under way.

The study was conducted using lake sediment cores obtained from Appleman Lake in Indiana, as well as data obtained previously by Robinson from sites in New York. Gill, Williams and their colleagues used pollen, charcoal and the spores of a dung fungus that requires passage through a mammalian intestinal tract to complete its life cycle to reconstruct a picture of sweeping change to the ice age environment. The decline of North America's signature ice age mammals was a gradual process, the Wisconsin researchers explain, taking about 1,000 years, and it is preserved in the sediments as a steep drop in fungal spores: "About 13.8 thousand years ago, the number of spores drops dramatically. They're barely in the record anymore," Gill explains.

Like detectives reconstructing a crime scene, the group used the dung fungus spores to establish a precise sequence of events, showing that the crash of ice age megafauna began before plant communities started to change and before fires appeared widely on the landscape.

"The data suggest that the megafaunal decline and extinction began at the Appleman Lake site sometime between 14.8 thousand and 13.7 thousand years ago and preceded major shifts in plant community composition and the frequency of fire," notes Williams.

Absent the large herbivores that kept them in check, such tree species as black ash, elm and ironwood began to colonize a landscape dominated by coniferous trees such as spruce and larch. The resulting mix of boreal and temperate trees formed a plant community unlike any observed today.

"As soon as herbivores drop off the landscape, we see different plant communities," Gill explains, noting that mastodon herds and other large animals occupied a parkland like landscape, typified by large open spaces and patches of forest and swamp. "Our data suggest that these trees would have been abundant sooner if the herbivores hadn't been there to eat them."

While both the extinction of North America's ice age megafauna and the sweeping change to the landscape are well-documented phenomena, there was, until now, no detailed chronology of the events that remade the continent's biological communities beginning about 14.8 thousand years ago. Establishing that the disappearance of mammoths, giant beavers, ground sloths and other large animals preceded the massive change in plant communities promises scientists critical new insight into the dynamics of extinction and its pervasive influence on a given landscape.

(Photo: Barry Roal Carlsen)

University of Wisconsin-Madison

RICH ORE DEPOSITS LINKED TO ANCIENT ATMOSPHERE

Much of our planet's mineral wealth was deposited billions of years ago when Earth's chemical cycles were different from today's. Using geochemical clues from rocks nearly 3 billion years old, a group of scientists including Andrey Bekker and Doug Rumble from the Carnegie Institution have made the surprising discovery that the creation of economically important nickel ore deposits was linked to sulfur in the ancient oxygen-poor atmosphere.

These ancient ores -- specifically iron-nickel sulfide deposits -- yield 10% of the world's annual nickel production. They formed for the most part between two and three billion years ago when hot magmas erupted on the ocean floor. Yet scientists have puzzled over the origin of the rich deposits. The ore minerals require sulfur to form, but neither seawater nor the magmas hosting the ores were thought to be rich enough in sulfur for this to happen.

"These nickel deposits have sulfur in them arising from an atmospheric cycle in ancient times. The isotopic signal is of an anoxic atmosphere," says Rumble of Carnegie's Geophysical Laboratory, a co-author of the paper appearing in the November 20 issue of Science.

Rumble, with lead author Andrey Bekker (formerly Carnegie Fellow and now at the University of Manitoba), and four other colleagues used advanced geochemical techniques to analyze rock samples from major ore deposits in Australia and Canada. They found that to help produce the ancient deposits, sulfur atoms made a complicated journey from volcanic eruptions, to the atmosphere, to seawater, to hot springs on the ocean floor, and finally to molten, ore-producing magmas.

The key evidence came from a form of sulfur known as sulfur-33, an isotope in which atoms contain one more neutron than "normal" sulfur (sulfur-32). Both isotopes act the same in most chemical reactions, but reactions in the atmosphere in which sulfur dioxide gas molecules are split by ultraviolet (UV) light cause the isotopes to be sorted, or "fractionated," into different reaction products, creating isotopic anomalies.

"If there is too much oxygen in the atmosphere then not enough UV gets through and these reactions can't happen," says Rumble. "So if you find these sulfur isotope anomalies in rocks of a certain age, you have information about the oxygen level in the atmosphere."

By linking the rich nickel ores with the ancient atmosphere, the anomalies in the rock samples also answer the long-standing question regarding the source of the sulfur in the ore minerals. Knowing this will help geologists track down new ore deposits, says Rumble, because the presence of sulfur and other chemical factors determine whether or not a deposit will form.

"Ore deposits are a tiny fraction of a percent of the Earth's surface, yet economically they are incredibly important. Modern society cannot exist without specialized metals and alloys," he says. "But it's all a matter of local geological circumstance whether you have a bonanza -- or a bust."

Carnegie Institution

'HOBBITS' ARE A NEW HUMAN SPECIES -- ACCORDING TO THE STATISTICAL ANALYSIS OF FOSSILS

In 2003 Australian and Indonesian scientists discovered small-bodied, small-brained hominin (human-like) fossils on the remote island of Flores in the Indonesian archipelago. This discovery of a new human species, called Homo floresiensis, has spawned much debate, with some researchers claiming that the small creatures are really modern humans whose tiny head and brain are the result of a medical condition called microcephaly.

Researchers William Jungers, Ph.D., and Karen Baab, Ph.D., studied the skeletal remains of a female (LB1), nicknamed "Little Lady of Flores" or "Flo," to confirm the evolutionary path of the hobbit species. The specimen was remarkably complete and included the skull, jaw, arms, legs, hands and feet, providing researchers with integrated information from a single individual.

The cranial capacity of LB1 was just over 400 cubic centimeters, making it similar in size to the brain of a chimpanzee or of the bipedal "ape-men" of East and South Africa. The skull and jawbone features are much more primitive looking than those of any normal modern human. Statistical analysis of skull shapes shows modern humans clustering together in one group, microcephalic humans in another and the hobbit, along with ancient hominins, in a third.

Due to the relative completeness of the fossil remains for LB1, the scientists were able to reconstruct a reliable body design that was unlike any modern human's. The thigh bone and shin bone of LB1 are much shorter than those of modern humans, including Central African pygmies, South African KhoeSan (formerly known as 'bushmen') and "negrito" pygmies from the Andaman Islands and the Philippines. Some researchers speculate this could represent an evolutionary reversal correlated with "island dwarfing." "It is difficult to believe an evolutionary change would lead to less economical movement," said Dr. Jungers. "It makes little sense that this species re-evolved shorter thighs and legs because long hind limbs improve bipedal walking. We suspect that these are primitive retentions instead."

Further analysis of the remains, using a regression equation developed by Dr. Jungers, indicates that LB1 was approximately 106 cm tall (3 feet, 6 inches)—far shorter than modern pygmies, whose adults grow to less than 150 cm (4 feet, 11 inches). A scatterplot depicts LB1 far outside the range of Southeast Asian and African pygmies in both absolute height and body mass indices. "Attempts to dismiss the hobbits as pathological people have failed repeatedly because the medical diagnoses of dwarfing syndromes and microcephaly bear no resemblance to the unique anatomy of Homo floresiensis," noted Dr. Baab.
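The release names the regression equation without giving it, so the following is only a sketch of the general approach: fit stature against a limb-bone length in a comparative sample, then apply the fitted line to the fossil. Every number below is invented for illustration, not Jungers' published data.

```python
import numpy as np

# Hypothetical comparative sample of small-bodied modern humans:
# femur lengths (cm) paired with statures (cm). Illustrative values only.
femur = np.array([33.0, 34.5, 36.0, 37.5, 39.0, 40.5])
stature = np.array([128.0, 133.0, 138.0, 143.0, 148.0, 153.0])

# Fit stature = a + b * femur by ordinary least squares.
b, a = np.polyfit(femur, stature, 1)

# Apply the fitted equation to an LB1-scale femur length (assumed value).
lb1_femur = 28.0  # cm
print(f"predicted stature: {a + b * lb1_femur:.0f} cm")
```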

Wiley-Blackwell

Saturday, November 28, 2009

WAKING UP MEMORIES WHILE YOU SLEEP

They were in a deep sleep, yet sounds, such as a teakettle whistle and a cat's meow, somehow penetrated their slumber.

The 25 sounds presented during the nap were reminders of earlier spatial learning, though the Northwestern University research participants were unaware of the sounds as they slept.

Yet, upon waking, memory tests showed that spatial memories had changed. The participants were more accurate in dragging an object to the correct location on a computer screen for the 25 images whose corresponding sounds were presented during sleep (such as a muffled explosion for a photo of dynamite) than for another 25 matched objects.

"The research strongly suggests that we don't shut down our minds during deep sleep," said John Rudoy, lead author of the study and a neuroscience Ph.D. student at Northwestern. "Rather this is an important time for consolidating memories."

Most provocatively, the research showed that sounds can penetrate deep sleep and be used to guide rehearsal of specific information, pushing people's consolidation of memories in one direction over another.

"While asleep, people might process anything that happened during the day -- what they ate for breakfast, television shows they watched, anything," said Ken Paller, senior author of the study and professor of psychology in the Weinberg College of Arts and Sciences at Northwestern. "But we decided which memories our volunteers would activate, guiding them to rehearse some of the locations they had learned an hour earlier."

The Northwestern study adds a new twist to a growing body of research showing that memories are processed during sleep. It substantiates the literature showing that the brain is very busy during sleep, going over recently acquired information and integrating it with other knowledge in a mysterious consolidation process that sustains our memory abilities when awake.

"Strengthening Individual Memories by Reactivating Them During Sleep" will be published in the journal Science. Besides Paller and Rudoy, the paper's co-authors are Northwestern colleagues Joel L. Voss and Carmen E. Westerberg.

Whether or not memories are processed during sleep has been a subject of controversy, with most of the research on the topic focusing on REM, a normal stage of sleep characterized by rapid movement of the eyes. Vividly recalled dreams mostly occur during REM sleep. Recent research, including the new Northwestern study, however, focuses on memory processing during deep sleep, rather than during REM sleep.

"We are beginning to see that deep sleep actually is a key time for memory processing," Paller said.

Prior to their naps, the 12 study participants were taught to associate each of 50 images with a random location on a computer screen. Each object, such as a shattering wine glass, was paired with a corresponding sound, such as that of breaking glass, delivered over a speaker.

Locations were learned by repeating trials until study participants got quite good at placing all the objects in their assigned places. Approximately 45 minutes after learning, each participant reclined in a quiet, darkened room. Electrodes attached to their scalp measured their brain activity, indicating when they were asleep. Sleep sounds were presented without waking anyone up. When asked later, none of the participants thought sounds had been played during the naps. Yet, memory testing showed that placements of the objects were more accurate for those cued by their associated sounds during sleep than for those not cued.
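The release does not spell out the statistics, but the natural analysis for this design is a within-subject comparison of placement error for sound-cued versus uncued objects. A hedged sketch with simulated data (not the study's numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Mean placement error (screen pixels) per participant, averaged over the
# 25 sound-cued and the 25 uncued objects. Simulated for illustration.
uncued_error = rng.normal(100, 15, size=12)
cued_error = uncued_error - rng.normal(8, 5, size=12)  # cueing reduces error

# Paired (within-subject) test: did cueing during sleep improve accuracy?
t, p = stats.ttest_rel(cued_error, uncued_error)
print(f"t = {t:.2f}, p = {p:.4f}")
```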

"Our little experiment opens the door to many questions," Paller said.

Would high-school students do better on SAT tests if daytime studying was supplemented with sleep sounds at night? Would students learning foreign vocabulary words or other facts do better in the morning after listening to related information as they slept? Infants spend an inordinate amount of time sleeping, while their brains work over their recent experiences. Could an infant learn a first language more quickly if stimulation occurred during naps or overnight? What about an actor trying to learn lines or a law student trying to memorize numerous details of case law? Could playing sounds related to such learning improve the recall of relevant facts the next day?

The study opens avenues for discovering boundaries of what can happen to memories during sleep, said co-author Voss. "Can memories be distorted as well as strengthened? Can people be guided to forget unwanted memories?"

Much work remains to determine whether the results of the new research translate to these and other contexts, Paller emphasized. "We don't know the answers at this point," he said, "but more experiments about memory processing during sleep are certain to follow."

Northwestern University

STUDY RAISES CONCERNS ABOUT OUTDOOR SECOND-HAND SMOKE

Indoor smoking bans have forced smokers at bars and restaurants onto outdoor patios, but a new University of Georgia study in collaboration with the Centers for Disease Control and Prevention suggests that these outdoor smoking areas might be creating a new health hazard.

The study, thought to be the first to assess levels of a nicotine byproduct known as cotinine in nonsmokers exposed to second-hand smoke outdoors, found levels up to 162 percent greater than in the control group. The results appear in the November issue of the Journal of Occupational and Environmental Hygiene.

"Indoor smoking bans have helped to create more of these outdoor environments where people are exposed to secondhand smoke," said study co-author Luke Naeher, associate professor in the UGA College of Public Health. "We know from our previous study that there are measurable airborne levels of secondhand smoke in these environments, and we know from this study that we can measure internal exposure.

"Secondhand smoke contains several known carcinogens and the current thinking is that there is no safe level of exposure," he added. "So the levels that we are seeing are a potential public health issue."

Athens-Clarke County, Ga., enacted an indoor smoking ban in 2005, providing Naeher and his colleagues an ideal environment for their study. The team recruited 20 non-smoking adults and placed them in one of three environments: outside bars, outside restaurants and, for the control group, outside the UGA main library. Immediately before and after the six-hour study period, the volunteers gave a saliva sample that was tested for levels of cotinine, a byproduct of nicotine and a commonly used marker of tobacco exposure.

The team found an average increase in cotinine of 162 percent for the volunteers stationed at outdoor seating and standing areas at bars, a 102 percent increase for those outside of restaurants and a 16 percent increase for the control group near the library.

Naeher acknowledges that a six-hour exposure is greater than what an average patron would experience but said that employees can be exposed for even longer periods.

"Anyone who works in that environment—waitresses, waiters or bouncers—may be there for up to six hours or longer," Naeher said. "Across the country, a large number of people are occupationally exposed to second-hand smoke in this way."

Studies that measured health outcomes following indoor smoking bans have credited the bans with lowering rates of heart attacks and respiratory illness, but Naeher said that the health impacts of outdoor second-hand smoke are still unknown.

In Naeher's study, the volunteers at the bar setting saw their cotinine levels increase from an average pre-exposure level of 0.069 ng/ml (nanograms per milliliter) to an average post-exposure level of 0.182 ng/ml. The maximum value observed, however, was 0.959 ng/ml. To put that number into context, a widely cited study determined that an average cotinine level of 0.4 ng/ml increases lung cancer deaths by 1 for every 1,000 people and heart disease deaths by 1 for every 100 people.
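The quoted figures are easy to reconcile with simple arithmetic: the change in group averages corresponds to roughly a 164 percent rise, in line with the reported 162 percent average (which was presumably computed per participant before averaging):

```python
pre, post = 0.069, 0.182  # ng/ml, group-average cotinine levels from the study
pct_increase = (post - pre) / pre * 100
print(f"{pct_increase:.0f}% increase")  # ~164%, close to the reported 162%
```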

Still, the researchers caution that it's too early to draw policy conclusions from their findings. Cotinine is a marker of exposure to tobacco, Naeher said, but is not a carcinogen. The team is currently planning a study that would measure levels of a molecule known as NNAL, which is a marker of tobacco exposure and a known carcinogen, in people exposed to second-hand smoke outdoors.

"Our study suggests that there is reason to be concerned about second-hand smoke levels outdoors," said study co-author Gideon St. Helen, who is pursuing his Ph.D. through the university's Interdisciplinary Toxicology Program, "and our findings are an incentive for us to do further studies to see what the effects of those levels are."

University of Georgia

STUDY: SEA STARS BULK UP TO BEAT THE HEAT

A new study finds that a species of sea star stays cool using a strategy never before seen in the animal kingdom. The sea stars soak up cold sea water into their bodies during high tide as a buffer against potentially damaging temperatures brought about by direct sunlight at low tide.

"Sea stars were assumed to be at the mercy of the sun during low tide," said the study's lead author, Sylvain Pincebourde of François Rabelais University in Tours, France. "This work shows that some sea stars have an unexpected back-up strategy."

The research is published in the December issue of The American Naturalist.

Sea stars need to endure rapid changes in temperature. During high tide, they are fully submerged in cool sea water. But when the tide recedes, the stars are often left on rocky shorelines, baking in the sun.

Clearly the stars have some way of beating the heat, but scientists were unsure what it was. Pincebourde and his team thought it might have something to do with the fluid-filled cavities found in the arms of sea stars, so they set up an experiment to test the idea.

The researchers placed sea stars in aquariums and varied the water level to simulate tidal patterns. Heat lamps were used to control temperature, with some stars experiencing hotter temperatures than others. The researchers found that stars exposed to higher temperatures at low tide had higher body mass after the high tide that followed. Since the stars were not allowed to eat, the increased mass must be from soaking up water.

"This reservoir of cool water keeps the sea star from overheating when the tide recedes again the next day, a process called 'thermal inertia,'" Pincebourde said.

What appears to be happening, the researchers say, is that a hot low tide serves as a cue telling the star to soak up more water during the next high tide. And the amount of water the stars can hold is remarkable.

"It would be as if humans were able to look at a weather forecast, decide it was going to be hot tomorrow, and then in preparation suck up 15 or more pounds of water into our bodies," said co-author Brian Helmuth of the University of South Carolina in Columbia.

The researchers are concerned, however, that climate change may put this novel cooling strategy in peril.

"This strategy only works when the sea water is colder than the air," said co-author Eric Sanford of the University of California, Davis. "Ocean warming might therefore break down this buffering mechanism, making this sea star susceptible to global warming. There are likely limits to how much this mechanism can buffer this animal against global change."

University of Chicago

Friday, November 27, 2009

BIGGER NOT NECESSARILY BETTER, WHEN IT COMES TO BRAINS

Tiny insects could be as intelligent as much bigger animals, despite having brains the size of a pinhead, say scientists at Queen Mary, University of London.

"Animals with bigger brains are not necessarily more intelligent," according to Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary's Research Centre for Psychology and University of Cambridge colleague, Jeremy Niven. This begs the important question: what are they for?

Research repeatedly shows that insects are capable of intelligent behaviours that scientists previously thought were unique to larger animals. Honeybees, for example, can count, categorise similar objects such as dogs or human faces, understand 'same' and 'different', and differentiate between shapes that are symmetrical and asymmetrical.

"We know that body size is the single best way to predict an animal's brain size," explains Chittka, writing in the journal Current Biology, today. "However, contrary to popular belief, we can't say that brain size predicts their capacity for intelligent behaviour."

Differences in brain size between animals are extreme: a whale’s brain can weigh up to 9 kg (with over 200 billion nerve cells), and human brains vary between 1.25 kg and 1.45 kg (with an estimated 85 billion nerve cells). A honeybee’s brain weighs only 1 milligram and contains fewer than a million nerve cells.
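Putting the article's own round figures side by side shows how differently neurons are packed as brains scale up, one more reason raw mass is a poor guide to capability:

```python
# Neuron counts and brain masses as quoted in the article (round figures).
brains = {
    "whale":    (200e9, 9000.0),  # neurons, grams (brain of up to ~9 kg)
    "human":    (85e9,  1350.0),  # mid-range of the 1.25-1.45 kg span
    "honeybee": (1e6,   0.001),   # just under a million neurons in 1 milligram
}

for name, (neurons, grams) in brains.items():
    print(f"{name:9s} ~{neurons / grams:,.0f} neurons per gram")
```

By this crude metric the honeybee's brain is by far the most densely packed of the three.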

While some increases in brain size do affect an animal's capability for intelligent behaviour, many size differences only exist in a specific brain region. This is often seen in animals with highly developed senses (like sight or hearing) or an ability to make very precise movements. The size increase allows the brain to function in greater detail, finer resolution, higher sensitivity or greater precision: in other words, more of the same.

Research suggests that bigger animals may need bigger brains simply because there is more to control: for example, they need to move bigger muscles and therefore need more and bigger nerves to move them.

Chittka says: "In bigger brains we often don't find more complexity, just an endless repetition of the same neural circuits over and over. This might add detail to remembered images or sounds, but not add any degree of complexity. To use a computer analogy, bigger brains might in many cases be bigger hard drives, not necessarily better processors."

This must mean that much 'advanced' thinking can actually be done with very limited neuron numbers. Computer modelling shows that even consciousness can be generated with very small neural circuits, which could in theory easily fit into an insect brain.

In fact, the models suggest that counting could be achieved with only a few hundred nerve cells and only a few thousand could be enough to generate consciousness. Engineers hope that this kind of research will lead to smarter computing with the ability to recognise human facial expressions and emotions.

(Photo: Queen Mary, UL)

Queen Mary, University of London

HEART DISEASE FOUND IN EGYPTIAN MUMMIES

Hardening of the arteries has been detected in Egyptian mummies, some as old as 3,500 years, suggesting that the factors causing heart attack and stroke are not only modern ones; they afflicted ancient people, too.

"Atherosclerosis is ubiquitous among modern day humans and, despite differences in ancient and modern lifestyles, we found that it was rather common in ancient Egyptians of high socioeconomic status living as much as three millennia ago," says UC Irvine clinical professor of cardiology Dr. Gregory Thomas, a co-principal investigator on the study. "The findings suggest that we may have to look beyond modern risk factors to fully understand the disease."

The nameplate of the Pharaoh Merenptah (c. 1213-1203 BC) in the Museum of Egyptian Antiquities reads that, when he died at approximately age 60, he was afflicted with atherosclerosis, arthritis, and dental decay. Intrigued by the possibility that atherosclerosis was widespread among ancient Egyptians, Thomas and a team of U.S. and Egyptian cardiologists, joined by experts in Egyptology and preservation, selected 20 mummies on display and in the basement of the Museum of Egyptian Antiquities for scanning on a Siemens six-slice CT scanner during the week of Feb. 8, 2009.

The mummies underwent whole-body scanning with special attention to the cardiovascular system. The researchers found that 9 of the 16 mummies who had identifiable arteries or hearts left in their bodies after the mummification process had calcification either clearly seen in the wall of the artery or in the path where the artery should have been. Some mummies had calcification in up to six different arteries.

Using skeletal analysis, the Egyptology and preservationist team was able to estimate the age at death of all the mummies and to determine the names and occupations of the majority. Of the mummies who had died when they were older than 45, 7 of 8 had calcification and thus atherosclerosis, while only 2 of 8 of those dying at an earlier age had calcification. Atherosclerosis did not spare women; vascular calcifications were observed in both male and female mummies.
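The release reports no statistical test, but a contrast like 7 of 8 older mummies calcified versus 2 of 8 younger is the kind of small-sample comparison usually checked with Fisher's exact test. Purely as an illustration:

```python
from scipy.stats import fisher_exact

# Rows: died over age 45 / died at 45 or younger.
# Columns: calcification present / absent. Counts from the release.
table = [[7, 1],
         [2, 6]]

odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.0f}, two-sided p = {p:.3f}")  # p ~ 0.04
```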

The most ancient Egyptian afflicted with atherosclerosis was Lady Rai, who lived to an estimated age of 30 to 40 years around 1530 BC and had been the nursemaid to Queen Ahmose Nefertiri. To put this in context, Lady Rai lived about 300 years prior to the time of Moses and 200 years prior to King Tutankhamun (Tut).

In those mummies whose identities could be determined, all were of high socioeconomic status, generally serving in the court of the Pharaoh or as priests or priestesses. While the diet of any one mummy could not be determined, eating meat in the form of cattle, ducks and geese was not uncommon during these times.

"While we do not know whether atherosclerosis caused the demise of any of the mummies in the study, we can confirm that the disease was present in many," Thomas says.

(Photo: Dr. Michael Miyamoto / UC San Diego)

UC Irvine

RIGHT-HANDED CHIMPANZEES PROVIDE CLUES TO THE ORIGIN OF HUMAN LANGUAGE

Most of the linguistic functions in humans are controlled by the left cerebral hemisphere. A study of captive chimpanzees at the Yerkes National Primate Research Center (Atlanta, Georgia), reported in the January 2010 issue of Elsevier's Cortex, suggests that this "hemispheric lateralization" for language may have its evolutionary roots in the gestural communication of our common ancestors.

A large majority of the chimpanzees in the study showed a significant bias towards right-handed gestures when communicating, which may reflect a similar dominance of the left hemisphere for communication in chimpanzees as that seen for language functions in humans.

A team of researchers, supervised by Prof. William D. Hopkins of Agnes Scott College (Decatur, Georgia), studied hand-use in 70 captive chimpanzees over a period of 10 months, recording a variety of communicative gestures specific to chimpanzees. These included 'arm threat', 'extend arm' or 'hand-slap' gestures produced in different social contexts, such as attention-getting interactions, shared excitation, threat, aggression, greeting, reconciliation or invitations for grooming or for play. The gestures were directed at the human observers, as well as toward other chimpanzees.

"The degree of predominance of the right hand for gestures is one of the most pronounced we have ever found in chimpanzees in comparison to other non-communicative manual actions. We already found such manual biases in this species for pointing gestures exclusively directed to humans. These additional data clearly showed that right-handedness for gestures is not specifically associated to interactions with humans, but generalizes to intraspecific communication", notes Prof. Hopkins.

The French co-authors, Dr. Adrien Meguerditchian and Prof. Jacques Vauclair, from the Aix-Marseille University (Aix-en-Provence, France), also point out that "this finding provides additional support to the idea that speech evolved initially from a gestural communicative system in our ancestors. Moreover, gestural communication in apes shares some key features with human language, such as intentionality, referential properties and flexibility of learning and use".

Elsevier's Cortex

Thursday, November 26, 2009

“DROPOUTS” PINPOINT EARLIEST GALAXIES

Astronomers conducting the broadest survey to date of galaxies from about 800 million years after the Big Bang have found 22 early galaxies and confirmed the age of one by its characteristic hydrogen signature at 787 million years post Big Bang. The finding is the first age-confirmation of a so-called dropout galaxy at that distant time and pinpoints when the era called the reionization epoch likely began. The research will be published in a December issue of the Astrophysical Journal.

With recent technological advancements, such as the Wide Field Camera 3 on the Hubble Space Telescope, there has been an explosion of research into the reionization period, the farthest back in time that astronomers can observe. The Big Bang, 13.7 billion years ago, created a hot, murky universe. Some 400,000 years later, temperatures cooled, electrons and protons joined to form neutral hydrogen, and the murk cleared. Some time before 1 billion years after the Big Bang, neutral hydrogen began to collapse into the stars of the first galaxies, which radiated energy and returned the surrounding hydrogen to an ionized state. Although the universe never returned to the thick plasma soup of the period just after the Big Bang, this star formation marked the start of the reionization epoch. Astronomers know that this era ended about 1 billion years after the Big Bang, but when it began has eluded them and intrigued researchers like lead author Masami Ouchi of the Carnegie Observatories.

The U.S. and Japanese team led by Ouchi used a technique for finding these extremely distant galaxies. “We look for ‘dropout’ galaxies,” explained Ouchi. “We use progressively redder filters that reveal increasing wavelengths of light and watch which galaxies disappear from, or ‘drop out’ of, images made using those filters. Older, more distant galaxies drop out of progressively redder filters, and the specific wavelengths can tell us the galaxies’ distance and age. What makes this study different is that we surveyed an area that is over 100 times larger than previous ones and, as a result, had a larger sample of early galaxies (22) than past surveys. Plus, we were able to confirm one galaxy’s age,” he continued. “Since all the galaxies were found using the same dropout technique, they are likely to be the same age.”
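The reason ever-redder filters pick out ever-earlier galaxies is cosmological redshift: light blueward of the hydrogen Lyman-alpha line (rest wavelength 121.6 nm) is absorbed by intervening gas, and the resulting break is stretched redward by a factor of (1 + z). A sketch, assuming the confirmed galaxy sits near redshift 7, which corresponds to roughly 787 million years after the Big Bang in standard cosmology:

```python
LYMAN_ALPHA_REST_NM = 121.6  # rest-frame wavelength of hydrogen Lyman-alpha

def observed_wavelength(rest_nm, z):
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest_nm * (1 + z)

for z in (3, 5, 7):
    print(f"z = {z}: Lyman-alpha break observed near "
          f"{observed_wavelength(LYMAN_ALPHA_REST_NM, z):.0f} nm")
```

At z near 7 the break sits close to 970 nm, at the far red edge of what optical detectors can see, which is why the custom super-red filter described below was needed.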

Ouchi’s team was able to conduct such a large survey because they used a custom-made, super-red filter and other unique technological advancements in red sensitivity on the wide-field camera of the 8.3-meter Subaru Telescope. They made their observations from 2006 to 2009 in the Subaru Deep Field and Great Observatories Origins Deep Survey North field. They then compared their observations with data gathered in other studies.

Astronomers have wondered whether the universe underwent reionization instantaneously or gradually over time, but more importantly, they have tried to isolate when the universe began reionization. Galaxy density and brightness measurements are key to calculating star-formation rates, which tell a lot about what happened when. The astronomers looked at star-formation rates and the rate at which hydrogen was ionized.

Using data from their study and others, they determined that star-formation rates were dramatically lower from 800 million years to about one billion years after the Big Bang than thereafter. Accordingly, they calculated that the rate of ionization would have been very slow during this early time because of the low star-formation rate.

“We were really surprised that the rate of ionization seems so low, which would constitute a contradiction with the claim of NASA’s WMAP satellite. It concluded that reionization started no later than 600 million years after the Big Bang,” remarked Ouchi. “We think this riddle might be explained by more efficient ionizing photon production rates in early galaxies. The formation of massive stars may have been much more vigorous then than in today’s galaxies. Fewer, massive stars produce more ionizing photons than many smaller stars,” he explained.

Carnegie Institution of Washington

DARTMOUTH PROFESSOR FINDS THAT ICONIC OSWALD PHOTO WAS NOT FAKED

Dartmouth Computer Scientist Hany Farid has new evidence regarding a photograph of accused John F. Kennedy assassin Lee Harvey Oswald. Farid, a pioneer in the field of digital forensics, digitally analyzed an iconic image of Oswald pictured in a backyard setting holding a rifle in one hand and Marxist newspapers in the other. Oswald and others claimed that the incriminating photo was a fake, noting the seemingly inconsistent lighting and shadows. After analyzing the photo with modern-day forensic tools, Farid says the photo almost certainly was not altered.

“If we had found evidence of photo tampering, then it would have suggested a broader plot to kill JFK,” said Farid, who is also the director of the Neukom Institute for Computational Science at Dartmouth. “Those who believe that there was a broader conspiracy can no longer point to this photo as possible evidence.” Farid added that federal officials long ago said that this image had not been tampered with, but a surprising number of skeptics still assert that there was a conspiracy.

The study will appear in a forthcoming issue of the journal Perception.

Farid and his team have developed a number of digital forensic tools used to determine whether digital photos have been manipulated, and his research is often used by law enforcement officials and in legal proceedings. The tools can measure statistical inconsistencies in the underlying image pixels, improbable lighting and shadow, physically impossible perspective distortion, and other artifacts introduced by photo manipulators. The play of light and shadow was fundamental in the Oswald photo analysis.

“The human brain, while remarkable in many aspects, also has its weaknesses,” says Farid. “The visual system can be quite inept at making judgments regarding 3-D geometry, lighting, and shadows.”

At a casual glance, the lighting and shadows in the Oswald photo appear to many to be incongruous with the outdoor lighting. To determine if this was the case, Farid constructed a 3-D model of Oswald’s head and portions of the backyard scene, from which he was able to determine that a single light source, the sun, could explain all of the shadows in the photo.
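One standard digital-forensics check, offered here as a generic sketch rather than Farid's exact method, treats surfaces as roughly Lambertian: measured intensity is then the dot product of the surface normal and the light direction, so with a 3-D model supplying normals, the light direction can be recovered by least squares and compared across objects and shadows.

```python
import numpy as np

# Surface normals at several points, taken from a 3-D model of the scene,
# plus the image intensity measured at each point. Values invented.
normals = np.array([
    [0.0, 0.0, 1.0],
    [0.6, 0.0, 0.8],
    [0.0, 0.6, 0.8],
    [-0.6, 0.0, 0.8],
])
intensity = np.array([0.9, 0.5, 0.7, 0.4])  # observed shading at those points

# Lambertian model: intensity ~ normals @ light (ambient term omitted).
# Solve for the light direction in a least-squares sense.
light, *_ = np.linalg.lstsq(normals, intensity, rcond=None)
light /= np.linalg.norm(light)
print("estimated light direction:", np.round(light, 2))
```

If estimates taken from different objects in the photo all point to roughly one direction, a single source, the sun, explains the scene, which is the conclusion Farid reached.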

“It is highly improbable that anyone could have created such a perfect forgery with the technology available in 1963,” said Farid. With no evidence of tampering, he concluded that the incriminating photo was authentic.

“As our digital forensic tools become more sophisticated, we increasingly have the ability to apply them to historic photos in an attempt to resolve some long-standing mysteries,” said Farid.

(Photo: Dartmouth C.)

Dartmouth College

BABIES WITH AN ACCENT

In the first days of their lives, French infants already cry in a different way to German babies. This was the result of a study by researchers from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, the Centre for Pre-language Development and Developmental Disorders (ZVES) at the University Clinic Würzburg, and the Laboratory of Cognitive Sciences and Linguistics at the Ecole Normale Supérieure in Paris.

In this study, the scientists compared recordings of 30 French and 30 German infants aged between two and five days old. While the French newborns more frequently produced rising crying tones, German babies cried with falling intonation. The reason for this is presumably the differing intonation patterns in the two languages, which are already perceived in the uterus and are later reproduced.

In the last trimester of the pregnancy, human fetuses become active listeners. "The sense of hearing is the first sensory system that develops", says Angela Friederici, one of the Directors at the Max Planck Institute. "The mother’s voice, in particular, is sensed early on." However, the fetus’ hearing in the uterus is restricted due to the amniotic fluid. "What gets through are primarily the melodies and intonation of the respective language". In previous work, Professor Friederici’s research team found evidence that the intonation patterns of the respective mother tongue are already ingrained in the brains of four-month-old babies.

Especially large differences exist between spoken German and French. "In French, a lot of words have stress at the end, so that the intonation rises, while in German, it is mostly the opposite", Friederici explains. The word "papa", for example, is stressed on the final syllable in French but on the first syllable in German.

Until now it was considered unlikely that this had any influence on newborns' cries, as it was assumed that their "crying melody" was shaped by the rising and falling of breath pressure, as in baby chimpanzees, and not by mental representations in the brain. That assumption turns out to be wrong, as the analysis of more than 20 hours of babies' crying from German and French maternity wards shows. The analysis, conducted under the supervision of the psychologist Kathleen Wermke from the ZVES, showed that the newborns tended to produce the intonation pattern most typical of their respective mother tongue. The crying patterns of the German infants mostly began loud and high and followed a falling curve, while the French infants more often cried with a rising tone.

This early sensitivity to features of intonation may later help the infants learn their mother tongue, the researchers say. "When they begin to form their first sounds, they can build on melodic patterns that are already familiar and, in this way, don't have to start from scratch", says the neuropsychologist. The evolutionary roots of this behaviour are older than the emergence of spoken language, the researchers believe. "The imitation of melodic patterns developed over millions of years and contributes to the mother-child bond", says Friederici.
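At its simplest, classifying a cry as rising or falling comes down to the sign of the slope fitted to its pitch (F0) contour; a toy sketch with simulated contours (the study's acoustic analysis was certainly more elaborate):

```python
import numpy as np

def contour_direction(f0_hz):
    """Classify a pitch contour by the sign of its fitted slope."""
    t = np.arange(len(f0_hz))
    slope = np.polyfit(t, f0_hz, 1)[0]
    return "rising" if slope > 0 else "falling"

# Simulated F0 tracks (Hz) sampled across one cry; illustrative only.
french_like = np.array([400, 410, 425, 445, 460, 470])
german_like = np.array([470, 465, 450, 430, 415, 400])

print("French-like cry:", contour_direction(french_like))  # rising
print("German-like cry:", contour_direction(german_like))  # falling
```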

(Photo: MPI für Kognitions- und Neurowissenschaften)

Max-Planck-Gesellschaft, München

IOWA STATE SCIENTIST DEVELOPS LAB MACHINE TO STUDY GLACIAL SLIDING RELATED TO RISING SEA LEVELS

Neal Iverson opened his laboratory's walk-in freezer and said the one-of-a-kind machine inside could help scientists understand how glaciers slide across their beds. And that could help researchers predict how glaciers will react to climate change and contribute to rising sea levels.

Iverson is an Iowa State University professor of geological and atmospheric sciences. He has worked for three years on his big new machine, over nine feet tall, which he calls a glacier sliding simulator.

At the center of the machine is a ring of ice about eight inches thick and about three feet across. Below the ice is a hydraulic press that can put as much as 170 tons of force on the ice, creating pressures equal to those beneath a glacier 1,300 feet thick. Above are motors that can rotate the ice ring at its centerline at speeds of 100 to 7,000 feet per year. Either the speed of the ice or the stress dragging it forward can be controlled. Around the ice is circulating fluid - its temperature controlled to 1/100th of a degree Celsius - that keeps the ice at its melting point so it slides on a thin film of water.
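The quoted specifications hang together: dividing the press's maximum force by a plausible area for the ice ring gives roughly the basal pressure under 1,300 feet of ice. A quick check, in which the ring's roughly 20 cm radial width is an assumption (the release gives only the diameter and the vertical thickness):

```python
import math

# Pressure at the base of 1,300 ft (~396 m) of glacier ice.
rho_ice, g, h = 917.0, 9.81, 396.0  # kg/m^3, m/s^2, m
p_glacier = rho_ice * g * h

# 170 (metric) tons of force spread over an annular ice ring about
# 3 ft (0.91 m) across; the ~0.2 m radial width is an assumption.
force = 170_000 * 9.81  # N
r_outer, width = 0.91 / 2, 0.20
area = math.pi * (r_outer**2 - (r_outer - width)**2)

print(f"glacier basal pressure: {p_glacier / 1e6:.1f} MPa")  # ~3.6 MPa
print(f"press on ice ring:      {force / area / 1e6:.1f} MPa")  # ~3.7 MPa
```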

As Iverson starts running experiments with the simulator this month, he'll be looking for data that help explain glacier movement.

"For a particular stress, which depends on a glacier's size and shape, we'd like to know how fast a glacier will slide," Iverson said.

Glacier sliding is something that matters far from the ice fields. As the climate warms, Iverson said, glaciers slide faster. When they hit coasts, they dump ice into the ocean. And when those icebergs melt, they contribute to rising sea levels.

But there's a lot about the process researchers still don't know.

"We can't predict how fast glaciers slide - even to a factor of 10," Iverson said. "We don't know enough about how they slide to do that."

And so Iverson came up with the idea of a glacier in a freezer that allows him to isolate effects of stress, temperature and melt-water on speeds of glacier sliding.

The project is supported by a $529,922 grant from the National Science Foundation. While Iverson had a rough design for the simulator, he said a team of three engineers from the U.S. Department of Energy's Ames Laboratory - Terry Herrman, Dan Jones and Jerry Musselman - improved the design and turned it into a working machine.

Iverson said the machine won't simulate everything about glacier sliding.

"The fact is we can't simulate the real process," he said. "We can only simulate key elements of the process. The purpose of these experiments will be to idealize how the system works and thereby learn fundamentals of the sliding process that can't be learned in the field because of the complexity there."

Iverson, who also does field studies at glaciers in Sweden and Norway, said glaciology needs work on the ground and in the lab. But it's been decades since anybody has attempted the kind of laboratory simulations he'll be doing.

"There hasn't been a device to do this," Iverson said. "And so there haven't been any experiments."

To change that, Iverson is pulling on a coat, hat and gloves and working in his lab's freezer. He has ice rings to build. Equipment to calibrate. And experiments to run.

(Photo: Bob Elbert)

Iowa State University

ARGONNE "HOMEGROWN" HYBRID SOLAR CELL AIMS FOR LOW-COST POWER

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have refined a technique to manufacture solar cells by creating tubes of semiconducting material and then "growing" polymers directly inside them. The method has the potential to be significantly cheaper than the process used to make today’s commercial solar cells.

Funding for this research was provided by the Department of Energy’s Office of Basic Energy Sciences and by the NSF-Materials Research Science and Engineering Center at the University of Chicago.

Because the production costs of today's generation of solar cells prevent them from competing economically with fossil fuels, Argonne researchers are working to re-imagine the solar cell's basic design. Most current solar cells use crystalline silicon or cadmium telluride, but growing a high-purity crystal is energy- and labor-intensive, making the cells expensive.

The next generation, called hybrid solar cells, uses a blend of cheaper organic and inorganic materials. To combine these materials effectively, Argonne researchers created a new technique to grow organic polymers directly inside inorganic nanotubes.

At its most basic level, solar cell technology relies on a series of processes initiated when photons, or particles of light, strike semiconducting material. When a photon hits the cell, it excites one electron out of its initial state, leaving behind a "hole" of positive charge.

Hybrid solar cells contain two separate types of semiconducting material: one conducts electrons, the other holes. At the junction between the two semiconductors, the electron-hole pair gets pulled apart, creating a current.

In the study, Argonne nanoscientist Seth Darling and colleagues at Argonne and the University of Chicago had to rethink the geometry of the two materials. If the two semiconductors are placed too far apart, the electron-hole pair will die in transit. However, if they're packed too closely, the separated charges won’t make it out of the cell.

In designing an alternative, scientists paired an electron-donating conjugated polymer with the electron acceptor titanium dioxide (TiO2).

Titanium dioxide readily forms minuscule tubes just tens of nanometers across—10,000 times smaller than a human hair. Rows of tiny, uniform nanotubes sprout across a film of titanium that has been submerged in an electrochemical bath.

The next step required the researchers to fill the nanotubes with the organic polymer—a frustrating process.

"Filling nanotubes with polymer is like trying to stuff wet spaghetti into a table full of tiny holes," Darling said. "The polymer ends up bending and twisting, which leads to inefficiencies both because it traps pockets of air as it goes and because twisted polymers don’t conduct charges as well.

"In addition, this polymer doesn’t like titanium dioxide," Darling added. "So it pulls away from the interface whenever it can."

Trying to sidestep this problem, the team hit on the idea of growing the polymer directly inside the tubes. They filled the tubes with a polymer precursor, turned on ultraviolet light, and let the polymers grow within the tubes.

Grown this way, the polymer doesn’t shy away from the TiO2. In fact, tests suggest the two materials actually mingle at the molecular level; together they are able to capture light at wavelengths inaccessible to either of the two materials alone. This "homegrown" method is potentially much less expensive than the energy-intensive process that produces the silicon crystals used in today’s solar cells.

These devices dramatically outperform those fabricated by filling the nanotubes with pre-grown polymer, producing about 10 times more electricity from absorbed sunlight. The solar cells produced by this technique, however, do not currently harness as much of the available energy from sunlight as silicon cells can. Darling hopes that further experiments will improve the cells' efficiency.

(Photo: ANL)

Argonne National Laboratory

RESEARCH SHOWS IMPACTS FROM AIRBORNE NITROGEN

The impact of airborne nitrogen released from the burning of fossil fuels and widespread use of fertilizers in agriculture is much greater than previously recognized and even extends to remote alpine lakes, according to a study published Nov. 6 in the journal Science.

Examining nitrogen deposition in alpine and subalpine lakes in Colorado, Sweden and Norway, James Elser, a limnologist in the School of Life Sciences at Arizona State University, and his colleagues found that, on average, nitrogen levels in lakes were elevated, even in those isolated from urban and agricultural centers.

The article “Shifts in lake N:P stoichiometry and nutrient limitation driven by atmospheric nitrogen deposition” presents experimental data from more than 90 lakes. The researchers’ collaboration also revealed that nitrogen-rich air pollution has already altered the lakes’ fundamental ecology.

“This is because plant plankton, or phytoplankton, like all plants, need nitrogen and phosphorus for growth,” Elser says. “Inputs from pollution in the atmosphere appear to shift the supplies of nitrogen relative to other elements, like phosphorus.”

The increase in the availability of nitrogen means that growing phytoplankton in lakes receiving elevated nitrogen deposition are now limited by how much phosphorus they can acquire. Elser says that this is important because “we know that phosphorus-limited phytoplankton are poor food – basically ‘junk food’ for animal plankton, which in turn are food for fish.”

“Such a shift could potentially affect biodiversity,” he adds. “However, we don’t really know because unlike in terrestrial systems, the impacts of nitrogen deposition on aquatic systems have not been widely studied.”
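The stoichiometric shift Elser describes is usually expressed as a molar N:P ratio computed from measured concentrations; a minimal sketch with invented numbers:

```python
# Convert measured total N and P concentrations into a molar N:P ratio.
# Concentrations below are invented for illustration.
N_UG_L, P_UG_L = 350.0, 8.0          # micrograms per liter
N_ATOMIC, P_ATOMIC = 14.007, 30.974  # g/mol

np_molar = (N_UG_L / N_ATOMIC) / (P_UG_L / P_ATOMIC)
print(f"molar N:P = {np_molar:.0f}")  # well above the classic Redfield
                                      # benchmark of 16, hinting at P limitation
```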

Elser’s collaborators include researchers Tom Andersen and Dag Hessen from the University of Oslo; Jill Baron of the United States Geological Survey and Natural Resource Ecology Laboratory at Colorado State University; Ann-Kristin Bergström and Mats Jansson with Umeå University, Sweden; and Koren Nydick of the Mountain Studies Institute in Colorado, in addition to Marcia Kyle and Laura Steger, who are members of his own group in ASU’s College of Liberal Arts and Sciences.

Hessen, a well-known limnologist, and Elser have had a long-standing collaborative relationship, looking not only at nitrogen deposition but also zooplankton nutrition and a broad range of stoichiometric studies. Elser met Bergström at a conference at Umeå University and discovered that she had performed similar experiments in Sweden.

“By combining these studies we were able to achieve a more global picture of how nitrogen was impacting a broad range of lakes and come to firmer conclusions about effects of deposition,” Elser says.

Elser and Hessen hope to expand on these findings and have a pending grant proposal with the Norwegian government. In addition, Elser says he hopes to perform similar studies in China “where atmospheric nitrogen pollution is extremely high,” but, as yet, unstudied.

Elser has built a career around asking questions about energy and material flows in ecosystems, and traveling all over the world to find answers. Understanding the balance of phosphorus, carbon and nitrogen in systems forms the backbone of Elser’s worldview, known as “stoichiometric theory.” His pioneering studies have jumpstarted new research approaches, insights into nutrient limitation, trophic dynamics, biogeochemical cycling, and linkages between evolutionary and ecosystem processes. This study was supported by the National Science Foundation.

(Photo: James Elser)

Arizona State University

NANOPARTICLES MAY CAUSE DNA DAMAGE ACROSS A CELLULAR BARRIER

Scientists have shown in the laboratory that metal nanoparticles damaged the DNA in cells on the other side of a cellular barrier. The nanoparticles did not cause the damage by passing through the barrier, but generated signalling molecules within the barrier cells that were then transmitted to cause damage in cells on the other side of the barrier.

The research was carried out by a team at the University of Bristol and colleagues, and is published online this week in Nature Nanotechnology.

The team grew a layer of human cells (about three cells in thickness) in the lab. They then used this barrier to examine the indirect effects of cobalt-chromium nanoparticles on cells lying behind it.

The amount of DNA damage seen in the cells behind the protective barrier was similar to the DNA damage caused by direct exposure of the cells to the nanoparticles.

Dr Patrick Case, senior author on the study, said: “We need to be clear that our experimental set-up is not a model of the human body. The cells receiving the exposure were bathed in culture media, whilst in the body they might be separated from the barrier by connective tissue and blood vessels. The barrier cells were a malignant cell line and three cells in thickness, whilst barriers in the body are thinner and composed of non-malignant cells.”

Gevdeep Bhabra, lead author on the paper, said: “Even though this work was done in the laboratory, our results suggest the existence of a mechanism by which biological effects can be signalled through a cellular barrier, thus it gives us insights into how barriers in the body such as the skin, the placenta and the blood-brain barrier might work.”

Professor Ashley Blom, Head of Orthopaedic Surgery at the University of Bristol, added: “If barriers in the body do act in this way, then it gives us insight into how small particles such as metal debris or viruses may exert an influence in the body. It also highlights a potential mechanism whereby we might be able to deliver novel drug therapies in the future.”

These findings suggest that the indirect, as well as the direct, effects of nanoparticles on cells might be important when evaluating their safety.

(Photo: Bristol U.)

University of Bristol

KEEPING THAT RELATIONSHIP SECRET? IT'S LIKELY DETRIMENTAL TO YOUR HEALTH, SELF-ESTEEM

People who keep their relationships secret may face damage to their health as well as their relationship over the long term, a Colorado State University psychology professor says in a newly published set of studies.

The research, by Justin Lehmiller, assistant professor of Applied Social Psychology, is the first to look at the health issues surrounding secret relationships – information that could someday help the psychology profession with couples counseling.

The studies appeared in the November issue of Personality and Social Psychology Bulletin.

Lehmiller examined online responses from two groups totaling more than 700 people. A large number of respondents indicated that they were keeping their relationships secret from others. The secret relationships ranged from interracial and same-sex partnerships to workplace romances. All participants were asked to report on their feelings about their relationship, as well as their personal physical and psychological health.

“We found people who keep relationships secret tend to be less committed – secrecy seems to limit how close you can get to a partner and whether they can become a central part of your life,” Lehmiller said.

“Such people also reported worse physical health outcomes and lower self-esteem. The data suggests that one of the reasons for this seems to be that keeping a relationship secret is stressful. It makes people nervous and anxious and scared. We suspect that when people chronically experience those negative emotions, that’s what undermines your health.”

People in secret relationships reported more frequent symptoms of poor health, such as headaches, loss of sexual interest/pleasure, low energy and poor appetite. They also had worse self-esteem, or more negative feelings about who they are as people.

Lehmiller cautioned that the studies only reveal general trends and should not be taken to mean that secret relationships are inherently bad. People with particularly strong social support networks may be less likely to suffer even if they keep their relationships secret, he said.

Therapists could potentially use the results to help individuals or couples seeking treatment, Lehmiller said: “We know that being in a secret relationship is challenging and may have negative effects on both the relationship and the partners’ health. This means therapists need to evaluate these situations carefully and ask, ‘Is it worthwhile to disclose the relationship?’ It’s possible that, for some, disclosure might improve the health of the individuals as well as their partnerships because it reduces stress and burden.”

Lehmiller was most surprised by the variety of secret relationships that people disclosed in the surveys. He plans to conduct long-term follow-up studies with the same individuals to see how they’re coping with their relationships and whether the effects of secrecy change over time.

(Photo: Colorado SU)

Colorado State University

CORNELL RESEARCHERS IDENTIFY WEAK LINK IN CANCER CELL ARMOR

0 comentarios

It has long been known that the so-called p53 gene suppresses tumors – when it mutates, cancer cells take hold and multiply. New research at Cornell's College of Veterinary Medicine, however, shows that inhibiting a second gene (Hus1) is lethal to cells with p53 mutations, knowledge that has scientists investigating whether the same combination may kill cancerous cells.

Using a mouse model, senior author Robert Weiss, associate professor of molecular genetics, first author and graduate student Stephanie Yazinski, and colleagues explored how cells respond when both genes are inhibited. When they inactivated the Hus1 gene in healthy mammary gland tissues, the researchers report, it caused genome damage and cell death. And when they studied the effects of Hus1 inactivation in p53-deficient cells, which are otherwise highly resistant to cell death, they discovered that Hus1 inactivation killed these cells even more effectively.

The study is published in the Nov. 9 issue of the Proceedings of the National Academy of Sciences.

"Our work contributes to an important new understanding of cancer cells and their weaknesses," Weiss said. "The mutations that allow cancer cells to divide uncontrollably also make the cancer cells more dependent on certain cellular processes. We were able to exploit one such dependency of p53-deficient cells and could efficiently kill these cells by inhibiting Hus1."

Weiss and his team have new experiments under way.

"We've proven the power of inhibiting both pathways in normal tissue," said Weiss. "Now we want to extend our knowledge to cancerous tissue and determine if the loss of Hus1 will impact the ability of cancers with p53 mutations to take hold and grow."

(Photo: Cornell U.)

Cornell University

Wednesday, November 25, 2009

THE EARTH REALLY DOES MOVE FOR INSECTS

0 comentarios

Researchers at Rothamsted Research, an Institute of the Biotechnology and Biological Sciences Research Council, and at the University of Greenwich have explained a characteristic feature of insect migration that has puzzled researchers for over 40 years: how do insects maintain wind-related orientation at altitudes of several hundreds of metres in the dark?

The environmental cues used by nocturnal insect migrants to select and maintain common headings, while flying at altitudes of several hundreds of metres above the ground in low illumination, and the adaptive benefits of this behaviour, have long remained a mystery. Studies made with both entomological and meteorological radars have frequently reported that insects move in layers and that the individuals forming these layers often show a considerable degree of uniformity in their headings – behaviour known as ‘common orientation’. The researchers' new theory accounts for the flight behaviour of many medium-sized (10-70 mg) insect species flying at night, at high altitudes, under conditions where downwind orientation cannot be explained by visual assessment of movement relative to the ground or by compass mechanisms.

A collaboration between mathematical modellers and biologists has now revealed that these insects are responding to the effects of turbulence. Insects possess sensors capable of detecting extremely faint air movements. Migrating insects are bounced around in the turbulence, and the authors conclude that they can use this buffeting to sense which direction the air is moving. In response, they alter their vertical profile and direction, increasing both speed and migration distance.

Lead researcher Andy Reynolds said: “Common orientation close to the downwind direction allows the nocturnal migrants to add their flight speeds (of approximately 2 m/s) to the wind speed, thus increasing the distance travelled during the migratory flight.”
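
The arithmetic behind that quote is simple vector addition: the insect's velocity over the ground is the wind vector plus its own flight vector, so the closer the heading is to downwind, the more of the 2 m/s airspeed counts toward migration distance. A minimal sketch in Python – the 10 m/s wind speed is an illustrative value, not a figure from the paper:

# Ground speed of a migrant heading at various angles off the downwind
# direction. Wind speed is an assumed, illustrative value.
import math

wind_speed = 10.0   # m/s, assumed high-altitude wind
air_speed = 2.0     # m/s, the self-powered flight speed quoted above

for offset_deg in (0, 30, 60, 90):
    offset = math.radians(offset_deg)
    vx = wind_speed + air_speed * math.cos(offset)  # downwind component
    vy = air_speed * math.sin(offset)               # crosswind component
    print(f"{offset_deg:2d} deg off downwind -> {math.hypot(vx, vy):5.2f} m/s over ground")

# Perfect downwind orientation gives the full 12 m/s; a 90-degree offset
# wastes nearly all of the insect's own effort (about 10.2 m/s).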

This mechanism also predicts that insects flying in the Northern Hemisphere will typically be offset to the right of the mean wind line as a consequence of the Earth’s rotation. The researchers report the first evidence for this effect in their data from insect-monitoring radars.

Reynolds said: "Nocturnal insects are frequently being ‘misled’ by the action of the Ekman spiral (the turning of the mean wind direction due to the Earth’s rotation). Consistent with expectations, here in the UK we observed that insects have a tendency to fly to the right of the mean wind line.”

The findings have clear implications for the accurate prediction of the flight trajectories of migrating nocturnal insects. “Over long distances, even these relatively small but consistent offsets have significant effects on the destination of migrating insects and should be taken into account when predicting their flight trajectories,” said Reynolds. This is particularly important when designing forecasting models to predict the movement of insect pests.

(Photo: Andy Banthorpe)

The Biotechnology and Biological Sciences Research Council

SOLVING BIG PROBLEMS

0 comentarios

One of the most basic problems in maths is solving very large systems of linear equations. There's nothing mysterious about them; they simply take time, and the more variables there are, the longer it takes. Even a supercomputer would struggle to solve a system of equations with a trillion variables.

However, in a new paper recently published in Physical Review Letters, Aram Harrow at the University of Bristol and colleagues from MIT in the United States have discovered a quantum algorithm that solves the problem much faster than conventional computers can. And the larger the problem, the greater the speedup.

To understand how the quantum algorithm works, think of a digital equaliser in a stereo CD player. The equaliser needs to amplify some components of the signal and attenuate others. Ordinary equalisers employ classical computer algorithms that treat each component of the sound one at a time.

By contrast, a quantum equaliser could employ a quantum algorithm that treats all components together at once (a trick called ‘quantum parallelism’). The result is a huge reduction in the difficulty of signal processing.

“Large-scale linear systems of equations exist in many fields, such as weather prediction, engineering, and computer vision”, says Harrow. “Quantum computers could supply serious improvements for these and many other problems. For example, a trillion-variable problem would take a classical computer at least a hundred trillion steps to solve, but using the new algorithm, a quantum computer could solve the problem in just a few hundred steps”.
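
The numbers in Harrow's example follow from the difference between runtimes that grow with the number of variables N and runtimes that grow with log N. A back-of-envelope sketch in Python – the constant factors here are assumptions chosen only to reproduce the quoted orders of magnitude, since the real runtime also depends on the matrix's sparsity, condition number and the precision required:

# Compare linear scaling (classical) with logarithmic scaling (quantum)
# for a trillion-variable system. Constant factors are illustrative.
import math

n = 10**12                          # number of variables
classical_steps = 100 * n           # touching each variable ~100 times
                                    # already costs ~1e14 steps
quantum_steps = 10 * math.log2(n)   # log2(1e12) is only about 40

print(f"classical: ~{classical_steps:.1e} steps")   # ~1.0e+14
print(f"quantum:   ~{quantum_steps:.0f} steps")     # a few hundred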

The solution could also be applied to other complex processes such as image and video processing, genetic analyses and even Internet traffic control.

(Photo: Bristol U.)

University of Bristol

CONTROVERSIAL NEW CLIMATE CHANGE RESULTS

0 comentarios

New data show that the balance between the airborne and the absorbed fraction of carbon dioxide has stayed approximately constant since 1850, despite emissions of carbon dioxide having risen from about 2 billion tons a year in 1850 to 35 billion tons a year now.

This suggests that terrestrial ecosystems and the oceans have a much greater capacity to absorb CO2 than had been previously expected.

The results run contrary to a significant body of recent research which predicts that the capacity of terrestrial ecosystems and the oceans to absorb CO2 should start to diminish as CO2 emissions increase, letting greenhouse gas levels skyrocket. Dr Wolfgang Knorr of the University of Bristol found that in fact the trend in the airborne fraction since 1850 has been only 0.7 ± 1.4% per decade, which is essentially zero.
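
The quantity at issue, the airborne fraction, is simply the ratio of the annual rise in atmospheric CO2 to annual emissions. A minimal sketch of the bookkeeping, using the standard conversion of about 7.8 billion tons of CO2 per ppm of atmosphere; the growth-rate figure is an illustrative round number, not Knorr's data:

# Airborne fraction = atmospheric CO2 increase / CO2 emitted, per year.
GT_PER_PPM = 7.8            # Gt CO2 per ppm of atmospheric CO2 (standard)

emissions = 35.0            # Gt CO2 per year, the figure quoted above
atmos_growth_ppm = 1.9      # ppm per year, assumed for illustration

airborne_fraction = atmos_growth_ppm * GT_PER_PPM / emissions
print(f"airborne fraction ~ {airborne_fraction:.2f}")   # roughly 0.4

# Knorr's finding is that this ratio shows no significant trend since
# 1850, even though the denominator has grown more than tenfold.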

The strength of the new study, published online in Geophysical Research Letters, is that it rests solely on measurements and statistical data, including historical records extracted from Antarctic ice, and does not rely on computations with complex climate models.

This work is extremely important for climate change policy, because the emission targets to be negotiated at the United Nations Climate Change Conference in Copenhagen early next month have been based on projections that have a free carbon sink already factored in. Some researchers have cautioned against this approach, pointing to evidence that suggests the sink has already started to decrease.

So is this good news for climate negotiations in Copenhagen? “Not necessarily”, says Knorr. “Like all studies of this kind, there are uncertainties in the data, so rather than relying on Nature to provide a free service, soaking up our waste carbon, we need to ascertain why the proportion being absorbed has not changed”.

Another result of the study is that emissions from deforestation might have been overestimated by between 18 and 75 per cent. This would agree with results published last week in Nature Geoscience by a team led by Guido van der Werf from VU University Amsterdam. They re-visited deforestation data and concluded that emissions have been overestimated by at least a factor of two.

(Photo: Bristol U.)

University of Bristol

Tuesday, November 24, 2009

NITROGEN LOSS THREATENS DESERT PLANT LIFE

0 comentarios

As the climate gets warmer, arid soils lose nitrogen as gas, reports a new Cornell study. That could lead to deserts with even less plant life than they sustain today, say the researchers.

"This is a way that nitrogen is lost from an ecosystem that people have never accounted for before," said Jed Sparks, associate professor of ecology and evolutionary biology and co-author of the study, published in the Nov. 6 issue of Science. "It allows us to finally understand the dynamics of nitrogen in arid systems"

Available nitrogen is second only to water as the biggest constraint to biological activity in arid ecosystems, but before now, ecologists struggled to understand how the inputs and outputs of nitrogen in deserts balance.

By showing that the higher temperatures cause nitrogen to escape as gas from desert soils, the Cornell researchers have balanced the nitrogen budget in deserts. They stress that most climate change models need to be altered to consider these findings.

Sparks and lead author Carmody McCalley, a graduate student, warn that temperature increases and shifting precipitation patterns due to climate change may lead to further nitrogen losses in arid ecosystems. That would make arid soils even more infertile and unable to support most plant life, McCalley warned. And although some climate models predict more summer rainfall for desert areas, that water, when combined with heat, would greatly increase nitrogen losses, she added.

"We're on a trajectory where plant life in arid ecosystems could cease to do well," she said.

In the past, researchers focused on biological mechanisms, in which soil microbes near the surface produce nitrogen gas that dissipates into the air, but McCalley and Sparks found that non-biological (abiotic) processes play a bigger role in nitrogen losses. They used instruments sensitive enough to measure nitrogen gases at levels of parts per trillion – a sensitivity that had never before been applied to soil measurements.

The researchers covered small patches of soil in the Mojave Desert with sealed containers to measure the nitric oxide (NO), NOy (a group of more than 25 different compounds containing oxidized nitrogen) and ammonia gases that escape from desert soils. To rule out the role of light in this process, McCalley kept light constant but varied the temperature in lab experiments.
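
Closed-chamber measurements of this kind convert the rate at which a gas accumulates inside the chamber into a flux per unit of soil area. A minimal sketch of that standard calculation; every number below is invented for illustration, not taken from the study:

# Flux = (mixing-ratio rise rate) x (moles of air per m^3) x volume / area
chamber_volume = 0.01        # m^3, assumed chamber size
soil_area = 0.05             # m^2 of soil enclosed, assumed
rise_ppt_per_s = 50.0        # NO mixing-ratio rise, parts per trillion per s

# Ideal gas law at ~40 C: moles of air per m^3 = P / (R * T)
mol_air_per_m3 = 101325 / (8.314 * 313)   # about 39 mol/m^3

flux = rise_ppt_per_s * 1e-12 * mol_air_per_m3 * chamber_volume / soil_area
print(f"NO flux ~ {flux:.1e} mol per m^2 per s")   # ~3.9e-10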

"At 40 to 50 degrees Celsius [about 100-120 F], we found rapid increases in gases coming out of the soil" regardless of the light, McCalley said. Midday ground temperatures average about 150 F and can reach almost 200 F in the Mojave Desert.

"Any place that gets hot and dry, in all parts of the world, will likely exhibit this pattern," said Sparks.

In addition, the researchers note, more nitrogen oxides in the lower atmosphere create ozone near the ground, which contributes to air pollution and increases the greenhouse effect that warms the planet.

With deserts accounting for 35 to 40 percent of the Earth's land surface, and with arid and semiarid lands being the most likely areas for new human settlements, air quality issues, loss of soil fertility and further desertification need to be considered as the climate warms, the researchers said.

The researchers also point out that most climate modelers now use algorithms that only consider biological factors to predict nitrogen gases coming from soils.

"The code in climate models would have to change to account for abiotic impacts on this part of the nitrogen budget," McCalley concluded.

(Photo: Cornell U.)

Cornell University

REDUCING GREENHOUSE GASES MAY NOT BE ENOUGH TO SLOW CLIMATE CHANGE

0 comentarios

Georgia Tech City and Regional Planning Professor Brian Stone has published a paper in the December edition of Environmental Science and Technology suggesting that policymakers need to address the influence of global deforestation and urbanization on climate change, in addition to greenhouse gas emissions.

According to Stone’s paper, as the international community meets in Copenhagen in December to develop a new framework for responding to climate change, policymakers need to give serious consideration to broadening the range of management strategies beyond greenhouse gas reductions alone.

“Across the U.S. as a whole, approximately 50 percent of the warming that has occurred since 1950 is due to land use changes (usually in the form of clearing forest for crops or cities) rather than to the emission of greenhouse gases,” said Stone. “Most large U.S. cities, including Atlanta, are warming at more than twice the rate of the planet as a whole – a rate that is mostly attributable to land use change. As a result, emissions reduction programs – like the cap and trade program under consideration by the U.S. Congress – may not sufficiently slow climate change in large cities where most people live and where land use change is the dominant driver of warming.”

According to Stone’s research, slowing the rate of forest loss around the world, and regenerating forests where lost, could significantly slow the pace of global warming.

“Treaty negotiators should formally recognize land use change as a key driver of warming,” said Stone. “The role of land use in global warming is the most important climate-related story that has not been widely covered in the media.”

Stone recommends slowing what he terms the “green loss effect” through the planting of millions of trees in urbanized areas and through the protection and regeneration of global forests outside of urbanized regions. Forested areas provide the combined benefits of directly cooling the atmosphere and of absorbing greenhouse gases, leading to additional cooling. Green architecture in cities, including green roofs and more highly reflective construction materials, would further contribute to a slowing of warming rates. Stone envisions local and state governments taking the lead in addressing the land use drivers of climate change, while the federal government takes the lead in implementing carbon reduction initiatives, like cap and trade programs.

“As we look to address the climate change issue from a land use perspective, there is a huge opportunity for local and state governments,” said Stone. “Presently, local government capacity is largely unharnessed in climate management structures under consideration by the U.S. Congress. Yet local governments possess extensive powers to manage the land use activities in both the urban and rural areas.”

Georgia Institute of Technology

CAUGHT IN THE ACT

0 comentarios

Breaking up may actually not be hard to do, say scientists who’ve found a population of butterflies that may be on its way to a split into two distinct species. What’s the cause of this particular breakup? A shift in wing color and mate preference.

In a paper published in the journal Science, the researchers at Harvard University, the University of Texas, and the University of Hawaii describe the relationship between diverging color patterns in Heliconius butterflies and the long-term divergence of populations into new and distinct species.

“Our paper provides a unique glimpse into the earliest stage of ecological speciation, where natural selection to fit the environment causes the same trait in the same population to be pushed in two different directions,” said Marcus Kronforst, a co-author of the study and a Bauer Fellow in the Center for Systems Biology at Harvard University. “If this trait is also involved in reproduction, this process can have a side effect of causing the divergent subpopulations to no longer interbreed. This appears to be the process that is just beginning among Heliconius butterflies in Ecuador.”

Heliconius species display incredible color-pattern variation across Central and South America, with closely related species usually sporting different colors. For instance, in Costa Rica, the two most closely related species differ in color: One species is white and the other yellow. In addition, both display a marked preference for others of the same color.

The Ecuadorean population examined by Kronforst and his colleagues shows the same white and yellow variation found in Costa Rica, but this population has not yet reached a level of strong reproductive isolation. The entire population lives in close proximity, and individuals of both colors come in contact with — and mate with — each other.

But by studying the Ecuadorean population in captivity, scientists found that the two colors do not mate randomly. Despite the genetic similarity between the groups — white and yellow varieties differ only at the color-determining gene — yellow Ecuadorean individuals show a preference for those of the same color. White male butterflies, most of which are heterozygous at the gene that controls color, show no color preference.

“This subtle difference in mate preference between the color forms in Ecuador may be the first step in a process that could eventually result in two species, as we see in Costa Rica,” said Kronforst, who began studies of Heliconius color pattern and behavioral genetics as a doctoral student in the laboratory of Lawrence Gilbert, professor in the section of integrative biology at the University of Texas, Austin.

Previous studies of species formation have focused on the characteristics of well-differentiated species, and the health and viability of their hybrids in particular, in an effort to identify how the species may have emerged and how they stay distinct. Heliconius provides a model for a different kind of study. The researchers considered species emergence from the opposite end, studying populations that have yet to diverge into separate species in order to identify the role of mate choice in the potential emergence of new species.

Having identified color-based mate preference in Heliconius, the researchers used a battery of genetic markers to compare the genomes of the white and yellow varieties, showing that they are genetically identical except for their different colors and preferences. The work suggests that the genes for color and preference are very close to one another in the genome; the two traits could even be caused by the same gene. The researchers’ next step is to identify the gene or genes responsible for the differences in color and mate preference.
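
The logic of that genome comparison is straightforward: estimate allele frequencies in each color form at many markers and look for the loci where the two forms differ sharply. A hypothetical sketch of such a scan; the marker names and frequencies are invented:

# Flag markers whose allele frequencies differ strongly between the forms.
markers = {
    "m01": (0.52, 0.49),           # (freq in white form, freq in yellow form)
    "m02": (0.31, 0.33),           # typical marker: nearly identical
    "color_locus": (0.95, 0.05),   # the one divergent region
}

for name, (p_white, p_yellow) in markers.items():
    diff = abs(p_white - p_yellow)
    flag = "  <-- divergent" if diff > 0.5 else ""
    print(f"{name}: allele-frequency difference = {diff:.2f}{flag}")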

“If we can identify this gene or genes, we can say conclusively how they influence both color and mate choice,” said Kronforst. “Subsequent work could elucidate exactly how changes in individual genes can, over long periods of time, lead to novel species.”

“This study shows the great potential of the genus Heliconius as a model system for integrating genetics, development, behavior, ecology, and evolution,” said co-author Gilbert.

(Photo: Rose Lincoln/Harvard Staff Photographer)

Harvard University

SHIFTING BLAME IS SOCIALLY CONTAGIOUS

0 comentarios

Merely observing someone publicly blame an individual in an organization for a problem – even when the target is innocent – greatly increases the odds that the practice of blaming others will spread with the tenacity of the H1N1 flu, according to new research from the USC Marshall School of Business and Stanford University.

Nathanael J. Fast, an assistant professor of management and organization at the USC Marshall School of Business, and Larissa Tiedens, a professor of organizational behavior at Stanford, conducted four different experiments and found that publicly blaming others dramatically increases the likelihood that the practice will become viral. The reason: blame spreads quickly because it triggers the perception that one's self-image is under assault and must be protected.

The study, called "Blame Contagion: The Automatic Transmission of Self-Serving Attributions," is believed to be the first to examine whether shifting blame to others is socially contagious. The results will be published in the November issue of the Journal of Experimental Social Psychology.

"When we see others protecting their egos, we become defensive too," says Fast, the study's lead author. "We then try to protect our own self-image by blaming others for our mistakes, which may feel good in the moment." He adds that in the long run, such behavior could hurt one's reputation and be destructive to an organization and further to our society as a whole.

Tiedens said the study didn't specifically look at the impact of hard economic times, but such conditions undoubtedly make the problem worse. "Blaming becomes common when people are worried about their safety in an organization," she said. "There is likely to be more blaming going on when people feel their jobs are threatened."

Fast says that when public blaming becomes common practice – especially by leaders – its effects on an organization can be insidious and withering: individuals who are fearful of being blamed for something become less willing to take risks, are less innovative or creative, and are less likely to learn from their mistakes.

"Blame creates a culture of fear," Fast said, "and this leads to a host of negative consequences for individuals and for groups."

A manager can keep a lid on the behavior by rewarding employees who learn from their mistakes and by making a point to acknowledge publicly his or her own mistakes, Fast says. Managers may also want to assign blame, when necessary, in private and offer praise in public to create a positive attitude in the workplace.

Or managers could follow the lead of companies such as Intuit, which implemented a "When Learning Hurts" session in which employees celebrated and learned from mistakes rather than pointing fingers and assigning blame. The blame-contagion research provides empirical evidence that such a practice can protect an organization's culture from these negative effects.

Anyone can become a blamer, Fast says, but there are some common traits. Typically, they are more ego defensive, have a higher likelihood of being narcissistic, and tend to feel chronically insecure.

President Richard Nixon is one example the authors point to in the study. Nixon harbored an intense need to enhance and protect his self-image and, as a result, made a practice of blaming others for his shortcomings. His former aides reported that this ego-defensiveness pervaded his administration. It was this culture of fear and blame that ultimately led to Nixon's political downfall.

The experiments showed that individuals who watched someone blame another for mistakes went on to do the same with others. In one experiment, half of the participants were asked to read a newspaper article in which Governor Schwarzenegger blamed special interest groups for the controversial special election that failed in 2005, costing the state $250 million. A second group read an article in which the governor took full responsibility for the failure.

Those who read about the governor blaming special interest groups were more likely to blame others for their own, unrelated shortcomings, compared with those who read about Schwarzenegger shouldering the responsibility.

Another experiment found that self-affirmation inoculated participants from blame. The tendency for blame to spread was completely eliminated in a group of participants who had the opportunity to affirm their self-worth.

"By giving participants the chance to bolster their self-worth we removed their need to self protect though subsequent blaming," says Fast.

The results have particularly important implications for CEOs. Executives and leaders would be wise to learn from such examples, Fast suggests, and instead display behaviors that help to foster a culture of psychological safety, learning, and innovation.

The University of Southern California

OCEANOGRAPHERS DEVELOP "SWARMS" OF ROBOTIC OCEAN EXPLORERS

0 comentarios

In an effort to plug gaps in knowledge about key ocean processes, the National Science Foundation (NSF)'s division of ocean sciences has awarded nearly $1 million to scientists at the Scripps Institution of Oceanography in La Jolla, Calif. The Scripps marine scientists will develop a new breed of ocean-probing instruments. Jules Jaffe and Peter Franks will spearhead an effort to design and deploy autonomous underwater explorers, or AUEs. AUEs will trace the fine details of oceanographic processes vital to tiny marine inhabitants.

While oceanographers have been skilled in detailing large-scale ocean processes, a need has emerged to zero in on functions unfolding at smaller scales. By defining localized currents, temperature, salinity, pressure and biological properties, AUEs will offer new and valuable information about a range of ocean phenomena.

"We're seeing great success in the global use of ocean profiling floats to document large-scale circulation patterns and other physical and chemical attributes of the deep and open seas," said Phillip Taylor of NSF's division of ocean sciences. "These innovative AUEs will allow researchers to sample the environments of coastal regions as well, and to better understand how small organisms operate in the complex surroundings of the oceans."

The miniature robots will aid in obtaining information needed for developing marine protected areas, determining critical nursery habitats for fish and other animals, tracking harmful algae blooms, and monitoring oil spills.

For marine protected areas, AUEs will help inform debates about the best areas for habitat protection. With harmful algal blooms and oil spills, the instruments can be deployed directly into outbreak patches to gauge how they develop and change over time. In the case of an airplane crash over the ocean, AUEs should be able to track currents to determine where among the wreckage a black box may be located.

"AUEs will fill in gaps between existing marine technologies," said Jaffe. "They will provide a whole new kind of information."

AUEs work through a system in which several soccer-ball-sized explorers are deployed along with many tens – or even hundreds – of pint-sized explorers. Collectively, the entire "swarm" of AUEs will track the ocean currents that small organisms, such as tiny abalone larvae, experience in the ocean.

"AUEs will give us information to figure out how small organisms survive, how they move in the ocean, and the physical dynamics they experience as they get around," said Franks. "AUEs should improve ocean models and allow us to do a better job of following 'the weather and climate of the ocean,' as well as help us understand things like carbon fluxes."

Franks, who conducts research on marine phytoplankton, says that "plankton are somewhat like the balloons of the ocean floating around out there. With AUEs, we're trying to figure out how the ocean works at scales that matter to plankton.

"If we place 100 AUEs in the ocean and let them go, we'll be able to look at how they move to get a sense of the physics driving current flows."

During the pilot phase of the project, Jaffe and colleagues will build five to six of the soccer-ball-sized explorers and 20 of the smaller versions. An outreach component of the project will enlist school children in building and ultimately deploying AUEs.

(Photo: SIO)

The Scripps Institution of Oceanography

CENTRAL AFRICA'S TROPICAL CONGO BASIN WAS ARID, TREELESS IN LATE JURASSIC

0 comentarios

The Congo Basin — with its massive, lush tropical rain forest — was far different 150 million to 200 million years ago. At that time Africa and South America were part of the single continent Gondwana. The Congo Basin was arid, with a small amount of seasonal rainfall, and few bushes or trees populated the landscape, according to a new geochemical analysis of rare ancient soils.

The geochemical analysis provides new data for the Jurassic period, when very little is known about Central Africa's paleoclimate, says Timothy S. Myers, a paleontology doctoral student in the Roy M. Huffington Department of Earth Sciences at Southern Methodist University in Dallas.

"There aren't a whole lot of terrestrial deposits from that time period preserved in Central Africa," Myers says. "Scientists have been looking at Africa's paleoclimate for some time, but data from this time period is unique."

There are several reasons for the scarcity of deposits: ongoing armed conflict makes it difficult to retrieve them, and thick vegetation, a humid climate and continual erosion prevent the preservation of ancient deposits that would safeguard clues to Africa's paleoclimate.

Myers' research is based on a core sample drilled by a syndicate interested in the oil and mineral deposits in the Congo Basin. Myers accessed the sample — drilled from a depth of more than 2 kilometers — from the Royal Museum for Central Africa in Tervuren, Belgium, where it is housed. With the permission of the museum, he analyzed pieces of the core at the SMU Huffington Department of Earth Sciences Isotope Laboratory.

"I would love to look at an outcrop in the Congo," Myers says, "but I was happy to be able to do this."

The Samba borehole, as it's known, was drilled near the center of the Congo Basin. The Congo Basin today is a closed canopy tropical forest — the world's second largest after the Amazon. It's home to elephants, great apes, many species of birds and mammals, as well as the Congo River. Myers' results are consistent with data from other low paleolatitude, continental, Upper Jurassic deposits in Africa and with regional projections of paleoclimate generated by general circulation models, he says.

"It provides a good context for the vertebrate fossils found in Central Africa," Myers says. "At times, any indications of the paleoclimate are listed as an afterthought, because climate is more abstract. But it's important because it yields data about the ecological conditions. Climate determines the plant communities, and not just how many, but also the diversity of plants."

While there was no evidence of terrestrial vertebrates in the deposits that Myers studied, dinosaurs were present in Africa at the same time. Their fossils appear in places that were once closer to the coast, he says, and probably wetter and more hospitable.

The Belgian samples yielded good evidence of the paleoclimate. Myers found minerals indicative of an extremely arid climate, typical of a marshy, saline environment. With the Congo Basin at the center of Gondwana, humid marine air from the coasts would have lost much of its moisture content by the time it reached the interior of the massive continent.

"There probably wouldn't have been a whole lot of trees; more scrubby kinds of plants," Myers says.

The clay minerals that form in soils have an isotopic composition related to that of the local rainfall and shallow groundwater. The difference in isotopic composition between these waters and the clay minerals is a function of surface temperature, he says. By measuring the oxygen and hydrogen isotopic values of the clays in the soils, researchers can estimate the temperature at which the clays formed.
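
One widely used calibration of this kind of clay palaeothermometer – quoted here only to illustrate the method, since the article does not say which calibration Myers used – is the Sheppard and Gilg (1996) kaolinite-water oxygen equation:

1000 ln α(kaolinite-water) = 2.76 × 10^6 / T^2 − 6.75

Here α is the oxygen isotope fractionation factor between the clay and the water it formed from, and T is temperature in kelvin. Measuring the clay's oxygen and hydrogen isotope values and estimating those of the ancient water fixes α, and the equation can then be solved for T.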

(Photo: SMU)

Southern Methodist University

NEW TRANSPARENT INSULATING FILM COULD ENABLE ENERGY-EFFICIENT DISPLAYS

0 comentarios

Johns Hopkins materials scientists have found a new use for a chemical compound that has traditionally been viewed as an electrical conductor, a substance that allows electricity to flow through it. By orienting the compound in a different way, the researchers have turned it into a thin film insulator, which instead blocks the flow of electricity, but can induce large electric currents elsewhere. The material, called solution-deposited beta-alumina, could have important applications in transistor technology and in devices such as electronic books.

The discovery is described in the November issue of the journal Nature Materials and appears in an early online edition.

“This form of sodium beta-alumina has some very useful characteristics,” said Howard E. Katz, a professor of materials science and engineering who supervised the research team. “The material is produced in a liquid state, which means it can easily be deposited onto a surface in a precise pattern for the formation of printed circuits. But when it’s heated, it forms a solid, thin transparent film. In addition, it allows us to operate at low voltages, meaning it requires less power to induce useful current. That means its applications could operate with smaller batteries or be connected to a battery instead of a wall outlet.”

The transparency and thinness of the material (the hardened film is only on the order of 100 atoms thick) make it ideal for use in the increasingly popular e-book readers, which rely on see-through screens and portable power sources, Katz said. He added that possible transportation applications include instrument readouts that can be displayed in the windshield of an aircraft or a ground vehicle.
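
The link between thinness and low-voltage operation is capacitive: a transistor's gate dielectric induces channel charge Q = C·V, and the capacitance C grows as the insulating film gets thinner, so the same charge is reached at a lower voltage. A rough, illustrative estimate – the thickness and dielectric constant below are assumptions, not values from the paper:

# Parallel-plate estimate of the areal capacitance of a ~100-atom film.
EPS0 = 8.854e-12      # F/m, vacuum permittivity

thickness = 25e-9     # m; ~100 atoms at roughly 0.25 nm each, per the article
k = 10.0              # assumed relative dielectric constant of the film

c_areal = EPS0 * k / thickness            # F per m^2
print(f"areal capacitance ~ {c_areal / 1e4 * 1e9:.0f} nF per cm^2")  # ~350

# Areal capacitances of a few hundred nF/cm^2 are the regime in which
# transistors can switch at just a few volts.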

The emergence of sodium beta-alumina as an insulator was a surprising development, Katz said. The compound, known for decades, has traditionally been used to conduct electricity and for this reason has been considered as a possible battery component. The material allows charged particles to flow easily parallel to a two-dimensional plane formed within its distinct atomic crystalline arrangement. “But we found that current does not flow nearly as easily perpendicular to the planes, or in unoriented material,” Katz said. “The material acts as an insulator instead of a conductor. Our team was the first to exploit this discovery.”

The Johns Hopkins researchers developed a method of processing sodium beta-alumina in a way that makes use of this insulation behavior occurring in the form of a thin film. Working with the Johns Hopkins Technology Transfer staff, Katz’s team has filed for international patent protection for their discovery.

The lead author of the Nature Materials paper was Bhola N. Pal, who was a postdoctoral fellow in Katz’s laboratory. In addition to Katz, who is chair of the Department of Materials Science and Engineering in the university’s Whiting School of Engineering, the co-authors were Bal Mukund Dhar, a current doctoral student in the lab, and Kevin C. See, who recently completed his doctoral studies under Katz.

(Photo: Will Kirk, Homewoodphoto.jhu.edu)

Johns Hopkins University
