Monday, February 7, 2011

2010 TIED AS HOTTEST YEAR, SAY U.S. RESEARCHERS

Global surface temperatures in 2010 were tied with 2005 as the warmest on record as part of a continuing long-term trend, according to analyses released by two separate U.S. government scientific institutions.

Figures from the NASA Goddard Institute for Space Studies (GISS) indicate that the difference between the two years was less than 0.018 degrees F—a statistical dead heat, because it was less than the uncertainty built into the year-to-year comparisons. The next warmest years since recordkeeping began in 1880 (all tied) were 1998, 2002, 2003, 2006 and 2007. Global climate has warmed about a third of a degree Fahrenheit per decade since the late 1970s, and the last decade was the warmest on record, say the researchers. The full study appears in the journal Reviews of Geophysics.

A separate study released the same day by the National Oceanic and Atmospheric Administration (NOAA) agrees closely with the GISS analysis. Although it uses slightly different methods, it confirms the 2010-2005 tie, as well as the long-term trend. NOAA researchers said both years were 1.12 degrees F above the 20th-century average. The two studies cover temperatures both on land and at sea, but the NOAA study found that land temperatures taken alone were the hottest ever recorded, at 1.8 degrees F over the 20th-century average.

A great majority of earth scientists agree that the warming is being driven mainly by the release of human-generated gases into the atmosphere, including carbon dioxide. “If the warming trend continues, as expected, if greenhouse gases continue to increase, the 2010 record will not stand for long,” said James Hansen, director of GISS, which is an affiliate of Columbia University’s Earth Institute.

The GISS analysis is compiled from weather data from more than 1,000 meteorological stations around the world, satellite observations of sea-surface temperatures, and measurements made at polar research stations. A computer then calculates temperature anomalies—the difference between surface temperatures in a given month and the average temperature for the same period from 1951 to 1980, the baseline years for the analysis. GISS makes the analysis annually, and the resulting yearly figures have matched closely with those of NOAA and of a third group at the United Kingdom’s Met Office Hadley Centre (which has not yet released its 2010 figures).
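As a toy illustration of the anomaly arithmetic described above (the single station and all numbers are invented; the real analysis grids, weights, and combines thousands of records):

```python
# Toy illustration of the temperature-anomaly calculation described above.
# One invented station record; GISS's real analysis combines more than
# 1,000 stations plus satellite sea-surface and polar measurements.

baseline_1951_1980 = {"jan": -2.5, "jul": 18.0}  # 30-year monthly means, deg C
observed_2010      = {"jan": -1.0, "jul": 19.0}  # the year being analyzed

anomalies = {month: observed_2010[month] - baseline_1951_1980[month]
             for month in observed_2010}
print(anomalies)  # {'jan': 1.5, 'jul': 1.0} -> degrees C above the baseline
```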

The GISS researchers said the high overall average of 2010 was particularly noteworthy because the last half of the year was marked by a strong transition to cooler temperatures over vast areas of the tropical Pacific Ocean—the cool (La Niña) phase of the periodic natural El Niño/La Niña cycle. “Global temperature is rising as fast in the past decade as in the prior two decades, despite year-to-year fluctuations associated with the El Niño-La Niña,” the researchers reported in their paper.

In the contiguous United States, the year was not as exceptional as it was for the world as a whole, but some areas saw extreme weather. The East Coast had record-breaking winter cold and snow, then record-breaking summer heat waves; floods struck California and Tennessee; and an unusually high number of tornadoes hit the Midwest and northern Plains.

Abroad, unprecedented summer heat drove wildfires in Russia; in December some parts of northeastern Canada were a full 18 degrees warmer than normal, according to the GISS figures; and catastrophic floods hit widespread places, from Pakistan to Australia. Climatologists connected some regional events, including the U.S. East Coast snows, to natural cycles, not climate change. However, many scientists project that extreme weather of all kinds will increase with global warming. Lately, some have begun to consider whether unusual cold last year in some places, including Europe, was, paradoxically, driven by dramatic warming in the Arctic. They say that melting of sea ice may have shifted wind patterns, sending frigid polar air into the populated mid-latitudes during winter.

“One possibility is that the heat source due to open water in Hudson Bay affected Arctic wind patterns, with a seesaw pattern that has Arctic air downstream pouring into Europe,” Hansen said.

(Photo: EICU)

Columbia University’s Earth Institute

FINDINGS ON POLLUTION DAMAGE TO HUMAN AIRWAYS COULD YIELD NOVEL THERAPIES

Researchers from Duke University Medical Center have identified how nanoparticles from diesel exhaust damage lung airway cells, a finding which could lead to new therapies for people susceptible to airway disease.

The scientists also discovered that the severity of the injury depends on the genetic make-up of the affected individual.

"We gained insight into why some people can remain relatively healthy in polluted areas and why others don't," said lead author Wolfgang Liedtke, MD, PhD, assistant professor in the Duke Department of Medicine and an attending physician in the Duke Clinics for Pain and Palliative Care.

The work was published online in the journal Environmental Health Perspectives on Jan. 18.

Diesel exhaust particles, a major part of urban smog, consist of a carbon core coated with organic chemicals and metals. The Duke team showed that the particle core delivers these organic chemicals onto the brush-like projections called cilia, which clear mucus from the lining of the airways.

Contact with these chemicals then triggers a "signaling cascade," as the cells respond.

In people who have a single-letter difference in their DNA, an ion channel called TRPV4 signals more potently in response to the pollutants. Previous research showed that this gene variant makes humans more liable to develop chronic obstructive pulmonary disease (COPD), and the current study provides an explanation for this observation.

About 75 percent of people carry a version of the MMP-1 gene that leads to greater production of MMP-1, a lung-destructive molecule. This genetic make-up allows for a turbo-charged production of MMP-1, which damages airways and lungs at multiple levels, Liedtke said.

A more fortunate 25 percent of people escape this high level of MMP-1 production, which may explain why certain individuals can better withstand the effects of air pollution without grave airway damage.

The injurious molecule MMP-1 is known to enhance the development of certain devastating lung diseases, such as chronic obstructive pulmonary disease (COPD), a top-ten ailment in worldwide morbidity and mortality, according to the World Health Organization.

The tissue-destructive actions of MMP-1 can also lead to lung emphysema, a chronic reduction of the lung surface available for gas exchange, and to the spread of lung cancer cells, which migrate out of lung tissue that has become cancerous.

The new study also provides a direction for developing therapeutics for those who are genetically more susceptible to air pollution and airway damage, Liedtke said. "If we can find a way to stop the hyperactivation of MMP-1 in response to diesel-engine exhaust particles and reduce it to levels that the airways can manage, then we will be helping a large number of people worldwide," he said.

"It is attractive to envision inhaled TRPV4 inhibitor drugs, rather than swallowing a pill or taking an injection. I envision this as rather similar to inhaled drugs for allergic airway disease that are currently available."

Duke University Medical Center

LCD PROJECTOR USED TO CONTROL BRAINS & MUSCLES OF TINY ORGANISMS

Researchers are using inexpensive components from ordinary liquid crystal display (LCD) projectors to control the brains and muscles of tiny organisms, including freely moving worms.

Red, green and blue lights from a projector activate light-sensitive microbial proteins that are genetically engineered into the worms, allowing the researchers to switch neurons on and off like light bulbs and turn muscles on and off like engines.

Use of the LCD technology to control small animals advances the field of optogenetics -- a mix of optical and genetic techniques that has given researchers unparalleled control over brain circuits in laboratory animals. Until now, the technique required either placing an optical fiber into a larger animal's brain or illuminating an animal's entire body.

A paper published Jan. 16 in the advance online edition of the journal Nature Methods describes how the inexpensive illumination technology allows researchers to stimulate and silence specific neurons and muscles of freely moving worms, while precisely controlling the location, duration, frequency and intensity of the light.

"This illumination instrument significantly enhances our ability to control, alter, observe and investigate how neurons, muscles and circuits ultimately produce behavior in animals," said Hang Lu, an associate professor in the School of Chemical & Biomolecular Engineering at the Georgia Institute of Technology.

Lu and graduate students Jeffrey Stirman and Matthew Crane developed the tool with support from the National Institutes of Health and the Alfred P. Sloan Foundation.

The illumination system includes a modified off-the-shelf LCD projector, which is used to cast a multi-color pattern of light onto an animal. The independent red, green and blue channels allow researchers to activate excitable cells sensitive to specific colors, while simultaneously silencing others.

"Because the central component of the illumination system is a commercially available projector, the system's cost and complexity are dramatically reduced, which we hope will enable wider adoption of this tool by the research community," explained Lu.

By connecting the illumination system to a microscope and combining it with video tracking, the researchers are able to track and record the behavior of freely moving animals, while maintaining the lighting in the intended anatomical position. When the animal moves, changes to the light's location, intensity and color can be updated in less than 40 milliseconds.
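A heavily simplified sketch of that closed loop follows. Every function name and number here is a hypothetical placeholder for illustration, not the authors' actual software, which runs against a real camera, tracking stage and projector:

```python
# Toy sketch of the track-then-illuminate loop described above. The stub
# functions stand in for real camera, tracking, and projector interfaces.
import time

def grab_frame():
    """Stand-in for reading one video frame from the tracking microscope."""
    return {"head_xy": (120, 80)}  # pretend the worm's head is at this pixel

def map_to_projector(xy):
    """Stand-in for the calibrated camera-to-projector coordinate transform."""
    return (2 * xy[0], 2 * xy[1])

def update_pattern(pixels, color):
    """Stand-in for redrawing the multi-color pattern on the LCD projector."""
    print(f"illuminate {pixels} in {color}")

for _ in range(5):                               # a few cycles of the loop
    frame = grab_frame()                         # 1. acquire a frame
    target = map_to_projector(frame["head_xy"])  # 2. locate and transform
    update_pattern(target, "green")              # 3. redraw the stimulus
    time.sleep(0.04)                             # each cycle fits ~40 ms
```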

Once Lu and her team built the prototype system, they used it to explore the "touch" circuit of the worm Caenorhabditis elegans by exciting and inhibiting its mechano-sensory and locomotion neurons. Alexander Gottschalk, a professor at the Institute of Biochemistry of the Johann Wolfgang Goethe-University in Frankfurt, Germany, and his team provided the light-sensitive optogenetic reagents for the Georgia Tech experiments.

For their first experiment, the researchers illuminated the head of a worm at regular intervals while the animal moved forward. This produced a coiling effect in the head and caused the worm to crawl in a triangular pattern. In another experiment, the team scanned light along the bodies of worms from head to tail, which resulted in backward movement when neurons near the head were stimulated and forward movement when neurons near the tail were stimulated.

Additional experiments showed that the intensity of the light affected a worm's behavior and that several optogenetic reagents excited at different wavelengths could be combined in one experiment to understand circuit functions. The researchers were able to examine a large number of animals under a variety of conditions, demonstrating that the technique's results were both robust and repeatable.

"This instrument allowed us to control defined events in defined locations at defined times in an intact biological system, allowing us to dissect animal functional circuits with greater precision and nuance," added Lu.

While these proof-of-concept studies investigated the response of C. elegans to mechanical stimulation, the illumination system can also be used to evaluate responses to chemical, thermal and visual stimuli. Researchers can also use it to study a variety of neurons and muscles in other small animals, such as the zebrafish and fruit fly larvae.

"Experiments with this illumination system yield quantitative behavior data that cannot be obtained by manual touch assays, laser cell ablation, or genetic manipulation of neurotransmitters," said Lu.

(Photo: GIT)

Georgia Institute of Technology

LIKE HUMANS, AMOEBAE PACK A LUNCH BEFORE THEY TRAVEL

Some amoebae do what many people do. Before they travel, they pack a lunch. In results of a study reported in the journal Nature, evolutionary biologists Joan Strassmann and David Queller of Rice University show that the long-studied social amoeba Dictyostelium discoideum (commonly known as a slime mold) increases its odds of survival through a rudimentary form of agriculture.

Research by lead author Debra Brock, a graduate student at Rice, found that some amoebae sequester their food--particular strains of bacteria--for later use.

"We now know that primitively social slime molds have genetic variation in their ability to farm beneficial bacteria as a food source," says George Gilchrist, program director in the National Science Foundation's Division of Environmental Biology, which funded the research. "But the catch is that with the benefits of a portable food source, comes the cost of harboring harmful bacteria."

After these "farmer" amoebae aggregate into a slug, they migrate in search of nourishment--and form a fruiting body, or a stalk of dead amoebae topped by a sorus, a structure containing fertile spores. Then they release the bacteria-containing spores to the environment as feedstock for continued growth.

The findings run counter to the presumption that all "Dicty" eat everything in sight before they enter the social spore-forming stage.

Non-farmer amoebae do eat everything, but farmers were found to leave food uneaten, and their slugs don't travel as far.

Perhaps because they don't have to.

The advantages of going hungry now to ensure a good food supply later are clear, as farmers are able to thrive in environments in which non-farmers find little food.

The researchers found that about a third of wild-collected Dicty are farmers.

Instead of consuming all the bacteria they encounter, these amoebae eat less and incorporate bacteria into their migratory systems.

Brock showed that carrying bacteria is a genetic trait: she eliminated all living bacteria from four farmer clones and four non-farmer clones (the control group) by treating them with antibiotics.

All amoebae were grown on dead bacteria; tests confirmed that they were free of live bacteria.

When the eight clones were then fed live bacteria, the farmers all regained their abilities to seed bacteria colonies, while the non-farmers did not.

Dicty farmers are always farmers; non-farmers never learn.

Rice graduate student Tracy Douglas co-authored the paper with Brock, Queller and Strassmann. She confirmed that farmers and non-farmers belong to the same species and do not form a distinct evolved group.

Still, mysteries remain.

The researchers want to know what genetic differences separate farmers from non-farmers. They also wonder why farmer clones don't migrate as far as their counterparts.

It might be a consequence of bacterial interference, they say, or an evolved response, since farmers carry the seeds of their own food supply and don't need to go as far.

Also, some seemingly useless or even harmful bacteria are not consumed as food, but may serve an as-yet-undetermined function, Brock says.

That has implications for treating disease, says Strassmann: it may, for instance, provide clues to the way tuberculosis bacteria invade cells, infecting the host while resisting attempts to break them down.

The results demonstrate the importance of working in natural environments with wild organisms whose complex ties to their living environment have not been broken.

(Photo: Scott Solomon)

National Science Foundation

NEW REACTOR PAVES THE WAY FOR EFFICIENTLY PRODUCING FUEL FROM SUNLIGHT

Using a common metal most famously found in self-cleaning ovens, Sossina Haile hopes to change our energy future. The metal is cerium oxide—or ceria—and it is the centerpiece of a promising new technology developed by Haile and her colleagues that concentrates solar energy and uses it to efficiently convert carbon dioxide and water into fuels.

Solar energy has long been touted as the solution to our energy woes, but while it is plentiful and free, it can't be bottled up and transported from sunny locations to the drearier—but more energy-hungry—parts of the world. The process developed by Haile—a professor of materials science and chemical engineering at the California Institute of Technology (Caltech)—and her colleagues could make that possible.

The researchers designed and built a two-foot-tall prototype reactor that has a quartz window and a cavity that absorbs concentrated sunlight. The concentrator works "like the magnifying glass you used as a kid" to focus the sun's rays, says Haile.

At the heart of the reactor is a cylindrical lining of ceria. Ceria—a metal oxide that is commonly embedded in the walls of self-cleaning ovens, where it catalyzes reactions that decompose food and other stuck-on gunk—propels the solar-driven reactions. The reactor takes advantage of ceria's ability to "exhale" oxygen from its crystalline framework at very high temperatures and then "inhale" oxygen back in at lower temperatures.

"What is special about the material is that it doesn't release all of the oxygen. That helps to leave the framework of the material intact as oxygen leaves," Haile explains. "When we cool it back down, the material's thermodynamically preferred state is to pull oxygen back into the structure."

Specifically, the inhaled oxygen is stripped off of carbon dioxide (CO2) and/or water (H2O) gas molecules that are pumped into the reactor, producing carbon monoxide (CO) and/or hydrogen gas (H2). H2 can be used to fuel hydrogen fuel cells; CO, combined with H2, can be used to create synthetic gas, or "syngas," which is the precursor to liquid hydrocarbon fuels. Adding other catalysts to the gas mixture, meanwhile, produces methane. And once the ceria is oxygenated to full capacity, it can be heated back up again, and the cycle can begin anew.
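In outline, this is the standard two-step ceria redox cycle. Written with δ for the degree of oxygen loss (textbook notation, not symbols quoted from the Caltech paper), the "exhale" and "inhale" steps are:

```latex
% High-temperature step (~1,600 C, i.e. nearly 3,000 F): sunlight drives O2 release.
\mathrm{CeO_2} \;\longrightarrow\; \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2}
% Lower-temperature step: the oxygen-hungry ceria re-oxidizes by splitting the feed gas.
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{H_2}
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{CO}
```

The net effect of one full cycle is simply splitting H2O or CO2 with solar heat; the ceria itself is returned to its starting state and reused.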

For all of this to work, the temperatures in the reactor have to be very high—nearly 3,000 degrees Fahrenheit. At Caltech, Haile and her students achieved such temperatures using electrical furnaces. But for a real-world test, she says, "we needed to use photons, so we went to Switzerland." At the Paul Scherrer Institute's High-Flux Solar Simulator, the researchers and their collaborators—led by Aldo Steinfeld of the institute's Solar Technology Laboratory—installed the reactor on a large solar simulator capable of delivering the heat of 1,500 suns.

In experiments conducted last spring, Haile and her colleagues attained the highest rates of CO2 dissociation ever achieved, "by orders of magnitude," she says. The efficiency of the reactor was uncommonly high for CO2 splitting, in part, she says, "because we're using the whole solar spectrum, and not just particular wavelengths." And unlike in electrolysis, the rate is not limited by the low solubility of CO2 in water. Furthermore, Haile says, the high operating temperatures of the reactor mean that fast catalysis is possible, without the need for expensive and rare metal catalysts (cerium, in fact, is the most common of the rare earth metals—about as abundant as copper).

In the short term, Haile and her colleagues plan to tinker with the ceria formulation so that the reaction temperature can be lowered, and to re-engineer the reactor to improve its efficiency. Currently, the system harnesses less than 1% of the solar energy it receives, with most of the energy lost as heat through the reactor's walls or by re-radiation through the quartz window. "When we designed the reactor, we didn’t do much to control these losses," says Haile. Thermodynamic modeling by lead author and former Caltech graduate student William Chueh suggests that efficiencies of 15% or higher are possible.

Ultimately, Haile says, the process could be adopted in large-scale energy plants, allowing solar-derived power to be reliably available during the day and night. The CO2 emitted by vehicles could be collected and converted to fuel, "but that is difficult," she says. A more realistic scenario might be to take the CO2 emissions from coal-powered electric plants and convert them to transportation fuels. "You'd effectively be using the carbon twice," Haile explains. Alternatively, she says, the reactor could be used in a "zero CO2 emissions" cycle: H2O and CO2 would be converted to methane, which would fuel electricity-producing power plants that generate more CO2 and H2O, keeping the process going.

(Photo: Caltech)

California Institute of Technology (Caltech)

ATMOSPHERE CLEANS ITSELF MORE EFFICIENTLY THAN PREVIOUSLY THOUGHT

The earth's atmosphere is less sensitive to pollutants than some researchers previously thought. An international team of researchers, including scientists from the Max Planck Institute for Chemistry in Mainz, has found that the concentration of hydroxyl radicals in the atmosphere has changed little in recent years. Hydroxyl radicals clean the air by breaking down organic substances such as climate-damaging methane. Because this self-cleaning capacity has scarcely varied over the past few years, the researchers believe that it is only marginally affected by environmental changes. These findings refute the view held by other scientists who believed that the atmosphere is very sensitive to air pollutants.

We now have a clearer picture of how the atmosphere cleanses itself of air pollution and harmful gases. With their recent study, an international team headed by scientists from the National Oceanic and Atmospheric Administration (NOAA) in the U.S. and the Max Planck Institute for Chemistry in Germany has made an important contribution to our understanding of this process. According to the study, the global amount of hydroxyl radicals in the air varies by just a few percent from year to year. Some researchers had believed that it varies by up to 25%.

Hydroxyl radicals are a key component of the self-cleaning capacity of the atmosphere, as they rid the air of many dangerous pollutants. They oxidize hydrocarbons, including the greenhouse gas methane and emissions from industry and transport, and make them water-soluble so that they can be removed from the atmosphere by rainfall.

However, for many years it was unclear how well the atmosphere could actually oxidize pollutants. Some studies gave rise to alarming conclusions, suggesting that this cleaning capacity was very sensitive to environmental changes. "We now know that this vital feature of our atmosphere, the ability to cleanse itself of pollutants, is in fact very stable," says Steve Montzka, lead author of the study and researcher at the National Oceanic and Atmospheric Administration (NOAA).

"Although we predicted that the self-cleaning capacity is well buffered, we were only able to prove it through years of systematic measurements and advanced modelling techniques," says Jos Lelieveld, director of the Max Planck Institute for Chemistry and initiator of the study. "Our findings will make predictions about the climate and global air quality more reliable, because we can now better describe the composition of the atmosphere with the aid of computer models."

A hydroxyl radical consists of one hydrogen atom and one oxygen atom. These molecules usually form as the result of a reaction of sunlight with water molecules and ozone. However hydroxyl is also recycled by a number of chemical reactions, with the result that its concentration remains stable. In the atmosphere, these radicals are present in extremely low concentrations. They are formed quickly but also degraded on average within one second because they react quickly with other molecules. This makes it very difficult to determine their concentration accurately.
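The production step alluded to here is the standard photochemical route (textbook atmospheric chemistry, not equations quoted from the study): ultraviolet sunlight splits ozone, and the excited oxygen atom then reacts with water vapor.

```latex
% UV photolysis of ozone produces an excited oxygen atom O(1D) ...
\mathrm{O_3} + h\nu \;(\lambda < 320\,\mathrm{nm}) \;\longrightarrow\; \mathrm{O(^1D)} + \mathrm{O_2}
% ... which reacts with water vapor to yield two hydroxyl radicals.
\mathrm{O(^1D)} + \mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{OH}
```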

To measure the amount of hydroxyl, scientists have a trick up their sleeve. They analyze a molecule in the atmosphere that reacts with hydroxyl radicals but is much more stable and easier to measure: methyl chloroform. This chemical has long been used as a solvent in paints, adhesives and cleaning products. It is degraded in the atmosphere primarily by reaction with hydroxyl, so the amount of methyl chloroform in the air correlates with the amount of hydroxyl.

Only reduced pollutant emissions permanently protect the climate

However, in the 1980s and 1990s this method produced widely varying estimates that gave rise to serious concerns. High fluctuations in free radical levels would imply that the self-cleaning ability of the atmosphere is very sensitive to natural or man-made atmospheric changes. In fact, the reason for the different results obtained for hydroxyl concentration was the prolonged, difficult-to-estimate emissions of methyl chloroform.

The production of methyl chloroform came to an end in the mid-1990s as a result of the Montreal Protocol on the protection of the ozone layer. As researchers no longer had to take new emissions into account, they could assume that the existing methyl chloroform was being continually degraded. Thus, they could determine the amount of hydroxyl in the atmosphere much more accurately.
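The logic of the proxy can be written in one line. With new emissions gone, globally averaged methyl chloroform (MCF) is removed almost entirely by reaction with hydroxyl, so its concentration follows a first-order decay law (a standard kinetic relation, shown here for illustration):

```latex
\frac{d[\mathrm{MCF}]}{dt} = -k\,[\mathrm{OH}]\,[\mathrm{MCF}]
\quad\Longrightarrow\quad
[\mathrm{MCF}](t) = [\mathrm{MCF}](0)\,e^{-k\,[\mathrm{OH}]\,t}
```

Because the rate constant k is known from laboratory measurements, the observed decay rate of methyl chloroform yields the mean hydroxyl concentration directly, and a steady decay from year to year implies steady hydroxyl levels.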

Between 1998 and 2008 the team measured the concentration of methyl chloroform once a week at nine different locations, mostly remote islands and coastal stations where local air pollution is negligible. These measurements were part of NOAA's collaborative air sampling program. Using this data and with the help of modelling systems, researchers at the Max Planck Institute for Chemistry and the University of Wageningen in the Netherlands were able to calculate the global amount of methyl chloroform.

The study brought to an end the prolonged and heated scientific debate over the self-cleaning capacity of the atmosphere. "It's satisfying to finally resolve scientific questions," comments Lelieveld, with reference to the publication of the results in the journal Science. "But even if we now have one less concern regarding our environment, we should still do everything we can to minimize emissions of greenhouse gases and pollutants. This is the only way we can protect our atmosphere and prevent further climate change."

(Photo: © James Elkins, NOAA)

Max Planck Institute

RISING INDOOR WINTER TEMPERATURES LINKED TO OBESITY?

Increases in winter indoor temperatures in the United Kingdom, United States and other developed countries may be contributing to rises in obesity in those populations, according to UCL research published recently.

The review paper, published in the journal Obesity Reviews, examines evidence of a potential causal link between reduced exposure to seasonal cold and increases in obesity in the UK and US.

Reduced exposure to cold may have two effects on the ability to maintain a healthy weight: minimising the need for energy expenditure to stay warm and reducing the body's capacity to produce heat. The review summarises the evidence for increases in winter indoor temperatures in the UK and US and also examines the biological plausibility of the idea that exposure to seasonal cold could help to regulate energy balance and body weight on a population level.

The paper brings together existing evidence showing that winter indoor temperatures have increased over the last few decades and that there has also been an increase in homogenisation of temperatures in domestic settings. Increasing expectations of thermal comfort mean that seasonal cold exposure is decreasing and we are spending more time exposed to milder temperatures.

The authors also discuss the role of brown adipose tissue (brown fat) in human heat production. Brown fat differs from white fat in that it has the capacity to burn energy to create heat, and its development in the body is thought to be triggered by exposure to cold temperatures. Recent studies suggest that increased time spent in warm conditions may lead to a loss of brown fat, and therefore reduced capacity to burn energy.

Lead author Dr Fiona Johnson, UCL Epidemiology & Public Health, said: "Increased time spent indoors, widespread access to central heating and air conditioning, and increased expectations of thermal comfort all contribute to restricting the range of temperatures we experience in daily life and reduce the time our bodies spend under mild thermal stress - meaning we're burning less energy. This could have an impact on energy balance and ultimately have an impact on body weight and obesity.

"Research into the environmental drivers behind obesity, rather then the genetic ones, has tended to focus on diet and exercise – which are undoubtedly the major contributors. However, it is possible that other environmental factors, such as winter indoor temperatures, may also have a contributing role. This research therefore raises the possibility for new public health strategies to address the obesity epidemic."

Co-author Marcella Ucci, of the UCL Bartlett School of Graduate Studies, said: "The findings suggest that lower winter temperatures in buildings might contribute to tackling obesity as well as reducing carbon emissions."

(Photo: UCL)

UCL

NO LEFTOVERS FOR T. REX

T. rex hunted like a lion, rather than regularly scavenging like a hyena, reveals new research published in the journal Proceedings of the Royal Society B.

The findings end a long-running debate about the hunting behaviour of this awesome predator.

Scientists from the Zoological Society of London (ZSL) used an ecological model based on predator relationships in the Serengeti to determine whether scavenging would have been an effective feeding strategy for T. rex.

Previous attempts to answer the question about T. rex's hunting behaviour have focused on its morphology. The flaw in this approach is that two species can possess similar physical features and still have very different hunting strategies, as vultures and eagles do.

Lead author Dr Chris Carbone says, "By understanding the ecological forces at work, we have been able to show that scavenging was not a viable option for T. rex as it was out-competed by smaller, more abundant predatory dinosaurs.

"These smaller species would have discovered carcasses more quickly, making the most of 'first-come-first-served' opportunities."

The authors conclude that, like polar bears and lions, an individual T. rex would have roamed over large distances to catch its prey, potentially covering areas several times the size of Greater London.

This research now opens the door to studying the behaviour of T. rex as a hunter.

(Photo: ZSL)

Zoological Society of London

EYEWITNESSES -- NOT AS RELIABLE AS ONE MIGHT BELIEVE

Those who have witnessed a crime would do best not to tell anyone about it. Contrary to what one might believe, a person's memory of an event is not improved by retelling the story. Instead, the risk of an incorrect account increases the more the story is retold and discussed.

"The most accurate witness statements come from people who have seen a crime and then write down what happened before they recount it or discuss it with anyone", says Farhan Sarwar.

However, it is quite unusual for witnesses to do this. On the contrary, many want to discuss what they have seen immediately. One example of how wrong witnesses can be comes from the eyewitness descriptions of Swedish Foreign Minister Anna Lindh's murderer. Those who were there and saw the murderer agreed that he was wearing military clothing. When the pictures from the department store's cameras were examined, it could be seen that he was wearing normal sports clothes.

Farhan Sarwar's studies show that eyewitnesses are particularly bad at remembering details, such as what the perpetrator was wearing or what weapon was used. On the other hand, they are better at recalling the key events.

A witness who has told his story many times may become increasingly sure of the details of the crime. This could have devastating consequences for a criminal investigation, as the police place great importance on how confident the witness is, says Farhan Sarwar.

But if eyewitness accounts are so flawed, should they be used at all?

Yes, says Farhan Sarwar. Criminals are getting increasingly canny. They rarely leave any clues. Therefore, eyewitness accounts are still the most important thing the police have to go on. However, the police must be aware of how much importance they attribute to them.

Alongside the work on his thesis, Farhan Sarwar has developed a method to measure the likelihood of eyewitness accounts being correct, together with colleague Sverker Sikström. He has written a computer program that uses algorithms to give a reliable percentage figure for how likely it is that an eyewitness really has remembered correctly.

Testing of the method is not yet complete, but once it is ready the program will be able to process a large number of recorded witness statements quickly. He already sees a number of areas for use: courts, police questioning, security services, insurance companies, etc.

Lund University

AND THEN THERE WAS LIGHT

In the beginning, there was no light. After the Big Bang created the universe roughly 13.7 billion years ago, the universe remained enshrouded in darkness. Based on observations of the radiation left over from the Big Bang, astronomers have theorized that several hundred million years after this event, gravity caused hydrogen and helium particles to condense into clouds. The energy from this activity eventually ignited those clouds, setting in motion a chain of events that led to the birth of the first stars. Although the transition between the so-called cosmic dark ages and the birth of stars and galaxies may explain the origin and evolution of many celestial objects, astronomers know very little about this period.

Recently, two astronomers conducted an experiment to try to learn more about this transitional period, which is known as the Epoch of Reionization, or EOR. Because identifying any light from the earliest galaxies is nearly impossible, Alan Rogers, a research affiliate at MIT’s Haystack Observatory, and Judd Bowman, an assistant professor at Arizona State University, instead focused their efforts on detecting radio waves emitted by hydrogen that existed between the first galaxies. Some of these radio waves are just reaching us today, and astronomers have theorized that certain characteristics of the waves could hold clues about the EOR.

As the first stars started to form during the EOR, their ultraviolet radiation (light) excited nearby hydrogen atoms, knocking off their electrons and giving them a positive electrical charge. This process, known as ionization, is important to cosmologists because it marks a pivotal moment in the transition between the early universe, which contained only hydrogen and helium gas, and today’s universe, which is filled with diverse galaxies, planets and black holes. Figuring out exactly when — and for how long — this ionization occurred is an important first step for confirming or modifying current models of the evolution of the universe.

To understand more about this period, the researchers focused their study on the frequency of radiation emitted by non-ionized, or neutral, hydrogen. Specifically, they looked to see how the signal changed over time, which would indicate how long it took for the non-ionized hydrogen to become ionized as a result of the birth of stars and galaxies. As the researchers reported in a paper published last month in Nature, it took at least 5 million years for the non-ionized hydrogen to become ionized. It is a good bet, then, that the first stars and galaxies took at least that long to develop into the stars and galaxies we recognize today.
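For context on what "radio waves emitted by hydrogen" means in practice: neutral hydrogen radiates at a rest frequency of about 1420 MHz (the 21-centimeter hyperfine line), and cosmic expansion stretches that signal, so light emitted at redshift z arrives at a lower frequency (the standard redshift relation; the worked number is illustrative, not a figure from the paper):

```latex
\nu_{\mathrm{obs}} = \frac{1420.4\ \mathrm{MHz}}{1+z}
\qquad\text{e.g.}\qquad
z = 10 \;\Rightarrow\; \nu_{\mathrm{obs}} \approx 129\ \mathrm{MHz}
```

Tracking how this signal varies across frequency is therefore equivalent to tracking how the neutral hydrogen fraction changed with cosmic time.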

The finding isn’t surprising to Harvard astronomer Avi Loeb, who says that many models predict that the EOR lasted for several hundred million years. Even so, the study is significant because it provides the first observational results about the EOR — a research area that Loeb calls the “major frontier” in astronomy over the next decade. By showing that radio observations can probe ancient radio waves, Rogers and Bowman “have basically opened the window for using this simple technique” in parallel with more sophisticated instruments, he says. To refine their estimate, Loeb suggests that the researchers improve the calibration of their antenna in order to remove any interference produced by the instrument itself.

Rogers and Bowman hope to deploy a system with improved calibration later this month. They are also involved in developing a large radio telescope that will attempt to make much more sophisticated measurements from the EOR. Known as the Murchison Widefield Array, the telescope consists of 512 antenna “tiles” that will try to discover low-frequency radio phenomena that may reveal details about how the galaxies formed and evolved.

(Photo: Judd Bowman/Arizona State University)

Massachusetts Institute of Technology

RESEARCHERS SHOW HOW ONE GENE BECOMES TWO (WITH DIFFERENT FUNCTIONS)

Researchers report that they are the first to show in molecular detail how one gene evolved two competing functions that eventually split up – via gene duplication – to pursue their separate destinies.

The study, in the Proceedings of the National Academy of Sciences, validates a decades-old hypothesis about a key mechanism of evolution. The study also confirms the ancestry of a family of “antifreeze proteins” that helps the Antarctic eelpout survive in the frigid waters of the Southern Ocean.

“I’m always asking the question of where these antifreeze proteins come from,” said University of Illinois animal biology professor Christina Cheng, who has spent three decades studying the genetic adaptations that enable Antarctic fish to survive in one of the coldest zones on the planet. “The cell usually does not create new proteins from scratch.”

Scientists have known since 2001 that the sequence of genes coding for a family of antifreeze proteins (known as AFP III) was very similar to part of a sequence of a gene that codes for a cellular enzyme in humans. Since Antarctic fish also produce this enzyme, sialic acid synthase (SAS), it was thought that the genes for these antifreeze proteins had somehow evolved from a duplicate copy of the SAS gene. But no study had shown how this happened with solid experimental data.

Cheng and her colleagues at the Chinese Academy of Sciences began by comparing the sequences of the SAS and AFP III genes.

There are two SAS genes in fish: SAS-A and SAS-B. The researchers confirmed that the AFP III genes contain sequences that are most similar to those in a region of SAS-B.

They also found a sequence in the SAS-B gene that, when translated into a new protein, could – with a few modifications – direct the cell to secrete the protein. This slightly modified signal sequence also appears in the AFP III genes. Unlike the SAS enzymes, which remain inside the cell, the AFP III proteins are secreted into the blood or extracellular fluid, where they can more easily disrupt the growth of invading ice crystals.

“This basically demonstrates how something that ‘lives’ inside the cell can acquire this new functionality and get moved out into the bloodstream to do something else,” Cheng said.

Further analysis revealed that the SAS proteins function as enzymes but also have modest ice-binding capabilities. This finding supports a decades-old hypothesis that states that when a single gene begins to develop more than one function, duplication of that gene could result in the divergent evolution of the original gene and its duplicate.

The new finding also supports the proposed mechanism, called “escape from adaptive conflict,” by which this can occur. According to this idea, if a gene has more than one function, mutations or other changes to the gene through natural selection that enhance one function may undermine its other function.

“The original enzyme function and the emerging ice-binding function of the ancestral SAS molecule might conflict with each other,” Cheng said. When the SAS-B gene became duplicated as a result of a copying error or some other random event in the cell, she said, then each of the duplicate genes was freed from the conflict and “could go on its own evolutionary path.”

“This is the first clear demonstration – with strong supporting molecular and functional evidence – of escape from adaptive conflict as the underlying process of gene duplication and the creation of a completely new function in one of the daughter copies,” Cheng said. “This has not been documented before in the field of molecular evolution.”

Cheng said that even before the gene for the secreted antifreeze protein was formed, the original SAS protein appears to have had both the enzymatic and ice-binding functions. This suggests that somehow the SAS protein (which is not secreted) acted within the cell to disrupt the growth of ice.

This could have occurred “in the early developmental stages of the fish,” Cheng said, since the eggs are spawned into a cold environment and would benefit from even the modest antifreeze capabilities of the SAS protein.

Later, after the SAS gene was duplicated and the AFP gene went on its own evolutionary path, Cheng said, the antifreeze protein appears to have evolved into a secreted protein, allowing it to disrupt ice formation in the bloodstream and extracellular fluid, where it would be of most benefit to the adult fish.

(Photo: Christina Cheng)

University of Illinois

BETTER THAN THE HUMAN EYE

Researchers from Northwestern University and the University of Illinois at Urbana-Champaign are the first to develop a curvilinear camera much like the human eye, with one significant feature the eye lacks: zoom capability.

The “eyeball camera” has a 3.5x optical zoom, takes sharp images, is inexpensive to make and is only the size of a nickel. (A higher zoom is possible with the technology.)

While the camera won’t be appearing at Best Buy any time soon, the tunable camera -- once optimized -- should be useful in many applications, including night-vision surveillance, robotic vision, endoscopic imaging and consumer electronics.

“We were inspired by the human eye, but we wanted to go beyond the human eye,” said Yonggang Huang, Joseph Cummings Professor of Civil and Environmental Engineering and Mechanical Engineering at Northwestern’s McCormick School of Engineering and Applied Science. “Our goal was to develop something simple that can zoom and capture good images, and we’ve achieved that.”

The tiny camera combines the best of both the human eye and an expensive single-lens reflex (SLR) camera with a zoom lens. It has the simple lens of the human eye, allowing the device to be small, and the zoom capability of the SLR camera without the bulk and weight of a complex lens. The key is that both the simple lens and photodetectors are on flexible substrates, and a hydraulic system can change the shape of the substrates appropriately, enabling a variable zoom.

The research will be published by the Proceedings of the National Academy of Sciences (PNAS).

Huang, co-corresponding author of the PNAS paper, led the theory and design work at Northwestern. His colleague John Rogers, the Lee J. Flory Founder Chair in Engineering and professor of materials science and engineering at the University of Illinois at Urbana-Champaign, led the design, experimental and fabrication work. Rogers is a co-corresponding author of the paper.

Earlier eyeball camera designs are incompatible with variable zoom because these cameras have rigid detectors. The detector must change shape as the in-focus image changes shape with magnification. Huang and Rogers and their team use an array of interconnected and flexible silicon photodetectors on a thin, elastic membrane, which can easily change shape. This flexibility opens up the field of possible uses for such a system. (The array builds on their work in stretchable electronics.)

The camera system also has an integrated lens constructed by putting a thin, elastic membrane on a water chamber, with a clear glass window underneath.

Initially both detector and lens are flat. Beneath both the membranes of the detector and the simple lens are chambers filled with water. By extracting water from the detector’s chamber, the detector surface becomes a concave hemisphere. (Injecting water back returns the detector to a flat surface.) Injecting water into the chamber of the lens makes the thin membrane become a convex hemisphere.

To achieve an in-focus and magnified image, the researchers actuate the hydraulics to change the curvatures of the lens and detector in a coordinated manner. The shape of the detector must match the varying curvature of the image surface to accommodate continuously adjustable zoom, and this is easily done with this new hemispherical eye camera.
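As a rough sense of the optics involved: if the fluid-filled lens is treated as a thin plano-convex water lens whose curved surface has radius of curvature R, the textbook thin-lens relation (an approximation for illustration, not a formula from the paper) gives:

```latex
f \approx \frac{R}{n - 1}, \qquad n_{\mathrm{water}} \approx 1.33 \;\Rightarrow\; f \approx 3R
```

Pumping water in or out changes R, and hence the focal length; the detector's own hydraulic chamber is adjusted in concert so that its curvature keeps following the in-focus image surface.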

(Photo: Northwestern University)

Northwestern University

INCLINED ORBITS PREVAIL IN EXOPLANETARY SYSTEMS

A research team led by astronomers from the University of Tokyo and the National Astronomical Observatory of Japan (NAOJ) has discovered that inclined orbits may be typical rather than rare for exoplanetary systems -- those outside of our solar system.

Their measurements of the angles between the axes of the star's rotation (stellar rotational axis) and the planet's orbit (planetary orbital axis) of exoplanets HAT-P-11 b and XO-4 b demonstrate that these exoplanets' orbits are highly tilted. This is the first time that scientists have measured the angle for a small planet like HAT-P-11 b. The new findings provide important observational indicators for testing different theoretical models of how the orbits of planetary systems have evolved.

Since the discovery of the first exoplanet in 1995, scientists have identified more than 500 exoplanets, planets outside of our solar system, nearly all of which are giant planets. Most of these giant exoplanets closely orbit their host stars, unlike our solar system's giant planets, like Jupiter, that orbit the Sun from a distance. Accepted theories propose that these giant planets originally formed from abundant planet-forming materials far from their host stars and then migrated to their current close locations. Different migration processes have been suggested to explain close-in giant exoplanets.

Disk-planet interaction models of migration focus on interactions between the planet and its protoplanetary disk, the disk from which it originally formed. Sometimes these interactions between the protoplanetary disk and the forming planet result in forces that make the planet fall toward the central star. This model predicts that the spin axis of the star and the orbital axis of the planet will be in alignment with each other.

Planet-planet interaction models of migration have focused on mutual scatterings among giant planets. Migration can occur from planet scattering, when multiple planets scatter during the creation of two or more giant planets within the protoplanetary disk. While some of the planets scatter from the system, the innermost one may establish a final orbit very close to the central star. Another planet-planet interaction scenario, Kozai migration, postulates that the long-term gravitational interaction between an inner giant planet and another celestial object such as a companion star or an outer giant planet over time may alter the planet's orbit, moving an inner planet closer to the central star. Planet-planet migration interactions, including planet-planet scattering and Kozai migration, could produce an inclined orbit between the planet and the stellar axis.

Overall, the inclination of the orbital axes of close-in planets relative to the host stars' spin axes emerges as a very important observational basis for supporting or refuting migration models upon which theories of orbital evolution center. A research group led by astronomers from the University of Tokyo and NAOJ concentrated their observations with the Subaru Telescope on investigating these inclinations for two systems known to have planets: HAT-P-11 and XO-4. The group measured the Rossiter-McLaughlin (hereafter, RM) effect of the systems and found evidence that their orbital axes incline relative to the spin axes of their host stars.

The RM effect refers to apparent irregularities in the radial velocity or speed of a celestial object in the observer's line of sight during planetary transits. Unlike the spectral lines that are generally symmetrical in measures of radial velocity, those with the RM effect deviate into an asymmetrical pattern. Such apparent variation in radial velocity during a transit reveals the sky-projected angle between the stellar spin axis and planetary orbital axis. The Subaru Telescope has participated in previous discoveries of the RM effect, which scientists have investigated for approximately thirty-five exoplanetary systems thus far.

In January 2010, a research team led by the current team's astronomers from the University of Tokyo and the National Astronomical Observatory of Japan used the Subaru Telescope to observe the planetary system XO-4, which lies 960 light years away from Earth in the constellation Lynx. The system's planet is about 1.3 times as massive as Jupiter and completes a circular orbit every 4.13 days. Their detection of the RM effect showed that the orbital axis of the planet XO-4 b tilts relative to the spin axis of the host star. Only the Subaru Telescope has measured the RM effect for this system so far.

In May and July 2010, the current research team conducted targeted observations of the HAT-P-11 exoplanetary system, which lies 130 light years away from the Earth toward the constellation Cygnus. The Neptune-sized planet HAT-P-11 b circles its host star in a non-circular (eccentric) orbit every 4.89 days and is among the smallest exoplanets ever discovered. Until this research, scientists had only detected the RM effect for giant planets. Detecting the RM effect for smaller planets is challenging because the signal strength scales with the size of the transiting planet; the smaller the planet, the fainter the signal.
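The scaling behind that statement is commonly written as follows (a standard approximation from the transit literature, not a formula quoted from this study):

```latex
\Delta V_{\mathrm{RM}} \;\sim\; \left(\frac{R_p}{R_\star}\right)^{2} v \sin i_\star
```

Here R_p and R_⋆ are the planetary and stellar radii and v sin i_⋆ is the star's projected rotation speed. Because the anomaly scales with the square of the radius ratio, a Neptune-sized planet like HAT-P-11 b, with roughly a third of Jupiter's radius, produces a signal nearly an order of magnitude weaker than a comparable hot Jupiter around the same star.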

The team took advantage of the enormous light-collecting power of the Subaru Telescope's 8.2-m mirror as well as the precision of its High Dispersion Spectrograph. Their observations not only resulted in the first detection of the RM effect for a smaller Neptune-sized exoplanet but also provided evidence that the orbital axis of the planet inclines to the stellar spin axis by approximately 103 degrees in the sky. A research group in the U.S. used the Keck Telescope and made independent observations of the RM effect of the same system in May and August 2010; their results were similar to those from the University of Tokyo/NAOJ team's May and July 2010 observations.

The current team's observations of the RM effect for the planetary systems HAT-P-11 and XO-4 have shown that both have planetary orbits highly tilted relative to the spin axes of their host stars. The latest observational results about these systems, including those obtained independently of the findings reported here, suggest that such highly inclined planetary orbits may be common in the universe. The planet-planet scenario of migration, whether caused by planet-planet scattering or Kozai migration, rather than the planet-disk scenario, could account for their migration to their present locations.

However, measurements of the RM effect for individual systems cannot decisively discriminate between the migration scenarios. Statistical analysis can help scientists determine which, if any, process of migration is responsible for the highly inclined orbits of giant planets. Since different migration models predict different distributions of the angle between the stellar axis and planetary orbit, developing a large sample of the RM effect enables scientists to support the most plausible migration process. Inclusion of the measurements of the RM effect for such a small-sized planet as HAT-P-11 b in the sample will play an important role in discussions of planetary migration scenarios.

Many research groups are planning to make observations of the RM effect with telescopes around the world. The current team and the Subaru Telescope will play an integral role in investigations to come. Continuous observations of transiting exoplanetary systems will contribute to an understanding of the formation and migration history of planetary systems in the near future.

(Photo: Subaru Telescope)

National Astronomical Observatory of Japan

SMALL MOLECULES MAY PREVENT EBOLA INFECTION

Ebola, a virus that causes deadly hemorrhagic fever in humans, has no known cure or vaccine. But a new study by University of Illinois at Chicago scientists has uncovered a family of small molecules that appear to bind to the virus's outer protein coat and may inhibit its entry into human cells.

The results are to be published in the Journal of Medicinal Chemistry and are now online.

Previous studies have shown that small molecules can interfere with the Ebola infection process, says Duncan Wardrop, associate professor of chemistry at UIC and corresponding author of the new study. But almost all of these compounds "appear to exert their effects by altering the cells' response to the virus once it's entered the cell -- by which time it's too late," he said.

The new findings demonstrate that it is possible for a small molecule to bind to the virus before it has a chance to enter the cell and thereby prevent infection, he said.

Wardrop collaborated with UIC virologist Lijun Rong, who created a screening system that uses a chimeric HIV-Ebola virus bearing the protein coat of the Ebola virus. The chimera behaves like Ebola when entering cells but is not life-threatening, making it safe for scientists to work with.

After screening more than 230 candidate compounds, Wardrop and Rong found two molecules that inhibited cell entry, but only one that demonstrated specificity for the Ebola virus -- plus a bonus.

"We found that our lead compound also inhibits Marburg," Wardrop said, referring to a related virus that, along with Ebola, is one of the deadliest pathogens known. "That was a nice surprise. There's growing evidence the two viruses have the same cell-entry mechanism, and our observations appear to point to this conclusion."

In an effort to find even more potent anti-Ebola agents, Wardrop and graduate student Maria Yermolina synthesized a series of derivatives of the lead molecule -- a member of a family of compounds called isoxazoles -- and found several that displayed increased activity against Ebola infection. Exactly how and where these small molecules bind to the virus's protein coat is now being determined through nuclear magnetic resonance spectroscopy, done by Michael Caffrey, associate professor of biochemistry and molecular genetics.

While it's too early to predict whether the findings will lead to a new treatment for Ebola or Marburg infections, Wardrop said the positive results so far raise hope. The next step would be to see if small-molecule treatments prove effective in animal models.

The UIC scientists also hope their findings will provide further insight into mechanisms the Ebola and Marburg viruses use to enter human cells.

"This knowledge may spur development of new anti-viral agents," Wardrop said.

"From a wider perspective, we're searching for compounds to use as probes to study biological processes. Small molecules which bind to specific proteins and alter their function are invaluable to understanding what these proteins do in living cells," he said.

University of Illinois at Chicago

MATHEMATICAL MODEL EXPLAINS HOW COMPLEX SOCIETIES EMERGE, COLLAPSE

The instability of large, complex societies is a predictable phenomenon, according to a new mathematical model that explores the emergence of early human societies via warfare. Capturing hundreds of years of human history, the model reveals the dynamical nature of societies, which can be difficult to uncover in archaeological data.

The research, led by Sergey Gavrilets, associate director for scientific activities at the National Institute for Mathematical and Biological Synthesis and a professor at the University of Tennessee-Knoxville, is published in the first issue of the new journal Cliodynamics: The Journal of Theoretical and Mathematical History, the first academic journal dedicated to research from the emerging science of theoretical history and mathematics.

The numerical model focuses on both size and complexity of emerging "polities" or states as well as their longevity and settlement patterns as a result of warfare. A number of factors were measured, but unexpectedly, the largest effect on the results was due to just two factors – the scaling of a state's power to the probability of winning a conflict and a leader's average time in power. According to the model, the stability of large, complex polities is strongly promoted if the outcomes of conflicts are mostly determined by the polities' wealth or power, if there exist well-defined and accepted means of succession, and if control mechanisms within polities are internally specialized. The results also showed that polities experience what the authors call "chiefly cycles" or rapid cycles of growth and collapse due to warfare.

The wealthiest of polities does not necessarily win a conflict, however. There are many other factors besides wealth that can affect the outcome of a conflict, the authors write. The model also suggests that the rapid collapse of a polity can occur even without environmental disturbances, such as drought or overpopulation.
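A deliberately toy version of the kind of rule such a model varies is sketched below: make the probability of winning a conflict scale with relative power, and watch a dominant polity assemble (and occasionally collapse) through repeated wars. The functional form, the exponent, and all numbers are illustrative assumptions, not the authors' actual equations.

```python
# Toy sketch of a power-scaled conflict rule of the kind described above.
# The win-probability form, the exponent gamma, and all values are invented.
import random

def p_win(power_a, power_b, gamma=2.0):
    """Chance A beats B; larger gamma means power, not luck, decides wars."""
    return power_a ** gamma / (power_a ** gamma + power_b ** gamma)

random.seed(1)
powers = [1.0] * 6                          # six equal chiefdoms to start
for _ in range(40):                         # repeated pairwise conflicts
    a, b = random.sample(range(len(powers)), 2)
    winner, loser = (a, b) if random.random() < p_win(powers[a], powers[b]) else (b, a)
    powers[winner] += powers[loser]         # winner absorbs the loser's power
    powers[loser] = 1.0                     # loser fragments back to a small unit
print(powers)  # typically one dominant polity emerges among small survivors
```

Lowering gamma toward zero makes outcomes a coin flip, and in runs of such a toy model large units assemble and dissolve in the rapid boom-and-bust "chiefly cycles" the paper describes.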

By using a mathematical model, the researchers were able to capture the dynamical processes that cause chiefdoms, states and empires to emerge, persist and collapse at the scale of decades to centuries.

"In the last several decades, mathematical models have been traditionally important in the physical, life and economic sciences, but now they are also becoming important for explaining historical data," said Gavrilets. "Our model provides theoretical support for the view that cultural, demographic and ecological conditions can predict the emergence and dynamics of complex societies."

(Photo: Gavrilets S, Anderson D, Turchin P.)

National Institute for Mathematical and Biological Synthesis
