Tuesday, March 1, 2011

Researchers Work Toward Automating Sedation in Intensive Care Units

Researchers at the Georgia Institute of Technology and the Northeast Georgia Medical Center are one step closer to their goal of automating the management of sedation in hospital intensive care units (ICUs). They have developed control algorithms that use clinical data to accurately determine a patient's level of sedation and can notify medical staff if there is a change in the level.

"ICU nurses have one of the most task-laden jobs in medicine and typically take care of multiple patients at the same time, so if we can use control system technology to automate the task of sedation, patient safety will be enhanced and drug delivery will improve in the ICU," said James Bailey, the chief medical informatics officer at the Northeast Georgia Medical Center in Gainesville, Ga. Bailey is also a certified anesthesiologist and intensive care specialist.

During a presentation at the IEEE Conference on Decision and Control, the researchers reported on their analysis of more than 15,000 clinical measurements from 366 ICU patients they classified as "agitated" or "not agitated." Agitation is a measure of the level of patient sedation. The algorithm returned the same results as the assessment by hospital staff 92 percent of the time.

"Manual sedation control can be tedious, imprecise, time-consuming and sometimes of poor quality, depending on the skills and judgment of the ICU nurse," said Wassim Haddad, a professor in the Georgia Tech School of Aerospace Engineering. "Ultimately, we envision an automated system in which the ICU nurse evaluates the ICU patient, enters the patient's sedation level into a controller, which then adjusts the sedative dosing regimen to maintain sedation at the desired level by continuously collecting and analyzing quantitative clinical data on the patient."

This project is supported in part by the U.S. Army. On the battlefield, military physicians sometimes face demanding critical care situations, and the use of advanced control technologies is essential for extending the capabilities of the health care system to handle large numbers of injured soldiers.

Working with Haddad and Bailey on this project are Allen Tannenbaum and Behnood Gholami. Tannenbaum holds a joint appointment as the Julian Hightower Chair in the Georgia Tech School of Electrical and Computer Engineering and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University, while Gholami is currently a postdoctoral fellow in the Georgia Tech School of Electrical and Computer Engineering.

This research builds on Haddad and Bailey's previous work automating anesthesia in hospital operating rooms. Their adaptive control algorithms control the infusion of an anesthetic drug agent in order to maintain a desired constant depth of anesthesia during surgery. Clinical trial results to be published in the March issue of the journal IEEE Transactions on Control Systems Technology demonstrate excellent regulation of unconsciousness, allowing for safe and effective administration of an anesthetic agent.

Critically ill patients in the ICU frequently require invasive monitoring and other support that can lead to anxiety, agitation and pain. Sedation is essential for the comfort and safety of these patients.

"The challenge in developing closed-loop control systems for sedating critically ill patients is finding the appropriate performance variable or variables that measure the level of sedation of a patient, in turn allowing an automated controller to provide adequate sedation without oversedation," said Gholami.

In the ICU, the researchers used information detailing each patient's facial expression, gross motor movement, response to a potentially noxious stimulus, heart rate and blood pressure stability, noncardiac sympathetic stability, and nonverbal pain scale to determine a level of sedation.

The researchers classified the clinical data for each variable into categories. For example, a patient's facial expression was categorized as "relaxed," "grimacing and moaning," or "grimacing and crying." A patient's noncardiac sympathetic stability was classified as "warm and dry skin," "flushed and sweaty," or "pale and sweaty."

They also recorded each patient's score on the motor activity and assessment scale (MAAS), which is used by clinicians to evaluate level of sedation on a scale of zero to six. In the MAAS system, a score of zero represents an "unresponsive patient," three represents a "calm and cooperative patient," and six represents a "dangerously agitated patient." The MAAS score is subjective and can result in inconsistencies and variability in sedation administration.

Using a Bayesian network, the researchers computed from the clinical data the probability that a patient was agitated. Twelve thousand measurements, collected from patients admitted to the ICU at the Northeast Georgia Medical Center during a one-year period, were used to train the Bayesian network; the remaining 3,000 were used to test it.

In 18 percent of the test cases, the computer classified a patient as "agitated" but the MAAS score described the same patient as "not agitated." In five percent of the test cases, the computer classified a patient as "not agitated," whereas the MAAS score indicated "agitated." These probabilities signify an 18 percent false-positive rate and a five percent false-negative rate.
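The article does not describe the team's Bayesian network in enough detail to reproduce it, but the idea of scoring agitation probabilistically from categorical clinical observations can be sketched with a simple stand-in. The Python fragment below is a minimal naive Bayes illustration: the feature names and categories follow the article, while the training records and the three-values-per-variable smoothing are invented for demonstration.

```python
# Illustrative sketch only: a naive Bayes stand-in for the (unspecified)
# Bayesian network used in the study. Feature names and categories follow
# the article; the training records are fabricated for demonstration.
from collections import defaultdict

def train(records):
    """records: list of (features_dict, label) pairs."""
    label_counts = defaultdict(int)
    feature_counts = defaultdict(int)   # (label, feature, value) -> count
    for features, label in records:
        label_counts[label] += 1
        for f, v in features.items():
            feature_counts[(label, f, v)] += 1
    return label_counts, feature_counts

def p_agitated(features, label_counts, feature_counts, alpha=1.0):
    """Posterior probability of 'agitated', with Laplace smoothing.
    Assumes three possible values per clinical variable, as in the article."""
    total = sum(label_counts.values())
    scores = {}
    for label, n in label_counts.items():
        score = n / total               # prior P(label)
        for f, v in features.items():   # times P(value | label) per feature
            score *= (feature_counts[(label, f, v)] + alpha) / (n + 3 * alpha)
        scores[label] = score
    return scores["agitated"] / sum(scores.values())

# Fabricated examples using the article's categories:
records = [
    ({"face": "relaxed", "skin": "warm and dry"}, "not agitated"),
    ({"face": "relaxed", "skin": "flushed and sweaty"}, "not agitated"),
    ({"face": "grimacing and moaning", "skin": "flushed and sweaty"}, "agitated"),
    ({"face": "grimacing and crying", "skin": "pale and sweaty"}, "agitated"),
]
lc, fc = train(records)
print(p_agitated({"face": "grimacing and moaning", "skin": "pale and sweaty"}, lc, fc))
```

A real system would be trained on thousands of labeled measurements, as in the study, and validated against held-out cases to estimate the false-positive and false-negative rates reported above.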

"This level of performance would allow a significant reduction in the workload of the intensive care unit nurse, but it would in no way replace the nurse as the ultimate judge of the adequacy of sedation," said Bailey. "However, by relieving the nurse of some of the work associated with titration of sedation, it would allow the nurse to better focus on other aspects of his or her demanding job."

The researchers' next step toward closed-loop control of sedation in the ICU will be to continuously collect clinical data from ICU patients in real time. Future work will involve the development of objective techniques for assessing ICU sedation using movement, facial expression and responsiveness to stimuli.

Digital imaging will be used to assess a patient's facial expression and also gross motor movement. In a study published in the June 2010 issue of the journal IEEE Transactions on Biomedical Engineering, the researchers showed that machine learning methods could be used to assess the level of pain in patients using facial expressions.

"We will explore the relationship between the data we can extract from these multiple sensors and the subjective clinical MAAS score," said Haddad. "We will then use the knowledge we have gained in developing feedback control algorithms for anesthesia dosage levels in the operating room to develop an expert system to automate drug dosage in the ICU."

(Photo: GIT)

Georgia Institute of Technology

Thawing permafrost likely will accelerate global warming in coming decades, says study

Up to two-thirds of Earth's permafrost likely will disappear by 2200 as a result of warming temperatures, unleashing vast quantities of carbon into the atmosphere, says a new study by the University of Colorado Boulder's Cooperative Institute for Research in Environmental Sciences.

The carbon resides in permanently frozen ground that is beginning to thaw in high latitudes from warming temperatures, which will impact not only the climate but also international strategies to reduce fossil fuel emissions, said CU-Boulder's Kevin Schaefer, lead study author. "If we want to hit a target carbon dioxide concentration, then we have to reduce fossil fuel emissions that much lower than previously thought to account for this additional carbon from the permafrost," he said. "Otherwise we will end up with a warmer Earth than we want."

The escaping carbon comes from plant material, primarily roots trapped and frozen in soil during the last glacial period that ended roughly 12,000 years ago, he said. Schaefer, a research associate at CU-Boulder's National Snow and Ice Data Center, an arm of CIRES, likened the mechanism to storing broccoli in a home freezer. "As long as it stays frozen, it stays stable for many years," he said. "But if you take it out of the freezer it will thaw out and decay."

While other studies have shown carbon has begun to leak out of permafrost in Alaska and Siberia, the study by Schaefer and his colleagues is the first to make actual estimates of future carbon release from permafrost. "This gives us a starting point, and something more solid to work from in future studies," he said. "We now have some estimated numbers and dates to work with."

The new study was published online Feb. 14 in the scientific journal Tellus. Co-authors include CIRES Fellow and Senior Research Scientist Tingjun Zhang from NSIDC, Lori Bruhwiler of the National Oceanic and Atmospheric Administration and Andrew Barrett from NSIDC. Funding for the project came from NASA, NOAA and the National Science Foundation.

Schaefer and his team ran multiple Arctic simulations assuming different rates of temperature increases to forecast how much carbon may be released globally from permafrost in the next two centuries. They estimate a release of roughly 190 billion tons of carbon, most of it in the next 100 years. The team used Intergovernmental Panel on Climate Change scenarios and land-surface models for the study.

"The amount we expect to be released by permafrost is equivalent to half of the amount of carbon released since the dawn of the Industrial Age," said Schaefer. The amount of carbon predicted for release between now and 2200 is about one-fifth of the total amount of carbon in the atmosphere today, according to the study.

While there were about 280 parts per million of CO2 in Earth's atmosphere prior to the Industrial Age beginning about 1820, there are more than 380 parts per million of CO2 in the atmosphere today and the figure is rising. The increase, equivalent to about 435 billion tons of carbon, resulted primarily from human activities like the burning of fossil fuels and deforestation.
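As a rough cross-check of how these figures fit together, ppm of atmospheric CO2 can be related to tons of carbon using the standard approximation of about 2.13 billion tons of carbon per ppm, a conversion factor that is not taken from the study itself:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: ~2.13 billion tons of carbon (GtC) per ppm of atmospheric CO2
# (a standard approximation, not a number from the study).
GTC_PER_PPM = 2.13

atmosphere_now = 380 * GTC_PER_PPM   # ~810 GtC in the atmosphere today
permafrost_release = 190             # GtC expected by 2200, per the study

print(permafrost_release / atmosphere_now)  # ~0.23, i.e. roughly one-fifth
```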

Using data from all climate simulations, the team estimated that about 30 to 60 percent of Earth's permafrost will disappear by 2200. The study took into account all of the permanently frozen ground at high latitudes around the globe.

The consensus of the vast majority of climate scientists is that the buildup of CO2 and other greenhouse gases in Earth's atmosphere is the primary reason for increasingly warm temperatures on Earth. According to NOAA, 2010 was tied for the hottest year on record. The hottest decade on record occurred from 2000 to 2010.

Greater reductions in fossil fuel emissions to account for carbon released by the permafrost will be a daunting global challenge, Schaefer said. "The problem is getting more and more difficult all the time," he said. "It is hard enough to reduce the emissions in any case, but now we have to reduce emissions even more. We think it is important to get that message out now."

University of Colorado

Biological Anthropologists Question Claims for Human Ancestry

“Too simple” and “not so fast,” say biological anthropologists from the George Washington University and New York University about recent claims for the origins of human ancestry. In the upcoming issue of the journal Nature, the anthropologists question the claims that several prominent fossil discoveries made in the last decade are our human ancestors. Instead, the authors offer a more nuanced explanation of the fossils’ place in the Tree of Life: rather than being our ancestors, the fossils more likely belong to extinct distant cousins.

“Don’t get me wrong, these are all important finds,” said co-author Bernard Wood, University Professor of Human Origins and professor of human evolutionary anatomy at GW and director of its Center for the Advanced Study of Hominid Paleobiology. “But to simply assume that anything found in that time range has to be a human ancestor is naïve.”

The paper, “The evolutionary context of the first hominins,” reconsiders the evolutionary relationships of fossils named Orrorin, Sahelanthropus and Ardipithecus, dating from four to seven million years ago, which have been claimed to be the earliest human ancestors. Ardipithecus, commonly known as “Ardi,” was discovered in Ethiopia and was found to be radically different from what many researchers had expected for an early human ancestor. Nonetheless, the scientists who made the discovery were adamant it is a human ancestor.

“We are not saying that these fossils are definitively not early human ancestors,” said co-author Terry Harrison, a professor in NYU’s Department of Anthropology and director of its Center for the Study of Human Origins. “But their status has been presumed rather than adequately demonstrated, and there are a number of alternative interpretations that are possible. We believe that it is just as likely or more likely that they are fossil apes situated close to the ancestry of the living great apes and humans.”

The authors are skeptical about the interpretation of the discoveries and advocate a more nuanced approach to classifying the fossils. Wood and Harrison argue that it is naïve to assume that all fossils are the ancestors of creatures alive today and also note that shared morphology or homoplasy – the same characteristics seen in species of different ancestry – was not taken into account by the scientists who found and described the fossils. For example, the authors claim that for Ardipithecus to be a human ancestor, one must assume that homoplasy does not exist in our lineage, but is common in the lineages closest to ours. The authors suggest there are a number of potential interpretations of these fossils and that being a human ancestor is by no means the simplest, or most parsimonious explanation.

The scientific community has long concluded that the human lineage diverged from that of the chimpanzee six to eight million years ago. It is easy to differentiate between the fossils of a modern-day chimpanzee and a modern human. However, it is more difficult to differentiate between the two species when examining fossils that are closer to their common ancestor, as is the case with Orrorin, Sahelanthropus, and Ardipithecus.

In their paper, Wood and Harrison caution that history has shown how uncritical reliance on a few similarities between fossil apes and humans can lead to incorrect assumptions about evolutionary relationships. They point to the case of Ramapithecus, a species of fossil ape from south Asia, which was mistakenly assumed to be an early human ancestor in the 1960s and 1970s, but later found to be a close relative of the orangutan.

Similarly, Oreopithecus bambolii, a fossil ape from Italy shares many similarities with early human ancestors, including features of the skeleton that suggest that it may have been well adapted for walking on two legs. However, the authors observe, enough is known of its anatomy to show that it is a fossil ape that is only distantly related to humans, and that it acquired many “human-like” features in parallel.

Wood and Harrison point to the small canines in Ardipithecus and Sahelanthropus as possibly the most convincing evidence to support their status as early human ancestors. However, canine reduction was not unique to the human lineage for it occurred independently in several lineages of fossil apes (e.g., Oreopithecus, Ouranopithecus and Gigantopithecus) presumably as a result of similar shifts in dietary behavior.

(Photo: ©iStockPhoto.com/wllad)

New York University

Ground-based lasers vie with satellites to map Earth’s magnetic field

Mapping the Earth’s magnetic field – to find oil, track storms or probe the planet’s interior – typically requires expensive satellites.

University of California, Berkeley, physicists have now come up with a much cheaper way to measure the Earth’s magnetic field using only a ground-based laser.

The method involves exciting sodium atoms in a layer 90 kilometers (60 miles) above the surface and measuring the light they give off.

“Normally, the laser makes the sodium atom fluoresce,” said Dmitry Budker, UC Berkeley professor of physics. “But if you modulate the laser light, when the modulation frequency matches the spin precession of the sodium atoms, the brightness of the spot changes.”

Because the local magnetic field determines the frequency at which the atoms precess, this allows someone with a ground-based laser to map the magnetic field anywhere on Earth.

Budker and three current and former members of his laboratory, as well as colleagues with the European Southern Observatory (ESO), lay out their technique in a paper appearing online this week in the journal Proceedings of the National Academy of Sciences.

Various satellites, ranging from the Geostationary Operational Environmental Satellites, or GOES, to an upcoming European mission called SWARM, carry instruments to measure the Earth’s magnetic field, providing data to companies searching for oil or minerals, climatologists tracking currents in the atmosphere and oceans, geophysicists studying the planet’s interior and scientists tracking space weather.

Ground-based measurements, however, can avoid several problems associated with satellites, Budker said. Because these spacecraft are moving at high speed, it’s not always possible to tell whether a fluctuation in the magnetic field strength is real or a result of the spacecraft having moved to a new location. Also, metal and electronic instruments aboard the craft can affect magnetic field measurements.

“A ground-based remote sensing system allows you to measure when and where you want and avoids problems of spatial and temporal dependence caused by satellite movement,” he said. “Initially, this is going to be competitive with the best satellite measurements, but it could be improved drastically.”

The idea was sparked by a discussion Budker had with a colleague about the lasers used by many modern telescopes to remove the twinkle from stars caused by atmospheric disturbance. That technique, called laser guide star adaptive optics, employs lasers to excite sodium atoms deposited in the upper atmosphere by meteorites. Once excited, the atoms fluoresce, emitting light that mimics a real star. Telescopes with such a laser guide star, including the Very Large Telescope in Chile and the Keck telescopes in Hawaii, adjust their “rubber mirrors” to cancel the laser guide star’s jiggle, and thus remove the jiggle for all nearby stars.

It is well known that these sodium atoms are affected by the Earth’s magnetic field. Budker, who specializes in extremely precise magnetic-field measurements, realized that you could easily determine the local magnetic field by exciting the atoms with a pulsed or modulated laser of the type used in guide stars. The method is based on the fact that the electron spin of each sodium atom precesses like a top in the presence of a magnetic field. Hitting the atom with light pulses at just the right frequency will cause the electrons to flip, affecting the way the atoms interact with light.

“It suddenly struck me that what we do in my lab with atomic magnetometers we can do with atoms freely floating in the sky,” he said.

Budker’s former post-doctoral fellow James Higbie – now an assistant professor of physics and astronomy at Bucknell University – conducted laboratory measurements and computer simulations confirming that the effects of a modulated laser could be detected from the ground by a small telescope. He was assisted by Simon M. Rochester, who received his Ph.D. in physics from UC Berkeley last year under Budker’s direction and is now running a start-up consulting company, Rochester Scientific; and current post-doctoral fellow Brian Patton.

In practice, a 20- to 50-watt laser – small enough to load on a truck or boat – tuned to the orange sodium line (589 nanometer wavelength) would shine polarized light into the 10-kilometer-thick (approximately six miles) sodium layer in the mesosphere, which is about 90 kilometers overhead. The frequency at which the laser light is modulated or pulsed would then be swept until it matched the precession frequency of the sodium spins, stimulating spin flips.

The decrease or increase in brightness when the modulation is tuned to a “sweet spot” determined by the magnitude of the magnetic field could be as much as 10 percent of the typical fluorescence, Budker said. The spot itself would be too faint to see with the naked eye, but the brightness change could easily be measured by a small telescope.
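The "sweet spot" is the Larmor frequency at which the sodium spins precess, which scales linearly with the local field. A minimal sketch of the magnitudes involved, using a textbook gyromagnetic ratio for sodium's F=2 ground state and a typical mid-latitude field strength (neither number comes from the paper):

```python
# Rough numbers behind the "sweet spot": the modulation frequency that
# matches the sodium spins' precession (Larmor) frequency.
# Assumptions: ~7 Hz per nanotesla for sodium's F=2 ground state (textbook
# value) and a typical mid-latitude geomagnetic field of ~50,000 nT.
GAMMA_NA = 7.0        # Hz/nT, approximate gyromagnetic ratio for Na (F=2)
B_EARTH = 50_000      # nT, typical mid-latitude field

larmor_hz = GAMMA_NA * B_EARTH
print(f"precession frequency ~ {larmor_hz / 1e3:.0f} kHz")  # ~350 kHz
```

Because the measured brightness change peaks when the modulation hits this frequency, reading off the peak frequency gives the local field directly.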

“This is such a simple idea, I thought somebody must have thought of it before,” Budker said.

He was right. William Happer, a physicist who pioneered spin-polarized spectroscopy and the sodium laser guide stars, had thought of the idea, but had never published it.

“I was very, very happy to hear that, because I felt there may be a flaw in the idea, or that it had already been published,” Budker said.

(Photo: Budker lab)

University of California, Berkeley

Yale Researchers Find Clues to Mystery of Preterm Delivery

Researchers at Yale School of Medicine have found that excessive formation of calcium crystal deposits in the amniotic fluid may be a reason why some pregnant women suffer preterm premature rupture of the membranes (PPROM) leading to preterm delivery.

This is a key breakthrough in solving the mystery of preterm birth, a leading cause of death and permanent disability in newborns.

Researchers know that infection, maternal stress and placental bleeding can trigger some preterm deliveries, but the cause of many other preterm deliveries remains unknown. In these cases, women experience early contractions, cervical dilation and a torn amniotic sac.

A team of researchers in the Department of Obstetrics, Gynecology & Reproductive Sciences at Yale, including first author Lydia Shook and her mentor Irina Buhimschi, M.D., investigated the idea that calcification (excessive buildup of calcium) of the fetal membranes may lead to PPROM and preterm birth. "We noticed that in many women, analysis of the proteins in their amniotic fluid did not show signs of inflammation, and we could not find any cause for their preterm birth," said Shook, a Yale medical student. "We took a fresh look at what was causing breakdown of the membranes, which can lead to lost elasticity, integrity and eventually rupture."

Scientists know that calcifying nanoparticles are involved in many degenerative conditions including arthritis and atherosclerosis. "These mineral-protein complexes can disrupt normal cellular processes and cause cell death," Shook said. "We wondered whether they could also be responsible for damage to the fetal membranes in pregnant women."

Shook and her co-authors used a stain to look for calcium deposits in placental and fetal membrane tissue from patients with PPROM and preterm birth, as well as full-term deliveries. They used a sterile culture technique to determine whether amniotic fluid can form nanoparticles. They then exposed fetal membranes to the cultured nanoparticles to determine their ability to induce cell dysfunction, damage and cell death.

The team found evidence of calcification of fetal membranes collected from preterm deliveries. Fetuin, one of the major proteins involved in nanoparticle formation, was found in these deposits. Levels of fetuin in amniotic fluid were lower in women who delivered with PPROM compared to those who delivered early with intact membranes.

"This preliminary evidence suggests that amniotic fluid has the potential to form nanoparticles and deposit them in the fetal membranes," said Shook. "Low fetuin may be a biomarker for women at risk of PPROM. The goal of this research is to identify women at risk of developing this condition early in their pregnancy and to intervene with targeted therapy."

(Photo: Yale U.)

Yale University

Lie Detection: Misconceptions, Pitfalls, and Opportunities for Improvement

Unlike Pinocchio, liars do not usually give telltale signs that they are being dishonest. In lieu of a growing nose, is there a way to distinguish people who are telling the truth from those who aren’t? A new report in Psychological Science in the Public Interest, a journal of the Association for Psychological Science, discusses some of the common misconceptions about those proficient in the art of deception, reviews the shortcomings of commonly used lie-detection techniques, and presents new empirically supported methods for telling liars from truth-tellers with greater accuracy.

Trapping a liar is not always easy. Lies are often embedded in truths and behavioral differences between liars and truth-tellers are usually very small. In addition, some people are just very good at lying. Lie detectors routinely make the common mistakes of overemphasizing nonverbal cues, neglecting intrapersonal variations (i.e., how a person acts when they are telling the truth versus when they are lying), and being overly confident in their lie-detection skills.

In this report, Aldert Vrij of the University of Portsmouth, Anders Granhag of the University of Gothenburg, and Stephen Porter of the University of British Columbia review research suggesting that verbal methods of deception detection are more useful than nonverbal methods commonly believed to be effective, and that there are psychological differences between liars and truth-tellers that can be exploited in the search for the truth.

In an information-gathering interview, suspects are asked to give detailed statements about their activities through open questions—for example, “What did you do yesterday between 3 p.m. and 4 p.m.?” This interview style encourages suspects to talk and allows for opportunities to identify inconsistencies between the answer and available evidence. Asking very specific questions that a suspect is unlikely to anticipate may also help in lie detection.

Lying can be more cognitively demanding than truth-telling—it requires more brain power to come up with a lie and keep track of it (e.g., who was told what) than it does to tell the truth. Imposing cognitive load on interviewees by asking them to recall the events in reverse order may also be useful in weeding out liars from those telling the truth.

This research has important implications in a variety of settings, including the courtroom, police interviews, and screening individuals with criminal intent, for instance, identifying potential terrorists.

Association for Psychological Science

Monday, February 28, 2011

NEW STUDY FINDS: TO ESCAPE BLAME, BE A VICTIM, NOT A HERO

Great works and praiseworthy behavior may bring respect and admiration, but these won't help us to escape blame when we do something wrong, says a new study by researchers at the University of Maryland and Harvard University. To do that, the researchers say, one needs to be a victim, not a hero!

In the study, participants responded to a number of scenarios that mirrored real-life moral transgressions, from stealing money to harming someone. Results revealed that, no matter how many previous good deeds someone had done, they received just as much blame as someone with a less heroic background - if not more.

"People may come down even harder on someone like the Dalai Lama, than they do on 'Joe Blow,' said author Kurt Gray, assistant professor of psychology at the University of Maryland." However, in our research those who have suffered in the past received significantly less blame - even if such suffering was both totally unrelated to the misdeed and long since past."

The article, titled "To Escape Blame, Don't be a Hero - be a Victim," is published in the March issue of the Journal of Experimental Social Psychology. The findings are based on three experiments conducted by Gray and Daniel Wegner, professor of psychology at Harvard University.

"Our research suggests that morality is not like some kind of cosmic bank, where you can deposit good deeds and use them to offset future misdeeds," said Gray, who directs the Mind Perception and Morality Lab at the University of Maryland. "Instead, people ignore heroic pasts - or even count them against you - when assigning blame."

Gray suggests that the explanation for these findings is our tendency to divide the world up into moral agents - those who do good and evil - and moral patients - those who receive good or evil. "Psychologically, the perceived distance between a hero and a villain is quite small, whereas there's a wide gap between a villain and a victim. This means that heroes are easily recast as evil doers, whereas it's very hard to turn a victim into a villain."

In the experiments involved in this study, those who highlighted past suffering were held less responsible for transgressions and given less punishment. According to the authors, this fact suggests an explanation for why many celebrities immediately go into rehab or claim victimhood after being caught doing something wrong. Of course, this research doesn't address whether someone is actually blameworthy, but it does indicate a clear strategy for escaping blame.

In fact, their research finds that people had trouble even remembering the misdeeds of victims. In one experiment, people read about either a hero, a normal person, or a victim stealing some money, and then were given a surprise memory test afterward. Far fewer people remembered the victim stealing money.

The authors note that there certainly are benefits from good deeds for both individuals and society. For example, they say, "not only do virtuous deeds help the recipient of the deed, but research suggests that even small acts of good can serve to significantly improve the doer's mood."

But "whether you are trying to defend yourself against a spouse's wrath for a missed birthday or save yourself from execution for a grisly murder, your task is to become the ultimate victim . . . with stories of childhood abuse, of broken hearts and broken arms," Gray and Wegner write.

(Photo: Butch Kriege)

University of Maryland

GEOLOGISTS GET UNIQUE AND UNEXPECTED OPPORTUNITY TO STUDY MAGMA

Geologists drilling an exploratory geothermal well in 2009 in the Krafla volcano in Iceland encountered a problem they were simply unprepared for: magma (molten rock underground), which flowed unexpectedly into the well at 2.1 kilometers (6,900 feet) depth, forcing the researchers to terminate the drilling.

"To the best of our knowledge, only one previous instance of magma flowing into a geothermal well while drilling has been documented," said Wilfred Elders, a professor emeritus of geology in the Department of Earth Sciences at the University of California, Riverside, who led the research team. "We were drilling a well that was designed to search for very deep – 4.5 kilometers (15,000 feet) – geothermal resources in the volcano. While the magma flow interrupted our project, it gave us a unique opportunity to study the magma and test a very hot geothermal system as an energy source."

Currently, a third of the electric power and 95 percent of home heating in Iceland are produced from steam and hot water that occur naturally in volcanic rocks.

"The economics of generating electric power from such geothermal steam improves the higher its temperature and pressure," Elders explained. "As you drill deeper into a hot zone the temperature and pressure rise, so it should be possible to reach an environment where a denser fluid with very high heat content, but also with unusually low viscosity occurs, so-called 'supercritical water.' Although such supercritical water is used in large coal-fired electric power plants, no one had tried to use supercritical water that should occur naturally in the deeper zones of geothermal areas."

Elders and colleagues report in the March issue of Geology (the research paper was published online on Feb. 3) that although the Krafla volcano, like all other volcanoes in Iceland, is basaltic (a volcanic rock containing 45-50 percent silica), the magma they encountered is a rhyolite (a volcanic rock containing 65-70 percent silica).

"Our analyses show that this magma formed by partial melting of certain basalts within the Krafla volcano," Elders said. "The occurrence of minor amounts of rhyolite in some basalt volcanoes has always been something of a puzzle. It had been inferred that some unknown process in the source area of magmas, in the mantle deep below the crust of the Earth, allows some silica-rich rhyolite melt to form in addition to the dominant silica-poor basalt magma."

Elders explained that in geothermal systems water reacts with and alters the composition of the rocks, a process termed "hydrothermal alteration." "Our research shows that the rhyolite formed when a mantle-derived basaltic magma encountered hydrothermally altered basalt, and partially melted and assimilated that rock," he said.

Elders and his team studied the well within the Krafla caldera as part of the Iceland Deep Drilling Project, an industry-government consortium, to test whether geothermal fluids at supercritical pressures and temperatures could be exploited as sources of power. Elders's research team received support of $3.5 million from the National Science Foundation and $1.5 million from the International Continental Scientific Drilling Program.

In the spring of 2009, Elders and his colleagues drilled the well normally to 2 kilometers (6,600 feet) depth. In the next 100 meters (330 feet), however, multiple acute drilling problems occurred. In June 2009, at 2104 meters (6,900 feet) depth, the rate of penetration suddenly increased and the torque on the drilling assembly rose, halting its rotation. When the drill string was pulled up more than 10 meters (33 feet) and lowered again, the drill bit became stuck at 2095 meters (6,875 feet). An intrusion of magma had filled the lowest 9 meters (30 feet) of the open borehole. The team terminated the drilling and completed the hole as a production well.

"When the well was tested, high pressure dry steam flowed to the surface with a temperature of 400 Celsius or 750 Fahrenheit, coming from a depth shallower than the magma," Elders said. "We estimated that this steam could generate 25 megawatts of electricity if passed through a suitable turbine, which is enough electricity to power 25,000 to 30,000 homes. What makes this well an attractive source of energy is that typical high-temperature geothermal wells produce only 5 to 8 megawatts of electricity from 300 Celsius or 570 Fahrenheit wet steam."

Elders believes it should be possible to find reasonably shallow bodies of magma, elsewhere in Iceland and the world, wherever young volcanic rocks occur.

"In the future these could become attractive sources of high-grade energy," said Elders, who got involved in the project in 2000 when a group of Icelandic engineers and scientists invited him to join them to explore concepts of developing geothermal energy.

The Iceland Deep Drilling Project has not abandoned the search for supercritical geothermal resources. The project plans to drill a second deep hole in southwest Iceland in 2013.

(Photo: Bjarni Palssen)

University of California, Riverside

REGROWING HAIR: UCLA-VA RESEARCHERS MAY HAVE ACCIDENTALLY DISCOVERED A SOLUTION

It has long been known that stress plays a part not just in the graying of hair but in hair loss as well. Over the years, numerous hair-restoration remedies have emerged, ranging from hucksters' "miracle solvents" to legitimate medications such as minoxidil. But even the best of these have shown limited effectiveness.

Now, a team led by researchers from UCLA and the Veterans Administration that was investigating how stress affects gastrointestinal function may have found a chemical compound that induces hair growth by blocking a stress-related hormone associated with hair loss — entirely by accident.

The serendipitous discovery is described in an article published in the online journal PLoS One.

"Our findings show that a short-duration treatment with this compound causes an astounding long-term hair regrowth in chronically stressed mutant mice," said Million Mulugeta, an adjunct professor of medicine in the division of digestive diseases at the David Geffen School of Medicine at UCLA and a corresponding author of the research. "This could open new venues to treat hair loss in humans through the modulation of the stress hormone receptors, particularly hair loss related to chronic stress and aging."

The research team, which was originally studying brain–gut interactions, included Mulugeta, Lixin Wang, Noah Craft and Yvette Taché from UCLA; Jean Rivier and Catherine Rivier from the Salk Institute for Biological Studies in La Jolla, Calif.; and Mary Stenzel-Poore from the Oregon Health and Sciences University.

For their experiments, the researchers had been using mice that were genetically altered to overproduce a stress hormone called corticotrophin-releasing factor, or CRF. As these mice age, they lose hair and eventually become bald on their backs, making them visually distinct from their unaltered counterparts. The Salk Institute researchers had developed the chemical compound, a peptide called astressin-B, and described its ability to block the action of CRF. Stenzel-Poore had created an animal model of chronic stress by altering the mice to overproduce CRF.

UCLA and VA researchers injected the astressin-B into the bald mice to observe how its CRF-blocking ability affected gastrointestinal tract function. The initial single injection had no effect, so the investigators continued the injections over five days to give the peptide a better chance of blocking the CRF receptors. They measured the inhibitory effects of this regimen on the stress-induced response in the colons of the mice and placed the animals back in their cages with their hairy counterparts.

About three months later, the investigators returned to these mice to conduct further gastrointestinal studies and found they couldn't distinguish them from their unaltered brethren. They had regrown hair on their previously bald backs.

"When we analyzed the identification number of the mice that had grown hair we found that, indeed, the astressin-B peptide was responsible for the remarkable hair growth in the bald mice," Mulugeta said. "Subsequent studies confirmed this unequivocally."

Of particular interest was the short duration of the treatments: Just one shot per day for five consecutive days maintained the effects for up to four months.

"This is a comparatively long time, considering that mice's life span is less than two years," Mulugeta said.

So far, this effect has been seen only in mice. Whether it also happens in humans remains to be seen, said the researchers, who also treated the bald mice with minoxidil alone, which resulted in mild hair growth, as it does in humans. This suggests that astressin-B could also translate for use in human hair growth. In fact, it is known that the stress-hormone CRF, its receptors and other peptides that modulate these receptors are found in human skin.

UCLA

Saturday, February 26, 2011

Oldest fossils of large seaweeds, possible animals tell story about oxygen in an ancient ocean

Almost 600 million years ago, before the rampant evolution of diverse life forms known as the Cambrian explosion, a community of seaweeds and worm-like animals lived in a quiet deep-water niche under the sea near what is now Lantian, a small village in Anhui Province of South China. Then they simply died, leaving some 3,000 nearly pristine fossils preserved between beds of black shale deposited in oxygen-free waters.

Scientists from the Chinese Academy of Sciences, Virginia Tech in the U.S., and Northwest University in Xi'an, China, report the discovery of the fossils, and the mystery they pose, in the Feb. 17 issue of Nature.

In addition to perhaps ancient versions of algae and worms, the Lantian biota – named for its location – included macrofossils with complex and puzzling structures. In all, scientists identified about 15 different species at the site.

The fossils suggest that morphological diversification of macroscopic eukaryotes – the earliest versions of organisms with complex cell structures – may have occurred only tens of millions of years after the snowball earth event that ended 635 million years ago, just before the Ediacaran Period. And their presence in the highly organic-rich black shale suggests that, despite the overall oxygen-free conditions, brief oxygenation of the oceans did come and go.

"So there are two questions," said Shuhai Xiao, professor of geobiology in the College of Science at Virginia Tech. "Why did this community evolve when and where it did? It is clearly different in terms of the number of species compared to biotas preserved in older rocks. There are more species here and they are more complex and larger than what evolved before. These rocks were formed shortly after the largest ice age ever, when much of the global ocean was frozen. By 635 million years ago, the snowball earth event ended and oceans were clear of ice. Perhaps that prepared the ground for the evolution of complex eukaryotes."

The team was examining the black shale rocks because, although they were laid down in waters that were not good for oxygen-dependent organisms, "they are known to be able to preserve fossils very well," said Xiao. "In most cases, dead organisms were washed in and preserved in black shales. In this case, we discovered fossils that were preserved in pristine condition where they had lived – some seaweeds still rooted."

The conclusion that the environment would have been poisonous is derived from geochemical data, "but the bedding surfaces where these fossils were found represent moments of geological time during which free oxygen was available and conditions were favorable. They are very brief moments to a geologist," said Xiao, "but long enough for the oxygen-demanding organisms to colonize the Lantian basin and capture the rare opportunities."

The research team suggests in the article in Nature that the Lantian basin was largely without oxygen but was punctuated by brief oxic episodes that were opportunistically populated by complex new life forms, which were subsequently killed and preserved when the oxygen disappeared. "Such brief oxic intervals demand high-resolution sampling for geochemical analysis to capture the dynamic and complex nature of oxygen history in the Ediacaran Period," said lead author Xunlai Yuan, professor of palaeontology with the Chinese Academy of Sciences.

Proving that hypothesis awaits further study. The rocks in the study region are deposited in layered beds. The nature of the rock changes subtly and there are finer and finer layers that can be recognized within each bed. "We will need to sample each layer to see whether there is any difference in oxygen contents between layers with fossils and those without," said co-author Chuanming Zhou, professor of palaeontology with the Chinese Academy of Sciences.

(Photo: Zhe Chen)


Zinc reduces the burden of the common cold

Zinc supplements reduce the severity and duration of illness caused by the common cold, according to a systematic review published in The Cochrane Library. The findings could help reduce the amount of time lost from work and school due to colds.

The common cold places a heavy burden on society, accounting for approximately 40% of time taken off work and millions of days of school missed by children each year. The idea that zinc might be effective against the common cold came from a study carried out in 1984, which showed that zinc lozenges could reduce how long symptoms lasted. Since then, trials have produced conflicting results and although several biological explanations for the effect have been proposed, none have been confirmed.

The review updates a previous Cochrane Systematic Review, carried out in 1999, with data from several new trials. In total, data from 15 trials, involving 1,360 people, were included. According to the results, zinc syrup, lozenges or tablets taken within a day of the onset of cold symptoms reduce the severity and length of illness. At seven days, more of the patients who took zinc had cleared their symptoms compared to those who took placebos. Children who took zinc syrup or lozenges for five months or longer caught fewer colds and took less time off school. Zinc also reduced antibiotic use in children, which is important because overuse has implications for antibiotic resistance.

"This review strengthens the evidence for zinc as a treatment for the common cold," said lead researcher Meenu Singh of the Post Graduate Institute of Medical Education and Research in Chandigarh, India. "However, at the moment, it is still difficult to make a general recommendation, because we do not know very much about the optimum dose, formulation or length of treatment."

Further research should focus on the benefits of zinc in defined populations, the review suggests. "Our review only looked at zinc supplementation in healthy people," said Singh. "But it would be interesting to find out whether zinc supplementation could help asthmatics, whose asthma symptoms tend to get worse when they catch a cold." The researchers also say that more work needs to be carried out in low-income countries, where zinc deficiency may be prevalent.

Wiley

Earliest humans not so different from us, research suggests

That human evolution follows a progressive trajectory is one of the most deeply entrenched assumptions about our species. This assumption is often expressed in popular media by showing cavemen speaking in grunts and monosyllables (the GEICO Cavemen being a notable exception). But is this assumption correct? Were the earliest humans significantly different from us?

In a paper published in the latest issue of Current Anthropology, archaeologist John Shea (Stony Brook University) shows they were not.

The problem, Shea argues, is that archaeologists have been focusing on the wrong measurement of early human behavior. Archaeologists have been searching for evidence of "behavioral modernity," a quality supposedly unique to Homo sapiens, when they ought to have been investigating "behavioral variability," a quantitative dimension to the behavior of all living things.

Human origins research began in Europe, and the European Upper Paleolithic archaeological record has long been the standard against which the behavior of earlier and non-European humans is compared. During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appear in Europe together with complex stone tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and thus, archaeologists cite evidence for these behaviors as proof of human behavioral modernity.

Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia, in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier "archaic" African and Asian humans with their "behaviorally-modern" Upper Paleolithic counterparts, explaining the differences between them in terms of a single "Human Revolution" that fundamentally changed human biology and behavior. Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different from that of more recent "modern" humans.

Shea tested the hypothesis that there were differences in behavioral variability between earlier and later Homo sapiens using stone tool evidence dating to between 250,000 and 6,000 years ago in eastern Africa. This region features the longest continuous archaeological record of Homo sapiens behavior. A systematic comparison of variability in stone tool making strategies over the last quarter-million years shows no single behavioral revolution in our species' evolutionary history. Instead, the evidence shows wide variability in Homo sapiens toolmaking strategies from the earliest times onwards. Particular changes in stone tool technology can be explained in terms of the varying costs and benefits of different toolmaking strategies, such as greater needs for cutting edge or for more efficiently transportable and functionally versatile tools. One does not need to invoke a "human revolution" to account for these changes; they are explicable in terms of well-understood principles of behavioral ecology.

This study has important implications for archaeological research on human origins. Shea argues that comparing the behavior of our most ancient ancestors to Upper Paleolithic Europeans holistically and ranking them in terms of their "behavioral modernity" is a waste of time. There are no such things as modern humans, Shea argues, just Homo sapiens populations with a wide range of behavioral variability. Whether this range is significantly different from that of earlier and other hominin species remains to be discovered. However, the best way to advance our understanding of human behavior is by researching the sources of behavioral variability in particular adaptive strategies.

The University of Chicago Press

Friday, February 25, 2011

ANCIENT MESOAMERICAN SCULPTURE UNCOVERED IN SOUTHERN MEXICO

With one arm raised and a determined scowl, the figure looks ready to march right off his carved tablet and into the history books. If only we knew who he was - corn god? Tribal chief? Sacred priest?

"It's beautiful and was obviously very important," says University of Wisconsin-Madison archaeologist John Hodgson of the newly discovered stone monument. "But we will probably never know who he was or what the sculpture means in its entirety."

The man is the central figure on a stone monument discovered in 2009 at a site called Ojo de Agua in far southern Mexico in the state of Chiapas along the Pacific coast. Hodgson, a doctoral candidate in anthropology at UW-Madison, describes the new monument in the cover article of the current issue (December 2010) of Mexicon, a leading peer-reviewed journal of Mesoamerican studies. The article, titled "Ojo de Agua Monument 3: A New Olmec-Style Sculpture from Ojo de Agua, Chiapas, Mexico," is co-authored with John E. Clark, of Brigham Young University, and Emiliano Gallaga Murrieta, director of the National Institute of Anthropology and History in Chiapas.

Monument 3 is just the second carved monument found in Ojo de Agua. Monument 1 was discovered accidentally when a local farmer hit it with a plow in the 1960s. Monument 3 was a similarly fortuitous find, uncovered in the process of digging an irrigation ditch. (Monument 2 is a large boulder with a flat surface and no visible carving, which Hodgson found in 2005 and reported in the January/February 2006 issue of Archaeology magazine in an article on Ojo de Agua.)

Hodgson was working in the area and received word of the finding within just a few days of its discovery. He was able to see the monument's impression in the trench wall and study the soil layers where it had been buried, gaining a wealth of information that is usually lost long before any archaeologist lays eyes on a piece.

"Usually sculptures are first seen by archaeologists in private art collections and we normally have no good idea where they came from. The depictions of figures and the motifs change in form through time so you can get an approximate date by comparing styles," he says. "But we were able to date the new monument by where it was found to a narrow 100-year window, which is very rare."

The archaeological context and radiocarbon dating of ceramic sherds associated with the stone monument show that it dates to 1100 to 1000 B.C., making it approximately 3,000 years old. Its age and style correspond to the Early Formative period, when an early culture known as the Olmec dominated the area.

Its purpose and meaning, however, will be harder to ascertain.

"Everything means something in this kind of culture," says pre-eminent archaeologist Michael D. Coe, a professor emeritus of anthropology at Yale University and expert on Mesoamerican civilizations. "It obviously was a public monument — an important one, probably in connection with some really big cheese who lorded it over the area." Coe was not directly involved in the work but is familiar with the newly discovered monument.

"It appears to me to be a depiction of an event or a way to convey other types of information," Hodgson adds. "This dates to a time prior to a developed written language, but like the modern symbol used internationally for the Red Cross, symbols are very efficient at communicating complicated ideas."

The main figure on the tablet is depicted wearing an elaborate headdress, loincloth and ornate accessories, including a pair of large, comb-like ear ornaments, a rope-like necklace and a thick belt with a jaguar-head buckle. A face on the headdress includes features such as sprouting plants that identify it as a corn god. The tablet also includes a smaller secondary figure and a series of asymmetric zigzag designs that the authors suggest could represent lightning, local mountain ranges, or other features of the natural world.

"This is closely connected with agriculture and the cult of the corn god," Coe says, pointing out the zigzags. "Thunderstorms bring the rain."

The monument is a carved flat slab of relatively soft, local volcanic stone that weighs about 130 pounds. It stands nearly three feet tall, is about 14 inches wide, and ranges from four to seven inches thick. The use of local materials shows it was made in or near Ojo de Agua, Hodgson says, but stylistic similarities to pieces found in larger Olmec centers near the Gulf of Mexico and the Valley of Mexico indicate pan-regional influences as well.

"It's cruder in execution than most Olmec monuments from the other side of the isthmus — 'provincial' Olmec," Coe says. But despite lacking some of the intricate artistry, it is still relatively sophisticated, he says. "This adds to our knowledge of the Olmec on the south side of the isthmus."

The depiction of corn is particularly notable. Corn cultivation is generally associated with a settled lifestyle rather than a nomadic existence, indicating that Ojo de Agua was almost certainly a farming community. The grain's storability and nutritional content also would have allowed the population to expand drastically and the civilization to become more complex, Hodgson says, adding that "the early date of the monument supports the idea that there was an early association between corn and religion."

Ojo de Agua lies in the heart of the ancient Aztec province Soconusco, nestled in a bend of the Coatán River. It is the earliest known site in Mesoamerica with formal pyramids built around plazas.

Though it has not been worked very extensively as an archaeological site, Ojo de Agua appears to cover about 200 hectares and is the largest site in the area from the period 1200-1000 B.C. The limited work to date describes civic architecture consistent with a decent-sized planned settlement. The identified platform mounds are laid out in a deliberate alignment that may be oriented relative to magnetic north.

"That's something we see later but to see it this early is pretty surprising," says Hodgson, who has been working at Ojo de Agua since 2003.

The site appears to have been occupied for 150 to 200 years before being abandoned for unknown reasons.

Hodgson expects there are many more clues at Ojo de Agua and hopes to have the opportunity to continue working at the site and perhaps another look at Monument 3.

"We've just scratched the surface there. The things we've found are fantastic," Hodgson says. "These early societies were a lot more complicated than we thought they were."

(Photo: John Hodgson; Drawing: Kisslan Chan and John Clark, New World Archaeological Foundation)

University of Wisconsin-Madison

HOW MUCH INFORMATION IS THERE IN THE WORLD?

Think you're overloaded with information? Not even close. A study appearing on Feb. 10 in Science Express, an electronic journal that provides select Science articles ahead of print, calculates the world's total technological capacity — how much information humankind is able to store, communicate and compute.

"We live in a world where economies, political freedom and cultural growth increasingly depend on our technological capabilities," said lead author Martin Hilbert of the USC Annenberg School for Communication & Journalism. "This is the first time-series study to quantify humankind's ability to handle information."

So how much information is there in the world? How much has it grown?

Prepare for some big numbers:

* Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being.

* 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.

* In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day.

* On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.

* In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.

* From 1986 to 2007, the period examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP. Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year. (A rough back-of-envelope check of what such figures mean appears below.)
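
As a back-of-envelope illustration (the study's accounting is far more involved than this arithmetic), the scale of two of the figures above can be checked in a few lines of Python:

```python
# Back-of-envelope checks on two of the figures above (illustrative
# only; the study's methodology is far more involved than this).
EXA = 10 ** 18

# 295 exabytes written out in bytes: about 2.95e20, the "number with
# 20 zeroes" mentioned in the text.
storage_bytes = 295 * EXA
print(f"295 exabytes = {storage_bytes:.2e} bytes")

# What 58 percent annual growth compounds to over 1986-2007 (21 years):
growth_factor = 1.58 ** 21
print(f"computing capacity grew roughly {growth_factor:,.0f}-fold")  # ~15,000x
```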

"These numbers are impressive, but still miniscule compared to the order of magnitude at which nature handles information" Hilbert said. "Compared to nature, we are but humble apprentices. However, while the natural world is mind-boggling in its size, it remains fairly constant. In contrast, the world's technological information processing capacities are growing at exponential rates."

(Photo: USC)

USC

LIVING FAST BUT DYING OLDER IS POSSIBLE -- IF YOU'RE A SHEEP

0 comentarios
According to Dr Annette Baudisch of the Max Planck Institute for Demographic Research in Rostock, Germany, current methods of comparing patterns of ageing are limited because they confound two different elements of ageing – pace and shape.

"Some organisms live a short time, others live a long time. This is the pace of ageing. Short-lived species have a fast pace of ageing, and long-lived species have a slow pace of ageing. Pace describes how quickly the clock of life ticks away. For humans it ticks slowly, for small songbirds like the robin it ticks very fast," explains Dr Baudisch.

By contrast the shape of ageing describes how much mortality – the risk of dying – changes with age. One way of measuring the shape of ageing is the 'ageing factor'. For example, the common swift has an ageing factor of 2, meaning mortality doubles during its adult life, compared with modern humans, who have an ageing factor that exceeds 2000.

"At the age of 15, only 2 out of 100,000 girls in Sweden die, but one out of every two women aged 110 will die. This large difference in mortality at the beginning and end of adult life means that for humans the shape of ageing is steep, whereas in other species like the common swift it is shallow. And in some species the risk of death can even fall with age, with older individuals having the least risk of dying. This seems to be the case for the desert tortoise, and for alligators or crocodiles."

Using data for 10 different groups of animals from herring gull, European robin, common swift and lake sturgeon to Dall mountain sheep, African buffalo, wild and captive chimpanzees, hunter-gatherers and modern humans (Swedish women), Dr Baudisch classified how each species aged in terms of pace and shape of ageing.

Of the species she analysed, Dr Baudisch found that although modern humans are the longest-lived, they also age most strongly.

Adult life expectancy for Swedish women (that is the remaining life expectancy after reaching maturity) is about 70 years, whereas a robin's adult life lasts just 1.7 years. But over that adult lifespan, ageing is so strong in the human that the ageing factor is 2132, but for the robin only about 2.

"Comparing robins with Swedish women, humans have a slow pace of ageing whereas the robin's is fast, so in terms of length of life the humans are doing best. But if we look at the impact ageing has on death rate the robin wins. Its shape of ageing is fairly flat whereas the humans' is steep, indicating that death rates increase markedly with age," she says.

Dr Baudisch's results have important implications for evolutionary biology and the study of ageing: "We need to compare species to understand how evolution has shaped the biology of ageing in different species, but current methods of comparing patterns of ageing across species are limited because they confound the pace and shape of ageing. Not accounting for this difference can lead to incorrect conclusions about which species age more than others."

"Separating pace from shape of ageing gives a clearer picture of the characteristics of ageing. It could reveal that certain species are very similar to each other in terms of their shape of ageing, species that we would maybe never have grouped together. Ultimately, this should help us identify the determinants of ageing – the characteristics that determine whether death rates goes up or down with age and reveal species that can successfully avoid ageing," she says.

In more everyday terms, her results might make us re-think common expressions, such as "live fast, die young". As dying young in Dr Baudisch's terms corresponds to a small ageing factor and dying old to a large one, "live fast, die young" only applies to some short-lived species.

Robins live fast and die young, but even though Dall mountain sheep only live for around 4.2 years, their ageing factor is 7.

"Not all species with short lives live fast and die young. Robins do, but mountain sheep do things differently. They also live pretty fast but die older. From the data I have, it seems that live fast die young is only one option; you can also live fast and die older, or live slower and die young, or live slow and die old. There might be every combination in nature. That's something we need to find out in the future with better data."

Wiley-Blackwell

Thursday, February 24, 2011

Centuries of Sailors Weren’t Wrong: Looking at the Horizon Stabilizes Posture

0 comentarios
Everybody who has been aboard a ship has heard the advice: if you feel unsteady, look at the horizon. For a study published in Psychological Science, a journal of the Association for Psychological Science, researchers measured how much people sway on land and at sea and found there’s truth in that advice; people aboard a ship are steadier if they fix their eyes on the horizon.

Thomas A. Stoffregen of the University of Minnesota has been studying “body sway” for decades—how much people rock back and forth in different situations, and what this has to do with motion sickness. Even standing still under normal conditions, people sway back and forth by about four centimeters every 12 to 15 seconds. Stoffregen and his coauthors, Anthony M. Mayo and Michael G. Wade, wanted to know how this changes when you’re standing on a ship.

To study posture at sea, Stoffregen made contact with the U.S. consortium that runs scientific research ships. “I’m really an oddball for these folks, because they’re studying oceanography, like hydrothermal vents. Here’s this behavioral scientist, calling them up,” he says. He boards a ship when it is travelling between different projects—for example, in this study, he rode on the research vessel Atlantis as it went between two points in the Gulf of California. “It had nothing to do with the fact that I like cruising near the tropics,” he jokes. Since the ships are between scientific expeditions, he can sleep in one of the empty bunks normally reserved for ocean scientists, and crew members volunteer to take part in his study.

The study compared the same people standing on dry land—a dock in Guaymas, Mexico—and aboard the ship. In each experiment, the crew member stood comfortably on a force plate and focused on a target either about 16 inches in front of them or far off: a distant mountain when standing on land, or the horizon when standing on the ship. On land, people were steadier when they looked at the close-up target and swayed more when they looked far away. On the ship, however, they were steadier when they looked at the horizon.

This is actually counterintuitive, Stoffregen says. When you’re standing on a ship, you need to adjust to the ship’s movement, or you’ll fall over. So why would it help to look at the horizon and orient yourself to the Earth? He thinks it may help stabilize your body by helping you differentiate between sources of movement—the natural movement coming from your body and the movement caused by the ship.

Stoffregen thinks this motion of bodies may predict motion sickness. “It’s the people who become wobbly who subsequently become motion sick,” he says. He had originally hoped to study seasickness directly, but so far his subjects have all been seasoned crew members who are used to the ship’s movement and don’t get sick; his dream is to do his experiments aboard a ship full of undergraduate oceanography majors going to sea for the first time. “I’d give my right arm to get on one of those.”

Association for Psychological Science

Scientists Determine What Makes an Orangutan an Orangutan

0 comentarios

For the first time, scientists have mapped the genome--the genetic code--of orangutans. This new tool may be used to support efforts to maintain the genetic diversity of captive and wild orangutans. The new map of the orangutan genome may also be used to help improve our understanding of the evolution of primates, including humans.

Partially funded by the National Science Foundation, the orangutan study appeared in the Jan. 27 issue of Nature. It was conducted by an international team of scientists led by Devin P. Locke of the Genome Center at Washington University.

The name "orangutan" is derived from the Malay term, "man of the forest," a fitting moniker for one of our closest relatives.

There are two species of orangutans, defined primarily by their island of origin--either Sumatra or Borneo. The outlook for orangutan survival is currently dire because there are estimated to be only about 7,500 orangutans in Sumatra, where they are considered critically endangered, and only about 50,000 orangutans in Borneo, where they are considered endangered.

The endangerment status of orangutans is determined by the International Union for Conservation of Nature.

There are no wild populations of orangutans outside Sumatra and Borneo. The decline of the Sumatran and Bornean populations is caused by varied threats, such as illegal logging, the conversion of rain forests to farmland and palm oil plantations, hunting and diseases.

Using a mix of legacy and novel technologies, the research team mapped the genomes of a total of 11 orangutans, including representatives of both the Sumatran and Bornean species.

The map of the orangutan genome may support conservation efforts by helping zoos create breeding programs designed to maintain the genetic diversity of captive populations. (The greater the genetic diversity of a species, the more resilient it is against threats to its survival.) The genome map may also help conservationists sample the genetic diversity of wild populations so they can prioritize populations of wild orangutans for conservation efforts.

After scientists map a species' genome, they compare it to the genetic maps of other species. As they do so, they search for key differences that involve duplications, deletions and inversions of genetic material. These differences may contribute to the unique features of particular species. They may also provide information about general evolutionary trends, such as the overall rate at which genomic evolution has occurred.
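
As a toy illustration of one such difference, the sketch below checks whether a segment of one invented sequence is the reverse complement of the corresponding segment of another, the telltale signature of an inversion; real genome comparison relies on whole-genome alignment, and every sequence here is made up:

```python
# Toy illustration of spotting one class of structural variant, an
# inversion, in invented sequences (real comparisons use whole-genome
# alignment on billions of bases, not string slicing).
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

ref = "ATGCCGTAACGT"                                # made-up "reference"
alt = "ATG" + reverse_complement("CCGTAA") + "CGT"  # middle segment inverted

segment = ref[3:9]
print(segment, "->", alt[3:9])                      # CCGTAA -> TTACGG
print(alt[3:9] == reverse_complement(segment))      # True: an inversion
```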

The genomes of three other primates--humans, chimpanzees and rhesus macaques--were mapped before the orangutan's.

The genomes of the gorilla and bonobo will soon be mapped, as well.

Analyses of the orangutan genome reveal that this primate has many unique features. For example, comparisons of the structural variation of the genomes of orangutans, humans, chimpanzees and rhesus macaques indicate that during the last 15 million years or so of primate evolution, the orangutan genome has generally been more stable than those of the other primates, with fewer large-scale structural changes.

The orangutan genome also allowed for an analysis of fast-evolving genes, which are likely to have responded to evolutionary pressure for adaptation. Genes related to visual perception and metabolic processes were found to have evolved unusually rapidly in orangutans and other primates. The rapid evolution of the orangutan's metabolism-related genes may be related to its slow growth rate, slow reproduction rate and long inter-birth interval, the period between successive births. Orangutans give birth no more than once every six to eight years, the longest inter-birth interval among mammals, including humans.

Comparisons of the population genetics of the Sumatran and Bornean species indicate that these species split approximately 400,000 years ago, which is more recent than previously believed. In addition, Sumatran orangutans have greater genetic diversity than their Bornean counterparts, despite their smaller population size and higher endangered status.

Adam Siepel, a research team member from Cornell University, described the new map of the genetic code of orangutans as an important step in genome sequencing of primates. "The orangutan genome gives us a much more complete picture of genome evolution in the great apes," he said.

"This is a terrific example of the application of genome sequencing beyond model organisms--well-studied organism like the mouse and fruit fly," said Reed Beaman, an NSF program director. "Research like this has only recently become possible through a dramatic decrease in the cost of sequencing. These results demonstrate broad significance to biogeography, genetics, as well as to conservation and human evolution, and they are only starting to scratch the surface."

(Photo: © 2011 Jupiter Images Corporation)

National Science Foundation (NSF)

The Most Genes in an Animal? Tiny Crustacean Holds the Record

0 comentarios

Scientists have discovered that the animal with the most genes--about 31,000--is the near-microscopic freshwater crustacean Daphnia pulex, or water flea.

By comparison, humans have about 23,000 genes. Daphnia is the first crustacean to have its genome sequenced.

The water flea's genome is described in a Science paper published by members of the Daphnia Genomics Consortium, an international network of scientists led by the Center for Genomics and Bioinformatics (CGB) at Indiana University (IU) Bloomington and the U.S. Department of Energy's Joint Genome Institute.

"Daphnia's high gene number is largely because its genes are multiplying, creating copies at a higher rate than other species," said project leader and CGB genomics director John Colbourne.

"We estimate a rate that is three times greater than those of other invertebrates and 30 percent greater than that of humans."

"This analysis of the Daphnia genome significantly advances our understanding of how an organism's genome interacts with its environment both to influence genome structure and to confer ecological and evolutionary success," says Saran Twombly, program director in the National Science Foundation (NSF)'s Division of Environmental Biology, which funded the research.

"This gene-environment interplay has, to date, been studied in model organisms under artificial, laboratory conditions," says Twombly.

"Because the ecology of Daphnia pulex is well-known, and the organism occurs abundantly in the wild, this analysis provides unprecedented insights into the feedback between genes and environment in a real and ever-changing environment."

Daphnia's genome is no ordinary genome.

Why might Daphnia have so many genes compared to other animals?

A possibility, Colbourne said, is that "since the majority of duplicated and unknown genes are sensitive to environmental conditions, their accumulation in the genome could account for Daphnia's flexible responses to environmental change."

Scientists have studied Daphnia for centuries because of its importance in aquatic food webs and for its transformational responses to environmental stress.

Like the virgin nymph of Greek mythology that shares its name, Daphnia thrives in the absence of males--by clonal reproduction, until harsh environmental conditions favor the benefits of sex.

"More than one-third of Daphnia's genes are undocumented in any other organism--in other words, they are completely new to science," says Don Gilbert, paper co-author and scientist at IU Bloomington.

Sequenced genomes often contain some fraction of genes with unknown functions, even among the most well-studied genetic model species for biomedical research, such as the fruit fly Drosophila.

Microarray experiments (using millions of DNA strands affixed to microscope slides) that subjected Daphnia to environmental stressors indicate that these unknown genes have ecologically significant functions.

"If such large fractions of genomes evolved to cope with environmental challenges, information from traditional model species used only in laboratory studies may be insufficient to discover the roles for a considerable number of animal genes," Colbourne said.

Daphnia is emerging as a model organism for a new field of science--environmental genomics--that aims to better understand how the environment and genes interact.

This includes a practical need to apply scientific developments from this field to managing our water resources and protecting human health from chemical pollutants in the environment.

James Klaunig, a scientist at IU Bloomington, predicts that the work will yield a more realistic and scientifically based risk evaluation.

"Genome research on the responses of animals to stress has important implications for assessing environmental risks to humans," Klaunig said. "Daphnia is an exquisite aquatic sensor, a potential high-tech and modern version of the mineshaft canary."

"With knowledge of its genome, and using both field sampling and laboratory studies, the possible effects of environmental agents on cellular and molecular processes can be resolved and linked to similar processes in humans."

The scientists learned that of all sequenced invertebrate genomes so far, Daphnia shares the most genes with humans.

Daphnia's gene expression patterns change depending on its environment, and the patterns indicate what state its cells are in.

A water flea bobbing in water containing a chemical pollutant will tune up or tune down a suite of genes differently than its sisters accustomed to water without the pollutant, for example.

The health effects of most industrially produced compounds in the environment are unknown, because current testing procedures are too slow, too costly, and unable to indicate the causes for their effects on animals, including humans.

Over the course of the project, the Daphnia Genomics Consortium has grown from a handful of founding members to more than 450 investigators around the globe.

"Assembling so many experts around a shared research goal is no small feat," said Peter Cherbas, director of the CGB. "The genome project signals the coming-of-age of Daphnia as a research tool for investigating the molecular underpinnings of key ecological and environmental problems."

Colbourne agreed, adding, "New model systems rarely arrive on the scene with such clear and important roles to play in advancing a new field of science."

(Photo: Paul Hebert, University of Guelph)

National Science Foundation (NSF)

BIRDS USE RIGHT NOSTRIL TO NAVIGATE

0 comentarios

Pigeons rely mainly on their olfactory sense when they navigate. Young pigeons learn to recognize environmental odours carried by the winds into the loft and to use these odours to find their way home from unfamiliar territory. Scientists from the Max Planck Institute for Ornithology in Radolfzell, Anna Gagliardo from the University of Pisa, and their colleagues at the University of Trento recently demonstrated that pigeons navigate more poorly with their right nostril blocked. This finding suggests that the left brain hemisphere, where olfactory information is processed, is of fundamental importance to the orientation and navigation of homing pigeons.

The amazing ability of pigeons to find their way to their home loft has been known for centuries. According to scientists, these birds possess a distinctive olfactory sense as well as a capacity for recognizing odours, which helps them to develop a kind of “scent map” of their surroundings. However, it seems that pigeons cannot smell similarly with both nostrils. Like humans, they can detect odours better through their right nostril.

Martin Wikelski from the Max Planck Institute for Ornithology in Radolfzell and Anna Gagliardo from the University of Pisa recently completed a study of 31 pigeons to determine how the birds’ orientation would be affected if they were unable to smell through their right nostril. The scientists inserted small, rubbery, removable plugs into the left nostril of some of the birds and into the right nostril of others. All the pigeons were hand-raised in the area around Pisa. After fixing small GPS data loggers on the pigeons’ backs, the researchers released the pigeons near Cigoli, a Tuscan village 42 km away from the birds’ home.

Based on the GPS data they collected, the scientists found that the pigeons that could not breathe through their right nostril took a more tortuous route. They stopped more often, and spent more time exploring the surroundings of the stopover sites than those birds that could smell with their right nostril. “We suppose that these birds had to stop to gather additional information about their surroundings because they could not navigate by their olfactory sense,” says Anna Gagliardo. “This behaviour not only indicates that an asymmetry exists in the perception and processing of odours between the left and the right olfactory system; it also shows that the right nostril apparently plays an important role in processing olfactory information in the left hemisphere that is useful for navigation.” How the pigeons’ brains process these sensory perceptions, and why this processing is asymmetric, the researchers do not yet know.
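
One simple way to quantify how tortuous a GPS track is, sketched below with hypothetical coordinates (the study's actual track metrics are not described here), is the ratio of the distance actually flown to the straight-line distance from release site to loft:

```python
# Illustrative tortuosity of a GPS track: distance actually flown
# divided by the straight-line distance from release site to loft.
# Coordinates below are hypothetical, not the study's data.
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def tortuosity(track):
    flown = sum(haversine_km(a, b) for a, b in zip(track, track[1:]))
    return flown / haversine_km(track[0], track[-1])

track = [(43.67, 10.76), (43.70, 10.68), (43.66, 10.60), (43.72, 10.52)]
print(f"tortuosity: {tortuosity(track):.2f}")  # 1.00 would be dead straight
```

A bird flying straight home scores 1.0; the higher the number, the more meandering the route.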

(Photo: © Guiseppe Di Lieto)

Max-Planck

NEW MEASUREMENT OF THE MUON LIFETIME PROVIDES KEY TO DETERMINING STRENGTH OF WEAK NUCLEAR FORCE

0 comentarios
After a decade of experimental development, data-taking, and analysis, an international research team led by scientists from Boston University and the University of Illinois has announced a new value for the muon lifetime.

The new lifetime measurement—the most precise ever made of any subatomic particle—makes possible a new determination of the strength of the weak nuclear force. Experiments for this research were conducted using the proton accelerator facility of the Paul Scherrer Institute (PSI) in Villigen, Switzerland. The results were published in the January 25, 2011 issue of the journal Physical Review Letters.

The weak force is one of the four fundamental forces of nature. Although rarely encountered in everyday life, the weak force is at the heart of many elemental physical processes, including those responsible for making the sun shine. All four of the fundamental forces are characterized by coupling constants, which describe their strength. The famous constant G, in Newton’s law of gravitation, determines the gravitational attraction between any two massive objects. The fine structure constant determines the strength of the electrostatic force between charged particles. The coupling constant for the weak interactions, known as the Fermi constant, is also essential for calculations in the world of elementary particles. Today, physicists regard the weak and the electromagnetic interaction as two aspects of one and the same interaction. Proof of that relationship, established in the 1970s, was an important breakthrough in our understanding of the subatomic world.

The new value of the Fermi constant was determined by an extremely precise measurement of the muon lifetime. The muon is an unstable subatomic particle which decays with a lifetime of approximately two microseconds (two millionths of a second). This decay is governed by the weak force only, and the muon's lifetime has a relatively simple relationship to the strength of the weak force. "To determine the Fermi constant from the muon lifetime requires elegant and precise theory, but until 1999, the theory was not as good as the experiments," says David Hertzog, professor of physics at the University of Washington. (At the time of the experiment, Hertzog was at the University of Illinois.) “Then, several breakthroughs essentially eliminated the theoretical uncertainty. The largest uncertainty in the Fermi constant determination was now based on how well the muon lifetime had been measured."

The MuLan (Muon Lifetime Analysis) experiment used muons produced at PSI’s proton accelerator—the most powerful source of muons in the world and the only place where this kind of experiment can be done. "At the heart of the experiment were special targets that caught groups of positively charged muons during a ‘muon fill period,’" says PSI’s Bernhard Lauss. "The beam was then rapidly switched off, leaving approximately 20 muons in the target. Each muon would eventually decay, typically ejecting an energetic positron—a positively charged electron—to indicate its demise. The positrons were detected using a soccer-ball-shaped array of 170 detectors, which surrounded the target." Boston University physics professor Robert Carey adds, "We repeated this procedure for 100 billion muon fills, accumulating trillions of individual decays. By the end, we had recorded more than 100 terabytes of data, far more than we could handle by ourselves. Instead, the data was stored and analyzed at the National Center for Supercomputing Applications (NCSA) in Illinois." A distribution of how long each muon lived before it decayed was created from the raw data and then fit to determine the mean lifetime: 2.1969803 ± 0.0000022 microseconds. The uncertainty is approximately 2 millionths of a millionth of a second, a world record.
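
The final step described above, turning a distribution of individual decay times into a single mean lifetime, can be illustrated with a toy maximum-likelihood estimate on simulated data; this is a sketch only, as the real MuLan analysis fits histogrammed data and treats backgrounds and systematic effects with great care:

```python
# Toy version of the final step: extract a mean lifetime from a set of
# decay times (simulated here; the real MuLan analysis fits histogrammed
# data with careful treatment of backgrounds and systematics).
import numpy as np

TRUE_LIFETIME_US = 2.1969803  # microseconds, the published value

rng = np.random.default_rng(seed=1)
times = rng.exponential(scale=TRUE_LIFETIME_US, size=1_000_000)

# For exponential decay, the maximum-likelihood estimate of the mean
# lifetime is the sample mean, with statistical error tau / sqrt(N).
tau_hat = times.mean()
tau_err = tau_hat / np.sqrt(times.size)
print(f"fitted lifetime: {tau_hat:.7f} +/- {tau_err:.7f} microseconds")
```

Because the statistical error shrinks as tau/sqrt(N), scaling the toy example's million decays up to the experiment's trillions brings the uncertainty down to the picosecond scale of the quoted ± 0.0000022 microseconds.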

Boston University

X-RAYS REVEAL HIDDEN LEG OF AN ANCIENT SNAKE

0 comentarios

Synchrotron X-ray investigation of a fossilised snake with legs is helping scientists to better understand how snakes lost their legs in the course of evolution, and may help resolve a heated debate about the origin of snakes: whether they evolved from terrestrial lizards or from reptiles living in the oceans. New, detailed 3-D X-ray images reveal that the internal architecture of the ancient snake’s leg bones strongly resembles that of modern terrestrial lizard legs. The results are published in the 8 February issue of the Journal of Vertebrate Paleontology.

The team of researchers was led by Alexandra Houssaye from the Muséum National d'Histoire Naturelle (MNHN) and CNRS in Paris, France, and included scientists from the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, where the X-ray imaging was performed, and the Karlsruhe Institute of Technology (KIT), Germany, where a sophisticated technique and a dedicated instrument to take the images were developed.

Only three specimens exist of fossilised snakes with preserved leg bones. Eupodophis descouensi, the ancient snake studied in this experiment, was discovered ten years ago in 95-million-year-old rocks in Lebanon. About 50 cm long overall, it exhibits a small leg, about 2 cm long, attached to the animal’s pelvis. This fossil is key to understanding the evolution of snakes, as it represents an intermediate evolutionary stage when ancient snakes had not yet completely lost the legs they inherited from earlier lizards. Although the fossil exhibits just one leg on its surface, a second leg was thought to be concealed in the stone, and indeed this leg was revealed in full detail thanks to synchrotron X-rays.

The high-resolution 3-D images, in particular the fine details of the buried small leg, suggest that this species lost its legs because they grew more slowly, or for a shorter period of time. The data also reveal that the hidden leg is bent at the knee and has four ankle bones but no foot or toe bones.

"The revelation of the inner structure of Eupodophis hind limbs enables us to investigate the process of limb regression in snake evolution," says Alexandra Houssaye.

The scientists used synchrotron laminography, a recent imaging technique specially developed for studying large, flat samples. It is similar to the computed tomography (CT) technique used in many hospitals, but uses a coherent synchrotron X-ray beam to resolve details a few micrometres in size — some 1,000 times finer than a hospital CT scanner can resolve. For the new technique, the fossil is rotated at a tilted angle in a brilliant high-energy X-ray beam, with thousands of two-dimensional images recorded as it makes a full 360-degree turn. From these individual images, a high-resolution 3-D representation is reconstructed, revealing hidden details like the internal structure of the legs.
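
The reconstruction principle can be demonstrated with standard tools, shown below as a parallel-beam CT sketch (laminography's tilted rotation axis is a refinement this toy example omits, and the test image stands in for a slice of the fossil):

```python
# Parallel-beam CT sketch of the reconstruction principle: project a
# test image at many angles, then rebuild it by filtered back-projection.
# (Laminography tilts the rotation axis for flat samples; that
# refinement is omitted here.)
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 360, endpoint=False)

sinogram = radon(phantom, theta=angles)              # recorded projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")

rms = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"RMS reconstruction error: {rms:.4f}")
```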

"Synchrotrons, these enormous machines, allow us to see microscopic details in fossils invisible to any other techniques without damage to such invaluable specimens," says Paul Tafforeau of the ESRF, a co-author of the study.

(Photo: A. Houssaye)

ESRF

Wednesday, February 23, 2011

Rare insect fossil reveals 100 million years of evolutionary stasis

0 comentarios

Researchers have discovered the 100 million-year-old ancestor of a group of large, carnivorous, cricket-like insects that still live today in southern Asia, northern Indochina and Africa. The new find, in a limestone fossil bed in northeastern Brazil, corrects the mistaken classification of another fossil of this type and reveals that the genus has undergone very little evolutionary change since the Early Cretaceous Period, a time of dinosaurs just before the breakup of the supercontinent Gondwana.

The findings are described in a paper in the open access journal ZooKeys.

“Schizodactylidae, or splay-footed crickets, are an unusual group of large, fearsome-looking predatory insects related to the true crickets, katydids and grasshoppers, in the order Orthoptera,” said University of Illinois entomologist and lead author Sam Heads, of the Illinois Natural History Survey. “They get their common name from the large, paddle-like projections on their feet, which help support their large bodies as they move around their sandy habitats, hunting down prey.”

Although the fossil is distinct from today’s splay-footed crickets, its general features differ very little, Heads said, revealing that the genus has been in a period of “evolutionary stasis” for at least the last 100 million years.

Other studies have found that the region where the fossil was found was most likely an arid or semi-arid monsoonal environment during the Early Cretaceous Period, Heads said, “suggesting that the habitat preferences of Schizodactylus have changed little in over 100 million years.”

Léa Leuzinger, a graduate student at the University of Fribourg, Switzerland, is a co-author on the study.

(Photo: Hwaja Goetz)

University of Illinois

Not Just for Raincoats

0 comentarios

Researchers from Northwestern University and the Massachusetts Institute of Technology (MIT) have studied individual water droplets and discovered a miniature version of the “water hammer,” an effect that produces the familiar radiator pipe clanging in older buildings.

In piping systems, the water hammer occurs when fluid is forced to stop abruptly, causing huge pressure spikes that can rupture pipe walls. Now, for the first time, the researchers have observed this force on the scale of microns: such pressure spikes can move through a water droplet, causing it to be impaled on textured superhydrophobic surfaces, even when deposited gently.

This insight into how droplets get stuck on surfaces could lead to the design of more effective superhydrophobic, or highly water-repellent, surfaces for condensers in desalination and steam power plants, de-icing for aircraft engines and wind turbine blades, low-drag surfaces in pipes and even raincoats. In certain cases, improved surfaces could improve energy efficiency by orders of magnitude. (About half of all electricity generated in the world comes from steam turbines.)

The research is published by the journal Physical Review Letters.

“We want to design surface textures that will cause the water to really hate those surfaces,” said Neelesh A. Patankar, associate professor of mechanical engineering at Northwestern’s McCormick School of Engineering and Applied Science. “Improving current hydrophobic materials could result in a 60 percent drag reduction in some applications, for example.”

Patankar collaborated with Kripa K. Varanasi, the d’Arbeloff Assistant Professor of Mechanical Engineering at MIT. The two are co-corresponding authors of the paper. Patankar initiated this study in which he and Varanasi led the analytical work, and the experiments were conducted at MIT in Varanasi’s lab. Other co-authors are MIT mechanical engineering graduate students Hyuk-Min Kwon and Adam Paxson.

In designing superhydrophobic surfaces, one goal is to produce surfaces much like the natural lotus leaf. Water droplets on these leaves bead up and roll off easily, taking any dirt with them. Contrary to what one might think, the surface of the leaves is rough, not smooth. The droplets sit on microscopic bumps, as if resting on a bed of nails.

“If a water droplet impales the grooves of this bumpy texture, it becomes stuck instead of rolling off,” Patankar said. “Such transitions are well known for small static droplets. Our study shows that the impalement of water is very sensitive to the dynamic ‘water hammer’ effect, which was not expected.”

To show this, the researchers imaged millimeter-scale water droplets gently deposited onto rough superhydrophobic surfaces. (The surfaces were made of silicon posts, with spacing from post edge to post edge ranging from 40 to 100 microns, depending on the experiment.) Because these drops were millimeter-scale and deposited gently, the prior understanding was that gravity alone is not strong enough to push the water into the roughness grooves. The Northwestern and MIT researchers are the first to show this is not true.

They observed that as a droplet settles down on the surface (due to the drop’s own weight) there is a rapid deceleration in the drop that produces a brief burst of high pressure, sending a wave through the droplet. The droplet is consequently pinned on the rough surface. That’s the powerful mini water hammer effect at work.
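
The size of that pressure burst can be estimated, to order of magnitude, with the classical Joukowsky water-hammer relation (pressure spike = density x sound speed x velocity change) and compared with the capillary pressure scale a texture can resist; the settling speed and the prefactor-free formulas below are illustrative assumptions, not the paper's fitted model:

```python
# Order-of-magnitude comparison (illustrative assumptions only, not the
# paper's fitted model): the classical Joukowsky water-hammer spike
# versus the capillary pressure scale a post texture can resist.
RHO = 1000.0   # density of water, kg/m^3
C = 1480.0     # speed of sound in water, m/s
GAMMA = 0.072  # surface tension of water, N/m

def joukowsky_pressure(delta_v: float) -> float:
    """Pressure spike (Pa) from abruptly stopping flow moving at delta_v."""
    return RHO * C * delta_v

def capillary_pressure_scale(post_spacing: float) -> float:
    """Rough anti-wetting pressure scale (Pa), ~ gamma / spacing."""
    return GAMMA / post_spacing

dv = 0.05        # assumed settling speed in m/s (hypothetical)
spacing = 50e-6  # post spacing in m, within the 40-100 micron range above
print(f"water-hammer spike:   {joukowsky_pressure(dv):8.0f} Pa")
print(f"capillary resistance: {capillary_pressure_scale(spacing):8.0f} Pa")
```

With these illustrative numbers the hammer spike exceeds the capillary resistance by roughly a factor of fifty, consistent with the observation that even gently deposited drops can be impaled.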

By understanding the underlying physics of this transition, the study reveals that there is actually a window of droplet sizes that avoid impalement. Although focused on drop deposition, this idea is quite general and applies to any scenario where the water velocity is changing on a short (less than a millisecond) time scale. This insight can lead to the design of more robust superhydrophobic surfaces that can resist water impalement even under the dynamic conditions typical in industrial setups.

“One way to reduce impalement is to design a surface texture that results in a surface that sustains extremely high pressures,” Patankar said. “It is the length scale of the roughness that is important.” To resist impalement, the height of a bump and the distance between bumps need to be just right. Hundreds of nanometer scale roughness can lead to robust surfaces.

“Our ultimate goal,” he added, “is the invention of textured surfaces such that a liquid in contact with it will, at least partially, vaporize next to the surface -- or sustain air pockets -- and self-lubricate. This is similar to how an ice skater glides on ice due to a cushion of thin lubricating liquid film between the skates and the ice. A critical step is to learn how to resist impalement of water on the roughness. Our work on water hammer-induced impalement is a crucial advance toward that goal of ultra-slippery vapor stabilizing surfaces.”

(Photo: Northwestern U.)

Northwestern University
