Thursday, December 31, 2009

ROCK-BREATHING BACTERIA COULD GENERATE ELECTRICITY AND CLEAN UP OIL SPILLS

0 comments

A discovery by scientists at the University of East Anglia could contribute to the development of systems that use domestic or agricultural waste to generate clean electricity.

In a study published by the leading scientific journal Proceedings of the National Academy of Sciences (PNAS), the researchers have demonstrated for the first time the mechanism by which some bacteria survive by ‘breathing rocks’.

The findings could be applied to help in the development of new microbe-based technologies such as fuel cells, or ‘bio-batteries’, powered by animal or human waste, and agents to clean up areas polluted by oil or uranium.

“This is an exciting advance in our understanding of bacterial processes in the Earth’s sub-surfaces,” said Prof David Richardson, of UEA’s School of Biological Sciences, who is leading the project.

“It will also have important biotechnological impacts. There is potential for these rock-breathing bacteria to be used to clean up environments contaminated with toxic organic pollutants such as oil or radioactive metals such as uranium. Use of these bacteria in microbial fuel-cells powered by sewage or cow manure is also being explored.”

The vast majority of the world’s habitable environments are populated by micro-organisms which, unlike humans, can survive without oxygen. Some of these micro-organisms are bacteria living deep in the Earth’s subsurface and surviving by ‘breathing rocks’ – especially minerals of iron.

Iron respiration is one of the most common respiratory processes in oxygen-free habitats and therefore has wide environmental significance.

Prof Richardson said: “We discovered that the bacteria can construct tiny biological wires that extend through the cell walls and allow the organism to directly contact, and conduct electrons to, a mineral. This means that the bacteria can release electrical charge from inside the cell into the mineral, much like the earth wire on a household plug.”

(Photo: UEA)

University of East Anglia

SMALLER IS BETTER FOR FINGER SENSITIVITY

0 comments
People who have smaller fingers have a finer sense of touch, according to new research in the Dec. 16 issue of The Journal of Neuroscience. The finding explains why women tend to have better tactile acuity than men: on average, women have smaller fingers.

“Neuroscientists have long known that some people have a better sense of touch than others, but the reasons for this difference have been mysterious,” said Daniel Goldreich, PhD, of McMaster University in Ontario, one of the study’s authors. “Our discovery reveals that one important factor in the sense of touch is finger size.”

To learn why the sexes have different finger sensitivity, the authors first measured index fingertip size in 100 university students. Each student’s tactile acuity was then tested by pressing progressively narrower parallel grooves against a stationary fingertip — the tactile equivalent of the optometrist’s eye chart. The authors found that people with smaller fingers could discern tighter grooves.

“The difference between the sexes appears to be entirely due to the relative size of the person’s fingertips,” said Ethan Lerner, MD, PhD, of Massachusetts General Hospital, who is unaffiliated with the study. “So, a man with fingertips that are smaller than a woman’s will be more sensitive to touch than the woman.”

The authors also explored why smaller fingers have a keener sense of touch. Tinier digits likely have more closely spaced sensory receptors, the authors concluded. Several types of sensory receptors line the skin’s interior, and each detects a specific kind of outside stimulation. Some receptors, named Merkel cells, respond to static indentations (like pressing parallel grooves), while others capture vibrations or movement.

When the skin is stimulated, activated receptors signal the central nervous system, where the brain processes the information and generates a picture of what a surface “feels” like. Much like pixels in a photograph, each skin receptor sends an aspect of the tactile image to the brain — more receptors per inch supply a clearer image.

To find out whether receptors are more densely packed in smaller fingers, the authors measured the distance between sweat pores in some of the students, because Merkel cells cluster around the bases of sweat pores. People with smaller fingers had greater sweat pore density, which means their receptors are probably more closely spaced.

“Previous studies from other laboratories suggested that individuals of the same age have about the same number of vibration receptors in their fingertips. Smaller fingers would then have more closely spaced vibration receptors,” Goldreich said. “Our results suggest that this same relationship between finger size and receptor spacing occurs for the Merkel cells.”
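The scaling Goldreich describes is easy to illustrate with a rough calculation. The sketch below is purely illustrative (the receptor counts and fingertip widths are hypothetical, not measurements from the study): if a fingertip carries a roughly fixed number of receptors, their spacing, and hence the finest resolvable groove, shrinks in proportion to finger size.

# Illustrative only: hypothetical numbers, not data from the study.
# If a fingertip carries a roughly fixed number of receptors across its
# width, the spacing between them scales with fingertip size, and so does
# the narrowest groove spacing the finger can resolve.

def receptor_spacing_mm(fingertip_width_mm, receptors_across=50):
    """Average spacing between receptors along one axis of the fingertip."""
    return fingertip_width_mm / receptors_across

for width_mm in (14.0, 16.0, 18.0):  # smaller to larger fingertips
    spacing = receptor_spacing_mm(width_mm)
    print(f"fingertip {width_mm:.0f} mm wide -> receptor spacing ~{spacing:.2f} mm")
# Smaller fingertips pack the same receptors more densely, which is the
# proposed reason they resolve proportionally finer gratings.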

Whether the total number of Merkel cell clusters remains fixed in adults, and how the sense of touch changes in children as they age, are still unknown. Goldreich and his colleagues plan to determine how tactile acuity changes as a finger grows and receptors grow farther apart.

Society for Neuroscience

UF RESEARCHER HELPS REVEAL ANCIENT ORIGINS OF MODERN OPOSSUM

0 comments

A University of Florida researcher has co-authored a study tracing the evolution of the modern opossum back to the extinction of the dinosaurs and finding evidence to support North America as the center of origin for all living marsupials.

The study, published in PLoS ONE on Dec. 16, shows that peradectids, a family of marsupials known from fossils mostly found in North America and Eurasia, are a sister group of all living opossums. The findings are based in part on high-resolution CT scans of a 55-million-year-old skull found in freshwater limestone from the Bighorn Basin of Wyoming.

“The extinction of the dinosaurs was a pivotal moment in the evolution of mammals,” said Jonathan Bloch, study co-author and associate curator of vertebrate paleontology at UF’s Florida Museum of Natural History. “We’re tracing the beginnings of a major group of mammals that began in North America.”

Opossum-like peradectids first appeared on the continent about 65 million years ago, at the time of the Cretaceous–Paleogene extinction event, which killed the dinosaurs.

“North America is a critical area for understanding marsupial and opossum origins because of its extensive and varied fossil record,” said lead author Inés Horovitz, an assistant adjunct professor at the University of California, Los Angeles. “Unfortunately, most of its species are known only from teeth.”

The study also analyzes two 30-million-year-old skeletons of Herpetotheriidae, the sister group of all living marsupials.

Based on fossil evidence from the skull and two skeletons, the study’s authors concluded the evolutionary split between the ancestor of opossums and the ancestor of all other living marsupials occurred at least 65 million years ago, Horovitz said.

Marsupials migrated between North and South America until the two continents separated after the end of the Cretaceous period. Marsupials in South America diversified and also migrated into Antarctica and Australia, which were still connected at that time, Bloch said.

North American marsupials went extinct during the early Miocene, about 20 million years ago. But after the Isthmus of Panama emerged to reconnect North and South America 3 million years ago, two marsupials made it back to North America: the Virginia opossum (Didelphis virginiana), a common resident in the Southeast today, and the southern opossum (Didelphis marsupialis), which lives as far north as Mexico.

The study describes a new peradectid species, Mimoperadectes houdei, based on a relatively complete fossil skull. The high-resolution CT scan of the skull gave researchers a large amount of information about the animal’s internal anatomy. The ear, in particular, provides researchers with information on skull anatomy and clues about the animal’s locomotion, Bloch said.

The scan showed the new species shared enough common traits with living opossums to indicate an evolutionary relationship. Some predictions about that relationship could have been made from fossil teeth, Bloch said, “but this provides a much stronger foundation for that conclusion.”

Most North American marsupials living in the Paleocene and early Eocene (56 million to 48 million years ago) were small-bodied animals. But M. houdei approached the body size of some opossums living today.

“You would probably recognize it as an opossum, but it wouldn’t look quite right,” Bloch said.

The skull came from the same limestone deposits in Wyoming as the primitive primate skull Bloch and other researchers used to map an early primate brain with CT scans in a study published earlier this year.

“In parts of North America today, opossums are one of the most commonly observed mammals around,” Bloch said. “This fossil skull shows its roots going back to the extinction of the dinosaurs. This is literally the fossil that shows us the ancestry of that animal.”

The study’s examination of the two skeletons gives a first glimpse into the form and structure of primitive marsupials and shows that they were more terrestrial than modern opossums. The skeletons came from the late Oligocene and were found in the White River Badlands of Wyoming.

(Photo: Jeff Gage)

University of Florida

3-D SOLAR CELL THAT USES "TOWERS" TO BOOST EFFICIENCY WINS PATENTS

0 comments

A three-dimensional solar cell design that uses micron-scale “towers” to capture nearly three times as much light as flat solar cells made from the same materials has been awarded broad patent protection in both China and Australia. Modeling suggests that the 3-D cell could boost power production by as much as 300 percent compared to conventional solar cells.

Because it can capture more power from a given area, the 3-D design could be useful for powering satellites, cell phones, military equipment and other applications that have a limited surface area. Developed at the Georgia Tech Research Institute (GTRI), the "three dimensional multi-junction photovoltaic device" uses its 3-D surface structure to increase the likelihood that every photon striking it will produce energy.

"One problem with conventional flat solar cells is that the sunlight hits a flat surface and can bounce off, so the light only has one chance to be absorbed and turned into electricity," explained John Bacon, president of IP2Biz®, an Atlanta company that has licensed the technology from GTRI. "In the GTRI 3-D solar cell, we build a nanometer-scale version of Manhattan, with streets and avenues of tiny light-capturing structures similar to tall buildings. The sunlight bounces from building to building and produces more electricity."

The arrays of towers on the 3-D solar cell can increase the surface area by several thousand percent, depending on the size and density of the structures.

"Conventional cells have to be very large to make adequate amounts of electricity, and that limits their applications," Bacon explained. "The large surface area of our 3-D cell means that applications from satellites to cell phones will be more practical since we can pack so much light gathering power into a small footprint."

The three-dimensional structure also means that the cells don't have to be aimed directly at the sun to capture sunlight efficiently, Bacon added. Conventional solar cells work best when the sunlight hits them at a narrow range of angles, but the new 3-D system remains efficient regardless of the angle at which the light hits.

The tower structures on the GTRI solar cells are about 100 microns tall, 40 microns by 40 microns square, 50 microns apart — and grown from arrays containing millions of vertically aligned carbon nanotubes. The nanotubes primarily serve as the structure on which current-generating photovoltaic p/n coatings are applied.
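Taken at face value, the quoted tower dimensions already imply a large gain in light-collecting area. The sketch below is a rough geometric estimate; the packing assumption (one tower per 90-by-90-micron unit cell) is mine, not GTRI's.

# Rough geometric estimate of the extra area the towers add, using the
# dimensions quoted above. The packing assumption (one tower per unit
# cell of side 40 + 50 microns) is illustrative, not a GTRI figure.

tower_height = 100.0   # microns
tower_side = 40.0      # microns (square cross-section)
tower_gap = 50.0       # microns between neighbouring towers

cell_side = tower_side + tower_gap             # 90-micron unit cell
flat_area = cell_side ** 2                     # flat footprint of one cell
sidewall_area = 4 * tower_side * tower_height  # four vertical tower faces

enhancement = (flat_area + sidewall_area) / flat_area
print(f"flat footprint: {flat_area:.0f} um^2, added sidewalls: {sidewall_area:.0f} um^2")
print(f"surface-area enhancement: ~{enhancement:.1f}x")
# Taller or more densely packed towers push this factor far higher, which
# is how the surface area can grow by "several thousand percent".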

"The carbon nanotubes are like the framing inside of buildings, and the photovoltaic materials are like the outer skin of the buildings," said Tom Smith, president of 3-D Solar LLC, a company formed to commercialize the cells. "Within the three-dimensional structures, multiple materials could be used to create the physical framing. Carbon nanotubes were used in the original solar cells, but they are not required for the technology to work."

The 3-D solar cells were developed in the laboratory of Jud Ready, a GTRI senior research engineer. Tests comparing the 3-D solar cells produced in Ready's lab with traditional planar cells produced from the same materials showed an increase in power generation, Smith said.

The researchers chose to make their prototype cells from cadmium materials because they were familiar with them from other research. However, a broad range of photovoltaic materials could also be used, and selecting the best material for specific applications will be the goal of future research.

Fabrication of the cells begins with a silicon wafer, which also serves as the solar cell's bottom junction. The researchers first coat the wafer with a thin layer of iron using a photolithography process that can create a wide variety of patterns. The patterned wafer is then placed into a furnace heated to approximately 700 degrees Celsius.

Hydrocarbon gases are then flowed into the furnace, where the carbon and hydrogen separate. In a process known as chemical vapor deposition, the carbon grows arrays of multi-walled carbon nanotubes atop the patterns created by the iron particles.

Once the carbon nanotube towers have been grown, the researchers use a process known as molecular beam epitaxy to coat the nanotube arrays with cadmium telluride (CdTe) and cadmium sulfide (CdS), which serve as the p-type and n-type photovoltaic layers. Atop that, a thin coating of indium tin oxide, a clear conducting material, is added to serve as the cell's top electrode.

In the finished solar cells, the carbon nanotube arrays serve both as support for the 3-D arrays and as a conductor connecting the photovoltaic materials to the silicon wafer.

The 3-D solar cells were described in the March 2007 issue of the journal JOM, published by the Minerals, Metals and Materials Society, and in the Journal of Applied Physics in 2008. The research leading to their development was supported by the Air Force Office of Scientific Research and the Air Force Research Laboratory.

Beyond the patents in China and Australia, IP2Biz has applied for protection in the United States, Canada, Europe, Korea and India, Smith noted. The patents granted so far apply to any photovoltaic application in which three dimensional structures are used to capture light bouncing off them, he added.

"The 3-D photovoltaic cell could be of great value in satellite, cell phone and defense applications given its order of magnitude reduction in footprint, coupled with the potential for increased power production compared to planar cells," Smith added. "We are very pleased with the level of interest in licensing or acquiring this innovation as means of addressing the world's growing need for energy."

(Photo: GIT)

Georgia Institute of Technology

EARTH'S ATMOSPHERE CAME FROM OUTER SPACE, FIND SCIENTISTS

0 comments
The gases which formed the Earth's atmosphere - and probably its oceans - did not come from inside the Earth but from outer space, according to a study by University of Manchester and University of Houston scientists.

The report published in the prestigious international journal 'Science' means that textbook images of ancient Earth with huge volcanoes spewing gas into the atmosphere will have to be rethought.

According to the team, the age-old view that volcanoes were the source of the Earth's earliest atmosphere must be put to rest.

Using world-leading analytical techniques, the team of Dr Greg Holland, Dr Martin Cassidy and Professor Chris Ballentine tested volcanic gases to uncover the new evidence.

The research was funded by the Natural Environment Research Council (NERC).

"We found a clear meteorite signature in volcanic gases," said Dr Greg Holland the project's lead scientist.

"From that we now know that the volcanic gases could not have contributed in any significant way to the Earth's atmosphere.

"Therefore the atmosphere and oceans must have come from somewhere else, possibly from a late bombardment of gas and water rich materials similar to comets.

"Until now, no one has had instruments capable of looking for these subtle signatures in samples from inside the Earth - but now we can do exactly that."

The techniques enabled the team to measure tiny quantities of the unreactive volcanic trace gases krypton and xenon, which revealed an isotopic 'fingerprint' matching that of meteorites and distinct from that of 'solar' gases.

The study is also the first to establish the precise composition of the krypton present in the Earth's mantle.

Project director Prof Chris Ballentine of The University of Manchester, said: "Many people have seen artist's impressions of the primordial Earth with huge volcanoes in the background spewing gas to form the atmosphere.

"We will now have to redraw this picture."

University of Manchester

ANCIENT PYGMY SEA COW DISCOVERED

0 comments

The discovery of a Middle Eocene (48.6-37.2 million years ago) sea cow fossil by McGill University professor Karen Samonds has culminated in the naming of a new species.

This primitive "dugong" is among the world's first fully aquatic sea cows, having evolved from terrestrial herbivores that began exploiting coastal waters. Within this ancient genus, the newly discovered species is unusual: it is the first species known from the southern hemisphere (its closest relatives are from Egypt and India), and it is extremely primitive in its skull morphology and dental adaptations. The fossil is a pivotal step in understanding Madagascar's evolutionary history, as it represents the first fossil mammal ever named from the 80-million-year gap in Madagascar's fossil record.

"The fossils of this ancient sea cow are unique in that it has a full set of relatively unspecialized teeth whereas modern sea cows have a reduced dentition specialized for eating sea grass, and most fossil species already show some degree of reduction. It may also be the first fully aquatic sea cow; confirmation will depend on recovering more of the skeleton, especially its limbs," says Samonds.

Samonds is a Curator at the Redpath Museum and an Assistant Professor in the Departments of Anatomy and Cell Biology, and the Faculty of Dentistry. Her discovery may be the tip of the iceberg in unlocking the secrets of the 80-million-year gap in Madagascar's fossil record. The presence of fossil sea cows, crocodiles, and turtles (which are generally associated with coastal environments) suggests that this fossil locality preserves an environment that was close to the coast, or even in an estuary (river mouth). These sediments may potentially yield fossils of marine, terrestrial and freshwater vertebrates - animals that lived in the sea as well as those that lived in forests, grasslands and rivers close to the ocean. Dr. Samonds plans to continue collecting fossils at this site, starting with a National Geographic-funded expedition in summer 2010.

"My hope with the discovery of these fossils is that they will illuminate how, when and from where Madagascar's modern animals arrived," said Samonds, "helping us understand how Madagascar accumulated such a bizarre and unique set of modern animals."

(Photo: McGill U.)

McGill University

AN ADVANCE IN SUPERCONDUCTING MAGNET TECHNOLOGY OPENS THE DOOR FOR MORE POWERFUL COLLIDERS

0 comments

Preparing for as much as a 10-fold increase in the Large Hadron Collider’s luminosity within the next decade, U.S. scientists and engineers have demonstrated a powerful magnet based on an advanced superconducting material, which can produce magnetic fields strong enough to focus intense proton beams in the LHC’s upgraded interaction regions.

The Large Hadron Collider (LHC) at CERN has just started producing collisions, but scientists and engineers have already made significant progress in preparing for future upgrades beyond the collider’s nominal design performance, including a 10-fold increase in collision rates by the end of the next decade and, eventually, higher-energy beams.

In a test on December 4, a focusing magnet built by members of the U.S. Department of Energy’s multi-laboratory LHC Accelerator Research Program (LARP), using an advanced superconducting material, achieved the goal of a magnetic field strong enough to focus intense proton beams in the upgraded LHC interaction regions.

“This success has been made possible by the enthusiasm and dedication of many scientists, engineers, and technicians at the collaborating laboratories,” said Eric Prebys of the Fermi National Accelerator Laboratory, who heads LARP, “and by the guidance and continuous support of the U.S. Department of Energy and the encouragement and contributions of CERN and the entire accelerator magnet community.”

LARP is a collaboration of Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory, and the SLAC National Accelerator Laboratory, founded by DOE in 2003 to address the challenge of planned upgrades that will significantly increase the LHC’s luminosity.

Increased luminosity will mean more collision events in the LHC’s interaction regions; the major experiments will thus be able to collect more data in less time. But it will also mean that the “inner triplet” magnets, which focus the beams to tiny spots at the interaction regions and are within 20 meters of the collision points, will be subjected to even more radiation and heat than they are presently designed to withstand.

The superconducting inner triplet magnets now in place at the LHC operate at the limits of well-established niobium-titanium (NbTi) magnet technology. One of the LARP goals is to develop upgraded magnets using a different superconducting material, niobium tin (Nb3Sn). Niobium tin is superconducting at a higher temperature than niobium titanium and therefore has a greater tolerance for heat; it can also be superconducting at a magnetic field more than twice as strong.

Unlike niobium titanium, however, niobium tin is brittle and sensitive to pressure; to become a superconductor when cold it must be reacted at very high temperatures, 650 to 700 degrees Celsius. Advanced magnet design and fabrication methods are needed to meet these challenges.

The Department of Energy’s Office of High Energy Physics (HEP) has long supported niobium-tin magnet research at several national laboratories through its Advanced Accelerator Technology Program. The HEP Conductor Development Program, a collaboration among national labs, universities, and industry created in 1998, was able to double the performance of niobium tin at high fields, which led to the fabrication of model coils up to four meters long and short dipole magnets with fields up to 16 tesla – about twice the nominal field of the LHC – the necessary preconditions for the LARP program.

The LARP effort initially centered on a series of short quadrupole models at Fermilab and Berkeley Lab and, in parallel, a four-meter-long magnet based on racetrack coils, built at Brookhaven and Berkeley Lab. The next step involved the combined resources of all three laboratories: the fabrication of a long, large-aperture quadrupole magnet.

In 2005 DOE, CERN, and LARP agreed to set a goal of reaching, before the end of 2009, a gradient, or rate of increase in field strength, of 200 tesla per meter (200 T/m) in a four-meter-long superconducting quadrupole magnet with a 90-millimeter bore for housing the beam pipe.
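A quadrupole's field grows linearly with distance from its axis, so the target gradient and bore size together set the field the superconductor must support at the edge of the aperture. The quick check below uses only the numbers quoted above; it is an illustration, not a LARP specification.

# In a quadrupole the field magnitude rises linearly off the axis,
# B(r) = G * r, so the target gradient and bore radius set the field at
# the edge of the aperture. Numbers from the paragraph above; the peak
# field on the coil itself is somewhat higher.

gradient = 200.0        # tesla per meter (the LARP goal quoted above)
bore_diameter = 0.090   # meters (90-millimeter bore)

field_at_bore_edge = gradient * (bore_diameter / 2.0)
print(f"Field at the edge of the bore: {field_at_bore_edge:.1f} T")
# Roughly 9 T at the bore radius, already near the practical limit of
# niobium-titanium and squarely in the territory where niobium tin is needed.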

This goal was met on December 4 by LARP’s first “long quadrupole shell” model magnet. The magnet’s superconducting coils performed well, as did its mechanical structure, based on a thick aluminum cylinder (shell) that supports the superconducting coils against the large forces generated by high magnetic fields and electrical currents. The magnet’s ability to withstand quenches – sudden transitions to normal conductivity with resulting heating – also was excellent.

“Congratulations on behalf of CERN for this achievement, a milestone both toward the LHC luminosity upgrade and for accelerator technology advancement in general, made possible by the high technical quality of the LARP teams and leadership,” said Lucio Rossi, head of the Magnets, Superconductors, and Cryostats group in CERN’s Technology Department, in a message to Fermilab’s Giorgio Ambrosio, head of the LARP Long Quadrupole team, which performed the successful development and test.

Rossi also praised the “strategic vision of leaders in the DOE laboratories” in initiating LARP, noting the contributions of Fermilab’s Jim Strait and Peter Limon, Berkeley Lab’s Steve Gourlay, and Bruce Strauss from DOE. “From my perspective, without them LARP would not have been started.”

Although the successful test of the long model was a major milestone, it is only one of several steps needed to fully qualify the new technology for use in the LHC. One goal is to further increase the field gradient in the long quadrupole, both to explore the limits of the technology and to reproduce the performance levels demonstrated in short models. A second goal is to address other critical accelerator requirements, such as field quality and alignment, through a new series of models with an even larger aperture (120 millimeters).

The long quadrupole shell magnet’s conductor – high-performance niobium-tin wire meeting stringent requirements – was manufactured by Oxford Superconducting Technology of New Jersey. The wire was cabled and insulated at Berkeley Lab and qualified at Brookhaven and Fermilab. The superconducting coils were wound at Fermilab and underwent high temperature reaction at Brookhaven and Fermilab, and their instrumentation was completed at Berkeley Lab. The magnet supporting structure was designed and pre-assembled at Berkeley Lab. The final magnet assembly was done at Berkeley Lab, and the cold test was performed at Fermilab’s Vertical Magnet Test Facility.

The Long Quadrupole task leaders are, for coil fabrication, Fred Nobrega of Fermilab and Jesse Schmalzle of Brookhaven; for the supporting structure and magnet assembly, Paolo Ferracin of Berkeley Lab; for instrumentation and quench protection, Helene Felice of Berkeley Lab; and for test preparations and test, Guram Chlachidze of Fermilab. Peter Wanderer of Brookhaven led the effort during its most critical phase and was recently succeeded by GianLuca Sabbi of Berkeley Lab, head of the LARP magnet research and development program.

(Photo: LBNL)

SUZAKU CATCHES RETREAT OF A BLACK HOLE'S DISK

0 comments

Studies of one of the galaxy's most active black-hole binaries reveal a dramatic change that will help scientists better understand how these systems expel fast-moving particle jets.

Binary systems where a normal star is paired with a black hole often produce large swings in X-ray emission and blast jets of gas at speeds exceeding one-third that of light. What fuels this activity is gas pulled from the normal star, which spirals toward the black hole and piles up in a dense accretion disk.

"When a lot of gas is flowing, the dense disk reaches nearly to the black hole," said John Tomsick at the University of California, Berkeley. "But when the flow is reduced, theory predicts that gas close to the black hole heats up, resulting in evaporation of the innermost part of the disk." Never before have astronomers shown an unambiguous signature of this transformation.

To look for this effect, Tomsick and an international group of astronomers targeted GX 339-4, a low-mass X-ray binary located about 26,000 light-years away in the constellation Ara. There, every 1.7 days, an evolved star no more massive than the sun orbits a black hole estimated at 10 solar masses. With four major outbursts in the past seven years, GX 339-4 is among the most dynamic binaries in the sky.

In September 2008, nineteen months after the system's most recent outburst, the team observed GX 339-4 using the orbiting Suzaku X-ray observatory, which is operated jointly by the Japan Aerospace Exploration Agency and NASA. At the same time, the team also observed the system with NASA's Rossi X-ray Timing Explorer satellite.

Instruments on both satellites indicated that the system was faint but in an active state, when black holes are known to produce steady jets. Radio data from the Australia Telescope Compact Array confirmed that GX 339-4's jets were indeed powered up when the satellites observed.

Despite the system's faintness, Suzaku was able to measure a critical X-ray spectral line produced by the fluorescence of iron atoms. "Suzaku's sensitivity to iron emission lines and its ability to measure the shapes of those lines let us see a change in the accretion disk that only happens at low luminosities," said team member Kazutaka Yamaoka at Japan's Aoyama Gakuin University.

X-ray photons emitted from disk regions closest to the black hole naturally experience stronger gravitational effects. The X-rays lose energy and produce a characteristic signal. At its brightest, GX 339-4's X-rays can be traced to within about 20 miles of the black hole. But the Suzaku observations indicate that, at low brightness, the inner edge of the accretion disk retreats as much as 600 miles.
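For scale, it helps to compare those distances with the black hole's gravitational radius. The short calculation below assumes the 10-solar-mass estimate quoted earlier and is illustrative only.

# Scale check: the gravitational radius GM/c^2 of a 10-solar-mass black
# hole (the mass estimate quoted earlier in the article). Illustrative only.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
mile = 1609.34      # meters per mile

r_g = G * (10 * M_sun) / c**2
print(f"gravitational radius: {r_g / 1000:.1f} km (~{r_g / mile:.1f} miles)")
print(f"20 miles  is ~{20 * mile / r_g:.0f} gravitational radii")
print(f"600 miles is ~{600 * mile / r_g:.0f} gravitational radii")
# About 15 km: the bright-state disk edge of "about 20 miles" lies only a
# couple of gravitational radii out, while 600 miles is roughly 65 radii,
# illustrating how far the dense inner disk retreats at low brightness.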

"We see emission only from the densest gas, where lots of iron atoms are producing X-rays, but that emission stops close to the black hole -- the dense disk is gone," explained Philip Kaaret at the University of Iowa. "What's really happening is that, at low accretion rates, the dense inner disk thins into a tenuous but even hotter gas, rather like water turning to steam."

The dense inner disk has a temperature of about 20 million degrees Fahrenheit, but the thin evaporated disk may be more than a thousand times hotter.

The study, which appears in the Dec. 10 issue of The Astrophysical Journal Letters, confirms the presence of low-density accretion flow in these systems. It also shows that GX 339-4 can produce jets even when the densest part of the disk is far from the black hole.

"This doesn't tell us how jets form, but it does tell us that jets can be launched even when the high-density accretion flow is far from the black hole," Tomsick said. "This means that the low-density accretion flow is the most essential ingredient for the formation of a steady jet in a black hole system."

(Photo: ESO/L. Calçada)

NASA

GREENLAND GLACIERS: WHAT LIES BENEATH

0 comments

Scientists who study the melting of Greenland’s glaciers are discovering that water flowing beneath the ice plays a much more complex role than they previously imagined.

Researchers previously thought that meltwater simply lubricated ice against the bedrock, speeding the flow of glaciers out to sea.

Now, new studies have revealed that the effect of meltwater on acceleration and ice loss -- through fast-moving outlet glaciers that connect the inland ice sheet to the ocean -- is much more complex. This is because a kind of plumbing system evolves over time at the base of the ice, expanding and shrinking with the volume of meltwater.

Researchers are now developing new low-cost technologies to track the flow of glaciers and get a glimpse of what lies beneath the ice.

As ice melts, water trickles down into the glacier through crevices large and small, and eventually forms vast rivers and lakes under the ice, explained Ian Howat, assistant professor of earth sciences at Ohio State University. Researchers once thought that this sub-glacial water was to blame for sudden speed-ups of outlet glaciers along the Greenland coasts.

“We’ve come to realize that sub-glacial meltwater is not responsible for the big accelerations that we’ve seen for the last ten years,” Howat said. “Changes in the glacial fronts, where the ice meets the ocean, are the real key.”

“That doesn’t mean that meltwater is not important,” he continued. “It plays a role along these glacial fronts -- it’s just a very complex role, one that makes it hard for us to predict the future.”

In a press conference at the American Geophysical Union (AGU) meeting in San Francisco on Wednesday, December 16, 2009, Howat joined colleagues from the University of Colorado-Boulder/NOAA Cooperative Institute for Research in Environmental Sciences (CIRES) and NASA’s Jet Propulsion Laboratory (JPL) to discuss three related projects -- all of which aim to uncover how this meltwater interacts with ice and the ocean.

Their work has implications for ice loss elsewhere in the world -- including Antarctica -- and could ultimately lead to better estimates of future sea level rise due to climate change.

Howat leads a team of researchers who are planting inexpensive global positioning system (GPS) devices on the ice in Greenland and Alaska to track glacial flow. Designed to transmit their data off the ice, these systems have to be inexpensive, because there’s a high likelihood that they will never be recovered from the highly crevassed glaciers.

Howat will describe the team’s early results at the AGU meeting, and give an overview of what researchers have learned about meltwater so far.

John Adler, a doctoral student at CIRES, works to calculate the volume of water in lakes on the top of the ice sheet. These lakes periodically drain, and the entire water volume disappears into the ice. He uses small unmanned aerial vehicles to measure the ice’s surface roughness -- an indication of where cracks may form to enable this drainage to happen. Other members of his team are releasing GPS-tagged autonomous probes into the meltwater itself, to follow the water all the way down to the base of the ice sheet and out to sea.

“My tenet is pushing the miniaturization of technology, so that small autonomous platforms -- in the sea, on the surface, or in the air -- can reliably gather scientific information in remote regions,” Adler said.

All these efforts require cutting-edge technology, and that’s where Alberto Behar of JPL comes in. An Investigation Scientist on the upcoming Mars rover project, Behar designs the GPS units that will give researchers the data they need.

Howat’s team placed six units on outlet glaciers in Greenland last year, and this year they placed three in Greenland and three in Alaska. The units offer centimeter-scale measurements of ice speed, and Behar designed the power and communications systems to keep the overall cost per unit as inexpensive as possible.

Howat has found that glacial meltwater at the base of the ice sheet has little influence on ice loss along the coast -- most of the time.

All over Greenland, meltwater collects beneath the ice, gradually carving out an intricate network of passageways called moulins. The moulins form an ever-changing plumbing system that regulates where water collects between the ice and bedrock at different times of the year. According to Howat, meltwater increases as ice melts in the summer, and decreases as water re-freezes in winter.

In the early summer, the sudden influx of water overwhelms the subglacial drainage system, causing the water pressure to increase and the ice to lift off its bed and flow faster, to the tune of 100 meters per year, he said. The water passageways quickly expand, however, and reduce the water pressure so that by mid-summer the glaciers are flowing slowly again.

Inland, this summertime boost in speed is very noticeable, since the glaciers are moving so slowly in general.

But outlet glaciers along the coast are already flowing out to sea at rates as high as 10 kilometers per year -- a rate too high to be caused by the meltwater.

“So you have this inland ice moving slowly, and you have these outlet glaciers moving 100 times faster. Those outlet glaciers are feeling a small acceleration from the meltwater, but overall the contribution is negligible,” Howat said.

His team looked for correlations between times of peak meltwater in the summer and times of sudden acceleration in outlet glaciers, and found none. “Some of these outlet glaciers accelerated in the wintertime, and some accelerated over long periods of time. The changes didn’t correlate with any time that you would expect there to be more melt,” he added.

So if meltwater is not responsible for rapidly moving outlet glaciers, then what is responsible? Howat suspects that the ocean is the cause.

Through computer modeling, he and his colleagues have determined that friction between the glacial walls and the fjords that surround them is probably what holds outlet glaciers in place, and sudden increases in ocean water temperature cause the outlet glaciers to speed up.

Howat did point out two cases in which meltwater can have a dramatic effect on ice loss along the coast: it can expand within cracks to form stress fractures, or it can bubble out from under the base of the ice sheet and stir up the warmer ocean water. Both circumstances can cause large pieces of the glacier to break off.

At one point, he and his colleagues witnessed the latter effect first hand. They detected a sudden decrease of sub-glacial meltwater inland, only to see a giant plume of dirty water burst out from under the ice at the nearby water’s edge.

The dirty water was freshwater -- glacial meltwater. It sprayed out from between the glacier and the bedrock “like a fire hose,” Howat said. Since saltwater is more dense than freshwater, the freshwater bubbled straight up to the surface. “This was the equivalent of the pipes bursting on all that plumbing beneath the ice, releasing the pressure.”

That kind of turbulence stirs up the warm ocean water, and can cause more ice to melt, he said.

“So you can’t just say, ‘if you increase melting, you increase glacial speed.’ The relationship is much more complex than that, and since the plumbing system evolves over time, it’s especially hard to pin down.”

(Photo: Alberto Behar, Jet Propulsion Laboratory)

Ohio State University

Wednesday, December 30, 2009

POLLUTION ALTERS ISOLATED THUNDERSTORMS

0 comments

New climate research reveals how wind shear — the same atmospheric conditions that cause bumpy airplane rides — affects how pollution contributes to isolated thunderstorm clouds. Under strong wind shear conditions, pollution hampers thunderhead formation. But with weak wind shear, pollution does the opposite and makes storms stronger.

The work improves climate scientists' understanding of how aerosols — tiny unseen particles that make up pollution — contribute to isolated thunderstorms and the climate cycle. How aerosols and clouds interact is one of the least understood aspects of climate, and this work allows researchers to better model clouds and precipitation.

"This finding may provide some guidelines on how man-made aerosols affect the local climate and precipitation, especially for the places where 'afternoon showers' happen frequently and affect the weather system and hydrological cycle," said atmospheric scientist Jiwen Fan of the Department of Energy's Pacific Northwest National Laboratory. "Aerosols in the air change the cloud properties, but the changes vary from case to case. With detailed cloud modeling, we found an important factor regulating how aerosols change storms and precipitation."

Fan discussed her results Thursday, December 17 at the 2009 American Geophysical Union meeting. Her study uses data from skies over Australia and China.

The results provide insight into how to incorporate these types of clouds and conditions into computational climate models to improve their accuracy.

Deep convective clouds reflect a lot of the sun's energy back into space and return water that has evaporated back to the surface as rain, making them an important part of the climate cycle. The clouds form as lower air rises upwards in a process called convection. The updrafts carry aerosols that can seed cloud droplets, building a storm.

Previous studies produced conflicting results on how aerosols from pollution affect storm development: in some cases, more pollution led to stronger storms, while in others it led to weaker ones. Fan and her colleagues used computer simulations to tease out what was going on. Of particular interest was a weather phenomenon known as wind shear, where horizontal wind speed and direction vary at different heights. Wind shear can be found near weather fronts and is known to influence storms.

The team ran a computer model with atmospheric data collected in northern Australia and eastern China. They simulated the development of eight deep convective clouds by varying the concentration of aerosols, wind shear, and humidity. Then they examined updraft speed and precipitation.

In the first simulations, the team found that in scenarios containing strong wind shear, more pollution curbed convection. When wind shear was weak, more pollution produced a stronger storm. But convection also changed depending on humidity, so the team wanted to see which effect — wind shear or humidity — was more important.

The team took a closer look at two cloud-forming scenarios: one that ended up with the strongest enhancement in updraft speed and one with the weakest. For each scenario, they created a humid and a dry condition, as well as a strong and weak wind shear condition. The trend in the different conditions indicated that wind shear had a much greater effect on updraft strength than humidity.

When the team measured the expected rainfall, they found that the pattern of rainfall followed the pattern of updraft speed. That is, with strong wind shear, more pollution led to less rainfall. When wind shear was weak, more pollution created stronger storms and more rain — up to a certain point. Beyond a peak level in weak wind shear conditions, pollution led to decreased storm development.

Additional analyses described the physics underlying these results. Water condensing onto aerosol particles releases heat, which contributes to convection and increases updraft speed. The evaporation of water from the cloud droplets cools the air, which reduces the updrafts. In strong wind shear conditions, the cooling effect is always larger than the heating effect, leading to a reduction in updraft speed.

(Photo: UCAR/Carlyle Calvin)

Pacific Northwest National Laboratory

BLACK HOLES IN STAR CLUSTERS STIR UP TIME AND SPACE

0 comments

Within a decade scientists could be able to detect the merger of tens of pairs of black holes every year, according to a team of astronomers at the University of Bonn’s Argelander-Institut fuer Astronomie, who publish their findings in a paper in Monthly Notices of the Royal Astronomical Society. By modelling the behaviour of stars in clusters, the Bonn team find that they are ideal environments for black holes to coalesce. These merger events produce ripples in time and space (gravitational waves) that could be detected by instruments from as early as 2015.

Clusters of stars are found throughout our own and other galaxies, and most stars are thought to have formed in them. The smallest, loosely bound ‘open clusters’ have only a few stellar members, whilst the largest tightly bound ‘globular clusters’ have as many as several million stars. The highest mass stars in clusters use up their hydrogen fuel relatively quickly (in just a few million years). The cores of these stars collapse, leading to a violent supernova explosion where the outer layers of the star are expelled into space. The explosion leaves behind a stellar remnant with a gravitational field so strong that not even light can escape – a black hole.

When stars are as close together as they are in clusters, collisions and mergers between stars of all types, including black holes, remain rare events but become much more likely. The black holes sink to the centre of the cluster, where a core made up entirely of black holes forms. In the core, the black holes experience a range of interactions, sometimes forming binary pairs and sometimes being ejected from the cluster completely.

Now Dr Sambaran Banerjee, Alexander von Humboldt postdoctoral fellow, has worked with his University of Bonn colleagues Dr Holger Baumgardt and Professor Pavel Kroupa to develop the first self-consistent simulation of the movement of black holes in star clusters.

The scientists assembled their own star clusters on a high-performance supercomputer, and then calculated how they would evolve by tracing the motion of each and every star and black hole within them.

According to a key prediction of Einstein’s General Theory of Relativity, black hole binaries stir the space-time around them, generating waves that propagate away like ripples on the surface of a lake. These waves of curvature in space-time are known as gravitational waves and will temporarily distort any object they pass through. But to date no-one has succeeded in detecting them.

In the cores of star clusters, black hole binaries are sufficiently tightly bound to be significant sources of gravitational waves. If the black holes in a binary system merge, then an even stronger pulse of gravitational waves radiates away from the system.

Based on the new results, the next generation of gravitational wave observatories like the Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) could detect tens of these events each year, out to a distance of almost 5000 million light years (for comparison the well known Andromeda Galaxy is just 2.5 million light years away).

Advanced LIGO will be up and running by 2015 and if the Bonn team are right, from then on we can look forward to a new era of gravitational wave astronomy.

Sambaran comments, “Physicists have looked for gravitational waves for more than half a century. But up to now they have proved elusive. If we are right then not only will gravitational waves be found so that General Relativity passes a key test but astronomers will soon have a completely new way to study the Universe. It seems fitting that almost exactly 100 years after Einstein published his theory, scientists should be able to use this exotic phenomenon to watch some of the most exotic events in the cosmos.”

(Photo: LIGO Scientific Collaboration (LSC) / NASA)

Royal Astronomical Society

PRE-ERUPTION EARTHQUAKES OFFER CLUES TO VOLCANO FORECASTERS

0 comments

Like an angry dog, a volcano growls before it bites, shaking the ground and getting "noisy" before erupting. This activity gives scientists an opportunity to study the tumult beneath a volcano and may help them improve the accuracy of eruption forecasts, according to Emily Brodsky, an associate professor of Earth and planetary sciences at the University of California, Santa Cruz.

Brodsky presented recent findings on pre-eruption earthquakes on Wednesday, December 16, at the fall meeting of the American Geophysical Union in San Francisco.

Each volcano has its own personality. Some rumble consistently, while others stop and start. Some rumble and erupt the same day, while others take months, and some never do erupt. Brodsky is trying to find the rules behind these personalities.

"Volcanoes almost always make some noise before they erupt, but they don't erupt every time they make noise," she said. "One of the big challenges of a volcano observatory is how to handle all the false alarms."

Brodsky and Luigi Passarelli, a visiting graduate student from the University of Bologna, compiled data on the length of pre-eruption earthquakes, time between eruptions, and the silica content of lava from 54 volcanic eruptions over a 60-year span. They found that the length of a volcano's "run-up"--the time between the onset of earthquakes and an eruption--increases the longer a volcano has been dormant or "in repose." Furthermore, the underlying magma is more viscous or gummy in volcanoes with long run-up and repose times.

Scientists can use these relationships to estimate how soon a rumbling volcano might erupt. A volcano with frequent eruptions over time, for instance, provides little warning before it blows. The findings can also help scientists decide how long they should stay on alert after a volcano starts rumbling.

"You can say, 'My volcano is acting up today, so I'd better issue an alert and keep that alert open for 100 days or 10 days, based on what I think the chemistry of the system is,' " Brodsky said.

Volcano observers are well-versed in the peculiarities of their systems and often issue alerts to match, according to Brodsky. But this study is the first to take those observations and stretch them across all volcanoes, she said.

"The innovation of this study is trying to stitch together those empirical rules with the underlying physics and find some sort of generality," Brodsky said.

The underlying physics all lead back to magma, she said. When the pressure in a chamber builds high enough, the magma pushes its way to the volcano's mouth and erupts. The speed of this ascent depends on how viscous the magma is, which depends in turn on the amount of silica in the magma. The less silica, the runnier the magma. The runnier the magma, the quicker the volcanic chamber fills and the quicker it will spew, according to Brodsky.
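The viscosity dependence Brodsky describes can be illustrated with a toy model of pressure-driven flow in a conduit. The sketch below uses Poiseuille's law with made-up conduit dimensions and textbook-range viscosities; it is my simplification, not the authors' analytical model.

# Toy illustration (my simplification, not the authors' model): for
# pressure-driven flow up a cylindrical conduit, Poiseuille's law gives a
# mean ascent speed inversely proportional to viscosity, so silica-rich,
# sticky magma rises (and recharges the system) far more slowly.

def mean_ascent_speed(delta_p, radius, length, viscosity):
    """Mean Poiseuille flow speed in m/s for a pressure drop delta_p (Pa)."""
    return delta_p * radius**2 / (8.0 * viscosity * length)

delta_p, radius, length = 10e6, 5.0, 5000.0  # hypothetical: 10 MPa over a 5 km conduit of 5 m radius
magmas = [("basaltic (low silica)", 1e2), ("andesitic", 1e5), ("rhyolitic (high silica)", 1e8)]
for name, viscosity in magmas:  # viscosities in Pa*s, rough textbook ranges
    v = mean_ascent_speed(delta_p, radius, length, viscosity)
    print(f"{name:24s} viscosity {viscosity:.0e} Pa*s -> ~{v:.1e} m/s")
# A million-fold spread in viscosity maps directly onto a million-fold
# spread in ascent and recharge times, consistent with the longer run-up
# seen at volcanoes fed by gummy, silica-rich magma.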

The path from chamber to surface isn't easy for magma as it forces its way up through the crust. The jostling of subsurface rock causes pre-eruption tremors, which vary in length and severity depending on how freely the magma can move.

"If the magma's very sticky, then it takes a long time both to recharge the chamber and to push its way to the surface," Brodsky said. "It extends the length of precursory activity."

Thick magma is the culprit behind the world's most explosive eruptions, because it traps gas and builds pressure like a keg, she said. Mount St. Helens is an example of a volcano fed by viscous magma.

Brodsky and Passarelli diagrammed the dynamics of magma flow using a simple analytical model of fluids moving through channels. The next step, Brodsky said, is to test the accuracy of their predictions on future eruptions.

Volcanoes are messy systems, however, with wildly varying structures and mineral ingredients. Observatories will likely have to tweak their predictions based on the unique characteristics of each system, she said.

(Photo: Cyrus Read, AVO/USGS)

University of California, Santa Cruz

Tuesday, December 29, 2009

ICY MOONS OF SATURN AND JUPITER MAY HAVE CONDITIONS NEEDED FOR LIFE

0 comments

Scientists once thought that life could originate only within a solar system's "habitable zone," where a planet would be neither too hot nor too cold for liquid water to exist on its surface. But according to planetary scientist Francis Nimmo, evidence from recent NASA missions suggests that conditions necessary for life may exist on the icy satellites of Saturn and Jupiter.

"If these moons are habitable, it changes the whole idea of the habitable zone," said Nimmo, a professor of Earth and planetary sciences at UC Santa Cruz. "It changes our thinking about how and where we might find life outside of the solar system."

Nimmo discussed the impact of ice dynamics on the habitability of the moons of Saturn and Jupiter on Tuesday, December 15, at the annual meeting of the American Geophysical Union in San Francisco.

Jupiter's moon Europa and Saturn's moon Enceladus, in particular, have attracted attention because of evidence that oceans of liquid water may lie beneath their icy surfaces. This evidence, plus discoveries of deep-sea hydrothermal vent communities on Earth, suggests to some that these frozen moons just might harbor life.

"Liquid water is the one requirement for life that everyone can agree on," Nimmo said.

The icy surfaces may insulate deep oceans, shift and fracture like tectonic plates, and mediate the flow of material and energy between the moons and space.

Several lines of evidence support the presence of subsurface oceans on Europa and Enceladus, Nimmo said. In 2000, for example, NASA's Galileo spacecraft measured an unusual magnetic field around Europa that was attributed to the presence of an ocean beneath the moon's surface. On Enceladus, the Cassini spacecraft discovered geysers shooting ice crystals a hundred miles above the surface, which also suggests at least pockets of subsurface water, Nimmo said (see earlier story).

Liquid water isn't easy to find in the cold expanses beyond Earth's orbit. But according to Nimmo, tidal forces could keep subsurface oceans from freezing up. Europa and Enceladus both have eccentric orbits that bring them alternately close to and then far away from their respective planets. These elongated orbits create ebbs and flows of gravitational energy between the planets and their satellites.

"A moon like Enceladus is getting squeezed and stretched and squeezed and stretched," Nimmo said.

The extent to which this squeezing and stretching transforms into heat remains unclear, he said. Tidal forces likely shift plates in the lunar cores, creating friction and geothermal energy. This energy may also rub surface ice against itself at the sites of deep ice fissures, creating heat and melting, according to Nimmo. Enceladus's geysers appear to originate from these shifting faults, and the thin lines running along Europa's surface suggest geologically active plates, he said.
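The standard way to estimate that heating is the equilibrium tidal dissipation formula for a synchronously rotating satellite. The sketch below applies it with rough textbook values for Enceladus; the satellite's tidal response k2/Q, which depends on the unknown interior structure, is the dominant uncertainty, so the result spans a wide range.

# Sketch of the standard equilibrium tidal-heating scaling for a
# synchronously rotating satellite: E_dot = (21/2) * (k2/Q) * n^5 * R^5 * e^2 / G.
# Orbital and size values below are rough textbook figures for Enceladus;
# k2/Q (the body's tidal response) is the big unknown mentioned above.

import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
R = 2.52e5                           # Enceladus radius, m (approx.)
e = 0.0047                           # orbital eccentricity (approx.)
n = 2 * math.pi / (1.37 * 86400.0)   # mean motion for a 1.37-day orbit, rad/s

for k2_over_Q in (1e-4, 1e-3, 1e-2):
    E_dot = 10.5 * k2_over_Q * n**5 * R**5 * e**2 / G
    print(f"k2/Q = {k2_over_Q:.0e}  ->  tidal heating ~ {E_dot / 1e9:.2f} GW")
# The heating rate is directly proportional to k2/Q, which depends on the
# unknown interior structure, hence the uncertainty over how much of the
# squeezing and stretching ends up as heat.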

A frozen outer layer may be crucial to maintaining oceans that could harbor life on these moons. The icy surfaces may shield the oceans from the frigidity of space and from radiation harmful to living organisms.

"If you want to have life, you want the ocean to last a long time," Nimmo said. "The ice above acts like an insulating blanket."

Enceladus is so small and its ice so thin that scientists expect its oceans to freeze periodically, making habitability less likely, Nimmo said. Europa, however, is the perfect size to heat its oceans efficiently. It is larger than Enceladus but smaller than moons such as Ganymede, which has thick ice surrounding its core and blocking communication with the exterior. If liquid water exists on Ganymede, it may be trapped between layers of ice that separate it from both the core and the surface.

The core and the surface of these moons are both potential sources of the chemical building blocks needed for life. Solar radiation and comet impacts leave a chemical film on the surfaces. To sustain living organisms, these chemicals would have to migrate to the subsurface oceans, and this can occur periodically around ice fissures on moons with relatively thin ice shells like Europa and Enceladus. Organic molecules and minerals may also stream out of their cores, Nimmo said. These nutrients could support communities like those seen around hydrothermal vents on Earth.

Nimmo cautioned that being habitable is no guarantee that a planetary body is actually inhabited. It is unlikely that we will find life elsewhere in our solar system, despite all the time and resources devoted to the search, he said. But such a discovery would certainly be worth the effort.

"I think pretty much everyone can agree that finding life anywhere else in the solar system would be the scientific discovery of the millennium," Nimmo said.

(Photo: NASA/JPL/Space Science Institute)

University of California

SUPERNOVA EXPLOSIONS STAY IN SHAPE

0 comments

At a very early age, children learn how to classify objects according to their shape. Now, new research suggests studying the shape of the aftermath of supernovas may allow astronomers to do the same.

A new study of supernova remnants - the debris from exploded stars - using images from NASA's Chandra X-ray Observatory shows that the symmetry of the remnants, or lack thereof, reveals how the star exploded. This is an important discovery because it shows that the remnants retain information about how the star exploded even though hundreds or thousands of years have passed.

"It's almost like the supernova remnants have a 'memory' of the original explosion," said Laura Lopez of the University of California at Santa Cruz, who led the study. "This is the first time anyone has systematically compared the shape of these remnants in X-rays in this way."

Astronomers sort supernovas into several categories, or "types," based on properties observed in the days after the explosion that reflect the very different physical mechanisms that cause stars to explode. But since the remnants we observe today are left over from explosions that occurred long ago, other methods are needed to accurately classify the original supernovas.

Lopez and colleagues focused on the relatively young supernova remnants that exhibited strong X-ray emission from silicon ejected by the explosion so as to rule out the effects of interstellar matter surrounding the explosion. Their analysis showed that the X-ray images of the ejecta can be used to identify the way the star exploded. The team studied 17 supernova remnants both in the Milky Way galaxy and a neighboring galaxy, the Large Magellanic Cloud.
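As a rough illustration only (not the authors' actual analysis, which is described in their paper), one simple way to quantify how far a remnant's X-ray image departs from circular symmetry is to compare the principal axes of its intensity-weighted second moments. The sketch below, in Python, uses a made-up Gaussian "remnant" as input; all names and values are hypothetical.

import numpy as np

def asymmetry_ratio(image):
    """Ratio of major to minor principal axes of the intensity distribution (>= 1)."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    cx = (xs * image).sum() / total          # intensity-weighted centroid, x
    cy = (ys * image).sum() / total          # intensity-weighted centroid, y
    mxx = ((xs - cx) ** 2 * image).sum() / total
    myy = ((ys - cy) ** 2 * image).sum() / total
    mxy = ((xs - cx) * (ys - cy) * image).sum() / total
    small, large = np.linalg.eigvalsh(np.array([[mxx, mxy], [mxy, myy]]))
    return float(np.sqrt(large / small))

# A circularly symmetric Gaussian "remnant" scores close to 1; elongated shapes score higher.
yy, xx = np.mgrid[0:128, 0:128]
round_blob = np.exp(-(((xx - 64) / 12.0) ** 2 + ((yy - 64) / 12.0) ** 2))
print(asymmetry_ratio(round_blob))   # ~1.0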

For each of these remnants there is independent information about the type of supernova involved, based not on the shape of the remnant but, for example, on the elements observed in it. The researchers found that one type of supernova explosion - the so-called Type Ia - left behind relatively symmetric, circular remnants. This type of supernova is thought to be caused by the thermonuclear explosion of a white dwarf, and such explosions are often used by astronomers as "standard candles" for measuring cosmic distances.

On the other hand, the remnants tied to the "core-collapse" supernova explosions were distinctly more asymmetric. This type of supernova occurs when a very massive, young star collapses onto itself and then explodes.

"If we can link supernova remnants with the type of explosion", said co-author Enrico Ramirez-Ruiz, also of University of California, Santa Cruz, "then we can use that information in theoretical models to really help us nail down the details of how the supernovas went off."

Models of core-collapse supernovas must include a way to reproduce the asymmetries measured in this work and models of Type Ia supernovas must produce the symmetric, circular remnants that have been observed.

Out of the 17 supernova remnants sampled, ten were classified as the core-collapse variety, while the remaining seven were classified as Type Ia. One of these, a remnant known as SNR 0548-70.4, was a bit of an "oddball": it was considered a Type Ia based on its chemical abundances, but Lopez found it has the asymmetry of a core-collapse remnant.

"We do have one mysterious object, but we think that is probably a Type Ia with an unusual orientation to our line of sight," said Lopez. "But we'll definitely be looking at that one again."

While the supernova remnants in the Lopez sample were taken from the Milky Way and its close neighbor, it is possible this technique could be extended to remnants at even greater distances. For example, large, bright supernova remnants in the galaxy M33 could be included in future studies to determine the types of supernova that generated them.

(Photo: NASA/CXC/UCSC/L. Lopez et al.)

Harvard-Smithsonian Center for Astrophysics

UMBILICAL CORD COULD BE NEW SOURCE OF PLENTIFUL STEM CELLS

0 comentarios
Stem cells that could one day provide therapeutic options for muscle and bone disorders can be easily harvested from the tissue of the umbilical cord, just as the blood that goes through it provides precursor cells to treat some blood disorders, said University of Pittsburgh School of Medicine researchers in the online version of the Journal of Biomedicine and Biotechnology.

Umbilical cord tissue cells can be expanded to greater numbers, are remarkably stable and might not trigger strong immune responses, said senior investigator Bridget M. Deasy, Ph.D., assistant professor in the Department of Orthopaedic Surgery, Pitt School of Medicine. The cells are obtained from the gelatinous material in the cord known as Wharton's jelly and from blood vessel walls.

"Our experiments indicate also that at least 21 million stem cells, and possibly as many as 500 million, could be banked from a single umbilical cord after the birth of a baby," she noted. "So, the cord could become an accessible source of a multitude of stem cells that overcomes many of the restrictions, such as limited quantity as well as donor age and donor sex issues, that come with other adult stem cell populations."

Dr. Deasy and her team analyzed sections of two-foot-long human umbilical cords that were donated for research, looking for cells in Wharton's jelly and blood vessel walls that displayed the characteristic protein markers found in stem cells derived from other sources. The researchers then sought to find the best way to isolate the stem cells from the cords, and tested them in the lab to confirm their ability to produce specialized cells, such as bone and cartilage, while retaining their invaluable ability to renew themselves.

To build on these findings, the team will test the umbilical cord stem cells in animal models of cartilage and bone repair, as well as muscle regeneration.

University of Pittsburgh

DNA OF JESUS-ERA SHROUDED MAN IN JERUSALEM REVEALS EARLIEST CASE OF LEPROSY

0 comentarios

The DNA of a 1st century shrouded man found in a tomb on the edge of the Old City of Jerusalem has revealed the earliest proven case of leprosy. Details of the research were published December 16 in the journal PLoS ONE.

The molecular investigation was undertaken by Prof. Mark Spigelman and Prof. Charles Greenblatt of the Sanford F. Kuvin Center for the Study of Infectious and Tropical Diseases at the Hebrew University of Jerusalem, Prof. Carney Matheson and Ms. Kim Vernon of Lakehead University, Canada, Prof. Azriel Gorski of New Haven University and Dr. Helen Donoghue of University College London. The archaeological excavation was led by Prof. Shimon Gibson, Dr. Boaz Zissu and Prof. James Tabor on behalf of the Israel Antiquities Authority and the University of North Carolina at Charlotte.

The burial cave, which is known as the Tomb of the Shroud, is located in the lower Hinnom Valley and is part of a 1st century C.E. cemetery known as Akeldama or 'Field of Blood' (Matthew 27:3-8; Acts 1:19) - next to the area where Judas is said to have committed suicide. The tomb of the shrouded man is located next to the tomb of Annas, the high priest (6-15 C.E.), who was the father-in-law of Caiaphas, the high priest who betrayed Jesus to the Romans. It is thus thought that this shrouded man was either a priest or a member of the aristocracy. According to Prof. Gibson, the view from the tomb would have looked directly toward the Jewish Temple.

What is particularly rare about this tomb is that it was clear that this man, whose remains are radiocarbon-dated to 1-50 C.E., did not receive a secondary burial. Secondary burial was common practice at the time: after a year the bones were removed and placed in an ossuary (a stone bone box). In this case, however, the entrance to this part of the tomb was completely sealed with plaster. Prof. Spigelman believes this is because the man had suffered from leprosy and died of tuberculosis, as the DNA of both diseases was found in his bones.

Historically, disfiguring diseases - particularly leprosy - caused the afflicted individuals to be ostracized from their communities. However, a number of indications – the location and size of the tomb, the type of textiles used as shroud wrappings, and the clean state of the hair – suggest that the shrouded individual was a fairly affluent member of society in Jerusalem and that tuberculosis and leprosy may have crossed social boundaries in the first century C.E.

This is also the first time fragments of a burial shroud have been found from the time of Jesus in Jerusalem. The shroud is very different from the Turin Shroud, hitherto assumed by some to be the one used to wrap the body of Jesus. Unlike the complex weave of the Turin Shroud, this shroud is made up of a simple two-way weave, as the textile historian Dr. Orit Shamir was able to show.

Based on the assumption that this is representative of a typical burial shroud widely used at the time of Jesus, the researchers conclude that the Turin Shroud did not originate from Jesus-era Jerusalem.

The excavation also found a clump of the shrouded man's hair, which had been ritually cut prior to his burial. These are both unique discoveries because organic remains are hardly ever preserved in the Jerusalem area owing to high humidity levels in the ground.

According to Prof. Spigelman and Prof. Greenblatt, the origins and development of leprosy are largely obscure. Leprosy in the Old Testament may well refer to skin rashes such as psoriasis. The leprosy known to us today was thought to have originated in India and brought over to the Near East and to Mediterranean countries in the Hellenistic period. The results from the first-century C.E. Tomb of the Shroud fill a vital gap in our knowledge of this disease.

Furthermore, the new research has shown that molecular pathology clearly adds a new dimension to the archaeological exploration of disease in ancient times and provides us with a better understanding of the evolution, geographic distribution and epidemiology of disease and social health in antiquity.

The co-infection with both leprosy and tuberculosis found here, and in 30 percent of ancient and modern DNA samples from Israel and Europe, provides evidence for the postulate that the medieval plague of leprosy was eliminated by rising levels of tuberculosis as Europe urbanized.

(Photo: Prof. Shimon Gibson)

The Hebrew University of Jerusalem

AMONG APES, TEETH ARE MADE FOR THE TOUGHEST TIMES

0 comentarios
The teeth of some apes are formed primarily to handle the most stressful times when food is scarce, according to new research performed at the National Institute of Standards and Technology (NIST). The findings imply that if humanity is serious about protecting its close evolutionary cousins, the food apes eat during these tough periods—and where they find it—must be included in conservation efforts.

The interdisciplinary team, which brought together anthropologists from George Washington University (GWU) and fracture mechanics experts from NIST, has provided the first evidence that natural selection in three ape species has favored individuals whose teeth can most easily handle the "fallback foods" they choose when their preferred fare is less available. All of these apes—gorillas, orangutans and chimpanzees—favor a diet of fruit whenever possible. But when fruit disappears from their usual foraging grounds, each species responds in a different way—and has developed teeth formed to reflect the differences.

"It makes sense if you think about it," says GWU's Paul Constantino. "When resources are scarce, that's when natural selection is highly active in weeding out the less fit, so animals without the necessary equipment to get through those tough times won't pass on their genes to the next generation."

In this case, the necessary equipment is the right set of molars. The team examined ape tooth enamel and found that several aspects of molar shape and structure can be explained in terms of adapting to eat fallback foods. For instance, gorillas' second choice is leaves and tree bark, which are much tougher than fruit, while orangutans fall back to nuts and seeds, which are comparatively hard.

For these reasons, the researchers theorized that gorillas would have evolved back teeth broader than a fruit diet requires, in order to chew leaves, while orangutans would have thicker enamel to handle the increased stress of crunching seeds.

NIST scientists developed models of how teeth fracture while chewing different foods. By fracturing teeth in the laboratory, they verified fundamental fracture mechanics models incorporating tooth shape and structure. These efforts revealed the effects of food stiffness and how various foods likely would damage ape teeth. "The research at NIST supports our theories and several related ones," Constantino says. "It's likely that fallback foods have influenced jaw and skull shape as well."

Constantino adds that the findings suggest mankind must protect not only forest areas where commonly eaten fruits grow, but also the places where apes' fallback resources appear. While identifying precisely what these resources are is a job for ecologists, he says, the new research shows just how important and influential these foods are in primate ecology.

"Among orangutans, for example, timber companies are harvesting the sources of their fallbacks," Constantino says. "These apes have evolved the right tools to survive on fallback foods, but they need to be able to find these foods in the first place."

National Institute of Standards and Technology

RESEARCHERS DISCOVER NEW 'GOLDEN RATIOS' FOR FEMALE FACIAL BEAUTY

0 comentarios
Beauty is not only in the eye of the beholder but also in the relationship of the eyes and mouth of the beholden. The distance between a woman's eyes and the distance between her eyes and her mouth are key factors in determining how attractive she is to others, according to new psychology research from the University of California, San Diego and the University of Toronto.

Pamela Pallett and Stephen Link of UC San Diego and Kang Lee of the University of Toronto tested the existence of an ideal facial feature arrangement. They successfully identified the optimal relation between the eyes, the mouth and the edge of the face for individual beauty.

In four separate experiments, the researchers asked university students to make paired comparisons of attractiveness between female faces with identical facial features but different eye-mouth distances and different distances between the eyes.

They discovered two "golden ratios," one for length and one for width. Female faces were judged more attractive when the vertical distance between their eyes and the mouth was approximately 36 percent of the face's length, and the horizontal distance between their eyes was approximately 46 percent of the face's width.
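As a minimal numerical sketch of what those two ratios mean (the face measurements below are invented example values; only the 36 percent and 46 percent targets come from the study):

def facial_ratios(eye_to_mouth, face_length, eye_to_eye, face_width):
    """Return (length ratio, width ratio) as fractions of total face length and width."""
    return eye_to_mouth / face_length, eye_to_eye / face_width

# Hypothetical measurements in centimeters, chosen only to illustrate the arithmetic.
length_ratio, width_ratio = facial_ratios(eye_to_mouth=7.2, face_length=20.0,
                                           eye_to_eye=6.4, face_width=14.0)
print(f"length ratio: {length_ratio:.0%}, width ratio: {width_ratio:.0%}")
# Faces near 36% (length) and 46% (width) were judged most attractive in the study.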

Interestingly, these proportions correspond with those of an average face.

"People have tried and failed to find these ratios since antiquity. The ancient Greeks found what they believed was a 'golden ratio' – also known as 'phi' or the 'divine proportion' – and used it in their architecture and art. Some even suggest that Leonardo Da Vinci used the golden ratio when painting his 'Mona Lisa.' But there was never any proof that the golden ratio was special. As it turns out, it isn't. Instead of phi, we showed that average distances between the eyes, mouth and face contour form the true golden ratios," said Pallett, a post-doctoral fellow in psychology at UC San Diego and also an alumna of the department.

"We already know that different facial features make a female face attractive – large eyes, for example, or full lips," said Lee, a professor at University of Toronto and the director of the Institute of Child Study at the Ontario Institute for Studies in Education. "Our study conclusively proves that the structure of faces – the relation between our face contour and the eyes, mouth and nose – also contributes to our perception of facial attractiveness. Our finding also explains why sometimes an attractive person looks unattractive or vice versa after a haircut, because hairdos change the ratios."

The researchers suggest that the perception of facial attractiveness is a result of a cognitive averaging process by which people take in all the faces they see and average them to get an ideal width ratio and an ideal length ratio. They also posit that "averageness" (like symmetry) is a proxy for health, and that we may be predisposed by biology and evolution to find average faces attractive.

University of Toronto

ETHANOL-POWERED VEHICLES GENERATE MORE OZONE THAN GAS-POWERED ONES

0 comentarios

Ethanol, often promoted as a clean-burning, renewable fuel that could help wean the nation from oil, would likely worsen health problems caused by ozone, compared with gasoline, especially in winter, according to a new study led by Stanford researchers.

Ozone production from both gasoline and E85, a blend of gasoline and ethanol that is 85 percent ethanol, is greater in warm sunny weather than during the cold weather and short days of winter, because heat and sunlight contribute to ozone formation. But E85 produces different byproducts of combustion than gasoline and generates substantially more aldehydes, which are precursors to ozone.

"What we found is that at the warmer temperatures, with E85, there is a slight increase in ozone compared to what gasoline would produce," said Diana Ginnebaugh, a doctoral candidate in civil and environmental engineering, who worked on the study. She presented the results of the study on Tuesday, Dec. 15, at the American Geophysical Union meeting in San Francisco. "But even a slight increase is a concern, especially in a place like Los Angeles, because you already have episodes of high ozone that you have to be concerned about, so you don't want any increase."

But it was at colder temperatures, below freezing, that it appeared the health impacts of E85 would be felt most strongly.

"We found a pretty substantial increase in ozone production from E85 at cold temperatures, relative to gasoline when emissions and atmospheric chemistry alone were considered," Ginnebaugh said. Although ozone is generally lower under cold-temperature winter conditions, "If you switched to E85, suddenly you could have a place like Denver exceeding ozone health-effects limits and then they would have a health concern that they don't have now."

The problem with cold weather emissions arises because the catalytic converters used on vehicles have to warm up before they reach full efficiency. So until they get warm, a larger proportion of pollutants escapes from the tailpipe into the air.

Burning E85 instead of gasoline would also increase other pollutants in the atmosphere, some of which irritate the eyes, throat and lungs and can damage crops. But the aldehydes, which are also carcinogenic, are the biggest contributors to ozone production.

Ginnebaugh worked with Mark Z. Jacobson, professor of civil and environmental engineering, using vehicle emissions data from some earlier studies and applying it to the Los Angeles area to model the likely output of pollutants from vehicles.

Because E85 is only now beginning to be used in mass-produced vehicles, the researchers projected for the year 2020, when more "flex fuel" vehicles, which can run on E85, will likely be in use. They estimated that vehicle emissions would be about 60 percent less than today, because automotive technology will likely continue to become cleaner over time. They investigated two scenarios, one that had all the vehicles running on E85 and another in which the vehicles all ran on gasoline.

Running a widely used, complex model involving over 13,000 chemical reactions, they did repeated simulations at different ambient temperatures for the two scenarios, each time simulating a 48-hour period. They used the average ozone concentrations during each of those periods for comparison.
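As a purely schematic sketch of that comparison (run_air_quality_model below is a hypothetical stand-in for the actual photochemical model, which involves more than 13,000 reactions and is far more complex):

from statistics import mean

def compare_scenarios(temperatures_c, run_air_quality_model):
    """For each ambient temperature, return the average E85 ozone minus the average
    gasoline ozone (ppb) over a simulated 48-hour period."""
    differences = {}
    for temp in temperatures_c:
        ozone_e85 = mean(run_air_quality_model(fuel="E85", temperature_c=temp, hours=48))
        ozone_gas = mean(run_air_quality_model(fuel="gasoline", temperature_c=temp, hours=48))
        differences[temp] = ozone_e85 - ozone_gas
    return differences

# Dummy placeholder "model" returning flat hourly ozone values (ppb), for illustration only.
dummy_model = lambda fuel, temperature_c, hours: [60.0 if fuel == "E85" else 55.0] * hours
print(compare_scenarios([-30, 0, 30], dummy_model))  # {-30: 5.0, 0: 5.0, 30: 5.0}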

They found that at warm temperatures, from freezing up to 41 degrees Celsius (about 106 degrees Fahrenheit), in bright sunlight, E85 raised the concentration of ozone in the air by up to 7 parts per billion more than gasoline did. At cold temperatures, from freezing down to minus 37 degrees Celsius (about minus 35 degrees Fahrenheit), they found E85 raised ozone concentrations by up to 39 parts per billion more than gasoline.

"What we are saying with these results is that you see an increase," Ginnebaugh said. "We are not saying that this is the exact magnitude you are going to get in a given urban area, because it is really going to vary from city to city depending on a lot of other factors such as the amount of natural vegetation, traffic levels, and local weather patterns."

Ginnebaugh said the results of the study represent a preliminary analysis of the impact of E85. More data from studies of the emissions of flex fuel vehicles at various temperatures would help refine the estimates, she said.

(Photo: Beth McConnell)

Stanford University

KANSAS SCIENTISTS PROBE MYSTERIOUS POSSIBLE COMET STRIKES ON EARTH

0 comentarios
It's the stuff of a Hollywood disaster epic: A comet plunges from outer space into the Earth's atmosphere, splitting the sky with a devastating shock wave that flattens forests and shakes the countryside. But this isn't a disaster movie plotline.

"Comet impacts might be much more frequent than we expect," said Adrian Melott, professor of physics and astronomy at the University of Kansas. "There's a lot of interest in the rate of impact events upon the Earth. We really don't know the rate very well because most craters end up being destroyed by erosion or the comets go into the ocean and we don't know that they're there. We really don't have a good handle on the rate of impacts on the Earth."

An investigation by Melott and colleagues reveals a promising new method of detecting past comet strikes upon Earth and gauging their frequency. The results will be unveiled at the American Geophysical Union's Fall Meeting, to be held Dec. 14-18 in San Francisco.

The research shows a potential signature of nitrate and ammonia that can be found in ice cores corresponding to suspected impacts. Although high nitrate levels previously have been tied to space impacts, scientists have never before seen atmospheric ammonia spikes as indicators of space impacts with our planet.

"Now we have a possible new marker for extraterrestrial events in ice," Melott said. "You don't just look for nitrates, you also look for ammonia."

Melott studied two possible cometary airbursts with Brian Thomas, assistant professor of physics and astronomy at Washburn University, Gisela Dreschhoff, KU adjunct associate professor of physics and astronomy, and Carey Johnson, KU professor of chemistry.

In June 1908, a puzzling explosion rocked central Siberia in Russia; it came to be known as the "Tunguska event." A later expedition found that 20 miles of trees had been knocked down and set alight by the blast. Today, scientists have coalesced around the idea that Tunguska's devastation was caused by a 100-foot asteroid that had entered Earth's atmosphere, causing an airburst.

Some 13,000 years earlier, an occurrence thought by some researchers to be an extraterrestrial impact set off cooler weather and large-scale extinctions in North America. The "Younger Dryas event," as it is known, coincided with the end of the prehistoric Clovis culture.

Melott and fellow researchers examined data from ice cores extracted in Greenland to compare atmospheric chemistry during the Tunguska and Younger Dryas events. In both instances, Melott's group found evidence that the Haber process — whereby a nitrogen fixation reaction produces ammonia — may have occurred on a large scale.

"A comet entering the atmosphere makes a big shock wave with high pressure — 6,000 times the pressure of air," said Melott. "It can be shown that under those conditions you can make ammonia. Plus the Tunguska comet, or some fragments of it, landed in a swamp. And any Younger Dryas comet presumably hit an ice sheet, or at least part of it did. So there should have been lots of water around for this Haber process to work. We think the simplest way to explain the signal in both objects is the Haber process. Comets hit the atmosphere in the presence of a lot of water and you get both nitrate and ammonia, which is what both ice cores show."

Melott cautions that the results are inconclusive because the ice cores are sampled only at five-year intervals, which is not sufficient resolution to pinpoint peaks of atmospheric nitrates and ammonia that would have been rapidly dissipated by rains following a comet strike.

But the KU researcher contends that ammonia enhancement resulting from the Haber process could serve as a useful marker for detecting possible comet impacts. He encourages more sampling and analysis of ice cores to see where the nitrate-ammonia signal might line up with suspected cometary collisions with the Earth.

Such information could help humankind more accurately gauge the danger of a comet hitting the Earth in the future.

"There's a whole program to watch for near-Earth asteroids as they go around the sun repeatedly, and some of them have close brushes with the Earth," said Melott. "But comets are a whole different ball game. They don't do that circular thing. They come straight in from far, far out — and you don't see them coming until they push out a tail only a few years before they would enter the inner solar system. So we could be hit by a comet and only have a few years' warning — possibly not enough time to do anything about it."

University of Kansas

UNDERSTANDING APPLES' ANCESTORS

0 comentarios

Wild Malus orientalis—a species of wild apple that could be an ancestor of today's domesticated apples—is native to the Middle East and Central Asia. A new study comparing the diversity of recently acquired M. orientalis varieties from Georgia and Armenia with previously collected varieties originating in Russia and Turkey narrows down the large population and establishes a core collection that will make M. orientalis more accessible to the breeding and research communities.

To identify and record the genetic diversity of these wild apples, Gayle Volk and Christopher Richards at the National Center for Genetic Resources Preservation, U.S. Department of Agriculture (USDA), performed genetic diversity analyses on trees grown from Malus orientalis seeds collected in Georgia, Armenia, Russia, and Turkey. The trees are located at the USDA-ARS Plant Genetic Resources Unit (PGRU) in Geneva, New York. Seedling trees were evaluated for resistance to critical diseases such as fire blight, apple scab, and cedar apple rust. The full report was published in a recent issue of the Journal of the American Society for Horticultural Science.

Seeds from wild Malus orientalis trees were collected from 1998-2004 during explorations to Armenia, Georgia, Turkey, and Russia. Seedling orchards with between eight and 171 "individuals" from each collection location were established, and disease resistance data were collected for 776 trees. The genetic diversity of the 280 individuals from Armenia and Georgia was compared with results obtained for individuals from Russia and Turkey. A total of 106 alleles were identified in the trees from Georgia and Armenia. The average gene diversity ranged from 0.47 to 0.85 per locus. The researchers concluded that "the genetic differentiation among sampling locations was greater than that found between the two countries."
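The per-locus "gene diversity" figures are presumably Nei's expected heterozygosity, one minus the sum of squared allele frequencies at a locus; a minimal sketch under that assumption:

def gene_diversity(allele_counts):
    """Nei's gene diversity for one locus: 1 - sum(p_i^2), given observed allele counts."""
    total = sum(allele_counts)
    return 1.0 - sum((count / total) ** 2 for count in allele_counts)

# Example: a locus where four alleles were observed 40, 30, 20 and 10 times.
print(round(gene_diversity([40, 30, 20, 10]), 2))  # 0.70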

Six individuals from Armenia exhibited resistance to fire blight, apple scab, and cedar apple rust. According to Volk, "The data suggest wild populations of M. orientalis from regions around the Black Sea are genetically distinguishable and show high levels of diversity."

A core set of 27 trees that captures 93% of the alleles in the PGRU M. orientalis collection will be maintained in the field at the PGRU in Geneva, New York.

(Photo: Phil Forsline)

American Society for Horticultural Science

Monday, December 28, 2009

BLACK CARBON DEPOSITS ON HIMALAYAN ICE THREATEN EARTH'S 'THIRD POLE'

0 comentarios

Black soot deposited on Tibetan glaciers has contributed significantly to the retreat of the world's largest non-polar ice masses, according to new research by scientists from NASA and the Chinese Academy of Sciences. Soot absorbs incoming solar radiation and can speed glacial melting when deposited on snow in sufficient quantities.

Temperatures on the Tibetan Plateau -- sometimes called Earth's "third pole" -- have warmed by 0.3°C (0.5°F) per decade over the past 30 years, about twice the rate of observed global temperature increases. New field research and ongoing quantitative modeling suggests that soot's warming influence on Tibetan glaciers could rival that of greenhouse gases.

"Tibet's glaciers are retreating at an alarming rate," said James Hansen, coauthor of the study and director of NASA's Goddard Institute for Space Studies (GISS) in New York City. "Black soot is probably responsible for as much as half of the glacial melt, and greenhouse gases are responsible for the rest."

"During the last 20 years, the black soot concentration has increased two- to three-fold relative to its concentration in 1975," said Junji Cao, a researcher from the Chinese Academy of Sciences in Beijing and a coauthor of the paper.

The study was published December 7th in the Proceedings of the National Academy of Sciences.

"Fifty percent of the glaciers were retreating from 1950 to 1980 in the Tibetan region; that rose to 95 percent in the early 21st century," said Tandong Yao, director of the Chinese Academy's Institute of Tibetan Plateau Research. Some glaciers are retreating so quickly that they could disappear by mid-century if current trends continue, the researchers suggest.

Since melt water from Tibetan glaciers replenishes many of Asia's major rivers—including the Indus, Ganges, Yellow, and Brahmaputra—such losses could have a profound impact on the billion people who rely on the rivers for fresh water. While rain and snow would still help replenish Asian rivers in the absence of glaciers, the change could hamper efforts to manage seasonal water resources by altering when fresh water supplies are available in areas already prone to water shortages.

Researchers led by Baiqing Xu of the Chinese Academy drilled and analyzed five ice cores from various locations across the Tibetan Plateau, looking for black carbon (a key component of soot) as well as organic carbon. The cores support the hypothesis that black soot amounts in the Himalayan glaciers correlate with black carbon emissions in Europe and South Asia.

At Zuoqiupu glacier -- a bellwether site on the southern edge of the plateau and downwind from the Indian subcontinent -- black soot deposition increased by 30 percent between 1990 and 2003. The rise in soot levels at Zuoqiupu follows a dip that came after clean air regulations were enacted in Europe in the 1970s.

Most soot in the region comes from diesel engines, coal-fired power plants, and outdoor cooking stoves. Many industrial processes produce both black carbon and organic carbon, but often in different proportions. Burning diesel fuel produces mainly black carbon, for example, while burning wood produces mainly organic carbon. Since black carbon is darker and absorbs more radiation, it's thought to have a stronger warming effect than organic carbon.

To refine this emerging understanding of soot's impact on glaciers, scientists are striving to gather even more robust measurements. "We can't expect this study to clarify the effect of black soot on the melting of Tibetan snow and glaciers entirely," said Cao. "Additional work that looks at albedo measurements, melting rate, and other types of reconnaissance is also needed."

For example, scientists are using satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA satellites Terra and Aqua to enhance understanding of the region's albedo. And a new NASA climate satellite called Glory, which will launch late in 2010, will carry a new type of aerosol sensor that should be able to distinguish between aerosol types more accurately than previous instruments.

"Reduced black soot emissions, in addition to reduced greenhouse gases, may be required to avoid demise of Himalayan glaciers and retain the benefits of glaciers for seasonal fresh water supplies," Hansen said.

(Photo: NASA/Sally Bensusen)

THEORISTS PROPOSE A NEW WAY TO SHINE -- AND A NEW KIND OF STAR

0 comentarios
Dying, for stars, has just gotten more complicated. For some stellar objects, the final phase before or instead of collapsing into a black hole may be what a group of physicists is calling an electroweak star.

Glenn Starkman, a professor of physics at Case Western Reserve University, together with former graduate students and post-docs De-Chang Dai and Dejan Stojkovic, now at the State University of New York in Buffalo, and Arthur Lue, at MIT's Lincoln Lab, offer a description of the structure of an electroweak star in a paper submitted to Physical Review Letters and posted online at http://arxiv.org/abs/0912.0520.

Ordinary stars are powered by the fusion of light nuclei into heavier ones – such as hydrogen into helium in the center of our sun. Electroweak stars, they theorize, would be powered by the total conversion of quarks – the particles that make up the proton and neutron building blocks of those nuclei – into much lighter particles called leptons. These leptons include electrons, but especially elusive – and nearly massless – neutrinos.

"This is a process predicted by the well-tested Standard Model of particle physics," Starkman said. At ordinary temperatures it is so incredibly rare that it probably hasn't happened within the visible universe anytime in the last 10 billion years, except perhaps in the core of these electroweak stars and in the laboratories of some advanced alien civilizations, he said.

In their dying days, stars smaller than 2.1 times our sun's mass collapse into neutron stars – objects dense enough that the neutrons and protons push against each other. More massive stars are thought to head toward collapse into a black hole. But at the extreme temperatures and densities that might be reached when a star begins to collapse into a black hole, electroweak conversion of quarks into leptons should proceed at a rapid rate, the scientists say.

The energy generated could halt the collapse, much as the energy generated by nuclear fusion prevents ordinary stars like the Sun from collapsing. In other words, an electroweak star is the possible next step before total collapse into a black hole. If the electroweak burning is efficient, it could consume enough mass to prevent what's left from ever becoming a black hole.

Most of the energy eventually emitted from electroweak stars is in the form of neutrinos, which are hard to detect. A small fraction comes out as light, and this is where the electroweak star's signature will likely be found, Starkman said. But, "To understand that small fraction, we have to understand the star better than we do."

And until they do, it's hard to know how we can tell electroweak stars from other stars.

There's time, however, to learn. The theorists have calculated that this phase of a star's life can last more than 10 million years – a long time for us, though just an instant in the life of a star.

Case Western Reserve University
