Thursday, December 31, 2009

ROCK-BREATHING BACTERIA COULD GENERATE ELECTRICITY AND CLEAN UP OIL SPILLS

A discovery by scientists at the University of East Anglia could contribute to the development of systems that use domestic or agricultural waste to generate clean electricity.

In a study published in the leading scientific journal Proceedings of the National Academy of Sciences (PNAS), the researchers have demonstrated for the first time the mechanism by which some bacteria survive by ‘breathing rocks’.

The findings could be applied to help in the development of new microbe-based technologies such as fuel cells, or ‘bio-batteries’, powered by animal or human waste, and agents to clean up areas polluted by oil or uranium.

“This is an exciting advance in our understanding of bacterial processes in the Earth’s sub-surfaces,” said Prof David Richardson, of UEA’s School of Biological Sciences, who is leading the project.

“It will also have important biotechnological impacts. There is potential for these rock-breathing bacteria to be used to clean up environments contaminated with toxic organic pollutants such as oil, or radioactive metals such as uranium. Use of these bacteria in microbial fuel-cells powered by sewage or cow manure is also being explored.”

The vast majority of the world’s habitable environments are populated by micro-organisms which, unlike humans, can survive without oxygen. Some of these micro-organisms are bacteria living deep in the Earth’s subsurface and surviving by ‘breathing rocks’ – especially minerals of iron.

Iron respiration is one of the most common respiratory processes in oxygen-free habitats and therefore has wide environmental significance.

Prof Richardson said: “We discovered that the bacteria can construct tiny biological wires that extend through the cell walls and allow the organism to directly contact, and conduct electrons to, a mineral. This means that the bacteria can release electrical charge from inside the cell into the mineral, much like the earth wire on a household plug.”

(Photo: UEA)

University of East Anglia

SMALLER IS BETTER FOR FINGER SENSITIVITY

People who have smaller fingers have a finer sense of touch, according to new research in the Dec. 16 issue of The Journal of Neuroscience. This finding explains why women tend to have better tactile acuity than men: on average, women have smaller fingers.

“Neuroscientists have long known that some people have a better sense of touch than others, but the reasons for this difference have been mysterious,” said Daniel Goldreich, PhD, of McMaster University in Ontario, one of the study’s authors. “Our discovery reveals that one important factor in the sense of touch is finger size.”

To learn why the sexes have different finger sensitivity, the authors first measured index fingertip size in 100 university students. Each student’s tactile acuity was then tested by pressing progressively narrower parallel grooves against a stationary fingertip — the tactile equivalent of the optometrist’s eye chart. The authors found that people with smaller fingers could discern tighter grooves.

“The difference between the sexes appears to be entirely due to the relative size of the person’s fingertips,” said Ethan Lerner, MD, PhD, of Massachusetts General Hospital, who is unaffiliated with the study. “So, a man with fingertips that are smaller than a woman’s will be more sensitive to touch than the woman.”

The authors also explored why smaller fingers are more sensitive. Tinier digits likely have more closely spaced sensory receptors, the authors concluded. Several types of sensory receptors line the skin’s interior, and each detects a specific kind of outside stimulation. Some receptors, named Merkel cells, respond to static indentations (like pressing parallel grooves), while others capture vibrations or movement.

When the skin is stimulated, activated receptors signal the central nervous system, where the brain processes the information and generates a picture of what a surface “feels” like. Much like pixels in a photograph, each skin receptor sends an aspect of the tactile image to the brain — more receptors per inch supply a clearer image.

To find out whether receptors are more densely packed in smaller fingers, the authors measured the distance between sweat pores in some of the students, because Merkel cells cluster around the bases of sweat pores. People with smaller fingers had greater sweat pore density, which means their receptors are probably more closely spaced.

“Previous studies from other laboratories suggested that individuals of the same age have about the same number of vibration receptors in their fingertips. Smaller fingers would then have more closely spaced vibration receptors,” Goldreich said. “Our results suggest that this same relationship between finger size and receptor spacing occurs for the Merkel cells.”
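
As a rough illustration of this reasoning (not a calculation from the study), the short sketch below assumes a fixed number of receptors per fingertip; the receptor count and fingertip widths are hypothetical round numbers chosen only to show how spacing scales with finger size.

    # Illustrative scaling only, with hypothetical numbers: if every fingertip
    # carries roughly the same number of receptors, the spacing between them
    # grows in proportion to fingertip width.
    import math

    RECEPTORS_PER_FINGERTIP = 320          # assumed fixed count (hypothetical)

    def receptor_spacing_mm(fingertip_width_mm):
        """Mean center-to-center spacing if receptors tile a square patch."""
        patch_area = fingertip_width_mm ** 2
        return math.sqrt(patch_area / RECEPTORS_PER_FINGERTIP)

    for width_mm in (14.0, 16.0, 18.0):    # hypothetical fingertip widths
        print(f"{width_mm:.0f} mm fingertip -> {receptor_spacing_mm(width_mm):.2f} mm spacing")

Because spacing shrinks as fingertip width shrinks when the receptor count is held fixed, smaller fingertips pack the same receptors more tightly, which is the direction of the effect Goldreich describes.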

Whether the total number of Merkel cell clusters remains fixed in adults, and how the sense of touch changes in children as they age, are still unknown. Goldreich and his colleagues plan to determine how tactile acuity changes as a finger grows and its receptors grow farther apart.

Society for Neuroscience

UF RESEARCHER HELPS REVEAL ANCIENT ORIGINS OF MODERN OPOSSUM

A University of Florida researcher has co-authored a study tracing the evolution of the modern opossum back to the extinction of the dinosaurs and finding evidence to support North America as the center of origin for all living marsupials.

The study, published in PLoS ONE on Dec. 16, shows that peradectids, a family of marsupials known from fossils mostly found in North America and Eurasia, are a sister group of all living opossums. The findings are based in part on high-resolution CT scans of a 55-million-year-old skull found in freshwater limestone from the Bighorn Basin of Wyoming.

“The extinction of the dinosaurs was a pivotal moment in the evolution of mammals,” said Jonathan Bloch, study co-author and associate curator of vertebrate paleontology at UF’s Florida Museum of Natural History. “We’re tracing the beginnings of a major group of mammals that began in North America.”

Opossum-like peradectids first appeared on the continent about 65 million years ago, at the time of the Cretaceous–Paleogene extinction event, which killed the dinosaurs.

“North America is a critical area for understanding marsupial and opossum origins because of its extensive and varied fossil record,” said lead author Inés Horovitz, an assistant adjunct professor at the University of California, Los Angeles. “Unfortunately, most of its species are known only from teeth.”

The study also analyzes two 30-million-year-old skeletons of Herpetotheriidae, the sister group of all living marsupials.

Based on fossil evidence from the skull and two skeletons, the study’s authors concluded the evolutionary split between the ancestor of opossums and the ancestor of all other living marsupials occurred at least 65 million years ago, Horovitz said.

Marsupials migrated between North and South America until the two continents separated after the end of the Cretaceous period. Marsupials in South America diversified and also migrated into Antarctica and Australia, which were still connected at that time, Bloch said.

North American marsupials went extinct during the early Miocene, about 20 million years ago. But after the Isthmus of Panama emerged to reconnect North and South America 3 million years ago, two marsupials made it back to North America: the Virginia opossum (Didelphis virginiana), a common resident in the Southeast today, and the southern opossum (Didelphis marsupialis), which lives as far north as Mexico.

The study describes a new peradectid species, Mimoperadectes houdei, based on a relatively complete fossil skull. The high-resolution CT scan of the skull gave researchers a large amount of information about the animal’s internal anatomy. The ear, in particular, provides researchers with information on skull anatomy and clues about the animal’s locomotion, Bloch said.

The scan showed the new species shared enough common traits with living opossums to indicate an evolutionary relationship. Some predictions about that relationship could have been made from fossil teeth, Bloch said, “but this provides a much stronger foundation for that conclusion.”

Most North American marsupials living in the Paleocene and early Eocene (56 million to 48 million years ago) were small-bodied animals. But M. houdei approached the body size of some opossums living today.

“You would probably recognize it as an opossum, but it wouldn’t look quite right,” Bloch said.

The skull came from the same limestone deposits in Wyoming as the primitive primate skull Bloch and other researchers used to map an early primate brain with CT scans in a study published earlier this year.

“In parts of North America today, opossums are one of the most commonly observed mammals around,” Bloch said. “This fossil skull shows its roots going back to the extinction of the dinosaurs. This is literally the fossil that shows us the ancestry of that animal.”

The study’s examination of the two skeletons gives a first glimpse into the form and structure of primitive marsupials and shows that they were more terrestrial than modern opossums. The skeletons came from the late Oligocene and were found in the White River Badlands of Wyoming.

(Photo: Jeff Gage)

University of Florida

3-D SOLAR CELL THAT USES "TOWERS" TO BOOST EFFICIENCY WINS PATENTS

A three-dimensional solar cell design that uses micron-scale “towers” to capture nearly three times as much light as flat solar cells made from the same materials has been awarded broad patent protection in both China and Australia. Modeling suggests that the 3-D cell could boost power production by as much as 300 percent compared to conventional solar cells.

Because it can capture more power from a given area, the 3-D design could be useful for powering satellites, cell phones, military equipment and other applications that have a limited surface area. Developed at the Georgia Tech Research Institute (GTRI), the "three dimensional multi-junction photovoltaic device" uses its 3-D surface structure to increase the likelihood that every photon striking it will produce energy.

"One problem with conventional flat solar cells is that the sunlight hits a flat surface and can bounce off, so the light only has one chance to be absorbed and turned into electricity," explained John Bacon, president of IP2Biz®, an Atlanta company that has licensed the technology from GTRI. "In the GTRI 3-D solar cell, we build a nanometer-scale version of Manhattan, with streets and avenues of tiny light-capturing structures similar to tall buildings. The sunlight bounces from building to building and produces more electricity."

The arrays of towers on the 3-D solar cell can increase the surface area by several thousand percent, depending on the size and density of the structures.

"Conventional cells have to be very large to make adequate amounts of electricity, and that limits their applications," Bacon explained. "The large surface area of our 3-D cell means that applications from satellites to cell phones will be more practical since we can pack so much light gathering power into a small footprint."

The three dimensional structure also means that the cells don't have to be aimed directly at the sun to capture sunlight efficiently, Bacon added. Conventional solar cells work best when the sunlight hits them at a narrow range of angles, but the new 3-D system remains efficient regardless of the angle at which the light hits.

The tower structures on the GTRI solar cells are about 100 microns tall, 40 microns by 40 microns square and 50 microns apart, and are grown from arrays containing millions of vertically aligned carbon nanotubes. The nanotubes primarily serve as the structure on which current-generating photovoltaic p/n coatings are applied.
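
Using the tower dimensions quoted above, a quick back-of-the-envelope estimate (a sketch, not GTRI's own analysis) shows how much extra surface the 3-D geometry exposes compared with a flat cell of the same footprint.

    # Back-of-the-envelope surface-area gain for the tower geometry described
    # above (100-micron-tall, 40 x 40 micron towers separated by 50 microns).
    # This is an illustrative estimate, not GTRI's model.
    TOWER_HEIGHT = 100.0   # microns
    TOWER_SIDE = 40.0      # microns
    GAP = 50.0             # microns between neighboring towers

    pitch = TOWER_SIDE + GAP                  # one tower per 90 x 90 micron unit cell
    flat_area = pitch ** 2                    # flat cell with the same footprint
    sidewall_area = 4 * TOWER_SIDE * TOWER_HEIGHT
    textured_area = flat_area + sidewall_area # the tower top replaces the footprint it covers

    print(f"flat cell:  {flat_area:.0f} square microns per unit cell")
    print(f"3-D cell:   {textured_area:.0f} square microns per unit cell")
    print(f"area ratio: {textured_area / flat_area:.1f}x")

With these particular dimensions the geometric gain is roughly threefold, in line with the "nearly three times as much light" figure; the "several thousand percent" increases mentioned above would require taller or more densely packed towers.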

"The carbon nanotubes are like the framing inside of buildings, and the photovoltaic materials are like the outer skin of the buildings," said Tom Smith, president of 3-D Solar LLC, a company formed to commercialize the cells. "Within the three-dimensional structures, multiple materials could be used to create the physical framing. Carbon nanotubes were used in the original solar cells, but they are not required for the technology to work."

The 3-D solar cells were developed in the laboratory of Jud Ready, a GTRI senior research engineer. Tests comparing the 3-D solar cells produced in Ready's lab with traditional planar cells produced from the same materials showed an increase in power generation, Smith said.

The researchers chose to make their prototype cells from cadmium materials because they were familiar with them from other research. However, a broad range of photovoltaic materials could also be used, and selecting the best material for specific applications will be the goal of future research.

Fabrication of the cells begins with a silicon wafer, which also serves as the solar cell's bottom junction. The researchers first coat the wafer with a thin layer of iron using a photolithography process that can create a wide variety of patterns. The patterned wafer is then placed into a furnace heated to approximately 700 degrees Celsius.

Hydrocarbon gases are then flowed into the furnace, where the carbon and hydrogen separate. In a process known as chemical vapor deposition, the carbon grows arrays of multi-walled carbon nanotubes atop the patterns created by the iron particles.

Once the carbon nanotube towers have been grown, the researchers use a process known as molecular beam epitaxy to coat the nanotube arrays with cadmium telluride (CdTe) and cadmium sulfide (CdS), which serve as the p-type and n-type photovoltaic layers. Atop that, a thin coating of indium tin oxide, a clear conducting material, is added to serve as the cell's top electrode.

In the finished solar cells, the carbon nanotube arrays serve both as support for the 3-D arrays and as a conductor connecting the photovoltaic materials to the silicon wafer.

The 3-D solar cells were described in the March 2007 issue of the journal JOM, published by the Minerals, Metals and Materials Society, and in the Journal of Applied Physics in 2008. The research leading to their development was supported by the Air Force Office of Scientific Research and the Air Force Research Laboratory.

Beyond the patents in China and Australia, IP2Biz has applied for protection in the United States, Canada, Europe, Korea and India, Smith noted. The patents granted so far apply to any photovoltaic application in which three dimensional structures are used to capture light bouncing off them, he added.

"The 3-D photovoltaic cell could be of great value in satellite, cell phone and defense applications given its order of magnitude reduction in footprint, coupled with the potential for increased power production compared to planar cells," Smith added. "We are very pleased with the level of interest in licensing or acquiring this innovation as means of addressing the world's growing need for energy."

(Photo: GIT)

Georgia Institute of Technology

EARTH'S ATMOSPHERE CAME FROM OUTER SPACE, FIND SCIENTISTS

The gases which formed the Earth's atmosphere - and probably its oceans - did not come from inside the Earth but from outer space, according to a study by University of Manchester and University of Houston scientists.

The report, published in the prestigious international journal 'Science', means that textbook images of ancient Earth with huge volcanoes spewing gas into the atmosphere will have to be rethought.

According to the team, the age-old view that volcanoes were the source of the Earth's earliest atmosphere must be put to rest.

Using world-leading analytical techniques, the team of Dr Greg Holland, Dr Martin Cassidy and Professor Chris Ballentine tested volcanic gases to uncover the new evidence.

The research was funded by the Natural Environment Research Council (NERC).

"We found a clear meteorite signature in volcanic gases," said Dr Greg Holland the project's lead scientist.

"From that we now know that the volcanic gases could not have contributed in any significant way to the Earth's atmosphere.

"Therefore the atmosphere and oceans must have come from somewhere else, possibly from a late bombardment of gas and water rich materials similar to comets.

"Until now, no one has had instruments capable of looking for these subtle signatures in samples from inside the Earth - but now we can do exactly that."

The techniques enabled the team to measure tiny quantities of the unreactive volcanic trace gases krypton and xenon, which revealed an isotopic 'fingerprint' matching that of meteorites and distinct from that of 'solar' gases.

The study is also the first to establish the precise composition of the krypton present in the Earth's mantle.

Project director Prof Chris Ballentine of The University of Manchester, said: "Many people have seen artist's impressions of the primordial Earth with huge volcanoes in the background spewing gas to form the atmosphere.

"We will now have to redraw this picture."

University of Manchester

ANCIENT PYGMY SEA COW DISCOVERED

The discovery of a Middle Eocene (48.6-37.2 million years ago) sea cow fossil by McGill University professor Karen Samonds has culminated in the naming of a new species.

This primitive "dugong" is among the world's first fully-aquatic sea cows, having evolved from terrestrial herbivores that began exploiting coastal waters. Within this ancient genus, the newly discovered species is unusual as it is the first species known from the southern hemisphere (its closest relatives are from Egypt and India), and is extremely primitive in its skull morphology and dental adaptations. The fossil is a pivotal step in understanding Madagascar's evolutionary history - as it represents the first fossil mammal ever named from the 80-million-year gap in Madagascar's fossil record.

"The fossils of this ancient sea cow are unique in that it has a full set of relatively unspecialized teeth whereas modern sea cows have a reduced dentition specialized for eating sea grass, and most fossil species already show some degree of reduction. It may also be the first fully aquatic sea cow; confirmation will depend on recovering more of the skeleton, especially its limbs," says Samonds.

Samonds is a Curator at the Redpath Museum and an Assistant Professor in the Departments of Anatomy and Cell Biology, and the Faculty of Dentistry. Her discovery may be the tip of the iceberg in unlocking the secrets of the 80-million-year gap in Madagascar's fossil record. The presence of fossil sea cows, crocodiles, and turtles (which are generally associated with coastal environments) suggests that this fossil locality preserves an environment that was close to the coast, or even in an estuary (river mouth). These sediments may potentially yield fossils of marine, terrestrial and freshwater vertebrates: animals that lived in the sea as well as those that lived in forests, grasslands and rivers close to the ocean. Dr. Samonds plans to continue collecting fossils at this site, starting with a National Geographic-funded expedition in summer 2010.

"My hope with the discovery of these fossils is that they will illuminate how, when and from where Madagascar's modern animals arrived," said Samonds, "helping us understand how Madagascar accumulated such a bizarre and unique set of modern animals."

(Photo: McGill U.)

McGill University

AN ADVANCE IN SUPERCONDUCTING MAGNET TECHNOLOGY OPENS THE DOOR FOR MORE POWERFUL COLLIDERS

Preparing for as much as a 10-fold increase in the Large Hadron Collider’s luminosity within the next decade, U.S. scientists and engineers have demonstrated a powerful magnet based on an advanced superconducting material, which can produce magnetic fields strong enough to focus intense proton beams in the LHC’s upgraded interaction regions.

The Large Hadron Collider (LHC) at CERN has just started producing collisions, but scientists and engineers have already made significant progress in preparing for future upgrades beyond the collider’s nominal design performance, including a 10-fold increase in collision rates by the end of the next decade and, eventually, higher-energy beams.

In a test on December 4, a focusing magnet built by members of the U.S. Department of Energy’s multi-laboratory LHC Accelerator Research Program (LARP), using an advanced superconducting material, achieved the goal of a magnetic field strong enough to focus intense proton beams in the upgraded LHC interaction regions.

“This success has been made possible by the enthusiasm and dedication of many scientists, engineers, and technicians at the collaborating laboratories,” said Eric Prebys of the Fermi National Accelerator Laboratory, who heads LARP, “and by the guidance and continuous support of the U.S. Department of Energy and the encouragement and contributions of CERN and the entire accelerator magnet community.”

LARP is a collaboration of Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory, and the SLAC National Accelerator Laboratory, founded by DOE in 2003 to address the challenge of planned upgrades that will significantly increase the LHC’s luminosity.

Increased luminosity will mean more collision events in the LHC’s interaction regions; the major experiments will thus be able to collect more data in less time. But it will also mean that the “inner triplet” magnets, which focus the beams to tiny spots at the interaction regions and are within 20 meters of the collision points, will be subjected to even more radiation and heat than they are presently designed to withstand.

The superconducting inner triplet magnets now in place at the LHC operate at the limits of well-established niobium-titanium (NbTi) magnet technology. One of the LARP goals is to develop upgraded magnets using a different superconducting material, niobium tin (Nb3Sn). Niobium tin is superconducting at a higher temperature than niobium titanium and therefore has a greater tolerance for heat; it can also be superconducting at a magnetic field more than twice as strong.

Unlike niobium titanium, however, niobium tin is brittle and sensitive to pressure; to become a superconductor when cold it must be reacted at very high temperatures, 650 to 700 degrees Celsius. Advanced magnet design and fabrication methods are needed to meet these challenges.

The Department of Energy’s Office of High Energy Physics (HEP) has long supported niobium-tin magnet research at several national laboratories through its Advanced Accelerator Technology Program. The HEP Conductor Development Program, a collaboration among national labs, universities, and industry created in 1998, was able to double the performance of niobium tin at high fields, which led to the fabrication of model coils up to four meters long and short dipole magnets with fields up to 16 tesla (about twice the nominal field of the LHC), the necessary preconditions for the LARP program.

The LARP effort initially centered on a series of short quadrupole models at Fermilab and Berkeley Lab and, in parallel, a four-meter-long magnet based on racetrack coils, built at Brookhaven and Berkeley Lab. The next step involved the combined resources of all three laboratories: the fabrication of a long, large-aperture quadrupole magnet.

In 2005 DOE, CERN, and LARP agreed to set a goal of reaching, before the end of 2009, a gradient, or rate of increase in field strength, of 200 tesla per meter (200 T/m) in a four-meter-long superconducting quadrupole magnet with a 90-millimeter bore for housing the beam pipe.
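
To put those numbers in context (a rough check, not a LARP design calculation): in an ideal quadrupole the field magnitude grows linearly with distance from the axis, B = g x r, so the gradient and the bore size together set the field at the edge of the aperture; the peak field in the coils themselves is somewhat higher still.

    # Rough check, not a LARP design number: the field at the bore edge of an
    # ideal quadrupole is the gradient times the bore radius, B = g * r.
    GRADIENT_T_PER_M = 200.0     # achieved gradient, from the text
    BORE_DIAMETER_M = 0.090      # 90-millimeter aperture, from the text

    bore_radius_m = BORE_DIAMETER_M / 2
    field_at_bore_edge_t = GRADIENT_T_PER_M * bore_radius_m
    print(f"field at bore edge: {field_at_bore_edge_t:.1f} tesla")   # about 9 T

Fields of that order, and the still higher peak fields inside the coils, are what make the greater field tolerance of niobium tin attractive for these magnets.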

This goal was met on December 4 by LARP’s first “long quadrupole shell” model magnet. The magnet’s superconducting coils performed well, as did its mechanical structure, based on a thick aluminum cylinder (shell) that supports the superconducting coils against the large forces generated by high magnetic fields and electrical currents. The magnet’s ability to withstand quenches (sudden transitions to normal conductivity, with resulting heating) was also excellent.

“Congratulations on behalf of CERN for this achievement, a milestone both toward the LHC luminosity upgrade and for accelerator technology advancement in general, made possible by the high technical quality of the LARP teams and leadership,” said Lucio Rossi, head of the Magnets, Superconductors, and Cryostats group in CERN’s Technology Department, in a message to Fermilab’s Giorgio Ambrosio, head of the LARP Long Quadrupole team, which performed the successful development and test.

Rossi also praised the “strategic vision of leaders in the DOE laboratories” in initiating LARP, noting the contributions of Fermilab’s Jim Strait and Peter Limon, Berkeley Lab’s Steve Gourlay, and Bruce Strauss from DOE. “From my perspective, without them LARP would not have been started.”

Although the successful test of the long model was a major milestone, it is only one of several steps needed to fully qualify the new technology for use in the LHC. One goal is to further increase the field gradient in the long quadrupole, both to explore the limits of the technology and to reproduce the performance levels demonstrated in short models. A second goal is to address other critical accelerator requirements, such as field quality and alignment, through a new series of models with an even larger aperture (120 millimeters).

The long quadrupole shell magnet’s conductor – high-performance niobium-tin wire meeting stringent requirements – was manufactured by Oxford Superconducting Technology of New Jersey. The wire was cabled and insulated at Berkeley Lab and qualified at Brookhaven and Fermilab. The superconducting coils were wound at Fermilab and underwent high temperature reaction at Brookhaven and Fermilab, and their instrumentation was completed at Berkeley Lab. The magnet supporting structure was designed and pre-assembled at Berkeley Lab. The final magnet assembly was done at Berkeley Lab, and the cold test was performed at Fermilab’s Vertical Magnet Test Facility.

The Long Quadrupole task leaders are, for coil fabrication, Fred Nobrega of Fermilab and Jesse Schmalzle of Brookhaven; for the supporting structure and magnet assembly, Paolo Ferracin of Berkeley Lab; for instrumentation and quench protection, Helene Felice of Berkeley Lab; and for test preparations and test, Guram Chlachidze of Fermilab. Peter Wanderer of Brookhaven led the effort during its most critical phase and was recently succeeded by GianLuca Sabbi of Berkeley Lab, head of the LARP magnet research and development program.

(Photo: LBNL)

SUZAKU CATCHES RETREAT OF A BLACK HOLE'S DISK

Studies of one of the galaxy's most active black-hole binaries reveal a dramatic change that will help scientists better understand how these systems expel fast-moving particle jets.

Binary systems where a normal star is paired with a black hole often produce large swings in X-ray emission and blast jets of gas at speeds exceeding one-third that of light. What fuels this activity is gas pulled from the normal star, which spirals toward the black hole and piles up in a dense accretion disk.

"When a lot of gas is flowing, the dense disk reaches nearly to the black hole," said John Tomsick at the University of California, Berkeley. "But when the flow is reduced, theory predicts that gas close to the black hole heats up, resulting in evaporation of the innermost part of the disk." Never before have astronomers shown an unambiguous signature of this transformation.

To look for this effect, Tomsick and an international group of astronomers targeted GX 339-4, a low-mass X-ray binary located about 26,000 light-years away in the constellation Ara. There, every 1.7 days, an evolved star no more massive than the sun orbits a black hole estimated at 10 solar masses. With four major outbursts in the past seven years, GX 339-4 is among the most dynamic binaries in the sky.

In September 2008, nineteen months after the system's most recent outburst, the team observed GX 339-4 using the orbiting Suzaku X-ray observatory, which is operated jointly by the Japan Aerospace Exploration Agency and NASA. At the same time, the team also observed the system with NASA's Rossi X-ray Timing Explorer satellite.

Instruments on both satellites indicated that the system was faint but in an active state, when black holes are known to produce steady jets. Radio data from the Australia Telescope Compact Array confirmed that GX 339-4's jets were indeed powered up when the satellites observed.

Despite the system's faintness, Suzaku was able to measure a critical X-ray spectral line produced by the fluorescence of iron atoms. "Suzaku's sensitivity to iron emission lines and its ability to measure the shapes of those lines let us see a change in the accretion disk that only happens at low luminosities," said team member Kazutaka Yamaoka at Japan's Aoyama Gakuin University.

X-ray photons emitted from disk regions closest to the black hole naturally experience stronger gravitational effects. The X-rays lose energy and produce a characteristic signal. At its brightest, GX 339-4’s X-rays can be traced to within about 20 miles of the black hole. But the Suzaku observations indicate that, at low brightness, the inner edge of the accretion disk retreats as much as 600 miles from the black hole.
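
As a rough consistency check (not part of the Suzaku analysis), the Schwarzschild radius of a 10-solar-mass black hole, r_s = 2GM/c^2, can be compared with the roughly 20-mile scale quoted for the disk's innermost extent when the system is bright.

    # Rough consistency check, not from the paper: Schwarzschild radius of a
    # 10-solar-mass black hole, r_s = 2 * G * M / c**2.
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8            # speed of light, m/s
    M_SUN = 1.989e30       # solar mass, kg

    mass_kg = 10 * M_SUN
    r_s_m = 2 * G * mass_kg / C**2
    print(f"Schwarzschild radius: {r_s_m / 1000:.1f} km (~{r_s_m / 1609.34:.0f} miles)")

The result, close to 20 miles, matches the bright-state figure above, and the 600-mile low-brightness figure is then roughly 30 times that radius, showing how far the dense disk recedes.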

"We see emission only from the densest gas, where lots of iron atoms are producing X-rays, but that emission stops close to the black hole -- the dense disk is gone," explained Philip Kaaret at the University of Iowa. "What's really happening is that, at low accretion rates, the dense inner disk thins into a tenuous but even hotter gas, rather like water turning to steam."

The dense inner disk has a temperature of about 20 million degrees Fahrenheit, but the thin evaporated disk may be more than a thousand times hotter.

The study, which appears in the Dec. 10 issue of The Astrophysical Journal Letters, confirms the presence of low-density accretion flow in these systems. It also shows that GX 339-4 can produce jets even when the densest part of the disk is far from the black hole.

"This doesn't tell us how jets form, but it does tell us that jets can be launched even when the high-density accretion flow is far from the black hole," Tomsick said. "This means that the low-density accretion flow is the most essential ingredient for the formation of a steady jet in a black hole system."

(Photo: ESO/L. Calçada)

NASA

GREENLAND GLACIERS: WHAT LIES BENEATH

Scientists who study the melting of Greenland’s glaciers are discovering that water flowing beneath the ice plays a much more complex role than they previously imagined.

Researchers previously thought that meltwater simply lubricated ice against the bedrock, speeding the flow of glaciers out to sea.

Now, new studies have revealed that the effect of meltwater on acceleration and ice loss -- through fast-moving outlet glaciers that connect the inland ice sheet to the ocean -- is much more complex. This is because a kind of plumbing system evolves over time at the base of the ice, expanding and shrinking with the volume of meltwater.

Researchers are now developing new low-cost technologies to track the flow of glaciers and get a glimpse of what lies beneath the ice.

As ice melts, water trickles down into the glacier through crevices large and small, and eventually forms vast rivers and lakes under the ice, explained Ian Howat, assistant professor of earth sciences at Ohio State University. Researchers once thought that this sub-glacial water was to blame for sudden speed-ups of outlet glaciers along the Greenland coasts.

“We’ve come to realize that sub-glacial meltwater is not responsible for the big accelerations that we’ve seen for the last ten years,” Howat said. “Changes in the glacial fronts, where the ice meets the ocean, are the real key.”

“That doesn’t mean that meltwater is not important,” he continued. “It plays a role along these glacial fronts -- it’s just a very complex role, one that makes it hard for us to predict the future.”

In a press conference at the American Geophysical Union (AGU) meeting in San Francisco on Wednesday, December 16, 2009, Howat joined colleagues from the University of Colorado-Boulder/NOAA Cooperative Institute for Research in Environmental Sciences (CIRES) and NASA’s Jet Propulsion Laboratory (JPL) to discuss three related projects -- all of which aim to uncover how this meltwater interacts with ice and the ocean.

Their work has implications for ice loss elsewhere in the world -- including Antarctica -- and could ultimately lead to better estimates of future sea level rise due to climate change.

Howat leads a team of researchers who are planting inexpensive global positioning system (GPS) devices on the ice in Greenland and Alaska to track glacial flow. Designed to transmit their data off the ice, these systems have to be inexpensive, because there’s a high likelihood that they will never be recovered from the highly crevassed glaciers.

Howat will describe the team’s early results at the AGU meeting, and give an overview of what researchers have learned about meltwater so far.

John Adler, a doctoral student at CIRES, works to calculate the volume of water in lakes on the top of the ice sheet. These lakes periodically drain, and the entire water volume disappears into the ice. He uses small unmanned aerial vehicles to measure the ice’s surface roughness -- an indication of where cracks may form to enable this drainage to happen. Other members of his team are releasing GPS-tagged autonomous probes into the meltwater itself, to follow the water all the way down to the base of the ice sheet and out to sea.

“My tenet is pushing the miniaturization of technology, so that small autonomous platforms -- in the sea, on the surface, or in the air -- can reliably gather scientific information in remote regions,” Adler said.

All these efforts require cutting-edge technology, and that’s where Alberto Behar of JPL comes in. An Investigation Scientist on the upcoming Mars rover project, Behar designs the GPS units that will give researchers the data they need.

Howat’s team placed six units on outlet glaciers in Greenland last year, and this year they placed three in Greenland and three in Alaska. The units offer centimeter-scale measurements of ice speed, and Behar designed the power and communications systems to keep the overall cost per unit as inexpensive as possible.
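
A minimal sketch of how ice speed can be recovered from such GPS fixes (the daily positions below are invented for illustration; this is not the team's processing code): fit a straight line to along-flow displacement versus time and read the speed off the slope.

    # Minimal sketch with invented numbers, not the glacier team's pipeline:
    # estimate ice speed by least-squares fitting displacement against time.
    days = [0, 1, 2, 3, 4, 5, 6]                                   # hypothetical daily fixes
    displacement_m = [0.00, 0.31, 0.58, 0.92, 1.17, 1.50, 1.79]    # hypothetical along-flow motion, meters

    n = len(days)
    mean_t = sum(days) / n
    mean_d = sum(displacement_m) / n
    slope_m_per_day = sum((t - mean_t) * (d - mean_d) for t, d in zip(days, displacement_m)) \
        / sum((t - mean_t) ** 2 for t in days)

    print(f"speed: {slope_m_per_day:.2f} m/day (~{slope_m_per_day * 365:.0f} m/yr)")

With these made-up fixes the slope comes out near 0.3 meters per day, roughly 100 meters per year, the scale of the inland flow changes Howat describes below; centimeter-level positioning is what makes such small daily displacements measurable at all.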

Howat has found that glacial meltwater at the base of the ice sheet has little influence on ice loss along the coast -- most of the time.

All over Greenland, meltwater collects beneath the ice, gradually carving out an intricate network of passageways called moulins. The moulins form an ever-changing plumbing system that regulates where water collects between the ice and bedrock at different times of the year. According to Howat, meltwater increases as ice melts in the summer, and decreases as water re-freezes in winter.

In the early summer, the sudden influx of water overwhelms the subglacial drainage system, causing the water pressure to increase and the ice to lift off its bed and flow faster, to the tune of 100 meters per year, he said. The water passageways quickly expand, however, and reduce the water pressure so that by mid-summer the glaciers are flowing slowly again.

Inland, this summertime boost in speed is very noticeable, since the glaciers are moving so slowly in general.

But outlet glaciers along the coast are already flowing out to sea at rates as high as 10 kilometers per year -- a rate too high to be caused by the meltwater.

“So you have this inland ice moving slowly, and you have these outlet glaciers moving 100 times faster. Those outlet glaciers are feeling a small acceleration from the meltwater, but overall the contribution is negligible,” Howat said.

His team looked for correlations between times of peak meltwater in the summer and times of sudden acceleration in outlet glaciers, and found none. “Some of these outlet glaciers accelerated in the wintertime, and some accelerated over long periods of time. The changes didn’t correlate with any time that you would expect there to be more melt,” he added.

So if meltwater is not responsible for rapidly moving outlet glaciers, then what is responsible? Howat suspects that the ocean is the cause.

Through computer modeling, he and his colleagues have determined that friction between the glacial walls and the fjords that surround them is probably what holds outlet glaciers in place, and sudden increases in ocean water temperature cause the outlet glaciers to speed up.

Howat did point out two cases in which meltwater can have a dramatic effect on ice loss along the coast: it can expand within cracks to form stress fractures, or it can bubble out from under the base of the ice sheet and stir up the warmer ocean water. Both circumstances can cause large pieces of the glacier to break off.

At one point, he and his colleagues witnessed the latter effect first hand. They detected a sudden decrease of sub-glacial meltwater inland, only to see a giant plume of dirty water burst out from under the ice at the nearby water’s edge.

The dirty water was freshwater -- glacial meltwater. It sprayed out from between the glacier and the bedrock “like a fire hose,” Howat said. Since saltwater is more dense than freshwater, the freshwater bubbled straight up to the surface. “This was the equivalent of the pipes bursting on all that plumbing beneath the ice, releasing the pressure.”

That kind of turbulence stirs up the warm ocean water, and can cause more ice to melt, he said.

“So you can’t just say, ‘if you increase melting, you increase glacial speed.’ The relationship is much more complex than that, and since the plumbing system evolves over time, it’s especially hard to pin down.”

(Photo: Alberto Behar, Jet Propulsion Laboratory)

Ohio State University
