Tuesday, June 30, 2009

SANDS OF GOBI DESERT YIELD NEW SPECIES OF NUT-CRACKING DINOSAUR

Plants or meat: That's about all that fossils ever tell paleontologists about a dinosaur's diet. But the skull characteristics of a new species of parrot-beaked dinosaur and its associated gizzard stones indicate that the animal fed on nuts and/or seeds. These characteristics present the first solid evidence of nut-eating in any dinosaur.

"The parallels in the skull to that in parrots, the descendants of dinosaurs most famous for their nut-cracking habits, is remarkable," said Paul Sereno, a paleontologist at the University of Chicago and National Geographic Explorer-in-Residence. Sereno and two colleagues from the People's Republic of China announce their discovery June 17 in the Proceedings of the Royal Society B.

The paleontologists discovered the new dinosaur, which they've named Psittacosaurus gobiensis, in the Gobi Desert of Inner Mongolia in 2001, and spent years preparing and studying the specimen. The dinosaur is approximately 110 million years old, dating to the mid-Cretaceous Period.

The quantity and size of gizzard stones in birds correlates with dietary preference. Larger, more numerous gizzard stones point to a diet of harder food, such as nuts and seeds. "The psittacosaur at hand has a huge pile of stomach stones, more than 50, to grind away at whatever it eats, and this is totally out of proportion to its three-foot body length," Sereno explained.

Technically speaking, the dinosaur is also important because it displays a whole new way of chewing, which Sereno and co-authors have dubbed "inclined-angle" chewing. "The jaws are drawn backward and upward instead of just closing or moving fore and aft," Sereno said. "It remains to be seen whether some other plant-eating dinosaurs or other reptiles had the same mechanism."

The unusual chewing style has solved a major mystery regarding the wear patterns on psittacosaur teeth. Psittacosaurs sported rigid skulls, but their teeth show the same sliding wear patterns as plant-eating dinosaurs with flexible skulls; the backward-and-upward stroke of inclined-angle chewing produces that sliding wear without requiring any flexibility in the skull.

(Photo: Todd Marshall)

University of Chicago

NEW STUDY CLOSES IN ON GEOLOGIC HISTORY OF EARTH’S DEEP INTERIOR

By using a supercomputer to virtually squeeze and heat iron-bearing minerals under conditions that would have existed when the Earth crystallized from an ocean of magma to its solid form 4.5 billion years ago, two UC Davis geochemists have produced the first picture of how different isotopes of iron were initially distributed in the solid Earth.

The discovery could usher in a wave of investigations into the evolution of Earth’s mantle, a layer of material about 1,800 miles deep that extends from just beneath the planet’s thin crust to its metallic core.

"Now that we have some idea of how these isotopes of iron were originally distributed on Earth,” said study senior author James Rustad, a chancellor’s fellow and professor of geology, “we should be able to use the isotopes to trace the inner workings of Earth’s engine.”

A paper describing the study by Rustad and co-author Qing-zhu Yin, an associate professor of geology, was posted online by the journal Nature Geoscience on Sunday, June 14, in advance of print publication in July.

Sandwiched between Earth's crust and core, the vast mantle accounts for about 85 percent of the planet's volume. On a human time scale, this immense portion of our orb appears to be solid. But over millions of years, heat from the molten core and the mantle’s own radioactive decay cause it to slowly churn, like thick soup over a low flame. This circulation is the driving force behind the surface motion of tectonic plates, which builds mountains and causes earthquakes.

One source of information providing insight into the physics of this viscous mass is the four stable forms, or isotopes, of iron that can be found in rocks that have risen to Earth’s surface at mid-ocean ridges where seafloor spreading is occurring, and at hotspots like Hawaii’s volcanoes that poke up through the Earth’s crust. Geologists suspect that some of this material originates at the boundary between the mantle and the core some 1,800 miles beneath the surface.

“Geologists use isotopes to track physico-chemical processes in nature the way biologists use DNA to track the evolution of life,” Yin said.

Because the composition of iron isotopes in rocks will vary depending on the pressure and temperature conditions under which a rock was created, in principle, Yin said, geologists could use iron isotopes in rocks collected at hot spots around the world to track the mantle’s geologic history. But in order to do so, they would first need to know how the isotopes were originally distributed in Earth’s primordial magma ocean when it cooled down and hardened.

As a team, Yin and Rustad were the ideal partners to solve this riddle. Yin and his laboratory are leaders in the field of using advanced mass spectrometric analytical techniques to produce accurate measurements of the subtle variations in isotopic composition of minerals. Rustad is renowned for his expertise in using large computer clusters to run high-level quantum mechanical calculations to determine the properties of minerals.

The challenge the pair faced was to determine how the competing effects of extreme pressure and temperature deep in Earth’s interior would have affected the minerals in the lower mantle, the zone that stretches from about 400 miles beneath the planet’s crust to the core-mantle boundary. Temperatures up to 4,500 kelvin in the region reduce the isotopic differences between minerals to a minuscule level, while crushing pressures tend to alter the basic form of the iron atom itself, a phenomenon known as electronic spin transition.

Using Rustad’s powerful 144-processor computer, the two calculated the iron isotope composition of two minerals under a range of temperatures, pressures and different electronic spin states that are now known to occur in the lower mantle. The two minerals, ferroperovskite and ferropericlase, contain virtually all of the iron that occurs in this deep portion of the Earth.

These calculations were so complex that each series Rustad and Yin ran through the computer required a month to complete.

In the end, the calculations showed that extreme pressures would have concentrated iron’s heavier isotopes near the bottom of the crystallizing mantle.

It will be a eureka moment when these theoretical predictions are verified one day in geological samples that have been generated from the lower mantle, Yin said. But the logical next step for him and Rustad to take, he said, is to document the variation of iron isotopes in pure chemicals subjected to temperatures and pressures in the laboratory that are equivalent to those found at the core-mantle boundary. This can be achieved using lasers and a tool called a diamond anvil.

“Much more fun work lies ahead,” he said. “And that’s exciting.”

(Photo: Louise Kellogg, with James Rustad and Qing-zhu Yin/UCD)

UC Davis

SCIENTISTS FIND THAT SQUID'S BIOLUMINESCENCE COMES FROM EYE-RELATED GENES

Scientists have found that a small Hawaiian squid can hide itself by using an organ with the same genes found in its eye.

Using a process called bioluminescence, the squid can light up its underside to match the surrounding light from the sun. This disguises the squid in much the same way that it discharges black ink to cloak itself. The study was recently reported in the Proceedings of the National Academy of Sciences.

The squid, commonly called the Hawaiian bobtail squid, has a light organ that is totally separate from the eyes. The new finding is that this organ is light sensitive and uses some of the same genes as the squid's eye.

Todd Oakley, an evolutionary biologist at UC Santa Barbara, performed the evolutionary analysis of the genes of the squid. He confirmed that the genes in the light organ are similar to, or the same as, those in the eye of the squid.

"This is significant because it is an example of how existing components can be used in evolution to make something completely new," said Oakley. "These components existed for use in the eye and then got recruited for use in the light organ. The light organ resembles an eye in a lot of ways. It has a lens for focusing the light. It has the shape of an eye, and now we found that it has the sensitivity of an eye as well."

Oakley explained that the analogy used is "evolutionary tinkering." "Evolution acts a lot like a tinkerer and assembles what's available to make something new," he said.

The scientists gathered the small squid in shallow water in Hawaii, after sunset, when they are active. "We scoop them up with a net," said Oakley. "Then we bring them back, and keep male and female pairs together. Then they have lots of young that we can study in the lab." The squid are located in a lab at the University of Wisconsin.

"The reason that the squid are bioluminescent is for camouflage," said Oakley. "You can imagine that if you were lying on the bottom of the ocean looking up, there is a lot of light coming from above. When the squid passes over, it would cast a shadow; it would be pretty conspicuous."

To camouflage itself, the squid uses bioluminescence to match the light coming down from above, so that it does not cast a shadow.

The light itself is produced by bacteria housed within the squid's light organ. By confirming that many of the genes used in the eye are also used in the light organ, the scientists showed that the squid can actually detect the light it is producing.

The light comes from a chemical reaction that happens within the bacteria. The squid doesn't control the chemical reaction directly, but the squid can change its light organ to make it more or less open –– to let out more or less light, Oakley explained.

(Photo: University of Wisconsin)

UC Santa Barbara

NEW CU-BOULDER RESEARCH EFFORT DISCOVERS ANCIENT MAYA INTENSIVELY CULTIVATED MANIOC

A University of Colorado at Boulder team has uncovered an ancient and previously unknown Maya agricultural system -- a large manioc field intensively cultivated as a staple crop that was buried and exquisitely preserved under a blanket of ash by a volcanic eruption in present-day El Salvador 1,400 years ago.

Evidence shows the manioc field -- at least one-third the size of a football field -- was harvested just days before the eruption of the Loma Caldera volcano near San Salvador in roughly A.D. 600, said CU-Boulder anthropology Professor Payson Sheets, who is directing excavations at the ancient village of Ceren. The cultivated field of manioc was discovered adjacent to Ceren, which was buried under 17 feet of ash and is considered the best preserved ancient farming village in all of Latin America.

The ancient planting beds of the carbohydrate-rich tuber are the first and only evidence of an intensive manioc cultivation system at any New World archaeology site, said Sheets. While two isolated portions of the manioc field were discovered in 2007 following radar work and limited excavation, 18 large test pits dug in spring 2009 -- each measuring about 10 feet by 10 feet -- allowed the archaeologists to estimate the size of the field and assess the related agricultural activity that went on there.

Sheets said manioc pollen has been found at archaeological sites in Belize, Mexico and Panama, but it is not known whether it was cultivated as a major crop or was just remnants of a few garden plants. "This is the first time we have been able to see how ancient Maya grew and harvested manioc," said Sheets, who discovered Ceren in 1978.

Ash hollows in the manioc planting beds at Ceren left by decomposed plant material were cast in dental plaster by the team to preserve their shape and size, said Sheets. Evidence showed the field was harvested and then replanted with manioc stalk cuttings just a few days before the eruption of the volcano.

A few anthropologists have suspected that manioc tubers -- which can be more than three feet long and as thick as a man's arm -- were a dietary salvation for ancient, indigenous societies living in large cities in tropical Latin America. Corn, beans and squash have long been known to be staples of the ancient Maya, but they are sensitive to drought and require fertile soils, said Sheets.

"As 'high anxiety' crops, they received a lot of attention, including major roles in religious and cosmological activities of the Maya," said Sheets. "But manioc, which grows well in poor soils and is highly drought resistant did not. I like to think of manioc like an old Chevy gathering dust in the garage that doesn't get much attention, but it starts right up every time when the need arises."

Calculations by Sheets indicate the Ceren planting fields would have produced roughly 10 metric tons of manioc annually for the 100 to 200 villagers believed to have lived there. "The question now is what these people in the village were doing with all that manioc that was harvested all at once," he said. "Even if they were gorging themselves, they could not have consumed that much."

The CU-Boulder team also found the shapes and sizes of individual manioc planting ridges and walkways varied widely. "This indicates the individual farmers at Ceren had control over their families' fields and cultivated them the way they wanted, without an external higher authority telling them what to do and how to do it," he said.

The team also found that the manioc fields and adjacent cornfields at Ceren were oriented 30 degrees east of magnetic north -- the same orientation as the village buildings and the public town center, said Sheets. "The villagers laid out the agricultural fields and the town structures with the same orientation as the nearby river, showing the importance and reverence the Maya had for water," he said.

The volcano at Ceren shrouded adobe structures, thatched roofs, house beams, woven baskets, sleeping mats, garden tools and grain caches. The height of the corn stalks and other evidence indicate the eruption occurred early on an August evening, he said.

Because it is unlikely that the people of Ceren were alone in their intensive cultivation of manioc, Sheets and his colleagues are now investigating chemical and microscopic botanical evidence at other Maya archaeological sites that may be indicators of manioc cultivation and processing.

Sheets said Maya villagers living in the region today have a long tradition of cutting manioc roots into small chunks, drying them eight days, then grinding the chunks into a fine, flour-like powder known as almidón. Almidón can be stored almost indefinitely, and traditionally was used by indigenous people in the region for making tamales and tortillas and as a thickening agent for stews, he said.

Since indigenous peoples in tropical South America use manioc today to brew alcoholic beverages, including beer, the CU-Boulder team will be testing ceramic vessels recovered from various structures at Ceren for traces of manioc. To date, 12 structures have been excavated, and others detected by ground-penetrating radar remain buried, he said.

Sheets is particularly interested in vessels from a religious building at Ceren excavated in 1991. The structure contained such items as a deer headdress painted red, blue and white; a large, alligator-shaped painted pot; the bones of butchered deer; and evidence that large quantities of food items like meat, corn, beans and squash were prepared on-site and dispensed to villagers from the structure, said Sheets.

Ceren's residents apparently were participating in a spiritual ceremony in the building when the volcano erupted, and did not return to their adobe homes, which excavations showed were void of people and tied shut from the outside. "I think there may have been an emergency evacuation from the ceremonial building when the volcano erupted," he said. To date, no human remains have been found at Ceren.

(Photo: U. Colorado B.)

University of Colorado at Boulder

SCIENTISTS CREATE FIRST COMPREHENSIVE COMPUTER MODEL OF SUNSPOTS

In a breakthrough that will help scientists unlock mysteries of the sun and its impacts on Earth, scientists have created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty. The results are published in a paper in Science Express. The research was supported by the National Science Foundation (NSF).

The high-resolution simulations of sunspots open the way for scientists to learn more about the vast mysterious dark patches on the sun's surface, first studied by Galileo. Sunspots are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They are also linked to variations in solar output that can affect weather on Earth and exert a subtle influence on climate patterns.

"Understanding complexities in the solar magnetic field is key to 'space weather' forecasting," says Richard Behnke of NSF's Division of Atmospheric Sciences. "If we can model sunspots, we may be able to predict them and be better prepared for the potential serious consequences here on Earth of these violent storms on the sun."

Scientists at the National Center for Atmospheric Research (NCAR) in Boulder, Colo., collaborated with colleagues at the Max Planck Institute for Solar System Research (MPS) in Germany, building on a computer code that had been created at the University of Chicago.

"This is the first time we have a model of an entire sunspot," says lead paper author Matthias Rempel, a scientist at NCAR's High Altitude Observatory. "If you want to understand all the drivers of Earth's atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the sun as well as connections between solar output and Earth's atmosphere."

Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked to explain the complex structure of sunspots, whose numbers wax and wane over the 11-year solar cycle. Sunspots accompany intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth's atmosphere. The resulting damage to power grids, satellites and other sensitive technological systems takes an economic toll on a rising number of industries.

Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the sun. The new computer models capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions. They also capture the convective flow and movement of energy that underlie the sunspots, and which are not directly detectable by instruments.

The models suggest that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that's the consequence of convection in a magnetic field with varying properties.

The simulations can help scientists decipher the mysterious, subsurface forces in the sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.

To create the simulations, the research team designed a virtual, three-dimensional domain measuring about 31,000 miles by 62,000 miles, and about 3,700 miles in depth--an expanse as long as eight times Earth's diameter, and as deep as Earth's radius.

The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion grid points within the domain, each spaced about 10 to 20 miles apart.
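
As a rough check on the scale of that computation, the sketch below works out the number of grid points implied by the quoted domain. The grid spacings are our own illustrative choices within the 10-to-20-mile range mentioned above, not values taken from the paper.

# Back-of-envelope check of the simulation domain described above (Python).
domain_miles = (62_000, 31_000, 3_700)   # length, width, depth from the article
spacing_miles = (16.0, 16.0, 14.0)       # assumed spacings within the quoted 10-20 mile range

grid_points = 1
for extent, dx in zip(domain_miles, spacing_miles):
    grid_points *= int(extent / dx)

print(f"about {grid_points / 1e9:.1f} billion grid points")  # close to the 1.8 billion quoted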

They solved the equations on NCAR's new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second. The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically. The new models are far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region.

The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points closer together, but that would require more computing power than is currently available.

"Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the sun," says Michael Knölker, director of NCAR's High Altitude Observatory and a co-author of the paper. "With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the sun's surface."

(Photo: Matthias Rempel, NCAR)

National Science Foundation

AUTONOMOUS ROBOT DETECTS SHRAPNEL

Bioengineers at Duke University have developed a laboratory robot that can successfully locate a tiny piece of metal within flesh and guide a needle to its exact location – all without the need for human assistance.

The successful proof-of-feasibility experiments lead the researchers to believe that in the future, such a robot could not only help treat shrapnel injuries on the battlefield, but might also be used for such medical procedures as placing and removing radioactive "seeds" used in the treatment of prostate and other cancers.

In their latest experiments, the engineers started with a rudimentary tabletop robot whose "eyes" are a novel 3-D ultrasound technology developed at Duke. An artificial intelligence program served as the robot's "brain" by taking the real-time 3-D information, processing it and giving the robot specific commands to perform. In their simulations, the researchers used tiny (2 millimeter) pieces of needle because, like shrapnel, they are subject to magnetism.

"We attached an electromagnet to our 3-D probe, which caused the shrapnel to vibrate just enough that its motion could be detected," said A.J. Rogers, who just completed an undergraduate degree in bioengineering at Duke. "Once the shrapnel's coordinates were established by the computer, it successfully guided a needle to the site of the shrapnel."

Now that the robot has been shown to guide a needle to an exact location, it would simply be a matter of replacing the needle probe with a tiny tool, such as a grabber, the researchers said.

Rogers worked in the laboratory of Stephen Smith, director of the Duke University Ultrasound Transducer Group and senior member of the research team. The results of the experiments were published early online in the July issue of the journal IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control.

Since the researchers achieved positive results using a rudimentary robot and a basic artificial intelligence program, they are encouraged that simple and reasonably safe procedures will become routine in the near future as robot and artificial intelligence technology improves.

"We showed that in principle, the system works," Smith said. "It can be very difficult using conventional means to detect small pieces of shrapnel, especially in the field. The military has an extensive program of exploring the use of surgical robots in the field, and this advance could play a role."

In addition to its applications recovering the radioactive seeds used in treating prostate cancer, Smith said the system could also prove useful in removing foreign, metallic objects from the eye.

Advances in ultrasound technology have made these latest experiments possible, the researchers said, by generating detailed, 3-D moving images in real-time. The Duke team has a long track record of modifying traditional 2-D ultrasound – like that used to image babies in utero – into the more advanced 3-D scans. Since inventing the technique in 1991, the team has shown its utility by developing specialized catheters and endoscopes for real-time imaging of blood vessels in the heart and brain.

In the latest experiments, the robot successfully performed its main task: locating a tiny piece of metal in a water bath, then directing a needle on the end of the robotic arm to it. The researchers had previously used this approach to detect micro-calcifications in simulated breast tissue. In the latest experiments, Rogers added an electromagnet to the end of the transducer, or wand, the device that sends out and receives the ultrasonic waves.

"The movement caused by the electromagnet on the shrapnel was not visible to the human eye," Rogers said. "However, on the 3-D color Doppler system, the moving shrapnel stood out plainly as bright red."

The robot used in these experiments is a tabletop version capable of moving in three axes. For the next series of tests, the Duke researchers plan to use a robotic arm with six-axis capability.

(Photo: Pratt School of Engineering)

Duke University

Monday, June 29, 2009

BEAKED, BIRD-LIKE DINOSAUR TELLS STORY OF FINGER EVOLUTION

Scientists have discovered a unique beaked, plant-eating dinosaur in China. The finding, they say, demonstrates that theropod, or "beast-footed," dinosaurs were more ecologically diverse in the Jurassic period than previously thought, and offers important evidence about how the three-fingered hand of birds evolved from the hand of dinosaurs.

The discovery is reported in a paper published in the journal Nature.

"This work on dinosaurs provides a whole new perspective on the evolution of bird manual digits," said H. Richard Lane, program director in the National Science Foundation (NSF)'s Division of Earth Sciences, which funded the research.

"This new animal is fascinating, and when placed into an evolutionary context it offers intriguing evidence about how the hand of birds evolved," said scientist James Clark of George Washington University.

Clark, along with Xu Xing of the Chinese Academy of Science's Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, made the discovery. Clark's graduate student, Jonah Choiniere, also was involved in analyzing the new animal.

"This finding is truly exciting, as it changes what we thought we knew about the dinosaur hand," said Xu. "It also brings conciliation between the data from million-year-old bones and molecules of living birds."

Limusaurus inextricabilis ("mire lizard who could not escape") was found in 159-million-year-old deposits located in the Junggar Basin of Xinjiang, northwestern China. The dinosaur earned its name from the way its skeletons were preserved, stacked on top of each other in fossilized mire pits.

A close examination of the fossil shows that its upper and lower jaws were toothless, demonstrating that the dinosaur possessed a fully developed beak. Its lack of teeth, short arms without sharp claws and possession of gizzard stones suggest that it was a plant-eater, though it is related to carnivorous dinosaurs.

The newly discovered dinosaur's hand is unusual and provides surprising new insights into a long-standing controversy over which fingers are present in living birds, which are theropod dinosaur descendants. The hands of theropod dinosaurs suggest that the outer two fingers were lost during the course of evolution and the inner three remained.

Conversely, embryos of living birds suggest that birds have lost one finger from the outside and one from the inside of the hand. Unlike all other theropods, the hand of Limusaurus has a strongly reduced first finger and an enlarged second finger. Clark and Xu argue that Limusaurus' hand represents a transitional condition in which the inner finger was lost and the other fingers took on the shape of the fingers next to them.

The three fingers of most advanced theropods are the second, third and fourth fingers, the same ones indicated by bird embryos, contrary to the traditional interpretation that they were the first, second and third.

Limusaurus is the first ceratosaur known from East Asia and one of the most primitive members of the group. Ceratosaurs are a diverse group of theropods that often bear crests or horns on their heads, and many have unusual, knobby fingers lacking sharp claws.

The fossil beds in China that produced Limusaurus have previously yielded skeletons of a variety of dinosaurs and contemporary animals described by Clark and Xu.

These include the oldest tyrannosaur, Guanlong wucaii; the oldest horned dinosaur, Yinlong downsi; a new stegosaur, Jiangjunosaurus junggarensis; and the running crocodile relative, Junggarsuchus sloani.

(Photo: Portia Sloan)

IBEX SPACECRAFT DETECTS FAST NEUTRAL HYDROGEN COMING FROM THE MOON

NASA's Interstellar Boundary Explorer (IBEX) spacecraft has made the first observations of very fast hydrogen atoms coming from the moon, following decades of speculation and searching for their existence.

During spacecraft commissioning, the IBEX team turned on the IBEX-Hi instrument, built primarily by Southwest Research Institute (SwRI) and the Los Alamos National Laboratory, which measures atoms with speeds from about half a million to 2.5 million miles per hour. Its companion sensor, IBEX-Lo, built by Lockheed Martin, the University of New Hampshire, NASA Goddard Space Flight Center, and the University of Bern in Switzerland, measures atoms with speeds from about one hundred thousand to 1.5 million mph.

"Just after we got IBEX-Hi turned on, the moon happened to pass right through its field of view, and there they were," says Dr. David J. McComas, IBEX principal investigator and assistant vice president of the SwRI Space Science and Engineering Division. "The instrument lit up with a clear signal of the neutral atoms being detected as they backscattered from the moon."

The solar wind, the supersonic stream of charged particles that flows out from the sun, moves out into space in every direction at speeds of about a million mph. The Earth's strong magnetic field shields our planet from the solar wind. The moon, with its relatively weak magnetic field, has no such protection, causing the solar wind to slam onto the moon's sunward side.

From its vantage point in space, IBEX sees about half of the moon -- one quarter of it is dark and faces the nightside (away from the sun), while the other quarter faces the dayside (toward the sun). Solar wind particles impact only the dayside, where most of them are embedded in the lunar surface, while some scatter off in different directions. The scattered ones mostly become neutral atoms in this reflection process by picking up electrons from the lunar surface.

The IBEX team estimates that only about 10 percent of the solar wind ions reflect off the sunward side of the moon as neutral atoms, while the remaining 90 percent are embedded in the lunar surface. Characteristics of the lunar surface, such as dust, craters and rocks, play a role in determining the percentage of particles that become embedded and the percentage of neutral particles, as well as their direction of travel, that scatter.

McComas says the results also shed light on the "recycling" process undertaken by particles throughout the solar system and beyond. The solar wind and other charged particles impact dust and larger objects as they travel through space, where they backscatter and are reprocessed as neutral atoms. These atoms can travel long distances before they are stripped of their electrons and become ions and the complicated process begins again.

The combined scattering and neutralization processes now observed at the moon have implications for interactions with objects across the solar system, such as asteroids, Kuiper Belt objects and other moons. The plasma-surface interactions occurring within protostellar nebulae, the clouds of gas and dust from which stars and planets form -- as well as at exoplanets, planets around other stars -- also can be inferred.

IBEX's primary mission is to observe and map the complex interactions occurring at the edge of the solar system, where the million miles per hour solar wind runs into the interstellar material from the rest of the galaxy. The spacecraft carries the most sensitive neutral atom detectors ever flown in space, enabling researchers to not only measure particle energy, but also to make precise images of where they are coming from.

Around the end of the summer, the team will release the spacecraft's first all-sky map showing the energetic processes occurring at the edge of the solar system. The team will not comment until the image is complete, but McComas hints, "It doesn't look like any of the models."

(Photo: SwRI)

Saturday, June 27, 2009

CO2 HIGHER TODAY THAN LAST 2.1 MILLION YEARS

Researchers have reconstructed atmospheric carbon dioxide levels over the past 2.1 million years in the sharpest detail yet, shedding new light on its role in the earth's cycles of cooling and warming.

The study, in the June 19 issue of the journal Science, is the latest to rule out a drop in CO2 as the cause for earth's ice ages growing longer and more intense some 850,000 years ago. But it also confirms many researchers' suspicion that higher carbon dioxide levels coincided with warmer intervals during the study period.

The authors show that peak CO2 levels over the last 2.1 million years averaged only 280 parts per million, but today CO2 stands at 385 parts per million, or 38 percent higher. This finding means that researchers will need to look back further in time for an analog to modern day climate change.

In the study, Bärbel Hönisch, a geochemist at Lamont-Doherty Earth Observatory, and her colleagues reconstructed CO2 levels by analyzing the shells of single-celled plankton buried under the Atlantic Ocean, off the coast of Africa. By dating the shells and measuring their ratio of boron isotopes, they were able to estimate how much CO2 was in the air when the plankton were alive. This method allowed them to see further back than the high-precision records preserved in cores of polar ice, which go back only 800,000 years.

The planet has undergone cyclic ice ages for millions of years, but about 850,000 years ago, the cycles of ice grew longer and more intense—a shift that some scientists have attributed to falling CO2 levels. But the study found that CO2 was flat during this transition and unlikely to have triggered the change.

"Previous studies indicated that CO2 did not change much over the past 20 million years, but the resolution wasn't high enough to be definitive," said Hönisch. "This study tells us that CO2 was not the main trigger, though our data continues to suggest that greenhouse gases and global climate are intimately linked."

The timing of the ice ages is believed to be controlled mainly by the earth's orbit and tilt, which determines how much sunlight falls on each hemisphere. Two million years ago, the earth underwent an ice age every 41,000 years. But some time around 850,000 years ago, the cycle grew to 100,000 years, and ice sheets reached greater extents than they had in several million years—a change too great to be explained by orbital variation alone.

A global drawdown in CO2 is just one theory proposed for the transition. A second theory suggests that advancing glaciers in North America stripped away soil in Canada, causing thicker, longer lasting ice to build up on the remaining bedrock. A third theory challenges how the cycles are counted, and questions whether a transition happened at all.

The low carbon dioxide levels outlined by the study through the last 2.1 million years make modern day levels, caused by industrialization, seem even more anomalous, says Richard Alley, a glaciologist at Pennsylvania State University, who was not involved in the research.

"We know from looking at much older climate records that large and rapid increase in C02 in the past, (about 55 million years ago) caused large extinction in bottom-dwelling ocean creatures, and dissolved a lot of shells as the ocean became acidic," he said. "We're heading in that direction now."

The idea to approximate past carbon dioxide levels using boron, an element released by erupting volcanoes and used in household soap, was pioneered over the last decade by the paper's coauthor Gary Hemming, a researcher at Lamont-Doherty and Queens College. The study's other authors are Jerry McManus, also at Lamont; David Archer at the University of Chicago; and Mark Siddall, at the University of Bristol, UK.

(Photo: Steve Doo)

The Earth Institute

Friday, June 26, 2009

WHAT LIMITS THE SIZE OF BIRDS?

Why aren't birds larger? Fifteen-kilogram swans hold the current upper size record for flying birds, although the extinct Argentavis of the Miocene Epoch in Argentina is estimated to have weighed 70 kilograms, the size of an average human.

In a forthcoming article in PLoS Biology, Sievert Rohwer and his colleagues at the Burke Museum at the University of Washington provide evidence that maximum body size in birds is constrained by the amount of time it takes to replace the flight feathers during molt. As bird size increases, feather growth rate fails to keep up with feather length until, eventually, feathers wear out before they can be replaced. This fundamental relationship requires basic changes in the molt strategy as size increases, ultimately limiting the size of flying birds.

Feathers deteriorate with continued exposure to ultra-violet light and bacterial decomposition, and must be replaced periodically to maintain adequate aerodynamic support for flight. Small birds accomplish this in an annual or twice-annual molt, during which the 9 or 10 primary flight feathers are replaced sequentially, taking about three weeks for each feather. Large species of birds need different approaches to feather replacement. These involve several alternative strategies: prolonging the total molt to two or even three years; simultaneously replacing multiple feathers from different molt-origination points in the feather sequence; and, in species that do not require flight to feed or escape enemies (ducks and geese, for example), replacing all feathers simultaneously.

With increasing body size, the length of the primary feathers increases as the one-third power of mass, approximately doubling with each 10-fold increase in mass. However, the rate of feather growth increases only as the one-sixth power of mass, meaning that the time required to replace each feather increases by a factor of about 1.5 for each 10-fold increase in mass, until 56 days are required to replace a single flight feather in a 10-kg bird. The cause of this discrepancy is not known, but the authors speculate that it probably depends on the geometry of producing a two-dimensional feather structure from a one-dimensional growing zone in the feather shaft.
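
The arithmetic behind that factor of about 1.5 can be written out explicitly; the short Python sketch below simply restates the scaling exponents given in the paragraph above.

# Feather length scales as mass^(1/3); growth rate scales as mass^(1/6);
# replacement time T = length / rate therefore scales as mass^(1/6).
mass_ratio = 10.0
length_factor = mass_ratio ** (1.0 / 3.0)   # ~2.15: length roughly doubles per 10-fold mass
rate_factor = mass_ratio ** (1.0 / 6.0)     # ~1.47: growth rate lags behind
time_factor = length_factor / rate_factor   # ~1.47, the "factor of about 1.5" quoted above

print(f"feather replacement time grows ~{time_factor:.2f}x per 10-fold increase in mass")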

The avian feather is one of the most striking adaptations in the animal world, and yet its growth dynamics are poorly understood. It might be possible to achieve more rapid feather growth with a larger growth zone, but this could also weaken the structure of the growing feather, resulting in frequent breakage in large birds. Understanding the engineering complexities of the growing feather will require further study of the dynamics and structure of the growing zone. And what about Argentavis? The authors speculate that this giant bird most likely molted all its feathers simultaneously during a long fast, fueled by accumulated fat deposits in much the same way as emperor penguins do today.

PLoS

Thursday, June 25, 2009

INSIGHTS INTO MALE FERTILITY: NEW RESEARCH SHOWS POTENTIAL FOR A MALE CONTRACEPTIVE

Researchers have known for more than half a century that sperm is able to fertilize an egg only after it has resided for a period of time in the female reproductive tract. Without this specific interaction with the female body, the sperm is incapable of producing offspring. But until now there was very little understanding of what changes occur within the sperm that suddenly allows it to fertilize an egg.

In the Journal of Proteome Research, Rensselaer Polytechnic Institute Assistant Professor of Chemistry and Chemical Biology Mark Platt reveals the molecular-level changes that occur within sperm after it enters the female reproductive tract. His findings provide important clues into the still-mysterious process of capacitation, the process by which sperm acquire the ability to fertilize an egg, including why some otherwise healthy males might encounter fertility issues. His research may also offer insight required to develop an entirely new contraceptive, even a male version of the birth control pill.

“Much has been done to understand capacitation, but with the tools that we have within the lab we can now identify how specific sites on individual proteins are modified during this process,” said Platt. “With this knowledge we can develop a deeper understanding of the molecular mechanisms required to provide sperm with fertilizing competence.”

“Based upon some of our additional work, a few of these sites appear to be essential to carrying out the process of capacitation,” Platt said.

Phosphorylation, the attachment of a phosphate group to a protein, can be thought of as a light switch, which can be used to turn on or turn off a step in the chain of reactions, known as a signal transduction cascade, that leads to capacitation. Just like the initial flicking of a light switch quickly moves electricity through the wires to turn on a lamp across the room, phosphorylation provides the initial trigger that moves a cellular signal through the cell, turning "on" its ability to fertilize an egg. According to Platt, by interfering with just a single site of phosphorylation, scientists could entirely switch off the fertilization process. It is this ability that has the strongest potential for the development of a novel contraceptive.

“If phosphorylation on a particular amino acid is absolutely required for sperm capacitation, a drug could be developed which prevents phosphorylation from occurring at that specific site, thereby preventing the entire capacitation process,” Platt said. This turning off of the phosphorylation switch could then prevent fertilization entirely.

“These applications are currently hypothetical at this point, but the implications for contraceptives resulting from this research are promising,” he said. He noted that there could be several different options that could be developed using this and future research, including a drug for males that specifically targets the individual sites of protein phosphorylation in the developing sperm or a novel spermicide that prevents capacitation from occurring in sperm residing in the female reproductive tract.

In addition, the research provides important insight into male infertility. “Certain types of male infertility could be caused by a mutation of a single amino acid on a critical protein that prevents the sperm from ever undergoing the capacitation process,” Platt said. “If you could correct that specific mutation or design a drug which mimics phosphorylation on that particular amino acid, for example, you might be able to improve fertility.”

To locate the specific site of phosphorylation, Platt and his colleagues first induced capacitation in sperm. Proteins from the capacitated sperm and proteins from a non-capacitated population were then extracted and digested into smaller segments called peptides. Using tandem mass spectrometry (MS/MS), an analytical technique utilized in his laboratory, Platt was able to determine the amino acid sequence of each peptide and to determine where each was phosphorylated. By comparing the phosphorylation status of the samples, Platt and his colleagues were able to identify 55 specific sites whose level of modification changed as a result of the capacitation process.
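
As a rough illustration of that final comparison step only, and not of Platt's actual analysis pipeline, the sketch below flags sites whose phosphorylation level changes between the two samples. The site names and intensity values are invented.

# Hypothetical relative phosphorylation levels per site (protein_residue).
capacitated = {"proteinA_S45": 9.0, "proteinA_T112": 1.1, "proteinB_Y23": 6.5}
non_capacitated = {"proteinA_S45": 1.5, "proteinA_T112": 1.0, "proteinB_Y23": 6.4}

for site, cap_level in capacitated.items():
    basal = non_capacitated.get(site, 0.1)  # small floor avoids division by zero
    fold_change = cap_level / basal
    if fold_change > 2.0 or fold_change < 0.5:
        print(f"{site}: ~{fold_change:.1f}-fold change during capacitation")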

(Photo: Rensselaer/Mark Platt)

Rensselaer Polytechnic Institute

GENETIC DIFFERENCES IN HUMANS SHAPED BY GEOGRAPHY, HISTORY

Local pressures such as climate or diet that affect natural selection are only partly responsible for differences in the genetic makeup of human populations, a new study finds. Migrations, the expansions and contractions of populations and the vagaries of genetic chance also play a role.

“It’s easy to imagine that natural selection could have led to human populations strongly diverging from each other as they adapted to the different environments they encountered as they spread around the world,” said Graham Coop, an assistant professor of evolution and ecology at the University of California, Davis, and lead author of the paper describing the study. “But what excites me about this work is that we’re showing that a population’s history and migrations between groups have had a strong role in limiting and shaping how populations adapted to their environments.”

The study, led by Jonathan Pritchard, a Howard Hughes Medical Institute investigator at the University of Chicago, was published June 5 in the online journal PLoS Genetics.

Other collaborators in the work are Joseph Pickrell at the University of Chicago and Marcus Feldman, Richard Myers and Luca Cavalli-Sforza, all at Stanford University.

Natural selection occurs when a particular genetic difference — known as a variant — gives an individual a greater opportunity to have children and pass on his or her genes to future generations. A good example is the set of variants responsible for light skin color. As modern humans moved out of Africa and spread into northern latitudes, dark skin became a disadvantage, possibly because it blocked too much of the sunlight necessary for synthesis of vitamin D for healthy bones. Genetic variants for light skin, then, conferred a small survival edge, and today those variants are common in people of European and northern Asian ancestry.

Yet genetic variants do not always help populations adapt to their environments, Pritchard said. For example, if a small population undergoes a rapid expansion — because it has entered new territory, perhaps, or developed a technology that supports larger numbers — some of the genetic variants carried by that population can increase rapidly in number, even if they do not provide a reproductive advantage. The pool of genes within a population also tends to fluctuate due to chance events and random differences in the number of children people have and the particular genes they pass on to their offspring.

Thus, one of the fundamental questions facing human geneticists is: Is it possible to determine which genetic variants have increased because of selection and which have increased because of population changes or genetic chance?

Pritchard, Coop and their collaborators decided to tackle this question when new genetic data became available last year from the Human Genome Diversity Project at Stanford University. These data provided a much more in-depth sampling of worldwide genetic differences than had previously been available, allowing the research team to carry out a new and more rigorous test for selection.

To determine whether a variant’s frequency resulted from natural selection, the team compared the distribution of variants in two parts of the genome: in regions that affect the structure and regulation of proteins, and in neutral regions that have no effect on proteins. Because neutral regions are less likely to be affected by natural selection, they should reflect the demographic history of populations, the team reasoned. In contrast, regions that have been influenced by selection — the protein-regulating regions — should show different patterns of distribution.
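
The toy Python sketch below illustrates the general logic of such a comparison, not the authors' actual statistic: the allele frequencies are invented, and a simple two-population fixation index (F_ST) stands in for the measure of differentiation.

def fst(p1, p2):
    # Simple two-population fixation index for one variant.
    p_bar = (p1 + p2) / 2.0
    if p_bar in (0.0, 1.0):
        return 0.0
    variance = ((p1 - p_bar) ** 2 + (p2 - p_bar) ** 2) / 2.0
    return variance / (p_bar * (1.0 - p_bar))

# Hypothetical allele frequencies in two populations.
functional_snps = [(0.10, 0.80), (0.50, 0.55), (0.05, 0.60)]  # in protein-affecting regions
neutral_snps = [(0.30, 0.35), (0.50, 0.48), (0.20, 0.27)]     # far from genes

functional = [fst(p1, p2) for p1, p2 in functional_snps]
neutral = [fst(p1, p2) for p1, p2 in neutral_snps]

# Under selection, functional variants should show more differentiation than the
# neutral background, which reflects demographic history alone.
print(f"mean differentiation, functional: {sum(functional) / len(functional):.3f}")
print(f"mean differentiation, neutral:    {sum(neutral) / len(neutral):.3f}")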

The analysis immediately identified known examples of selection, including those involved in determining skin pigmentation, resistance to pathogens, and the ability to digest milk as an adult, a trait that arose in Europe, the Middle East and Africa following the domestication of dairy animals.

Interestingly, it also revealed several genes of unknown function that appear to have been under strong selective pressures. “We’re keen to learn what these genes are and how they work,” said Coop.

Yet the team also found that many genetic signals that other researchers have attributed to selection may actually have been created by historical and demographic factors. When the group compared closely related populations — those that recently came from the same ancestral population or those that have exchanged many migrants throughout history — it found few large genetic differences. If these populations’ environments were exerting strong selective pressure, such differences should have been more apparent.

Even with genetic variants where evidence of selection is strong, such as those for skin color, Pritchard noted, the movements of populations have powerfully influenced current patterns of variation.

“A handful of selective signals are clear,” he said, “but it’s hard to be confident about individual cases beyond the top ten or so that we understand well right now.”

(Photo: Karin Higgins/UC Davis)

Wednesday, June 24, 2009

TYPHOONS TRIGGER SLOW EARTHQUAKES

Scientists have made the surprising finding that typhoons trigger slow earthquakes, at least in eastern Taiwan. Slow earthquakes are non-violent fault slippage events that take hours or days, instead of a few brutal seconds to minutes, to release their potent energy. The researchers discuss their data in a study published in the June 11 issue of Nature.

“From 2002 to 2007 we monitored deformation in eastern Taiwan using three highly sensitive borehole strainmeters installed 650 to 870 feet (200-270 meters) deep. These devices detect otherwise imperceptible movements and distortions of rock,” explained coauthor Selwyn Sacks of Carnegie’s Department of Terrestrial Magnetism. “We also measured atmospheric pressure changes, because they usually produce proportional changes in strain, which we can then remove.”

Taiwan has frequent typhoons in the second half of each year but is typhoon-free during the first four months. During the five-year study period, the researchers, including lead author Chiching Liu (Academia Sinica, Taiwan), identified 20 slow earthquakes that each lasted from hours to more than a day. The scientists did not detect any slow events during the typhoon-free season. Eleven of the 20 slow earthquakes coincided with typhoons. Those 11 were also stronger and characterized by more complex waveforms than the other slow events.

“These data are unequivocal in identifying typhoons as triggers of these slow quakes. The probability that they coincide by chance is vanishingly small,” remarked coauthor Alan Linde, also of Carnegie.

How does the low pressure trigger the slow quakes? The typhoon reduces atmospheric pressure on land in this region, but does not affect conditions at the ocean bottom, because water moves into the area and equalizes pressure. The reduction in pressure above one side of an obliquely dipping fault tends to unclamp it. “This fault experiences more or less constant strain and stress buildup,” said Linde. “If it’s close to failure, the small perturbation due to the low pressure of the typhoon can push it over the failure limit; if there is no typhoon, stress will continue to accumulate until it fails without the need for a trigger.”
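
To see why only a fault already near failure responds, it helps to compare the size of the perturbation with typical earthquake stresses. The numbers in this sketch are order-of-magnitude assumptions of ours, not values from the study.

pressure_drop_pa = 5_000.0             # assumed ~50 hPa atmospheric drop under a typhoon
typical_stress_drop_pa = 3_000_000.0   # assumed ~3 MPa, a typical earthquake stress drop

ratio = pressure_drop_pa / typical_stress_drop_pa
print(f"typhoon unclamping is ~{100 * ratio:.2f}% of a typical earthquake stress drop")
# A perturbation this small can only tip a fault that is already very close to failure.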

“It’s surprising that this area of the globe has had no great earthquakes and relatively few large earthquakes,” Linde remarked. “By comparison, the Nankai Trough in southwestern Japan has a plate convergence rate of about 4 centimeters per year, and this causes a magnitude 8 earthquake every 100 to 150 years. But the activity in southern Taiwan comes from the convergence of the same two plates, and there the Philippine Sea Plate pushes against the Eurasian Plate at a rate twice that for Nankai.”

The researchers speculate that the reason devastating earthquakes are rare in eastern Taiwan is because the slow quakes act as valves, releasing the stress frequently along a small section of the fault, eliminating the situation where a long segment sustains continuous high stresses until it ruptures in a single great earthquake. The group is now expanding their instrumentation and monitoring for this research.

(Photo: Alan Linde)

Carnegie Institution of Washington

Tuesday, June 23, 2009

TRANSFORMING ROOFS FROM WASTED SPACE TO ENERGY SOURCE

A transparent thin film barrier used to protect flat panel TVs from moisture could become the basis for flexible solar panels that would be installed on roofs like shingles.

The flexible rooftop solar panels - called building-integrated photovoltaics, or BIPVs - could replace today's boxy solar panels that are made with rigid glass or silicon and mounted on thick metal frames. The flexible solar shingles would be less expensive to install than current panels and made to last 25 years.

"There's a lot of wasted space on rooftops that could actually be used to generate power," said Mark Gross, a senior scientist at the Department of Energy's Pacific Northwest National Laboratory. "Flexible solar panels could easily become integrated into the architecture of commercial buildings and homes. Solar panels have had limited success because they've been difficult and expensive to install."

Researchers at PNNL will create these flexible panels by adapting a film encapsulation process currently used to coat flat panel displays that use organic light-emitting diodes, or OLEDs. The work is made possible by a Cooperative Research and Development Agreement recently penned between Vitex Systems and Battelle, which operates PNNL for the federal government.

PNNL researchers developed the thin film technology in the 1990s. At the time, the lab's team investigated 15 possible applications, including solar power. Vitex licensed the technology from Battelle in 2000 and focused its initial efforts on developing the ultra-barrier films for flat-panel displays. Now PNNL and Vitex are taking a hard second look at solar power.

The encapsulation process and the ultra-barrier film - called Barix™ Encapsulation and Barix™ Barrier Film, respectively - are already proven and effective moisture barriers. But researchers need to find a way to apply the technology to solar panels that are made with copper indium gallium selenide, called CIGS, or cadmium telluride, called CdTe.

Under the agreement, researchers will create low-cost flexible barrier films and evaluate substrate materials for solar panels, which are also called photovoltaics, or PVs. Both the film and substrate must be able to survive harsh ultraviolet rays and natural elements like rain and hail for 25 years.

The agreement also calls for researchers to develop a manufacturing process for the flexible panels that can be readily adapted to large-scale production. If successful, this process will reduce solar panel manufacturing costs to less than $1 per watt of power, which would be competitive with the 10 cents per kilowatt-hour that a utility would charge.
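
A rough way to see why a $1-per-watt panel could compete with 10-cent-per-kilowatt-hour utility power is to spread the module cost over the energy a one-watt panel produces in its lifetime. The capacity factor below is our own assumption, and installation, inverter and financing costs are ignored.

capital_per_watt = 1.00     # dollars per watt of capacity (target quoted above)
lifetime_years = 25         # design lifetime quoted above
capacity_factor = 0.18      # assumed average output as a fraction of rated power
hours_per_year = 8760

kwh_per_watt = lifetime_years * hours_per_year * capacity_factor / 1000.0  # ~39 kWh per watt
cents_per_kwh = 100.0 * capital_per_watt / kwh_per_watt                    # ~2.5 cents

print(f"module capital cost alone: ~{cents_per_kwh:.1f} cents per kWh")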

"Vitex is proud to continue its long, successful relationship with PNNL," said Martin Rosenblum, Vitex's vice president of operations and engineering. "Vitex is excited to further its Barix™ technology's proven barrier performance for photovoltaics toward mass manufacturing. Together, we look forward to creating a product that will help alleviate America's dependence on foreign oil and increase America's access to an abundant renewable energy source - the sun."

Battelle, which is the majority shareholder of Vitex, is optimistic that this research agreement will contribute to a new way of generating solar power. Battelle recently increased its investment in Vitex for new state-of-the-art thin film encapsulation equipment and expanded its intellectual property portfolio.

"We're confident that Vitex will be uniquely positioned to help meet the demand for flexible solar panels, OLED displays and lighting that should rise along with the economy," said Martin Inglis, Battelle's chief financial officer.

(Photo: Vitex Systems, Inc.)

Pacific Northwest National Laboratory

RED GIANT STAR BETELGEUSE MYSTERIOUSLY SHRINKING

0 comentarios

The red supergiant star Betelgeuse, the bright reddish star in the constellation Orion, has steadily shrunk over the past 15 years, according to University of California, Berkeley, researchers.

Long-term monitoring by UC Berkeley's Infrared Spatial Interferometer (ISI) on the top of Mt. Wilson in Southern California shows that Betelgeuse (bet' el juz), which is so big that in our solar system it would reach to the orbit of Jupiter, has shrunk in diameter by more than 15 percent since 1993.

Because Betelgeuse's radius is about five astronomical units, or five times the radius of Earth's orbit, the star's radius has shrunk by a distance roughly equal to the radius of Venus's orbit.
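
The arithmetic behind that comparison is simple; the short check below uses the figures quoted in the article plus a mean Venus orbital radius of about 0.72 astronomical units.

radius_au = 5.0          # Betelgeuse's radius, in astronomical units
shrinkage = 0.15         # the reported contraction of more than 15 percent
venus_orbit_au = 0.72    # mean radius of Venus's orbit, for comparison
print(radius_au * shrinkage, "AU of shrinkage vs.", venus_orbit_au, "AU for Venus's orbit")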

"To see this change is very striking," said Charles Townes, a UC Berkeley professor emeritus of physics who won the 1964 Nobel Prize in Physics for inventing the laser and the maser, a microwave laser. "We will be watching it carefully over the next few years to see if it will keep contracting or will go back up in size."

Despite Betelgeuse's diminished size, its visible brightness, or magnitude, which is monitored regularly by members of the American Association of Variable Star Observers, has shown no significant dimming over the past 15 years, noted Edward Wishnow, a research physicist at UC Berkeley's Space Sciences Laboratory.

The ISI has been focusing on Betelgeuse for more than 15 years in an attempt to learn more about these giant massive stars and to discern features on the star's surface, Wishnow said. He speculated that the measurements may be affected by giant convection cells on the star's surface that are like convection granules on the sun, but so large that they bulge out of the surface. Townes and former graduate student Ken Tatebe observed a bright spot on the surface of Betelgeuse in recent years, although at the moment, the star appears spherically symmetrical.

"But we do not know why the star is shrinking," Wishnow said. "Considering all that we know about galaxies and the distant universe, there are still lots of things we don't know about stars, including what happens as red giants near the ends of their lives."

Betelgeuse was the first star ever to have its size measured, and even today it is one of only a handful of stars that appears through the Hubble Space Telescope as a disk rather than a point of light. In 1921, Francis G. Pease and Albert Michelson used optical interferometry to estimate that its diameter was equivalent to the orbit of Mars. Last year, new measurements of the distance to Betelgeuse raised it from 430 light years to 640, which increased the star's estimated diameter from about 3.7 to about 5.5 AU.
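
Because interferometers measure an angular size, the inferred linear diameter simply scales with the adopted distance, which is all the revision amounts to. A quick check using the quoted figures:

old_distance_ly, new_distance_ly = 430.0, 640.0
old_diameter_au = 3.7
print(round(old_diameter_au * new_distance_ly / old_distance_ly, 1))   # about 5.5 AU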

"Since the 1921 measurement, its size has been re-measured by many different interferometer systems over a range of wavelengths where the diameter measured varies by about 30 percent," Wishnow said. "At a given wavelength, however, the star has not varied in size much beyond the measurement uncertainties."

In any case, measurements made at different wavelengths cannot be compared directly, because the star's apparent size depends on the wavelength of light used to measure it, Townes said. This is because the tenuous gas in the outer regions of the star emits light as well as absorbs it, which makes it difficult to determine the edge of the star.

The ISI that Townes and his colleagues first built in the early 1990s sidesteps these confounding emission and absorption lines by observing in the mid-infrared with a narrow bandwidth that can be tuned between spectral lines. The ISI consists of three 5.4-foot (1.65-meter) diameter mirrors separated by distances that vary from 12 to 230 feet (4-70 meters), said Townes. Using a laser as a common frequency standard, the ISI interferometer combines signals from telescope pairs in order to determine path length differences between light that originates at the star's center and light that originates at the star's edge. The technique of stellar interferometry is highlighted in the June 2009 issue of Physics Today magazine.
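
As a rough illustration of what such baselines provide at 11 microns, the standard diffraction estimate (wavelength divided by baseline; this is a textbook figure of merit, not the ISI team's actual analysis) gives:

import math

wavelength_m = 11e-6                 # mid-infrared observing wavelength
for baseline_m in (4.0, 70.0):       # shortest and longest ISI baselines
    resolution_mas = math.degrees(wavelength_m / baseline_m) * 3600 * 1000
    print(f"{baseline_m:.0f} m baseline: about {resolution_mas:.0f} milliarcseconds")
# roughly 570 mas at 4 m and 32 mas at 70 m; Betelgeuse's apparent disk spans
# a few hundredths of an arcsecond, so only the longer baselines resolve it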

"We observe around 11 microns, the mid-infrared, where this long wavelength penetrates the dust and the narrow bandwidth avoids any spectral lines, and so we see the star relatively undistorted," said Townes. "We have also had the good fortune to have an instrument that has operated in a very similar manner for some 15 years, providing a long and consistent series of measurements that no one else has. The first measurements showed a size quite close to Michelson's result, but over 15 years, it has decreased in size about 15 percent, changing smoothly, but faster as the years progressed."

Townes, who turns 94 in July, plans to continue monitoring Betelgeuse in hopes of finding a pattern in the changing diameter, and to improve the ISI's capabilities by adding a spectrometer to the interferometer.

"Whenever you look at things with more precision, you are going to find some surprises and uncover very fundamental and important new things," he said.

(Photo: David Hale 2006)

University of California, Berkeley

DRAWING INSPIRATION FROM NATURE TO BUILD A BETTER RADIO

0 comentarios
MIT engineers have built a fast, ultra-broadband, low-power radio chip, modeled on the human inner ear, that could enable wireless devices capable of receiving cell phone, Internet, radio and television signals.

Rahul Sarpeshkar, associate professor of electrical engineering and computer science, and his graduate student, Soumyajit Mandal, designed the chip to mimic the inner ear, or cochlea. The chip is faster than any human-designed radio-frequency spectrum analyzer and also operates at much lower power.

"The cochlea quickly gets the big picture of what's going on in the sound spectrum," said Sarpeshkar. "The more I started to look at the ear, the more I realized it's like a super radio with 3,500 parallel channels."

Sarpeshkar and his students describe their new chip, which they have dubbed the "radio frequency (RF) cochlea," in a paper to be published in the June issue of the IEEE Journal of Solid-State Circuits. They have also filed for a patent to incorporate the RF cochlea in a universal or software radio architecture that is designed to efficiently process a broad spectrum of signals including cellular phone, wireless Internet, FM, and other signals.

The RF cochlea mimics the structure and function of the biological cochlea, which uses fluid mechanics, piezoelectrics and neural signal processing to convert sound waves into electrical signals that are sent to the brain.

As sound waves enter the cochlea, they create mechanical waves in the cochlear membrane and the fluid of the inner ear, activating hair cells (cells that cause electrical signals to be sent to the brain). The cochlea can perceive a 100-fold range of frequencies -- in humans, from 100 to 10,000 Hz. Sarpeshkar used the same design principles in the RF cochlea to create a device that can perceive signals at million-fold higher frequencies, which includes radio signals for most commercial wireless applications.

The device demonstrates what can happen when researchers take inspiration from fields outside their own, says Sarpeshkar.

"Somebody who works in radio would never think of this, and somebody who works in hearing would never think of it, but when you put the two together, each one provides insight into the other," he says. For example, in addition to its use for radio applications, the work provides an analysis of why cochlear spectrum analysis is faster than any known spectrum-analysis algorithm. Thus, it sheds light on the mechanism of hearing as well.

The RF cochlea, embedded on a silicon chip measuring 1.5 mm by 3 mm, works as an analog spectrum analyzer, detecting the composition of any electromagnetic waves within its perception range. Electromagnetic waves travel through electronic inductors and capacitors (analogous to the biological cochlea's fluid and membrane). Electronic transistors play the role of the cochlea's hair cells.
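
A minimal sketch of that analogy, using made-up component values rather than anything from the MIT chip: a chain of inductor-capacitor stages whose values taper exponentially resonates at a cascade of different radio frequencies, much as successive positions along the cochlear membrane respond to different pitches.

import math

# Illustrative LC "cochlea": each stage resonates at a different frequency.
n_stages = 8
L0, C0 = 1e-9, 1e-12     # 1 nH and 1 pF at the high-frequency end (assumed values)
taper = 1.6              # each stage's inductance and capacitance grow by this factor

for n in range(n_stages):
    L, C = L0 * taper**n, C0 * taper**n
    f_ghz = 1.0 / (2 * math.pi * math.sqrt(L * C)) / 1e9
    print(f"stage {n}: resonates near {f_ghz:.2f} GHz")
# resonances step down from roughly 5 GHz to below 200 MHz in this toy example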

The analog RF cochlea chip is faster than any other RF spectrum analyzer and consumes about 100 times less power than what would be required for direct digitization of the entire bandwidth. That makes it desirable as a component of a universal or "cognitive" radio, which could receive a broad range of frequencies and select which ones to attend to.

This is not the first time Sarpeshkar has drawn on biology for inspiration in designing electronic devices. Trained as an engineer but also a student of biology, he has found many similar patterns in the natural and man-made worlds (http://www.rle.mit.edu/avbs). For example, Sarpeshkar's group, in MIT's Research Laboratory of Electronics, has also developed an analog speech-synthesis chip inspired by the human vocal tract and a novel analysis-by-synthesis technique based on the vocal tract. The chip's potential for robust speech recognition in noise and its potential for voice identification have several applications in portable devices and security applications.

The researchers have built circuits that can analyze heart rhythms for wireless heart monitoring, and are also working on projects inspired by signal processing in cells. In the past, his group has worked on hybrid analog-digital signal processors inspired by neurons in the brain.

Sarpeshkar says that engineers can learn a great deal from studying biological systems that have evolved over hundreds of millions of years to perform sensory and motor tasks very efficiently in noisy environments while using very little power.

"Humans have a long way to go before their architectures will successfully compete with those in nature, especially in situations where ultra-energy-efficient or ultra-low-power operation are paramount," he said. Nevertheless, "We can mine the intellectual resources of nature to create devices useful to humans, just as we have mined her physical resources in the past. (Photo: Donna Coveney)

Monday, June 22, 2009

IF THE SHOE FLITS, DUCK: A REAL-LIFE EXAMPLE OF HUMANS' DUAL VISION SYSTEM

0 comentarios

It's rare when real-world events perfectly mirror experiments that scientists are conducting. That's why neuroscientists at the University of Washington were delighted at the reactions of former President George W. Bush and Iraq's Prime Minister Nouri al-Maliki when an Iraqi reporter flung his shoes toward the two men during a Baghdad news conference.

When Bush ducked and Maliki didn't flinch as the first shoe sailed toward them, it was a real-world example supporting the theory that there are two independent pathways in the human visual system.

"The original idea proposed is that one system guides your actions and the other guides your perception. The interesting part is the 'action' system allows the brain to 'see' things your eyes do not perceive," said Jeffrey Lin, a UW psychology doctoral student and lead author of a paper appearing June 11 in the journal Current Biology. Co-authors are Scott Murray and Geoffrey Boynton, UW psychology assistant and associate professors, respectively.

"When we throw two balls at you with very similar trajectories, they may look the same to your perceptual system, but your brain can automatically calculate which one is more threatening and trigger a dodging motion before you've even realized what has happened," said Lin.

"If you look at the shoe-throwing video you will see that the prime minister doesn't flinch at all. His brain has already categorized the shoe as non-threatening which does not require evasive action. But Bush's brain has categorized the shoe as threatening and triggers an evasive dodge, all within a fraction of a second."

To explore how this dual visual system works, the researchers set up several experiments that were similar to what baseball players experience when they step into the batter's box and get ready for the pitcher to throw the ball. In a split second their action system determines if the ball is going to hit their body and whether to initiate a defensive bail out of the batter's box.

Instead of baseballs, college students participating in three experiments looked at a computer monitor and were instructed to quickly locate a target oval among a field of circular discs, determine its path and press a key when they had found the oval. The key manipulation in the study was that some of the trials began with a looming stimulus at the position of the target oval. When this looming motion was on a collision path with a student's head, the participant responded faster to the target than when the looming motion just missed the head. The experiment showed that a stimulus on a collision path with a student captured attention but one on a near-miss path did not. Critically, subjects could not differentiate between the subtly different collision and near-miss looming stimuli in a separate experiment.

The authors strongly believe the research supports the idea that the human visual system is composed of two independent systems.

"A major focus of neuroscience is understanding how we deal with sensory information," said Boynton. "There is no way the brain can possibly process and analyze everything we are exposed to. We have to select what is important. In the real world you are on your own and what you pay attention to is part of survival. This experiment shows that threatening stimuli grab your attention, even those we can't consciously identify. That this is more accurate than our conscious perception is pretty amazing."

(Photo: U. Washington)

University of Washington

MANATEES CAN PROBABLY HEAR WHICH DIRECTIONS BOATS APPROACH FROM

0 comentarios
The world is a perilous place for the endangered manatee. While the mammals are at risk from natural threats, human activity also poses a great danger to manatee numbers. Debborah Colbert, from the Association of Zoos and Aquariums, explains that many manatees die and are seriously injured in collisions with boats every year. However, little is known about how manatees perceive their environment.

Whether they can localize sounds, and specifically whether they can tell which direction a boat is approaching from, are crucial factors in the development of manatee protection programmes. Colbert and her colleagues from the Universities of Florida and South Florida and the New College of Florida decided to test whether the mammals can pinpoint sound sources. They publish their results on 12 June 2009 in the Journal of Experimental Biology at http://jeb.biologists.org/.

Working at the Mote Marine Laboratory and Aquarium in Florida, Colbert was able to work with two male manatees, Hugh and Buffett, when she initiated a research programme to find out more about these enigmatic creatures. Both young males had been trained previously to participate in a series of sensory studies, so Colbert, Joseph Gaspard 3rd, Gordon Bauer and Roger Reep trained the animals to swim to a specific stationing platform in their enclosure where they could listen to sounds played from one of four speakers arranged around their heads.

Knowing that the manatees' hearing was most sensitive to sounds ranging from 10 to 20kHz, while the animals' calls range from 2.5 to 6kHz, Colbert and David Mann designed three sounds spanning 0.2 to 20kHz, 6 to 20kHz and 0.2 to 2kHz to play to the animals. The team also selected two single-frequency (tonal) sounds at 4kHz and 16kHz to test how the manatees responded to less complex sounds. Having trained the manatees to swim to the speaker that they thought the sound came from, the team then played the broadband sounds, in durations of 0.2 s, 0.5 s and 1 to 3 s, from each speaker at random while monitoring the animals' responses.

One of the manatees, Buffett, successfully identified the source of the broadband sounds with almost 90% accuracy, while Hugh did slightly less well. The team was also surprised that the manatees were able to locate the sources of both the 4kHz and 16kHz tones, although the team only tested the animals with the longer of the two tonal notes, as the animals had shown signs of frustration when they heard these sounds.

So how are the animals able to localize sounds? Colbert explains that many terrestrial animals use the time difference between a sound arriving at their two ears to find the source. However, this time difference is probably extremely short in aquatic animals, as sounds travel 5 times faster in water than in air. Animals also use the intensity difference as the sound arrives at each ear, which is more pronounced in high-pitched noises, to pinpoint the source. Colbert suspects that the manatees use combinations of these and other cues to help them localize sounds, as they were able to locate the sources of high- and low-pitched sounds equally well.
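
To see why the timing cue shrinks underwater, consider the largest possible arrival-time difference for a sound coming from directly to one side. The ear spacing used below is purely illustrative, not a measured manatee value.

head_width_m = 0.3                        # assumed ear separation, for illustration only
speed_air, speed_water = 343.0, 1500.0    # approximate sound speeds in m/s

for medium, c in (("air", speed_air), ("water", speed_water)):
    print(f"{medium}: at most about {head_width_m / c * 1e6:.0f} microseconds")
# roughly 870 microseconds in air versus about 200 in water for the same spacing,
# because sound travels about five times faster in water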

Crucially, the animals can probably hear approaching speed boats and tell which direction they are coming from, which is an essential piece of information for conservation organisations as they battle to save this gentle giant.

The Company of Biologists

STRESS MAKES YOUR HAIR GO GRAY

0 comentarios
Those pesky graying hairs that tend to crop up with age really are signs of stress, reveals a new report in the June 12 issue of Cell, a Cell Press publication.

Researchers have discovered that the kind of "genotoxic stress" that damages DNA depletes the melanocyte stem cells (MSCs) within hair follicles that are responsible for producing melanocytes, the cells that make hair pigment. Rather than dying off when the going gets tough, those precious stem cells differentiate, forming fully mature melanocytes themselves. Anything that can limit the stress might stop the graying from happening, the researchers said.

"The DNA in cells is under constant attack by exogenously- and endogenously-arising DNA-damaging agents such as mutagenic chemicals, ultraviolet light and ionizing radiation," said Emi Nishimura of Tokyo Medical and Dental University. "It is estimated that a single cell in mammals can encounter approximately 100,000 DNA damaging events per day."

Consequently, she explained, cells have elaborate ways to repair damaged DNA and prevent the lesions from being passed on to their daughter cells.

"Once stem cells are damaged irreversibly, the damaged stem cells need to be eliminated to maintain the quality of the stem cell pools," Nishimura continued. "We found that excessive genotoxic stress triggers differentiation of melanocyte stem cells." She says that differentiation might be a more sophisticated way to get rid of those cells than stimulating their death.

Nishimura's group earlier traced the loss of hair color to the gradual dying off of the stem cells that maintain a continuous supply of new melanocytes, which give hair its youthful color. Those specialized stem cells are not only lost; they also turn into fully committed pigment cells, and in the wrong place.

Now, the researchers show in mice that irreparable DNA damage, such as that caused by ionizing radiation, is responsible. They further found that the "caretaker gene" known as ATM (for ataxia telangiectasia mutated) serves as a so-called stemness checkpoint, protecting against MSC differentiation. That is why people with ataxia-telangiectasia, an aging syndrome caused by a mutation in the ATM gene, go gray prematurely.

The findings lend support to the notion that genome instability is a significant factor underlying aging in general, the researchers said. They also support the "stem cell aging hypothesis," which proposes that DNA damage to long-lived stem cells can be a major cause of the symptoms that come with age. In addition to the aging-associated depletion seen in melanocyte stem cells, qualitative and quantitative changes have been reported in other stem cell pools, including those of the blood, cardiac muscle and skeletal muscle, the researchers said. Stresses on stem cell pools and failures of genome maintenance have also been implicated in the decline of tissue renewal capacity and the accelerated appearance of aging-related characteristics.

"In this study, we discovered that hair graying, the most obvious aging phenotype, can be caused by the genomic damage response through stem cell differentiation, which suggests that physiological hair graying can be triggered by the accumulation of unavoidable DNA damage and DNA-damage response associated with aging through MSC differentiation," they wrote.

Cell

Friday, June 19, 2009

EVOLUTION CAN OCCUR IN LESS THAN 10 YEARS

0 comentarios


How fast can evolution take place? In just a few years, according to a new study on guppies led by UC Riverside's Swanne Gordon, a graduate student in biology.

Gordon and her colleagues studied guppies, small freshwater fish that biologists have long studied, taken from the Yarra River in Trinidad. They introduced the guppies into the nearby Damier River, in a section above a barrier waterfall that excluded all predators. The guppies and their descendants also colonized the lower portion of the stream, below the barrier waterfall, which contained natural predators.

Eight years later (less than 30 guppy generations), the researchers found that the guppies in the low-predation environment above the barrier waterfall had adapted to their new environment by producing larger and fewer offspring with each reproductive cycle. No such adaptation was seen in the guppies that colonized the high-predation environment below the barrier waterfall.

"High-predation females invest more resources into current reproduction because a high rate of mortality, driven by predators, means these females may not get another chance to reproduce," explained Gordon, who works in the lab of David Reznick, a professor of biology. "Low-predation females, on the other hand, produce larger embryos because the larger babies are more competitive in the resource-limited environments typical of low-predation sites. Moreover, low-predation females produce fewer embryos not only because they have larger embryos but also because they invest fewer resources in current reproduction."

Study results appear in the July issue of The American Naturalist.

Natural guppy populations can be divided into two basic types. High-predation populations are usually found in the downstream reaches of rivers, where they coexist with predatory fishes that have strong effects on guppy demographics. Low-predation populations are typically found in upstream tributaries above barrier waterfalls, where strong predatory fishes are absent. Researchers have found that this broad contrast in predation regime has driven the evolution of many adaptive differences between the two guppy types in color, morphology, behavior, and life history.

Gordon's research team performed a second experiment to measure how well adapted the new populations of guppies were. To this end, they introduced two new sets of guppies, one from a portion of the Yarra River that contained predators and one from a predator-free tributary of the Yarra River, into the high- and low-predation environments in the Damier River.

They found that the resident, locally adapted guppies were significantly more likely to survive a four-week time period than the guppies from the two sites on the Yarra River. This was especially true for juveniles. The adapted population of juveniles showed a 54-59 percent increase in survival rate compared to their counterparts from the newly introduced group.

"This shows that adaptive change can improve survival rates after fewer than ten years in a new environment," Gordon said. "It shows, too, that evolution might sometimes influence population dynamics in the face of environmental change."

(Photo: Paul Bentzen)

UC Riverside

MOBILE DNA ELEMENTS IN WOOLLY MAMMOTH GENOME GIVE NEW CLUES TO MAMMALIAN EVOLUTION

0 comentarios
The woolly mammoth died out several thousand years ago, but the genetic material it left behind is yielding new clues about the evolution of mammals. In a study published online in Genome Research, scientists have analyzed the mammoth genome looking for mobile DNA elements, revealing new insights into how some of these elements arose in mammals and shaped the genome of an animal headed for extinction.

Interspersed repeats, also known as transposable elements, are DNA sequences that can "jump" around the genome, causing mutations in the host and contributing to expansion of the genome. Interspersed repeats account for a significant fraction of mammalian genomes, and some of these elements are still actively mobile. In humans, interspersed repeats account for approximately 44% of the entire genome sequence. Even more extreme is the opossum genome, where more than half of the sequence is composed of repetitive elements.

Scientists recently sequenced the woolly mammoth genome, using DNA samples obtained from preserved specimens. Dr. Stephan Schuster and his research group at Penn State University, who were involved in the sequencing and analysis of the mammoth genome, are now looking deeper into the sequence for interspersed repeats. The mammoth genome is an excellent candidate for comparative analysis of interspersed repeats in mammals, as the animal had a remarkably large genome of approximately 4.7 billion bases, about 1.5 times the size of the human genome. Using the mammoth genome sequence and sequences of other mammals for comparison, Schuster's group found that the mammoth genome contained the largest proportion of interspersed repeats of any mammal studied to date. In fact, a single class of elements, known as the BovB long interspersed repeat, alone accounted for nearly 12% of the mammoth genome.
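
The scale implied by that percentage is worth spelling out; the quick calculation below uses only the figures quoted above.

genome_size_bp = 4.7e9     # approximate mammoth genome size, in bases
bovb_fraction = 0.12       # share of the genome attributed to BovB repeats
print(f"{genome_size_bp * bovb_fraction / 1e9:.2f} billion bases from BovB alone")
# about 0.56 billion bases of BovB-derived sequence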

Dr. Fangqing Zhao, a postdoctoral researcher in Schuster's group and primary author of the work, emphasized that the BovB family of repeats is particularly interesting because, while this family has been identified in other genomes, such as those of ruminants, snakes, the opossum, and now the mammoth, its distribution across the mammalian lineage is inconsistent. Zhao explained that this finding in the mammoth further supports the hypothesis that BovB may have been acquired "horizontally," meaning that vertebrate genomes obtained the element from another organism rather than inheriting it from their ancestors.

Many species within the Afrotheria group of mammals, which includes the woolly mammoth, are at high risk for extinction or are already extinct. "Further analyses examining if the genomes of extinct and endangered Afrotherians contain more repetitive elements than non-endangered mammals may elucidate whether there is an interplay between repetitive elements and extinction," Zhao noted, underscoring the need to study genomes of species on the brink of extinction.

CSHL

Thursday, June 18, 2009

METEORITE BOMBARDMENT MAY HAVE MADE EARTH MORE HABITABLE

0 comentarios
Large bombardments of meteorites approximately four billion years ago could have helped to make the early Earth and Mars more habitable for life by modifying their atmospheres, suggest the results of a paper published in the journal Geochimica et Cosmochimica Acta.

When a meteorite enters a planet’s atmosphere, extreme heat causes some of the minerals and organic matter on its outer crust to be released as water and carbon dioxide before it breaks up and hits the ground.

Researchers suggest the delivery of this water could have made Earth’s and Mars’ atmospheres wetter. The release of the greenhouse gas carbon dioxide could have trapped more energy from sunlight to make Earth and Mars warm enough to sustain liquid oceans.


In the new study, researchers from Imperial College London analysed the remaining mineral and organic content of fifteen fragments of ancient meteorites that had crashed around the world to see how much water vapour and carbon dioxide they would release when subjected to very high temperatures like those that they would experience upon entering the Earth’s atmosphere.

The researchers used a new technique called pyrolysis-FTIR, which uses electricity to heat the fragments rapidly, at a rate of 20,000 degrees Celsius per second, and then measured the gases released.

They found that on average, each meteorite was capable of releasing up to 12 percent of its mass as water vapour and 6 percent of its mass as carbon dioxide when entering an atmosphere. They concluded that contributions from individual meteorites were small and were unlikely to have a significant impact on the atmospheres of planets on their own.

The researchers then analysed data from an ancient meteorite shower known as the Late Heavy Bombardment (LHB), a barrage roughly 4 billion years ago during which millions of rocks crashed into Earth and Mars over a period of 20 million years.

Using published models of meteoritic impact rates during the LHB, the researchers calculated that 10 billion tonnes of carbon dioxide and 10 billion tonnes of water vapour could have been delivered to the atmospheres of Earth and Mars each year.
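
To connect the per-meteorite yields with those annual totals, a simple scaling helps; the infall rate below is an assumed round number for illustration, not the published impact model the authors used.

annual_infall_tonnes = 1e11          # assumed LHB-era infall rate, illustrative only
water_yield, co2_yield = 0.12, 0.06  # maximum mass fractions measured in the study

print(f"water vapour: {annual_infall_tonnes * water_yield:.1e} tonnes per year")
print(f"carbon dioxide: {annual_infall_tonnes * co2_yield:.1e} tonnes per year")
# an infall on the order of 100 billion tonnes a year would deliver gases on the
# scale of the 10-billion-tonne annual figures quoted above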

This suggests that the LHB could have delivered enough carbon dioxide and water vapour to turn the atmospheres of the two planets into warmer and wetter environments that were more habitable for life, say the researchers.

Professor Mark Sephton, from Imperial’s Department of Earth Science and Engineering, believes the study provides important clues about Earth’s ancient past:

“For a long time, scientists have been trying to understand why Earth is so water rich compared to other planets in our solar system. The LHB may provide a clue. This may have been a pivotal moment in our early history where Earth’s gaseous envelope finally had enough of the right ingredients to nurture life on our planet.”

Lead author of the study, Dr Richard Court, from Imperial’s Department of Earth Science and Engineering, adds:

“Because of their chemistry, ancient meteorites have been suggested as a way of furnishing the early Earth with its liquid water. Now we have data that reveals just how much water and carbon dioxide was directly injected into the atmosphere by meteorites. These gases could have got to work immediately, boosting the water cycle and warming the planet.”

However, researchers say Mars’ good fortune did not last. Unlike Earth, Mars doesn’t have a magnetic field to act as a protective shield from the Sun’s solar wind. As a consequence, Mars was stripped of most of its atmosphere. A reduction in volcanic activity also cooled the planet. This caused its liquid oceans to retreat to the poles, where they became ice.

Imperial College London


Wednesday, June 17, 2009

ADULT BONE MARROW STEM CELLS INJECTED INTO SKELETAL MUSCLE CAN REPAIR HEART TISSUE

0 comentarios
University at Buffalo researchers have demonstrated for the first time that injecting adult bone marrow stem cells into skeletal muscle can repair cardiac tissue, reversing heart failure.

Using an animal model, the researchers showed that this non-invasive procedure increased myocytes, or heart cells, by two-fold and reduced cardiac tissue injury by 60 percent.

The therapy also improved function of the left ventricle, the primary pumping chamber of the heart, by 40 percent and reduced fibrosis, the hardening of the heart lining that impairs its ability to contract, by up to 50 percent.

"This work demonstrates a novel non-invasive mesenchymal stem cell (MSC) therapeutic regimen for heart failure based on an intramuscular delivery route," said Techung Lee, Ph.D., UB associate professor of biochemistry and senior author on the paper.

Mesenchymal stem cells are found in the bone marrow and can differentiate into a variety of cell types.

"Injecting MSCs or factors released by MSCs improved ventricular function, promoted myocardial regeneration, lessened apoptosis (cell death) and fibrotic remodeling, recruited bone marrow progenitor cells and induced myocardial expression of multiple growth factor genes," Lee said.

"These findings highlight the critical 'cross-talks' between the injected MSCs and host tissues, culminating in effective cardiac repair for the failing heart."

The paper reporting this development appears online in the Articles-in-Press section of the American Journal of Physiology -- Heart Circulation Physiology at http://ajpheart.physiology.org/cgi/reprint/00186.2009v1.

The heart disease death rate has dropped significantly in the last three decades due to better treatments, resulting in large numbers of people living with heart failure. This advance has led to another health hurdle: The only therapy available to reverse the decline in cardiac function is heart transplantation, and donor hearts are very scarce.

Clinical trials of myocardial stem cell therapy traditionally have relied on surgery -- infusing the stem cells directly into the heart or injecting them into the myocardium, the heart muscle -- invasive methods that can result in harmful scar tissue, arrhythmia, calcification or small vessel blockages.

"In our research with a swine model of heart failure," said Lee, "we've found that only 1-to-2 percent of MSCs infused into the myocardium grafted into the heart, and there was no evidence that they differentiated into heart muscle cells. In addition, diseased tissue is not a healthy environment for cell growth.

"For these reasons, and because patients with heart failure are not good surgical risks, it made sense to explore a non-invasive cell delivery approach," said Lee. "An important feature of MSCs is their ability to produce a plethora of tissue healing effects, known as "tropic factors," which can be harnessed for stem cell therapy for heart failure.

Lee noted that the multiple trophic factors produced by MSCs have been shown in the literature to be capable of reducing tissue injury, inhibiting fibrosis, promoting angiogenesis, stimulating recruitment and proliferation of tissue stem cells, and reducing inflammatory oxidative stress, a common cause of cardiovascular disease and heart failure.

"Since skeletal muscle is the most abundant tissue in the body and can withstand repeated injection of large number of stem cells, we thought it would be a good method to deliver MSCs," Lee said. "We hypothesized that MSCs, via secretion of these functionally synergistic trophic factors, would be able to rescue the failing heart even when delivered away from the myocardium.

"This study proves our hypothesis," said Lee. "We've demonstrated that injecting MSCs, or trophic factors released by MSCs, into skeletal muscle improved ventricular function, promoted regeneration of heart tissue, decreased cell death and improved other factors that cause heart failure.

"This non-invasive stem cell administration regimen, if validated clinically, is expected to facilitate future stem cell therapy for heart failure."

Lee said the next step is to use genetic and pharmacological engineering to make the stem cells more active, so good therapeutic effects can be achieved with fewer cells.

"That is our goal. It would reduce the cost of stem cell therapy and make it more affordable for patients in the future."

