Monday, January 31, 2011

CONVERTING 2-D PHOTO INTO 3-D FACE FOR SECURITY APPLICATIONS AND FORENSICS

It is possible to construct a three-dimensional (3D) face from flat 2D images, according to research published in the International Journal of Biometrics this month. The discovery could be used for biometrics in security applications or in forensic investigations.

Xin Guan and Hanqi Zhuang of Florida Atlantic University in Boca Raton explain how biometrics, the technology of performing personal identification or authentication via an individual's physical attributes, is becoming an increasingly viable solution for identity management, information protection and homeland security. The researchers have now developed a computer algorithm that can analyze the viewing angle and illumination of a face in an image and generate a 3D view of the face based on the results.
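
As a loose illustration of the kind of fitting such an algorithm might perform (this is a generic sketch of 3D morphable-model fitting, not the authors' published method; all data and names below are hypothetical and synthetic), a 3D face can be written as a mean shape plus a linear combination of basis deformations, and the combination coefficients recovered from 2D landmark positions by least squares:

import numpy as np

rng = np.random.default_rng(0)
n_landmarks, n_basis = 68, 10

# Hypothetical morphable model: a mean 3D shape and a linear deformation basis.
mean_shape = rng.normal(size=(n_landmarks, 3))
basis = rng.normal(size=(n_basis, n_landmarks, 3))

# Simulate a "photo": an unknown 3D face projected orthographically to 2D.
true_coeffs = rng.normal(size=n_basis)
true_shape = mean_shape + np.tensordot(true_coeffs, basis, axes=1)
landmarks_2d = true_shape[:, :2]  # orthographic projection discards depth

# Fitting: the 2D landmarks are linear in the coefficients, so ordinary
# least squares recovers them, and with them the full 3D shape.
A = basis[:, :, :2].reshape(n_basis, -1).T     # (2 * n_landmarks, n_basis)
b = (landmarks_2d - mean_shape[:, :2]).ravel()
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

recovered = mean_shape + np.tensordot(coeffs, basis, axes=1)
print("recovered depth error:", np.abs(recovered[:, 2] - true_shape[:, 2]).max())

A real system would also have to estimate pose and illumination, as the article describes, but the sketch shows why a statistical face model can constrain the missing third dimension of a flat image.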

The team points out that while our faces are all different, they share so many characteristics that it is difficult for current computer technology to uniquely identify an individual from a flat, 2D image. However, a 2D image processed to yield a 3D model of the face would provide a unique perspective.

A 3D image of a person's face might be used in biometrics alongside or instead of fingerprint, iris, face, voice and DNA recognition techniques for so-called identity management. In security, coupled with smart cards and passwords, computer recognition of a real face against a 3D version of known personnel in a security database could be used to reduce false identification. The same technique might also be applied to the analysis of security footage from closed-circuit television (CCTV) cameras in crime investigation or in searching for missing persons. Ultimately, the same technology might also be adapted by the entertainment industry, where 2D images of famous people from the past might be rendered in 3D and so allow a face to be animated.

Inderscience Publishers

KILLER PAPER FOR NEXT-GENERATION FOOD PACKAGING

Scientists are reporting the development and successful lab tests of "killer paper," a material intended for use as a new food packaging material that helps preserve foods by fighting the bacteria that cause spoilage. The paper, described in ACS' journal Langmuir, contains a coating of silver nanoparticles, which are powerful antibacterial agents.

Aharon Gedanken and colleagues note that silver already finds wide use as a bacteria fighter in certain medicinal ointments, kitchen and bathroom surfaces, and even odor-resistant socks. Recently, scientists have been exploring the use of silver nanoparticles — each 1/50,000 the width of a human hair — as germ-fighting coatings for plastics, fabrics, and metals. Nanoparticles, which have a longer-lasting effect than larger silver particles, could help overcome the growing problem of antibiotic resistance, in which bacteria develop the ability to shrug off existing antibiotics. Paper coated with silver nanoparticles could provide an alternative to common food preservation methods such as radiation, heat treatment, and low temperature storage, they note. However, producing "killer paper" suitable for commercial use has proven difficult.

The scientists describe the development of an effective, long-lasting method for depositing silver nanoparticles on the surface of paper that involves ultrasound, or the use of high-frequency sound waves. The coated paper showed potent antibacterial activity against E. coli and S. aureus, two causes of bacterial food poisoning, killing all of the bacteria in just three hours. This suggests its potential application as a food packaging material for promoting longer shelf life, they note.

ACS

BETTER LEARNING THROUGH HANDWRITING

Writing by hand strengthens the learning process. When we type on a keyboard instead, this process may be impaired.

Associate professor Anne Mangen at the University of Stavanger’s Reading Centre asks if something is lost in switching from book to computer screen, and from pen to keyboard.

The process of reading and writing involves a number of senses, she explains. When writing by hand, our brain receives feedback from our motor actions, together with the sensation of touching a pencil and paper. This kind of feedback is significantly different from what we receive when touching and typing on a keyboard.

Together with neurophysiologist Jean-Luc Velay at the University of Marseille, Anne Mangen has written an article published in the Advances in Haptics periodical. They have examined research which goes a long way in confirming the significance of these differences.

An experiment carried out by Velay’s research team in Marseille establishes that different parts of the brain are activated when we read letters we have learned by handwriting than when we recognise letters we have learned through typing on a keyboard. When writing by hand, the movements involved leave a motor memory in the sensorimotor part of the brain, which helps us recognise letters. This implies a connection between reading and writing, and suggests that the sensorimotor system plays a role in the process of visual recognition during reading, Mangen explains.

Other experiments suggest that the brain’s Broca’s area is discernibly more activated when we are read a verb which is linked to a physical activity, compared with being read an abstract verb or a verb not associated with any action.

“This also happens when you observe someone doing something. You don’t have to do anything yourself. Hearing about or watching some activity is often enough. It may even suffice to observe a familiar tool associated with a particular physical activity,” Mangen says.

Since writing by hand takes longer than typing on a keyboard, the temporal aspect may also influence the learning process, she adds.

The term ‘haptic’ refers to the process of touching and the way in which we communicate by touch, particularly by using our fingers and hands to explore our surroundings. Haptics includes both the perceptions we form when we relate passively to our surroundings and those we form when we move and act.

There is a lot of research on haptics in relation to computer games, in which for instance vibrating hand controls are employed. According to Mangen, virtual drills with sound and vibration are used for training dentists.

But there has been very little effort to include haptics within the humanistic disciplines, she explains. In educational science, there is scant interest in the ergonomics of reading and writing, and its potential significance in the learning process.

Mangen refers to an experiment involving two groups of adults, in which the participants were assigned the task of learning to write in an unknown alphabet consisting of approximately twenty letters. One group was taught to write by hand, while the other used a keyboard. Three and six weeks into the experiment, the participants’ recollection of these letters, as well as their rapidity in distinguishing right and reversed letters, was tested. Those who had learned the letters by handwriting came out best in all tests. Furthermore, fMRI brain scans indicated an activation of Broca’s area within this group. Among those who had learned by typing on keyboards, there was little or no activation of this area.

“The sensorimotor component forms an integral part of training for beginners, and in special education for people with learning difficulties. But there is little awareness and understanding of the importance of handwriting to the learning process, beyond that of writing itself,” Mangen says.

She refers to pedagogical research on writing, which has moved from a cognitive approach to a focus on contextual, social and cultural relations. In her opinion, a one-sided focus on context may lead to neglect of the individual, physiological, sensorimotor and phenomenological connections.

Within the field of psychology, there is an awareness of the danger of paying too much attention to the mental. According to Mangen, perception and sensorimotor processes now play a more prominent role.

“Our bodies are designed to interact with the world which surrounds us. We are living creatures, geared toward using physical objects - be it a book, a keyboard or a pen - to perform certain tasks,” she says.

Being a media and reading researcher, Anne Mangen is a rare bird within her field of study. And she is very enthusiastic about her collaboration with a neurophysiologist.

“We combine very different disciplines. Velay has carried out some very exciting experiments on the difference between handwriting and the use of keyboards, from a neurophysiologic perspective. My contribution centres on how we – as humans with bodies and brains – experience the writing process, through using different technologies in different ways. And how these technologies’ interfaces influence our experience,” she concludes.

(Photo: University of Stavanger)

University of Stavanger

Friday, January 28, 2011

ARE SHARKS COLOR BLIND?

Sharks are unable to distinguish colors, even though their close relatives rays and chimaeras have some color vision, according to new research by Dr. Nathan Scott Hart and colleagues from the University of Western Australia and the University of Queensland in Australia. Their study shows that although the eyes of sharks function over a wide range of light levels, they only have a single long-wavelength-sensitive cone type in the retina and therefore are potentially totally color blind.

Hart and team's findings are published online in Springer's journal Naturwissenschaften – The Science of Nature.

“This new research on how sharks see may help to prevent attacks on humans and assist in the development of fishing gear that may reduce shark bycatch in long-line fisheries. Our study shows that contrast against the background, rather than colour per se, may be more important for object detection by sharks. This may help us to design long-line fishing lures that are less attractive to sharks as well as to design swimming attire and surf craft that have a lower visual contrast to sharks and, therefore, are less ‘attractive’ to them,” said Prof. Hart.

Sharks are efficient predators, and their evolutionary success is thought to be due in part to an impressive range of sensory systems, including vision. To date, it has been unclear whether sharks have color vision, despite their well-developed eyes and a large sensory brain area dedicated to the processing of visual information. In an attempt to demonstrate whether or not sharks have color vision, Hart and colleagues used a technique called microspectrophotometry to identify cone visual pigments in shark retinas and measure their spectral absorbance.

They looked at the retinas of 17 shark species caught in a variety of waters in both Queensland and Western Australia. Rod cells were the most common type of photoreceptor in all species. In ten of the 17 species, no cone cells were observed. However, cones were found in the retinae of seven species of shark from three different families, and in each case only a single type of long-wavelength-sensitive cone photoreceptor was present. Hart and team's results provide strong evidence that sharks possess only a single cone type, suggesting that sharks may be cone monochromats, and therefore potentially totally color blind.

The authors conclude: "While cone monochromacy on land is rare, it may be a common strategy in the marine environment. Many aquatic mammals − whales, dolphins and seals − also possess only a single, green-sensitive cone type. It appears that both sharks and marine mammals may have arrived at the same visual design by convergent evolution, in other words, they acquired the same biological trait in unrelated lineages."

(Photo: Springer)

Springer

GENETIC ORIGIN OF CULTIVATED CITRUS DETERMINED

Citrus species are among the most important fruit trees in the world. Citrus has a long history of cultivation, often thought to be more than 4,000 years. Until now, however, the exact genetic origins of cultivated citrus such as sweet orange (Citrus sinensis), lemon (C. limon), and grapefruit (C. paradisi) have been a mystery.

A team of researchers from China has published a study in the Journal of the American Society for Horticultural Science that provides genetic evidence of the origins of a variety of today's cultivated citrus species.

The research team, led by Zhiqin Zhou from Southwest University, analyzed amplified fragment length polymorphism (AFLP) fingerprints—a technique that has been used successfully to assess the origin of potato cultivars—together with chloroplast DNA (cpDNA) and nuclear internal transcribed spacer (ITS) sequence analysis. "The combination of nuclear DNA and cpDNA data allowed us to identify the exact genetic origin of the cultivated citrus", they wrote.

The results proved that bergamot and lemon were derived from citron and sour orange, and grapefruit was a hybrid that originated from a cross between pummelo and sweet orange. The data demonstrated that sweet orange and sour orange were hybrids of mandarin and pummelo, while rough lemon was a cross between citron and mandarin. The evidence also confirmed that bergamot was a hybrid of sour orange and citron, with sour orange as the maternal parent and citron as the paternal parent.

"Our molecular evidence presented more convincing data than all other previous studies in supporting the origin of lime", noted the scientists. The data confirmed a species of Papeda to be the female parent and C. medica as the male for mexican lime.

The researchers said that a clear understanding of the citrus genetic background is necessary for better characterization and utilization of citrus germplasm, adding that this research will provide important new information for future study on the genetics and breeding of citrus.

(Photo: Xiaomeng Li)

American Society for Horticultural Science

FORGET PLANET X! NEW TECHNIQUE COULD PINPOINT GALAXY X

Planet X, an often-sought 10th planet, is so far a no-show, but Sukanya Chakrabarti has high hopes for finding what might be called Galaxy X – a dwarf galaxy that she predicts orbits our Milky Way Galaxy.

Many large galaxies, such as the Milky Way, are thought to have lots of satellite galaxies too dim to see. They are dominated by "dark matter," which astronomers say makes up 85 percent of all matter in the universe but so far remains undetected.

Chakrabarti, a post-doctoral fellow and theoretical astronomer at the University of California, Berkeley, has developed a way to find "dark" satellite galaxies by analyzing the ripples in the hydrogen gas distribution in spiral galaxies. Planet X was predicted – erroneously – more than 100 years ago based on perturbations in the orbit of Neptune.

Earlier this year, Chakrabarti used her mathematical method to predict that a dwarf galaxy sits on the opposite side of the Milky Way from Earth, and that it has been unseen to date because it is obscured by the intervening gas and dust in the galaxy's disk. One astronomer has already applied for time on the Spitzer Space Telescope to look in infrared wavelengths for this hypothetical Galaxy X.

"My hope is that this method can serve as a probe of mass distribution and of dark matter in galaxies, in the way that gravitational lensing today has become a probe for distant galaxies," Chakrabarti said.

Since her prediction for the Milky Way, Chakrabarti has gained confidence in her method after successfully testing it on two galaxies with known, faint satellites.

"This approach has broad implications for many fields of physics and astronomy – for the indirect detection of dark matter as well as dark-matter dominated dwarf galaxies, planetary dynamics, and for galaxy evolution driven by satellite impacts," she said.

Chakrabarti's colleague Leo Blitz, a UC Berkeley professor of astronomy, said that the method could also help test an alternative to dark matter theory, which proposes a modification to the law of gravity to explain the missing mass in galaxies.

"The matter density in the outer reaches of spiral galaxies is hard to explain in the context of modified gravity, so if this tidal analysis continues to work, and we can find other dark galaxies in distant halos, it may allow us to rule out modified gravity," he said.

The Milky Way is surrounded by some 80 known or suspected dwarf galaxies that are called satellite galaxies, even though some of them may just be passing through, not captured into orbits around the galaxy. The Large and Small Magellanic Clouds are two such satellites, both of them irregular dwarf galaxies.

Theoretical models of rotating spiral galaxies, however, predict that there should be many more satellite galaxies, perhaps thousands, with small ones even more prevalent than large ones. Dwarf galaxies, however, are faint, and some of the galaxies may be primarily invisible dark matter.

Chakrabarti and Blitz realized that dwarf galaxies would create disturbances in the distribution of cold atomic hydrogen gas (H I) within the disk of a galaxy, and that these perturbations could reveal not only the mass, but the distance and location of the satellite. The cold hydrogen gas in spiral galaxies is gravitationally confined to the plane of the galactic disk and extends much farther out than the visible stars – sometimes up to five times the diameter of the visible spiral. The cold gas can be mapped by radio telescopes.

"The method is like inferring the size and speed of a ship by looking at its wake," said Blitz. "You see the waves from a lot of boats, but you have to be able to separate out the wake of a medium or small ship from that of an ocean liner."

The technique Chakrabarti developed involves a Fourier analysis of the gas distribution determined by high-resolution radio observations. Her initial prediction of Galaxy X around the Milky Way was made possible by a wealth of data already available on the atomic hydrogen in our galaxy. To test her theory on other galaxies, she and her collaborators used recent data from a radio survey called The HI Nearby Galaxy Survey (THINGS), conducted by the Very Large Array, as well as its extension to the Southern Hemisphere, THINGS-SOUTH, a survey carried out by the Australia Telescope Compact Array.
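
In outline (a minimal sketch with synthetic data, not the project's actual analysis code), the decomposition works like this: sample the gas surface density on a polar grid and take a Fourier transform in azimuth at each radius, so that a satellite's wake shows up as low-order modes whose strength grows where the disturbance is largest:

import numpy as np

n_r, n_theta = 50, 256
r = np.linspace(1.0, 30.0, n_r)                       # radius (arbitrary units)
theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")

# Synthetic exponential gas disk plus a weak one-armed (m = 1) perturbation,
# a stand-in for a satellite's tidal wake, strongest in the outer disk.
sigma = np.exp(-R / 10.0) * (1.0 + 0.1 * (R / r.max()) * np.cos(T - 0.5))

# Fourier-decompose each radial ring in azimuth; relative mode amplitudes
# |c_m| / |c_0| measure the strength of the disturbance at each radius.
coeffs = np.fft.rfft(sigma, axis=1)
relative = np.abs(coeffs[:, 1:5]) / np.abs(coeffs[:, :1])

for m in range(1, 5):
    print(f"m = {m}: peak relative amplitude {relative[:, m - 1].max():.3f}")

In the actual analysis, the observed amplitudes and phases are then compared against models to infer the perturber's mass and position, which is how the method pinned down the known companions of M51 and NGC 1512 described below.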

"These new high-resolution radio data open up a wealth of opportunities to explore the gas distributions in the outskirts of galaxies," said co-author Frank Bigiel, a UC Berkeley post-doctoral fellow who is also a co-investigator of the THINGS and THINGS-SOUTH projects.

Collaborating with Bigiel and Phil Chang of the Canadian Institute of Theoretical Astrophysics, Chakrabarti looked at data for the Whirlpool Galaxy (M51), which has a companion galaxy one-third the size of M51, and NGC 1512, with a satellite one-hundredth the size of the galaxy. Her mathematical analysis correctly predicted the mass and location of these satellites.

She said her technique should work for satellite galaxies as small as one-thousandth the mass of the primary galaxy.

Chakrabarti predicted the mass of Galaxy X, for example, to be one-hundredth that of the Milky Way itself. Based on her calculations with Blitz, the galaxy currently sits across the Milky Way somewhere in the constellations of Norma or Circinus, just west of the galactic center in Sagittarius when viewed from Earth.

She contrasts her prediction of Galaxy X with previous arguments for a Planet X beyond the orbit of Neptune. In the early 20th century, what would have been at the time a ninth planet was proposed by famed astronomer Percival Lowell, but his prediction turned out to be based on incorrect measurements of Neptune's orbit. In fact, Pluto and other objects in the Kuiper Belt, where the planet was predicted to reside, have masses far too low to exert a measurable gravitational effect on Neptune or Uranus, Chakrabarti said. Since then, perturbations in the orbits of other bodies in the solar system have set off periodic searches for a 10th planet beyond the now "dwarf" planet Pluto.

On the other hand, Galaxy X – or a satellite galaxy one-thousandth the mass of the Milky Way – would still exert a large enough gravitational effect to cause ripples in the disk of our galaxy.

Barbara Whitney, a Wisconsin-based astronomer affiliated with the Space Sciences Institute in Boulder, Colo., hopes to target Galaxy X as part of the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) conducted with the Spitzer Space Telescope.

Chakrabarti and Blitz also calculated that the predicted galaxy is in a parabolic orbit around the Milky Way, now at a distance of about 300,000 light years from the galactic center. The galactic radius is about 50,000 light years.

"Our paper is a proof of principle, but we need to look at a much larger sample of spiral galaxies with optically visible galactic companions to determine the incidence of false positives," and thus the method's reliability, Chakrabarti said.

(Photo: Sukanya Chakrabarti/UC Berkeley)

University of California, Berkeley

Thursday, January 27, 2011

OUR PERCEPTIONS OF MASCULINITY AND FEMININITY ARE SWAYED BY OUR SENSE OF TOUCH

Gender stereotypes suggest that men are usually tough and women are usually tender. A new study published in Psychological Science, a journal of the Association for Psychological Science, finds these stereotypes have some real bodily truth for our brains: when people look at a gender-neutral face, they are more likely to judge it as male if they’re touching something hard and as female if they’re touching something soft.

Several recent studies have found that we understand many concepts through our bodies. For example, weight conveys importance: just giving someone a heavy clipboard to hold will make them judge something as more important than someone holding a light clipboard would. Michael Slepian, a graduate student at Tufts University, and his colleagues wanted to know if this was also true for how people think about gender.

For one experiment, people were given either a hard or a soft ball to hold, then told to squeeze it continuously while looking at pictures of faces on a computer. Each face had been made to look exactly gender-neutral, so it was neither male nor female. For each face, the volunteer had to categorize it as male or female. People who were squeezing the soft ball were more likely to judge faces as female, while people who handled the hard ball were more likely to categorize them as male.

The same effect was found in a second experiment in which people wrote their answers on a piece of paper with carbon paper underneath; some were told to press hard, to make two copies, and some were told to press lightly, so the carbon paper could be reused. People who were pressing hard were more likely to categorize faces as male, while the soft writers were more likely to choose female.

“We were really surprised,” says Slepian, who cowrote the study with Max Weisbuch of the University of Denver, Nicholas O. Rule at the University of Toronto, and Nalini Ambady of Tufts University. “It’s remarkable that the feeling of handling something hard or soft can influence how you visually perceive a face.” The results show that knowledge about social categories, such as gender, is like other kinds of knowledge—it’s partly carried in the body.

Association for Psychological Science

GESTURING WHILE TALKING HELPS CHANGE YOUR THOUGHTS

Sometimes it’s almost impossible to talk without using your hands. These gestures seem to be important to how we think. They provide a visual clue to our thoughts and, a new theory suggests, may even change our thoughts by grounding them in action.

University of Chicago psychological scientists Sian Beilock and Susan Goldin-Meadow are bringing together two lines of research: Beilock’s work on how action affects thought and Goldin-Meadow’s work on gesture. After a conference chat instigated by Ed Diener, the founding editor of Perspectives on Psychological Science, they designed a study together to look at how gesture affects thought.

For the study, published in Psychological Science, a journal of the Association for Psychological Science, Beilock and Goldin-Meadow had volunteers solve a problem known as the Tower of Hanoi. It’s a game in which you have to move stacked disks from one peg to another. After they finished, the volunteers were taken into another room and asked to explain how they did it. (This is virtually impossible to explain without using your hands.) Then the volunteers tried the task again. But there was a trick: For some people, the weight of the disks had secretly changed, such that the smallest disk, which used to be light enough to move with one hand, now needed two hands.
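
For readers unfamiliar with the puzzle, the Tower of Hanoi has a classic recursive solution, shown here purely for background (the study was about explaining the moves, not computing them):

# Move n disks from a source peg to a target peg: first clear the n-1 smaller
# disks onto the spare peg, move the largest disk, then restack on top of it.
def hanoi(n, source="A", spare="B", target="C", moves=None):
    """Return the list of (from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, target, spare, moves)   # clear the way
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, source, target, moves)   # restack the rest
    return moves

print(hanoi(3))   # 2**3 - 1 = 7 moves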

People who had used one hand in their gestures when talking about moving the small disk were in trouble when that disk got heavier. They took longer to complete the task than did people who used two hands in their gestures—and the more one-handed gestures they used, the longer they took. This shows that how you gesture affects how you think; Goldin-Meadow and Beilock suggest that the volunteers had cemented how to solve the puzzle in their heads by gesturing about it (and were thrown off by the invisible change in the game).

In another version of the experiment, published in Perspectives on Psychological Science, the volunteers were not asked to explain their solution; instead, they solved the puzzle a second time before the disk weights were changed. But moving the disks didn’t affect performance in the way that gesturing about the disks did. The people who gestured did worse after the disk weights switched, but the people who moved the disks did not—they did just as well as before. “Gesture is a special case of action. You might think it would have less effect because it does not have a direct impact on the world,” says Goldin-Meadow. But she and Beilock think it may actually be having a stronger effect, “because gesturing about an act requires you to represent that act.” You aren’t just reaching out and handling the thing you’re talking about; you have to abstract from it, indicating it by a movement of your hands.

In the article published in Perspectives on Psychological Science, the two authors review the research on action, gesture, and thought. Gestures make thought concrete, bringing movement to the activity that’s going on in your mind.

This could be useful in education; Goldin-Meadow and Beilock have been working on helping children to understand abstract concepts in mathematics, physics, and chemistry by using gesture. “When you’re talking about angular momentum and torque, you’re talking about concepts that have to do with action,” Beilock says. “I’m really interested in whether getting kids to experience some of these actions or gesture about them might change the brain processes they use to understand these concepts.” But even in math where the concepts have little to do with action, gesturing helps children learn—maybe because the gestures themselves are grounded in action.

Association for Psychological Science

PLASMA JETS ARE SUSPECT IN SOLAR MYSTERY

One of the most enduring mysteries in solar physics is why the Sun's outer atmosphere, or corona, is millions of degrees hotter than its surface.

Now scientists believe they have discovered a major source of hot gas that replenishes the corona: jets of plasma shooting up from just above the Sun's surface.

The finding addresses a fundamental question in astrophysics: how energy is moved from the Sun's interior to create its hot outer atmosphere.

"It's always been quite a puzzle to figure out why the Sun's atmosphere is hotter than its surface," says Scott McIntosh, a solar physicist at the High Altitude Observatory of the National Center for Atmospheric Research (NCAR) in Boulder, Colo., who was involved in the study.

"By identifying that these jets insert heated plasma into the Sun's outer atmosphere, we can gain a much greater understanding of that region and possibly improve our knowledge of the Sun's subtle influence on the Earth's upper atmosphere."

The research, results of which are published this week in the journal Science, was conducted by scientists from Lockheed Martin's Solar and Astrophysics Laboratory (LMSAL), NCAR, and the University of Oslo. It was supported by NASA and the National Science Foundation (NSF), NCAR's sponsor.

"These observations are a significant step in understanding observed temperatures in the solar corona," says Rich Behnke of NSF's Division of Atmospheric and Geospace Sciences, which funded the research.

"They provide new insight about the energy output of the Sun and other stars. The results are also a great example of the power of collaboration among university, private industry and government scientists and organizations."

The research team focused on jets of plasma known as spicules, which are fountains of plasma propelled upward from near the surface of the Sun into the outer atmosphere.

For decades scientists believed spicules could send heat into the corona. However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so the theory largely fell out of vogue.

"Heating of spicules to millions of degrees has never been directly observed, so their role in coronal heating had been dismissed as unlikely," says Bart De Pontieu, the lead researcher and a solar physicist at LMSAL.

In 2007, De Pontieu, McIntosh, and their colleagues identified a new class of spicules that moved much faster and were shorter-lived than the traditional spicules.

These "Type II" spicules shoot upward at high speeds, often in excess of 100 kilometers per second, before disappearing.

The rapid disappearance of these jets suggested that the plasma they carried might get very hot, but direct observational evidence of this process was missing.

The researchers used new observations from the Atmospheric Imaging Assembly on NASA's recently launched Solar Dynamics Observatory and NASA's Focal Plane Package for the Solar Optical Telescope (SOT) on the Japanese Hinode satellite to test their hypothesis.

"The high spatial and temporal resolution of the newer instruments was crucial in revealing this previously hidden coronal mass supply," says McIntosh.

"Our observations reveal, for the first time, the one-to-one connection between plasma that is heated to millions of degrees and the spicules that insert this plasma into the corona."

The findings provide an observational challenge to the existing theories of coronal heating.

During the past few decades, scientists proposed a wide variety of theoretical models, but the lack of detailed observation significantly hampered progress.

"One of our biggest challenges is to understand what drives and heats the material in the spicules," says De Pontieu.

A key step, according to De Pontieu, will be to better understand the interface region between the Sun's visible surface, or photosphere, and its corona.

Another NASA mission, the Interface Region Imaging Spectrograph (IRIS), is scheduled for launch in 2012 to provide high-fidelity data on the complex processes and enormous contrasts of density, temperature and magnetic field between the photosphere and corona. Researchers hope this will reveal more about the spicule heating and launch mechanism.

The LMSAL is part of the Lockheed Martin Space Systems Company, which designs, develops, tests, manufactures and operates a full spectrum of advanced-technology systems for national security and military, civil government and commercial customers.

(Photo: NASA)

National Science Foundation

BEING POOR CAN SUPPRESS CHILDREN'S GENETIC POTENTIALS

Growing up poor can suppress a child's genetic potential to excel cognitively even before the age of 2, according to research from psychologists at The University of Texas at Austin.

Half of the gains that wealthier children show on tests of mental ability between 10 months and 2 years of age can be attributed to their genes, the study finds. But children from poorer families, who already lag behind their peers by that age, show almost no improvements that are driven by their genetic makeup.

The study of 750 sets of twins by Assistant Professor Elliot Tucker-Drob does not suggest that children from wealthier families are genetically superior or smarter. They simply have more opportunities to reach their potential.

These findings go to the heart of the age-old debate about whether "nature" or "nurture" is more important to a child's development. They suggest the two work together and that the right environment can help children begin to reach their genetic potentials at a much earlier age than previously thought.

"You can't have environmental contributions to a child's development without genetics. And you can't have genetic contributions without environment," says Tucker-Drob, who is also a research associate in the university's Population Research Center. "Socioeconomic disadvantages suppress children's genetic potentials."

The study, published in the journal Psychological Science, was co-authored by K. Paige Harden of The University of Texas at Austin, Mijke Rhemtulla of The University of Texas at Austin and the University of British Columbia, and Eric Turkheimer and David Fask of the University of Virginia.

The researchers looked at test results from twins who had taken a version of the Bayley Scales of Infant Development at about 10 months and again at about 2 years of age. The test, which is widely used to measure early cognitive ability, asks children to perform such tasks as pulling a string to ring a bell, putting three cubes in a cup and matching pictures.

At 10 months, there was no difference in how the children from different socioeconomic backgrounds performed. By 2 years, children from high socioeconomic backgrounds scored significantly higher than those from low socioeconomic backgrounds.

In general, the 2-year-olds from poorer families performed very similarly to one another. That was true among both fraternal and identical twins, suggesting that genetic similarity was unrelated to similarities in cognitive ability. Instead, their environments determined their cognitive success.

Among 2-year-olds from wealthier families, identical twins (who share identical genetic makeups) performed very similarly to one another. But fraternal twins were not as similar — suggesting their different genetic makeups and potentials were already driving their cognitive abilities.
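
The twin logic can be made concrete with the classic Falconer estimate, which puts heritability at twice the gap between identical-twin and fraternal-twin correlations. This is a rough illustration only: the study itself fit more sophisticated models, and the correlations below are invented to echo the reported pattern.

def falconer_h2(r_mz, r_dz):
    """Falconer's formula: heritability h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical test-score correlations chosen to mimic the reported pattern.
print(falconer_h2(0.80, 0.55))   # wealthier families: h^2 = 0.50, about half
print(falconer_h2(0.70, 0.68))   # poorer families:    h^2 = 0.04, near zero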

"Our findings suggest that socioeconomic disparities in cognitive development start early," says Tucker-Drob. "For children from poorer homes, genetic influences on changes in cognitive ability were close to zero. For children from wealthier homes, genes accounted for about half of the variation in cognitive changes."

The study notes that wealthier parents are often able to provide better educational resources and spend more time with their children but does not examine what factors, in particular, help their children reach their genetic potentials. Tucker-Drob is planning follow-up studies to examine that question.

The University of Texas at Austin

COILED NANOWIRES MAY HOLD KEY TO STRETCHABLE ELECTRONICS

Researchers at North Carolina State University have created the first coils of silicon nanowire on a substrate that can be stretched to more than double their original length, moving us closer to incorporating stretchable electronic devices into clothing, implantable health-monitoring devices, and a host of other applications.

“In order to create stretchable electronics, you need to put electronics on a stretchable substrate, but electronic materials themselves tend to be rigid and fragile,” says Dr. Yong Zhu, one of the researchers who created the new nanowire coils and an assistant professor of mechanical and aerospace engineering at NC State. “Our idea was to create electronic materials that can be tailored into coils to improve their stretchability without harming the electric functionality of the materials.”

Other researchers have experimented with “buckling” electronic materials into wavy shapes, which can stretch much like the bellows of an accordion. However, Zhu says, the maximum strains for wavy structures occur at localized positions – the peaks and valleys – on the waves. As soon as the failure strain is reached at one of the localized positions, the entire structure fails.

“An ideal shape to accommodate large deformation would lead to a uniform strain distribution along the entire length of the structure – a coil spring is one such ideal shape,” Zhu says. “As a result, the wavy materials cannot come close to the coils’ degree of stretchability.” Zhu notes that the coil shape is energetically favorable only for one-dimensional structures, such as wires.

Zhu’s team put a rubber substrate under strain and used very specific levels of ultraviolet radiation and ozone to change its mechanical properties, and then placed silicon nanowires on top of the substrate. The nanowires formed coils upon release of the strain. Other researchers have been able to create coils using freestanding nanowires, but have so far been unable to directly integrate those coils on a stretchable substrate.

While the new coils’ mechanical properties allow them to be stretched an additional 104 percent beyond their original length, their electrical performance cannot yet be maintained reliably over such a large range, possibly due to factors like contact resistance changes or electrode failure, Zhu says. “We are working to improve the reliability of the electrical performance when the coils are stretched to the limit of their mechanical stretchability, which is likely well beyond 100 percent, according to our analysis.”

(Photo: NCSU)

North Carolina State University

PRINCETON SCIENTISTS CONSTRUCT SYNTHETIC PROTEINS THAT SUSTAIN LIFE

In a groundbreaking achievement that could help scientists "build" new biological systems, Princeton University scientists have constructed for the first time artificial proteins that enable the growth of living cells.

The team of researchers created genetic sequences never before seen in nature, and the scientists showed that they can produce substances that sustain life in cells almost as readily as proteins produced by nature's own toolkit.

"What we have here are molecular machines that function quite well within a living organism even though they were designed from scratch and expressed from artificial genes," said Michael Hecht, a professor of chemistry at Princeton, who led the research. "This tells us that the molecular parts kit for life need not be limited to parts -- genes and proteins -- that already exist in nature."

The work, Hecht said, represents a significant advance in synthetic biology, an emerging area of research in which scientists work to design and fabricate biological components and systems that do not already exist in the natural world. One of the field's goals is to develop an entirely artificial genome composed of unique patterns of chemicals.

"Our work suggests," Hecht said, "that the construction of artificial genomes capable of sustaining cell life may be within reach."

Nearly all previous work in synthetic biology has focused on reorganizing parts drawn from natural organisms. In contrast, Hecht said, the results described by the team show that biological functions can be provided by macromolecules that were not borrowed from nature, but designed in the laboratory.

Although scientists have shown previously that proteins can be designed to fold and, in some cases, catalyze reactions, the Princeton team's work represents a new frontier in creating these synthetic proteins.

The research, which Hecht conducted with three former Princeton students and a former postdoctoral fellow, is described in a report published online Jan. 4 in the journal Public Library of Science ONE.

Hecht and the students in his lab study the relationship between biological processes on the molecular scale and processes at work on a larger magnitude. For example, he is studying how the errant folding of proteins in the brain can lead to Alzheimer's disease, and is involved in a search for compounds to thwart that process. In work that relates to the new paper, Hecht and his students also are interested in learning what processes drive the routine folding of proteins on a basic level -- as proteins need to fold in order to function -- and why certain key sequences have evolved to be central to existence.

Proteins are the workhorses of organisms, produced from instructions encoded into cellular DNA. The identity of any given protein is dictated by its unique sequence, drawn from an alphabet of 20 chemicals known as amino acids. If the different amino acids can be viewed as letters of an alphabet, each protein sequence constitutes its own unique "sentence."

And, if a protein is 100 amino acids long (most proteins are even longer), there is an astronomically large number of possible protein sequences, Hecht said. At the heart of his team's research was a question: why are only about 100,000 different proteins produced in the human body, when there is a potential for so many more? They wondered, are these particular proteins somehow special? Or might others work equally well, even though evolution has not yet had a chance to sample them?
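
To put a number on "astronomically large": with an alphabet of 20 amino acids and 100 positions, there are 20^100 possible sequences, which works out to roughly 10^130, vastly more than the number of atoms in the observable universe.

from math import log10

# Number of length-100 sequences over a 20-letter amino-acid alphabet.
log_count = 100 * log10(20)                    # log10 of 20**100
print(f"20^100 is about 10^{log_count:.0f}")   # ~10^130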

Hecht and his research group set about creating artificial proteins encoded by genetic sequences not seen in nature. They produced about 1 million amino acid sequences that were designed to fold into stable three-dimensional structures.

"What I believe is most intriguing about our work is that the information encoded in these artificial genes is completely novel -- it does not come from, nor is it significantly related to, information encoded by natural genes, and yet the end result is a living, functional microbe," said Michael Fisher, a co-author of the paper who earned his Ph.D. at Princeton in 2010 and is now a postdoctoral fellow at the University of California-Berkeley. "It is perhaps analogous to taking a sentence, coming up with brand new words, testing if any of our new words can take the place of any of the original words in the sentence, and finding that in some cases, the sentence retains virtually the same meaning while incorporating brand new words."

Once the scientists had created this new library of artificial proteins, they inserted those proteins into various mutant strains of bacteria in which certain natural genes previously had been deleted. The deleted natural genes are required for survival under a given set of conditions, including a limited food supply. Under these harsh conditions, the mutant strains of bacteria died -- unless they acquired a life-sustaining novel protein from Hecht's collection. This was significant because formation of a bacterial colony under these selective conditions could occur only if a protein in the collection had the capacity to sustain the growth of living cells.

In a series of experiments exploring the role of differing proteins, the scientists showed that several different strains of bacteria that should have died were rescued by novel proteins designed in the laboratory. "These artificial proteins bear no relation to any known biological sequences, yet they sustained life," Hecht said.

Added Kara McKinley, also a co-author and a 2010 Princeton graduate who is now a Ph.D. student at the Massachusetts Institute of Technology: "This is an exciting result, because it shows that unnatural proteins can sustain a natural system, and that such proteins can be found at relatively high frequency in a library designed only for structure."

In addition to Hecht, Fisher and McKinley, other authors on the paper include Luke Bradley, a former postdoctoral fellow in Hecht's lab who is now an assistant professor at the University of Kentucky, and Sara Viola, a 2008 Princeton graduate who is now a medical student at Columbia University.

(Photo: Brian Wilson)

Princeton University

Tuesday, January 25, 2011

HOW DO YOU MAKE LITHIUM MELT IN THE COLD?

Sophisticated tools allow scientists to subject the basic elements of matter to conditions drastic enough to modify their behavior. By doing this, they can expand our understanding of matter. A research team including three Carnegie scientists was able to demonstrate surprising properties of the element lithium under intense pressure and low temperatures. Their results were published Jan. 9 on the Nature Physics website.

Lithium is the first metal in the periodic table and is the least dense solid element at room temperature. It is most commonly known for its use in batteries for consumer electronics, such as cell phones and laptop computers. And, with only three electrons per atom, lithium should behave like a model, simple metal.

However, this research has shown that under pressures ranging between about 395,000 atmospheres (40 GPa) and about 592,000 atmospheres (60 GPa), lithium behaves in a manner that’s anything but simple. Not only does it become a liquid at room temperature, but it then refuses to freeze until the temperature reaches a chilly -115 °F (about -82 °C). At pressures above about 592,000 atmospheres (60 GPa), when lithium does eventually solidify, it settles into a range of highly complex, crystalline states. The highest pressure reached in the study was about 1.3 million atmospheres (130 GPa).

The research team, including Malcolm Guthrie, Stanislav Sinogeikin and Ho-kwang (Dave) Mao of Carnegie’s Geophysical Laboratory, believes that this exotic behavior is directly due to the exceptionally low mass of the lithium atom. An elementary result of quantum physics is that atoms continue to move, even when cooled to the lowest possible temperature. As the mass of an atom decreases, the importance of this residual, so-called ‘zero-point’ energy increases. The researchers speculate that, in the case of lithium, the zero-point energy increases with pressure to the point that melting occurs. This work raises the possibility of uncovering a material that never freezes. The prospect of a metallic liquid at even the lowest temperatures raises the intriguing possibility of an entirely novel material, a superconducting liquid, as proposed previously by theorists for hydrogen at very high pressure.
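
The mass dependence is easy to see in the simplest quantum model (a rough illustration only; the stiffness value below is an assumption, not lithium's measured phonon spectrum): a harmonic oscillator's zero-point energy is E0 = (hbar/2) * sqrt(k/m), so lighter atoms retain more residual energy.

from math import sqrt

hbar = 1.0545718e-34      # reduced Planck constant, J*s
amu = 1.6605390e-27       # atomic mass unit, kg
k = 10.0                  # assumed interatomic stiffness, N/m (illustrative)

for name, mass_amu in [("lithium", 6.94), ("sodium", 22.99), ("potassium", 39.10)]:
    e0 = 0.5 * hbar * sqrt(k / (mass_amu * amu))          # E0 = (hbar/2)*sqrt(k/m)
    print(f"{name:9s} zero-point energy ~ {e0 / 1.602e-19 * 1000:.0f} meV")

The researchers' suggestion is that compression raises this residual energy until it rivals the energy needed to keep the lattice ordered, so the solid melts.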

(Photo: ©iStockphoto.com/David Freund)

Carnegie Institution

RESEARCHERS FIND SPECIFIC BACTERIA MAY LEAD TO HEART DISEASE AND STROKE

Emil Kozarov and a team of researchers at the Columbia University College of Dental Medicine have identified specific bacteria that may have a key role in atherosclerosis, or what is commonly referred to as “hardening of the arteries,” caused by plaque build-up, which can lead to heart attack and stroke.

Fully understanding the role of bacterial infections in cardiovascular diseases has been challenging because researchers have previously been unable to isolate live bacteria from plaque tissue removed from the arteries. Using specimens from the Department of Surgery and the Herbert Irving Comprehensive Cancer Center at Columbia University, Kozarov and his team, however, were able to isolate bacteria from a 78-year-old male who had previously suffered a heart attack. Their findings are explained in the latest issue of the Journal of Atherosclerosis and Thrombosis.

In the paper, researchers describe using cell cultures to study the genetic make-up of the tissue and to look for the presence of bacteria that could be cultured and grown for analysis. In addition, they looked at five pairs of both diseased and healthy arteries. Culturing the cells aided in the isolation of the bacillus Enterobacter hormaechei from the patient’s tissue. Implicated in bloodstream infections such as sepsis and other life-threatening conditions, the isolated bacteria were resistant to multiple antibiotics. Surprisingly, this microbe was further identified in very high numbers in diseased but not in healthy arterial tissues.

The data suggest that a chronic infection may underlie the process of atherosclerosis, an infection that can be initiated by the spread of bacteria through different “gates” in the vascular wall—as in the case of someone with an intestinal infection. The data support Kozarov’s previous studies, in which his team identified bacteria normally found in a person’s mouth inside the carotid artery, thus pointing to tissue-destroying periodontal, or tooth and gum, infections as one possible gate to the circulation.

Bacteria can gain access to blood vessels through different avenues, and then penetrate their vascular walls, where they can create secondary infections that have been shown to lead to plaque formation, the researchers continued. “In order to test the idea that bacteria are involved, we must be able not only to detect bacterial DNA, but first of all to isolate the bacterial strains from the vascular wall of the patient,” Kozarov said.

One specific avenue of infection the researchers studied involved bacteria gaining access to the circulatory system via white blood cells (phagocytes), which are designed to ingest harmful foreign particles. The model that Kozarov’s team demonstrated showed an intermediate step in which Enterobacter hormaechei is internalized by the phagocytic cells but avoids immediate death inside them. Once in circulation, Kozarov said, bacteria using this “Trojan horse” approach can persist in the organism for extended periods of time while traveling to and colonizing distant sites such as the carotid or femoral arteries or the aorta. This can lead to failure of antibiotic treatment and initiation of an inflammatory process, or atherosclerosis.

“Our findings warrant further studies of bacterial infections as a contributing factor to cardiovascular disease,” said Kozarov, an associate professor of oral biology at the College of Dental Medicine. Jingyue Ju, co-author and director of the Columbia Center for Genome Technology & Biomolecular Engineering, also contributed to this research, which was supported in part by a grant from the National Heart, Lung, and Blood Institute of the National Institutes of Health and by the Columbia University Section of Oral and Diagnostic Sciences. “The concept that bacteria might not face an immediate death after being ingested by white blood cells likely contributes to the spread of potentially plaque-forming bacteria to sites where they might not normally be present.”

(Photo: Nephron)

Columbia University

EARTH IS TWICE AS DUSTY AS IN 19TH CENTURY, RESEARCH SHOWS

If the house seems dustier than it used to be, it may not be a reflection on your housekeeping skills. The amount of dust in the Earth's atmosphere has doubled over the last century, according to a new study; and the dramatic increase is influencing climate and ecology around the world.

The study, led by Natalie Mahowald, associate professor of earth and atmospheric sciences, used available data and computer modeling to estimate the amount of desert dust, or soil particles in the atmosphere, throughout the 20th century. It's the first study to trace the fluctuation of a natural (not human-caused) aerosol around the globe over the course of a century.

Mahowald presented the research at the fall meeting of the American Geophysical Union in San Francisco Dec. 13.

Desert dust and climate influence each other directly and indirectly through a host of intertwined systems. Dust limits the amount of solar radiation that reaches the Earth, for example, a factor that could mask the warming effects of increasing atmospheric carbon dioxide. It also can influence clouds and precipitation, leading to droughts, which in turn lead to desertification and more dust.

Ocean chemistry is also intricately involved. Dust is a major source of iron, which is vital for plankton and other organisms that draw carbon out of the atmosphere.

To measure fluctuations in desert dust over the century, the researchers gathered existing data from ice cores, lake sediment and coral, each of which contains information about past concentrations of desert dust in the region. They then linked each sample with its likely source region and calculated the rate of dust deposition over time. Applying components of a computer modeling system known as the Community Climate System Model, the researchers reconstructed the influence of desert dust on temperature, precipitation, ocean iron deposition and terrestrial carbon uptake over time.

Among their results, the researchers found that regional changes in temperature and precipitation caused a global reduction in terrestrial carbon uptake of 6 parts per million (ppm) over the 20th century. The model also showed that dust deposited in oceans increased carbon uptake from the atmosphere by 6 percent, or 4 ppm, over the same time period.

While the majority of research related to aerosol impacts on climate is focused on anthropogenic aerosols (those directly emitted by humans through combustion), Mahowald said, the study highlights the important role of natural aerosols as well.

"Now we finally have some information on how the desert dust is fluctuating. This has a really big impact for the understanding of climate sensitivity," she said.

It also underscores the importance of gathering more data and refining the estimates. "Some of what we're doing with this study is highlighting the best available data. We really need to look at this more carefully. And we really need more paleodata records," she said.

Meanwhile, the study is also notable for the variety of fields represented by its contributors, she said, which ranged from marine geochemistry to computational modeling. "It was a fun study to do because it was so interdisciplinary. We're pushing people to look at climate impacts in a more integrative fashion."

(Photo: Cornell U.)

Cornell University

Monday, January 24, 2011

SQUEEZING SUSTAINABLE ENERGY FROM THIN AIR

Energy from compressed air stored underground is cheap, clean and renewable, and could even save lives. Researchers at the UA's School of Sustainable Engineered Systems are designing systems that could power refrigerators, buildings or utility-scale plants.

Solar collectors and wind generators hold so much promise for clean energy, but they have a major flaw – they produce no power when the sun doesn't shine or the wind doesn't blow.

"If all we had to do was to generate power when the sun is shining, we would actually be in good shape right now," said Ben Sternberg, a researcher in the University of Arizona's Compressed Air Energy Storage, or CAES, program. "The crucial issue now is finding economical ways to store energy for large-scale use, either home-by-home over the entire country, or utility scale."

Batteries have traditionally been used to store energy, but they're expensive, have a limited number of charge-discharge cycles, and pose resource and disposal problems.

The CAES group is developing cost-competitive energy-storage systems based on compressing air and storing it in man-made containers or below ground in natural reservoirs.

When solar panels shut down and wind generators stop spinning, the compressed air is heated slightly and released to drive turbines that generate electricity. The compressed air also can be released directly to drive mechanical systems without being converted to electricity.

Although CAES researchers are putting a high-tech spin on compressed air storage with modern materials, sophisticated remote-sensing gear and computer analysis, it's a simple, well-tested and mature technology. Urban systems were built in European cities as early as 1870, and by the 1890s were storing and delivering power to factories and homes.

The UA's CAES research team is working on three projects that range from systems that might power a single air conditioner or refrigerator to building-wide systems, as well as massive storage sites that could store utility-scale energy.

In the smallest of these systems, a low-speed motor uses some or all of the power from a solar panel or wind generator to pump air into a tank similar to those used for propane or oxygen. The energy is later used to power an appliance, such as a refrigerator. Several of these units could be linked together to power a home.
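
For a sense of scale (a back-of-the-envelope estimate of my own, not a figure from the UA team), the ideal recoverable energy for a tank of volume V holding air at pressure P against ambient pressure P0 is the isothermal work W = P * V * ln(P / P0):

from math import log

P0 = 101_325.0     # ambient pressure, Pa
P = 20e6           # assumed storage pressure, Pa (about 200 atmospheres)
V = 0.05           # assumed tank volume, m^3 (50 liters)

work_joules = P * V * log(P / P0)            # ideal isothermal limit
print(f"stored energy ~ {work_joules / 3.6e6:.1f} kWh")   # ~1.5 kWh

That is roughly a day of refrigerator use in the ideal limit; real round-trip efficiency would be lower, which is why the group's work on reliability and inexpensive components matters.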

"We hope to develop a single-appliance system that could be built for less than $1,000," said Dominique Villela, a doctoral student in materials science and engineering. "We've had visitors from Alaska, whose villages depend on energy generated from propane. This is very expensive. Systems like ours could save them a lot of money by using solar or wind power for refrigerators or lights, for instance."

These systems could also save lives and taxpayer dollars in combat zones by producing energy on site. A recent segment on NPR's Science Friday program featured military efforts to conserve energy and switch to renewable energy fuels. Program guests noted that a gallon of gas that cost $2.35 in the U.S. could cost between $200 and $400 by the time it reaches outposts in Afghanistan. They also noted that more than 1,000 Americans have been killed moving fuel since that war began.

Villela said the group's research is now focusing on reliability issues, scaling the system to provide more energy storage, and adapting the system to less expensive materials and components.

UA civil engineers are designing hollow structural members that could be used to store compressed air in load-bearing components, such as foundation piles or the frames of buildings and houses.

"The key to our system is that the loads on structural components coming from compressed air are small compared to building loads, such as the weight of the building and wind loads," said George Frantziskonis, a professor of civil engineering and engineering mechanics. "This makes CAES storage in buildings economically and aesthetically feasible."

The larger the building, the more economical the CAES system and the greater the energy cost savings both in the short and long term, Frantziskonis said.

Researchers in the UA's Laboratory for Advanced Subsurface Imaging, or LASI, are developing high-resolution underground imaging systems that can be used to find salt deposits, porous rocks and other natural underground storage reservoirs. These sites could be used to hold large amounts of compressed air to drive utility-scale turbines.

While salt deposits have traditionally been associated with CAES technology, "you don't need a large cavern," said Sternberg, a professor of mining and geological engineering and director of the LASI program. "Rocks that have lots of pores also can provide energy storage. A third option is alluvium in basins, such as those found throughout the Southwest."

All of these possibilities require mapping the Earth's subsurface in high resolution with ground-penetrating electromagnetic waves. "That's where our work comes in because accurate imaging is needed to determine if there are discontinuities in these underground storage areas that will allow too much air to escape," he said.

Sternberg said porosity within the Earth, either from caverns or lots of interconnected pore space, has tremendous potential for low-cost storage that would make renewables cost competitive with fossil fuels.

Recent breakthroughs in the LASI program could help drive exploration and development of these resources. "We're getting data that's an order of magnitude more sensitive than conventional measurements," Sternberg said. "It's a combination of a new approach to collecting data, a new type of antenna array and a very different way of analyzing the data."

Sternberg is anxious to rapidly expand this technology to utility-size exploration. "Right now, so much of our energy is coming from volatile areas of the world, and we've got to overcome that," he said. "Energy security is our biggest risk. That's why this is so pressing. We cannot afford to drag this out and sit on the new developments in energy independence that are being created here in the LASI and CAES programs, as well as in other programs at universities across the country."

(Photo: U. Arizona)

University of Arizona

UF STUDY OF LICE DNA SHOWS HUMANS FIRST WORE CLOTHES 170,000 YEARS AGO

0 comentarios

A new University of Florida study following the evolution of lice shows modern humans started wearing clothes about 170,000 years ago, a technology that enabled them to successfully migrate out of Africa.

Principal investigator David Reed, associate curator of mammals at the Florida Museum of Natural History on the UF campus, studies lice in modern humans to better understand human evolution and migration patterns. His latest five-year study used DNA sequencing to calculate when clothing lice first began to diverge genetically from human head lice.
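
The arithmetic behind such divergence dating is a molecular clock: mutations accumulate at a roughly constant rate along each lineage, so the genetic distance between two lineages grows at twice that rate. A toy sketch in Python (the rate and distance below are illustrative, not values from Reed's study):

def divergence_time(genetic_distance, rate_per_site_per_year):
    """Years since two lineages split, assuming a constant mutation rate.

    Both lineages accumulate changes independently, so the observed
    distance d between them is approximately 2 * rate * time.
    """
    return genetic_distance / (2.0 * rate_per_site_per_year)

# e.g. 1% sequence divergence at 3e-8 substitutions per site per year:
print(f"{divergence_time(0.01, 3e-8):,.0f} years")   # about 167,000 years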

Funded by the National Science Foundation, the study is available online and appears in this month’s print edition of Molecular Biology and Evolution.

“We wanted to find another method for pinpointing when humans might have first started wearing clothing,” Reed said. “Because they are so well adapted to clothing, we know that body lice or clothing lice almost certainly didn’t exist until clothing came about in humans.”

The data shows modern humans started wearing clothes about 70,000 years before migrating into colder climates and higher latitudes, which began about 100,000 years ago. This date would be virtually impossible to determine using archaeological data because early clothing would not survive in archaeological sites.

The study also shows humans started wearing clothes well after they lost body hair, which genetic skin-coloration research pinpoints at about 1 million years ago, meaning humans spent a considerable amount of time without body hair and without clothing, Reed said.

“It’s interesting to think humans were able to survive in Africa for hundreds of thousands of years without clothing and without body hair, and that it wasn’t until they had clothing that modern humans were then moving out of Africa into other parts of the world,” Reed said.

Lice are studied because unlike most other parasites, they are stranded on lineages of hosts over long periods of evolutionary time. The relationship allows scientists to learn about evolutionary changes in the host based on changes in the parasite.

Applying unique data sets from lice to human evolution is an approach developed only within the last 20 years, and it provides information that could be used in medicine, evolutionary biology, ecology or any number of fields, Reed said.

“It gives the opportunity to study host-switching and invading new hosts — behaviors seen in emerging infectious diseases that affect humans,” Reed said.

A study of clothing lice in 2003 led by Mark Stoneking, a geneticist at the Max Planck Institute in Leipzig, Germany, estimated humans first began wearing clothes about 107,000 years ago. But the UF research includes new data and calculation methods better suited for the question.

“The new result from this lice study is an unexpectedly early date for clothing, much older than the earliest solid archaeological evidence, but it makes sense,” said Ian Gilligan, lecturer in the School of Archaeology and Anthropology at The Australian National University. “It means modern humans probably started wearing clothes on a regular basis to keep warm when they were first exposed to Ice Age conditions.”

The last Ice Age occurred about 120,000 years ago, but the study’s date suggests humans started wearing clothes in the preceding Ice Age 180,000 years ago, according to temperature estimates from ice core studies, Gilligan said. Modern humans first appeared about 200,000 years ago.

Because archaic hominins did not leave descendants of clothing lice for sampling, the study does not explore the possibility archaic hominins outside of Africa were clothed in some fashion 800,000 years ago. But while archaic humans were able to survive for many generations outside Africa, only modern humans persisted there until the present.

“The things that may have made us much more successful in that endeavor hundreds of thousands of years later were technologies like the controlled use of fire, the ability to use clothing, new hunting strategies and new stone tools,” Reed said.

(Photo: Jeff Gage, Florida Museum of Natural History)

University of Florida

ATTENTION LADIES AND GENTLEMEN: COURTSHIP AFFECTS GENE EXPRESSION

0 comentarios
Scientists from Texas have taken an important step toward understanding human mating behavior by showing that certain genes become activated in fruit flies when they interact with the opposite sex.

This research, published in the January 2011 issue of the journal GENETICS (http://www.genetics.org), shows that courtship behaviors may be far more influenced by genetics than previously thought. In addition, understanding why and how these genes become activated within social contexts may also lead to insight into disorders such as autism.

"Be careful who you interact with," said Ginger E. Carney, PhD, co-author of the research study from the Department of Biology at Texas A&M University in College Station. "The choice may affect your physiology, behavior and health in unexpected ways."

To make this discovery, the scientists compared gene expression profiles in males that courted females, males that interacted with other males, and males that did not interact with other flies. The investigators identified a common set of genes that respond to the presence of either sex. They also discovered that there are other genes that are only affected by being placed with members of a particular sex, either male or female. Researchers then tested mutant flies that are missing some of these socially responsive genes and confirmed that these particular genes are important for behavior. The scientists predict that analyzing additional similar genes will give further insight into genes and neural signaling pathways that influence reproductive and other behavioral interactions.
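
In outline, that comparison is a matter of set logic over the lists of responsive genes; a toy sketch with invented gene names:

# Genes whose expression changed in each social context (names invented).
responsive_with_females = {"gene_a", "gene_b", "gene_c", "gene_d"}
responsive_with_males = {"gene_a", "gene_b", "gene_e"}

common = responsive_with_females & responsive_with_males   # respond to either sex
female_only = responsive_with_females - responsive_with_males
male_only = responsive_with_males - responsive_with_females
print(common, female_only, male_only)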

"This study shows that we're closing in on the complex genetic machinery that affects social interactions," said Mark Johnston, Editor-in-Chief of the journal GENETICS. "Once similar genes are identified in humans, the implications will be enormous, as it could bring new understanding of, and perhaps even treatments for, a vast range of disorders related to social behavior."

Genetics

Friday, January 21, 2011

LONGEVITY UNLIKELY TO HAVE AIDED EARLY MODERN HUMANS

0 comentarios
Life expectancy was probably the same for early modern and late archaic humans and did not factor in the extinction of Neanderthals, suggests a new study by a Washington University in St. Louis anthropologist.

Erik Trinkaus, PhD, Professor of Anthropology in Arts & Sciences, examined the fossil record to assess adult mortality for both groups, which co-existed in different regions for roughly 150,000 years. Trinkaus found that the proportions of 20- to 40-year-old adults versus adults older than 40 were about the same for early modern humans and Neanderthals.

This similar age distribution, says Trinkaus, reflects similar patterns of adult mortality and treatment of the elderly in the context of highly mobile hunting-and-gathering human populations.

The study, “Late Pleistocene Adult Mortality Patterns and Modern Human Establishment,” was published the week of Jan. 10, 2011, in the Proceedings of the National Academy of Sciences.

Older individuals are rarely found among the remains of late archaic humans, which has prompted some researchers to propose that Neanderthals had an inherently shorter life expectancy, contributing to their demise.

However, if early modern humans did have a demographic advantage, Trinkaus argues, it was more likely due to high fertility rates and lower infant mortality.

“If indeed there was a demographic advantage for early modern humans, at least during transitional phases of Late Pleistocene human evolution, it must have been the result of increased fertility and/or reduced immature mortality,” writes Trinkaus in the paper’s conclusion. “Neither adult longevity nor proposed modest shifts in developmental rates are likely to have played a role in this demographic transition.”

Washington University in St. Louis

MOUNTAIN GLACIER MELT TO CONTRIBUTE 12 CENTIMETRES TO WORLD SEA-LEVEL INCREASES BY 2100

0 comentarios

Melt off from small mountain glaciers and ice caps will contribute about 12 centimetres to world sea-level increases by 2100, according to UBC research published in Nature Geoscience.

The largest contributors to projected global sea-level increases are glaciers in Arctic Canada, Alaska and landmass-bound glaciers in the Antarctic. Glaciers in the European Alps, New Zealand, the Caucasus, Western Canada and the Western United States--though small absolute contributors to global sea-level increases--are projected to lose more than 50 per cent of their current ice volume.

The study modelled volume loss and melt off from 120,000 mountain glaciers and ice caps, and is one of the first to provide detailed projections by region. Currently, melt from smaller mountain glaciers and ice caps is responsible for a disproportionately large portion of sea-level increases, even though they contain less than one per cent of all water on Earth bound in glacier ice.
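
The bookkeeping that converts melted ice into sea-level rise is simple: the resulting meltwater volume spread over the global ocean gives the rise. A sketch with round numbers (the ice volume below is chosen to match the 12-centimetre headline figure, not taken from the study's regional tables):

OCEAN_AREA_M2 = 3.62e14   # global ocean surface area, m^2
RHO_ICE = 900.0           # density of glacier ice, kg/m^3
RHO_WATER = 1000.0        # density of fresh water, kg/m^3

def sea_level_rise_cm(ice_volume_km3):
    """Sea-level equivalent, in cm, of melting a given volume of glacier ice."""
    water_volume_m3 = ice_volume_km3 * 1e9 * RHO_ICE / RHO_WATER
    return water_volume_m3 / OCEAN_AREA_M2 * 100.0

print(f"{sea_level_rise_cm(48_000):.1f} cm")   # about 12 cm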

“There is a lot of focus on the large ice sheets but very few global scale studies quantifying how much melt to expect from these smaller glaciers that make up about 40 percent of the entire sea-level rise that we observe right now,” says Valentina Radic, a postdoctoral researcher with the Department of Earth and Ocean Sciences and lead author of the study.

Increases in sea levels caused by the melting of the Greenland and Antarctic ice sheets, and the thermal expansion of water, are excluded from the results.

Radic and colleague Regine Hock at the University of Alaska, Fairbanks, modelled future glacier melt based on temperature and precipitation projections from 10 global climate models used by the Intergovernmental Panel on Climate Change.

“While the overall sea level increase projections in our study are on par with IPCC studies, our results are more detailed and regionally resolved,” says Radic. “This allows us to get a better picture of projected regional ice volume change and potential impacts on local water supplies, and changes in glacier size distribution.”

Global projections of sea-level rise from mountain glacier and ice cap melt from the IPCC range between seven and 17 centimetres by 2100. Radic’s projections are only slightly higher, in the range of seven to 18 centimetres.

Radic’s projections don’t include glacier calving--the production of icebergs. Calving of tide-water glaciers may account for 30 per cent to 40 per cent of their total mass loss.

“Incorporating calving into the models of glacier mass changes on regional and global scale is still a challenge and a major task for future work,” says Radic.

However, the new projections include detailed projection of melt off from small glaciers surrounding the Greenland and Antarctic ice sheets, which have so far been excluded from, or only estimated in, global assessments.

(Photo: Wikipedia)

The University of British Columbia

AMMONITES DINED ON PLANKTON

0 comentarios

Powerful synchrotron scans of Baculites fossils found on American Museum of Natural History expeditions to the Great Plains suggest that the extinct group of marine invertebrates to which they belong, the ammonites, had jaws and teeth adapted for eating small prey floating in the water.

One ammonite also provided direct evidence of a planktonic diet because it died with its last meal in its mouth—tiny larval snails and crustacean bits. The detailed description of the internal structure of ammonites, published by a Franco-American research team this week in Science, also provides new insights into why ammonites became extinct 65.5 million years ago, when an asteroid impact led to the demise of the world's nonavian dinosaurs and much of the plankton.

"I was astonished when I saw the teeth for the first time, and when I found the tiny plankton in the mouth," says first author Isabelle Kruta of the Département Histoire de la Terre, Muséum National d'Histoire Naturelle in Paris, France. Kruta began the project as an Annette Kade fellow at the American Museum of Natural History. "For the first time we could observe these delicate, exceptionally well-preserved structures and obtain information on the ecology of these enigmatic animals."

"When you take into consideration the large lower jaws of ammonites in combination with this new information about their teeth, you realize that these animals must have been feeding in a different way from modern carrion-eating Nautilus," says Neil Landman, curator in the Division of Paleontology at the American Museum of Natural History. "Ammonites have a surprisingly large lower jaw with slender teeth, but the effect is opposite to that of the wolf threatening to eat Little Red Riding Hood. Here, the bigger mouth facilitates feeding on smaller prey."

Ammonites are extinct relatives of the squid and octopus; the Nautilus is similar in appearance to many ammonites but is a more distant relative. Ammonites appeared about 400 million years ago (the Early Devonian) and experienced an explosive radiation in the early Jurassic. In fact, ammonites became such an abundant and diverse part of the marine fauna that they are, for paleontologists, classic "index" fossils used to determine the relative ages of rocks.

Until recently, the role of ammonites in the marine food web was unknown, although some previous research by Landman and colleagues on the shape of the jaw, as well as a 1992 paper by Russian scientists that reconstructed some of the internal structures by slicing fossils, provided clues. The current study used synchrotron X-ray microtomography to digitally reconstruct the mouths of three fossils found in South Dakota. The three-dimensional reconstructions are so high in quality that the jaws and teeth are revealed in their complete form.

"X-ray synchrotron microtomography is currently the most sensitive technique for non-destructive investigation of the internal structure of fossils," says Paul Tafforeau of the European Synchrotron Radiation Facility. "For this study, we tested specimens after an initial fossil preparation and scanning on more conventional machines by Kruta failed to provide enough detail. The synchrotron results were so impressive that we scanned all available samples, discovering nearly each time radula, and, for one of them, plankton."

"The plankton in the Baculites jaws is the first direct evidence of the trophic habits of the uncoiled ammonites and helps us understand the evolutionary success of these ammonites in the Cretaceous," says Fabrizio Cecca of the Laboratoire de Paléontologie, Université Pierre et Marie Curie in Paris.

Ammonite jaws lie just inside the body chamber. The research team's new scans of Baculites, a straight ammonite found worldwide, confirm older research showing that ammonites had multiple cusps on their radula, a kind of tongue covered by teeth that is typical of mollusks. The radula can now be seen in exquisite detail: the tallest cusp is 2 mm high, tooth shape varies from saber to comb-like, and the teeth are very slender. The jaw is typical of the group of ammonites (the aptychophorans) to which Baculites belongs. In addition, one specimen has a tiny snail and three tiny crustaceans in its mouth; one of the crustaceans is even cut in two pieces. Because these planktonic fossils are not found anywhere else on the specimen, the team thinks that the specimen died while eating its last meal rather than being scavenged by these organisms after death.

"Our research suggests several things. First, the radiation of aptychophoran ammonites might be associated with the radiation of plankton during the Early Jurassic," says Landman. "In addition, plankton were severely hit at the Cretaceous-Tertiary boundary, and the loss of their food source probably contributed to the extinction of ammonites. This research also has implications for understanding carbon cycling during this time."

Isabelle Rouget, Laboratoire de Paléontologie, Université Pierre et Marie Curie in Paris, agrees, adding that "we now realize that ammonites occupied a different niche in the trophic web than we previously thought."

(Photo: A. Lethiers, UPMC)

American Museum of Natural History

Thursday, January 20, 2011

FROM DUSTY PUNCH CARDS, NEW INSIGHTS INTO LINK BETWEEN CHOLESTEROL AND HEART DISEASE

0 comentarios

A stack of punch cards from a landmark study published in 1966, and the legwork to track down the study’s participants years later, have yielded the longest analysis of the effects of lipoproteins on coronary heart disease.

The study, published in a recent issue of the journal Atherosclerosis, tracked almost 1,900 people over a 29-year period, which is nearly three times longer than other studies that examine the link between different sizes of high-density lipoprotein particles and heart disease.

It found that an increase in larger high-density lipoprotein particles decreased a subject’s risk of heart disease. The research also underscores the value of looking to the past to advance science.

“Often we think only of designing new studies with the latest technologies, but there are treasures buried in our past,” says study author Paul Williams of the U.S. Department of Energy’s Lawrence Berkeley National Laboratory.

Lipoproteins are fat molecules that carry cholesterol in the blood. Cholesterol is divided into high-density lipoprotein, the so-called good cholesterol, and low-density lipoprotein, the bad cholesterol.

That’s common knowledge today. But it was a groundbreaking and controversial notion in the 1950s, when Berkeley Lab’s John Gofman used an analytic ultracentrifuge to separate and measure the different lipoproteins. He was the first to propose that high-density and low-density lipoprotein particles play a role in heart disease.

His research was met with skepticism, however, so Gofman began a prospective study of lipoproteins in a group of 1,905 employees at Lawrence Livermore National Laboratory between 1954 and 1956. After ten years, there were 38 new cases of heart disease. In 1966, he reported that men who developed heart disease had lower levels of HDL2 (the larger high-density lipoprotein particles) and HDL3 (the smaller high-density lipoprotein particles).

It would take several more years for Gofman’s work to gain currency in the scientific community. Gofman left lipoprotein research in the 1960s to pioneer the study of the biological effects of low doses of radiation. He died in 2007.

His Livermore cohort study collected dust until 1988, when Williams discovered the study’s punch cards at the University of California, Berkeley’s Donner Hall. Realizing he had found an epidemiological goldmine, Williams verified the cards’ authenticity by examining logbooks. He also found an old punch card machine to extract their data. Then, with the help of students and research assistants, he located and contacted 97 percent of the people in Gofman’s study over the next nine years.

“Often, all we had to go on was an address on a street that no longer existed,” says Williams, a staff scientist in Berkeley Lab’s Life Sciences Division. “Women had changed their names, employees had left or retired and moved, and many had died. However, by telephoning neighbors and coworkers, we were able to track down all but a few.”

Medical records were obtained and reviewed by a physician, Daniel Feldman, who is the study’s co-author.

Their 29-year follow-up uncovered 363 cases of coronary heart disease. They found that both HDL2 and HDL3 lowered heart disease risk, and that a one-milligram per milliliter increase in HDL2 produced a significantly larger reduction in coronary heart disease risk than a one-milligram per milliliter increase in HDL3. Their follow-up also buttressed Gofman’s insights from 1966.
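
A minimal sketch, not the study's actual method, of how one might test whether HDL2 and HDL3 differ in per-unit protection: fit a logistic regression of event status on both subfractions and compare coefficients. The data here are simulated purely for illustration:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1900                        # a cohort of similar size
hdl2 = rng.normal(40, 10, n)    # hypothetical HDL2 levels
hdl3 = rng.normal(35, 8, n)     # hypothetical HDL3 levels
# Simulate events so that HDL2 is more protective per unit than HDL3:
log_odds = -1.5 - 0.06 * (hdl2 - 40) - 0.02 * (hdl3 - 35)
events = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([hdl2, hdl3]))
fit = sm.Logit(events, X).fit(disp=False)
print(fit.params)  # a more negative HDL2 coefficient = larger risk reduction per unit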

“Gofman’s original conclusion that ischemic heart disease is inversely related to both HDL2 and HDL3 was upheld in the current analyses,” says Williams, who hopes to complete the 55-year follow-up of this cohort.

(Photo: LBNL)

Lawrence Berkeley National Laboratory

WHAT TRIGGERS MASS EXTINCTIONS? STUDY SHOWS HOW INVASIVE SPECIES STOP NEW LIFE

0 comentarios

An influx of invasive species can stop the dominant natural process of new species formation and trigger mass extinction events, according to research results published in the journal PLoS ONE.

The study of the collapse of Earth's marine life 378 to 375 million years ago suggests that the planet's current ecosystems, which are struggling with biodiversity loss, could meet a similar fate.

Although Earth has experienced five major mass extinction events, the environmental crash during the Late Devonian was unlike any other in the planet's history.

The actual extinction rate wasn't higher than the natural rate of species loss, but very few new species arose.

"We refer to the Late Devonian as a mass extinction, but it was actually a biodiversity crisis," said Alycia Stigall, a scientist at Ohio University and author of the PLoS ONE paper.

"This research significantly contributes to our understanding of species invasions from a deep-time perspective," said Lisa Boush, program director in the National Science Foundation (NSF)'s Division of Earth Sciences, which funded the research.

"The knowledge is critical to determining the cause and extent of mass extinctions through time, especially the five biggest biodiversity crises in the history of life on Earth. It provides an important perspective on our current biodiversity crises."

The research suggests that the typical method by which new species originate--vicariance--was absent during this ancient phase of Earth's history, and that its absence could be to blame for the mass extinction.

Vicariance occurs when a population becomes geographically divided by a natural, long-term event, such as the formation of a mountain range or a new river channel, and evolves into different species.

New species also can originate through dispersal, which occurs when a subset of a population moves to a new location.

In a departure from previous studies, Stigall used phylogenetic analysis, which draws on an understanding of the tree of evolutionary relationships to examine how individual speciation events occurred.

She focused on one bivalve, Leptodesma (Leiopteria), and two brachiopods, Floweria and Schizophoria (Schizophoria), as well as a predatory crustacean, Archaeostraca.

These small, shelled marine animals were some of the most common inhabitants of the Late Devonian oceans, which had the most extensive reef system in Earth's history.

The seas teemed with huge predatory fish such as Dunkleosteus, and smaller life forms such as trilobites and crinoids (sea lilies).

The first forests and terrestrial ecosystems appeared during this time; amphibians began to walk on land.

As sea levels rose and the continents closed in to form connected land masses, however, some species gained access to environments they hadn't inhabited before.

The hardiest of these invasive species that could thrive on a variety of food sources and in new climates became dominant, wiping out more locally adapted species.

The invasive species were so prolific at this time that it became difficult for many new species to arise.

"The main mode of speciation that occurs in the geological record is shut down during the Devonian," said Stigall. "It just stops in its tracks."

Of the species Stigall studied, most lost substantial diversity during the Late Devonian, and one, Floweria, became extinct.

The entire marine ecosystem suffered a major collapse. Reef-forming corals were decimated and reefs did not appear on Earth again for 100 million years.

The giant fishes, trilobites, sponges and brachiopods also declined dramatically, while organisms on land had much higher survival rates.

The study is relevant for the current biodiversity crisis, Stigall said, as human activity has introduced a high number of invasive species into new ecosystems.

In addition, the modern extinction rate exceeds the rate of ancient extinction events, including the event that wiped out the dinosaurs 65 million years ago.

"Even if you can stop habitat loss, the fact that we've moved all these invasive species around the planet will take a long time to recover from because the high level of invasions has suppressed the speciation rate substantially," Stigall said.

Maintaining Earth's ecosystems, she suggests, would be helped by focusing efforts and resources on protecting the generation of new species.

"The more we know about this process," Stigall said, "the more we will understand how to best preserve biodiversity."

(Photo: Ohio University)

National Science Foundation

TRUST YOUR GUT... BUT ONLY SOMETIMES

0 comentarios
When faced with decisions, we often follow our intuition—our self-described “gut feelings”—without understanding why. Our ability to make hunch decisions varies considerably: Intuition can either be a useful ally or it can lead to costly and dangerous mistakes. A new study published in Psychological Science, a journal of the Association for Psychological Science, finds that the trustworthiness of our intuition is really influenced by what is happening physically in our bodies.

“We often talk about intuition coming from the body—following our gut instincts and trusting our hearts,” says Barnaby D. Dunn, of the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., first author of the new paper. What isn’t certain is whether we should follow, or be suspicious of, what our bodies are telling us. And do we differ in the influence that our gut feelings have on how we make decisions?

To investigate how different bodily reactions can influence decision making, Dunn and his co-authors asked study participants to try to learn how to win at a card game they had never played before. The game was designed so that there was no obvious strategy to follow and instead players had to follow their hunches. While playing the game, each participant wore a heart rate monitor and a sensor that measured the amount of sweat on their fingertips.

Most players gradually found a way to win at the card game and they reported having relied on intuition rather than reason. Subtle changes in the players’ heart rates and sweat responses affected how quickly they learned to make the best choices during the game.

Interestingly, the quality of the advice that people’s bodies gave them varied. Some people’s gut feelings were spot on, meaning they mastered the card game quickly. Other people’s bodies told them exactly the wrong moves to make, so they learned slowly or never found a way to win.

Dunn and his co-authors found this link between gut feelings and intuitive decision making to be stronger in people who were more aware of their own heartbeat. So for some individuals being able to ‘listen to their heart’ helped them make wise choices, whereas for others it led to costly mistakes.

“What happens in our bodies really does appear to influence what goes on in our minds. We should be careful about following these gut instincts, however, as sometimes they help and sometimes they hinder our decision making,” says Dunn.

Association for Psychological Science

Wednesday, January 19, 2011

WATER PURIFICATION MADE SIMPLER

0 comentarios

Inside a growing number of homes in the developing world, sand and biological organisms are collaborating to decontaminate drinking water.

Arranged in barrels of concrete or plastic, these biosand water filtration systems (BSFs) remove 95 to 99 percent of the bacteria, viruses, worms and particles contained in rain or surface water. A layer of microorganisms at the top of the sand bed consumes biological and other organic contaminants, while the sand below removes contaminants that cause cloudiness and odor.
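
Water engineers usually state such performance as log-reduction values rather than percentages; the conversion is one line of arithmetic, sketched here:

import math

def log_reduction(removal_fraction):
    """LRV = -log10 of the fraction of organisms that get through."""
    return -math.log10(1.0 - removal_fraction)

for removal in (0.95, 0.99):
    print(f"{removal:.0%} removal = {log_reduction(removal):.1f}-log reduction")
# 95% removal is a 1.3-log reduction; 99% is a 2.0-log reduction.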

A BSF can produce several dozen liters of clean water in an hour. But it can weigh several hundred pounds and cost up to $30, an expense some families in developing countries cannot afford.

Kristen Jellison and her students are trying to build a BSF that is smaller than the standard system, but just as effective.

“Smaller, lighter BSFs,” says Jellison, an associate professor of civil and environmental engineering, “would be cheaper, easier to transport and available to a broader global market. Preliminary research has shown the potential for smaller systems to remove most disease-causing organisms, except possibly viruses.”

Jellison, who is affiliated with the university’s STEPS (Science, Technology, Environment, Policy and Society) initiative, has devoted most of her career to improving drinking water. As a co-adviser to Lehigh’s chapter of Engineers Without Borders, she helped lead efforts to design and build a 20,000-gallon water-storage tank and chlorination system in Pueblo Nuevo, Honduras.

With support from NSF and the Philadelphia Water Department, she has spent five years studying the parasite Cryptosporidium parvum and its transport and fate in water bodies. The parasite is found in multiple hosts, is difficult to eradicate, and can be deadly to people with compromised immune systems.

In an effort to identify possible sources of Cryptosporidium contamination in the Philadelphia watershed, Jellison studies the DNA of various species using a technique called polymerase chain reaction (PCR). She also studies the impact on Cryptosporidium of biofilms, the slimy layers of microorganisms that form on rocks, pipes and other surfaces in water.

Jellison’s group is conducting experiments on BSFs of various sizes, including systems that fit inside two- and five-gallon plastic pails. (The typical BSF is 3 feet high.) The group will change the depth of the sand column, add rusty nails to several pails (in an effort to increase virus removal), and alter other parameters.

“BSFs were developed in the 1980s,” says Jellison. “This is the most comprehensive study to date to characterize the efficiency of different filter types.”

(Photo: Lehigh U.)

Lehigh University
