Monday, May 31, 2010



Astrophysicists at UC Santa Barbara are the first scientists to identify two white dwarf stars in an eclipsing binary system, allowing for the first direct radius measurement of a rare white dwarf composed of pure helium. The results will be published in the Astrophysical Journal Letters. These observations are the first to confirm a theory about a certain type of white dwarf star.

The story began with observations by Justin Steinfadt, a UCSB physics graduate student who has been monitoring white dwarf stars as part of his Ph.D. thesis with Lars Bildsten, a professor and permanent member of UCSB's Kavli Institute for Theoretical Physics, and Steve Howell, an astronomer at the National Optical Astronomy Observatory (NOAO) in Tucson, Ariz.

Brief eclipses were discovered during observations of the star NLTT 11748 with the Faulkes Telescope North of the Las Cumbres Observatory Global Telescope (LCOGT), a UCSB-affiliated institution. NLTT 11748 is one of the few very low-mass, helium-core white dwarfs under careful study for their brightness variations. Rapid snapshots of the star (about one exposure every minute) found a few consecutive images where the star was slightly fainter. Steinfadt quickly realized the importance of this unexpected discovery. "We've been looking at a lot of stars, but I still think we got lucky!" he said.

Avi Shporer, a postdoctoral fellow at UCSB and LCOGT, assisted with the observations and quickly brought his expertise to the new discovery. "We knew something was unusual, especially as we confirmed these dips the next night," Shporer said. The scientists observed three-minute eclipses of the binary stars twice during the 5.6-hour orbit.

The excitement of the discovery and the need to confirm it rapidly led to the use of the 10-meter Keck Telescope, located on Mauna Kea in Hawaii, just five weeks after the first observation. The team also brought in David Kaplan, a Hubble Fellow and KITP postdoctoral fellow. Bildsten and Kaplan arranged for use of the Keck by swapping time they had reserved for another project with Geoff Marcy at UC Berkeley.

During that night, the scientists were able to measure the changing Doppler shift of the star NLTT 11748 as it orbited its faint, but more massive, white dwarf companion. "It was amazing to witness the velocity of this star change in just a few minutes," said Kaplan, who was present at the Keck telescope during the observations.
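The Doppler measurement described above converts a spectral line's wavelength shift into a line-of-sight velocity. A minimal sketch of that conversion, using the non-relativistic approximation and illustrative wavelengths (not the actual NLTT 11748 spectra):

```python
# Non-relativistic Doppler shift: v = c * (delta_lambda / lambda_rest).
# The wavelengths below are illustrative, not the measured NLTT 11748 data.

C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_observed_nm, lambda_rest_nm):
    """Radial velocity (km/s) from an observed spectral-line shift."""
    return C_KM_S * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# Example: an H-alpha line (rest wavelength 656.281 nm) observed shifted
# redward by 0.6 nm corresponds to a recession of roughly 270 km/s.
v = radial_velocity(656.881, 656.281)
```

Repeating such a measurement over the 5.6-hour orbit traces out the velocity curve of the star as it swings around its companion.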

These observations led to the confirmation of an important theory about white dwarf stars. Stars end their lives in many ways. "The formation of such a binary system containing an extremely low mass helium white dwarf has to be the result of interactions and mass loss between the two original stars," said Howell. White dwarf stars are the very dense remnants of stars like the sun, with dimensions comparable to the earth. A star becomes a white dwarf when it has exhausted its nuclear fuel and all that remains is the dense inner core, typically made of carbon and oxygen.

One of the stars in the newly discovered binary is a relatively rare helium-core white dwarf with a mass only 10 to 20 percent of that of the sun. The existence of these special stars has been known for more than 20 years. Theoretical work predicted that these stars burn hotter and are larger than ordinary white dwarfs. Until now, their size had never been measured. The observations of the star NLTT 11748 by this research group have yielded the first direct radius measurement of an unusual white dwarf that confirms this theory.
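A rough sense of how eclipse timing yields a radius comes from simple geometry: in an edge-on, circular orbit, the eclipsed star's disk is crossed at the relative orbital speed, so the sum of the two radii is about v_rel times half the eclipse duration. The sketch below uses the system parameters quoted in this article (5.6-hour period, roughly 3-minute eclipses) plus an assumed total mass near 0.85 solar masses, which is not stated here, and ignores inclination and ingress/egress details:

```python
import math

# Back-of-the-envelope eclipse geometry: R1 + R2 ~ v_rel * t_eclipse / 2.
# Total mass of 0.85 M_sun is an assumption for illustration only.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.96e8         # solar radius, m

def radius_sum_estimate(total_mass_sun, period_hours, eclipse_minutes):
    p = period_hours * 3600.0
    # Orbital separation from Kepler's third law
    a = (G * total_mass_sun * M_SUN * p**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    v_rel = 2 * math.pi * a / p                  # relative orbital speed, m/s
    return v_rel * eclipse_minutes * 60.0 / 2.0  # metres

# A few hundredths of a solar radius, i.e. comparable to Earth's size
r_sum = radius_sum_estimate(0.85, 5.6, 3.0) / R_SUN
```

The result, a few percent of the Sun's radius, is consistent with the article's description of white dwarfs as having dimensions comparable to the Earth.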

The other star in the binary is also a white dwarf, albeit a more ordinary one, composed of mostly carbon and oxygen with about 70 percent of the mass of the sun. This star is more massive and also much smaller than the other white dwarf. The light it gives off is 30 times fainter than that of its partner star in the binary.

Bildsten credits the scientific collaborations at UCSB for the success of this work, noting that the original team was expanded to include KITP, the Physics Department, and LCOGT to quickly respond to the new discovery.

"A particularly intriguing possibility to ponder is what will happen in 6 to 10 billion years," said Bildsten. "This binary is emitting gravitational waves at a rate that will force the two white dwarfs to make contact. What happens then is anybody's guess."
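Bildsten's 6-to-10-billion-year figure can be roughly reproduced from the standard gravitational-wave inspiral timescale for a circular binary (Peters 1964). A back-of-the-envelope sketch, using masses and period representative of the figures quoted earlier in the article:

```python
import math

# Inspiral time for a circular binary losing energy to gravitational waves:
# t = (5/256) c^5 a^4 / (G^3 m1 m2 (m1 + m2))   (Peters 1964).
# Masses of 0.15 and 0.7 M_sun and a 5.6-hour period are representative
# values from the text, not the paper's precise parameters.

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
YEAR = 3.156e7         # s

def merger_time_years(m1_sun, m2_sun, period_hours):
    m1, m2 = m1_sun * M_SUN, m2_sun * M_SUN
    m_total = m1 + m2
    p = period_hours * 3600.0
    # Orbital separation from Kepler's third law
    a = (G * m_total * p**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    return (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * m_total) / YEAR

t = merger_time_years(0.15, 0.7, 5.6)  # on the order of ten billion years
```

The estimate lands in the billions of years, matching the quoted window for when the two white dwarfs will make contact.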

(Photo: Steve Howell/Pete Marenfeld/NOAO)

University of California, Santa Barbara


Sleeping newborns are better learners than previously thought, says a University of Florida researcher about a study that is the first of its type. The study could lead to identifying infants at risk for developmental disorders such as autism and dyslexia.

“We found a basic form of learning in sleeping newborns, a type of learning that may not be seen in sleeping adults,” said Dana Byrd, a research affiliate in psychology at UF who collaborated with a team of scientists.

The findings give valuable information about how it is that newborns are able to learn so quickly from the world, when they sleep for 16 to 18 hours a day, Byrd said. “Sleeping newborns are better learners, better ‘data sponges’ than we knew,” she said.

In order to understand how newborns learn while in their most frequent state, Byrd and her colleagues tested the learning abilities of sleeping newborns by repeating tones that were followed by a gentle puff of air to the eyelids. After about 20 minutes, 24 of the 26 babies squeezed their eyelids together when the tone was sounded without the puff of air.

“This methodology opens up research areas into potentially detecting high risk populations, those who show abnormalities in the neural systems underlying this form of learning,” she said. “These would include siblings of individuals with autism and siblings of those with dyslexia.”

The research team’s paper, published online this week in Proceedings of the National Academy of Sciences, describes the results of their experiment with the 1- or 2-day-old infants, comparing them with a control group using EEG and video recordings. The brain waves of the 24 infants were found to change, providing a neural measurement of memory updating.

“While past studies find this type of learning can occur in infants who are awake, this is the first study to document it in their most frequent state, while they are asleep,” Byrd said. “Since newborns sleep so much of the time, it is important that they not only take in information but use the information in such a way to respond appropriately.”

Not only did the newborns show they can learn to give this reflex in response to the simple tone, but they gave the response at the right time, she said.

Learned eyelid movement reflects the normal functioning of the circuitry in the cerebellum, a neural structure at the base of the brain. This study’s method potentially offers a unique non-invasive tool for early identification of infants with atypical cerebellar structure, who are potentially at risk for a range of developmental disorders, including autism and dyslexia, she said.

The capacity of infants to learn during sleep contrasts with some researchers’ stance that learning new material does not take place in sleeping adults, Byrd said.

The immature nature of sleep patterns in infants could help explain why, she said.

“Newborn infants’ sleep patterns are quite different than those of older children or adults in that they show more active sleep where heart and breathing rates are very changeable,” she said. “It may be this sleep state is more amenable to experiencing the world in a way that facilitates learning.”

Another factor is that infants’ brains have greater neural plasticity, which is the ability for the neural connections to be changed, Byrd said. “Newborns may be very adaptive to learning in general simply because their brains have increased plasticity, increased propensity to be changed by experience,” she said.

University of Florida



The ancestor of all hammerhead sharks probably appeared abruptly in Earth's oceans about 20 million years ago and was as big as some contemporary hammerheads, according to a new study led by the University of Colorado at Boulder.

But once the hammerhead evolved, it underwent divergent evolution in different directions, with some species becoming larger, some smaller, and the distinctive hammer-like head of the fish changing in size and shape, said CU-Boulder Professor Andrew Martin of the ecology and evolutionary biology department.

Sporting wide, flattened heads known as cephalofoils with eyeballs bulging at each end, hammerhead sharks are among the most recognizable fish in the world. The bizarre creatures range in length from about 3 feet up to 18 feet and cruise warm waters around the world, Martin said.

In the new study, scientists focused on the DNA of eight species of hammerhead sharks to build family "gene trees" going back thousands to millions of generations. In addition to showing that small hammerheads evolved from a large ancestor, the team showed that the "signature" cephalofoils of hammerheads underwent divergent evolution in different lineages over time, likely due to selective environmental pressures, said Martin.

The team used both mitochondrial DNA passed from mother to offspring and nuclear DNA -- which is commonly used in forensic identification -- to track gene mutations. The researchers targeted four mitochondrial genes and three nuclear genes, which they amplified and sequenced for the study.

"These techniques allowed us to see the whole organism evolving through time," Martin said. "Our study indicates the big hammerheads probably evolved into smaller hammerheads, and that smaller hammerheads evolved independently twice."
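The gene-tree construction described above begins with pairwise comparisons of aligned sequences. A toy sketch of that first step, computing uncorrected p-distances and picking the closest pair to join, with invented placeholder sequences rather than real hammerhead data:

```python
# Toy first step of gene-tree building: a pairwise distance matrix over
# aligned DNA sequences. Species names and sequences are invented.

def p_distance(seq_a, seq_b):
    """Fraction of aligned sites that differ between two sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

alignment = {
    "bonnethead": "ACGTACGTACGTACGTACGT",
    "winghead":   "ACGTACGAACGTACTTACGT",
    "scalloped":  "ACGTACGTACGAACGTACCA",
}

# Distance matrix over all pairs; a neighbour-joining or UPGMA step
# would merge the closest pair first when growing the tree.
names = sorted(alignment)
dist = {(x, y): p_distance(alignment[x], alignment[y])
        for i, x in enumerate(names) for y in names[i + 1:]}
closest = min(dist, key=dist.get)
```

Real analyses correct these raw distances for multiple substitutions at the same site and combine several genes, as the team did with four mitochondrial and three nuclear markers.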

A paper on the subject was published in this month's issue of Molecular Phylogenetics and Evolution. Led by former CU-Boulder ecology and evolutionary biology undergraduate student Douglas Lim, co-authors included Martin and University of South Florida researchers Philip Motta and Kyle Mara. Lim is currently a student at the University of Colorado School of Medicine. The National Science Foundation funded the study.

The researchers sampled hammerheads from across the globe -- including the waters of the southeast United States now under siege by the Gulf oil spill -- as well as Australia, Panama, Hawaii, Trinidad and South Africa. Most of the hammerhead DNA was obtained at local markets, where the peddling of sharks and other fish is common practice.

The team sequenced the DNA of the sharks, constructing a "phylogenetic" tree that shows how all of the species are related and when each species originated, said Martin. The hammerhead ancestor probably lived in the Miocene epoch about 20 million years ago.

The team found that two divergent lineages of small sharks about 3 to 4 feet long originated independently at separate times in the past. One of the species, the winghead shark, now lives in the warm waters north of Australia and the other, the bonnethead shark, inhabits the Caribbean and tropical eastern Pacific Ocean.

One reason for the "incredible shrinking shark" over the eons may be the process of neoteny -- the ability of some adult sharks to retain juvenile traits -- or their ability to achieve sexual maturity at earlier ages, Martin said. "As the sharks became smaller, they may have begun investing more energy into reproductive activities instead of growth."

While the cephalofoils appear to provide "lift" to large hammerheads as they cruise through the water -- much like the wing of an airplane -- smaller hammerheads don't appear to gain an advantage in lift, but may gain other attributes. "It looks like they sacrifice locomotion advantages for prey detection and visualization," he said.

Another advantage hammerheads may gain from larger cephalofoils is an increased number of electrical sensors in their flattened noses and heads that can detect extremely weak electrical emissions from molecules associated with potential prey. "Hammerheads appear to be able to triangulate on their prey, which is remarkable," said Martin.

Small sharks are highly variable in the size and shape of their cephalofoils, said Martin. The winghead shark, for example, has a laterally expanded head about half as wide as its roughly 4-foot body is long. At the other end of the spectrum is the bonnethead shark, about 3 feet long, which has the smallest cephalofoil of all hammerhead species -- a protrusion that resembles the head of a shovel, Martin said.

Martin said that hammerheads are an ideal biological study subject in part because of some important similarities to humans. Both have slow growth rates, mature late in life, give live birth and have relatively few offspring. While hammerheads may have a dozen or more pups, other oceanic fish regularly lay millions of eggs. Hammerheads generally live for about 30 years, he said.

While hammerhead sharks appear intimidating, attacks on humans are extremely rare, said Martin. Hammerheads have relatively small mouths facing downward that are used to grab food like fish, shellfish, shrimp, squid, octopuses and stingrays. "If you see a hammerhead, I'd say grab your camera and jump into the water," said Martin.

"Hammerheads are special fish, and there is nothing that remotely resembles them anywhere on the planet," said Martin. Unfortunately, hammerheads -- like most shark species -- are on the decline. In addition to being overfished, sharks often are the victims of a technique known as finning, in which fishermen catch them, cut off their fins for use in delicacy soups, and return them to the water to die, Martin said. Shark meat also is used for fertilizer and to make pet food.

There currently are 233 shark species on the International Union for the Conservation of Nature's "Red List of Threatened Species," and 12 shark species are classified as critically endangered. A study led by Virginia Tech showed the great hammerhead, scalloped hammerhead and smooth hammerhead species declined by an average of 90 percent from 1981 to 2005. "Their situation is generally pretty dire," Martin said.

A 2005 study by Martin and his colleagues on scalloped hammerheads indicated females tend to breed in the specific ocean regions where they were born, while males tended to move around more widely. A previous study by Martin's team also showed that male great white sharks roam Earth's oceans much more widely than females, a finding with implications for future conservation strategies for the storied and threatened fish.

(Photo: Terry Goss 2008 Marine Photobank)

University of Colorado



A mass extinction of fish 360 million years ago hit the reset button on Earth's life, setting the stage for modern vertebrate biodiversity. The mass extinction scrambled the species pool near the time at which the first vertebrates crawled from water towards land.

Those few species that survived the bottleneck were the evolutionary starting point for all vertebrates--including humans--that exist today, according to results of a study published in the journal Proceedings of the National Academy of Sciences (PNAS).

"Everything was hit; the extinction was global," said Lauren Sallan of the University of Chicago and lead author of the paper. "It reset vertebrate diversity in every single environment, both freshwater and marine, and created a completely different world."

The Devonian Period, which spanned from 416 to 359 million years ago, is also known as the Age of Fishes for the broad array of species present in Earth's aquatic environments.

Armored placoderms such as the gigantic Dunkleosteus and lobe-finned fishes--similar to the modern lungfish--dominated the waters, while ray-finned fishes, sharks and tetrapods were in the minority, according to Maureen Kearney, program director in the National Science Foundation (NSF)'s Division of Environmental Biology, which funded the research, along with NSF's Division of Earth Sciences.

But between the latest Devonian Period and the subsequent Carboniferous period, placoderms disappeared and ray-finned fishes rapidly replaced lobe-finned fishes as the dominant group, a demographic shift that persists to today.

"The Devonian period is known as the Age of Fishes, but it's the wrong kind of fish," Sallan said. "Just about everything dominant in the Devonian died at the end of the period and was replaced."

"There's some sort of pinch at the end of the Devonian," said the paper's second author, Michael Coates, an organismal biologist and anatomist at the University of Chicago.

"It's as if the roles persist, but the players change: the cast is transformed dramatically. Something happened that almost wiped the slate clean, and, of the few stragglers that made it through, a handful then re-radiate spectacularly."

Scientists have long theorized that the Late Devonian Kellwasser event--considered to be one of the "Big Five" extinctions in Earth's history--was responsible for a marine invertebrate species shake-up.

But an analysis of the vertebrate fossil record by Sallan and Coates pinpointed a critical shift in diversity to the Hangenberg extinction event 15 million years later.

Prior to the extinction, lobe-finned forms such as Tiktaalik and the earliest limbed tetrapods such as Ichthyostega had made the first tentative "steps" toward a land-dwelling existence.

But after the extinction, a long stretch of the fossil record known as "Romer's Gap" is almost barren of tetrapods, a puzzle that had confounded paleontologists for many years.

Sallan and Coates' data suggest that the 15-million-year gap was the hangover after the traumatic Hangenberg event.

"Something that's seen after an extinction event is a gap in the records of survivors," Sallan said. "You have a very low diversity fauna, because most things have been killed off."

When tetrapods finally recovered, those survivors were likely the great-great-grandfathers to the vast majority of land vertebrates present today.

Modern vertebrate traits--such as the motif of five-digit limbs that is shared by all mammals, birds and reptiles in utero--may have been set by this early common ancestor, the authors propose.

"Extinction events remove a huge amount of biodiversity," Coates said. "That shapes in a very significant way the patchiness of biodiversity that persists to the present day."

The analysis benefitted from recent advances in filling in the vertebrate fossil record, Coates said.

Previously, estimates of the earlier extinction had been made using fossils of invertebrates such as clams and other mollusks, which are far more abundant.

With a larger dataset of vertebrates and analytical techniques borrowed from modern ecology, Sallan and Coates were able to see the abrupt changes in species composition before and after the Hangenberg event.

"It's a big extinction during what was already considered a critical time in vertebrate evolution, so it's surprising that it went unnoticed for so long," Sallan said. "But it took the right methods to reveal its magnitude."

What remains mysterious is exactly what happened 360 million years ago to trigger this mass extinction.

Other researchers have found evidence of substantial glacier formation at the end of the Devonian period, which would have dramatically lowered sea levels and affected life.

The first appearance of forest-like environments in some regions might also have produced atmospheric changes catastrophic to animal life.

The research also raises questions about the pattern of evolution after the extinction event.

It remains unclear why groups that were abundant before the event did not recover, while other groups spread and diversified in radical new ways.

Regardless of these questions, the consequences are still being felt hundreds of millions of years later.

"It is a pivotal episode that shaped modern vertebrate biodiversity," Coates said. "We are only now beginning to place that important event in the history of life and the history of the planet, which we weren't able to do before."

(Photo: Jason Smith)

National Science Foundation


So close and yet so wrong – you might love heavy metal like Metallica but your music platform suggests you should also like the Sixties sound of The Doors, simply because both bands are classified as rock.

New research published in New Journal of Physics (co-owned by the Institute of Physics and the German Physical Society) shows that searching by the temporal aspect of songs, their rhythm, may be a better way to find music you like than using current automatic genre classifications.

By studying similar and different characteristics of specific rhythmic durations and the occurrence of rhythmic sequences, the group of Brazilian researchers has found that it is possible to correctly identify the musical genres of specific musical pieces.

The researchers studied four musical genres (rock, blues, bossa nova and reggae), looking at 100 songs from each category and analysing the most representative sequences of each genre-specific rhythm, such as the 12-bar theme in blues, in which the song is divided into 12 bars, or measures, with a given chord sequence.

Using hierarchical clustering, which groups songs into a tree-like diagram according to the similarity of their rhythmic features, the researchers were able to discriminate between songs and arrive at a possibly novel way of defining musical genres.

As the researchers write, "By showing that rhythm represents a surprisingly distinctive signature of some of the main musical genres, the work suggests that rhythm-based features could be more comprehensively incorporated as resources for searching in music platforms.

Musical genre classification is a nontrivial task even for musician experts, since often a song can be assigned to more than one single genre. With our proposed method, new sub-genres (for example, rock-blues) can arise from original ones. Therefore, we observed a significant improvement in the supervised classification performance."
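The rhythm-based approach can be caricatured in a few lines: reduce each song to a rhythm feature vector and assign the genre whose profile it most resembles. The feature choice (a histogram of note durations) and the genre profiles below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical rhythm features: relative frequencies of sixteenth, eighth,
# quarter and half-note durations in a song. All numbers are made up.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

genre_profiles = {
    "blues":  [0.10, 0.45, 0.35, 0.10],
    "reggae": [0.30, 0.40, 0.20, 0.10],
    "bossa":  [0.40, 0.35, 0.15, 0.10],
}

def classify(song_histogram):
    """Assign the genre whose rhythm profile is most similar to the song."""
    return max(genre_profiles,
               key=lambda g: cosine_similarity(song_histogram, genre_profiles[g]))

label = classify([0.12, 0.44, 0.34, 0.10])  # closest to the blues profile
```

A hierarchical clustering over such song-level vectors, rather than a fixed set of profiles, is closer to what the researchers actually did, and naturally lets hybrid sub-genres like rock-blues emerge between clusters.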

The next step, as suggested by the researchers, would be to include further aspects such as the intensity of the beat in future research, which could increase accurate genre identification even more.

Institute of Physics



Greenland is situated in the Atlantic Ocean to the northeast of Canada. It has stunning fjords on its rocky coast formed by moving glaciers, and a dense icecap up to 2 km thick that covers much of the island--pressing down the land beneath and lowering its elevation. Now, scientists at the University of Miami say Greenland's ice is melting so quickly that the land underneath is rising in response.

According to the study, some coastal areas are going up by nearly one inch per year and if current trends continue, that number could accelerate to as much as two inches per year by 2025, explains Tim Dixon, professor of geophysics at the University of Miami Rosenstiel School of Marine and Atmospheric Science (RSMAS) and principal investigator of the study.

"It's been known for several years that climate change is contributing to the melting of Greenland's ice sheet," Dixon says. "What's surprising, and a bit worrisome, is that the ice is melting so fast that we can actually see the land uplift in response," he says. "Even more surprising, the rise seems to be accelerating, implying that melting is accelerating."

Dixon and his collaborators share their findings in a new study titled "Accelerating uplift in the North Atlantic region as an indicator of ice loss." The paper is now available as an advance online publication in Nature Geoscience. The idea behind the study is that if Greenland is losing its ice cover, the resulting loss of weight causes the rocky surface beneath to rise. The same process is affecting the islands of Iceland and Svalbard, which also have ice caps, explains Shimon Wdowinski, research associate professor at the University of Miami RSMAS and co-author of the study.

"During ice ages and in times of ice accumulation, the ice suppresses the land," Wdowinski says. "When the ice melts, the land rebounds upwards," he says. "Our study is consistent with a number of global warming indicators, confirming that ice melt and sea level rise are real and becoming significant."

Using specialized global positioning system (GPS) receivers stationed on the rocky shores of Greenland, the scientists looked at data from 1995 onward. The raw GPS data were analyzed for high accuracy position information, as well as the vertical velocity and acceleration of each GPS site.

The measurements are restricted to places where rock is exposed, limiting the study to coastal areas. However, previous data indicate that ice in Greenland's interior is in approximate balance: yearly losses from ice melting and flowing toward the coast are offset by new snow accumulation, which gradually turns to ice. Most ice loss occurs at the warmer coast, through melting and iceberg calving, which is also where the GPS data are most sensitive to changes. In western Greenland, the uplift seems to have started in the late 1990s.
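Extracting a vertical velocity and an acceleration from a GPS position time series amounts to a least-squares fit of a quadratic in time. A minimal sketch with synthetic numbers (the velocities below are illustrative, not actual Greenland station data):

```python
import numpy as np

# Fit u(t) = u0 + v*t + 0.5*a*t^2 to a vertical-position time series.
# Synthetic uplift: v = 8 mm/yr, a = 0.5 mm/yr^2 (illustrative values).

t = np.arange(0.0, 15.0)               # years since 1995
u = 3.0 + 8.0 * t + 0.5 * 0.5 * t**2   # uplift in mm

c2, c1, c0 = np.polyfit(t, u, deg=2)   # quadratic least-squares fit
velocity = c1                          # mm/yr at t = 0
acceleration = 2.0 * c2                # mm/yr^2
```

A statistically significant positive acceleration term is what distinguishes "the rise is speeding up" from steady post-glacial rebound.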

Melting of Greenland's ice contributes to global sea level rise. If the acceleration of uplift and the implied acceleration of melting continue, Greenland could soon become the largest contributor to global sea level rise, explains Yan Jiang, Ph.D. candidate at the University of Miami RSMAS and co-author of the study.

"Greenland's ice melt is very important because it has a big impact on global sea level rise," Jiang says. "We hope that our work reaches the general public and that this information is considered by policy makers."

This work was supported by the National Science Foundation and NASA. The team plans to continue its studies, looking at additional GPS stations in sensitive coastal areas, where ice loss is believed to be highest.

(Photo: NASA)

University of Miami

Friday, May 28, 2010



Ball lightning is a luminous, spherical phenomenon reported during thunderstorms, and a large body of eyewitness accounts describes such events. Scientists have long been puzzled by the nature of these apparent fireballs. Now physicists at the University of Innsbruck have calculated that the magnetic field of long lightning strokes may produce the image of luminous shapes, known as phosphenes, in the brain. This finding may offer an explanation for many ball lightning observations.

Physicists Josef Peer and Alexander Kendl from the University of Innsbruck have studied electromagnetic fields of different types of lightning strokes occurring during thunderstorms. Their calculations suggest that the magnetic fields of a specific class of long lasting repetitive lightning discharges show the same properties as transcranial magnetic stimulation (TMS), a technique commonly used in clinical and psychiatric practice to stimulate neural activity in the human brain.

Time varying and sufficiently strong magnetic fields induce electrical fields in the brain, specifically, in neurons of the visual cortex, which may invoke phosphenes. “In the clinical application of TMS, luminous and apparently real visual perceptions in varying shapes and colors within the visual field of the patients and test persons are reported and well examined,” says Alexander Kendl. The Innsbruck physicists have now calculated that a near lightning stroke of long lasting thunderbolts may also generate these luminous visions, which are likely to appear as ball lightning. Their findings are published in the journal Physics Letters A.

Ball lightning is a rather rare event. The majority of researchers agree that different phenomena are likely being grouped together under the collective term "ball lightning". Over time, various theories about the nature of these experiences have been proposed. Other researchers have produced luminous fireballs in the laboratory that appeared not entirely unlike ball lightning and could explain some of the observations, but these were mostly too short-lived. Other plausible explanations for some of the observations are St. Elmo's fire, luminous dust balls or small molten balls of metal.

In which cases, then, can a lightning bolt invoke a ball-shaped phosphene? "Lightning strokes with repetitive discharges producing stimulating magnetic fields over a period of a few seconds are rather rare and occur in only about one in one hundred events," reports physicist Kendl. "An observer located within a few hundred metres of a long lightning stroke may experience a magnetic phosphene in the shape of a luminous spot." Other sensations, such as noises or smells, may also be induced. Since the term "ball lightning" is well known from media reports, observers are likely to classify lightning phosphenes as such.

Alexander Kendl’s hypothesis that in fact the majority of ball lightning observations are phosphenes is strongly supported by its simplicity: “Contrary to other theories describing floating fire balls, no new and other suppositions are necessary.”

(Photo: Uni Innsbruck)

University of Innsbruck


Swimsuit season is almost upon us. For most of us, the countdown has begun to lazy days lounging by the pool and relaxing on the beach. However, for some of us, the focus is not so much on sunglasses and beach balls, but how to quickly shed those final five or ten pounds in order to look good poolside. It is no secret that dieting can be challenging and food cravings can make it even more difficult. Why do we get intense desires to eat certain foods? Although food cravings are a common experience, researchers have only recently begun studying how food cravings emerge.

Psychological scientists Eva Kemps and Marika Tiggemann of Flinders University, Australia, review the latest research on food cravings and how they may be controlled in the current issue of Current Directions in Psychological Science, a journal of the Association for Psychological Science.

We've all experienced hunger (where eating anything will suffice), but what makes food cravings different from hunger is how specific they are. We don't just want to eat something; instead, we want barbecue potato chips or cookie dough ice cream. Many of us experience food cravings from time to time, but for certain individuals, these cravings can pose serious health risks. For example, food cravings have been shown to elicit binge-eating episodes, which can lead to obesity and eating disorders. In addition, giving in to food cravings can trigger feelings of guilt and shame.

Where do food cravings come from? Many research studies suggest that mental imagery may be a key component of food cravings — when people crave a specific food, they have vivid images of that food. Results of one study showed that the strength of participants' cravings was linked to how vividly they imagined the food. Mental imagery (imagining food or anything else) takes up cognitive resources, or brain power. Studies have shown that when subjects are imagining something, they have a hard time completing various cognitive tasks. In one experiment, volunteers who were craving chocolate recalled fewer words and took longer to solve math problems than volunteers who were not craving chocolate. These links between food cravings and mental imagery, along with the findings that mental imagery takes up cognitive resources, may help to explain why food cravings can be so disruptive: As we are imagining a specific food, much of our brain power is focused on that food, and we have a hard time with other tasks.

New research findings suggest that this relationship may work in the opposite direction as well: It may be possible to use cognitive tasks to reduce food cravings. The results of one experiment revealed that volunteers who had been craving a food reported reduced cravings after they formed images of common sights (for example, they were asked to imagine the appearance of a rainbow) or smells (they were asked to imagine the smell of eucalyptus). In another experiment, volunteers who were craving a food watched a flickering pattern of black and white dots on a monitor (similar to an untuned television set). After viewing the pattern, they reported a decrease in the vividness of their craved-food images as well as a reduction in their cravings. According to the researchers, these findings indicate that "engaging in a simple visual task seems to hold real promise as a method for curbing food cravings." The authors suggest that "real-world implementations could incorporate the dynamic visual noise display into existing accessible technologies, such as the smart phone and other mobile, hand-held computing devices." They conclude that these experimental approaches may extend beyond food cravings and have implications for reducing cravings for other substances such as drugs and alcohol.
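The flickering black-and-white dot pattern used in the second experiment, a dynamic visual noise display, is straightforward to generate as a stream of random binary frames. A minimal sketch, with frame size and dot density chosen arbitrarily rather than taken from the study:

```python
import numpy as np

# Dynamic visual noise: a sequence of independent random black/white frames,
# resembling an untuned television. Parameters are illustrative only.

rng = np.random.default_rng(seed=0)

def noise_frames(n_frames, height=32, width=32, p_white=0.5):
    """Generate n_frames binary images; 1 = white dot, 0 = black dot."""
    return (rng.random((n_frames, height, width)) < p_white).astype(np.uint8)

frames = noise_frames(100)  # 100 independent random-dot frames
```

Displayed in rapid succession, such frames occupy the same visuospatial resources that vivid food imagery relies on, which is the proposed mechanism for the reduction in craving.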

The Association for Psychological Science


0 comentarios

Fish become feisty but fearful when facing themselves in a mirror, according to two Stanford biologists.

"It seems like something they don't understand," said Julie Desjardins, a post-doctoral researcher in biology and lead author of a paper to be published in Biology Letters describing the study. The paper is available online now. "I think this stimulus is just so far outside their realm of experience that it results in this somewhat emotional response."

Desjardins and coauthor Russell Fernald, professor of biology, arrived at their conclusion by comparing the behavior and brain activity of male African cichlid fish during and after one-on-one encounters with either a mirror or another male of about the same size.

Cichlids grow to several inches in length and territorial males typically have bright blue or yellow body coloration.

Territorial male cichlids usually react to another male by trying to fight with it in a sort of tit-for-tat manner. Desjardins suspects the fish fighting their own reflections become fearful because their enemy in the mirror doesn't exhibit the usual reactions they would expect from another fish.

"In normal fights, they bite at each other, one after the other, and will do all kinds of movements and posturing, but it is always slightly off or even alternating in timing," Desjardins said. "But when you are fighting with a mirror, your opponent is perfectly in time. So the subject fish really is not seeing any sort of reciprocal response from their opponent."

The discovery that fish can discern a difference so subtle could prompt researchers to take a second look at how well other lower vertebrates can discriminate among various situations.

Desjardins and Fernald arranged 20-minute sparring sessions for their fish. A clear wall across the middle of the tank kept the combatants apart when two fish were pitted against each other, so there was never any actual fish-to-fish contact. The fish invariably tried to fight with their foe – real or reflected – and their behavior during the dust-ups appeared consistent whether they were mirror-boxing or not.

When the researchers performed post-mayhem postmortems on the fish, they found that the levels of testosterone and another hormone associated with aggression circulating in the cichlids' bloodstreams were comparable regardless of whether the foe was a reflection or flesh.

But in dissecting a part of the fishes' brain called the amygdala, they found evidence of substantially more activity in that region in the mirror-fighting fish than in those tussling with real foes.

"The amygdala is a part of the brain that has been associated with fear and fear conditioning, not only in fish, but across all vertebrates," Desjardins said. So the fish appeared to feel an element of fear when confronted by an opponent whose behavior was off-kilter.

Although higher vertebrates such as humans have very elaborate amygdalas by comparison with fish, there is still a part of the more complex amygdalas that is analogous to what fish have and performs similar functions.

"The fact that we saw evidence of a really high level of activity in the amygdala, is pretty exciting," Desjardins said. "And surprising."

"I thought I might see a difference in the behavior and when I didn't see that, I was pretty skeptical that I would see anything different in the brain," she said.

But she thinks what they found is evidence of a negative emotional response and offered what she emphasized is a speculative comparison. Perhaps it is "like when you are a little kid and someone keeps repeating back to you what you have just said, that quickly becomes irritating and frustrating," she said. "If I was going to make that giant leap between humans and fish, it could be similar."

So what does this tell us about a fish's level of consciousness?

"It's difficult to say," she said. "But I think it certainly indicates that there is more going on cognitively than people have long assumed in most lower invertebrates."

Desjardins said many researchers who study the cognitive capabilities of lower vertebrates such as frogs, lizards and birds look at behavior and hormones, but rarely look at the brain.

"I think there is a lot that could be done with these types of techniques that has not been done in the past," she said. "This opens the door for us to better understand what is going on in the brain of non-mammalian animals."

(Photo: Todd Anderson/Stanford University)

Stanford University


0 comentarios

The world's biggest investigation into possible links between cell phone use and brain tumours is inconclusive, according to a Canadian scientist who collaborated on the Interphone International Study Group.

Jack Siemiatycki, a professor at the University of Montreal and an epidemiologist at the University of Montreal Hospital Research Center, says restricted access to participants compromised the validity of results of the study published in the May 18 International Journal of Epidemiology. "The findings of the Interphone Study are ambiguous, surprising and puzzling," he says.

The Interphone International Study Group, which examined whether cellular radio frequencies could be correlated with brain tumours, was coordinated by the International Agency for Research on Cancer. The investigation was led by 21 epidemiologists from Australia, Canada, Denmark, Finland, France, Germany, Israel, Italy, Japan, New Zealand, Norway, Sweden and the United Kingdom. Over 10,000 people took part in the study: cell phone users; non-users; cell phone users who had survived brain cancer; and brain cancer survivors who had never used cell phones.

"If we combine all users and compare them with non-users, the Interphone Study found no increase in brain cancer among users. In fact, surprisingly, we found that when we combine users independently of the amount of use, they had lower brain cancer risks than non-users," says Dr. Siemiatycki, who teaches in the University of Montreal Faculty of Medicine. "However, the study also found heavy users of cell phones appeared to be at a higher risk of brain tumours than non-users."

Why the discrepancy? Simply put, scientists are unsure. Attention has focused on the methodology of the study and, in particular, on the representativeness of the study subjects who participated. With participation rates in the range of 50 percent to 60 percent of eligible subjects, it is possible that the participants did not provide an accurate portrait of cell phone usage among cancer cases and among healthy control subjects. Dr. Siemiatycki argues this problem arose because of constraints imposed on researchers by ethics committees intended to protect potential research subjects.
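The kind of distortion low participation can cause is easy to illustrate with a toy calculation. The participation rates and usage figures below are invented for the sketch, not taken from the Interphone study; they show how differential participation alone can push an odds ratio below 1, mimicking a "protective" effect like the one the study reported:

```python
# Illustrative only: how differential participation can bias a case-control
# odds ratio. All numbers are invented, not taken from the Interphone study.

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Standard 2x2 odds ratio for a case-control study."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# True population: 40% of both cases and controls use cell phones -> OR = 1.0
true_or = odds_ratio(400, 600, 400, 600)

# Suppose phone-using controls participate at 70% but non-using controls at
# only 50%, while cases participate uniformly at 60%.
observed_or = odds_ratio(400 * 0.6, 600 * 0.6,
                         400 * 0.7, 600 * 0.5)

print(f"true OR = {true_or:.2f}, observed OR = {observed_or:.2f}")
# The observed OR falls below 1: a spurious "protective" effect of phone use.
```

If phone-using controls are even modestly more willing to participate than non-using controls, users look spuriously under-represented among cases, which is consistent with the representativeness concern described above.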

"Ethics reviews are now so rigid that scientists from Canada, the United States and Europe are losing the kind of access to medical databases and to study subjects that is needed to conduct studies such as this one. Ethics committees increasingly require that researchers work through treating physicians, professionals who are already overworked, to recruit their patients. This may work for clinical research exploring treatment of cancer, in which physicians often have a professional or personal interest, but it does not work for investigations into the causes of cancer. This flawed system can produce biased study results."

Despite the inconclusive results of the Interphone Study, consumers should not panic about possible risks related to cell phones, stresses Dr. Siemiatycki. "If there are risks, they are probably pretty small. Should anyone be concerned about potential dangers of cell phones, they can remedy the issue by using hands-free devices and avoiding exposure to radio frequencies around their head."

(Photo: IStock)

University of Montreal


0 comentarios
One way that geologists try to decipher how cells functioned as far back as 3 billion years ago is by studying modern microbial mats, or gooey layers of nutrient-exchanging bacteria that grow mostly on moist surfaces and collect dirt and minerals that crystallize over time. Eventually, the bacteria turn to stone just beneath the crystallized material, thereby recording their history within the crystalline skeletons. Known as stromatolites, these layered rock formations are considered to be the oldest fossils on Earth.

Deciphering the few clues about ancient bacterial life that are seen in these poorly preserved rocks has been difficult, but researchers from MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the Russian Academy of Sciences may have found a way to glean new information from the fossils. Specifically, they have linked the even spacing between the thousands of tiny cones that dot the surfaces of stromatolite-forming microbial mats — a pattern that also appears in cross-sectional slices of stromatolites that are 2.8 billion years old — to photosynthesis.

In a paper published in the Proceedings of the National Academy of Sciences, the researchers suggest that the characteristic centimeter-scale spacing between neighboring cones that appears on modern microbial mats and the conical stromatolites they form occurs as a result of the daily competition for nutrients between neighboring mats.

In addition to helping scientists put a better range on when photosynthesis started, the research provides a new technique for interpreting the patterns of these ancient fossils. By analyzing the length of the triangular patterns seen in an ancient stromatolite, for example, geologists can now infer more details about the environment in which the microbial mat lived, such as whether it lived in still or turbulent water.

Until now, no one had explored the consistent one-centimeter spacing that appears between the tiny cones featured on microbial mats and conical stromatolites that grow in the hot springs of Yellowstone National Park, and at other locations around the world. Lead author and EAPS graduate student Alexander Petroff and EAPS professors Daniel Rothman and Tanja Bosak proposed that the pattern was not coincidental and could pertain to a biophysical process, such as how the bacteria compete for nutrients.

By studying the physics of photosynthesis, the researchers formed a better understanding of how a mat consumes nutrients from its surroundings over the course of a day, and then metabolizes, or breaks down, those nutrients for energy.

During the daytime, a mat takes in nutrients like inorganic carbon from its immediate surroundings and uses energy from sunlight to build sugars and new bacteria. As these nutrients become locally depleted, the mat starts to consume nutrients from larger distances. At nighttime when it is dark and photosynthesis is not possible, nutrients return to the water immediately surrounding the mat.

The researchers reasoned that in order to avoid direct competition for nutrients, the spacing between mats must be influenced by diffusion, or how molecules spread out over time. In this case, diffusion is itself influenced by the amount of time a mat is metabolically active, which varies over the course of a day due to changes in sunlight. The spacing between cones therefore records the maximum distance over which mats can compete for nutrients that are spread by diffusion and replenished at night. After testing this theory on cultures in the lab, the researchers confirmed their hypothesis through fieldwork in Yellowstone, where the centimeter spacing between mats corresponds to their metabolic period of about 20 hours.
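As a rough sanity check on the scale, the distance a nutrient molecule diffuses in one metabolic cycle can be estimated as the diffusion length sqrt(D*T). The diffusivity below is an assumed textbook value for a small solute in water (the paper's exact parameters are not given here); the ~20-hour period comes from the fieldwork described above:

```python
import math

D = 1e-9           # m^2/s, assumed diffusivity of a small solute in water
T = 20 * 3600      # s, the ~20-hour metabolic period measured in Yellowstone

# Characteristic distance nutrients diffuse during one metabolic cycle
L = math.sqrt(D * T)

print(f"diffusion length ~ {L * 100:.2f} cm")  # on the order of 1 cm
```

Under these assumed numbers the diffusion length comes out just under a centimeter, the same order as the cone spacing reported for the mats.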

That the spacing pattern corresponds to the mats' metabolic period — and is also seen in ancient rocks — shows that the same basic physical processes of diffusion and competition seen today were happening billions of years ago, long before complex life appeared. Petroff and his colleagues are currently researching why biological stromatolites form cones instead of other shapes.



0 comentarios
Most venomous snakes are legendary for their lethal bites, but not all. Some spit defensively. Bruce Young, from the University of Massachusetts Lowell, explains that some cobras defend themselves by spraying debilitating venom into the eyes of an aggressor.

Getting the chance to work with spitting cobras in South Africa, Young took the opportunity to record the venom spray tracks aimed at his eyes. Protected by a sheet of Perspex, Young caught the trails of venom, and two things struck him: how accurately the snakes aimed and that each track was unique. This puzzled Young. For a start, the cobra's fangs are fixed and the snakes can't change the size of the venom orifice, 'so basic fluid dynamics would lead you to think that the pattern of the fluid should be fixed,' explains Young. But Young had also noticed that the snakes 'wiggled' their heads just before letting fly. 'The question became how do we reconcile those two things,' says Young, who publishes his discovery in the Journal of Experimental Biology on 14 May 2010: the snakes initially track their victim's movement and then switch to predicting where the victim will be 200 ms in the future.

Young remembers that Guido Westhoff had also noticed the spitting cobra's 'head wiggle', so he and his research assistant, Melissa Boetig, travelled to Horst Bleckmann's lab in the University of Bonn, Germany, to find out how spitting cobras fine-tune their venom spray. The team had to find out how a target provokes a cobra to spit, and Young was the man for that job, 'I just put on the goggles and the cobras start spitting all over,' laughs Young.

Wearing a visor fitted with accelerometers to track his own head movements while Boetig and Westhoff filmed the cobra's movements at 500 frames/s, Young stood in front of the animals and taunted them by weaving his head about. Over a period of 6 weeks, the team filmed over 100 spits before trying to discover why Young was so successful at provoking the snakes.

When the team analysed Young's movements, only one thing stood out: 200 ms before the snake spat, Young suddenly jerked his head. The team realised that Young's head jerk was the spitting trigger. They reasoned that the snake must be tracking Young's movements right up to the instant that he jerked his head, and that it took a further 200 ms for the snake to react and fire off the venom.

But Young was still moving after triggering the snake into spitting and the snake can't steer the stream of venom, so how was the cobra able to successfully hit Young's eyes if it was aiming at a point where the target had been 200 ms previously? Realigning the data to the instant when Young jerked his head, the team compared all of the snakes' head movements and noticed that the cobras were all moving in a similar way. They accelerated their heads in the same direction that Young's eyes were moving. 'Not only does it speed up but it predicts where I am going to be and then it patterns its venom in that area,' explains Young.

So spitting cobras defend themselves by initially tracking an aggressor's movements. However, at the instant that an attacker triggers the cobra into spitting, the reptile switches to predicting where the attacker's eyes will be 200 ms in the future and aims there to be sure that it hits its target.
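The two aiming strategies can be sketched as a toy calculation, using invented speeds and positions: pure tracking aims where the eyes were last seen, so the venom arrives where the target was 200 ms earlier, while the predictive strategy extrapolates the eyes' motion 200 ms ahead:

```python
LAG = 0.2  # s, the ~200 ms reaction time reported in the study

def tracking_aim(pos, velocity):
    # Aim at the last observed eye position; the target moves on during the lag.
    return pos

def predictive_aim(pos, velocity):
    # Extrapolate the eyes' motion LAG seconds into the future.
    return pos + velocity * LAG

# Invented example: eyes moving at 0.5 m/s starting from position 0.
pos, v = 0.0, 0.5
eyes_at_impact = pos + v * LAG  # where the eyes actually are after the lag

miss_tracking = abs(eyes_at_impact - tracking_aim(pos, v))
miss_predictive = abs(eyes_at_impact - predictive_aim(pos, v))
print(f"tracking misses by {miss_tracking:.2f} m; prediction misses by {miss_predictive:.2f} m")
```

This assumes the target keeps moving at a constant velocity over the 200 ms lag, which is the simplest reading of the extrapolation the team describes.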

The Company of Biologists

Thursday, May 27, 2010


0 comentarios
Biological differences between the sexes could be a significant predictor of responses to vaccines, according to researchers at the Johns Hopkins Bloomberg School of Public Health. They examined published data from numerous adult and child vaccine trials and found that sex is a fundamental, but often overlooked, predictor of vaccine response that could help predict the efficacy of vaccines against infectious disease. The review is featured in the May 2010 issue of The Lancet Infectious Diseases.

"Sex can affect the frequency and severity of adverse effects of vaccination, including fever, pain and inflammation," said Sabra Klein, PhD, lead author of the review and an assistant professor at the Bloomberg School's W. Harry Feinstone Department of Molecular Microbiology and Immunology. "This is likely due to the fact that women typically mount stronger immune responses to vaccinations compared to men. In some cases, women need substantially less of a vaccine to mount the same response as men. Pregnancy is also a factor that can alter immune responses to vaccines."

Researchers conducted a review of existing literature on several vaccines, including yellow fever, influenza, measles, mumps and rubella, hepatitis and herpes simplex, to obtain evidence of the difference in responses between women and men. They also examined the effect of the hormonal changes that occur during pregnancy on vaccine efficacy. Researchers found that despite data supporting a role for sex in the response to vaccines, most studies did not document sex-specific effects in vaccine efficacy or induced immune responses.

"Understanding the biological differences between men and women to vaccines could have led to better distribution of the 2010 H1N1 vaccine during the early months. Our review of the literature found that healthy women often generated a more robust protective immune response to vaccination when compared to men," said Andrew Pekosz, PhD, associate professor at the Bloomberg School's W. Harry Feinstone Department of Molecular Microbiology and Immunology. "An understanding and appreciation of the effect of sex and pregnancy on immune responses might change the strategies used by public health officials to start efficient vaccination programs, optimizing the timing and dose of vaccines so that the maximum number of people are immunized." added Klein.

Johns Hopkins University


0 comentarios

A team of scientists from Columbia University, Arizona State University, the University of Michigan, and the California Institute of Technology (Caltech) has programmed an autonomous molecular "robot" made out of DNA to start, move, turn, and stop while following a DNA track.

The development could ultimately lead to molecular systems that might one day be used for medical therapeutic devices and molecular-scale reconfigurable robots—robots made of many simple units that can reposition or even rebuild themselves to accomplish different tasks.

A paper describing the work appears in the current issue of the journal Nature.

The traditional view of a robot is that it is "a machine that senses its environment, makes a decision, and then does something—it acts," says Erik Winfree, associate professor of computer science, computation and neural systems, and bioengineering at Caltech.

Milan N. Stojanovic, a faculty member in the Division of Experimental Therapeutics at Columbia University, led the project and teamed up with Winfree and Hao Yan, professor of chemistry and biochemistry at Arizona State University and an expert in DNA nanotechnology, and with Nils G. Walter, professor of chemistry and director of the Single Molecule Analysis in Real-Time (SMART) Center at the University of Michigan in Ann Arbor, for what became a modern-day self-assembly of like-minded scientists with the complementary areas of expertise needed to tackle a tough problem.

Shrinking robots down to the molecular scale would provide, for molecular processes, the same kinds of benefits that classical robotics and automation provide at the macroscopic scale. Molecular robots, in theory, could be programmed to sense their environment (say, the presence of disease markers on a cell), make a decision (that the cell is cancerous and needs to be neutralized), and act on that decision (deliver a cargo of cancer-killing drugs).

Or, like the robots in a modern-day factory, they could be programmed to assemble complex molecular products. The power of robotics lies in the fact that once programmed, the robots can carry out their tasks autonomously, without further human intervention.

With that promise, however, comes a practical problem: how do you program a molecule to perform complex behaviors?

"In normal robotics, the robot itself contains the knowledge about the commands, but with individual molecules, you can't store that amount of information, so the idea instead is to store information on the commands on the outside," says Walter. And you do that, says Stojanovic, "by imbuing the molecule's environment with informational cues."

"We were able to create such a programmed or 'prescribed' environment using DNA origami," explains Yan. DNA origami, an invention by Caltech Senior Research Associate Paul W. K. Rothemund, is a type of self-assembled structure made from DNA that can be programmed to form nearly limitless shapes and patterns (such as smiley faces or maps of the Western Hemisphere or even electrical diagrams). Exploiting the sequence-recognition properties of DNA base pairing, DNA origami are created from a long single strand of DNA and a mixture of different short synthetic DNA strands that bind to and "staple" the long DNA into the desired shape. The origami used in the Nature study was a rectangle that was 2 nanometers (nm) thick and roughly 100 nm on each side.

The researchers constructed a trail of molecular "bread crumbs" on the DNA origami track by stringing additional single-stranded DNA molecules, or oligonucleotides, off the ends of the staples. These represent the cues that tell the molecular robots what to do—start, walk, turn left, turn right, or stop, for example—akin to the commands given to traditional robots.

The molecular robot the researchers chose to use—dubbed a "spider"—was invented by Stojanovic several years ago, at which time it was shown to be capable of extended, but undirected, random walks on two-dimensional surfaces, eating through a field of bread crumbs.

To build the 4-nm-diameter molecular robot, the researchers started with a common protein called streptavidin, which has four symmetrically placed binding pockets for a chemical moiety called biotin. Each robot leg is a short biotin-labeled strand of DNA, "so this way we can bind up to four legs to the body of our robot," Walter says. "It's a four-legged spider," quips Stojanovic. Three of the legs are made of enzymatic DNA, which is DNA that binds to and cuts a particular sequence of DNA. The spider also is outfitted with a "start strand"—the fourth leg—that tethers the spider to the start site (one particular oligonucleotide on the DNA origami track). "After the robot is released from its start site by a trigger strand, it follows the track by binding to and then cutting the DNA strands extending off of the staple strands on the molecular track," Stojanovic explains.

"Once it cleaves," adds Yan, "the product will dissociate, and the leg will start searching for the next substrate." In this way, the spider is guided down the path laid out by the researchers. Finally, explains Yan, "the robot stops when it encounters a patch of DNA that it can bind to but that it cannot cut," which acts as a sort of flypaper.

Although other DNA walkers have been developed before, they've never ventured farther than about three steps. "This one," says Yan, "can walk up to about 100 nanometers. That's roughly 50 steps."

"This in itself wasn't a surprise," adds Winfree, "since Milan's original work suggested that spiders can take hundreds if not thousands of processive steps. What's exciting here is that not only can we directly confirm the spiders' multistep movement, but we can direct the spiders to follow a specific path, and they do it all by themselves—autonomously."

In fact, using atomic force microscopy and single-molecule fluorescence microscopy, the researchers were able to directly watch spiders crawling over the origami, and showed that they could guide their molecular robots along four different paths.

"Monitoring this at a single molecule level is very challenging," says Walter. "This is why we have an interdisciplinary, multi-institute operation. We have people constructing the spider, characterizing the basic spider. We have the capability to assemble the track, and analyze the system with single-molecule imaging. That's the technical challenge." The scientific challenges for the future, Yan says, "are how to make the spider walk faster and how to make it more programmable, so it can follow many commands on the track and make more decisions, implementing logical behavior."

"In the current system," says Stojanovic, "interactions are restricted to the walker and the environment. Our next step is to add a second walker, so the walkers can communicate with each other directly and via the environment. The spiders will work together to accomplish a goal." Adds Winfree, "The key is how to learn to program higher-level behaviors through lower-level interactions."

Such collaboration ultimately could be the basis for developing molecular-scale reconfigurable robots—complicated machines that are made of many simple units that can reorganize themselves into any shape—to accomplish different tasks, or fix themselves if they break. For example, it may be possible to use the robots for medical applications. "The idea is to have molecular robots build a structure or repair damaged tissues," says Stojanovic.

"You could imagine the spider carrying a drug and bonding to a two-dimensional surface like a cell membrane, finding the receptors and, depending on the local environment," adds Yan, "triggering the activation of this drug."

Such applications, while intriguing, are decades or more away. "This may be 100 years in the future," Stojanovic says. "We're so far from that right now."

"But," Walter adds, "just as researchers self-assemble today to solve a tough problem, molecular nanorobots may do so in the future."

(Photo: Paul Michelotti)

The California Institute of Technology


0 comentarios

It's the last place you want to be judged on your looks. But in a court of law, it pays to be attractive, according to a new Cornell study that has found that unattractive defendants tend to get hit with longer, harsher sentences -- on average 22 months longer in prison.

The study also identified two kinds of jurors: Those who reason emotionally and give harsher verdicts to unattractive defendants and those who reason rationally and focus less on defendants' looks.

Psycho-legal literature has reported for decades that juries tend to show a bias in favor of good-looking defendants. Two Cornell researchers set out to determine why.

"Our hypothesis going in was that jurors inclined to process information in a more emotional/intuitive manner would be more prone to make reasoning errors when rendering verdicts and recommending sentences as opposed to rational processors. The results bore out our hypothesis on all measures," said lead author Justin Gunnell '05, J.D. '08, who began working on the study as a policy analysis and management major with co-author Stephen Ceci, Cornell's Helen L. Carr Professor of Developmental Psychology.

The study, "When Emotionality Trumps Reason," to be published in an upcoming issue of Behavioral Sciences and the Law, may contribute to new refinements in jury selection.

Borrowing a theory from personality psychology, the researchers sought to identify emotional and rational thinkers. One type processes information based on facts, analysis and logic. The other reasons emotionally and may consider such legally irrelevant factors as a defendant's appearance, race, gender and class, reporting that the less-attractive defendant appeared more like the "type of person" who would commit a crime.

American attorneys, depending on jurisdiction and the type of case, are permitted to screen out jurors for a number of reasons. In cases where the evidence strongly favors one side, a lawyer might want to choose rational jurors. But in a case with an emotional tug (e.g., a poor defendant with several children), a defense attorney might try to screen out highly rational jurors.

Study participants -- 169 Cornell psychology undergraduates -- took an online survey to determine the degree to which they processed information rationally or emotionally. They were then given a case study with a photograph of an actual defendant and his or her general profile. They read real jury instructions and listened to the cases' closing arguments.

While both groups convicted attractive defendants at similar rates and were less biased in the face of strong evidence or very serious offenses, the jurors' reasoning style tended to play out "in cases where the evidence is ambiguous and the charged offense is somewhat minor," said Gunnell, who practices commercial litigation in New York City.

"Every person is capable of reasoning via either system and likely uses each system to some degree depending on context," Gunnell said. "The degree to which one system predominates the other is a factor that varies, depending on the individual's natural preference and style."

He said the findings are important in that "22 months may not seem like a lot to an outsider, but I guarantee that to the person serving the sentence it will seem like a lot."

"It is our obligation to look at areas of the judicial process that may be prone to weakness, at least under certain circumstances, or where improvements can be made. How jurors think, process and reason are an important step in understanding potential flaws in the American justice system, as crucial decisions ultimately rest in their hands," he concluded.

(Photo: Cornell U.)

Cornell University


0 comentarios

Scientists have used quantum mechanics to reveal that the most common mineral on Earth is relatively uncommon deep within the planet.

Using several of the largest supercomputers in the nation, a team of physicists led by Ohio State University has been able to simulate the behavior of silica in a high-temperature, high-pressure form that is particularly difficult to study firsthand in the lab.

The resulting discovery -- reported in this week’s early online edition of the Proceedings of the National Academy of Sciences (PNAS) -- could eventually benefit science and industry alike.

Silica makes up two-thirds of the Earth’s crust, and we use it to form products ranging from glass and ceramics to computer chips and fiber optic cables.

“Silica is all around us," said Ohio State doctoral student Kevin Driver, who led this project for his doctoral thesis. “But we still don’t understand everything about it. A better understanding of silica on a quantum-mechanical level would be useful to earth science, and potentially to industry as well.”

Silica takes many different forms at different temperatures and pressures -- not all of which are easy to study, Driver said.

"As you might imagine, experiments performed at pressures near those of Earth’s core can be very challenging. By using highly accurate quantum mechanical simulations, we can offer reliable insight that goes beyond the scope of the laboratory.”

Over the past century, seismology and high-pressure laboratory experiments have revealed a great deal about the general structure and composition of the earth. For example, such work has shown that the planet’s interior structure exists in three layers called the crust, mantle, and core. The outer two layers -- the mantle and the crust -- are largely made up of silicates, minerals containing silicon and oxygen.

Still, the detailed structure and composition of the deepest parts of the mantle remain unclear. These details are important for geodynamical modeling, which may one day predict complex geological processes such as earthquakes and volcanic eruptions.

Even the role that the simplest silicate -- silica -- plays in Earth's mantle is not well understood.

“Say you’re standing on a beach, looking out over the ocean. The sand under your feet is made of quartz, a form of silica containing one silicon atom surrounded by four oxygen atoms. But in millions of years, as the oceanic plate below becomes subducted and sinks beneath the Earth’s crust, the structure of the silica changes dramatically,” Driver said.

As pressure increases with depth, the silica molecules crowd closer together, and the silicon atoms start coming into contact with oxygen atoms from neighboring molecules. Several structural transitions occur: in low-pressure forms each silicon atom is surrounded by four oxygen atoms, and in higher-pressure forms by six. With even more pressure, the structure collapses into a very dense form of the mineral whose arrangement scientists call the α-PbO₂ (alpha-lead dioxide) type.

It’s this form of silica that likely resides deep within the earth, in the lower part of the mantle, just above the planet’s core, Driver said.

When scientists try to interpret seismic signals from that depth, they have no direct way of knowing what form of silica they are dealing with. So they must simulate the behavior of the different forms on a computer, and then compare the results to the seismic data. The simulations rely on quantum mechanics.

In PNAS, Driver, his advisor John Wilkins, and their coauthors describe how they used a quantum mechanical method to design computer algorithms that would simulate the silica structures. When they did, they found that the behavior of the dense, α-PbO₂-type form of silica did not match up with any global seismic signal detected in the lower mantle.

This result indicates that the lower mantle is relatively devoid of silica, except perhaps in localized areas where oceanic plates have subducted, Driver explained.

Wilkins, Ohio Eminent Scholar and professor of physics at Ohio State, cited Driver’s determination and resourcefulness in making this study happen. The physicists used a method called quantum Monte Carlo (QMC), which was developed during atomic bomb research in World War II. To earn his doctorate, Driver worked to show that the method could be applied to studying minerals in the planet’s deep interior.
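The flavor of a quantum Monte Carlo calculation can be conveyed with a toy example. The sketch below is illustrative only -- the actual silica study involved many-electron wavefunctions and far more sophisticated algorithms -- and applies variational Monte Carlo, the simplest QMC variant, to the one problem with a known exact answer, the hydrogen atom:

```python
import math
import random

def vmc_hydrogen(alpha=1.0, n_steps=20000, step=0.5, seed=1):
    """Minimal variational Monte Carlo estimate of the hydrogen-atom
    ground-state energy (in hartrees, atomic units).

    Trial wavefunction: psi(r) = exp(-alpha * r).  A Metropolis random
    walk samples |psi|^2 and averages the local energy
        E_L(r) = -alpha**2 / 2 + (alpha - 1) / r,
    which is exactly -0.5 everywhere when alpha = 1 (the exact solution).
    """
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(x * x for x in pos))
    energies = []
    for i in range(n_steps):
        trial = [x + rng.uniform(-step, step) for x in pos]
        r_new = math.sqrt(sum(x * x for x in trial))
        # Metropolis acceptance ratio for |psi|^2 = exp(-2 * alpha * r)
        if rng.random() < math.exp(-2 * alpha * (r_new - r)):
            pos, r = trial, r_new
        if i >= n_steps // 10:  # discard the first 10% as equilibration
            energies.append(-alpha**2 / 2 + (alpha - 1) / r)
    return sum(energies) / len(energies)

print(vmc_hydrogen(alpha=1.0))  # -0.5: the exact ground-state energy
```

The variational principle guarantees that any other `alpha` yields a higher average energy; production QMC codes apply the same sampling idea to hundreds of electrons, which is what made the silica runs so expensive.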

"This work demonstrates both the superb contributions a single graduate student can make, and that the quantum Monte Carlo method can compute nearly every property of a mineral over a wide range of pressure and temperatures,” Wilkins said. He added that the study will “stimulate a broader use of quantum Monte Carlo worldwide to address vital problems.”

While these algorithms have been around for over half a century, applying them to silica was impossible until recently, Driver said. The calculations were simply too computationally demanding.

Even today, with the advent of more powerful supercomputers and fast algorithms that require less computer memory, the calculations still required using a number of the largest supercomputers in the United States, including the Ohio Supercomputer Center in Columbus.

“We used the equivalent of six million CPU hours or more to model four different states of silica,” Driver said.

He and his colleagues expect that quantum Monte Carlo will be used more often in materials science in the future, as the next generation of computers goes online.

(Photo: OSU)

Ohio State



Field studies have shown for the first time that several common species of seaweeds in both the Pacific Ocean and the Caribbean Sea can kill corals upon contact using chemical means.

While competition between seaweed and coral is just one of many factors affecting the decline of coral reefs worldwide, this chemical threat may provide a serious setback to efforts aimed at repopulating damaged reefs. Seaweeds are normally kept in check by herbivorous fish, but in many areas overfishing has reduced the populations of these plant-consumers, allowing seaweeds to overpopulate coral reefs.

A study documenting the chemical effects of seaweeds on corals was published May 10, 2010, in the early edition of the journal Proceedings of the National Academy of Sciences (PNAS). The research was supported by the National Institutes of Health, the National Science Foundation, and the Teasley Endowments at the Georgia Institute of Technology.

"Between 40 and 70 percent of the seaweeds we studied killed corals," said Mark Hay, a professor in the School of Biology at Georgia Tech. "We don't know how significant this is compared to other problems affecting coral, but we know this is a growing problem. For reefs that have been battered by human use or overfishing, the presence of seaweeds may prevent natural recovery from happening at all."

Coral reefs are declining worldwide, and scientists studying the problem had suspected that proliferation of seaweed was part of the cause -- perhaps by crowding out the coral or by damaging it physically.

Using racks of coral being transplanted as part of repopulation efforts, Hay and graduate student Douglas Rasher compared the fate of corals from two different species when they were placed next to different types of seaweed common around Fijian reefs in the Pacific -- and Panamanian reefs in the Caribbean. They planted the seaweeds next to coral being transplanted -- and also placed plastic plants next to some of the coral to simulate the effects of shading and mechanical damage. Other coral in the racks had neither seaweeds nor plastic plants near them.

The researchers revisited the coral two days, 10 days and 20 days later. In as little as two days, corals in contact with some seaweed species bleached and died in areas of direct contact. In other cases, the effects took a full 20 days to appear -- or for some seaweed species, no damaging effects were noted during the 20-day period. Ultimately, as much as 70 percent of the seaweed species studied turned out to have harmful effects -- but only when they were in direct contact with the coral.

To confirm that chemical factors were responsible, Hay and Rasher extracted chemicals from the seaweeds -- and from only the surfaces of the seaweeds. They then applied both types of chemicals to corals by placing the chemicals into gel matrix bound to a strip of window screen, forming something similar to a gauze bandage and applying that directly to the corals. To a control group of corals, they applied the gel and screen without the seaweed chemicals.

The effects confirmed that chemicals from both the surface of certain seaweeds and extracts from those entire plants killed corals.

"In all cases where the coral had been harmed, the chemistry appeared to be responsible for it," said Hay. "The evolutionary reasons why the seaweeds have these compounds are not known. It may be that these compounds protect the seaweeds against microbial infection, or that they help compete with other seaweeds. But it's clear now that they also harm the corals, either by killing them or suppressing their growth."

The researchers studied coral of different species in the Pacific and Caribbean, matching them up against different species of seaweed common to their geographic areas. The coral species chosen -- Porites porites in Panama and Porites cylindrica in Fiji -- are among the hardiest of corals, suggesting that other species may be even more dramatically affected by the seaweed compounds.

In the Caribbean, five of the seven seaweeds studied caused bleaching of the coral, while in the Pacific, three of eight species studied caused the effect.
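Those species counts are where the percentages quoted earlier come from; a trivial check using only the counts reported here:

```python
# Bleaching rates implied by the species counts reported in the study.
caribbean = 5 / 7   # Panama: five of the seven seaweed species tested
pacific = 3 / 8     # Fiji: three of the eight seaweed species tested

for region, frac in [("Caribbean", caribbean), ("Pacific", pacific)]:
    print(f"{region}: {frac:.0%} of seaweed species caused bleaching")
# Just over two-thirds in the Caribbean and just over a third in the
# Pacific -- roughly the "40 to 70 percent" range Hay cites.
```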

The harmful chemicals affect only coral in direct contact with the seaweed, suggesting the compounds are not soluble in water, Hay noted. The effects -- which were measured through photographic image analysis and pulse-amplitude-modulated (PAM) fluorometry -- also varied considerably, with certain seaweeds showing stronger impacts than others.

Conducted during 2008 and 2009, the study adds new information about the decline of reefs worldwide, and reinforces the importance of maintaining a healthy ecosystem that includes enough herbivorous fish to keep seaweed under control.

"Removing the herbivorous fishes really sets up a cascade of effects," said Hay, who holds the Harry and Linda Teasley Chair in the Georgia Tech School of Biology. "The more you fish, the more seaweeds there are. The more seaweeds there are, the more damage is done to the coral. The less coral there is, the fewer fish will be recruited to an area. If there are fewer fish, the seaweeds outgrow the coral. It's a downward death spiral that may be difficult to recover from."

In earlier research, Hay and other researchers demonstrated that keeping fish away from coral reefs fuels the growth of seaweeds, and that certain fish are responsible for eating specific seaweed species. That information could help guide fisheries management by encouraging protection of fish that control the most harmful seaweeds.

"The most damaging seaweed in our study is eaten voraciously by one species of fish, and no other species will touch it," Hay said. "Now that we know that seaweeds can kill coral through these chemical means, it is even more important to understand which herbivores control which seaweeds so we can consider additional protections for these critical fish species, even outside of normal marine protected areas."

(Photo: GIT)

Georgia Tech


The term Hispanic/Latino "encompasses a huge amount of genetic diversity," says a Cornell researcher whose new study shows that populations geographically close to historical slave trade routes and ports have more African ancestry than more distant or inland Latin Americans, who show more Native American influences.

The genetics study by researchers from Cornell, the New York University School of Medicine, the University of Arizona and Stanford University, appears online in the Proceedings of the National Academy of Sciences Early Edition May 5.

"The study reveals unique patterns of ancestry across these populations," said Katarzyna Bryc, graduate student in biological statistics and computational biology and co-lead author of the paper with New York University medical student Christopher Velez. The genes of "these Latino populations tell us about the complexity of migration events involved in the histories of Hispanics/Latinos," Bryc added.

The research also found a sex bias in ancestry contributions in the sample of 100 individuals from Ecuador, Colombia, Dominican Republic and Puerto Rico and 112 Mexicans: All of the studied populations show greater proportions of Native American female and European male ancestries.

The findings have implications for using a genomic perspective in medicine, where knowledge of ancestries may reveal tendencies toward chronic inherited diseases. Previous studies have shown that Latinos with greater European ancestry have a higher risk of breast cancer, while European ancestry correlates with higher susceptibility to asthma in Puerto Ricans, compared with Mexicans, for example.

The genetic analysis shows that individuals from the Dominican Republic, Puerto Rico and to some extent Colombia have more African ancestry, reflecting migrations along the historical slave trade routes. In contrast, Mexicans and Ecuadorians have more Native American ancestry.

When compared with eight sampled Native American populations, the researchers also found that the Native American segments of genomes of North American Hispanics/Latinos (from Mexico, Puerto Rico and Dominican Republic) are genetically more similar to those of the Nahua people (indigenous of Mexico and Central America), while the Native American segments of genomes of South American populations (of Colombia and Ecuador) were most similar to those of the Quechua people.

The term Hispanic or Latino encompasses many different countries, hundreds of millions of individuals, and populations that underwent "different rates of migration from Europe and enslavement from Africa," said Velez. "This, in conjunction with the uneven distribution of native populations throughout the Americas prior to the arrival of the Spanish and Portuguese, has led to marked differences not only between countries in Latin America, but also within them," he added.

The researchers used data on 100 individuals from Ecuador, Colombia, Dominican Republic and Puerto Rico from a repository collected for the New York Cancer Project and housed at North Shore-Long Island Jewish Medical Center, and 112 Mexicans from a dataset collected by GlaxoSmithKline for the Population Reference Sample, a resource for population, disease and pharmacological genetics research.

Cornell University

Wednesday, May 26, 2010



MIT researchers find a way to calculate the effects of Casimir forces, offering a way to keep micromachines’ parts from sticking together.

Predicted in 1948, Casimir forces are complicated quantum forces that affect only objects that are very, very close together. They’re so subtle that for most of the 60-odd years since their prediction, engineers have safely ignored them. But in the age of tiny electromechanical devices like the accelerometers in the iPhone or the micromirrors in digital projectors, Casimir forces have emerged as troublemakers, since they can cause micromachines’ tiny moving parts to stick together.

MIT researchers have developed a powerful new tool for calculating the effects of Casimir forces, with ramifications for both basic physics and the design of microelectromechanical systems (MEMS). One of the researchers’ most recent discoveries using the new tool was a way to arrange tiny objects so that the ordinarily attractive Casimir forces become repulsive. If engineers can design MEMS so that the Casimir forces actually prevent their moving parts from sticking together — rather than causing them to stick — it could cut down substantially on the failure rate of existing MEMS. It could also help enable new, affordable MEMS devices, like tiny medical or scientific sensors, or microfluidics devices that enable hundreds of chemical or biological experiments to be performed in parallel.

Quantum mechanics has bequeathed a very weird picture of the universe to modern physicists. One of its features is a cadre of new subatomic particles that are constantly flashing in and out of existence in an almost undetectably short span of time. (The Higgs boson, a theoretically predicted particle that the Large Hadron Collider in Switzerland is trying to detect for the first time, is expected to appear for only a few sextillionths of a second.) There are so many of these transient particles in space — even in a vacuum — moving in so many different directions that the forces they exert generally balance each other out. For most purposes, the particles can be ignored. But when objects get very close together, there’s little room for particles to flash into existence between them. Consequently, there are fewer transient particles in between the objects to offset the forces exerted by the transient particles around them, and the difference in pressure ends up pushing the objects toward each other.

In the 1960s, physicists developed a mathematical formula that, in principle, describes the effects of Casimir forces on any number of tiny objects, with any shape. But in the vast majority of cases, that formula remained impossibly hard to solve. “People think that if you have a formula, then you can evaluate it. That’s not true at all,” says Steven Johnson, an associate professor of applied mathematics, who helped develop the new tools. “There was a formula that was written down by Einstein that describes gravity. They still don’t know what all the consequences of this formula are.” For decades, the formula for Casimir forces was in the same boat. Physicists could solve it for only a small number of cases, such as that of two parallel plates. Then, in 2006, came a breakthrough: MIT Professor of Physics Mehran Kardar demonstrated a way to solve the formula for a plate and a cylinder.
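The parallel-plate case has a well-known closed form: the attractive pressure between two ideal conducting plates a distance d apart is P = π²ħc/(240·d⁴). A quick evaluation of that textbook formula (not the MIT group's method) shows why the force is negligible at everyday scales yet dominant at MEMS scales:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299_792_458.0       # speed of light, m/s

def casimir_pressure(d):
    """Attractive Casimir pressure (in pascals) between two ideal,
    perfectly conducting parallel plates a distance d (meters) apart:
        P = pi**2 * hbar * c / (240 * d**4)
    """
    return math.pi**2 * HBAR * C / (240 * d**4)

print(casimir_pressure(100e-9))  # ~13 Pa at 100 nm -- significant for MEMS parts
print(casimir_pressure(1e-6))    # ~0.0013 Pa at 1 micron -- already negligible
```

The d⁻⁴ scaling is the whole story: halving the gap multiplies the pressure by sixteen, which is why sub-100-nanometer MEMS gaps are where stiction becomes a real failure mode.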

In a paper appearing in Proceedings of the National Academy of Sciences, Johnson, physics PhD students Alexander McCauley and Alejandro Rodriguez (the paper’s lead author), and John Joannopoulos, the Francis Wright Davis Professor of Physics, describe a way to solve Casimir-force equations for any number of objects, with any conceivable shape.

The researchers’ insight is that the effects of Casimir forces on objects 100 nanometers apart can be precisely modeled using objects 100,000 times as big, 100,000 times as far apart, immersed in a fluid that conducts electricity. Instead of calculating the forces exerted by tiny particles flashing into existence around the tiny objects, the researchers calculate the strength of an electromagnetic field at various points around the much larger ones. In their paper, they prove that these computations are mathematically equivalent.

For objects with odd shapes, calculating electromagnetic-field strength in a conducting fluid is still fairly complicated. But it’s eminently feasible using off-the-shelf engineering software.

“Analytically,” says Diego Dalvit, a specialist in Casimir forces at the Los Alamos National Laboratory, “it’s almost impossible to do exact calculations of the Casimir force, unless you have some very special geometries.” With the MIT researchers’ technique, however, “in principle, you can tackle any geometry. And this is useful. Very useful.”

Since Casimir forces can cause the moving parts of MEMS to stick together, Dalvit says, “One of the holy grails in Casimir physics is to find geometries where you can get repulsion” rather than attraction. And that’s exactly what the new techniques allowed the MIT researchers to do. In a separate paper published in March, physicist Michael Levin of Harvard University’s Society of Fellows, together with the MIT researchers, described the first arrangement of materials that enables Casimir forces to cause repulsion in a vacuum.

Dalvit points out, however, that physicists using the new technique must still rely on intuition when devising systems of tiny objects with useful properties. “Once you have an intuition of what geometries will cause repulsion, then the [technique] can tell you whether there is repulsion or not,” Dalvit says. But by themselves, the tools cannot identify geometries that cause repulsion.

(Photo: Alejandro Rodriguez)




MIT-led Ignitor reactor could be the world’s first to reach major milestone, perhaps paving the way for eventual power production.

Russia and Italy have entered into an agreement to build a new fusion reactor outside Moscow that could become the first such reactor to achieve ignition, the point where a fusion reaction becomes self-sustaining instead of requiring a constant input of energy. The design for the reactor, called Ignitor, originated with MIT physics professor Bruno Coppi, who will be the project’s principal investigator.

The concept for the new reactor builds on decades of experience with MIT’s Alcator fusion research program, also initiated by Coppi, which in its present version (called Alcator C-Mod) has the highest magnetic field and highest plasma pressure (two of the most important measures of performance in magnetic fusion) of any fusion reactor, and is the largest university-based fusion reactor in the world.

The key ingredient in all fusion experiments is plasma, a kind of hot gas made up of charged particles such as atomic nuclei and electrons. In fusion reactors, atomic nuclei — usually of isotopes of hydrogen called deuterium and tritium — are forced together through a combination of heat and pressure to overcome their natural electrostatic repulsion. When the nuclei join together, or fuse, they release prodigious amounts of energy.
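The "prodigious amounts of energy" can be quantified: for the deuterium-tritium reaction, the release follows from the mass defect via E = Δm·c². A short check, using standard tabulated atomic masses rather than figures from the article:

```python
# Energy released by D + T -> He-4 + n, from the mass defect (E = dm * c^2).
# Atomic masses in unified atomic mass units (standard tabulated values).
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"{energy_mev:.1f} MeV released per D-T fusion")  # ~17.6 MeV
```

Per unit of fuel mass, that is roughly four times the yield of uranium fission and millions of times that of any chemical reaction, which is why fusion remains so attractive despite the engineering difficulty.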

Ignitor would be about twice the size of Alcator C-Mod, with a main donut-shaped chamber 1.3 meters across, and have an even stronger magnetic field. It will be much smaller and less expensive than the major international fusion project called ITER (with a chamber 6.2 meters across), currently under construction in France. Though originally designed to achieve ignition, the ITER reactor has been scaled back and is now not expected to reach that milestone.

The Ignitor reactor, Coppi says, will be “a very compact, inexpensive type of machine,” and unlike the larger ITER could be ready to begin operations within a few years. Its design is based on a particularly effective combination of factors that researchers unexpectedly discovered during the many years of running the Alcator program, and that were later confirmed in experiments at other reactors. Together, these factors produce especially good confinement of the plasma and a high degree of purity (impurities in the hot gases can be a major source of inefficiency). The new design aims to preserve these features to produce the highest plasma current densities — the amount of electric current in a given area of plasma. The design also has additional structures needed to produce and confine burning fusion plasmas in order to create the conditions needed for ignition, Coppi says.

Coppi plans to work with the Italian ministry of research and Evgeny Velikhov, president of the Kurchatov Institute in Moscow, to finalize the distribution of tasks for the machine, the core of which is to be built in Italy and then installed in Troitsk, near Moscow, on the site of that institute’s present Triniti reactor. Velikhov, as it happens, is also the chair of the ITER council. Coppi says of these two different programs, “there’s no competition, we are complementary.”

Although seen as a possible significant contributor to the world’s energy needs because it would be free of greenhouse-gas emissions, practical fusion power remains at least two decades away, most scientists in the field agree. But the initial impetus for setting up the Alcator reactor in the 1970s had more to do with pure science: “It was set up to simulate the X-ray stars that we knew at that time,” says Coppi, whose research work has as much to do with astrophysics as with energy. Stars are themselves made of plasma and powered by fusion, and the only way to study their atomic-level behavior in detail is through experiments inside fusion reactors.

Once the reactor was in operation, he says, “we found we were producing plasmas with unusual properties,” and realized this might represent a path to the long-sought goal of fusion ignition.

Roscoe White, a distinguished research fellow at the Princeton Plasma Physics Laboratory, says that “the whole point of Ignitor is to find out how a burning plasma behaves, and there could be pleasant or unpleasant results coming from it. Whatever is learned is a gain. Nobody knows exactly how it will perform, that is the point of the experiment.” But while its exact results are unknown, White says it is important to pursue this project in addition to other approaches to fusion. “With our present knowledge it is very risky to commit the program to a single track reactor development — our knowledge is still in flux,” he says.

In addition, he says, “the completion of ITER, the only currently projected burning plasma experiment, is decades off. Experimental data concerning a burning plasma would be very welcome, and could lead to important results helping the cause of practical fusion power.” Furthermore, the Ignitor approach, if all goes well, could lead to more compact and economical future reactors: Some recent results from existing reactors, plus new information to be gained from Ignitor, “could lead to reactor designs much smaller and simpler than ITER,” he says.

Coppi remains especially interested in the potential of the new reactor to make new discoveries about fundamental physics. Quoting the late MIT physicist and Institute Professor Bruno Rossi, Coppi says, “whenever you do experiments in an unknown regime, you will find something new.” The new machine’s findings, he suggests, “will have a strong impact on astrophysics.”

(Photo: Bruno Coppi)




Marriage is more beneficial for men than for women - at least for those who want a long life. Previous studies have shown that men with younger wives live longer. While it had long been assumed that women with younger husbands also live longer, in a new study Sven Drefahl from the Max Planck Institute for Demographic Research (MPIDR) in Rostock, Germany, has shown that this is not the case. Instead, the greater the age difference from the husband, the lower the wife’s life expectancy. This is the case irrespective of whether the woman is younger or older than her spouse.

Where life expectancy is concerned, choosing a wife is easy for men -- the younger the better. The mortality risk of a husband who is seven to nine years older than his wife is reduced by eleven percent compared to couples where both partners are the same age. Conversely, a man dies earlier when he is younger than his spouse.

For years, researchers have thought that this data holds true for both sexes. They assumed an effect called "health selection" was in play; those who select younger partners are able to do so because they are healthier and thus already have a higher life expectancy. It was also thought that a younger spouse has a positive psychological and social effect on an older partner and can be a better caretaker in old age, thereby helping to extend the partner’s life.

"These theories now have to be reconsidered", says Sven Drefahl from MPIDR. "It appears that the reasons for mortality differences due to the age gap of the spouses remain unclear." Using data from almost two million Danish couples, Drefahl was able to eliminate the statistical shortcomings of earlier research, and showed that the best choice for a woman is to marry a man of exactly the same age; an older husband shortens her life, and a younger one even more so.

According to Drefahl’s study, published May 12th in the journal Demography, women marrying a partner seven to nine years younger increase their mortality risk by 20 percent. Hence, "health selection" can’t be true for women; healthy women apparently don’t go chasing after younger men. While many studies on mate selection show that women mostly prefer men the same age, most of them end up with an older husband. In the United States, on average a groom is 2.3 years older than his bride. "It’s not that women couldn’t find younger partners; the majority just don’t want to", says Sven Drefahl.

It is also doubtful that older wives benefit psychologically and socially from a younger husband. This effect only seems to work for men. "On average, men have fewer and lower-quality social contacts than women," says Sven Drefahl. Thus, unlike a younger wife, a younger husband wouldn't help extend the life of his older wife by taking care of her, going for walks with her and enjoying late life together. She already has friends for that. The older man, however, doesn't.

This means that women don’t benefit by having a younger partner, but why does he shorten their lives? "One of the few possible explanations is that couples with younger husbands violate social norms and thus suffer from social sanctions," says Sven Drefahl. Since marrying a younger husband deviates from what is regarded as normal, these couples could be regarded as outsiders and receive less social support. This could result in a less joyful and more stressful life, reduced health, and finally, increased mortality.

While the new MPIDR study shows that marriage disadvantages most women whose husband is not the same age, it is not true that marriage in general is unfavorable. Being married raises the life expectancy of both men and women above that of the unmarried. Women are also generally better off than men; worldwide, their life expectancy exceeds that of men by a few years.

(Photo: Sven Drefahl)

Max Planck Institute




Selected Science News. Copyright 2008 All Rights Reserved.