Friday, July 30, 2010

REMARKABLE FOSSIL CAVE SHOWS HOW ANCIENT MARSUPIALS GREW

The discovery of a remarkable 15-million-year-old Australian limestone cave packed with fossilised animal bones has revealed almost the entire life cycle of a large prehistoric marsupial, from suckling young in the pouch still cutting their milk teeth to elderly adults.

In an unprecedented find, a team of University of New South Wales (Sydney, Australia) researchers has unearthed from the cave floor hundreds of beautifully preserved fossils of the extinct browsing wombat-like marsupial Nimbadon lavarackorum, along with the remains of galloping kangaroos, primitive bandicoots, a fox-sized thylacine and forest bats.

By comparing the skulls of 26 different Nimbadon individuals that died in the cave at varying stages of life, the team has been able to show that its babies developed in much the same way as marsupials do today, probably being born after only a month's gestation and crawling to the mother's pouch to complete their early development.

Details of the find at a site known as AL90 in the famous Riversleigh World Heritage fossil field in Queensland are published in the Journal of Vertebrate Paleontology, by a team led by Dr Karen Black, of the UNSW School of Biological, Earth and Environmental Sciences. The research was supported by the Xstrata Community Partnership Program North Queensland and the Australian Research Council.

"This is a fantastic and incredibly rare site," says Dr Black. "The exceptional preservation of the fossils has allowed us to piece together the growth and development of Nimbadon from baby to adult. So far 26 skulls - ranging in age from suckling pouch young and juveniles right through to elderly adults - have been recovered, as well as associated skeletons.

"The animals appear to have plunged to their deaths through a vertical cave entrance that may have been obscured by vegetation and acted as a natural pit-fall trap. These animals – including mothers with pouch young - either unwittingly fell to their deaths or survived the fall only to be entombed and unable to escape.

"The ceiling and walls of the cave were eroded away millions of years ago, but the floor of the cave remains at ground level. We have literally only scratched its surface, with thousands more bones evident at deeper levels in the deposit.'

The site is also scientifically important because it documents a critical time in the evolution of Australia's flora and fauna when lush greenhouse conditions were giving way to a long, slow drying out that fundamentally reshaped the continent's cargo of life as rainforests retreated.

Dr Black notes that the Nimbadon skulls also reveal that early in life, the emphasis of its growth was on the development of bones at the front of the face, to help the baby to suckle from its mother. As it grew older and its diet changed to eating leaves, the rest of the skull developed and grew quite massive by way of a series of bony chambers surrounding the brain.

Team member Professor Mike Archer says: "Yet we found that its brain was quite small and stopped growing relatively early in its life. We think it needed a large surface area of skull to provide attachments for all the muscle power it required to chew large quantities of leaves, so its skull features empty areas, or sinus cavities. Roughly translated, this may be the first demonstration of how a growing mammal 'pays' for the need to eat more greens - by becoming an 'airhead'.

"The abundance of Nimbadon fossils also suggests that they travelled in family groups or perhaps even larger gatherings: it's possible that this also reflects the beginning of mob behaviour in herbivorous marsupials, such as we see today in grey kangaroos."

(Photo: Karen Black)

University of New South Wales

MAYA KING’S TOMB DISCOVERED IN GUATEMALA

A well-preserved tomb of an ancient Maya king has been discovered in Guatemala by a team of archaeologists led by Brown University’s Stephen Houston. The tomb is packed with carvings, ceramics, textiles, and the bones of six children, who may have been sacrificed at the time of the king’s death.

The team uncovered the tomb, which dates from about 350 to 400 A.D., beneath the El Diablo pyramid in the city of El Zotz in May. The news was made public yesterday during a press conference in Guatemala City, hosted by the Ministry of Culture and Sports, which authorized the work.

Before making the actual discovery, Houston said the team thought “something odd” was happening in the deposit they were digging. They knew a small temple had been built in front of a sprawling structure dedicated to the sun god, an emblem of Maya rulership. “When we sunk a pit into the small chamber of the temple, we hit almost immediately a series of ‘caches’ — blood-red bowls containing human fingers and teeth, all wrapped in some kind of organic substance that left an impression in the plaster. We then dug through layer after layer of flat stones, alternating with mud, which probably is what kept the tomb so intact and airtight.”

Then on May 29, 2010, Houston was with a worker who came to a final earthen layer. “I told him to remove it, and then, a flat stone. We’d been using a small stick to probe for cavities. And, on this try, the stick went in, and in, and in. After chipping away at the stone, I saw nothing but a small hole leading into darkness.”

They lowered a bare light bulb into the hole, and suddenly Houston saw “an explosion of color in all directions — reds, greens, yellows.” It was a royal tomb filled with organics Houston says he’d never seen before: pieces of wood, textiles, thin layers of painted stucco, cord.

“When we opened the tomb, I poked my head in and there was still, to my astonishment, a smell of putrefaction and a chill that went to my bones,” Houston said. “The chamber had been so well sealed, for over 1,600 years, that no air and little water had entered.”

The tomb itself is about 6 feet high, 12 feet long, and 4 feet wide. “I can lie down comfortably in it,” Houston said, “although I wouldn’t want to stay there.”

It appears the tomb held an adult male, but the bone analyst, Andrew Scherer, assistant professor of anthropology at Brown, has not yet confirmed the finding. So far, it seems likely that there are six children in the tomb, some represented by whole bodies and probably two by skulls alone.

And who was this man? Though the findings are still very new, the group believes the tomb likely belongs to a king known until now only from hieroglyphic texts — one of Houston’s specialties in Maya archaeology. “These items are artistic riches, extraordinarily preserved from a key time in Maya history,” said Houston. “From the tomb’s position, time, richness, and repeated constructions atop the tomb, we believe this is very likely the founder of a dynasty.”

Houston says the tomb shows the ruler going to his grave as a ritual dancer. “He has all the attributes of this role, including many small ‘bells’ of shell with, probably, dog canines as clappers. There is a chance, too, that his body, which rested on a raised bier that collapsed to the floor, wore an elaborate headdress with small glyphs on it. One of his hands may have held a sacrificial blade.”

The stone expert on site, Zachary Hruby, suspects the blade was used for cutting and grinding through bone or some other hard material. Its surface seems to be covered with red organic residue. Though the substance still needs to be tested, “it doesn’t take too much imagination to think that this is blood,” Houston said.

“We still have a great deal of work to do,” Houston said. “Remember, we’ve only been out of the field for a few weeks and we’re still catching our breath after a very difficult, technical excavation. Royal tombs are hugely dense with information and require years of study to understand. No other deposits come close.”

(Photo: Arturo Godoy)

Brown University

AUTISM HAS UNIQUE VOCAL SIGNATURE, NEW TECHNOLOGY REVEALS

A new automated vocal analysis technology could fundamentally change the study of language development as well as the screening for autism spectrum disorders and language delay, reports a study published online July 19 in the Proceedings of the National Academy of Sciences.

The LENA™ (Language Environment Analysis) system automatically labeled infant and child vocalizations from the recordings; an automated acoustic analysis designed by the researchers then showed that the pre-verbal vocalizations of very young children with autism are distinctly different from those of typically developing children, discriminating the two groups with 86 percent accuracy.

The system also differentiated typically developing children and children with autism from children with language delay based on the automated vocal analysis.

The researchers analyzed 1,486 all-day recordings from 232 children (more than 3.1 million automatically identified child utterances) with an algorithm based on 12 acoustic parameters associated with vocal development. The most important of these parameters proved to be the ones targeting syllabification: the ability of children to produce well-formed syllables with rapid movements of the jaw and tongue during vocalization. Infants show voluntary control of syllabification and voice in the first months of life and refine this skill as they acquire language.

The autistic sample showed little evidence of development on the parameters as indicated by low correlations between the parameter values and the children's ages (from 1 to 4 years). On the other hand, all 12 parameters showed statistically significant development for both typically developing children and those with language delays.
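The heart of that group comparison, correlating each acoustic parameter with age within each group, is easy to sketch. The toy Python example below is illustrative only: the data are synthetic, and the linear trend with age for the typically developing group is an assumption made for demonstration, not the study's published algorithm.

```python
# Toy sketch of parameter-vs-age correlations; synthetic data, not LENA's algorithm.
import numpy as np

rng = np.random.default_rng(0)

n = 200
ages = rng.uniform(12, 48, n)                               # ages in months (1-4 years)
typical = 0.05 * ages[:, None] + rng.normal(0, 1, (n, 12))  # 12 parameters rising with age
autistic = rng.normal(0, 1, (n, 12))                        # 12 parameters flat with age

def age_correlations(params, ages):
    """Pearson r between each acoustic parameter (column) and age."""
    p = (params - params.mean(axis=0)) / params.std(axis=0)
    a = (ages - ages.mean()) / ages.std()
    return p.T @ a / len(ages)

print("typically developing:", np.round(age_correlations(typical, ages), 2))
print("autistic sample:     ", np.round(age_correlations(autistic, ages), 2))
```

In the real study, the analogous statistics over the 12 published parameters were strongly positive for typically developing and language-delayed children and near zero for the autistic sample.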

The research team, led by D. Kimbrough Oller, professor and chair of excellence in audiology and speech language pathology at the University of Memphis, called the findings a proof of concept that automated analysis of massive samples of vocalizations can now be included in the scientific repertoire for research on vocal development.

Although aberrations in the speech (or lack of it) of children with autism spectrum disorders have been examined by researchers and clinicians for more than 20 years, vocal characteristics are not included in the standard diagnostic criteria for autism spectrum disorders, said Steven F. Warren, professor of applied behavioral science and vice provost for research at the University of Kansas, who contributed to the study and was among the first to see the potential of the technology for autism spectrum disorder screening.

"A small number of studies had previously suggested that children with autism have a markedly different vocal signature, but until now, we have been held back from using this knowledge in clinical applications by the lack of measurement technology," said Warren.

Warren predicts that LENA, which allows the inexpensive collection and analysis of quantities of data previously unimaginable in language research, could significantly impact the screening, assessment and treatment of autism and the behavioral sciences in general.

Since the analysis is based not on words but on sound patterns, the technology could theoretically be used to screen speakers of any language for autism spectrum disorders, Warren said. "The physics of human speech are the same in all people as far as we know."

Warren says that children with autism spectrum disorders can be diagnosed at 18 months but that the median age of diagnosis is 5.7 years in the United States.

"This technology could help pediatricians screen children for ASD to determine if a referral to a specialist for a full diagnosis is required and get those children into earlier and more effective treatments."

LENA consists of a digital language processor and language analysis software. The processor fits into the pocket of specially designed children's clothing and records everything the child vocalizes, reliably distinguishing the child's vocalizations from cries and vegetative sounds, from other voices, and from extraneous environmental sounds.

Recordings with the device have been collected since 2006. Parents responded to advertisements and indicated whether their children had been diagnosed with autism or language delay. A speech-language clinician employed by the project also evaluated many of the children with a reported diagnosis of language delay. Many of the parents of children with language delay, and the parents of all of the children with autism, supplied documentation from the diagnosing clinicians, who were independent of the research.

The recordings were made by the parents at home and in the children's other natural environments, simply by turning the recorder on and placing it in the special clothing, which the child then wore all day.

The discovery that recordings of the autistic children could be differentiated from those of the typically developing children by the totally objective method of automated vocal analysis inspired the researchers to consider the possibility of both earlier screening and diagnosis and earlier intervention for children with autism.

"Autism interventions remain expensive and arduous. This tool may help us to develop cost-effective treatments and better understand how they work and how to keep them working," said Warren.

LENA could allow parents to continue and supplement language enrichment therapy at home and assess their own effectiveness for themselves, Warren said. "In this way, LENA could function similarly to the way a pedometer measures how much exercise one gets from walking."

University of Kansas

Thursday, July 29, 2010

WHAT IS THE SIZE OF THE PROTON?

The proton, one of the primary components of matter, could be smaller than previously thought. This is the surprising result experimentally established by an international collaboration of physicists, in which the Laboratoire Kastler Brossel (ENS Paris / UPMC / CNRS) is actively involved. This new measurement of the radius of the proton, obtained with extreme accuracy, could call into question certain predictions of quantum electrodynamics, one of the fundamental theories of quantum physics, or even the value of the Rydberg constant (the most accurately determined physical constant to date).

Published in Nature on 8 July, this work is featured on the journal's cover.

The nuclei of atoms are made up of protons and neutrons, around which electrons orbit. These three particles (protons, neutrons and electrons) constitute practically all of the Earth's matter. Whereas the electron is considered a point-like, “sizeless” particle, the proton, which is composed of quarks, is an extended object. Until now, only two methods have been used to measure its radius. Both are based on the interaction between a proton and an electron, focusing either on collisions between an electron and a proton or on the hydrogen atom (made up of an electron and a proton). The value obtained, and the one used by physicists, is 0.877 femtometers (+/- 0.007).

In order to determine the radius of the proton more accurately, the physicists used “muonic hydrogen”, in which the electron is replaced by a muon, a negatively charged elementary particle. “This idea goes back to the 1970s”, explains François Nez, CNRS researcher at the Laboratoire Kastler Brossel (LKB). “However, techniques needed to improve in order to make it possible.” The hydrogen atom, the simplest of existing atoms, has often been the best object for studying fundamental questions in physics. But why replace the electron with a muon? A muon is about 200 times heavier than an electron, so, according to the laws of quantum physics, it moves 200 times nearer to the proton than the electron in “normal” hydrogen does. The muon is therefore much more sensitive to the size of the proton, and its atomic binding energy depends strongly on the proton's size. Measuring this energy allows scientists to determine the radius of the proton much more accurately (0.1% accuracy) than measurements using electrons (around 1% accuracy).
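As a rough, back-of-the-envelope illustration of the “200 times nearer” statement (our addition, using textbook Bohr-model scaling rather than anything from the CNRS text):

```latex
% Bohr radius of a hydrogen-like atom scales inversely with the orbiting
% particle's mass (reduced-mass corrections ignored in this rough estimate):
\[
  a = \frac{\hbar}{m c \alpha}
  \qquad\Rightarrow\qquad
  \frac{a_\mu}{a_e} = \frac{m_e}{m_\mu} \approx \frac{1}{200},
\]
% and the overlap of the orbiting particle with the proton grows as
\[
  |\psi(0)|^2 \propto a^{-3}
  \qquad\Rightarrow\qquad
  \left(\frac{a_e}{a_\mu}\right)^{3} \approx 200^{3} \approx 8 \times 10^{6}.
\]
```

The muon therefore overlaps the proton millions of times more than an electron does, which is why the energy levels of muonic hydrogen are so much more sensitive to the proton's finite size. (Using the precise muon mass with the reduced-mass correction gives a factor closer to 186; 200 is the usual round figure.)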

To achieve this, an infrared laser had to be specifically designed. The six LKB researchers, from CNRS and UPMC, mainly provided their expertise in its construction, essentially with regard to the “titanium-sapphire” part of the laser chain. The objective was to design a laser whose emission wavelength (in other words, the color of the laser light) can be adjusted at will. Since a muon disintegrates in about 2 millionths of a second, the measurement on the muonic hydrogen must be carried out within this very short lapse of time, so the laser shot needs to be triggered very rapidly (in around 1 millionth of a second). A first measurement campaign at the end of 2002 allowed the experimental setup developed by LKB to be put through its paces. LKB was also responsible for measuring the emission wavelength of the complete laser system. This involved targeting, one by one, the different wavelengths absorbed by the muonic hydrogen, making it possible to deduce the energy of the muon around the proton and thus the size of the proton.

After several series of measurements conducted with the accelerator of the Paul Scherrer Institute (PSI) in Switzerland, where the beam of muons is particularly intense, the researchers obtained an unexpected value for the radius of the proton. In fact, this result differs from that obtained using electrons. It amounts to 0.8418 femtometers (+/- 0.0007) instead of 0.877 femtometers for measurements using electrons. “We did not envisage that there could be any divergence between known values and our measurements”, points out LKB director Paul Indelicato. This difference is far too big to be put down to measurement inaccuracies and the team of scientists is currently attempting to explain this discrepancy. It could call into question the most accurately tested theory in physics, namely the theory of quantum electrodynamics, which is one of the cornerstones of modern physics. Another possibility is that the current value of the Rydberg constant, the physical constant determined with the greatest accuracy so far, could need to be revised. The researchers plan to repeat this experiment in the near future with muonic helium (instead of hydrogen), which could shed new light on these unexpected results.

CNRS

COULD OUR MINDS BE TRICKED INTO SATISFYING OUR STOMACHS?

The key to losing weight could lie in manipulating our beliefs about how filling a food will be before we eat it, suggesting that portion control is largely a matter of perception.

BBSRC-funded studies showed that participants were more satisfied for longer periods of time after consuming varying quantities of food when they were led to believe that portion sizes were larger than they actually were.

Memories of how satisfying previous meals were also played a causal role in determining how long those meals staved off hunger. Together, these results suggest that memory and learning play an important role in governing our appetite.

In the first experiment, participants were shown the ingredients of a fruit smoothie. Half were shown a small portion of fruit and half were shown a large portion. They were then asked to assess the 'expected satiety' of the smoothie and to provide ratings before and three hours after consumption. Participants who were shown the large portion of fruit reported significantly greater fullness, even though all participants were given the same quantity of fruit.

In a second experiment, researchers manipulated the 'actual' and 'perceived' amounts of soup that people consumed. Using a soup bowl connected to a hidden pump beneath it, the amount of soup in the bowl was increased or decreased as participants ate, without their knowledge. Three hours after the meal, it was the perceived (remembered) amount of soup in the bowl, and not the actual amount consumed, that predicted post-meal hunger and fullness ratings.

The findings, which will be presented by researchers from the University of Bristol at this month's annual conference of the Society for the Study of Ingestive Behaviour (SSIB), could have implications for more effective labelling of diet foods.

"The extent to which a food can alleviate hunger is not determined solely by its physical size, energy content, and so on. Instead, it is influenced by prior experience with a food, which affects our beliefs and expectations about satiation. This has an immediate effect on the portion sizes that we select and an effect on the hunger that we experience after eating," said Dr Jeff Brunstrom, Reader in Behavioural Nutrition at Bristol university's Department of Experimental Psychology.

"Labels on 'light' and 'diet' foods might lead us to think we will not be satisfied by such foods, possibly leading us to eat more afterwards," added Dr Brunstrom. "One way to militate against this, and indeed accentuate potential satiety effects, might be to emphasise the satiating properties of a food using labels such as 'satisfying' or 'hunger relieving'."

BBSRC

KEEP YOUR FINGERS CROSSED!: HOW SUPERSTITION IMPROVES PERFORMANCE

Don't scoff at those lucky rabbit feet. New research shows that having some kind of lucky token can actually improve your performance – by increasing your self-confidence.

"I watch a lot of sports, and I read about sports, and I noticed that very often athletes – also famous athletes – hold superstitions," says Lysann Damisch of the University of Cologne. Michael Jordan wore his college team shorts underneath his NBA uniform for good luck; Tiger Woods wears a red shirt on tournament Sundays, usually the last and most important day of a tournament. "And I was wondering, why are they doing so?" Damisch thought that a belief in superstition might help people do better by improving their confidence. With her colleagues Barbara Stoberock and Thomas Mussweiler, also of the University of Cologne, she designed a set of experiments to see if activating people's superstitious beliefs would improve their performance on a task.

In one of the experiments, volunteers were told to bring a lucky charm with them. The researchers then took it away to take a picture. People brought in all kinds of items, from old stuffed animals to wedding rings to lucky stones. Half of the volunteers were given their charm back before the test started; the other half were told there was a problem with the camera equipment and they would get it back later. Volunteers who had their lucky charm did better at a memory game on the computer, and other tests showed that this difference arose because they felt more confident. They also set higher goals for themselves. Just wishing someone good luck – with "I press the thumbs for you," the German version of crossing your fingers – improved volunteers' success at a task that required manual dexterity. The research is published in Psychological Science, a journal of the Association for Psychological Science.

Of course, even Michael Jordan lost basketball games sometimes. "It doesn't mean you win, because of course winning and losing is something else," says Damisch. "Maybe the other person is stronger."

Psychological Science

Wednesday, July 28, 2010

PERSONALIZED APPROACH TO SMOKING CESSATION MAY BE REALITY IN THREE TO FIVE YEARS

A personalized approach to smoking cessation therapy is quickly taking shape. New evidence from Duke University Medical Center and the National Institute on Drug Abuse (NIDA) suggests that combining information about a smoker’s genetic makeup with his or her smoking habits can accurately predict which nicotine replacement therapy will work best.

“Within three to five years, it’s conceivable we’ll have a practical test that could take the guesswork out of choosing a smoking-cessation therapy,” says Jed Rose, PhD, director of Duke’s Center for Nicotine and Smoking Cessation Research. “It could be used by clinicians to guide the selection of treatment and appropriate dose for each smoker, and hopefully increase cessation success rates.”

Statistics show 70 percent of the nation’s 46 million smokers say they want to quit, yet successfully kicking the habit has not proven easy. In previously published reports, fewer than 5 percent of smokers who tried to quit on their own, without any aids, were still smoke-free one year later. Long-term quit rates for smokers who relied on pharmacological intervention hover under 25 percent.

The research, which is published online in the July-August issue of Molecular Medicine, follows previous work done by Rose and George Uhl, MD, PhD, chief of molecular neurobiology research at NIDA.

After conducting a genome-wide scan of 520,000 genetic markers taken from blood samples of smokers in several quit-smoking trials, they identified genetic patterns that appear to influence how well individuals respond to specific smoking cessation treatments.

The latest research focuses on combining the information from those individual genetic markers, called SNPs, into one number that represents a “quit success score,” Rose says. The score and the smokers’ nicotine dependence, assessed via a simple questionnaire, help predict an individual’s likelihood of quitting, as well as whether a high-dose or low-dose nicotine patch would work best.
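To make the idea of collapsing many genetic markers into one number concrete, here is a deliberately simplified Python sketch. Every specific in it is invented: the article does not publish the SNPs, weights or cutoffs, and the use of a Fagerström-style dependence questionnaire is our assumption.

```python
# Hypothetical "quit success score": invented SNP names, weights and cutoffs.
def quit_success_score(genotypes, weights):
    """Weighted sum over SNP genotypes coded 0/1/2 (minor-allele count)."""
    return sum(weights[snp] * genotypes.get(snp, 0) for snp in weights)

def recommend_patch(score, dependence):
    """Toy decision rule mirroring the article's description: high nicotine
    dependence plus an unfavorable genetic score -> high-dose (42 mg) patch."""
    high_dependence = dependence >= 6   # questionnaire cutoff (assumed)
    unfavorable = score < 0.0           # score cutoff (assumed)
    return "42 mg patch" if (high_dependence and unfavorable) else "21 mg patch"

weights = {"rs0001": -0.8, "rs0002": 0.5, "rs0003": -0.3}   # invented markers/weights
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = quit_success_score(genotypes, weights)
print(score, "->", recommend_patch(score, dependence=7))    # -1.1 -> 42 mg patch
```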

In the trial, 479 cigarette smokers who smoked at least 10 cigarettes per day and wanted to quit were categorized as either high- or low-dependence based on their level of nicotine dependence. The smokers in each group were then randomly assigned to wear two nicotine skin patches daily, together delivering either a high dose (42 mg) or a standard dose (21 mg).

Patches were worn for two weeks prior to their quit date, and the nicotine doses were reduced gradually over the 10 weeks following their quit date. Participants were given denicotinized cigarettes during the two weeks before the quit date to minimize any potential adverse effects from the high dose nicotine patches. The treatment phase lasted for 12 weeks in all.

DNA was extracted from participants’ blood and was used to assess a quit-smoking success genetic score.

At the six-month follow-up, the researchers were able to confirm which smokers fared better or worse on the high-dose compared with the low-dose patch.

“The genotype score was part of what predicted successful abstinence. In the future such a score could help us make our initial treatment decisions,” said Rose. “People who had both high nicotine dependence and a low or unfavorable quit success genetic score seemed to benefit markedly from the high-dose nicotine patch, while people who had less dependence on nicotine did better on the standard patch.”

Further studies are needed to replicate these results, and to expand the research to include therapies like varenicline (Chantix, by Pfizer) and bupropion hydrochloride (Zyban, by GlaxoSmithKline). But the potential this work holds for the future is significant, Rose says.

“Right now there is no treatment algorithm that tells a clinician or smoker which treatment is likely to work for them,” says Rose. “That’s what we are trying to do. We want to tailor treatment and give informative guidance to clinicians about what should be tried first to maximize smoking cessation success.”

This study was the result of a collaboration between an investigator supported by the National Institutes of Health (NIH) Intramural Research Program, National Institute on Drug Abuse, and Duke University researchers who received a grant from Philip Morris USA. The company had no role in the planning or execution of the study, data analysis, or publication of results.

Duke University Medical Center

GEOSCIENTISTS FIND CLUES TO WHY FIRST SUMATRAN EARTHQUAKE WAS DEADLIER THAN SECOND

An international team of geoscientists has uncovered geological differences between two segments of an earthquake fault that may explain why the 2004 Sumatra Boxing Day tsunami was so much more devastating than a second earthquake-generated tsunami three months later. The finding could help solve what had been a lingering mystery for earthquake researchers.

The quakes were caused by ruptures on adjacent segments of the same fault. One key difference was that the southern part of the fault that ruptured in 2004, producing the larger quake and tsunami, appears bright on subsurface seismic images, possibly because the fault zone is of lower density than the surrounding sediments. In the segment that ruptured in 2005, there was no evidence for such a low-density fault zone. This and several other differences meant the fault slipped over a much longer segment, and reached much closer to the seafloor, in the first quake. Because tsunami waves are generated by the motion of the seafloor, a quake that moves more seafloor creates larger tsunamis.

Early in the morning of Dec. 26, 2004, a powerful undersea earthquake started off the west coast of Sumatra, Indonesia, and ruptured about 1,200 kilometers (750 miles) to the north. The resulting tsunami caused devastation along the coastlines bordering the Indian Ocean, with waves up to 30 meters (100 feet) high inundating coastal communities. With very little warning of the impending disaster, more than 230,000 people died and millions were made homeless.

Three months later, in March 2005, another strong earthquake (significantly smaller than the 2004 event) occurred immediately to the south, but it triggered only a relatively small tsunami that claimed far fewer lives.

A team of researchers from the University of Southampton in the United Kingdom, the University of Texas at Austin, the Agency for the Assessment and Application of Technology in Indonesia and the Indonesian Institute of Sciences has discovered one clue as to why the two earthquakes were so different. Working aboard the research vessel Sonne, the scientists used seismic instruments to probe layers of sediment beneath the seafloor with sound waves.

They discovered a number of unusual features at the rupture zone of the 2004 earthquake, such as the seabed topography, the way the sediments are deformed, and the locations of the small aftershocks that followed the main earthquake.

They found the southern end of the 2004 rupture zone was unique in one other key way. To understand that requires a little background on how and why earthquakes happen there at all.

The largest undersea earthquakes occur at subduction zones, such as the one west of Indonesia, where one tectonic plate is forced (or subducts) under another. This subduction doesn't happen smoothly, however: the plates stick and then slip or rupture, releasing vast amounts of stored energy as an earthquake. The boundary between the overriding plate, carrying the Sumatran and Andaman islands, and the subducting Indian Ocean plate sticks and slips in segments. This kind of plate boundary is called a décollement: a very shallow fault running from beneath the trench to under the islands.

The researchers found that the décollement surface has different properties in the two earthquake rupture regions. In the 2004 area, the décollement was seismically imaged from the ship as a bright reflection whose specifics suggest lower density materials that would affect friction. In the 2005 area the décollement does not show these particular characteristics and thus would behave differently in an earthquake. This and several other differences resulted in the fault slipping over a much longer segment in 2004 and reaching much closer to the seafloor, potentially causing a larger tsunami.

The results of their study appear in the July 9 edition of the journal Science. The paper's lead author is Simon Dean, of the University of Southampton's School of Ocean and Earth Science, which is based at the National Oceanography Centre, Southampton (NOC).

"Both earthquakes occurred on the same fault system, initiating 30-40 kilometers below the seabed," said Dean. "Our results will help us understand why different parts of the fault behave differently during earthquake slip which then influences tsunami generation. This is critical for adequate hazard assessment and mitigation."

By comparing these results with other subduction zones around the world, the research team believes the region of the 2004 Sumatra earthquake is very unusual, suggesting that tsunami hazards may be particularly high in this region.

"By understanding parameters that make a particular region more hazardous in terms of earthquakes and tsunami we can speak to potential hazards of other margins," said Sean Gulick, a research scientist at The University of Texas at Austin's Institute for Geophysics. "We need to examine what limits the size of earthquakes and what properties contribute to tsunami formation."

The fact that the 2004 and 2005 source areas were different is good news. Had the two fault segments ruptured together, the resulting earthquake would have been about a magnitude 9.3 instead of 9.2. Because the earthquake magnitude scale is logarithmic, an increase of 0.1 translates to about a third more energy released. To put that in perspective, the first event had the explosive force of 1.8 trillion kilograms of TNT. Adding the second segment, the resulting earthquake would equal 2.4 trillion kilograms of TNT.
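As a worked check of that arithmetic (our illustration), seismologists commonly relate radiated energy to moment magnitude by

```latex
\[
  \log_{10} E \approx 1.5\,M + 4.8 \quad (E \text{ in joules}),
  \qquad\text{so}\qquad
  \frac{E_{9.3}}{E_{9.2}} = 10^{1.5 \times 0.1} \approx 1.4.
\]
```

That standard relation gives roughly 40 percent more energy for a 0.1-magnitude increase; the article's TNT figures (2.4 versus 1.8 trillion kilograms) imply a factor of about 1.33. Both are rough estimates of the same order.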

(Photo: Nicolle Rager Fuller, National Science Foundation)

The University of Texas at Austin

DISRUPTION OF CIRCADIAN RHYTHM COULD LEAD TO DIABETES

Disruption of two genes that control circadian rhythms can lead to diabetes, a researcher at UT Southwestern Medical Center has found in an animal study.

Mice with defective copies of the genes, called CLOCK and BMAL1, develop abnormalities in pancreatic cells that eventually render the cells unable to release sufficient amounts of insulin.

“These results indicate that disruption of the daily clock may contribute to diabetes by impairing the pancreas’ ability to deliver insulin,” said Dr. Joseph Takahashi, an investigator with the Howard Hughes Medical Institute at UT Southwestern and co-senior author of the study, which appeared in the journal Nature. Dr. Takahashi, who recently joined UT Southwestern as chairman of neuroscience, performed the research with colleagues when he was at Northwestern University.

Circadian rhythms are cyclical patterns in biological activities, such as sleeping, eating, body temperature and hormone production.

The mammalian CLOCK gene, which Dr. Takahashi discovered in 1997, operates in many tissues of the body to regulate circadian rhythms. The gene codes for a protein called a transcription factor, which binds to other genes and controls whether they become active. BMAL1 also codes for a transcription factor that works together with the CLOCK protein.

The researchers examined pancreatic islet beta cells, which secrete insulin when blood sugar levels increase. They genetically engineered some mice to have defective CLOCK genes and some to also lack the BMAL1 gene. The mice also were engineered to contain a bioluminescent molecule that allowed the researchers to detect the circadian clock in pancreatic cells as a fluctuating glow.

Normal islet cells glowed in a 24-hour rhythm, while cells with defective CLOCK genes showed nearly flat rhythms. Cells from different organs exhibited different circadian patterns, indicating that each organ controls its own internal clock.

Further study showed that the islet cells in the mutant animals created normal amounts of insulin, but the CLOCK mutant cells were defective in releasing the hormone.

Mice with defective CLOCK genes were prone to obesity and other signs of metabolic syndrome and liver dysfunction. Young mice lacking the BMAL1 gene only in their pancreas, however, had normal body weight and composition, and their behavior followed normal circadian patterns, although their blood sugar levels were abnormally high, the researchers found.

“This finding indicates that disruption of clock genes only in the pancreas, and not in the rest of the body clock, can produce early signs of diabetes,” Dr. Takahashi said. “These studies are important because they show a direct link between the clock in pancreatic beta-cells and glucose regulation. This should aid our understanding of the causes of glucose abnormalities.”

(Photo: UTSMC)

The University of Texas Southwestern Medical Center

INDIAN OCEAN SEA LEVEL RISE THREATENS COASTAL AREAS

Indian Ocean sea levels are rising unevenly and threatening residents in some densely populated coastal areas and islands, a new study concludes. The study, led by scientists at the University of Colorado at Boulder (CU) and the National Center for Atmospheric Research (NCAR), finds that the sea level rise is at least partly a result of climate change.

Sea level rise is particularly high along the coastlines of the Bay of Bengal, the Arabian Sea, Sri Lanka, Sumatra, and Java, the authors found. The rise—which may aggravate monsoon flooding in Bangladesh and India—could have future impacts on both regional and global climate.

The key player in the process is the Indo-Pacific warm pool, an enormous, bathtub-shaped area spanning a region of the tropical oceans from the east coast of Africa to the International Date Line in the Pacific. The warm pool has heated by about 1 degree Fahrenheit, or 0.5 degrees Celsius, in the past 50 years, primarily because of human-generated emissions of greenhouse gases.

"Our results from this study imply that if future anthropogenic warming effects in the Indo-Pacific warm pool dominate natural variability, mid-ocean islands such as the Mascarenhas Archipelago, coasts of Indonesia, Sumatra, and the north Indian Ocean may experience significantly more sea level rise than the global average," says lead author Weiqing Han of CU's atmospheric and oceanic sciences department.

While a number of areas in the Indian Ocean region are experiencing sea level rise, sea level is falling in other areas. The study indicates that the Seychelles Islands and the island of Zanzibar, off Tanzania's coast, show the largest sea level drop.

"Global sea level patterns are not geographically uniform," says NCAR scientist Gerald Meehl, a co-author. "Sea level rise in some areas correlates with sea level fall in other areas."

The new study was published in Nature Geoscience. Funding came from the National Science Foundation, NCAR's sponsor, as well as the Department of Energy (DOE) and NASA.

The patterns of sea level change are driven by the combined enhancement of two primary atmospheric wind patterns, known as the Hadley circulation and the Walker circulation. The Hadley circulation in the Indian Ocean is dominated by air currents rising above strongly heated tropical waters near the equator and flowing poleward at upper levels, then sinking back to the surface in the subtropics, from where surface air flows back toward the equator.

The Indian Ocean's Walker circulation causes air to rise and flow westward at upper levels, sink to the surface and then flow eastward back toward the Indo-Pacific warm pool.

"The combined enhancement of the Hadley and Walker circulation forms a distinct surface wind pattern that drives specific sea level patterns," Han says.

In the Nature Geoscience article, the authors write, "Our new results show that human-caused changes of atmospheric and oceanic circulation over the Indian Ocean region—which have not been studied previously—are the major cause for the regional variability of sea level change."

The new study indicates that in order to anticipate global sea level change, researchers also need to know the specifics of regional sea level changes.

"It is important for us to understand the regional changes of the sea level, which will have effects on coastal and island regions," says NCAR scientist Aixue Hu.

The research team used several sophisticated ocean and climate models for the study, including the Parallel Ocean Program—the ocean component of the widely used Community Climate System Model, which is supported by NCAR and DOE. In addition, the team used a wind-driven linear ocean model for the study.

The complex circulation patterns in the Indian Ocean may also affect precipitation by forcing even more atmospheric air than normal down to the surface in Indian Ocean subtropical regions, Han speculates.

"This may favor a weakening of atmospheric convection in subtropics, which may increase rainfall in the eastern tropical regions of the Indian Ocean and drought in the western equatorial Indian Ocean region, including east Africa," Han says.

(Photo: NASA Earth Observatory)

The University Corporation for Atmospheric Research

MAGNETS TRUMP METALLICS

Metallic carbon nanotubes show great promise for applications from microelectronics to power lines because of their ballistic transmission of electrons. But who knew magnets could stop those electrons in their tracks?

Rice physicist Junichiro Kono and his team have been studying the Aharonov-Bohm effect -- the interaction between electrically charged particles and magnetic fields -- and how it relates to carbon nanotubes. While doing so, they came to the unexpected conclusion that magnetic fields can turn highly conductive nanotubes into semiconductors.

Their findings are published online this month in Physical Review Letters.

"When you apply a magnetic field, a band gap opens up and it becomes an insulator," said Kono, a Rice professor in electrical and computer engineering and in physics and astronomy. "You are changing a conductor into a semiconductor, and you can switch between the two. So this experiment explores both an important aspect of the results of the Aharonov-Bohm effect and the novel magnetic properties of carbon nanotubes."

Kono, graduate student Thomas Searles and their colleagues at the National Institute of Standards and Technology (NIST) and in Japan successfully measured the magnetic susceptibility of a variety of nanotubes for the first time; they confirmed that metallics are far more susceptible to magnetic fields than semiconducting nanotubes, depending upon the orientation and strength of the field.

Single-walled nanotubes (SWNTs) -- rolled-up sheets of graphene -- would all look the same to the naked eye if one could see them. But a closer look reveals nanotubes come in many forms, or chiralities, depending on how they're rolled. Some are semiconducting; some are highly conductive metallics. The gold standard for conductivity is the armchair nanotube, so-called because the open ends form a pattern that looks like armchairs.

Not just any magnet would do for their experiments. Kono and Searles traveled to the Tsukuba Magnet Laboratory at the National Institute for Materials Science (NIMS) in Japan, where the world's second-largest electromagnet was used to tease a refined ensemble of 10 chiralities of SWNTs, some metallic and some semiconducting, into giving up their secrets.

By ramping the big magnet up to 35 tesla, they found that the nanotubes would begin to align themselves in parallel and that the metallics reacted far more strongly than the semiconductors. (For comparison, the average MRI machine for medical imaging has electromagnets rated at 0.5 to 3 tesla.) Spectroscopic analysis confirmed the metallics, particularly armchair nanotubes, were two to four times more susceptible to the magnetic field than semiconductors and that each chirality reacted differently.

The nanotubes were all about 0.7 to 0.8 nanometers (billionths of a meter) wide and 500 nanometers long, so variations in size were not a factor in the results. Searles spent a week last fall running experiments at the Tsukuba facility's "hybrid," a large-bore superconducting magnet that contains a water-cooled resistive magnet.

Kono said the work would continue on purified batches of nanotubes produced by ultracentrifugation at Rice. That should yield more specific information about their susceptibility to magnetic fields, though he suspects the effect should be even stronger in longer metallics. "This work clearly shows that metallic tubes and semiconducting tubes are different, but now that we have metallic-enriched samples, we can compare different chiralities within the metallic family," he said.

Rice University

ENGINEERING COULD GIVE RECONSTRUCTIVE SURGERY A FACE-LIFT

Facial reconstruction patients may soon have the option of custom-made bone replacements optimized for both form and function, thanks to researchers at the University of Illinois and the Ohio State University Medical Center.

Whether resulting from illness or injury, loss of facial bones poses problems for reconstructive surgeons beyond cosmetic implications: The patient’s chewing, swallowing, speaking or even breathing abilities may be impaired.

“The mid-face is perhaps the most complicated part of the human skeleton,” said Glaucio Paulino, the Donald Biggar Willett Professor of Engineering at U. of I. “What makes mid-face reconstruction more complicated is its unusual shape (bones are small and delicate) and functions, and its location in an area susceptible to high contamination with bacteria.”

To fashion bone replacements, surgeons often will harvest bone from elsewhere in the patient’s body – the shoulder blade or hip, for example – and manually fashion it into something resembling the missing skull portion. However, since other bones are very different from facial bones in structure, patients may still suffer impaired function or cosmetic distortion.

The interdisciplinary research team, whose research results were published in the July 12 edition of the Proceedings of the National Academy of Sciences, applied an engineering design technique called topology optimization. The approach uses extensive 3-D modeling to design structures that need to support specific loads in a confined space, and is often used to engineer high-rise buildings, car parts and other structures.

“It tells you where to put material and where to create holes,” said Paulino, a professor of civil and environmental engineering. “Essentially, the technique allows engineers to find the best solution that satisfies design requirements and constraints.”

Facial reconstruction seemed a natural fit for the technique, Paulino said. “We looked at the clinical problem from a different perspective. Topology optimization offers an interdisciplinary framework to integrate concepts from medicine, biology, numerical methods, mechanics, and computations.”

Topology optimization would create patient-specific, case-by-case designs for tissue-engineered bone replacements. First, the researchers construct a detailed 3-D computer model of the patient in question and specify a design domain based on the injury and missing bone parts. Then a series of algorithms creates a customized, optimized structure, accounting for variables including blood flow, sinus cavities, chewing forces and soft tissue support, among other considerations. The researchers can then model the process of inserting the replacement bone into the patient and how the patient would look.
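To give a flavor of what a topology-optimization loop does, here is a toy Python sketch (our illustration, not the team's code): it distributes a fixed material budget along a one-dimensional bar so that stiffness is maximized, using SIMP-style penalization with the classic optimality-criteria update. The real facial-reconstruction problem is three-dimensional and couples far more constraints.

```python
# Toy 1-D "topology" (sizing) optimization: place material along a bar, fixed
# at the left and carrying point loads, to minimize compliance (flexibility)
# for a fixed volume of material. Loads and parameters are invented.
import numpy as np

E, A, L = 1.0, 1.0, 1.0                    # modulus, cross-section, element length
loads = np.array([0.2, 0.1, 0.4, 0.3])     # point load at each node (invented)
N = np.cumsum(loads[::-1])[::-1]           # internal axial force in each element
p, vol_frac = 3.0, 0.5                     # SIMP penalty and volume fraction
rho = np.full(len(loads), vol_frac)        # start from a uniform density field

for it in range(50):
    # sensitivity of compliance C = sum(N^2 L / (E A rho^p)) w.r.t. each density
    dC = -p * N**2 * L / (E * A * rho**(p + 1))
    # optimality-criteria update; bisect on the Lagrange multiplier to hit volume
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-10 * (1 + hi):
        lam = 0.5 * (lo + hi)
        rho_new = np.clip(rho * np.sqrt(-dC / lam), 1e-3, 1.0)
        lo, hi = (lam, hi) if rho_new.mean() > vol_frac else (lo, lam)
    rho = rho_new

print("optimized densities:", np.round(rho, 3))   # more material where force is high
```

The same idea, generalized to a 3-D finite-element model with physiological loads and constraints, is what produces the customized bone-replacement shapes described above.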

“Ideally, it would allow the physician to explore surgical alternatives and to design patient-specific bone replacements. Each patient’s bone replacement designs are tailored to their missing volume and functional requirements,” Paulino said.

Now that they have demonstrated the concept successfully by modeling several different types of facial bone replacements, the researchers hope to work toward developing scaffolds for tissue engineering so that their designs could be translated to actual bones. They also hope to explore further surgical possibilities for their method.

“This technique has the potential to pave the way toward development of tissue engineering methods to create custom fabricated living bone replacements in optimum shapes and amounts,” Paulino said. “The possibilities are immense and we feel that we are just in the beginning of the process.”

(Photo: Janet Sinn-Hanlon, Beckman Institute for Advanced Science and Technology)

University of Illinois

FIBERS THAT CAN HEAR AND SING

For centuries, "man-made fibers" meant the raw stuff of clothes and ropes; in the information age, it's come to mean the filaments of glass that carry data in communications networks. But to Yoel Fink, an associate professor of materials science and principal investigator at MIT's Research Lab of Electronics, the threads used in textiles and even optical fibers are much too passive. For the past decade, his lab has been working to develop fibers with ever more sophisticated properties, to enable fabrics that can interact with their environment.

In the August issue of Nature Materials, Fink and his collaborators announce a new milestone on the path to functional fibers: fibers that can detect and produce sound. Applications could include clothes that are themselves sensitive microphones, for capturing speech or monitoring bodily functions, and tiny filaments that could measure blood flow in capillaries or pressure in the brain. The paper, whose authors also include Shunji Egusa, a former postdoc in Fink's lab, and current lab members Noémie Chocat and Zheng Wang, appeared on Nature Materials' website on July 11, and the work it describes was supported by MIT's Institute for Soldier Nanotechnologies, the National Science Foundation and the U.S. Defense Department's Defense Advanced Research Projects Agency.

Ordinary optical fibers are made from a "preform," a large cylinder of a single material that is heated up, drawn out, and then cooled. The fibers developed in Fink's lab, by contrast, derive their functionality from the elaborate geometrical arrangement of several different materials, which must survive the heating and drawing process intact.

The heart of the new acoustic fibers is a plastic commonly used in microphones. By playing with the plastic's fluorine content, the researchers were able to ensure that its molecules remain lopsided — with fluorine atoms lined up on one side and hydrogen atoms on the other — even during heating and drawing. The asymmetry of the molecules is what makes the plastic "piezoelectric," meaning that it changes shape when an electric field is applied to it.
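For reference, the linear piezoelectric relations (standard physics, our addition rather than anything specific to this paper) couple strain S, electric field E, dielectric displacement D and stress T through a coefficient d:

```latex
\[
  S = d\,E \quad \text{(converse effect: a field produces strain)},
  \qquad
  D = d\,T \quad \text{(direct effect: stress produces charge)}.
\]
```

Assuming the plastic here behaves like typical PVDF-family copolymers, d is on the order of tens of picometers per volt, so a field of 1 V/µm produces strains of a few parts in 100,000 (with d ≈ 30 pm/V, S = 30×10⁻¹² m/V × 10⁶ V/m = 3×10⁻⁵): tiny, but enough at audio frequencies to act as a speaker or microphone.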

In a conventional piezoelectric microphone, the electric field is generated by metal electrodes. But in a fiber microphone, the drawing process would cause metal electrodes to lose their shape. So the researchers instead used a conducting plastic that contains graphite, the material found in pencil lead. When heated, the conducting plastic maintains a higher viscosity — it yields a thicker fluid — than a metal would.

Not only did this prevent the mixing of materials, but, crucially, it also made for fibers with a regular thickness. After the fiber has been drawn, the researchers need to align all the piezoelectric molecules in the same direction. That requires the application of a powerful electric field — 20 times as powerful as the fields that cause lightning during a thunderstorm. Anywhere the fiber is too narrow, the field would generate a tiny lightning bolt, which could destroy the material around it.

Despite the delicate balance required by the manufacturing process, the researchers were able to build functioning fibers in the lab. "You can actually hear them, these fibers," says Chocat, a graduate student in the materials science department. "If you connected them to a power supply and applied a sinusoidal current" — an alternating current whose period is very regular — "then it would vibrate. And if you make it vibrate at audible frequencies and put it close to your ear, you could actually hear different notes or sounds coming out of it."

For their Nature Materials paper, however, the researchers measured the fiber's acoustic properties more rigorously. Since water conducts sound better than air, they placed the fiber in a water tank opposite a standard acoustic transducer, a device that could alternately emit sound waves detected by the fiber and detect sound waves emitted by the fiber.

In addition to wearable microphones and biological sensors, applications of the fibers could include loose nets that monitor the flow of water in the ocean and large-area sonar imaging systems with much higher resolutions: A fabric woven from acoustic fibers would provide the equivalent of millions of tiny acoustic sensors.

Wang, a research scientist in Fink's lab, also points out that the same mechanism that allows piezoelectric devices to translate electricity into motion can work in reverse. "Imagine a thread that can generate electricity when stretched," he says.

Ultimately, however, the researchers hope to combine the properties of their experimental fibers in a single fiber. Strong vibrations, for instance, could vary the optical properties of a reflecting fiber, enabling fabrics to communicate optically.

Max Shtein, an assistant professor in the University of Michigan's materials science department, points out that other labs have built piezoelectric fibers by first drawing out a strand of a single material and then adding other materials to it, much the way manufacturers currently wrap insulating plastic around copper wire. "Yoel has the advantage of being able to extrude kilometers of this stuff at one shot," Shtein says. "It's a very scalable technique." But for applications that require relatively short strands of fiber, such as sensors inserted into capillaries, Shtein says, "scalability is not that relevant."

But whether or not the Fink lab's technique proves, in all cases, the most practical way to make acoustic fibers, "I'm impressed by the complexity of the structures they can make," Shtein says. "They're incredibly virtuosic at that technique."

(Photo: Research Laboratory of Electronics at MIT/Greg Hren Photograph)

MIT

PROTEIN LINKED TO AGING MAY BOOST MEMORY AND LEARNING ABILITY

Over the past 20 years, biologists have shown that proteins called sirtuins can slow the aging process in many animal species.

Now an MIT team led by Professor Li-Huei Tsai has revealed that sirtuins can also boost memory and brainpower — a finding that could lead to new drugs for Alzheimer’s disease and other neurological disorders.

Sirtuins’ effects on brain function, including learning and memory, represent a new and somewhat surprising role, says Tsai, the Picower Professor of Neuroscience and an investigator of the Howard Hughes Medical Institute. “When you review the literature, sirtuins are always associated with longevity, metabolic pathways, calorie restriction, genome stability, and so on. It has never been shown to play a role in synaptic plasticity,” she says.

Synaptic plasticity — the ability of neurons to strengthen or weaken their connections in response to new information — is critical to learning and memory. Potential drugs that enhance plasticity by boosting sirtuin activity could help patients with neurological disorders such as Alzheimer’s, Parkinson’s and Huntington’s diseases, says Tsai.

Sirtuins have received much attention in recent years for their life-span-boosting potential, and for their link to resveratrol, a compound found in red wine that has shown beneficial effects against cancer, heart disease and inflammation in animal studies.

MIT Biology Professor Leonard Guarente discovered about 15 years ago that the SIR2 gene regulates longevity in yeast. Later work revealed similar effects in worms, mice and rats.

More recently, studies have shown that one mammalian version of the gene, SIRT1, protects against oxidative stress (the formation of highly reactive molecules that can damage cells) in the heart and maintains genome stability in multiple cell types. SIRT1 is thought to be a key regulator of an evolutionarily conserved pathway that enhances cell survival during times of stress, especially a lack of food.

In 2007, Tsai and her colleagues showed that sirtuins (the proteins produced by SIR or SIRT genes) protect neurons against neurodegeneration caused by disorders such as Alzheimer’s. They also found that sirtuins improved learning and memory, but believed that might be simply a byproduct of the neuron protection.

However, Tsai’s new study, funded by the National Institutes of Health, the Simons Foundation, the Swiss National Science Foundation and the Howard Hughes Medical Institute, shows that sirtuins promote learning and memory through a novel pathway, unrelated to their ability to shield neurons from damage. The team demonstrated that sirtuins enhance synaptic plasticity by manipulating tiny snippets of genetic material known as microRNA, which have recently been discovered to play an important role in regulating gene expression.

Specifically, the team showed that sirtuins block the activity of a microRNA called miR-134, which normally halts production of CREB, a protein necessary for plasticity. When miR-134 is inhibited, CREB is free to help the brain adjust its synaptic activity.
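
The double negative in this pathway is easy to misread, so the following toy Python sketch lays out the logic exactly as described above. It is an illustration only, not a biochemical model, and the function name is invented.

    # Toy illustration of the reported double-negative pathway:
    # SIRT1 blocks miR-134, and miR-134 blocks production of CREB,
    # so active SIRT1 leaves CREB free to support plasticity.
    def creb_available(sirt1_active):
        mir134_active = not sirt1_active    # sirtuins inhibit miR-134
        creb_produced = not mir134_active   # miR-134 halts CREB production
        return creb_produced

    print(creb_available(True))    # True: CREB free, plasticity supported
    print(creb_available(False))   # False: consistent with the knockout mice below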

Mice lacking the SIRT1 gene in the brain performed poorly on several memory and learning tests, including object-recognition tasks and a water maze.

“Activation of sirtuins can directly enhance cognitive function,” says Tsai. “This really suggests that SIRT1 is a very good drug target, because it can achieve multiple beneficial effects.”

Raul Mostoslavsky, assistant professor of medicine at Harvard Medical School, says the findings do suggest that activating SIRT1 could benefit patients with neurodegenerative diseases. “However, we will need to be very cautious before jumping to conclusions,” he says, “since SIRT1 has (multiple) effects in multiple cells and tissues, and therefore targeting specifically this brain function will be quite challenging.”

Tsai and her colleagues are now studying the mechanism of SIRT1’s actions in more detail, and are also investigating whether sirtuin genes other than SIRT1 influence memory and learning.

(Photo: MIT)

MIT

THE BRAIN OF THE FLY - A HIGH-SPEED COMPUTER

0 comentarios

What would be the point of holding a soccer world championship if we couldn't distinguish the ball from its background? Simply unthinkable! But then again, wouldn't it be fantastic if your favourite team's striker could see the movements of the ball in slow motion! Unfortunately, this advantage only belongs to flies.

The minute brains of these aeronautic acrobats process visual movements in only fractions of a second. Just how the brain of the fly manages to perceive motion with such speed and precision is predicted quite accurately by a mathematical model. However, even after 50 years of research, it remains a mystery as to how nerve cells are actually interconnected in the brain of the fly. Scientists at the Max Planck Institute of Neurobiology are now the first to successfully establish the necessary technical conditions for decoding the underlying mechanisms of motion vision. The first analyses have already shown that a great deal more remains to be discovered.

Back in 1956, a mathematical model was developed that predicts how movements are recognized and processed in the brain of the fly. Countless experiments have since confirmed the assumptions of this model. What remains unclear, however, is which nerve cells in the fly brain are wired to each other so that it functions as the model predicts. "We simply did not have the technical tools to examine the responses of each and every cell in the fly's tiny, but high-powered brain", as Dierk Reiff from the Max Planck Institute of Neurobiology in Martinsried explains. That is hardly surprising, considering the minute size of the brain area responsible for the fly's motion detection. Here, one sixth of a cubic millimetre of brain matter contains more than 100,000 nerve cells - each of which has multiple connections to its neighbouring cells. Although it seems almost impossible to single out the reaction of a single cell to a particular movement stimulus, this is precisely what the neurobiologists in Martinsried have now succeeded in doing.
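
The 1956 scheme is the classic correlation-type elementary motion detector, often called the Hassenstein-Reichardt correlator: each photoreceptor's delayed signal is multiplied with its neighbour's undelayed signal, and the difference between the two mirror-image half-detectors signals the direction of motion. The minimal Python sketch below illustrates the idea; the filter constant and the stimulus are invented for illustration and are not taken from the study.

    import numpy as np

    def lowpass(signal, tau, dt=1.0):
        """First-order low-pass filter, playing the role of the model's delay."""
        out = np.zeros_like(signal, dtype=float)
        alpha = dt / (tau + dt)
        for i in range(1, len(signal)):
            out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
        return out

    def reichardt_detector(left, right, tau=10.0):
        """Difference of two mirror-image half-detectors: positive for
        left-to-right motion, negative for right-to-left motion."""
        half_a = lowpass(left, tau) * right   # delayed left meets current right
        half_b = lowpass(right, tau) * left   # delayed right meets current left
        return half_a - half_b

    # A bright edge reaching the left receptor first, then the right one:
    t = np.arange(200)
    left = (t > 50).astype(float)
    right = (t > 60).astype(float)            # same edge, 10 time steps later
    print(reichardt_detector(left, right).sum())   # positive: left-to-right motion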

The electrical activity of individual nerve cells is usually measured with the aid of extremely fine electrodes. In the fly, however, most of the nerve cells are simply too small to be measured using this method. Nevertheless, since the fly is the animal model in which motion perception has been studied in most detail, the scientists were all the more determined to prise these secrets from the insect's brain. A further incentive is the fact that, although the fly has comparatively few nerve cells, they are highly specialized and process the image flow with great precision while the fly is in flight. Flies can therefore process a vast amount of information about self-motion and movement in their environment in real time - a feat that no computer, and certainly none the size of a fly's brain, can hope to match. So it is no wonder that deciphering this system is a worthwhile undertaking.

"We had to find some way of observing the activity of these tiny nerve cells without electrodes", Dierk Reiff explains one of the challenges that faced the scientists. In order to overcome this hurdle, the scientists used the fruit fly Drosophila melanogaster and some of the most up-to-date genetic methods available. They succeeded in introducing the indicator molecule TN-XXL into individual nerve cells. By altering its fluorescent properties, TN-XXL indicates the activity of nerve cells.

To examine how the brains of fruit flies process motion, the neurobiologists presented the insects with moving stripe patterns on a light-emitting diode (LED) screen. The nerve cells in the flies' brains react to these light impulses by becoming active, causing the luminance of the indicator molecules to change. Although TN-XXL's luminance changes are much larger than those of earlier indicator molecules, it took quite some time to capture this comparatively small amount of light and to separate it from the LED light impulses. Dierk Reiff eventually solved the problem by synchronizing the two-photon laser microscope with the LED screen to within a few microseconds. The TN-XXL signal could then be separated from the LED light and selectively measured using the two-photon microscope.

"At long last, after more than 50 years of trying, it is now technically possible to examine the cellular construction of the motion detector in the brain of the fly", reports a pleased Alexander Borst, who has been pursuing this goal in his department for a number of years. Just how much remains to be discovered was realized during the very first application of the new methods. The scientists began by observing the activity of cells known as L2-cells which receive information from the photoreceptors of the eye. The photoreceptors react when the light intensity increases or decreases. The reaction of the L2-cells is similar in that part of the cell where the information from the photoreceptor is picked up. However, the neurobiologists discovered that the L2-cell transforms these data and in particular, that it relays information only about the reduction in light intensity to the following nerve cells. The latter then calculate the direction of motion and pass this information on to the flight control system. "This means that the information "light on" is filtered out by the L2-cells", summarizes Dierk Reiff. "It also means, however, that another kind of cell must pass on the "light on" command, since the fly reacts to both kinds of signals."

Now that the first step has been taken, the scientists intend to examine - cell by cell - the motion detection circuitry in the fly brain to explain how it computes motion information at the cellular level. Their colleagues from the joint Robotics project are eagerly awaiting the results.

(Photo: Max Planck Institute of Neurobiology)

Max Planck Institute

Tuesday, July 27, 2010

CORAL TESTS SHOW FAST CONSTRUCTION PACE FOR POLYNESIAN TEMPLES

0 comentarios

Ancient Polynesians went from building small-scale temples to constructing monumental, pyramid-shaped temples in just 140 years, not in four or five centuries as previously calculated, according to research led by a University of California, Berkeley, anthropologist and published this week in the print edition of the journal Proceedings of the National Academy of Sciences (PNAS).

Patrick V. Kirch, a UC Berkeley professor of anthropology and of integrative biology and a Pacific Islands expert, said his research team applied a high-precision thorium/uranium dating process to samples of decorative veneers, large blocks and religious offerings — all of them made of coral — that were found in 22 temple sites on the Pacific island of Mo'orea. The process, commonly used on fossils, can precisely determine the age of calcium carbonate materials such as coral.

"Coral is almost ideal for this (process)," he said, "because it takes up uranium from sea water and stores it. Shells and other (oceanic) materials filter uranium out through their metabolic processes." Because the coral used in temple construction was collected while it was alive and was used quickly, the date of the final growth of the coral specimens gave researchers the dates of temple construction.

They linked a clear progression of architectural change and increasingly elaborate temples on Mo'orea between 1620 and 1760 A.D. to the political competition, increasing stratification and hierarchy that accompanied the region's growing cult worship of 'Oro, a god of war and fertility, and to the new sacred regalia and religious rituals that included human sacrifice.

"The construction of these massive temples with their ahu (altar platforms) reaching ever higher toward the heavens, was clearly an important part of the strategy of chiefly elite to gain favor with the gods and to assert their power and prestige over their people," the researchers wrote in their journal article.

"The neat thing about this is that this is the first time we've been able to show how fast ritualized architecture can develop elaborate, massive temples," Kirch said in a recent interview. The researchers note in their PNAS paper that the development of ritual architecture in the Oaxaca Valley in southern Mexico has been estimated to have taken more than 1,300 years.

Kirch said the people of Mo'orea did not reflect "cultures that were stagnant or slow — they used very sophisticated methods to vie for control, power and resources. We see sort of a race for power expressed through ritualization of gods. It's not dissimilar to what's happening in the world today."

The thorium/uranium process used on the coral found in Mo'orea temples also may yield new insights into the timetables for socio-political development elsewhere, such as along the coasts of the Indian Ocean and the Red Sea, where large populations of people have used coral from reefs in construction, Kirch said.

His team's work on Mo'orea extended a thorium/uranium analysis begun in 2004 of coral offerings found in ancient Hawaiian sites that helped date the development of divine kingship in Hawaii. The process has a huge advantage over standard methods such as radiocarbon dating, which is plagued by large error ranges, Kirch said. He added that radiocarbon dating nevertheless remains the only option for many archaeological projects.

Kirch noted that the thorium/uranium dating also is preferable to using oral histories passed along from generation to generation to assign chronological dates to political, cultural and material changes.

The oldest temples that his team examined were relatively small and used only natural corals as facings in their low, altar-like platforms. Around the mid-17th century, temples were built that used cut and dressed blocks of coral that rose up to a meter high to face the altars. The final architectural stage, which appeared in the early 18th century, featured stepped altars and the first use of uniform-sized, pecked basalt cobbles in temple walls - changes linked to the rise of the 'Oro cult.

Using standard archaeological seriation techniques to evaluate the temple remains' architectural and morphological features, the researchers reported a good fit with the timeline provided by the thorium/uranium testing.

(Photo: Patrick V. Kirch)

University of California, Berkeley

UA RESEARCHERS PRESENT NEW SEX EVOLUTION THEORY

0 comentarios

The evolutionary origin of sex – the ability of animals (including humans) and plants to reproduce sexually, genetically recombine to repair DNA, and then produce eggs, sperm or pollen – is an unresolved mystery in biology.

In an article published in the July/August issue of BioScience, University of Arizona researchers Harris Bernstein and Carol Bernstein provide insights into the early evolution of sexual organisms and the role environmental stressors had on sexual reproduction as a key survival strategy.

The researchers, from the UA department of cell biology and anatomy, argue that eukaryotes, or cells with a nucleus, inherited and adapted their meiotic ability to recombine chromosomes sexually into new, genetically distinct combinations from their ancestors, the prokaryotic cells.

The ability to recombine chromosomes through meiosis gives rise to eggs and sperm in humans. According to the Bernsteins' theory, meiosis evolved to promote DNA repair, thereby greatly reducing DNA damage in resulting eggs and sperm.

After the repair during meiosis, when an egg meets a sperm, the chance of having a viable fetus is much improved, and the chance that the baby will have a newly arisen genetic defect is reduced.

Prokaryotic cells evolved the ability to repair DNA through a process called transformation, which also promotes chromosome repair through recombination.

In prokaryotic cells (which include bacteria), asexual reproduction is accomplished through a process called binary fission. In binary fission, each strand of the original double-stranded DNA molecule serves as a template for the synthesis of a complementary strand as the cell readies to split into two parts.
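
The template logic is simple enough to state in a few lines of Python. The sketch below is purely illustrative: it applies the base-pairing rules (A with T, G with C) and ignores strand orientation.

    # Each strand determines its partner through base pairing, so copying
    # both strands of a double helix yields two identical molecules.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand):
        """Return the complementary strand implied by base pairing."""
        return "".join(PAIR[base] for base in strand)

    template = "ATGGCTTAC"
    print(complement(template))                          # TACCGAATG
    print(complement(complement(template)) == template)  # True: copying is faithful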

Under certain conditions, these cells are capable of the exchange and repair of DNA through a process called transformation. Transformation is the transfer of a fragment of DNA from a donor cell to a recipient cell, followed by recombination in the recipient chromosome. The researchers call this bacterial process an early version of sex.

For eukaryotes, which include higher animals and plants as well as single-celled species such as yeast, reproduction occurs in two ways, through mitosis or meiosis.

In mitosis, one cell divides to produce two genetically identical cells. In cells committed to mitosis, a good deal of DNA damage can be repaired, especially damage confined to one strand, because the information on the intact opposite strand can direct repair of the damaged strand of the double helix.

Meiosis is required for sexual reproduction in eukaryotes. During meiosis, a cell with two copies of each chromosome, one from each parent, undergoes the process of recombination. This allows a special type of repair, not available during ordinary mitosis.

During meiotic recombination, the pairs of chromosomes line up next to each other, and if there is damage on either chromosome, repair can take place by recombination with the other chromosome. Meiotic recombination allows for the repair of damaged DNA as the chromosomes from each parent are broken and joined, resulting in different combinations of genes in each chromosome.

The prevailing theory is that eukaryotes developed the ability for meiosis and sexual reproduction from their ability to reproduce through mitosis and not from their early ancestor's ability to reproduce through transformation.

"Our proposal, that the sexual process of meiosis in eukaryotes arose from the sexual process of transformation in their bacterial ancestors, is a new and fundamentally different perspective that will likely generate controversy," the researchers predict.
Harris Bernstein is a professor of cell biology and anatomy. Carol Bernstein is an associate research professor of cell biology and anatomy.

"If it is assumed that meiosis arose only after mitosis was established, there would have been an extended period (while mitosis was evolving) when there was no meiosis, and therefore no sex, in eukaryotes. This assumption appears to be contradicted by evidence that the basic machinery for meiosis was present very early in eukaryote evolution," the authors state.

A key argument in their hypothesis is that in both prokaryotes and simple eukaryotes, sexual cycles are induced by stressful conditions. Thus, the recombinational repair promoted by transformation and meiosis is part of a survival strategy in response to stress.

"Coping with DNA damage appears to be a fundamental problem for all life. For instance, the average human cell incurs about 10,000 DNA damages per day, of which 50 are double-strand breaks. The DNA damages are mostly due to the reactive oxygen species generated when converting food into energy. Thus, efficient DNA recombinational repair is an adaptation for cell survival and for producing new offspring, in higher organisms, through meiosis," the researchers contend.

In bacteria – the most common prokaryotes – transformation is typically induced by high cell density, nutritional limitation, or DNA-damaging conditions. In yeast, a single-celled eukaryote, the meiotic sexual cycle is induced when the supply of nutrients becomes limiting or when the cells are exposed to oxidative stress and DNA damage, the team added.

"Observations suggest that facultative sex in bacteria and protists is often an adaptive response to stressful environmental conditions, as would be expected if transformation and meiosis were related adaptations," the researchers write.

(Photo: U. Arizona)

University of Arizona

PROSPECTS FOR FINDING NEW EARTHS BOOSTED BY NEW PLANET-HUNTING TECHNIQUE

0 comentarios

A team of astronomers from Germany, Bulgaria and Poland have used a completely new technique to find an exotic extrasolar planet. The same approach is sensitive enough to find planets as small as the Earth in orbit around other stars.

The group, led by Dr Gracjan Maciejewski of Jena University in Germany, used Transit Timing Variation to detect a planet with 15 times the mass of the Earth in the system WASP-3, 700 light years from the Sun in the constellation of Lyra. They publish their work in the journal Monthly Notices of the Royal Astronomical Society.

Transit Timing Variation (TTV) was suggested as a new technique for discovering planets a few years ago. Transits take place when a planet moves in front of the star it orbits, temporarily blocking some of the light from the star. So far the transit method itself has been used to detect a number of planets and is being deployed by the Kepler and CoRoT space missions in their searches for planets similar to the Earth.

If a (typically large) transiting planet is found, the gravity of additional smaller planets will tug on it, causing deviations in the regular cycle of its transits. The TTV technique compares the observed deviations with predictions made by extensive computer-based calculations, allowing astronomers to deduce the makeup of the planetary system.
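
In practice, the first step is to compare each observed mid-transit time with the strictly periodic schedule a lone planet would keep, giving an "observed minus calculated" residual. Here is a minimal Python sketch of that step; the ephemeris and transit times are invented, with a wobble sized to the roughly one-minute deviations mentioned later in the article. In a real analysis, the residual pattern would then be matched against computer simulations of candidate perturbing planets.

    import numpy as np

    def timing_residuals(observed_times, t0, period):
        """Observed-minus-calculated residuals against a linear ephemeris
        t0 + n * period; a lone transiting planet would show none."""
        observed = np.asarray(observed_times, dtype=float)
        epochs = np.round((observed - t0) / period)   # transit number n
        return observed - (t0 + epochs * period)

    # Invented mid-transit times (in days) carrying a small periodic wobble:
    t0, period = 0.0, 1.85
    n = np.arange(20)
    observed = t0 + n * period + 0.0005 * np.sin(2 * np.pi * n / 7)
    print(timing_residuals(observed, t0, period))     # recovers the wobble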

For this search, the team used the 90-cm telescopes of the University Observatory Jena and the 60-cm telescope of the Rozhen National Astronomical Observatory in Bulgaria to study transits of WASP-3b, a large planet with 630 times the mass of the Earth.

“We detected periodic variations in the transit timing of WASP-3b. These variations can be explained by an additional planet in the system, with a mass of 15 Earth-mass (i.e. one Uranus mass) and a period of 3.75 days”, said Dr Maciejewski.

“In line with international rules, we called this new planet WASP-3c”. This newly discovered planet is among the least massive planets known to date and also the least massive planet known orbiting a star which is more massive than our Sun.

This is the first time that a new extra-solar planet has been discovered using this method. The new TTV approach is an indirect detection technique, like the previously successful transit method.

The discovery of the second, 15 Earth-mass planet makes the WASP-3 system very intriguing. The new planet appears to be trapped in an external orbit, twice as long as the orbit of the more massive planet. Such a configuration is probably a result of the early evolution of the system.

The TTV method is very attractive because it is particularly sensitive to small perturbing planets, even down to the mass of the Earth. For example, an Earth-mass planet will pull on a typical gas giant orbiting close to its star and cause deviations in the timing of the larger object's transits of up to one minute.

This is a big enough effect to be detected with relatively small 1-m diameter telescopes and discoveries can be followed up with larger instruments. The team are now using the 10-m Hobby-Eberly Telescope in Texas to study WASP-3c in more detail.

(Photo: RAS)

RAS

Monday, July 26, 2010

ARCHAEOLOGY FIND SHEDS NEW LIGHT ON FAMILY PETS

0 comentarios

A University of Leicester archaeologist has discovered a bone belonging to a late 19th-century tortoise from Stafford Castle, Staffordshire - believed to be the earliest archaeological evidence of a tortoise kept as a family pet.

As reported in Post-Medieval Archaeology (volume 44/1) by University of Leicester archaeologist Dr Richard Thomas, the significance of the find is in the insights it gives on the early importation of tortoises and the changing attitude of British society towards family pets.

The Stafford Castle tortoise bone was found amongst the skeletons of cats and dogs, in a context that suggests it was kept as a pet, possibly by the family who were caretakers at the castle at the time. The date of the find coincides with the late 19th-century increase in the trade of live animals and with the widespread importation of tortoises in particular.

As Dr. Thomas says, “Although we have archaeological evidence for terrapins and turtles from the 17th century, this is the first archaeological evidence we have for land tortoise in Britain. It seems very likely that this specimen was imported from North Africa or the Mediterranean; by the later 19th-century there was a dramatic rise in the commercial trade in tortoises from these regions to satisfy the growing demand for pet animals”.

The morality of keeping pets was considered highly suspect under the strict religious doctrines of Medieval and Early Modern society, and although there was an avid fascination with exotic creatures at the time, this seems, curiously, to have bypassed the tortoise.

Attitudes towards pets began to change in the 17th century, particularly under the famously dog-loving Stuart kings, and the reputation of the tortoise had certainly risen high enough by the early 17th century for the ill-fated Archbishop Laud to have kept one.

During the 18th and 19th centuries a more ‘modern’ attitude to pet animals gradually emerged. The sculptor Joseph Gott created sentimental statues of dogs during the 19th century, and in 1824 the Society (later Royal Society) for the Prevention of Cruelty to Animals was founded.

Pet burials have also been found from the period, in stark contrast to the bones of earlier domestic animals simply thrown out with the rubbish.

The discovery of the Stafford Castle tortoise bone a few years ago, now reported in Post-Medieval Archaeology, adds to the archaeological evidence that by the late 19th century ordinary families were keeping animals as pets with which there was probably some bond of affection.

As Dr. Thomas reveals, “Unfortunately, this interest in keeping exotic pet animals resulted in the capture and translocation of millions of wild tortoises each year during the 20th century. The animals were crated in ships and kept in appalling conditions; countless tortoises died during this journey and those that survived fared little better, given away as fairground prizes and kept by people with little knowledge of their upkeep. It was not until an EEC regulation in 1988 that this trade in wild tortoises was prohibited”.

Dr Richard Thomas is a Zooarchaeologist and Head of the University of Leicester’s Bone Laboratory in the internationally acclaimed School of Archaeology and Ancient History. His research interests focus on the study of animal bones as a means of understanding past human-animal relationships.

(Photo: U. Leicester)

University of Leicester

YOU CAN'T HIDE YOUR LYIN' EYES

0 comentarios

Shifty eyes have long been thought to signify a person's trouble telling the truth. Now a group of University of Utah researchers is taking that old adage to a new level.

Educational psychologists John Kircher, Doug Hacker, Anne Cook, Dan Woltz and David Raskin are using eye-tracking technology to pioneer a promising alternative to the polygraph for lie detection. The researchers' efforts to commercialize their new technology reached a milestone recently when the University of Utah licensed the technology to Credibility Assessment Technologies (CAT).

CAT is based in Park City, Utah, and managed by venture capitalists Donald Sanborn and Gerald Sanders, who are the president and chairman, respectively.

"The eye-tracking method for detecting lies has great potential," Sanders says. "It's a matter of national security that our government agencies have the best and most advanced methods for detecting truth from fiction, and we believe we are addressing that need by licensing the extraordinary research done at the University of Utah."

In addition to bringing the technology closer to the marketplace, the licensing also helps maintain the university's leadership in lie-detection research. The university has been a leader in the field for at least 30 years through the work of Raskin and Kircher, who have both conducted substantial research on the subject. Raskin is now a professor emeritus.

Tracking eye movement to detect lies became possible in recent years because of substantial improvements in technology. The Utah researchers say they are the first to develop and assess the software and methods for applying these tests effectively.

Using eye movement to detect lies contrasts with polygraph testing. Instead of measuring a person's emotional reaction to lying, eye-tracking technology measures the person's cognitive reaction. To do so, the researchers record a number of measurements while a subject is answering a series of true-and-false questions on a computer. The measurements include pupil dilation, response time, reading and rereading time, and errors.

The researchers determined that lying requires more work than telling the truth, so they look for indications that the subject is working hard. For example, a person who is being dishonest may have dilated pupils and take longer to read and answer the questions. These reactions are often minute and require sophisticated measurement and statistical modeling to determine their significance.
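
As an illustration of how such measurements might feed a statistical model, the Python sketch below combines the four features named above into a single probability of deception using logistic regression. The feature values and the choice of logistic regression are assumptions made for illustration; the article does not specify the team's model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: pupil dilation (mm), response time (s), rereading time (s), errors
    X_train = np.array([
        [0.10, 2.1, 0.4, 0],   # responses known to be truthful...
        [0.12, 2.3, 0.5, 0],
        [0.31, 3.8, 1.6, 1],   # ...and responses known to be deceptive
        [0.28, 4.1, 1.9, 2],
    ])
    y_train = np.array([0, 0, 1, 1])   # 0 = truthful, 1 = deceptive

    model = LogisticRegression().fit(X_train, y_train)

    new_response = np.array([[0.27, 3.9, 1.4, 1]])
    print(model.predict_proba(new_response)[0, 1])     # estimated P(deceptive)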

"We have gotten great results from our experiments," says Kircher. "They are as good as or better than the polygraph, and we are still in the early stages of this innovative new method to determine if someone is trying to deceive you."

Besides measuring a different type of response, eye-tracking methods for detecting lies have several other benefits over the polygraph. Eye tracking promises to cost substantially less, require one-fifth of the time currently needed for examinations, require no attachment to the subject being tested, be available in any language and be administered by technicians rather than qualified polygraph examiners.

Research into this method began five years ago, when faculty members started discussing the concept casually. They secured seed funding and the university's Department of Educational Psychology hired new faculty with relevant skills. Each member of the research team fills an important function, but few ever dreamed they would be working on lie-detection technology.

"I came to the University of Utah to do work in reading comprehension, but I jumped at the chance to get involved with this research," Cook says. "That's the fun of this kind of job. You get the opportunity to collaborate with your colleagues to achieve more than any of you could individually."

People across campus assisted the researchers. Help included research assistance from graduate students, intellectual property management through the Technology Commercialization Office, and business development advice through the David Eccles School of Business's Lassonde New Venture Development Center, which links faculty researchers with master of business administration students and graduate students from science, engineering and law.

The researchers still have more development work to do, but they hope the recent licensing will help them attract the additional funding necessary and interest from potential customers. Numerous government agencies, such as the U.S. Department of Defense, Department of Homeland Security, Customs and Border Protection, and Department of Energy use polygraphs regularly to screen employees and applicants for sensitive positions, and these agencies always are looking for more effective ways to detect lies.

"It's exciting," Cook says, "that our testing method is going to be taken from a basic research program to commercial use."

University of Utah

EXERCISE'S BRAIN BENEFITS

0 comentarios

Athletes have long known about the natural “high” exercise can induce. Now, for the first time, medical researchers have demonstrated that exercise can reverse the effects of early-life psychological trauma on the brain.

Exercise can ameliorate anxiety and depression-like behaviours induced by an adverse early-life environment by altering the chemical composition of the hippocampus, the part of the brain that regulates the stress response, researchers from UNSW have found.

The findings, derived from studies on lab rats, are further evidence of the plasticity of the brain and its ability to re-map neural networks. Previous studies from UNSW’s School of Medical Sciences have shown that comfort eating – eating palatable food rich in fat and sugar – achieves similar results.

With many neurological diseases displaying origins in early life, the researchers believe the results could provide clues for novel ways to tackle a range of mood and behaviour disorders.

“What’s exciting about this is that we are able to reverse a behavioural deficit that was caused by a traumatic event early in life, simply through exercise,” said Professor of Pharmacology Margaret Morris, who will present the findings this week at the International Congress of Obesity in Stockholm.

In the study, rats were divided into groups and either isolated from their mothers for controlled periods of time to induce stress or given normal maternal contact. Half were given access to a running wheel.

In addition to being more anxious, animals that were subjected to stress early in life had higher levels of stress hormones and fewer steroid receptors in the part of the brain controlling behaviour.

“Both the anxious behaviour and the levels of hormones in these rats were reversed with access to the exercise wheel,” Professor Morris said.

“We know that exercise can elevate mood, but here we are seeing chemical changes that may underpin this improvement. One of these is increases in brain-derived neurotrophic factor (BDNF), which helps nerve cells grow.

“Many neurological diseases appear to have their origins early in life. Stress hormones affect the way nerve cells grow in the brain. This discovery may be giving us a clue about a different way to tackle a range of conditions that affect mood and behaviour,” she said.

“Here we also compared effects of exercise to eating palatable food, and it was equally effective, suggesting there’s a more healthy option as an alternative.”

A paper detailing the work appears this month in the journal Psychoneuroendocrinology.

The University of New South Wales

MOJOCERATOPS: NEW DINOSAUR SPECIES NAMED FOR FLAMBOYANT FRILL

0 comentarios

When Nicholas Longrich discovered a new dinosaur species with a heart-shaped frill on its head, he wanted to come up with a name just as flamboyant as the dinosaur's appearance. Over a few beers with fellow paleontologists one night, he blurted out the first thing that came to mind: Mojoceratops.

"It was just a joke, but then everyone stopped and looked at each other and said, ‘Wait — that actually sounds cool,' " said Longrich, a postdoctoral associate at Yale University.”I tried to come up with serious names after that, but Mojoceratops just sort of stuck."

With the publication of Longrich's paper describing his find in the Journal of Paleontology, the name is now official.

The dinosaur is one of more than a dozen species belonging to the chasmosaurine ceratopsid family, which is defined by the elaborate frills on its members' skulls. A plant eater about the size of a hippopotamus, Mojoceratops appeared about 75 million years ago during the Late Cretaceous, 10 million years earlier than its well-known cousin, the Triceratops. The species, which is related to another dinosaur found in Texas, occurs only in Canada's Alberta and Saskatchewan provinces and was short-lived, surviving for only about one million years.

It was only after coming up with the unusual name that Longrich looked into its etymology. Surprisingly, he found that it was a perfect fit for the species, which sported a flamboyant, heart-shaped frill on its head.

"I discovered that ‘mojo' is an early 20th-century African-American term meaning a magic charm or talisman, often used to attract members of the opposite sex," he said. "This dinosaur probably used its frill to attract mates, so the name made sense." The full name is Mojoceratops perifania, with "perifania" meaning pride in Greek. (The other part of the name mojoceratops follows the convention of other related species, with "ceras" being Greek for horn and "ops" being Greek for face.)

While all ceratopsids have frills on the tops of their skulls, "Mojoceratops is the most ostentatious," Longrich said, adding that its frill is also the most heart-shaped of all the related species.

Longrich got his first clue that he might have found a new species at the American Museum of Natural History in New York, where he was studying the dinosaur fossil collection in 2008. There, he found a distinctive frill that didn't match anything previously known. Later, while sketching the skull of another specimen on display, which was thought to be a species called Chasmosaurus, he noticed the skull was identical to the one on the specimen next to it.

"I realized the skull on the supposed Chasmosaurus must have been a reconstruction," he explained. When he studied the front of the skull, Longrich noticed some differences from the typical Chasmosaurus, including longer horns than usual. Trips to other museums in Western Canada turned up more examples that didn't fit with the rest of the known species. "The fossils didn't look like anything we'd seen before. They just looked wrong," he said.

Finding yet another previously unknown large dinosaur species in Dinosaur Provincial Park in Alberta, Canada, which boasts the world's most diverse dinosaur fauna, was somewhat surprising, Longrich said, because the fossils there have been studied for such a long time already. "So far, we really have no good explanation for why there are so many dinosaurs in the area and just how they managed to coexist," he said.

All in all, Longrich turned up eight partial skulls of the new species, which now boasts a name with just as much flair as its unusually shaped head.

"You're supposed to use Latin and Greek names, but this just seemed more fun," Longrich said. "You can do good science and still have some fun, too. So why not?"

(Photo: Nicholas Longrich)

Yale University
