Wednesday, July 28, 2010

PERSONALIZED APPROACH TO SMOKING CESSATION MAY BE REALITY IN THREE TO FIVE YEARS

A personalized approach to smoking cessation therapy is quickly taking shape. New evidence from Duke University Medical Center and the National Institute on Drug Abuse (NIDA) suggests that combining information about a smoker’s genetic makeup with his or her smoking habits can accurately predict which nicotine replacement therapy will work best.

“Within three to five years, it’s conceivable we’ll have a practical test that could take the guesswork out of choosing a smoking-cessation therapy,” says Jed Rose, PhD, director of Duke’s Center for Nicotine and Smoking Cessation Research. “It could be used by clinicians to guide the selection of treatment and appropriate dose for each smoker, and hopefully increase cessation success rates.”

Statistics show that 70 percent of the nation’s 46 million smokers say they want to quit, yet kicking the habit has proven difficult. In previously published reports, fewer than 5 percent of smokers who tried to quit on their own, without any aids, were still smoke-free one year later. Long-term quit rates for smokers who relied on pharmacological intervention hover under 25 percent.

The research, published online in the July-August issue of Molecular Medicine, follows previous work by Rose and George Uhl, MD, PhD, chief of the molecular neurobiology research branch at NIDA.

After conducting a genome-wide scan of 520,000 genetic markers taken from blood samples of smokers in several quit-smoking trials, they identified genetic patterns that appear to influence how well individuals respond to specific smoking cessation treatments.

The latest research focuses on combining the information from those individual genetic markers, called SNPs, into one number that represents a “quit success score,” Rose says. The score and the smokers’ nicotine dependence, assessed via a simple questionnaire, help predict an individual’s likelihood of quitting, as well as whether a high-dose or low-dose nicotine patch would work best.
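
As a purely illustrative sketch (the SNP names, weights, and cutoffs below are invented, not the Duke/NIDA markers), combining weighted genetic markers into a single score and pairing it with a dependence questionnaire might look like:

```python
# Toy illustration (not the actual Duke/NIDA algorithm): combine hypothetical
# SNP markers into a single "quit success score" and pair it with a nicotine
# dependence score to suggest a patch dose. All names and weights are invented.

def quit_success_score(genotypes, weights):
    """Sum weighted risk-allele counts (0, 1, or 2 per SNP)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

def suggest_patch_dose(score, dependence, score_cutoff=1.5, dependence_cutoff=6):
    """High dependence plus an unfavorable (low) score -> high-dose patch."""
    if dependence >= dependence_cutoff and score < score_cutoff:
        return "high-dose (42 mg)"
    return "standard (21 mg)"

weights = {"rs0001": 0.9, "rs0002": 0.4, "rs0003": 0.2}  # hypothetical SNPs
smoker = {"rs0001": 1, "rs0002": 0, "rs0003": 2}         # allele counts

score = quit_success_score(smoker, weights)
print(round(score, 2))                         # 1.3
print(suggest_patch_dose(score, dependence=7))  # high-dose (42 mg)
```

The real score would be built from hundreds of thousands of markers and validated cutoffs; the point here is only that the two inputs, genotype and dependence, jointly drive the treatment suggestion.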

In the trial, 479 cigarette smokers who smoked at least 10 cigarettes per day and wanted to quit were categorized as either high- or low-dependence based on their level of nicotine dependence. The smokers in each group were then randomly assigned to wear two nicotine skin patches daily, delivering either a high dose (42 mg) or a standard dose (21 mg).

Patches were worn for two weeks prior to the quit date, and the nicotine doses were reduced gradually over the 10 weeks following it. Participants were given denicotinized cigarettes during the two weeks before the quit date to minimize any potential adverse effects from the high-dose nicotine patches. The treatment phase lasted 12 weeks in all.

DNA extracted from participants’ blood was used to compute each smoker’s quit-success genetic score.

At the six-month follow-up, the researchers were able to confirm which smokers fared better or worse on the high-dose patch compared with the low-dose patch.

“The genotype score was part of what predicted successful abstinence. In the future such a score could help us make our initial treatment decisions,” said Rose. “People who had both high nicotine dependence and a low or unfavorable quit success genetic score seemed to benefit markedly from the high-dose nicotine patch, while people who had less dependence on nicotine did better on the standard patch.”

Further studies are needed to replicate these results and to expand the research to include therapies like varenicline (Chantix, made by Pfizer) and bupropion hydrochloride (Zyban, made by GlaxoSmithKline). But the potential this work holds for the future is significant, Rose says.

“Right now there is no treatment algorithm that tells a clinician or smoker which treatment is likely to work for them,” says Rose. “That’s what we are trying to do. We want to tailor treatment and give informative guidance to clinicians in terms of what should be tried first to maximize smoking cessation success.”

This study was the result of a collaboration between an investigator supported by the National Institutes of Health (NIH) Intramural Research Program, National Institute on Drug Abuse, and Duke University researchers who received a grant from Philip Morris USA. The company had no role in the planning or execution of the study, data analysis, or publication of results.

Duke University Medical Center

GEOSCIENTISTS FIND CLUES TO WHY FIRST SUMATRAN EARTHQUAKE WAS DEADLIER THAN SECOND

An international team of geoscientists has uncovered geological differences between two segments of an earthquake fault that may explain why the 2004 Sumatra Boxing Day tsunami was so much more devastating than a second earthquake-generated tsunami three months later. The finding could help solve a lingering mystery for earthquake researchers.

The quakes were caused by ruptures on adjacent segments of the same fault. One key difference was that the southern part of the fault that ruptured in 2004, producing the larger quake and tsunami, appears bright on subsurface seismic images, possibly because the fault zone there is less dense than the surrounding sediments. In the segment that ruptured in 2005, there was no evidence for such a low-density fault zone. This and several other differences allowed the fault to slip over a much longer segment, and much closer to the seafloor, in the first quake. Because tsunami waves are generated by the motion of the seafloor, a quake that moves more seafloor creates a larger tsunami.

Early in the morning of Dec. 26, 2004, a powerful undersea earthquake started off the west coast of Sumatra, Indonesia, and extended about 1,200 kilometers (750 miles) to the north. The resulting tsunami caused devastation along the coastlines bordering the Indian Ocean, with tsunami waves up to 30 meters (100 feet) high inundating coastal communities. With very little warning of impending disaster, more than 230,000 people died and millions were made homeless.

Three months later, in 2005, another strong earthquake (though significantly smaller than the 2004 event) occurred immediately to the south, but it triggered only a relatively small tsunami that claimed far fewer lives.

A team of researchers from the University of Southampton in the United Kingdom, The University of Texas at Austin, the Agency for the Assessment and Application of Technology in Indonesia, and the Indonesian Institute of Sciences has discovered one clue as to why the two earthquakes were so different. Working aboard the research vessel Sonne, the scientists used seismic instruments to probe layers of sediment beneath the seafloor with sound waves.

They discovered a number of unusual features at the rupture zone of the 2004 earthquake, such as the seabed topography, how the sediments are deformed, and the locations of the small earthquakes (aftershocks) that followed the main shock.

They found the southern end of the 2004 rupture zone was unique in one other key way. To understand that requires a little background on how and why earthquakes happen there at all.

The largest undersea earthquakes occur at subduction zones, such as the one west of Indonesia, where one tectonic plate is forced (or subducts) under another. This subduction doesn't happen smoothly, however: the plates stick, then slip or rupture, releasing vast amounts of stored energy as an earthquake. The plate boundary between the overriding plate carrying the Sumatran and Andaman islands and the subducting Indian Ocean plate sticks and slips in segments. This kind of plate boundary is called a décollement: a very shallow fault running from beneath the trench to under the islands.

The researchers found that the décollement surface has different properties in the two earthquake rupture regions. In the 2004 area, the décollement was seismically imaged from the ship as a bright reflection whose specifics suggest lower density materials that would affect friction. In the 2005 area the décollement does not show these particular characteristics and thus would behave differently in an earthquake. This and several other differences resulted in the fault slipping over a much longer segment in 2004 and reaching much closer to the seafloor, potentially causing a larger tsunami.

The results of their study appear in the July 9 edition of the journal Science. The paper's lead author is Simon Dean, of the University of Southampton's School of Ocean and Earth Science, which is based at the National Oceanography Centre, Southampton (NOC).

"Both earthquakes occurred on the same fault system, initiating 30-40 kilometers below the seabed," said Dean. "Our results will help us understand why different parts of the fault behave differently during earthquake slip which then influences tsunami generation. This is critical for adequate hazard assessment and mitigation."

By comparing these results with other subduction zones around the world, the research team believes the region of the 2004 Sumatra earthquake is very unusual, suggesting that tsunami hazards may be particularly high in this region.

"By understanding parameters that make a particular region more hazardous in terms of earthquakes and tsunami we can speak to potential hazards of other margins," said Sean Gulick, a research scientist at The University of Texas at Austin's Institute for Geophysics. "We need to examine what limits the size of earthquakes and what properties contribute to tsunami formation."

The fact that the 2004 and 2005 source areas were different is good news. Had the two fault segments ruptured together, the resulting earthquake would have been about a magnitude 9.3 instead of 9.2. Because the earthquake magnitude scale is logarithmic, an increase of 0.1 translates to about a third more energy released. To put that in perspective, the first event had the explosive force of 1.8 trillion kilograms of TNT; with the second segment added, the resulting earthquake would have equaled 2.4 trillion kilograms of TNT.
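
The logarithmic scaling can be checked with the standard moment-magnitude energy relation (an assumed textbook formula, not one stated in the article):

```python
# Rough check of the logarithmic magnitude scale, using the standard
# moment-magnitude energy relation log10(E) = 1.5*M + const. This relation
# gives a factor of ~1.41 per 0.1 magnitude step; the article's TNT figures
# (2.4 / 1.8 ~ 1.33, "about a third more") are in the same ballpark.
def energy_ratio(m_small, m_large):
    """Factor by which released energy grows between two magnitudes."""
    return 10 ** (1.5 * (m_large - m_small))

print(round(energy_ratio(9.2, 9.3), 2))  # 1.41
```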

(Photo: Nicolle Rager Fuller, National Science Foundation)

The University of Texas at Austin

DISRUPTION OF CIRCADIAN RHYTHM COULD LEAD TO DIABETES

Disruption of two genes that control circadian rhythms can lead to diabetes, a researcher at UT Southwestern Medical Center has found in an animal study.

Mice with defective copies of the genes, called CLOCK and BMAL1, develop abnormalities in pancreatic cells that eventually render the cells unable to release sufficient amounts of insulin.

“These results indicate that disruption of the daily clock may contribute to diabetes by impairing the pancreas’ ability to deliver insulin,” said Dr. Joseph Takahashi, an investigator with the Howard Hughes Medical Institute at UT Southwestern and co-senior author of the study, which appeared in the journal Nature. Dr. Takahashi, who recently joined UT Southwestern as chairman of neuroscience, performed the research with colleagues when he was at Northwestern University.

Circadian rhythms are cyclical patterns in biological activities, such as sleeping, eating, body temperature and hormone production.

The mammalian CLOCK gene, which Dr. Takahashi discovered in 1997, operates in many tissues of the body to regulate circadian rhythms. The gene codes for a protein called a transcription factor, which binds to other genes and controls whether they become active. BMAL1 also codes for a transcription factor that works together with the CLOCK protein.

The researchers examined pancreatic islet beta cells, which secrete insulin when blood sugar levels increase. They genetically engineered some mice to have defective CLOCK genes and some to also lack the BMAL1 gene. The mice also were engineered to contain a bioluminescent molecule that allowed the researchers to detect the circadian clock in pancreatic cells as a fluctuating glow.

Normal islet cells glowed in a 24-hour rhythm, while cells with defective CLOCK genes showed nearly flat rhythms. Cells from different organs exhibited different circadian rhythm patterns, indicating that each organ controls its own internal clock.
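
To make the glow measurement concrete, here is a toy sketch (invented data and analysis, not the study's) of how a bioluminescence trace can be reduced to a single 24-hour amplitude, high for normal cells and near zero for "flat" mutants:

```python
# Toy sketch: project an evenly sampled bioluminescence trace onto
# sine/cosine waves with a 24 h period and report the rhythm amplitude.
import math

def rhythm_amplitude(samples, dt_hours, period=24.0):
    """Amplitude of the `period`-hour component of an evenly sampled trace."""
    n = len(samples)
    w = 2 * math.pi / period
    c = sum(y * math.cos(w * i * dt_hours) for i, y in enumerate(samples))
    s = sum(y * math.sin(w * i * dt_hours) for i, y in enumerate(samples))
    return 2 * math.hypot(c, s) / n

dt = 1.0  # one sample per hour, for three days
normal = [10 + 3 * math.cos(2 * math.pi * t / 24) for t in range(72)]
mutant = [10.0] * 72  # flat trace, as in CLOCK-mutant cells

print(round(rhythm_amplitude(normal, dt), 2))  # 3.0
print(round(rhythm_amplitude(mutant, dt), 2))  # 0.0
```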

Further study showed that the islet cells in the mutant animals created normal amounts of insulin, but the CLOCK mutant cells were defective in releasing the hormone.

Mice with defective CLOCK genes were prone to obesity and other signs of metabolic syndrome and liver dysfunction. Young mice lacking the BMAL1 gene only in their pancreas, however, had normal body weight and composition, and their behavior followed normal circadian patterns, although their blood sugar levels were abnormally high, the researchers found.

“This finding indicates that disruption of clock genes only in the pancreas, and not the rest of the body clock, can produce early signs of diabetes,” Dr. Takahashi said. “These studies are important because they show a direct link between the clock in pancreatic beta-cells and glucose regulation. This should aid our understanding of the causes of glucose abnormalities.”

(Photo: UTSMC)

The University of Texas Southwestern Medical Center

INDIAN OCEAN SEA LEVEL RISE THREATENS COASTAL AREAS

Indian Ocean sea levels are rising unevenly and threatening residents in some densely populated coastal areas and islands, a new study concludes. The study, led by scientists at the University of Colorado at Boulder (CU) and the National Center for Atmospheric Research (NCAR), finds that the sea level rise is at least partly a result of climate change.

Sea level rise is particularly high along the coastlines of the Bay of Bengal, the Arabian Sea, Sri Lanka, Sumatra, and Java, the authors found. The rise—which may aggravate monsoon flooding in Bangladesh and India—could have future impacts on both regional and global climate.

The key player in the process is the Indo-Pacific warm pool, an enormous, bathtub-shaped area spanning a region of the tropical oceans from the east coast of Africa to the International Date Line in the Pacific. The warm pool has heated by about 1 degree Fahrenheit, or 0.5 degrees Celsius, in the past 50 years, primarily because of human-generated emissions of greenhouse gases.

"Our results from this study imply that if future anthropogenic warming effects in the Indo-Pacific warm pool dominate natural variability, mid-ocean islands such as the Mascarenhas Archipelago, coasts of Indonesia, Sumatra, and the north Indian Ocean may experience significantly more sea level rise than the global average," says lead author Weiqing Han of CU's atmospheric and oceanic sciences department.

While a number of areas in the Indian Ocean region are experiencing sea level rise, sea level is lowering in other areas. The study indicates that the Seychelles Islands and the island of Zanzibar off Tanzania's coast show the largest sea level drop.

"Global sea level patterns are not geographically uniform," says NCAR scientist Gerald Meehl, a co-author. "Sea level rise in some areas correlates with sea level fall in other areas."

The new study was published in Nature Geoscience. Funding came from the National Science Foundation, NCAR's sponsor, as well as the Department of Energy (DOE) and NASA.

The patterns of sea level change are driven by the combined enhancement of two primary atmospheric wind patterns, known as the Hadley circulation and the Walker circulation. The Hadley circulation in the Indian Ocean is dominated by air currents that rise above strongly heated tropical waters near the equator and flow poleward at upper levels, then sink over the subtropical ocean, causing surface air to flow back toward the equator.

The Indian Ocean's Walker circulation causes air to rise and flow westward at upper levels, sink to the surface and then flow eastward back toward the Indo-Pacific warm pool.

"The combined enhancement of the Hadley and Walker circulation forms a distinct surface wind pattern that drives specific sea level patterns," Han says.

In the Nature Geoscience article, the authors write, "Our new results show that human-caused changes of atmospheric and oceanic circulation over the Indian Ocean region—which have not been studied previously—are the major cause for the regional variability of sea level change."

The new study indicates that in order to anticipate global sea level change, researchers also need to know the specifics of regional sea level changes.

"It is important for us to understand the regional changes of the sea level, which will have effects on coastal and island regions," says NCAR scientist Aixue Hu.

The research team used several sophisticated ocean and climate models for the study, including the Parallel Ocean Program—the ocean component of the widely used Community Climate System Model, which is supported by NCAR and DOE. In addition, the team used a wind-driven linear ocean model for the study.

The complex circulation patterns in the Indian Ocean may also affect precipitation by forcing even more atmospheric air than normal down to the surface in Indian Ocean subtropical regions, Han speculates.

"This may favor a weakening of atmospheric convection in the subtropics, which may increase rainfall in the eastern tropical regions of the Indian Ocean and bring drought to the western equatorial Indian Ocean region, including east Africa," Han says.

(Photo: NASA Earth Observatory)

The University Corporation for Atmospheric Research

MAGNETS TRUMP METALLICS

Metallic carbon nanotubes show great promise for applications from microelectronics to power lines because of their ballistic transport of electrons. But who knew magnets could stop those electrons in their tracks?

Rice physicist Junichiro Kono and his team have been studying the Aharonov-Bohm effect -- the interaction between electrically charged particles and magnetic fields -- and how it relates to carbon nanotubes. While doing so, they came to the unexpected conclusion that magnetic fields can turn highly conductive nanotubes into semiconductors.

Their findings are published online this month in Physical Review Letters.

"When you apply a magnetic field, a band gap opens up and it becomes an insulator," said Kono, a Rice professor in electrical and computer engineering and in physics and astronomy. "You are changing a conductor into a semiconductor, and you can switch between the two. So this experiment explores both an important aspect of the results of the Aharonov-Bohm effect and the novel magnetic properties of carbon nanotubes."

Kono, graduate student Thomas Searles and their colleagues at the National Institute of Standards and Technology (NIST) and in Japan successfully measured the magnetic susceptibility of a variety of nanotubes for the first time; they confirmed that metallics are far more susceptible to magnetic fields than semiconducting nanotubes, depending upon the orientation and strength of the field.

Single-walled nanotubes (SWNTs) -- rolled-up sheets of graphene -- would all look the same to the naked eye if one could see them. But a closer look reveals nanotubes come in many forms, or chiralities, depending on how they're rolled. Some are semiconducting; some are highly conductive metallics. The gold standard for conductivity is the armchair nanotube, so-called because the open ends form a pattern that looks like armchairs.

Not just any magnet would do for their experiments. Kono and Searles traveled to the Tsukuba Magnet Laboratory at the National Institute for Materials Science (NIMS) in Japan, where the world's second-largest electromagnet was used to tease a refined ensemble of 10 chiralities of SWNTs, some metallic and some semiconducting, into giving up their secrets.

By ramping the big magnet up to 35 tesla, they found that the nanotubes would begin to align themselves in parallel and that the metallics reacted far more strongly than the semiconductors. (For comparison, the average MRI machine for medical imaging has electromagnets rated at 0.5 to 3 tesla.) Spectroscopic analysis confirmed the metallics, particularly armchair nanotubes, were two to four times more susceptible to the magnetic field than semiconductors and that each chirality reacted differently.

The nanotubes were all about 0.7 to 0.8 nanometers (billionths of a meter) wide and 500 nanometers long, so variations in size were not a factor in the results. Searles spent a week last fall running experiments at the Tsukuba facility's "hybrid," a large-bore superconducting magnet that contains a water-cooled resistive magnet.

Kono said the work would continue on purified batches of nanotubes produced by ultracentrifugation at Rice. That should yield more specific information about their susceptibility to magnetic fields, though he suspects the effect should be even stronger in longer metallics. "This work clearly shows that metallic tubes and semiconducting tubes are different, but now that we have metallic-enriched samples, we can compare different chiralities within the metallic family," he said.

Rice University

ENGINEERING COULD GIVE RECONSTRUCTIVE SURGERY A FACE-LIFT

Facial reconstruction patients may soon have the option of custom-made bone replacements optimized for both form and function, thanks to researchers at the University of Illinois and the Ohio State University Medical Center.

Whether resulting from illness or injury, loss of facial bones poses problems for reconstructive surgeons beyond cosmetic implications: The patient’s chewing, swallowing, speaking or even breathing abilities may be impaired.

“The mid-face is perhaps the most complicated part of the human skeleton,” said Glaucio Paulino, the Donald Biggar Willett Professor of Engineering at U. of I. “What makes mid-face reconstruction more complicated is its unique shape (the bones are small and delicate) and functions, and its location in an area susceptible to high contamination with bacteria.”

To fashion bone replacements, surgeons often will harvest bone from elsewhere in the patient’s body – the shoulder blade or hip, for example – and manually fashion it into something resembling the missing skull portion. However, since other bones are very different from facial bones in structure, patients may still suffer impaired function or cosmetic distortion.

The interdisciplinary research team, whose research results were published in the July 12 edition of the Proceedings of the National Academy of Sciences, applied an engineering design technique called topology optimization. The approach uses extensive 3-D modeling to design structures that need to support specific loads in a confined space, and is often used to engineer high-rise buildings, car parts and other structures.

“It tells you where to put material and where to create holes,” said Paulino, a professor of civil and environmental engineering. “Essentially, the technique allows engineers to find the best solution that satisfies design requirements and constraints.”

Facial reconstruction seemed a natural fit for the technique, Paulino said. “We looked at the clinical problem from a different perspective. Topology optimization offers an interdisciplinary framework to integrate concepts from medicine, biology, numerical methods, mechanics, and computations.”

Topology optimization would create patient-specific, case-by-case designs for tissue-engineered bone replacements. First, the researchers construct a detailed 3-D computer model of the patient in question and specify a design domain based on the injury and missing bone parts. Then a series of algorithms creates a customized, optimized structure, accounting for variables including blood flow, sinus cavities, chewing forces and soft tissue support, among other considerations. The researchers can then model the process of inserting the replacement bone into the patient and how the patient would look.
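
As a heavily simplified illustration of the underlying idea (a one-dimensional toy problem, not the team's method), topology-style optimization places a fixed budget of material where it most reduces compliance:

```python
# A drastically simplified 1-D sketch of the topology-optimization idea:
# distribute a fixed amount of material along a chain of springs carrying
# a unit load so that total compliance is minimized. Each element's
# stiffness is taken as its material density x_i (a linear toy; real
# methods penalize intermediate densities), and an optimality-criteria-
# style update shifts material toward elements where it helps most.

def optimize_bar(n=8, volume=4.0, iters=200):
    x = [0.1 + 0.1 * i for i in range(n)]       # arbitrary starting layout
    x = [xi * volume / sum(x) for xi in x]      # satisfy volume constraint
    for _ in range(iters):
        # compliance of springs in series: C = sum(1/k_i), with k_i = x_i
        sens = [1.0 / xi ** 2 for xi in x]      # -dC/dx_i
        x = [xi * si ** 0.5 for xi, si in zip(x, sens)]  # OC-style update
        x = [xi * volume / sum(x) for xi in x]  # re-project onto volume
    return x

x = optimize_bar()
# With a uniform load path, the optimum is uniform material: x_i = V/n = 0.5
print([round(xi, 3) for xi in x])
```

In the facial-reconstruction setting the "elements" live on a 3-D grid and the loads come from chewing forces and soft-tissue support, but the loop is the same: evaluate, compute sensitivities, redistribute material, repeat.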

“Ideally, it would allow the physician to explore surgical alternatives and to design patient-specific bone replacement. Each patient’s bone replacement designs are tailored for their missing volume and functional requirements,” Paulino said.

Now that they have demonstrated the concept successfully by modeling several different types of facial bone replacements, the researchers hope to work toward developing scaffolds for tissue engineering so that their designs could be translated to actual bones. They also hope to explore further surgical possibilities for their method.

“This technique has the potential to pave the way toward development of tissue engineering methods to create custom fabricated living bone replacements in optimum shapes and amounts,” Paulino said. “The possibilities are immense and we feel that we are just in the beginning of the process.”

(Photo: Janet Sinn-Hanlon, Beckman Institute for Advanced Science and Technology)

University of Illinois

FIBERS THAT CAN HEAR AND SING

For centuries, "man-made fibers" meant the raw stuff of clothes and ropes; in the information age, it's come to mean the filaments of glass that carry data in communications networks. But to Yoel Fink, an associate professor of materials science and principal investigator at MIT's Research Lab of Electronics, the threads used in textiles and even optical fibers are much too passive. For the past decade, his lab has been working to develop fibers with ever more sophisticated properties, to enable fabrics that can interact with their environment.

In the August issue of Nature Materials, Fink and his collaborators announce a new milestone on the path to functional fibers: fibers that can detect and produce sound. Applications could include clothes that are themselves sensitive microphones, for capturing speech or monitoring bodily functions, and tiny filaments that could measure blood flow in capillaries or pressure in the brain. The paper, whose authors also include Shunji Egusa, a former postdoc in Fink's lab, and current lab members Noémie Chocat and Zheng Wang, appeared on Nature Materials' website on July 11. The work it describes was supported by MIT's Institute for Soldier Nanotechnologies, the National Science Foundation and the U.S. Defense Department's Defense Advanced Research Projects Agency.

Ordinary optical fibers are made from a "preform," a large cylinder of a single material that is heated up, drawn out, and then cooled. The fibers developed in Fink's lab, by contrast, derive their functionality from the elaborate geometrical arrangement of several different materials, which must survive the heating and drawing process intact.

The heart of the new acoustic fibers is a plastic commonly used in microphones. By playing with the plastic's fluorine content, the researchers were able to ensure that its molecules remain lopsided — with fluorine atoms lined up on one side and hydrogen atoms on the other — even during heating and drawing. The asymmetry of the molecules is what makes the plastic "piezoelectric," meaning that it changes shape when an electric field is applied to it.

In a conventional piezoelectric microphone, the electric field is generated by metal electrodes. But in a fiber microphone, the drawing process would cause metal electrodes to lose their shape. So the researchers instead used a conducting plastic that contains graphite, the material found in pencil lead. When heated, the conducting plastic maintains a higher viscosity — it yields a thicker fluid — than a metal would.

Not only did this prevent the mixing of materials, but, crucially, it also made for fibers with a regular thickness. After the fiber has been drawn, the researchers need to align all the piezoelectric molecules in the same direction. That requires the application of a powerful electric field — 20 times as powerful as the fields that cause lightning during a thunderstorm. Anywhere the fiber is too narrow, the field would generate a tiny lightning bolt, which could destroy the material around it.

Despite the delicate balance required by the manufacturing process, the researchers were able to build functioning fibers in the lab. "You can actually hear them, these fibers," says Chocat, a graduate student in the materials science department. "If you connected them to a power supply and applied a sinusoidal current" — an alternating current whose period is very regular — "then it would vibrate. And if you make it vibrate at audible frequencies and put it close to your ear, you could actually hear different notes or sounds coming out of it." For their Nature Materials paper, however, the researchers measured the fiber's acoustic properties more rigorously. Since water conducts sound better than air, they placed the fiber in a water tank opposite a standard acoustic transducer, a device that could alternately emit sound waves detected by the fiber and detect sound waves emitted by the fiber.
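
The sinusoidal drive Chocat describes is easy to picture as discrete samples from a signal generator. This sketch (assumed parameters, not the team's actual test signal) produces a 440 Hz tone, concert-pitch A:

```python
# Sketch of a sinusoidal drive signal: driving the piezoelectric fiber
# with a 440 Hz current would make it vibrate at 440 Hz, an audible pitch.
import math

def sine_drive(freq_hz, sample_rate=8000, duration_s=0.01, amplitude=1.0):
    """Evenly spaced samples of a sine wave at the given frequency."""
    n = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

samples = sine_drive(440.0)
print(len(samples))  # 80 samples for 10 ms at 8 kHz
```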

In addition to wearable microphones and biological sensors, applications of the fibers could include loose nets that monitor the flow of water in the ocean and large-area sonar imaging systems with much higher resolutions: A fabric woven from acoustic fibers would provide the equivalent of millions of tiny acoustic sensors.

Wang, a research scientist in Fink's lab, also points out that the same mechanism that allows piezoelectric devices to translate electricity into motion can work in reverse. "Imagine a thread that can generate electricity when stretched," he says.

Ultimately, however, the researchers hope to combine the properties of their experimental fibers in a single fiber. Strong vibrations, for instance, could vary the optical properties of a reflecting fiber, enabling fabrics to communicate optically.

Max Shtein, an assistant professor in the University of Michigan's materials science department, points out that other labs have built piezoelectric fibers by first drawing out a strand of a single material and then adding other materials to it, much the way manufacturers currently wrap insulating plastic around copper wire. "Yoel has the advantage of being able to extrude kilometers of this stuff at one shot," Shtein says. "It's a very scalable technique." But for applications that require relatively short strands of fiber, such as sensors inserted into capillaries, Shtein says, "scalability is not that relevant."

But whether or not the Fink lab's technique proves, in all cases, the most practical way to make acoustic fibers, "I'm impressed by the complexity of the structures they can make," Shtein says. "They're incredibly virtuosic at that technique."

(Photo: Research Laboratory of Electronics at MIT/Greg Hren Photograph)

MIT

PROTEIN LINKED TO AGING MAY BOOST MEMORY AND LEARNING ABILITY

Over the past 20 years, biologists have shown that proteins called sirtuins can slow the aging process in many animal species.

Now an MIT team led by Professor Li-Huei Tsai has revealed that sirtuins can also boost memory and brainpower — a finding that could lead to new drugs for Alzheimer’s disease and other neurological disorders.

Sirtuins’ effects on brain function, including learning and memory, represent a new and somewhat surprising role, says Tsai, the Picower Professor of Neuroscience and an investigator of the Howard Hughes Medical Institute. “When you review the literature, sirtuins are always associated with longevity, metabolic pathways, calorie restriction, genome stability, and so on. It has never been shown to play a role in synaptic plasticity,” she says.

Synaptic plasticity — the ability of neurons to strengthen or weaken their connections in response to new information — is critical to learning and memory. Potential drugs that enhance plasticity by boosting sirtuin activity could help patients with neurological disorders such as Alzheimer’s, Parkinson’s and Huntington’s diseases, says Tsai.

Sirtuins have received much attention in recent years for their life-span-boosting potential, and for their link to resveratrol, a compound found in red wine that has shown beneficial effects against cancer, heart disease and inflammation in animal studies.

MIT Biology Professor Leonard Guarente discovered about 15 years ago that the SIR2 gene regulates longevity in yeast. Later work revealed similar effects in worms, mice and rats.

More recently, studies have shown that one mammalian version of the gene, SIRT1, protects against oxidative stress (the formation of highly reactive molecules that can damage cells) in the heart and maintains genome stability in multiple cell types. SIRT1 is thought to be a key regulator of an evolutionarily conserved pathway that enhances cell survival during times of stress, especially a lack of food.

In 2007, Tsai and her colleagues showed that sirtuins (the proteins produced by SIR or SIRT genes) protect neurons against neurodegeneration caused by disorders such as Alzheimer’s. They also found that sirtuins improved learning and memory, but believed that might be simply a byproduct of the neuron protection.

However, Tsai’s new study, funded by the National Institutes of Health, the Simons Foundation, the Swiss National Science Foundation and the Howard Hughes Medical Institute, shows that sirtuins promote learning and memory through a novel pathway, unrelated to their ability to shield neurons from damage. The team demonstrated that sirtuins enhance synaptic plasticity by manipulating tiny snippets of genetic material known as microRNA, which have recently been discovered to play an important role in regulating gene expression.

Specifically, the team showed that sirtuins block the activity of a microRNA called miR-134, which normally halts production of CREB, a protein necessary for plasticity. When miR-134 is inhibited, CREB is free to help the brain adjust its synaptic activity.

Mice with the SIRT1 gene missing in the brain performed poorly on several memory and learning tests, including object-recognition tasks and a water maze.

“Activation of sirtuins can directly enhance cognitive function,” says Tsai. “This really suggests that SIRT1 is a very good drug target, because it can achieve multiple beneficial effects.”

Raul Mostoslavsky, assistant professor of medicine at Harvard Medical School, says the findings do suggest that activating SIRT1 could benefit patients with neurodegenerative diseases. “However, we will need to be very cautious before jumping to conclusions,” he says, “since SIRT1 has (multiple) effects in multiple cells and tissues, and therefore targeting specifically this brain function will be quite challenging.”

Tsai and her colleagues are now studying the mechanism of SIRT1’s actions in more detail, and are also investigating whether sirtuin genes other than SIRT1 influence memory and learning.

(Photo: MIT)

MIT

THE BRAIN OF THE FLY - A HIGH-SPEED COMPUTER

0 comentarios

What would be the point of holding a soccer world championship if we couldn't distinguish the ball from its background? Simply unthinkable! But then again, wouldn't it be fantastic if your favourite team's striker could see the ball's movements in slow motion? Unfortunately, this advantage belongs only to flies.

The minute brains of these aeronautic acrobats process visual movements in only fractions of a second. Just how the brain of the fly manages to perceive motion with such speed and precision is predicted quite accurately by a mathematical model. However, even after 50 years of research, it remains a mystery as to how nerve cells are actually interconnected in the brain of the fly. Scientists at the Max Planck Institute of Neurobiology are now the first to successfully establish the necessary technical conditions for decoding the underlying mechanisms of motion vision. The first analyses have already shown that a great deal more remains to be discovered.

Back in 1956, a mathematical model was developed that predicts how movements are recognized and processed in the brain of the fly. Countless experiments have since confirmed the assumptions of this model. What remains unclear, however, is which nerve cells must be wired to each other for the fly brain to function as the model predicts. "We simply did not have the technical tools to examine the responses of each and every cell in the fly's tiny but high-powered brain", as Dierk Reiff from the Max Planck Institute of Neurobiology in Martinsried explains. That is hardly surprising, considering the minute size of the brain area responsible for the fly's motion detection: one sixth of a cubic millimetre of brain matter contains more than 100,000 nerve cells, each of which has multiple connections to its neighbouring cells. Although it seems almost impossible to single out the reaction of one particular cell to any given movement stimulus, this is precisely what the neurobiologists in Martinsried have now succeeded in doing.
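The 1956 model referred to here is the Hassenstein-Reichardt correlation detector: each elementary detector delays the signal from one photoreceptor and multiplies it by the undelayed signal from a neighbouring one, and subtracting the mirror-symmetric arm yields a direction-selective output. A minimal sketch of that idea in Python (the function name, the discrete one-sample delay, and the impulse stimulus are illustrative choices, not details from the article):

```python
import numpy as np

def reichardt_detector(left, right, delay=1):
    """Minimal Hassenstein-Reichardt correlator.

    left, right: luminance time series from two neighbouring photoreceptors.
    delay: temporal delay in samples (must be >= 1) applied to each input
           before multiplying it with the undelayed signal of the other arm.
    Returns the opponent output over time: positive values signal motion
    from left to right, negative values motion from right to left.
    """
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])
    # Correlate each delayed signal with the opposite undelayed signal,
    # then subtract the two arms to obtain a direction-selective response.
    return d_left * right - d_right * left

# A bright spot passing the left receptor one sample before the right one:
left = np.zeros(10); left[2] = 1.0
right = np.zeros(10); right[3] = 1.0
response = reichardt_detector(left, right)
```

For this stimulus the summed response is positive; swapping the two inputs (motion in the opposite direction) makes it negative, which is the direction selectivity the model is built to deliver.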

The electrical activity of individual nerve cells is usually measured with extremely fine electrodes. In the fly, however, most nerve cells are simply too small to be measured this way. Nevertheless, since the fly is the animal model in which motion perception has been studied in the most detail, the scientists were all the more determined to prise these secrets from the insect's brain. A further incentive is that, although the fly has comparatively few nerve cells, they are highly specialized and process the image flow with great precision while the fly is in flight. Flies can therefore process a vast amount of information about self-motion and movement in their environment in real time - a feat that no computer, and certainly none the size of a fly's brain, can match. It is no wonder, then, that deciphering this system is a worthwhile undertaking.

"We had to find some way of observing the activity of these tiny nerve cells without electrodes", Dierk Reiff explains one of the challenges that faced the scientists. In order to overcome this hurdle, the scientists used the fruit fly Drosophila melanogaster and some of the most up-to-date genetic methods available. They succeeded in introducing the indicator molecule TN-XXL into individual nerve cells. By altering its fluorescent properties, TN-XXL indicates the activity of nerve cells.

To examine how the brains of fruit flies process motion, the neurobiologists presented the insects with moving stripe patterns on a light-emitting diode (LED) screen. The nerve cells in the flies' brains react to these LED light impulses by becoming active, causing the luminance of the indicator molecules to change. Although TN-XXL's luminance changes are much larger than those of earlier indicator molecules, it took quite some time to capture this comparatively small amount of light and to separate it from the LED light impulse. After puzzling over this for a while, Dierk Reiff solved the problem by synchronizing the two-photon laser microscope with the LED screen to within a few microseconds. The TN-XXL signal could then be separated from the LED light and selectively measured using the two-photon microscope.

"At long last, after more than 50 years of trying, it is now technically possible to examine the cellular construction of the motion detector in the brain of the fly", reports a pleased Alexander Borst, who has been pursuing this goal in his department for a number of years. Just how much remains to be discovered became apparent during the very first application of the new methods. The scientists began by observing the activity of cells known as L2-cells, which receive information from the photoreceptors of the eye. The photoreceptors react when the light intensity increases or decreases. The L2-cells respond similarly in the part of the cell where the photoreceptor input is received. However, the neurobiologists discovered that the L2-cell transforms these data and, in particular, relays only information about reductions in light intensity to the following nerve cells. The latter then calculate the direction of motion and pass this information on to the flight control system. "This means that the information 'light on' is filtered out by the L2-cells", summarizes Dierk Reiff. "It also means, however, that another kind of cell must pass on the 'light on' signal, since the fly reacts to both kinds of signals."
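The L2 filtering described above can be caricatured as half-wave rectification of the temporal luminance change: the 'light off' (dimming) component is kept and passed on, while the 'light on' (brightening) component is discarded. A toy sketch under that assumption (the function name and the simple difference-and-rectify form are illustrative, not the lab's actual model):

```python
import numpy as np

def l2_off_response(luminance):
    """Toy model of an OFF-channel cell: respond only to decreases
    in light intensity, as the article describes for L2-cells.

    luminance: 1-D array of light intensity over time.
    Returns a response trace (one sample shorter) that is positive
    where the light dims and zero where it stays constant or brightens.
    """
    change = np.diff(luminance)          # temporal luminance change
    return np.maximum(-change, 0.0)      # keep only the 'light off' part

# Light switches off between samples 1 and 2, back on between 3 and 4:
trace = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
response = l2_off_response(trace)
```

Here the cell responds only at the light-off transition; the light-on transition produces no output, which is why, as Reiff notes, some other cell type must carry the 'light on' signal.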

Now that the first step has been taken, the scientists intend to examine - cell by cell - the motion detection circuitry in the fly brain to explain how it computes motion information at the cellular level. Their colleagues from the joint Robotics project are eagerly awaiting the results.

(Photo: Max Planck Institute of Neurobiology)

Max Planck Institute

