Wednesday, May 19, 2010

MAGNETIC STIMULATION SCORES MODEST SUCCESS AS ANTIDEPRESSANT

Some depressed patients who don't respond to or tolerate antidepressant medications may benefit from a non-invasive treatment that stimulates the brain with a pulsing electromagnet, a study suggests.

This first industry-independent, multi-site, randomized, tightly controlled trial of repetitive transcranial magnetic stimulation (rTMS) (http://www.nimh.nih.gov/health/topics/brain-stimulation-therapies/brain-stimulation-therapies.shtml) found that it produced significant antidepressant effects in a subgroup of patients, with few side effects.

Active rTMS treatment produced remission in 14 percent of the antidepressant-resistant patients who received it, compared with about 5 percent of patients who received a simulated treatment.

"Although rTMS treatment has not yet lived up to early hopes that it might replace more invasive therapies, this study suggests that the treatment may be effective in at least some treatment-resistant patients," said Thomas R. Insel, M.D., director of the National Institute of Mental Health (NIMH), part of the National Institutes of Health, which funded the study.

Mark George, M.D., of the Medical University of South Carolina, Charleston; Harold Sackeim, Ph.D., and Sarah Lisanby, M.D., of Columbia University, New York City; David Avery, M.D., of the University of Washington, Seattle; William McDonald, M.D., of Emory University, Atlanta; and colleagues, report on their findings in the May 2010 issue of the Archives of General Psychiatry.

"This study should help settle the debate about whether rTMS works for depression," said George, who led the research team. "We can now follow up clues suggesting ways to improve its effectiveness, and hopefully further develop a potential new class of stimulation treatments for other brain disorders."

The treatment aims to jump-start underactive mood-regulating circuitry by targeting the top left front part of the brain with an electromagnetic coil that emits 3,000 pulses over a 37-minute session. It can be safely administered in a doctor’s office with few side effects – unlike more invasive brain stimulation treatments, such as electroconvulsive therapy (ECT) (http://www.nimh.nih.gov/health/topics/brain-stimulation-therapies/brain-stimulation-therapies.shtml).

Following a decade and a half of studies yielding mixed results, the FDA cleared an rTMS device for treatment of mildly treatment-resistant depression in 2008, based on data submitted by the manufacturer. The field has been awaiting the results of the NIMH-funded multi-site trial to provide more definitive evidence of efficacy.

The lack of a convincing simulated control treatment that mimics the transient tapping and twitching sensations produced by the magnet weakened confidence in the findings of some previous rTMS studies. To address these concerns, the new study sought to blind patients, treaters and raters with a simulated control treatment that produced the same head-tapping sensation and scalp twitching as the active treatment. A metal insert below the magnet blocked the magnetic field from entering the brain, while electrodes touching the scalp delivered the tapping sensation. This simulation was so convincing that even the treaters could not guess the randomization at better than chance level, according to the researchers.

A sample of 190 patients who had previously failed to respond to antidepressant medications received randomized, controlled magnetic stimulation on weekdays for three weeks, with the rTMS magnet aimed at the brain's left prefrontal cortex. Those who showed improvement received up to an additional three weeks of such blinded treatment.

Thirteen (14 percent) of 92 patients who received the active treatment achieved remission, compared to 5 (about 5 percent) of 98 patients who received the simulation treatment. Patients who received active rTMS were significantly more likely to reach remission, particularly if they had been moderately, rather than severely, treatment resistant. The remission rate climbed to nearly 30 percent in an open-label phase of this study in which there was no simulation control. George said this is comparable to rates seen in the STAR*D (http://www.nimh.nih.gov/trials/practical/stard/index.shtml) medication studies. However, the researchers note that "the overall number of remitters and responders was less than one would like with a treatment that requires daily intervention for three weeks or more, even with a benign side effect profile."
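
The trial's own statistical model is not spelled out in this article, but the reported counts alone convey the effect. Below is a minimal significance check on those numbers using Fisher's exact test from SciPy; this is an illustration only, since the published analysis would have adjusted for factors such as site and degree of treatment resistance.

```python
# Rough significance check on the remission counts reported above.
# Illustration only; not the trial's actual statistical analysis.
from scipy.stats import fisher_exact

active_remit, active_total = 13, 92   # active rTMS group
sham_remit, sham_total = 5, 98        # simulated (sham) group

table = [
    [active_remit, active_total - active_remit],
    [sham_remit, sham_total - sham_remit],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"remission: {active_remit}/{active_total} active vs "
      f"{sham_remit}/{sham_total} sham")
print(f"odds ratio ~ {odds_ratio:.2f}, one-sided p ~ {p_value:.3f}")
```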

Patients who responded to active treatment received up to three additional weeks of blinded, controlled rTMS until they either achieved remission or stopped showing meaningful improvement; as a result, the number of responders did not differ significantly from the number of remitters. Patients who remitted then received a combination of medications intended to help maintain the treatment effect. Despite having failed to respond to medications in the past, most remained in remission for several months.

Study participants who failed to improve during the blinded phase entered a course of open-label rTMS. Among those who had been in the active rTMS group, 30 percent achieved remission during this second phase. This suggests that some patients might require as many as 5-6 weeks of daily rTMS treatment, according to George. Most patients who remitted required 3-5 weeks of treatment.

"For treatment resistant-patients, we found that rTMS is at least as good as current medications or anything else we have available, except ECT,” said George. “Our current antidepressants do not work for many people."

Since the rTMS treatment did not trigger any seizures or notable side effects, the researchers propose that higher levels of magnetic stimulation be used in future studies, as evidence suggests antidepressant effects of such stimulation are dose-dependent. Higher remission rates might also be attainable if rTMS were combined with medications, they suggest.

Using magnetic resonance imaging (MRI) scans of patients' brains showing exactly where the magnetic coil was positioned, the researchers hope to confirm earlier findings suggesting that a more forward and to-the-side placement produces a larger therapeutic effect. They plan to report the results of the MRI analysis at the American Psychiatric Association meeting in late May.

(Photo: Mark George, M.D., Medical University of South Carolina)

National Institutes of Health

YALE SCIENTISTS EXPLAIN WHY COMPUTERS CRASH BUT WE DON’T

Nature and software engineers face similar design challenges in creating control systems. The different solutions they employ help explain why living organisms tend to malfunction less than computers, a Yale study has found.

The Yale team compared the evolution of organisms and computer operating systems by analyzing the control networks in both the bacterium Escherichia coli and the Linux operating system. They report their findings online in the May 3 edition of the Proceedings of the National Academy of Sciences.

“It is a commonplace metaphor that the genome is the operating system of a living organism. We wanted to see if the analogy actually holds up,” said Mark Gerstein, the Albert L. Williams Professor of Biomedical Informatics; professor of molecular biophysics and biochemistry, and computer science; and senior author of the paper.

Both the E. coli and Linux networks are arranged in hierarchies, but with some notable differences in how they achieve operational efficiency. The molecular networks in the bacterium are arranged in a pyramid, with a limited number of master regulatory genes at the top that control a broad base of specialized functions, which act independently.

In contrast, the Linux operating system is organized more like an inverted pyramid, with many different top-level routines controlling few generic functions at the bottom of the network. Gerstein said that this organization arises because software engineers tend to save money and time by building upon existing routines rather than starting systems from scratch.

“But it also means the operating system is more vulnerable to breakdowns because even simple updates to a generic routine can be very disruptive,” Gerstein said. To compensate, these generic components have to be continually fine-tuned by designers.

“Operating systems are like urban streets – engineers tend to focus on areas that get a lot of traffic,” said Gerstein. “We can do this because we are designing these changes intelligently.”

However, he noted, if the analogy is extended to an organism like E. coli, the situation is different: Without fine-tuning, a disruption of such major molecular roadways by random mutations would be fatal. That’s why E. coli cannot afford generic components and has preserved an organization with highly specialized modules, said Gerstein, adding that over billions of years of evolution, such an organization has proven robust, protecting the organism from random damaging mutations.
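
The two architectures Gerstein describes can be made concrete with a toy sketch. The graphs below are invented stand-ins, not the study's actual E. coli regulatory network or Linux call graph; they only illustrate why a change to a "generic" node at the bottom of an inverted pyramid touches many more callers than a change to a specialized module in a pyramid.

```python
# Toy contrast between a "pyramid" (few master regulators controlling
# many specialized workers, as described for E. coli) and an
# "inverted pyramid" (many top-level routines reusing a few generic
# ones, as described for Linux). Invented graphs, for illustration.
import networkx as nx

def summarize(name, g):
    tops = [n for n in g if g.in_degree(n) == 0]      # controllers
    bottoms = [n for n in g if g.out_degree(n) == 0]  # workhorses
    reuse = sum(g.in_degree(n) for n in bottoms) / len(bottoms)
    print(f"{name}: {len(tops)} top nodes, {len(bottoms)} bottom nodes, "
          f"avg callers per bottom node = {reuse:.1f}")

pyramid = nx.DiGraph()  # 2 master regulators -> 10 specialized modules
for master in ("reg1", "reg2"):
    for k in range(5):
        pyramid.add_edge(master, f"{master}_module{k}")

inverted = nx.DiGraph()  # 10 routines -> 2 shared generic functions
for k in range(10):
    for generic in ("memcpy_like", "alloc_like"):
        inverted.add_edge(f"routine{k}", generic)

summarize("E. coli-style pyramid", pyramid)
summarize("Linux-style inverted pyramid", inverted)
```

In the inverted pyramid, every bottom node is depended on by all ten routines, so even a simple update to it is potentially disruptive, which is the vulnerability Gerstein points to.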

(Photo: Yale U.)

Yale University

GENES AS FOSSILS

When exactly did oxygen first appear in Earth’s atmosphere? Although many physical and chemical processes are thought to be responsible for that profound transformation, scientists have tried to answer at least part of that question by looking for the origin of oxygenic photosynthesis — the process that organisms use to split water to make oxygen — in rocks that are billions of years old. One way they try to pinpoint the start of that process is by searching for biological links between the distant past and the present. Specifically, they study molecules known as biomarkers that are produced by modern organisms and can be traced to the origins of certain biological processes because they are found in rocks that are 2.5 billion years old.

One biomarker that had been proposed for such research is a type of lipid, or fat molecule, known as 2-methylhopanoid. This substance was thought to be a good biomarker because it has been found in ancient rocks (where it is referred to as 2-methylhopane) and is also produced in the modern environment by cyanobacteria, which are oxygen-producing bacteria located in shallow marine environments.

But it turns out that these molecules might not be the best biomarkers for oxygenic photosynthesis, according to a recent collaboration between MIT’s Department of Biology and Department of Earth, Atmospheric and Planetary Sciences (EAPS). In a paper published in the Proceedings of the National Academy of Sciences, the group reported it had discovered the gene and related protein that are responsible for producing 2-methylhopanoids. Because this DNA can be traced to bacteria that do not produce oxygen, 2-methylhopanoids cannot confidently be used as biomarkers for oxygenic photosynthesis, the researchers say.

The research, which was supported by NASA, the National Science Foundation and the Agouron Institute, counters previous work of co-author Roger Summons, an EAPS professor of geobiology, who first proposed in 1999 that 2-methylhopanoids could be a biomarker for cyanobacteria. That work was called into question in 2007 when researchers in the lab of co-author Dianne K. Newman, the John and Dorothy Wilson Professor of Biology and Geobiology, in collaboration with Alex Sessions at the California Institute of Technology, discovered a type of bacterium that doesn’t produce oxygen but does produce 2-methylhopanoids.

To determine whether this was a chance finding or whether different kinds of bacteria produce 2-methylhopanoids, Summons, Newman, Sessions and several postdoctoral researchers joined forces to figure out which genes and proteins are involved in making the lipids. Knowing the gene, the researchers could then search genome databases for other bacteria that also produce these molecules. They could also learn more about the purpose of the molecules, such as whether they emerged in response to some sort of environmental stress billions of years ago.

“This is an excellent example of how genes themselves can also be used as fossils,” said lead author Paula Welander, a postdoc in Newman’s lab. She explained that previous surveys of bacteria that produce 2-methylhopanoids were limited because they were done in the laboratory under arbitrary growth conditions that did not necessarily elicit production of the lipids. Moreover, because technical limitations still prevent biologists from growing many bacteria in the lab, they may be unaware of other species that can produce 2-methylhopanoids. By taking a molecular genetics approach to analyzing lipid production, the group was able to circumvent these limitations.

Ann Pearson, a biogeochemist at Harvard who was not involved in the research, said that although scientists still have a long way to go before understanding the earliest origins of these biomarker molecules, the discovery of this DNA is “crucial” because now scientists know where to look as they start to “fill in the holes” about the earliest history of oxygenic photosynthesis.

What makes 2-methylhopanoids unusual among bacterial lipids is an extra methyl group (one carbon and three hydrogen atoms). To determine which gene and protein are responsible for adding the methyl group, the MIT researchers analyzed the genome of the bacterium that Newman’s lab had previously shown to produce 2-methylhopanoids.

Once they found a gene cluster that is responsible for making lipids, they generated mutants with deletions in genes encoding proteins likely to be responsible for adding the methyl group. After pinpointing the gene that encodes this protein, they searched the genome databases to determine what other types of bacteria contain this specific gene. It turns out there are three major groups containing many bacteria, including cyanobacteria, but the researchers were not able to trace the complex history of the protein to determine which bacteria carried the oldest version.

In addition to determining this history, the researchers are trying to decipher the function of 2-methylhopanoids, which could reveal something about what microbial life and its environment were like on Earth billions of years ago. They are currently conducting microbiology and molecular biology experiments to examine the features of membranes that contain the molecules, and they are also researching the environments where these molecules are found today.

(Photo: Ryan C. Hunter)

Massachusetts Institute of Technology

MAYA PLUMBING, FIRST PRESSURIZED WATER FEATURE FOUND IN NEW WORLD

A water feature found in the Maya city of Palenque, Mexico, is the earliest known example of engineered water pressure in the New World, according to a collaboration between two Penn State researchers, an archaeologist and a hydrologist. How the Maya used the pressurized water is, however, still unknown.

"Water pressure systems were previously thought to have entered the New World with the arrival of the Spanish," the researchers said in a recent issue of the Journal of Archaeological Science. "Yet, archaeological data, seasonal climate conditions, geomorphic setting and simple hydraulic theory clearly show that the Maya of Palenque in Chiapas, Mexico, had empirical knowledge of closed channel water pressure predating the arrival of Europeans."

The feature, first identified in 1999 during a mapping survey of the area, is similar to the aqueducts that flow beneath the plazas of the city, yet differs from them in key respects. In 2006, an archaeologist returned to Palenque with a hydrologist to examine the unusual water feature. The area of Palenque was first occupied about A.D. 100 but grew to its largest size during the Classic Maya period, A.D. 250 to 600. The city was abandoned around 800.

"Under natural conditions it would have been difficult for the Maya to see examples of water pressure in their world," said Christopher Duffy, professor of civil and environmental engineering. "They were apparently using engineering without knowing the tools around it. This does look like a feature that controls nature."

Underground water features such as aqueducts are not unusual at Palenque. Because the Maya built the city in a constricted area in a break in an escarpment, inhabitants were unable to spread out. To make as much land as possible available for living, the Maya at Palenque routed streams beneath plazas via aqueducts.

"They were creating urban space," said Kirk French, lecturer in anthropology. "There are streams in the area every 300 feet or so across the whole escarpment. There is very little land to build on."

These spring-fed streams, combined with the approximately 10 feet of rain that falls during the six-month rainy season, also presented a flooding hazard that the aqueducts would have at least partially controlled.

The feature the researchers examined, Piedras Bolas Aqueduct, is a spring-fed conduit located on steep terrain. The elevation drops about 20 feet from the entrance of the tunnel to the outlet about 200 feet downhill. The cross section of the feature decreases from about 10 square feet near the spring to about a half square foot where water emerges from a small opening. The combination of gravity on water flowing through the feature and the sudden restriction of the conduit causes the water to flow out of the opening forcefully, under pressure.

"The conduit could have reached a theoretical hydraulic head limit of 6 meters (about 20 feet)," said Duffy.

At the outlet, the pressure exerted could have moved the water upwards of 20 feet.
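
The arithmetic behind those figures is straightforward. The sketch below applies Torricelli's relation to the elevation drop reported in the article; the frictionless-conduit assumption is my simplification, so real values would have been somewhat lower.

```python
# Back-of-the-envelope hydraulics for the Piedras Bolas aqueduct.
# Inputs are the article's figures; the no-friction assumption is a
# simplification, so an actual fountain would rise somewhat less.
import math

g = 9.81        # gravitational acceleration, m/s^2
head_m = 6.0    # elevation drop from entrance to outlet (~20 ft)

# Torricelli: outlet speed for an ideal (frictionless) conduit.
v_outlet = math.sqrt(2 * g * head_m)

# An ideal vertical jet converts that speed back into height, so the
# theoretical fountain height equals the hydraulic head.
jet_height = v_outlet**2 / (2 * g)

print(f"outlet speed ~ {v_outlet:.1f} m/s")
print(f"theoretical jet height ~ {jet_height:.1f} m (~20 ft)")
```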

"The experience the Maya at Palenque had in constructing aqueducts for diversion of water and preservation of urban space may have led to the creation of useful water pressure," said French.

The Piedras Bolas Aqueduct is partially collapsed, so very little water currently flows from the outlet. French and Duffy used simple hydraulic models to determine the potential water pressure achievable from the aqueduct. They also found that the aqueduct would hold about 18,000 gallons of water if the outlet were controlled to store the water.

One potential use for the engineered water pressure would have been a fountain. The researchers modeled the aqueduct with a fountain as the outlet and found that even during flood conditions, water would flow both in the aqueduct, supplying the fountain, and above ground in the channel running off the slope. Another possibility would have been to use the pressure to lift water into the adjacent residential area for wastewater disposal.

"The palace has features that suggest something similar," said French.

(Photo: Kirk French, Penn State)

The Pennsylvania State University

GLOBAL GLACIATION SNOWBALLED INTO GIANT CHANGE IN CARBON CYCLE

For insight into what can happen when the Earth's carbon cycle is altered -- a cause and consequence of climate change -- scientists can look to an event that occurred some 720 million years ago.

New data from a Princeton University-led team of geologists suggest that an episode called "snowball Earth," which may have covered the continents and oceans in a thick sheet of ice, produced a dramatic change in the carbon cycle. This change in the carbon cycle, in turn, may have set the stage for the ice ages that followed.

Pinpointing the causes and effects of the extreme shift in the way carbon moved through the oceans, the biosphere and the atmosphere -- the magnitude of which has not been observed at any other time in Earth history -- is important for understanding just how much Earth's climate can change and how the planet responds to such disturbances.

Publishing their findings in the April 30 issue of the journal Science, the researchers also put forth a hypothesis to explain how changes to Earth's surface wrought by the glaciers of the Neoproterozoic Era could have created the anomaly in carbon cycling.

"The Neoproterozoic Era was the time in Earth history when the amount of oxygen rose to levels that allowed for the evolution of animals, so understanding changes to the carbon cycle and the dynamics of the Earth surface at the time is an important pursuit," said Princeton graduate student Nicholas Swanson-Hysell, the first author on the paper.

The Neoproterozoic era, which lasted from 1,000 million years ago to 542 million years ago, is divided into three distinct periods, beginning with the Tonian, extending through the Cryogenian and ending with the Ediacaran. The Cryogenian period is notable in Earth history for the extensive and repeated ice ages that took place, beginning with the massive Sturtian glaciation at the start of the period. This marked the first ice age on Earth in roughly 1.5 billion years, which is an unusually long time span between glaciations. Since the Cryogenian, Earth has endured an ice age about once every 100 to 200 million years.

The "snowball Earth" theory suggests that the Sturtian glaciation was global in scope, literally encasing the planet in ice, which could have wreaked havoc on the normal functioning of the carbon cycle. While the theory is controversial and the extent of the deep freeze is under investigation, research team member Adam Maloof co-wrote a March 2010 Science paper demonstrating that glaciers reached the equator some 716.5 million years ago, providing further evidence to support the existence of a Cryogenian "snowball Earth."

In the latest research, Swanson-Hysell, Maloof and their collaborators collected samples of limestone from Central and South Australia dating back to the Tonian and Cryogenian periods. Using a technique known as isotope analysis to learn how the carbon cycle worked in ancient times, the team pieced together clues that are hidden in the atomic composition of the carbon found in inorganic limestone sediment and ancient organic material. In addition, the geologists recorded where the samples were found in the rock layers to determine crucial information about the relative age of the samples and the environmental conditions under which they formed.
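
Carbon isotope analyses of this kind are conventionally reported in "delta" notation: the deviation, in parts per thousand, of a sample's carbon-13 to carbon-12 ratio from a reference standard. A minimal sketch of the arithmetic follows; the VPDB reference ratio is the standard value, while the sample ratio is invented for illustration.

```python
# Delta notation for carbon isotope ratios, as used in studies like
# this one. The sample ratio below is invented for illustration.
R_VPDB = 0.0112372  # 13C/12C of the Vienna Pee Dee Belemnite standard

def delta13C(r_sample: float) -> float:
    """Per-mil deviation of a sample's 13C/12C ratio from VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A hypothetical carbonate sample slightly depleted in carbon-13:
print(f"delta13C = {delta13C(0.01112):+.1f} per mil")
```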

Their results documented a peculiar and large shift in the carbon cycle based on analyses of samples obtained from tropical limestone sediments known as the Trezona Formation, which dates to the end of the Cryogenian period approximately 650 million years ago and was deposited between "snowball Earth" events.

"The disturbance we're seeing in the Neoproterozoic carbon cycle is larger by several orders of magnitude than anything we could cause today, even if we were to burn all the fossil fuels on the planet at once," said Maloof, an assistant professor of geosciences at Princeton.

Previous data from the Ediacaran period at the end of the Neoproterozoic era have shown a similar perturbation to the carbon cycle, and in 2003 Massachusetts Institute of Technology geophysicist Daniel Rothman suggested that a buildup of a huge pool of organic carbon in the ocean could have led to the observed disturbance.

The perturbation studied by the Princeton researchers shows this same behavior during an event that was roughly 25 percent larger and 100 million years older than the previously recognized disturbance. The team also documented that the carbon cycle was not operating in this bizarre fashion 800 million years ago prior to the first Neoproterozoic glaciations, constraining in time the onset of such behavior and linking it to the proposed "snowball Earth" event.

"The new carbon isotopic data shows a whopping … downshift in the isotopic composition of carbonate, possibly the largest single isotopic change in Earth history, while the isotopic composition of organic carbon is invariant," said Rothman, who was not part of the research team. "The co-occurrence of such signals is enigmatic, suggesting that the carbon cycle during this period behaved fundamentally differently than it does today."

Building on Rothman's framework, the Princeton-led geologists set out to explain how an ice-covered globe in the early Cryogenian period could have prompted the accumulation of massive amounts of organic carbon in the ocean, leading to the observed disturbance to the carbon cycle later in the period.

According to their proposed hypothesis, the passage of the Sturtian glaciers across continental surfaces would have removed the weathered material and debris, which had accumulated in the 1.5 billion years since the preceding ice age. When the glaciers receded, this would have exposed vast amounts of bedrock to the carbon dioxide in the atmosphere for weathering, freeing up nutrients in the rock for delivery into the oceans.

This process would have generated a relatively large influx of iron into the oceans, which could have disrupted the biological mechanisms that marine bacteria had used during the Tonian to process, or eat, the organic carbon in the water and convert it into carbon dioxide and other dissolved inorganic carbon compounds. If the organic carbon was not eaten by bacteria, it would have accumulated into a massive oceanic reservoir and resulted in the strange carbon cycle of the Cryogenian and early Ediacaran.

The interaction of carbon dioxide with the continental surfaces during the weathering process also would have removed some of the carbon dioxide from the atmosphere, lowering the global temperatures and creating conditions conducive to the series of glacial events that were observed throughout the Cryogenian.

According to Rothman's hypothesis, over millions of years the levels of oceanic and atmospheric oxygen would have grown as a consequence of the altered carbon cycle, ultimately leading to the oxidation of the large reservoir of organic carbon, removing the extra organic carbon from the oceans and returning the carbon cycle to a steady state more similar to how it functions today. Increased levels of oxygen in the atmosphere also would have provided the conditions that were necessary for the explosive diversification of animal life at the end of the Neoproterozoic and into the Cambrian Period.

In field work this summer, the Princeton team will continue to investigate the disturbances to the Cryogenian and Ediacaran carbon cycles, and conduct research on the Tonian-Cryogenian-Ediacaran geologic, isotopic and paleogeographic history of northern Ethiopia and southern Australia. The geologists will explore some of the many questions that remain, such as what enabled the Cryogenian growth of ice sheets after a 1.5-billion-year hiatus.

(Photo: Adam Maloof)

Princeton University

RESEARCHERS FIND FUTURE TEMPERATURES COULD EXCEED LIVABLE LIMITS

Reasonable worst-case scenarios for global warming could lead to deadly temperatures for humans in coming centuries, according to research findings from Purdue University and the University of New South Wales, Australia.

Researchers for the first time have calculated the highest tolerable "wet-bulb" temperature and found that this temperature could be exceeded for the first time in human history in future climate scenarios if greenhouse gas emissions continue unabated.

Wet-bulb temperature is equivalent to what is felt when wet skin is exposed to moving air. It includes temperature and atmospheric humidity and is measured by covering a standard thermometer bulb with a wetted cloth and fully ventilating it.
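
There is no single exact formula for wet-bulb temperature, but published empirical fits exist. The sketch below uses one such approximation (Stull's fit, valid near sea-level pressure) purely to show how heat and humidity combine; it is not the calculation used in the study.

```python
# Empirical wet-bulb temperature from air temperature and relative
# humidity (Stull's published fit, valid near sea-level pressure).
# Illustration only; not the study's own calculation.
import math

def wet_bulb_c(temp_c: float, rh_pct: float) -> float:
    t, rh = temp_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh**1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# "Dry heat": very hot air at low humidity stays far below the limit.
print(f"45 C at 10% RH -> wet bulb {wet_bulb_c(45, 10):.1f} C")
# Hot and very humid: much closer to the ~35 C (95 F) danger line.
print(f"35 C at 80% RH -> wet bulb {wet_bulb_c(35, 80):.1f} C")
```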

The researchers calculated that humans and most mammals, which have internal body temperatures near 98.6 degrees Fahrenheit, will experience a potentially lethal level of heat stress at wet-bulb temperatures above 95 degrees sustained for six hours or more, said Matthew Huber, a Purdue professor of earth and atmospheric sciences who co-authored the paper that is currently available online and will be published in an upcoming issue of the Proceedings of the National Academy of Sciences.

"Although areas of the world regularly see temperatures above 100 degrees, really high wet-bulb temperatures are rare," Huber said. "This is because the hottest areas normally have low humidity, like the 'dry heat' referred to in Arizona. When it is dry, we are able to cool our bodies through perspiration and can remain fairly comfortable. The highest wet-bulb temperatures ever recorded were in places like Saudi Arabia near the coast where winds occasionally bring extremely hot, humid ocean air over hot land leading to unbearably stifling conditions, which fortunately are short-lived today."

The study did not provide new evaluations of the likelihood of future climate scenarios, but explored the impacts of warming. The challenges presented by the future climate scenarios are daunting in their scale and severity, he said.

"Whole countries would intermittently be subject to severe heat stress requiring large-scale adaptation efforts," Huber said. "One can imagine that such efforts, for example the wider adoption of air conditioning, would cause the power requirements to soar, and the affordability of such approaches is in question for much of the Third World that would bear the brunt of these impacts. In addition, the livestock on which we rely would still be exposed, and it would make any form of outside work hazardous."

While the Intergovernmental Panel on Climate Change's central estimate of business-as-usual warming by 2100 is seven degrees Fahrenheit, eventual warming of 25 degrees is feasible, he said.

"We found that a warming of 12 degrees Fahrenheit would cause some areas of the world to surpass the wet-bulb temperature limit, and a 21-degree warming would put half of the world's population in an uninhabitable environment," Huber said. "When it comes to evaluating the risk of carbon emissions, such worst-case scenarios need to be taken into account. It's the difference between a game of roulette and playing Russian roulette with a pistol. Sometimes the stakes are too high, even if there is only a small chance of losing."

Steven Sherwood, a professor at the Climate Change Research Centre at the University of New South Wales, Australia, and the paper's lead author, said prolonged wet-bulb temperatures above 95 degrees would be intolerable after a matter of hours.

"The wet-bulb limit is basically the point at which one would overheat even if they were naked in the shade, soaking wet and standing in front of a large fan," Sherwood said. "Although we are very unlikely to reach such temperatures this century, they could happen in the next."

Humans at rest generate about 100 watts of energy from metabolic activity. Wet-bulb temperature estimates provide upper limits on the ability of people to cool themselves by sweating and otherwise dissipating this heat, he said. In order for the heat dissipation process to work, the surrounding air must be cooler than the skin, which must be cooler than the core body temperature. The cooler skin is then able to absorb excess heat from the core and release it into the environment. If the wet-bulb temperature is warmer than the temperature of the skin, metabolic heat cannot be released and potentially dangerous overheating can ensue depending on the magnitude and duration of the heat stress.

The National Science Foundation-funded research investigated the long-term implications of sustained greenhouse gas emissions on climate extremes. The team used climate models to compare the peak wet-bulb temperatures to the global temperatures for various climate simulations and found that the peak wet-bulb temperature rises approximately 1 degree Centigrade for every degree Centigrade increase in tropical mean temperature.

Huber did the climate modeling on supercomputers operated by Information Technology at Purdue (ITaP), Purdue's central information technology organization. Sherwood performed the wet-bulb calculations.

"These temperatures haven't been seen during the existence of hominids, but they did occur about 50 million years ago, and it is a legitimate possibility that the Earth could see such temperatures again," Huber said. "If we consider these worst-case scenarios early enough, perhaps we can do something to address the risk through mitigation or new technological advancements that will allow us to adapt."

(Photo: Purdue University graphic/Matthew Huber)

Purdue University

NEW INSIGHTS INTO THE MYSTERY OF NATURAL HIV IMMUNITY

When people become infected by HIV, it’s usually only a matter of time, barring drug intervention, until they develop full-blown AIDS. However, a small number of people exposed to the virus progress very slowly to AIDS — and some never develop the disease at all.

In the late 1990s, researchers showed that a very high percentage of those naturally HIV-immune people, who represent about one in 200 infected individuals, carry a gene called HLA B57. Now a team of researchers from the Ragon Institute of Massachusetts General Hospital, MIT and Harvard has revealed a new effect that contributes to this gene’s ability to confer immunity.

The research team, led by MIT Professor Arup Chakraborty and Harvard Professor Bruce Walker of MGH, found that the HLA B57 gene causes the body to make more potent killer T cells — white blood cells that help defend the body from infectious invaders. Patients with the gene have a larger number of T cells that bind strongly to more pieces of HIV protein than people who do not have the gene. This makes the T cells more likely to recognize cells that express HIV proteins, including mutated versions that arise during infection. This effect contributes to superior control of HIV infection (and any other virus that evolves rapidly), but it also makes those people more susceptible to autoimmune diseases, in which T cells attack the body’s own cells.

This new knowledge, described in the May 5 online edition of Nature, could help researchers develop vaccines that provoke the same response to HIV that individuals with HLA B57 muster on their own, says Walker, who is director of the Ragon Institute and a professor at Harvard Medical School.

“HIV is slowly revealing itself,” says Walker. “This is another point in our favor in the fight against the virus, but we have a long way to go.”

Chakraborty, a professor of chemical engineering, chemistry and biological engineering who specializes in theoretical and computational studies of the immune system, undertook this study after Walker told him about the phenomenon of HLA B57-induced immunity. Chakraborty was also intrigued by the fact that people who carry the HLA B57 gene also are more likely to develop autoimmune disorders.

Chakraborty, Walker and their colleagues focused on killer T cells, one of two types of T cells that play an important role in the immune response. Most killer T cells are genetically unique and recognize different pieces of foreign proteins, known as epitopes, attached to the surface of cells that have been infected by viruses or bacteria.

After a killer T cell grabs hold of such a protein, it becomes activated and starts sweeping the body for more cells that express the same protein, so it can kill them. It also clones itself to produce an army of T cells targeting the invader.

The new Ragon Institute study shows that individuals with the HLA B57 gene produce larger numbers of killer T cells that are cross-reactive, meaning they can attack more than one epitope associated with HIV, including mutants that arise to escape activated killer T cells.

The finding offers hope that researchers could design a vaccine to help draw out cross-reactive T cells in people who don’t have the HLA B57 gene. “It’s not that they don’t have cross-reactive T cells,” says Chakraborty. “They do have them, but they’re much rarer, and we think they might be coaxed into action with the right vaccine.”

The work is a valuable contribution to scientists’ understanding of HIV, says David Baltimore, professor of biology and former president of Caltech.

“This is a remarkable paper because it starts from a clinical observation, integrates it with experimental observations, generates a valuable model and derives from the model a deep understanding of the behavior of the human immune system. Rarely does one read a paper that stretches the mind so surprisingly far,” says Baltimore, a Nobel laureate in physiology or medicine who now studies HIV and human T cell interactions.

Chakraborty and colleagues had previously developed computational models of T-cell development in the thymus, an organ located behind the breastbone through which T cells must pass in order to become mature killers. There they undergo a selection process designed to weed out cells that might attack the body’s own cells (which display pieces of human proteins on their surface). T cells must also demonstrate that they can bind weakly to some human protein fragments. Only a tiny percentage of T cells pass these tests and are allowed to leave the thymus and circulate in the body to defend against viruses, other diseases, and cancerous cells.

Inside the thymus, T cells are exposed to “self-peptides” — small human protein fragments — bound to HLA proteins. Chakraborty and co-workers had previously shown that the diversity of self-peptide fragments presented in the thymus influences the kinds of T cells a person can produce. The type and number of self-peptides expressed are determined by the HLA genes, which have hundreds of distinct forms, including HLA B57. Each person carries up to six of them (three inherited from each parent).

Using data from previous studies, the Ragon team found that HLA B57 protein presents fewer types of self-peptides than most other HLA proteins. (HLA B27 is another protein that presents few types of self-peptides and also appears to protect against HIV and promote autoimmune disorders.) In this study, Chakraborty and postdoctoral fellow Elizabeth Read and graduate student Andrej Kosmrlj, lead authors of the paper, used their computer model to study what happens when maturing T cells are exposed to only a small diversity of self-peptides in the thymus.

T cells with receptors that bind strongly to any of the self-peptides in the thymus are forced to undergo cell suicide, because of their potential to attack the body’s own cells. Chakraborty and co-workers showed that this means that, for most individuals, most of the body’s T cells have receptors that bind to targeted viral proteins via a number of weak interactions, with each interaction making a significant contribution to the binding. Thus, a single mutation to an HIV peptide can potentially evade the immune response.

A different scenario unfolds in people who have the HLA B57 gene. Using their computer model, Chakraborty and colleagues showed that, because those individuals’ T cells are exposed to fewer self-peptides in the thymus, T cells with receptors that mediate strong binding to viral proteins via just a few important contacts are more likely to escape the thymus. This makes these T cells more cross-reactive to targeted HIV peptide mutants, because as long as those points in the viral proteins don’t mutate, the T cells are still effective. The model also showed that once those T cells are released into the bloodstream, they can effectively attack HIV proteins, even when the virus mutates.
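
The logic of that model can be caricatured in a few dozen lines. The toy simulation below is a heavily simplified sketch in the spirit of the string model described above, with arbitrary thresholds and random contact strengths of my own choosing, not the parameters of the published work. T cells that bind any self-peptide too strongly are deleted, and survivors are then scored on how many single-point mutants of a "viral" peptide they still recognize; in this toy setting, survivors of a thymus presenting fewer self-peptides typically come out more cross-reactive.

```python
# Toy version of the thymic-selection idea described above. All
# numbers are arbitrary illustrations, not the published model.
import random

random.seed(0)
L = 5            # number of peptide contact positions
E_DELETE = 2.6   # bind any self-peptide above this -> deleted
E_RECOG = 2.2    # bind a foreign peptide above this -> recognized

def peptide():
    return [random.random() for _ in range(L)]

def bind(tcr, pep):
    # Binding strength as a sum of per-position contact terms.
    return sum(t * p for t, p in zip(tcr, pep))

def surviving_tcrs(n_tcrs, self_peptides):
    # Negative selection: discard TCRs that bind any self-peptide
    # strongly. (Positive selection is omitted for simplicity.)
    return [t for t in (peptide() for _ in range(n_tcrs))
            if max(bind(t, s) for s in self_peptides) < E_DELETE]

def mean_cross_reactivity(tcrs, virus, n_mutants=100):
    # Fraction of single-point mutants still recognized, averaged
    # over the TCRs that recognize the original viral peptide.
    recognizers = [t for t in tcrs if bind(t, virus) > E_RECOG]
    if not recognizers:
        return 0.0
    total = 0.0
    for t in recognizers:
        hits = 0
        for _ in range(n_mutants):
            mut = list(virus)
            mut[random.randrange(L)] = random.random()  # point mutation
            hits += bind(t, mut) > E_RECOG
        total += hits / n_mutants
    return total / len(recognizers)

virus = peptide()
for n_self in (10, 500):   # HLA B57-like (few) vs. typical (many)
    survivors = surviving_tcrs(5000, [peptide() for _ in range(n_self)])
    xr = mean_cross_reactivity(survivors, virus)
    print(f"{n_self:3d} self-peptides: {len(survivors)} survivors, "
          f"mean cross-reactivity {xr:.2f}")
```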

This model also explains why people with the HLA B57 gene have autoimmune problems: Their T cells are more likely to bind strongly to human peptides not encountered in the thymus.

The computational studies explained many puzzles, but also made a prediction: Individuals with HLA genes that result in a display of fewer self-peptides should control HIV (and other viruses like the hepatitis C virus) better. To test this prediction, the researchers studied nearly 2,000 patients, 1,100 “HIV controllers” and 800 who progressed normally to AIDS, and confirmed that this appears to be true.

The melding of complementary approaches — clinical studies, basic immunology and computations rooted in engineering and the physical sciences — exemplified by this study is part of the mission of the Ragon Institute, founded last year to support discovery of an effective AIDS vaccine.

The idea is to break down the “scientific silos” in which many researchers work, says Walker. “Because of the Ragon funding, Arup and I had a conversation we never would have otherwise had,” he says. “We probably never would have met.”

There are now a few dozen researchers working on Ragon-funded HIV studies, and Walker believes those new collaborations can lead to many more successes. “There are people out there who have never worked on HIV problems that have something to immediately contribute, and this is a great example of that,” he says. “We have not yet brought the full potential of scientific knowledge to bear on trying to do something about HIV, which remains a horrific global problem.”

The research was also partly funded by Chakraborty’s NIH Director’s Pioneer Award. “Combining fundamental understanding of molecular mechanisms underlying immune function with computational methods for scanning large numbers of sequences allowed the development of a powerful explanation for a problem of great importance to public health. Such interdisciplinary research results are one of the goals of the NIH Director’s Pioneer Award program,” says Jeremy M. Berg, director of the National Institute of General Medical Sciences.

(Photo: Donna Coveney)

MIT

MALE OBESITY LINKED TO LOW TESTOSTERONE LEVELS

Obesity, a condition linked to heart disease and diabetes, now appears to be associated with another health problem, but one that affects men only -- low testosterone levels.

Results of a study published online ahead of print in the journal Diabetes Care, conducted by University at Buffalo endocrinologists, showed that 40 percent of obese participants involved in the Hypogonadism in Males (HIM) study had lower-than-normal testosterone readings.

The percentage rose to 50 percent among obese men with diabetes. Results also revealed that as body mass index (BMI) -- weight in kilograms divided by the square of height in meters -- increased, testosterone levels fell.

"The effect of diabetes on lowering testosterone levels was similar to that of a weight gain of approximately 20 pounds," says Sandeep Dhindsa, MD, an endocrinology specialist in the UB Department of Medicine and first author on the study.

"In view of the fact that almost one-third of the U.S. is obese, these observations have profound pathophysiological, clinical, epidemiological and public health implications."

This is the largest analysis of the association between obesity and low testosterone, and the first to compare prevalence of low testosterone with obesity and diabetes separately and together. The study shows that obesity and diabetes may exert independent influences on testosterone concentrations.

"We published a report in 2004 on the high prevalence of low testosterone levels in men with type 2 diabetes, and multiple studies all over the world have confirmed the association of low testosterone with diabetes," Dhindsa notes.

"The Endocrine Society now recommends that all men with type 2 diabetes should have their testosterone levels measured. Our new study shows that obese men also have a very high prevalence of low testosterone levels, so physicians should consider screening obese non-diabetic men, as well, for low testosterone."

The HIM study was funded by Solvay Pharmaceuticals Inc., and was conducted from November 2003 to February 2004 in 95 primary care practices throughout the U.S. The study involved 2,165 men 45 years or older who provided blood samples for analysis of testosterone concentrations.

UB researchers excluded participants from the full study who had no BMI data or were on certain drugs that can affect testosterone levels, providing a study population of 1,849 men -- 398 with diabetes and 1,451 non-diabetics.

"With the rising prevalence of obesity in the U.S. and the rest of the world," says Paresh Dandona, MD, head of the Division of Endocrinology, Diabetes and Metabolism at UB and Kaleida Health, and senior author of the study, "it is imperative that the prevalence of low testosterone levels in obese men be defined. In addition, the magnitude of the contribution of obesity to subnormal testosterone needs to be quantified.

"We hypothesized that obese men are more likely to have low testosterone than non-obese men, and that we would find more low testosterone levels in men with diabetes than in men without diabetes, both obese and non-obese."

Results confirmed these hypotheses, showing a 40 percent higher prevalence of low testosterone in obese men compared to the non-obese participants. Men with diabetes, whether obese or not, showed lower levels of testosterone than non-diabetic men across all weight categories. Testosterone levels decreased significantly in both diabetic and non-diabetic men as BMI increased.

"In view of the increasing prevalence of obesity, even in younger populations, it would be important to conduct a similar study in the men at the prime of their reproductive years," he says.

UB endocrinologists published a study in Diabetes Care in 2008 showing that more than 50 percent of men between 18 and 35 years old with type 2 diabetes had lower than normal testosterone levels.

"In view of the high rates of subnormal testosterone in patients with obesity or diabetes, testosterone concentrations should be measured regularly in these populations, especially when these conditions occur together," says Dandona.

University at Buffalo

SCIENTISTS CLOCK ONTO HOW SUNLIGHT PUTS A SPRING IN OUR STEP

Scientists have discovered two “body clock” genes that reveal how seasonal changes in hormones are controlled and could ultimately help find treatments for seasonal affective disorder.

Researchers at the Universities of Edinburgh and Manchester also found that one of these genes (EYA3) has a similar role in both birds and mammals, showing a common link that has been conserved for more than 300 million years.

Scientists studied thousands of genes in Soay sheep. This breed, which dates back to the Bronze Age, is considered one of the most primitive, with seasonal body clocks unaffected by cross-breeding through the centuries.

For a long time, scientists had speculated that a key molecule – termed tuberalin – was produced in the pituitary gland at the base of the brain and sent signals to release hormones involved in driving seasonal changes.

However, until now scientists have had no idea about the nature of this molecule, how it works or how it is controlled.

The team focussed on a part of the brain that responds to melatonin – a hormone known to be involved in seasonal timing in mammals.

The study revealed a candidate molecule for the elusive tuberalin, which communicates within the pituitary gland to signal the release of another hormone – prolactin – when days start getting longer. This helps animals adapt to seasonal changes in the environment.

The researchers, whose findings are published in the journal Current Biology, subsequently identified two genes – TAC1 and EYA3 – that were both activated early when natural hormone levels rise due to longer days.

Professor Dave Burt, of The Roslin Institute at the University of Edinburgh, said: “For more than a decade scientists have known about the presence of this mysterious molecule tuberalin, but until now nobody has known quite how it worked.

“Identifying these genes not only sheds light on how our internal annual body clocks function but also shows a key link between birds and mammals that has been conserved over 300 million years.”

The study suggests that the first gene, TAC1, could work only when the second gene, EYA3 – which is also found in birds – was present. EYA3 may act to regulate TAC1 so that it can be switched on in response to increasing day length.

Professor Andrew Loudon, of the University of Manchester’s Faculty of Life Sciences, said: “A lot of our behaviour is controlled by seasons. This research sheds new light on how animals adapt to seasonal change, which impacts on factors including hibernation, fat deposition and reproduction as well as the ability to fight off diseases.”

(Photo: U. Manchester)

University of Manchester

LOW-MAINTENANCE STRAWBERRY MAY BE GOOD CROP TO GROW IN SPACE

Astronauts could one day tend their own crops on long space missions, and Purdue University researchers have found a healthy candidate to help satisfy a sweet tooth - a strawberry that requires little maintenance and energy.

Cary Mitchell, professor of horticulture, and Gioia Massa, a horticulture research scientist, tested several cultivars of strawberries and found one variety, named Seascape, which seems to meet the requirements for becoming a space crop.

"What we're trying to do is grow our plants and minimize all of our inputs," Massa said. "We can grow these strawberries under shorter photoperiods than we thought and still get pretty much the same amount of yield."

Seascape strawberries are day-neutral, meaning their flowering isn't sensitive to the length of available daylight. Seascape was tested with as much as 20 hours of daylight and as little as 10 hours. While there were fewer strawberries with less light, each berry was larger and the total yield was statistically the same.

"I was astounded that even with a day-neutral cultivar we were able to get basically the same amount of fruit with half the light," Mitchell said.

The findings, which were reported online early in the journal Advances in Space Research, showed that the Seascape strawberry cultivar is a good candidate for a space crop because it meets several guidelines set by NASA. Strawberry plants are relatively small, meeting mass and volume restrictions. Since Seascape provides fewer, but larger, berries under short days, there is less labor required of crew members who would have to pollinate and harvest the plants by hand. Needing less light cuts down energy requirements not only for lamps, but also for systems that would have to remove heat created by those lights.

"We're trying to think of the whole system -- growing food, preparing it and getting rid of the waste," Massa said. "Strawberries are easy to prepare and there's little waste."

Seascape also had less cycling, meaning it steadily supplied fruit throughout the test period. Massa said the plants kept producing fruit for about six months after starting to flower.

Mitchell said the earliest space crops will likely be part of a "salad machine," a small growth unit that will provide fresh produce that can supplement traditional space meals. Crops being considered include lettuces, radishes and tomatoes. Strawberries may be the only sweet fruit being considered, he said.

"The idea is to supplement the human diet with something people can look forward to," Mitchell said. "Fresh berries can certainly do that."

Judith Santini, a research statistical analyst in Purdue's Department of Agronomy, was responsible for data analysis from the tests.

Mitchell and Massa said they next plan to test Seascape strawberries using LED lighting, hydroponics and different temperature ranges. NASA funded their work.

(Photo: Purdue Agricultural Communication/Tom Campbell)

Purdue University

CELL PHONES COULD DOUBLE AS NIGHT VISION DEVICES

Call it Nitelite: The newest app for cell phones might be night vision. A University of Florida engineering researcher has crafted a nickel-sized night vision imaging device that uses organic light-emitting diode technology similar to that found in cell phone or laptop screens. But unlike night vision goggles, which are heavy and expensive, the device is paper-thin, light and inexpensive, making it a possible add-on to cell phone cameras, even eyeglasses, once it is enlarged.

“Really, this is a very inexpensive device,” said Franky So, a UF professor of materials science and engineering. “Incorporating it into a cell phone might not be a big deal.”

So is the lead author of a paper about the infrared-to-vision device that appeared in a recent issue of the journal Advanced Materials. Do Young Kim, a postdoctoral associate in materials science and engineering, co-authored the paper and collaborated with So on the project.

Standard night vision goggles use a photocathode to convert invisible infrared light photons into electrons. The electrons are accelerated under high voltage and driven into a phosphorous screen, producing greenish images of objects not visible to the eye in darkness. The process requires thousands of volts and a cathode ray tube-like vacuum tube made of thick glass. That is why the goggles tend to be bulky and heavy.

So’s imaging device replaces the vacuum tube with several layers of organic semiconductor thin-film materials. The structure is simple: it consists of a photodetector connected in series with an LED. When operating, infrared light photons are converted into electrons in the photodetector, and these photo-generated electrons are injected into the LED, generating visible light. The device – versions range from millimeter- to nickel-size – currently uses glass, but it could be made with plastics, which would make it lightweight.

Conventional night vision goggles or scopes weigh 1 to 2 pounds, with price tags ranging from hundreds to thousands of dollars. Sized for cell phones, So said, his imaging devices weigh just a couple of ounces and would be inexpensive to manufacture because factories could use the same equipment used today to make laptop screens or flat-screen televisions.

So said other applications could include night vision technology for car windshields, or even for standard glasses to use at night.

So’s research is funded by Nanoholdings LLC, a Connecticut-based diversified nano-energy company that licenses and develops nano-energy discoveries in partnership with universities and their scientists, and the Defense Advanced Research Projects Agency. A UF startup company, NirVision, a portfolio company of Nanoholdings, was recently formed to further develop and commercialize the technology for different market segments.

University of Florida

APHIDS EVOLVED SPECIAL, SURPRISING TALENTS

Contrary to popular belief, aphids are not just sap-sucking, plant-destroying enemies of agriculture. In fact, these pests are genetic pioneers that evolved two unique traits, according to a study that appeared in the April 30 issue of the journal Science.

First, aphids are, so far, the only animals known to produce essential pigments called carotenoids; this pigment-producing ability is unique in the animal kingdom. Other animals, including humans, that need carotenoids cannot produce these essentials themselves; instead, they must obtain carotenoids from food.

Why are carotenoids needed by many plants and animals? Because they provide vital support to varied functions, ranging from promoting immunity to reducing cell damage and providing color to fruits and vegetables. For example, carotenoids give tomatoes their red color and flamingoes their pink color. Carotenoids also determine whether aphids are red or green--a color distinction that influences their vulnerability to predators and other threats.

As for the second unique trait, aphids probably acquired their carotenoid-producing ability through a rare, and perhaps unique, process: millions of years ago, aphids apparently "snatched" carotenoid-producing genes from a carotenoid-producing member of the fungi kingdom, and then snapped those snatched genes into their own genetic code.

Gene transfer between organisms is not itself a rare phenomenon. However, the fungi-to-aphid gene transfer is the only known gene transfer between members of the fungi kingdom and animal kingdom--which are so evolutionarily distant from one another that it was long thought that never the twain would genetically meet.

But by busting through kingdom barriers, aphids gained something akin to a "genetic magic wand" that empowered them to produce their own carotenoids. They were thereby freed of the need to scavenge for carotenoid-yielding foods. The result: one less chore on the aphid's "to do" list, and a new self-sufficiency for these insects.

No one knows what compelled genes to jump from fungi to aphids. But "the transferred fungi genes may have originated from a closely associated fungus, such as one of the fungi that causes diseases in aphids," says Nancy Moran of the University of Arizona, the lead author of the Science paper. "Because the carotenoid-producing genes were the only fungus-related genes that we found in the aphid genes, we think that the fungi-to-aphid transfer was an extremely rare event."

"This is a very big discovery," says Matt Kane of the National Science Foundation. "By recognizing the horizontal transfer of nutritionally important carotenoid genes, Nancy Moran and her colleagues are the first to discover that gene transfer can occur between very distantly related groups of higher, multi-cellular organisms such as fungi and insects."

The foundation for the discovery of the fungi-to-aphid gene transfer was laid when a research team that included Moran constructed the first map of the entire genetic code of aphids. Then, when follow-up studies of the aphid's genetic map were conducted by a different research team led by Moran, the presence of carotenoid-producing genes was discovered.

Because a few cases of bacterium-to-animal gene transfer are known and because aphids have close associations with bacterial symbionts, bacteria were initially considered a more likely suspect for genetic swapping with aphids than were the more genetically complex fungi. But after identifying signature similarities between the sequences and arrangements of the aphid and fungi carotenoid-producing genes, Moran's team was able to eliminate bacteria, as well as laboratory contamination, as potential sources for the aphids' carotenoid-producing genes.

(Photo: Zina Deretsky, National Science Foundation)

National Science Foundation
