Thursday, February 3, 2011



Scientists have found that the pleasurable experience of listening to music releases dopamine, a neurotransmitter in the brain important for more tangible pleasures associated with rewards such as food, drugs and sex. The new study from The Montreal Neurological Institute and Hospital – The Neuro at McGill University also reveals that even the anticipation of pleasurable music induces dopamine release, just as cues for food, drugs and sex do.

Published in the prestigious journal Nature Neuroscience, the results suggest why music, which has no obvious survival value, is so significant across human society.

The team at The Neuro measured dopamine release in response to music that elicited "chills", as well as changes in skin conductance, heart rate, breathing and temperature that were correlated with pleasurability ratings of the music. 'Chills', or 'musical frisson', is a well-established marker of peak emotional responses to music. A novel combination of PET and fMRI brain imaging techniques revealed that dopamine release is greater for pleasurable than for neutral music, and that levels of release are correlated with the extent of emotional arousal and pleasurability ratings. Dopamine is known to play a pivotal role in establishing and maintaining behavior that is biologically necessary.

"These findings provide neurochemical evidence that intense emotional responses to music involve ancient reward circuitry in the brain," says Dr. Robert Zatorre, neuroscientist at The Neuro. "To our knowledge, this is the first demonstration that an abstract reward such as music can lead to dopamine release. Abstract rewards are largely cognitive in nature, and this study paves the way for future work to examine non-tangible rewards that humans consider rewarding for complex reasons."

"Music is unique in the sense that we can measure all reward phases in real-time, as it progresses from baseline neutral to anticipation to peak pleasure all during scanning," says lead investigator Valorie Salimpoor, a graduate student in the Zatorre lab at The Neuro and McGill psychology program. "It is generally a great challenge to examine dopamine activity during both the anticipation and the consumption phase of a reward. Both phases are captured together online by the PET scanner, which, combined with the temporal specificity of fMRI provides us with a unique assessment of the distinct contributions of each brain region at different time points."

This innovative study, using a novel combination of imaging techniques, reveals that both the anticipation and the experience of listening to pleasurable music induce release of dopamine, a neurotransmitter vital for reinforcing behavior that is necessary for survival. The study also showed that two different brain circuits are involved in anticipation and experience, respectively: one linked to cognitive and motor systems, and hence prediction; the other to the limbic system, and hence the emotional part of the brain. These two phases also map onto related concepts in music, such as tension and resolution.

(Photo: McGill U.)

McGill University


The findings show that missing a night of sleep burns roughly 135 calories, the equivalent of two slices of bread or a 225 ml glass of semi-skimmed milk. In terms of physical exertion, this amounts to walking just under two miles. On the flip side, eight hours of sleep saved the same approximate amount of energy.
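
As a back-of-the-envelope check of these equivalences (the per-mile and per-slice figures below are typical textbook values, not numbers taken from the study itself):

```python
# Rough sanity check of the article's equivalences for the ~135 kcal
# burned by missing a night of sleep. The per-item figures are common
# reference values, not numbers from the study.
KCAL_PER_MILE_WALKED = 75.0   # assumed energy cost of walking one mile
KCAL_PER_BREAD_SLICE = 67.0   # assumed energy content of one slice of bread

extra_kcal = 135.0  # extra energy burned during a night of sleep deprivation

print(f"{extra_kcal / KCAL_PER_MILE_WALKED:.1f} miles walked")     # 1.8 miles
print(f"{extra_kcal / KCAL_PER_BREAD_SLICE:.1f} slices of bread")  # 2.0 slices
```

Both figures line up with the article's "just under two miles" and "two slices of bread".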

'While the amount of energy saved during sleep may seem small, it was actually more than we expected,' says Professor Kenneth Wright, lead author of the study and Director of the University of Colorado's Sleep and Chronobiology Laboratory. 'If one considers that the amount of positive energy storage needed to explain the obesity epidemic is 50 calories a day (Hill et al., 2003), the energy savings represented by sleep is physiologically meaningful.'

The tightly controlled study included seven carefully vetted young adult subjects, with the participants required to stay in bed and follow a weight maintenance diet for the entire three-day duration. The first day established baseline data and consisted of a typical 16 hours of wakefulness followed by eight hours of sleep. Days two and three included 40 hours of total sleep deprivation followed by eight hours of recovery sleep.

The findings showed that, compared to a typical night of sleep, the amount of energy expended by the subjects during 24 hours of sleep deprivation increased by about seven per cent. In contrast, energy expenditure decreased by about five per cent during the recovery episode, which included 16 hours of wakefulness (following the sleep deprivation night) and then eight hours of recovery sleep.

The study shows a direct link between the sleep–wake cycle and how the body uses energy, and demonstrates that sleep deprivation is metabolically costly. 'The function of sleep, especially in humans, is considered one of the most important scientific enigmas,' commented Wright, who hopes the new data will help researchers better understand one piece of the sleep puzzle.

One question arising from the study concerns why humans don't conserve more energy during sleep. 'There are other functions of sleep that are important and cost energy,' explains Wright. 'Some conserved energy may be re-distributed to support vital physiological processes like learning and memory consolidation, immune function, and hormone synthesis and release.'

Wright is also quick to caution that energy expenditure during sleep deprivation is neither a safe nor effective strategy for weight loss as other studies have shown that chronic sleep deprivation is associated with impaired cognition and weight gain. He hopes recently initiated research in his lab will shed further light on this relationship and generate useful findings for 'at risk' members of the public.




When we catch a cold, the immune system steps in to defend us. This is a well-known biological fact, but it is difficult to observe directly. Processes at the molecular level are not only minuscule, they are often extremely fast, and therefore difficult to capture in action. Scientists at Helmholtz-Zentrum Berlin für Materialien und Energie (HZB) and the Technische Universität Berlin (TUB) now present a method that takes us a good step towards producing a "molecular movie". They can record two pictures at such a short time interval that it will soon be possible to observe molecules and nanostructures in real time.

A "molecular movie" that shows how a molecule behaves at the crucial moment of a chemical reaction would help us better understand fundamental processes in the natural sciences. Such processes are often only a few femtoseconds long. A femtosecond is a millionth of a billionth of a second.

While it is possible to record a single femtosecond picture using an ultra-short flash of light, it has never been possible to take a sequence of pictures in such rapid succession. On a detector that captures the image, the pictures would overlap and "wash out". An attempt to swap or refresh the detector between two images would simply take too long, even if it could be done at the speed of light.

In spite of these difficulties, members of the joint research group "Functional Nanomaterials" of HZB and the Technische Universität Berlin have now managed to take ultrafast image sequences of objects mere micrometres in size using pulses from the X-ray laser FLASH in Hamburg, Germany. Furthermore, they chart a path by which their approach can be scaled to nanometer resolution in the future. Together with colleagues from DESY and the University of Münster, they have published their results in the journal Nature Photonics (DOI: 10.1038/NPHOTON.2010.287).

The researchers came up with an elegant way to descramble the information superimposed by the two subsequent X-ray pulses. They encoded both images simultaneously in a single X-ray hologram. It takes several steps to obtain the final image sequence: First, the scientists split the X-ray laser beam into two separate beams. Using multiple mirrors, they force one beam to take a short detour, which causes the two pulses to reach the object under study at ever so slightly different times: the pulses arrive just 50 femtoseconds, or 0.00000000000005 seconds, apart. Due to a specific geometric arrangement of the sample, the pulses generate a "double-hologram". This hologram encodes the structure of the object at the two times at which the X-ray pulses hit. Using a mathematical reconstruction procedure, the researchers can then simply associate the images with the respective X-ray pulses and thus determine the image sequence in correct temporal order.

Using their method, the scientists recorded two pictures of a micro-model of the Brandenburg Gate, separated by only 50 femtoseconds. "In this short time interval, even a ray of light travels no further than the width of a human hair," says PhD student Christian Günther, the first author of the publication. The short-wavelength X-rays used make it possible to reveal extremely small details, since the shorter the wavelength of light you use, the smaller the objects you can resolve.
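
A quick back-of-the-envelope check of the light-travel comparison (the speed of light is the standard constant; the hair-width figure is a typical value, not from the article):

```python
# How far does light travel in the 50 fs separating the two X-ray pulses?
c = 2.998e8    # speed of light in vacuum, m/s
dt = 50e-15    # 50 femtoseconds, in seconds

distance_um = c * dt * 1e6   # metres -> micrometres
print(f"{distance_um:.0f} micrometres")  # → 15 micrometres
```

At roughly 15 µm, the distance is indeed no wider than a typical human hair, which measures tens of micrometres across.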

"The long-term goal is to be able to follow the movements of molecules and nanostructures in real time," says project head Prof. Dr. Stefan Eisebitt. The extremely high temporal resolution in conjunction with the possibility to see the tiniest objects was the motivation to develop the new technique. A picture may be worth a thousand words, but a movie made up of several pictures can tell you about an object's dynamics.

(Photo: HZB/Eisebitt)

The Helmholtz Association


Enrique García Artero, the principal author of the study and a researcher at the University of Granada, pointed out: "Our objective was to analyse the relationship between the duration of breastfeeding in babies and their physical condition in adolescence. The results suggest further beneficial effects and support breastfeeding as superior to any other type of feeding."

The authors asked the parents of 2,567 adolescents about the type of feeding their children received at birth and the time this lasted. The adolescents also carried out physical tests in order to evaluate several abilities such as aerobic capacities and their muscular strength.

The paper, which was published in the Journal of Nutrition, shows that the adolescents who were breastfed as babies had stronger leg muscles than those who were not breastfed. Moreover, muscular leg strength was greater in those who had been breastfed for a longer period of time.

This type of feeding (exclusively or in combination with other types of food) is associated with a better performance in horizontal jumping by boys and girls regardless of morphological factors such as fat mass, height of the adolescent or the amount of muscle.

Adolescents who were breastfed from three to five months, or for more than six months had half the risk of low performance in the jump exercise when compared with those who had never been breastfed.

García Artero stressed: "Until now, no studies have examined the association between breastfeeding and future muscular aptitude. However, our results concur with observations that other neonatal factors, such as weight at birth, are positively related to better muscular condition during adolescence."

What importance does breastfeeding have?

"If all children were exclusively breastfed from birth, it would be possible to save approximately 1.5 million lives." This was stated by UNICEF, which describes breast milk as the "perfect feed": exclusive during the first six months of life and complementary to other foods for two years or more.

As regards the newborn, the advantages in the first years of life include immunological protection against allergies, skin diseases, obesity and diabetes, as well as support for the baby's growth, development and intelligence.

The benefits for the mother are also substantial: reduced post-birth haemorrhage, anaemia, maternal mortality and risk of breast and ovarian cancer, as well as a stronger affective link between mother and child. "Let's not forget the money saved by not buying other types of milk and baby bottles," says García Artero.

Fundación Española para la Ciencia y la Tecnología



Fewer children who attend regular formal centre- and family-based child care at 1.5 and 3 years of age are late talkers, compared with children who are looked after at home by a parent or child-carer, or in an outdoor nursery. This is shown in a new study of nearly 20,000 children by the Norwegian Institute of Public Health.

The study found no relation between the type of child care at the age of 1 year and subsequent language competence, which may indicate that the positive effect of centre-based child care first occurs between the ages of 1 to 1.5 years.

Furthermore, there were fewer children who were late talkers among those who attended full-time centre-based child care compared with part-time attendance at 3 years of age.

The findings support most of the previous research showing that children who have been in formal child care have better language skills than children who have had more informal care.

The study, “Does universally accessible child care protect children from late talking? Results from a Norwegian population-based prospective study,” is published by Ratib Lekhal and co-authors in the journal Early Child Development and Care.

The publication is based on information about 19,919 children collected by the Norwegian Mother and Child Cohort Study (MoBa) and the Medical Birth Registry of Norway at the Norwegian Institute of Public Health. This is a correlational study, meaning that one can comment on the relationship between centre-based child care and language development, but not directly on its cause. However, a number of other possible factors, such as income, parental education, mother tongue and parental age, are accounted for. The child's health at birth and social communication before starting formal child care were also accounted for.

Studies show that about 12 per cent of three-year-old children have either delayed language or show other deviations from normal language development. The prevalence varies according to the definition of language problems.

For about half of the children, difficulties in learning language are transient. For other children, the difficulties persist throughout school years and could have implications for how the child is able to adapt socially and function in school, work and society.

For many of the children, language difficulties disappear in the transition between pre-school and school, but appear in the form of reading and writing difficulties during the early school years.

The publication is part of the Norwegian Institute of Public Health's Language and Learning study. This project will provide new knowledge about children's language development, the development trajectory in children with language difficulties and how they cope in school. The study will also reveal whether the organisation of learning activities in formal child care affects children's academic and social development through their school years. Knowledge from studies of different development trajectories will eventually be used to better target treatment measures for children with language difficulties.

The project uses information from MoBa and hopes to conduct a clinical study of children with language difficulties at the age of 5.

(Photo: Nasjonalt folkehelseinstitutt)

The Norwegian Institute of Public Health


For those of you who resolve to lose weight this new year, here's a tip: skip the bundled "value meals" offered by fast-food restaurants.

These meals add more than just fries and a soft drink to cost-conscious consumers' meals; they also provide more calories than diners might expect.

That's the conclusion of research by Richard Staelin from Duke University's Fuqua School of Business and Kathryn M. Sharpe at the University of Virginia's Darden School of Business, who studied consumers' purchase patterns and eating behavior when presented with bundled and à la carte options from fast-food menus. They found that most diners don't realize they end up eating larger amounts of unhealthy food when they order combination menu items.

Sharpe and Staelin's findings, published in the fall issue of the Journal of Public Policy & Marketing, indicate consumers respond positively to the perceived cost savings and the simplified ordering process of a value meal versus à la carte options. These effects occur even when there are no cost savings associated with choosing the combo meal.

"The perceived value of a bundled meal encourages consumers to super-size their orders," Staelin said. "Our study found 26 percent of participants increased the size of the meal bundle when given the combo meal option, consuming more than 100 additional calories per meal compared to à la carte menu items at the same prices."

The research surveyed 215 U.S. adults over the age of 21 who indicated they ate at a fast-food restaurant at least once a month. Participants were selected from a demographically diverse sample of the U.S. population.

The study first asked participants to imagine they were traveling on a cross-country road trip and stopping at nine different fast-food outlets. For each hypothetical outlet, participants viewed pictures of the types of menu items they could choose (entrees, drinks and fries) and were shown the amount they would have spent on their selected meals.

In addition to the imaginary road trip, participants were offered further meal choices with varying portion sizes and item combinations for a total of 33 meal selection options.

Staelin and Sharpe believe policy options, such as a tax on soft drinks and other foods, will not induce consumers to substantially reduce their consumption of unhealthy foods.

The researchers recommend implementing size standards and reducing drink and side-item portion sizes within the combo meal option, which would decrease calorie consumption.

Duke University



Girls who begin menstruating at an early age are at greater risk of depressive symptoms during their adolescence, according to new research by academics from the University of Bristol and the University of Cambridge.

The research published in the January issue of the British Journal of Psychiatry examined the link between timing of a first period and depressive symptoms in a sample of 2,184 girls taking part in a long-term study known as the Avon Longitudinal Study of Parents and Children (ALSPAC).

The researchers used a structural equation model to examine the association between onset of menstruation and depressive symptoms at ages 10.5, 13 and 14 years.

The mean age at which the girls in the study group started menstruating was 12 years and 6 months. They found that girls who started their periods early (before the age of 11.5 years) had the highest levels of depressive symptoms at ages 13 and 14. Girls who started their periods later (after the age of 13.5 years) had the lowest levels of depressive symptoms.

Lead researcher Dr Carol Joinson, Lecturer in the School of Social and Community Medicine at Bristol University, said: “Our study found that girls who mature early are more vulnerable to developing depressive symptoms by the time they reach their mid-teens. This suggests that later maturation may be protective against psychological distress.

“The transition into puberty is a critical developmental period, associated with many biological, cognitive and social changes. These can include increased conflict with parents, the development of romantic relationships, changes in body image and fluctuating hormone levels. These changes may have a more negative impact on girls who mature at an early age than those who mature later. Early maturing girls may feel isolated, and faced with demands which they are not emotionally prepared for.”

Dr Joinson concluded: “If girls who reach puberty early are at greater risk of psychological problems in adolescence, it may be possible to help them with school- and family-based programmes aimed at early intervention and prevention.”

However, it is still unclear from the current results whether an early first period is associated with persistent adverse consequences for emotional development beyond the mid-adolescent period. The researchers point out that girls who mature later may eventually experience similar levels of psychological distress to those who mature earlier, once sufficient time has passed.

(Photo: Bristol U.)

University of Bristol



The school dance committee is split: one group wants an "Alice in Wonderland" theme; the other insists on "Vampire Jamboree." Mathematics could have predicted it.

Social scientists have long argued that when under stress, social networks either end up all agreeing or splitting into two opposing factions. Either condition is referred to as "structural balance."

New Cornell research has generated a mathematical description of how this evolves. Previous mathematical approaches to structural balance have proven that when conditions are right, the result of group conflict will be a split into just two groups, the researchers said. The new work shows for the first time the steps through which friendships and rivalries shift over time and who ends up on each side.

"Structural balance theory applies in situations where there's a lot of social stress -- gossip disparaging to one person, countries feeling pressure, companies competing -- where we need to make alliances and find our friends and enemies," said Cornell Ph.D. candidate Seth Marvel, lead author of a paper explaining the research published during the week of Jan. 3, 2011, in the online edition of the Proceedings of the National Academy of Sciences with co-authors Jon Kleinberg, the Tisch University Professor of Computer Science, Robert Kleinberg, assistant professor of computer science, and Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics.

People may form alliances based on shared values, or may consider the social consequences of allying with a particular person, Marvel said. "The model shows that the latter is sufficient to divide a group into two factions," he said.

The model is a simple differential equation applied to a matrix, or grid of numbers, that can represent relationships between people, nations or corporations. The researchers tested their model on a classic sociological study of a karate club that split into two groups and got results that matched what happened in real life. They plugged in data on international relations prior to World War II and got almost perfect predictions on how the Axis and Allied alliances formed.
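
A minimal numerical sketch of this kind of model (an illustrative re-implementation assuming the dynamics dX/dt = X·X reported in the paper, run on made-up random initial friendliness values, not the authors' own code or data):

```python
import numpy as np

# Continuous-time structural balance model: dX/dt = X @ X, where X is a
# symmetric matrix of friendliness values (positive = friendly, negative
# = hostile). Generic initial conditions blow up in finite time, and the
# sign pattern at blow-up describes the resulting factions.
rng = np.random.default_rng(0)
n = 6
X = rng.uniform(-1.0, 1.0, (n, n))
X = (X + X.T) / 2.0            # relationships are mutual: keep X symmetric
np.fill_diagonal(X, 0.0)

dt = 1e-3
while np.abs(X).max() < 1e12:  # crude Euler integration until blow-up
    X = X + dt * (X @ X)

signs = np.sign(X)
# The final state is balanced when every triangle of distinct members
# has a positive product of edge signs.
balanced = all(signs[i, j] * signs[j, k] * signs[i, k] > 0
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))
print("balanced:", balanced)
```

Near blow-up the matrix becomes effectively rank-one, so each member's faction is set by the sign of their component in the dominant eigenvector, which is how the model decides "who ends up on each side".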

The smallest unit in such a network is a "relationship triangle," between, say, Bob and Carol and Ted, which can have four states:

* They're all good friends. That's a stable or "balanced" situation.
* Bob and Carol are friends, but neither gets along with Ted. That's also balanced.
* Bob and Carol dislike each other, but both are friends with Ted. Ted will try to get Bob and Carol together; he will either succeed or end up alienating one or both of his friends, so the situation is unbalanced and has a tendency to change.
* All three dislike each other. Various pairs will try to form alliances against the third, so this situation is also unbalanced.
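
These four cases follow a single rule from structural balance theory: encode each relationship as +1 (friends) or -1 (enemies); a triangle is balanced exactly when the product of its three signs is positive. A minimal sketch:

```python
# A triangle is balanced when the product of its edge signs is positive
# (+1 = friends, -1 = enemies).
def is_balanced(bob_carol: int, bob_ted: int, carol_ted: int) -> bool:
    return bob_carol * bob_ted * carol_ted > 0

print(is_balanced(+1, +1, +1))  # all friends           -> True  (balanced)
print(is_balanced(+1, -1, -1))  # Bob & Carol vs. Ted   -> True  (balanced)
print(is_balanced(-1, +1, +1))  # Ted friends with both -> False (unbalanced)
print(is_balanced(-1, -1, -1))  # all enemies           -> False (unbalanced)
```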

"Every choice has consequences for other triangles," Strogatz explained, so unbalanced triangles kick off changes that propagate through the entire system. Too often the final state consists of two groups, each with all positive connections among themselves and all negative connections with members of the opposite group.

Is there a way to avoid the mathematical certainty of a split, perhaps to make Republicans and Democrats in Congress less polarized? It depends on the initial conditions, Marvel said. The model shows that if the "mean friendliness" (the average strength of connections across the entire network) is positive, the system evolves to a single, all-positive pattern. "The model shows how to influence the result, but it doesn't tell you how to get there," Kleinberg cautioned.

Marvel plans to test the model on other social networks, and perhaps work with psychologists to do lab experiments with human subjects. But he too cautions against leaning too heavily on the equations for practical advice. "This is a simple model and deterministic, and people aren't deterministic," he said.

(Photo: From "Networks" by Jon Kleinberg and David Easley)

Cornell University



Boredom is provoking as many as a third of drivers to take unnecessary risks at the wheel, a new study has found.

Researchers at Newcastle University found that drivers who didn’t find the highways taxing enough were more prone to speeding or overtaking as they sought excitement. As a result the researchers suggest that making roads more complicated by building in more obstacles could actually make them safer.

The study of 1,563 drivers published in Transportation Planning and Technology, highlights to planners that efforts to make roads safer could unintentionally provoke more accidents as people may take risks to liven up their journey.

Edmund King, president of the AA and Visiting Professor of Transport at Newcastle University said: “As cars come fitted with more gadgets to make driving easier and planners remove more of the distractions it comes as no surprise to me that people are finding the pleasure of driving has become rather a chore. With that comes an increase in the risks drivers take as they mentally switch-off instead of focussing on the road.”

In the study, all drivers were put into four groups. The first category, made up of nearly a third of drivers (31%), included those who are "easily bored, nervous and dangerous" and who are more likely to have an accident. While, perhaps unsurprisingly, more younger drivers fell into this category, more women than might be expected were also found in this group looking for driving thrills.

The largest group, making up 35% of the driving population, are described as “enthusiastic”. The Newcastle University researchers found that they were less likely to have a crash because they find driving more challenging or intrinsically interesting. This kind of motorist enjoys driving, is calmer and is therefore less likely to have an accident.

More than one in five drivers (21%) were found to "drive slowly and dislike driving". While unlikely to be fined for speeding, they also drove the least. The smallest group, just 13% of motorists, were labelled "safe and slow". Admitting to driving slowly in cities, they were also the most positive about life in general.

Lead researcher Dr Joan Harvey said: “It would be nice to think that we could train people to be better drivers, but we think that those people who would most benefit from training are the least likely to take part. So we've considered the other options. Contrary to what you might expect, hazards can actually increase our attention to the road when driving, so this may well be the way forward for planners.

"We may need to start considering some radical schemes such as putting bends back into roads or introducing the concept of shared space as it would force motorists to think about their driving and pedestrians to think about cars."

The Newcastle research team, including Dr Neil Thorpe and Simon Heslop, asked 1,563 UK drivers aged 17 and upwards to complete a questionnaire about their driving style and personality. They were also asked to estimate the speed they would drive on four different road types. This information formed the basis for identifying clusters of drivers based on their boredom levels. These boredom-based clusters were then compared in terms of age, sex, personality, attitude, emotions and accident and offence data.

(Photo: Newcastle U.)

Newcastle University



Since 1846, when a Boston dentist named William Morton gave the first public demonstration of general anesthesia using ether, scientists and doctors have tried to figure out what happens to the brain during general anesthesia.

Though much has been learned since then, many aspects of general anesthesia remain a mystery. How do anesthetic drugs interfere with neurons and brain chemicals to produce the profound loss of consciousness and lack of pain typical of general anesthesia? And, how does general anesthesia differ from sleep or coma?

Emery Brown, an MIT neuroscientist and practicing anesthesiologist at Massachusetts General Hospital, wants to answer those questions by bringing the rigorous approach of neuroscience to the study of general anesthesia. In a review article published online Dec. 29 in the New England Journal of Medicine, he and two colleagues lay out a new framework for studying general anesthesia by relating it to what is already known about sleep and coma.

Such an approach could help researchers discover new ways to induce general anesthesia, and improve our understanding of other brain states such as drug addiction, epilepsy and Parkinson’s disease, says Brown, who is a professor in the Department of Brain and Cognitive Sciences and the Harvard-MIT Division of Health Sciences and Technology.

“Anesthesia hasn’t been attacked as seriously as other questions in neuroscience, such as how the visual system works,” says Brown, who started studying general anesthesia several years ago. Neuroscientists study vision at all levels, including molecular, neurophysiological and theoretical. “Why shouldn’t we be doing the same thing for questions of general anesthesia?” he asks.

In the United States, 60,000 surgical patients undergo general anesthesia every day. Though doctors sometimes tell their patients that they will be “going to sleep” during a surgical procedure, that is not accurate, says Brown. “This may sound nitpicky, but we need to speak precisely about what this state is,” he says. “This paper is an attempt to start at square one and get clear definitions in place.”

General anesthesia is not simply a deep sleep, Brown emphasizes. In fact, part of the reason that he and his colleagues wrote the NEJM paper is to make doctors more aware of the differences and similarities between general anesthesia, sleep and coma. Co-author Ralph Lydic, a neuroscientist at the University of Michigan, is an expert in sleep, and Nicholas Schiff, another co-author and neurologist at Weill Cornell Medical College, is an expert in coma.

In the NEJM paper, the authors define general anesthesia as a “drug-induced, reversible condition that includes specific behavioral and physiological traits” — unconsciousness, amnesia, pain numbing, and inability to move. Also key is the stability of body functions such as respiration, circulation and temperature regulation.

Using EEG (electroencephalography) readings, which reveal electrical activity in the brain, Brown and his colleagues show that even the deepest sleep is not as deep as the lightest general anesthesia.

Throughout the night, the sleeping brain cycles through three stages of non-REM sleep, alternating with REM (rapid eye movement) sleep, which is when most dreaming occurs. Each of these has a distinctive EEG pattern. None of them, however, resembles the EEG of a brain under general anesthesia. In fact, general anesthesia EEG patterns are most similar to those of a comatose brain. As Brown points out, general anesthesia is essentially a "reversible coma."

Indeed, the early clinical signs of emergence from general anesthesia — return of regular breathing, return of movements and cognition — parallel those of recovery from a coma, though compressed over minutes instead of the hours (or even years) it takes to come out of a coma.

In addition, the paper offers explanations for the clinical signs of loss of consciousness induced by general anesthesia, based on the underlying neural circuits. The authors also explain a phenomenon known as “paradoxical excitation,” in which anesthetic drugs actually increase brain activity while inducing loss of consciousness. As one example, the authors describe how the drug ketamine produces unconsciousness by inhibiting neurons whose job is to restrain other neurons, leading to overexcitation in several brain regions. This overexcitation explains the hallucinations seen with ketamine, a common drug of abuse also known as “Special K.”

Though general anesthesia is seen as a routine procedure, it does hold some risk. Estimated mortality directly attributable to anesthesia is one in 250,000. The drug believed to have caused Michael Jackson’s death, propofol, is a potent anesthetic.

“Anesthetics are very powerful medications with a very narrow safety margin, as evidenced by the unfortunate events surrounding Michael Jackson’s death,” says Andreas Loepke, associate professor of clinical anesthesia and pediatrics at the University of Cincinnati College of Medicine. “These medications carry potent side effects, such as respiratory depression, loss of protective airway reflexes, blood-pressure instability, as well as nausea and vomiting.”

A better understanding of how general anesthesia works at the cellular and molecular level could help researchers develop anesthetic drugs that lack those side effects, says Loepke, who was not involved with this paper.

Brown also hopes his work will lead to new anesthetic drugs. To that end, he has several ongoing studies in which he is recording electrical activity from the brains of animals under general anesthesia, as well as imaging human brains. From these studies, he hopes to learn more about which parts of the brain become more or less active during general anesthesia.

(Photo: Dr. Emery Brown)

Massachusetts Institute of Technology



About one-third of the human population is infected with a parasite called Toxoplasma gondii, but most of them don’t know it.

Though Toxoplasma causes no symptoms in most people, it can be harmful to individuals with suppressed immune systems, and to fetuses whose mothers become infected during pregnancy. Particularly dangerous strains, found mainly in South America, are the leading cause of blindness in Brazil.

Toxoplasma is one of the very few parasites that can infect nearly any warm-blooded animal. Its spores are found in dirt and easily infect farm animals such as cows, sheep, pigs and chickens. Humans can be infected by eating undercooked meat or unwashed vegetables.

“It’s everywhere, and you just need one spore to become infected,” says Jeroen Saeij, an assistant professor of biology at MIT. “Most cases don’t kill the host but establish a lifelong, chronic infection, mainly in the brain and muscle tissue.”

Saeij is investigating a key question: why certain strains of the Toxoplasma parasite (there are at least a dozen) are more dangerous to humans than others.

He and his colleagues have focused their attention on the type II strain, which is the most common in the United States and Europe, and is also the most likely to produce symptoms. In a paper appearing in the Jan. 3 online edition of The Journal of Experimental Medicine, the researchers report the discovery of a new Toxoplasma protein that may help explain why type II is more virulent than others.

Toxoplasma infection rates vary around the world. In the United States, it’s about 10 to 15 percent, while rates in Europe and Brazil are much higher, around 50 to 80 percent. However, these are only estimates — it is difficult to calculate precise rates because most infected people don’t have any symptoms.

After an infection is established, the parasite forms cysts, which contain many slowly reproducing parasites, in muscle tissue and the brain. If the cysts rupture, immune cells called T cells will usually kill the parasites before they spread further. However, people with suppressed immune systems, such as AIDS patients or people undergoing chemotherapy, can’t mount an effective defense.

“In AIDS patients, T cells are essentially gone, so once a cyst ruptures, it can infect more brain cells, which eventually causes real damage to the brain,” says Saeij.

The infection can also cause birth defects, if the mother is infected for the first time while pregnant. (If she is already infected before becoming pregnant, there is usually no danger to the fetus.)

There are drugs that can kill the parasite when it first infects someone, but once cysts are formed, it is very difficult to eradicate them.

A few years ago, Saeij and colleagues showed that the Toxoplasma parasite secretes two proteins called rhoptry18 and rhoptry16 into the host cell. Those proteins allow the parasite to take over many host-cell functions.

“That was a major earthquake in the field,” says Eric Denkers, professor of immunology at Cornell University College of Veterinary Medicine, who was not involved in the new MIT paper. “This paper will be a major aftershock.”

In the new study, the MIT team showed that the parasite also secretes a protein called GRA15, which triggers inflammation in the host. All Toxoplasma strains have this protein, but only the version found in type II causes inflammation, an immune reaction that is meant to destroy invaders but can also damage the host’s own tissues if unchecked. In the brain, inflammation can lead to encephalitis. This ability to cause inflammation likely explains why the type II strain is so much more hazardous for humans, says Saeij.

Saeij and his team, which included MIT Department of Biology graduate students Emily Rosowski and Diana Lu, showed that type II GRA15 leads to the activation of the transcription factor known as NF-kB, which eventually stimulates production of proteins that cause inflammation. The team is now trying to figure out how that interaction between GRA15 and NF-kB occurs, and why it is advantageous to the parasite.

Ultimately, Saeij hopes to figure out how the parasite is able to evade the immune system and establish a chronic infection. Such work could eventually lead to new drugs that block the parasite from establishing such an infection, or a vaccine that consists of a de-activated form of the parasite.

(Photo: Saeij Laboratory)

Massachusetts Institute of Technology



Ask a computer to add 100 and 100, and its answer will be 200. But what if it sometimes answered 202, and sometimes 199, or any other number within about 1 percent of the correct answer?

Arithmetic circuits that returned such imprecise answers would be much smaller than those in today’s computers. They would consume less power, and many more of them could fit on a single chip, greatly increasing the number of calculations it could perform at once. The question is how useful those imprecise calculations would be.

If early results of a research project at MIT are any indication, the answer is: surprisingly useful. About a year ago, Joseph Bates, an adjunct professor of computer science at Carnegie Mellon University, was giving a presentation at MIT and found himself talking to Deb Roy, a researcher at MIT’s Media Lab. Three years earlier, before the birth of his son, Roy had outfitted his home with 11 video cameras and 14 microphones, intending to flesh out what he calls the “surprisingly incomplete and biased observational data” about human speech acquisition. Data about a child’s interactions with both its caregivers and its environment could help confirm or refute a number of competing theories in developmental psychology. But combing through more than 100,000 hours of video for, say, every instance in which either a child or its caregivers says “ball,” together with all the child’s interactions with actual balls, is a daunting task for human researchers and artificial-intelligence systems alike. Bates had designed a chip that could perform tens of thousands of simultaneous calculations using sloppy arithmetic and was looking for applications that lent themselves to it.

Roy and Bates knew that algorithms for processing visual data are often fairly error-prone: A system that identifies objects in static images, for instance, is considered good if it’s right about half the time. Increasing a video-processing algorithm’s margin of error ever so slightly, the researchers reasoned, probably wouldn’t compromise its performance too badly. And if the payoff was the ability to do thousands of computations in parallel, Roy and his colleagues might be able to perform analyses of video data that they hadn’t dreamt of before.

So in May 2010, with funding from the U.S. Office of Naval Research, Bates came to MIT as a visiting professor, working with Roy’s group to determine whether video algorithms could be retooled to tolerate sloppy arithmetic. George Shaw, a graduate student in Roy’s group, began by evaluating an algorithm, commonly used in object-recognition systems, that distinguishes foreground and background elements in frames of video.

To simulate the effects of a chip with imprecise arithmetic circuits, Shaw rewrote the algorithm so that the results of all its numerical calculations were either raised or lowered by a randomly generated factor of between 0 and 1 percent. Then he compared its performance to that of the standard implementation of the algorithm. “The difference between the low-precision and the standard arithmetic was trivial,” Shaw says. “It was about 14 pixels out of a million, averaged over many, many frames of video.” “No human could see any of that,” Bates adds.
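The kind of perturbation Shaw describes can be illustrated in a few lines: each exact arithmetic result is scaled by a random factor within one percent before being accumulated. This is a sketch of the idea, not Shaw's actual code, and the frame-difference computation below is a made-up toy example.

```python
import random

def sloppy(value, tolerance=0.01):
    """Perturb an exact arithmetic result by a random factor
    drawn uniformly from within +/- tolerance (here 1 percent)."""
    return value * (1.0 + random.uniform(-tolerance, tolerance))

# Toy pipeline: sum of squared pixel differences between two frames.
frame_a = [100, 150, 200, 250]
frame_b = [102, 149, 198, 251]

exact = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))
noisy = sum(sloppy((a - b) ** 2) for a, b in zip(frame_a, frame_b))

print(exact)                                 # 10
print(abs(noisy - exact) / exact <= 0.0101)  # True
```

Because each perturbed term errs by at most one percent and the terms are all non-negative, the aggregate can never drift further than about one percent from the exact answer, which matches the tiny pixel-level discrepancies Shaw measured.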

Of course, a really useful algorithm would have to do more than simply separate foregrounds and backgrounds in frames of video, and the researchers are exploring what tasks to tackle next. But Bates’ chip design looks to be particularly compatible with image and video processing. Although he hasn’t had the chip manufactured yet, Bates has used standard design software to verify that it will work as anticipated. Where current commercial computer chips often have four or even eight “cores,” or separate processing units, Bates’ chip has a thousand; since they don’t have to provide perfectly precise results, they’re much smaller than conventional cores.

But the chip has another notable idiosyncrasy. In most commercial chips, and even in many experimental chips with dozens of cores, any core can communicate with any other. But sending data across the breadth of a chip consumes much more time and energy than sending it locally. So in Bates’ chip, each core can communicate only with its immediate neighbors. That makes it much more efficient — a chip with 1,000 cores would really be 1,000 times faster than a conventional chip — but it also limits its use. Any computation that runs on the chip has to be easily divided into subtasks whose results have consequences mainly for small clusters of related subtasks — those running on the adjacent cores.

Fortunately, video processing seems to fit the bill. Digital images are just big blocks of pixels, which can be split into smaller blocks of pixels, each of which is assigned its own core. If the task is to, say, determine whether the image changes from frame to frame, each core need report only on its own block. The core associated with the top left corner of the image doesn’t need to know what’s happening in the bottom right corner.
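The division of labor described above can be sketched as follows: a frame is cut into tiles, and each tile, standing in for one core, reports only on its own block. This is a toy illustration under assumed data structures, not the chip's actual programming model.

```python
def tiles(frame, size):
    """Split a 2-D frame (a list of rows) into size x size tiles,
    yielding (tile_row, tile_col, tile) for each block."""
    for r in range(0, len(frame), size):
        for c in range(0, len(frame[0]), size):
            yield r // size, c // size, [row[c:c + size] for row in frame[r:r + size]]

def changed_tiles(prev, curr, size=2, threshold=0):
    """Each 'core' compares only its own block between frames and
    reports whether that block changed; no core sees the whole image."""
    flagged = []
    for (r, c, a), (_, _, b) in zip(tiles(prev, size), tiles(curr, size)):
        diff = sum(abs(x - y) for row_a, row_b in zip(a, b)
                   for x, y in zip(row_a, row_b))
        if diff > threshold:
            flagged.append((r, c))
    return flagged

prev = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
curr = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 9, 0],
        [0, 0, 0, 0]]
print(changed_tiles(prev, curr))  # [(1, 1)]
```

Only the tile containing the changed pixel reports anything, which is exactly the locality property that lets each core talk only to its neighbors.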

Bates has identified a few other problems that his chip also handles well. One is a standard problem in computer science called “nearest-neighbor search,” in which you have a set of objects that can each be described by hundreds or thousands of criteria, and you want to find the one that best matches some sample. Another is computer analysis of protein folding, in which you need to calculate all the different ways in which the different parts of a long biological molecule could interact with each other.
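A brute-force version of nearest-neighbor search shows why the problem suits such a chip: each candidate's distance to the sample is computed independently, so the comparisons parallelize naturally, and a small error in any one distance rarely changes which candidate wins. This is a minimal sketch, not code from the project.

```python
def nearest_neighbor(sample, candidates):
    """Return the candidate with the smallest squared Euclidean
    distance to the sample; each distance is an independent subtask."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda c: sq_dist(sample, c))

points = [(0, 0), (5, 5), (2, 1), (9, 3)]
print(nearest_neighbor((2, 2), points))  # (2, 1)
```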

Bob Colwell, who was the chief architect on several of Intel’s Pentium processors and has been a private consultant since 2000, thinks that the most promising application of Bates’ chip could be in human-computer interactions. “There’s a lot of places where the machine does a lot of work on your behalf just to get information in and out of the machine suitable for a human being,” Colwell says. “If you put your hand on a mouse, and you move it a little bit, it really doesn’t matter where exactly the mouse is, because you’re in the loop. If you don’t like where the cursor goes, you’ll move it a little more. Real accuracy in the input is really not necessary.” A system that can tolerate inaccuracy in the input, Colwell argues, can also tolerate (some) inaccuracy in its calculations. The graphics processors found in most modern computers are another example, Colwell says, since they work furiously hard to produce 3-D images that probably don’t need to be rendered perfectly.

Bates stresses that his chip would work in conjunction with a standard processor, shouldering a few targeted but labor-intensive tasks, and Colwell says that, depending on how Bates’ chip is constructed, there could be some difficulty in integrating it with existing technologies. But he doesn’t see any of the technical problems as insurmountable. Still, “there’s going to be a fair amount of people out in the world that as soon as you tell them I’ve got a facility in my new chip that gives sort-of wrong answers, that’s what they’re going to hear no matter how you describe it,” he adds. “That’s kind of a non-technical barrier, but it’s real nonetheless.”

(Photo: Christine Daniloff)

Massachusetts Institute of Technology



The magnitude of climate change during Earth’s deep past suggests that future temperatures may eventually rise far more than projected if society continues its pace of emitting greenhouse gases, a new analysis concludes. The study, by National Center for Atmospheric Research (NCAR) scientist Jeffrey Kiehl, will appear as a “Perspectives” piece in the journal Science.

Building on recent research, the study examines the relationship between global temperatures and high levels of carbon dioxide in the atmosphere tens of millions of years ago. It warns that, if carbon dioxide emissions continue at their current rate through the end of this century, atmospheric concentrations of the greenhouse gas will reach levels that last existed about 30 million to 100 million years ago, when global temperatures averaged about 29 degrees Fahrenheit (16 degrees Celsius) above pre-industrial levels.

Kiehl said that global temperatures may gradually rise over the next several centuries or millennia in response to the carbon dioxide. Elevated levels of the greenhouse gas may remain in the atmosphere for tens of thousands of years, according to recent computer model studies of geochemical processes that the study cites.

The study also indicates that the planet’s climate system, over long periods of time, may be at least twice as sensitive to carbon dioxide as currently projected by computer models, which have generally focused on shorter-term warming trends. This is largely because even sophisticated computer models have not yet been able to incorporate critical processes, such as the loss of ice sheets, that take place over centuries or millennia and amplify the initial warming effects of carbon dioxide.
Jeffrey Kiehl: Learning about future climate from Earth's deep past

“If we don’t start seriously working toward a reduction of carbon emissions, we are putting our planet on a trajectory that the human species has never experienced,” says Kiehl, a climate scientist who specializes in studying global climate in Earth’s geologic past. “We will have committed human civilization to living in a different world for multiple generations.”

The Perspectives article pulls together several recent studies that look at various aspects of the climate system, while adding a mathematical approach by Kiehl to estimate average global temperatures in the distant past. Its analysis of the climate system’s response to elevated levels of carbon dioxide is supported by previous studies that Kiehl cites. The work was funded by the National Science Foundation, NCAR’s sponsor.

Kiehl focused on a fundamental question: When was the last time Earth’s atmosphere contained as much carbon dioxide as it may by the end of this century?

If society continues on its current pace of increasing the burning of fossil fuels, atmospheric levels of carbon dioxide are expected to reach about 900 to 1,000 parts per million by the end of this century. That compares with current levels of about 390 parts per million, and pre-industrial levels of about 280 parts per million.

Since carbon dioxide is a greenhouse gas that traps heat in Earth’s atmosphere, it is critical for regulating Earth’s climate. Without carbon dioxide, the planet would freeze over. But as atmospheric levels of the gas rise, which has happened at times in the geologic past, global temperatures increase dramatically and additional greenhouse gases, such as water vapor and methane, enter the atmosphere through processes related to evaporation and thawing. This leads to further heating.

Kiehl drew on recently published research that, by analyzing molecular structures in fossilized organic materials, showed that carbon dioxide levels likely reached 900 to 1,000 parts per million about 35 million years ago.

At that time, temperatures worldwide were substantially warmer than at present, especially in polar regions—even though the Sun’s energy output was slightly weaker. The high levels of carbon dioxide in the ancient atmosphere kept the tropics at about 9-18 degrees F (5-10 degrees C) above present-day temperatures. The polar regions were some 27-36 degrees F (15-20 degrees C) above present-day temperatures.

Kiehl applied mathematical formulas to calculate that Earth’s average annual temperature 30 to 40 million years ago was about 88 degrees F (31 degrees C)—substantially higher than the pre-industrial average temperature of about 59 degrees F (15 degrees C).

The study also found that carbon dioxide may have at least twice the effect on global temperatures that computer models of global climate currently project.

The world’s leading computer models generally project that a doubling of carbon dioxide in the atmosphere would have a heating impact in the range of 0.5 to 1.0 degrees Celsius per watt per square meter. (The unit is a measure of the sensitivity of Earth’s climate to changes in greenhouse gases.) However, the published data show that the comparable impact of carbon dioxide 35 million years ago amounted to about 2 degrees Celsius per watt per square meter.
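To see what those sensitivity figures imply, they can be multiplied by the radiative forcing from a doubling of atmospheric carbon dioxide, commonly estimated at about 3.7 watts per square meter. That forcing figure is a standard estimate supplied here for illustration; it is not a number from Kiehl's paper.

```python
FORCING_CO2_DOUBLING = 3.7  # watts per square meter; standard estimate, assumed here

for sensitivity in (0.5, 1.0, 2.0):  # degrees Celsius per watt per square meter
    warming = sensitivity * FORCING_CO2_DOUBLING
    print(f"{sensitivity} C per W/m^2 -> {warming:.2f} C of eventual warming")
```

Under that assumption, the models' range implies roughly 2 to 4 degrees Celsius of warming for a doubling, while the paleoclimate-derived sensitivity implies more than 7 degrees Celsius.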

Computer models successfully capture the short-term effects of increasing carbon dioxide in the atmosphere. But the record from Earth's geologic past also encompasses longer-term effects, which accounts for the discrepancy in findings. The eventual melting of ice sheets, for example, leads to additional heating because exposed dark surfaces of land or water absorb more heat than ice sheets.

“This analysis shows that on longer time scales our planet may be much more sensitive to greenhouse gases than we thought,” Kiehl says.

Climate scientists are currently adding more sophisticated depictions of ice sheets and other factors to computer models. As these improvements come online, Kiehl believes that the computer models and the paleoclimate record will be in closer agreement, showing that the impacts of carbon dioxide on climate over time will likely be far more substantial than recent research has indicated.

Because carbon dioxide is being pumped into the atmosphere at a rate that has never been experienced, Kiehl could not estimate how long it would take for the planet to fully heat up. However, a rapid warm-up would make it especially difficult for societies and ecosystems to adapt, he says.

If emissions continue on their current trajectory, “the human species and global ecosystems will be placed in a climate state never before experienced in human history,” the paper states.

(Photo: ©UCAR, Photo by Andrew Watt)

National Center for Atmospheric Research



By eavesdropping on the activity of a single brain cell, Yale University researchers can predict the outcome of decisions such as whether you will dip into your retirement account to buy a Porsche.

In a study published online January 12 in the journal Neuron, the research team helped identify areas of the brain involved in the choice between taking an immediate reward or deferring for a larger but delayed payoff.

The decision involves a network that links multiple areas of the brain in a complex feedback loop.

“But in the instant before the choice is made, we can predict the outcome of the decision by listening to the firing activity in a single neuron,” said Daeyeol Lee, associate professor of neurobiology and psychology at Yale School of Medicine and senior author of the study.

Scientists have described in general terms how the brain responds to potential rewards, such as food, alcohol or sex. However, Lee’s team looked at the information processed at the level of both brain regions and individual cells.

They recorded activity in individual neurons of monkeys as the animals were offered choices between smaller immediate rewards and larger ones delivered after a delay. Like humans, monkeys tend to opt for immediate gratification.

They found, in hundreds of tests, that the activity of a single brain cell differed depending on whether the monkey sought an immediate reward or a delayed one.

The firing of individual neurons was part of a larger regional pattern of activity. The researchers found that the basal ganglia, an area of the brain best known for controlling motor function, appears to help assess both the magnitude of the reward and the time it takes for the reward to be received. In an earlier study, Lee’s team found that the prefrontal cortex, an area of the brain associated with working memory and rational thinking, served a similar function. They also found that two areas of the basal ganglia play quite different roles. One area, the dorsal striatum, helps target the reward, and a second, the ventral striatum, helps evaluate the reward already chosen.

This pattern reveals an important piece of the puzzle regarding how the prefrontal cortex and areas closely connected to it, such as the basal ganglia, orchestrate their activity to reach a consensus when different goals create a conflict.

“We don’t know the anatomical basis of lots of the psychiatric disorders, problem gambling or impulsive behavior,” Lee said. “Now we are starting to pinpoint those areas, even down to individual neurons.”

(Photo: Yale U.)

Yale University


We all want an apology when someone does us wrong. But a new study, published in Psychological Science, a journal of the Association for Psychological Science, finds that people aren’t very good at predicting how much they’ll value an apology.

Apologies have been in the news a lot the last few years in the context of the financial crisis, says David De Cremer of Erasmus University in the Netherlands. He cowrote the study with Chris Reinders Folmer of Erasmus University and Madan M. Pillutla of London Business School. “Banks didn’t want to apologize because they didn’t feel guilty but, in the public eye, banks were guilty,” De Cremer says. But even when some banks and CEOs did apologize, the public didn’t seem to feel any better. “We wondered, what was the real value of an apology?”

De Cremer and his colleagues used an experiment to examine how people think about apologies. Volunteers sat at a computer and were given 10 euros to either keep or give to a partner, with whom they communicated via computer. The money was tripled so that the partner received 30 euros. Then the partner could choose how much to give back—but he or she only gave back five euros. Some of the volunteers were given an apology for this cheap offer, while others were told to imagine they’d been given an apology.
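The payoff structure of this trust game can be written out explicitly. This is a sketch of the arithmetic as described above, not the researchers' experimental materials.

```python
def trust_game(sent, returned, endowment=10, multiplier=3):
    """Payoffs in the trust game described above: whatever the
    investor sends is tripled for the partner, who may return some."""
    investor = endowment - sent + returned
    partner = sent * multiplier - returned
    return investor, partner

# A volunteer sends the full 10 euros; the partner receives 30,
# keeps 25, and returns only 5 -- the 'cheap offer' in the study.
print(trust_game(sent=10, returned=5))  # (5, 25)
```

The lopsided outcome (5 euros for the investor versus 25 for the partner) is what made the offer feel unfair and an apology seem warranted.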

The people who imagined an apology valued it more than the people who actually received one. This suggests that people are pretty poor forecasters when it comes to what is needed to resolve conflicts. Although they want an apology, and thus rate it as highly valuable, the actual apology is less satisfying than predicted.

“I think an apology is a first step in the reconciliation process,” De Cremer says. But “you need to show that you will do something else.” He and his co-authors speculate that, because people imagine that apologies will make them feel better than they actually do, an apology might be better at convincing outside observers that the wrongdoer feels bad than at actually making the wronged party feel better.

Psychological Science



