Tuesday, August 31, 2010

SCIENTISTS UNCOVER ACHILLES HEEL OF CHRONIC INFLAMMATORY PAIN

Researchers have made a discovery that could lead to a brand new class of drugs to treat chronic pain caused by inflammatory conditions such as arthritis and back pain without numbing the whole body.

The team, funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and working at UCL (University College London), have shown for the first time that genes involved in chronic pain are regulated by molecules inside cells called small RNAs. This mechanism is so different from what has already been discovered about the biology underpinning pain that it could be the Achilles heel of chronic inflammatory pain, which is notoriously difficult to treat. The research appears in The Journal of Neuroscience.

Lead researcher Professor John Wood from UCL said "When a person experiences chronic pain as a result of some sort of inflammation - as in arthritis - their pain threshold goes down very dramatically. What they can normally do without pain, such as walking or putting on clothes, becomes very painful.

"Chronic inflammatory pain can be treated with pain-killing drugs - analgesics - but these usually have an impact on the whole body and may also dull our experience of acute pain, which is actually very important as it protects us from injury. Just imagine if you didn't get a sharp pain when you accidentally touch the oven - you wouldn't be compelled to take your hand away quickly and could end up with a serious burn.

"What we would really like to be able to do is return the pain thresholds to normal in a person who has chronic inflammatory pain, rather than just numbing the whole body. This would mean that they still get the protection of acute pain. Currently, aspirin-like drugs that can do this have a number of side effects but the present discovery might make it possible to invent a class of drugs that act in a completely novel way."

The researchers studied mice that lack an enzyme called Dicer in some of their nerve cells and found that they respond normally to acute pain but don't seem to be bothered by anything that would usually cause chronic inflammatory pain. This is because Dicer makes small RNAs, which they now know are required for regulation of genes involved in chronic inflammatory pain. Without Dicer the small RNAs aren't made and without the small RNAs many of these genes are expressed at low levels. So, for example, molecules such as sodium channels that make pain nerves responsive to inflammation are produced at low levels and therefore inflammatory pain is not detected by the mouse's body.

Professor Wood concluded "Knowing that small RNAs are so important in chronic inflammatory pain provides a new avenue for developing drugs for some of the most debilitating and life-long conditions out there. We have identified small RNAs, which are possible drug targets".

Professor Douglas Kell, BBSRC Chief Executive said "It is extremely important to be able to find out as much as possible about the fundamental processes of 'normal' biology, as a vehicle for understanding what may go wrong. Because these researchers have made efforts to unpick what is happening at a molecular level in our nerves, they have been able to lay the groundwork for future drug development in the important area of chronic pain. This is an excellent example of the basic research we have to do to help ensure that our increasing lifespan does not mean that the later years of our lives are spent in ill health and discomfort."

Biotechnology and Biological Sciences Research Council

BERKELEY STUDY SHOWS OZONE AND NICOTINE A BAD COMBINATION FOR ASTHMA

Another reason for including asthma on the list of potential health risks posed by secondhand tobacco smoke, especially for non-smokers, has been uncovered. Furthermore, the practice of using ozone to remove the smell of tobacco smoke from indoor environments, including hotel rooms and the interiors of vehicles, is probably a bad idea.

A new study by researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab) shows that ozone can react with the nicotine in secondhand smoke to form ultrafine particles that may become a bigger threat to asthma sufferers than nicotine itself. These ultrafine particles also become major components of thirdhand smoke – the residue from tobacco smoke that persists long after a cigarette or cigar has been extinguished.

“Our study reveals that nicotine can react with ozone to form secondary organic aerosols that are less than 100 nanometers in diameter and become a source of thirdhand smoke,” says Mohamad Sleiman, a chemist with the Indoor Environment Department of Berkeley Lab’s Environmental Energy Technologies Division (EETD) who led this research.

“Because of their size and high surface area to volume ratio, ultrafine particles have the capacity to carry and deposit potentially harmful organic chemicals deep into the lower respiratory tract where they promote oxidative stress,” Sleiman says. “It’s been well established by others that the elderly and the very young are at greatest risk.”

Results of this study have been reported in the journal Atmospheric Environment in a paper titled “Secondary organic aerosol formation from ozone-initiated reactions with nicotine and secondhand tobacco smoke.” Co-authoring this paper with Sleiman were Hugo Destaillats and Lara Gundel, also with EETD’s Indoor Environment Department, and Jared Smith, Chen-Lin Liu, Musahid Ahmed and Kevin Wilson with the Chemical Dynamics Group of Berkeley Lab’s Chemical Sciences Division. The study was carried out under a grant from the University of California’s Tobacco-Related Disease Research Program.

The dangers of mainstream and secondhand tobacco smoke, which contain several thousand chemical toxins distributed as particles or gases, have been well documented. This past February, a study, also spearheaded by Sleiman, Destaillats and Gundel, revealed the potential health hazards posed by thirdhand tobacco smoke, which was shown to react with nitrous acid, a common indoor air pollutant, to produce dangerous carcinogens. Until now, however, there have been no studies of whether the reaction of nicotine with ozone forms ultrafine particles.

Released as a vapor by the burning of tobacco, nicotine adsorbs strongly and persistently onto indoor surfaces and is released back into indoor air for months after smoking has ceased. Ozone is a common urban pollutant that infiltrates indoor spaces from outdoor air through ventilation and has been linked to health problems, including asthma and other respiratory ailments.

Says co-author Gundel, “Not only did we find that nicotine from secondhand smoke reacts with ozone to make ultrafine particles – a new and stunning development – but we also found that several oxidized products of ozone and nicotine have higher values on the asthma hazard index than nicotine itself.”

Says co-author Destaillats, “In our previous study, we found that carcinogens were formed on indoor surfaces, which can lead to exposures that are likely to be dominated by dermal uptake and dust ingestion. This study suggests a different exposure pathway to aged secondhand or thirdhand smoke through the formation and inhalation of ultrafine particles. Also, our group had previously described the formation of secondary organic aerosols in reaction of indoor ozone with terpenoids, commonly present in household products. But this is the first time that nicotine has been tagged as a potential candidate to form ultrafine particles or aerosols through a reaction with ozone.”

To identify the products formed when nicotine in secondhand smoke reacts with ozone, Sleiman and his co-authors utilized the unique capabilities of Berkeley Lab’s Advanced Light Source (ALS), a premier source of x-ray and ultraviolet light for scientific research. Working at ALS Beamline 9.0.2, which is optimized for the study of chemical dynamics using vacuum ultraviolet (VUV) light and features an aerosol chemistry experimental station, the researchers found new chemical compounds forming within one hour after the start of the reaction.

“The tunable VUV light of Beamline 9.0.2’s custom-built VUV aerosol mass spectrometer minimized the fragmentation of organic molecules and enabled us to chemically characterize the secondhand smoke and identify individual constituents of secondary organic aerosols,” says Sleiman. “The identification of multifunctional compounds, such as carbonyls and amines, present in the ultrafine particles, made it possible for us to estimate the Asthma Hazard Index for these compounds.”

While the findings in this study support recommendations from the California EPA and the Air Resources Board that discourage the use of ozone-generating “air purifiers,” which among other applications, have been used for the removal of tobacco odors, the Berkeley Lab researchers caution that the levels of both ozone and nicotine in their study were at the high end of typical indoor conditions.

Says Sleiman, “In addition, we need to do further investigations to verify that the formation of ultrafine particles occurs under a range of real-world conditions. However, given the high levels of nicotine measured indoors when smoking takes place regularly and the significant yield of ultrafine particle formation in our study, our findings suggest a new link between asthma and exposure to secondhand and thirdhand smoke.”

(Photo: Roy Kaltschmidt, Berkeley Lab Public Affairs)

Lawrence Berkeley National Laboratory

ASTEROID FOUND IN GRAVITATIONAL DEAD ZONE

There are places in space where the gravitational tugs of a planet and the Sun balance out, allowing smaller bodies to remain stable. These places are called Lagrangian points. So-called Trojan asteroids have been found in some of these stable spots near Jupiter and Neptune. Trojans share their planet’s orbit and help astronomers understand how the planets formed and how the solar system evolved. Now Scott Sheppard, at the Carnegie Institution’s Department of Terrestrial Magnetism, and Chad Trujillo have discovered 2008 LC18, the first Trojan asteroid found in a difficult-to-detect stability region at Neptune called the Lagrangian L5 point.

They used the discovery to estimate the asteroid population there and found that it is similar to the asteroid population at Neptune’s L4 point. The research is published in the August 12, 2010, online issue of Science Express.

Sheppard explained: “The L4 and L5 Neptune Trojan stability regions lie about 60 degrees ahead of and behind the planet, respectively. Unlike the other three Lagrangian points, these two areas are particularly stable, so dust and other objects tend to collect there. We found 3 of the 6 known Neptune Trojans in the L4 region in the last several years, but L5 is very difficult to observe because the line-of-sight of the region is near the bright center of our galaxy.”
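To make the geometry concrete, the following minimal sketch (not from the Sheppard and Trujillo analysis) computes where L4 and L5 sit for a planet on a roughly circular orbit; the orbital radius and longitude used are illustrative values.

```python
import math

def lagrange_l4_l5(orbit_radius_au: float, planet_longitude_deg: float):
    """Heliocentric (x, y) positions, in AU, of the L4 and L5 points."""
    points = {}
    for name, offset_deg in (("L4", +60.0), ("L5", -60.0)):  # lead / trail the planet by 60 degrees
        lon = math.radians(planet_longitude_deg + offset_deg)
        points[name] = (orbit_radius_au * math.cos(lon),
                        orbit_radius_au * math.sin(lon))
    return points

# Neptune orbits at roughly 30.1 AU; the longitude here is an arbitrary example.
print(lagrange_l4_l5(30.1, 0.0))
```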

The scientists devised a unique observing strategy. Using images from the digitized all-sky survey they identified places in the stability regions where dust clouds in our galaxy blocked out the background starlight from the galaxy’s plane, providing an observational window to the foreground asteroids. They discovered the L5 Neptune Trojan using the 8.2-meter Japanese Subaru telescope in Hawaii and determined its orbit with Carnegie’s 6.5-meter Magellan telescopes at Las Campanas, Chile.

“We estimate that the new Neptune Trojan has a diameter of about 100 kilometers and that there are about 150 Neptune Trojans of similar size at L5,” Sheppard said. “It matches the population estimates for the L4 Neptune stability region. This makes the Neptune Trojans in the 100-kilometer range more numerous than those bodies in the main asteroid belt between Mars and Jupiter. There are fewer Neptune Trojans known simply because they are very faint since they are so far from the Earth and Sun.”

The L5 Trojan has an orbit that is very tilted to the plane of the solar system, just like several in L4. This suggests they were captured into these stable regions early in the solar system’s history, when Neptune was moving on a much different orbit than it is now. Capture could have occurred through a slow, smooth planetary migration process, or, as the giant planets settled into their orbits, their gravitational attraction could have caught and “frozen” asteroids into these spots. The solar system was likely a much more chaotic place during that time, with many bodies stirred up onto unusual orbits.

The region of space surveyed also included a volume through which the New Horizons spacecraft will pass after its encounter with Pluto in 2015. The work was funded in part by the New Horizons spacecraft mission to Pluto.

(Photo: Carnegie I.)

Carnegie Institution

SCIENTISTS CALL FOR A GLOBAL NUCLEAR RENAISSANCE IN NEW STUDY

Scientists outline a 20-year master plan for the global renaissance of nuclear energy that could see nuclear reactors with replaceable parts, portable mini-reactors, and ship-borne reactors supplying countries with clean energy, in research published in the journal Science.

The scientists, from Imperial College London and the University of Cambridge, suggest a two-stage plan in their review paper that could see countries with existing nuclear infrastructure replacing or extending the life of nuclear power stations, followed by a second phase of global expansion in the industry by the year 2030. The team say their roadmap could fill an energy gap as old nuclear, gas and coal fired plants around the world are decommissioned, while helping to reduce the planet’s dependency on fossil fuels.

Professor Robin Grimes, from the Department of Materials at Imperial College London, says: “Our study explores the exciting opportunities that a renaissance in nuclear energy could bring to the world. Imagine portable nuclear power plants that, at the end of their working lives, can be safely shipped back to the manufacturer for recycling, eliminating the need for countries to deal with radioactive waste. With the right investment, these new technologies could be feasible. Concerns about climate change, energy security and depleting fossil fuel reserves have spurred a revival of interest in nuclear power generation and our research sets out a strategy for growing the industry long-term, while processing and transporting nuclear waste in a safe and responsible way.”

The researchers suggest in their study that, based on how technologies are developing, new types of reactors that are much more efficient than current ones could come online by 2030. At the moment, most countries have light water reactors, which use only a small percentage of the uranium fuel for energy. The team suggest that new ‘fast reactors’ could be developed that would use uranium approximately 15 times more efficiently, which would mean that uranium supplies could last longer, helping to ensure energy security for countries.

Another idea is to develop reactors with replaceable parts so that they can last in excess of 70 years, compared with the 40 to 50 years that plants can currently operate. Reactors are subjected to harsh conditions, including extreme radiation and temperatures, meaning that parts degrade over time and shorten the life of the reactor. Making replaceable parts for reactors would make them more cost-effective and safer to run over longer periods of time.

Flexible nuclear technologies could be an option for countries that do not have an established nuclear industry, suggest the scientists. One idea involves ship-borne civil power plants that could be moored offshore, generating electricity for nearby towns and cities. This could reduce the need for countries to build large electricity grid infrastructures, making it more cost effective for governments to introduce a nuclear industry from scratch.

The researchers also suggest building small, modular reactors that never require refuelling. These could be delivered to countries as sealed units, generating power for approximately 40 years. At the end of their lives, the reactors would be returned to the manufacturer for decommissioning and disposal. Because fuel handling is avoided at the point of electricity generation, the team say radiation doses to workers would be reduced, meaning that the plants would be safer to operate.

The scientists believe the roll-out of flexible technologies that could be returned to the manufacturer at the end of their shelf life could also play an important role in preventing the proliferation of nuclear armaments, because only the country of origin would have access to spent fuel, meaning that other countries could not reprocess the fuel for use in weapons.

In the immediate future, the researchers suggest the first stage of the renaissance will see countries with existing nuclear energy infrastructure extending the life of current nuclear power plants. The researchers suggest this could be made possible by further developing technologies for monitoring reactors, enabling them to last longer because engineers can continually assess the safety and performance of the power plants.

The researchers say new global strategies for dealing with spent fuel and radioactive components will have to be devised. Until now, countries have not developed a coordinated strategy for dealing with waste. One suggestion is to develop regional centres, where countries can send their waste for reprocessing, creating new industries in the process.

Professor Grimes adds: “In the past, there has been the perception in the community that nuclear technology has not been safe. However, what most people don’t appreciate is just how much emphasis the nuclear industry places on safety. In fact, safety is at the very core of the industry. With continual improvements to reactor design, nuclear energy will further cement its position as an important part of our energy supply in the future.”

However, the authors caution that governments around the world need to invest more in training the next generation of nuclear engineers. Otherwise, the nuclear industry may not have enough qualified personnel to make the renaissance a reality.

Dr William Nuttall, University Senior Lecturer in Technology Policy at Cambridge Judge Business School, University of Cambridge, concludes: “The second phase of the ‘Two-Stage Nuclear Renaissance’ is not inevitable, but we would be foolish if we did not provide such an option for those that must make key energy technology decisions in the decades ahead. Too often, decisions shaping the direction of research and development in the nuclear sector are made as part of a strategy for eventual deployment. As such, small research capacities can become confused with multi-billion-dollar plans and stall as a result. Relatively modest research and development can, however, provide us with important options for the future. Such research and development capacities need to be developed now if they are to be ready when needed. While some good measures are already underway, the possible challenge ahead motivates even greater efforts.”

(Photo: ICL)

Imperial College London

NEW EVIDENCE THAT MATTER AND ANTIMATTER MAY BEHAVE DIFFERENTLY

Neutrinos, elementary particles generated by nuclear reactions in the sun, suffer from an identity crisis as they cross the universe, morphing between three different “flavors.” Their antimatter counterparts (which are identical in mass but opposite in charge and spin) do the same thing.

A team of physicists including some from MIT has found surprising differences between the flavor-switching behavior of neutrinos and antineutrinos. If confirmed, the finding could help explain why matter, and not antimatter, dominates our universe.

“People are very excited about it because it suggests that there are differences between neutrinos and antineutrinos,” says Georgia Karagiorgi, an MIT graduate student and one of the leaders of the analysis of experimental data produced by the Mini Booster Neutrino Experiment (MiniBooNE) at the Fermi National Accelerator Laboratory.

The new result, announced in June and submitted to the journal Physical Review Letters, appears to be one of the first observed violations of CP symmetry: the theory that matter and antimatter should behave in the same way. CP symmetry violation has been seen before in quarks, another type of elementary particle that makes up protons and neutrons, but never in neutrinos or electrons.

The finding could also force physicists to revise their Standard Model, which catalogs all of the known particles that make up matter. The model now posits only three flavors of neutrino, but a fourth (or fifth or sixth) may be necessary to explain the new results.

“If this should be proven to be correct, it would have major implications for particle physics,” says John Learned, professor of physics at the University of Hawaii, who is not part of the MiniBooNE team.

So far, the researchers have enough data to present their results with a confidence level of just below 99.7 percent (also called 3 sigma), which is not high enough to claim a new discovery. To reach that level, 5-sigma confidence (99.99994 percent) is required. “People are going to rightfully demand a really clean, 5-sigma result,” says Learned.
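For readers curious how those percentages follow from the sigma values, here is a minimal sketch using the standard two-sided normal-distribution convention; it is an illustration of the arithmetic, not part of the MiniBooNE analysis.

```python
from math import erf, sqrt

def sigma_to_confidence(n_sigma: float) -> float:
    """Fraction of a normal distribution lying within +/- n_sigma of the mean."""
    return erf(n_sigma / sqrt(2.0))

for n in (3, 5):
    print(f"{n} sigma -> {sigma_to_confidence(n) * 100:.5f}% confidence")
# 3 sigma -> 99.73002%  ("just below 99.7 percent")
# 5 sigma -> 99.99994%  (the discovery threshold)
```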

Since the 1960s, physicists have been gathering evidence that neutrinos can switch, or oscillate, between three different flavors — muon, electron and tau, each of which has a different mass. However, they have not yet been able to rule out the possibility that more types of neutrino might exist.

In an effort to help nail down the number of neutrinos, MiniBooNE physicists send beams of neutrinos or antineutrinos down a 500-meter tunnel, at the end of which sits a 250,000-gallon tank of mineral oil. When neutrinos or antineutrinos collide with a carbon atom in the mineral oil, the energy traces left behind allow physicists to identify what flavor of neutrino took part in the collision. Neutrinos, which have no charge, rarely interact with other matter, so such collisions are rare.

MiniBooNE was set up in 2002 to confirm or refute a controversial finding from an experiment at the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory. In the 1990s, LSND reported that a higher-than-expected number of antineutrinos appeared to be oscillating over relatively short distances, which suggests the existence of a fourth type of neutrino, known as a “sterile” neutrino.

In 2007, MiniBooNE researchers announced that their neutrino experiments did not produce oscillations similar to those seen at LSND. At the time, they assumed the same would hold true for antineutrinos. “In 2007, I would have told you that you can pretty much rule out LSND,” says MIT physics professor Janet Conrad, a member of the MiniBooNE collaboration and an author of the new paper.

MiniBooNE then switched to antineutrino mode and collected data for the next three years. The research team didn’t look at all of the data until earlier this year, when they were shocked to find more oscillations than would be expected from only three neutrino flavors — the same result as LSND.

Already, theoretical physicists are posting papers online with theories to account for the new results. However, “there’s no clear and immediate explanation,” says Karsten Heeger, a neutrino physicist at the University of Wisconsin. “To nail it down, we need more data from MiniBooNE, and then we need to experimentally test it in a different way.”

The MiniBooNE team plans to collect antineutrino data for another 18 months. Conrad also hopes to launch a new experiment that would use a cyclotron, a type of particle accelerator in which particles travel in a circle instead of a straight line, to help confirm or refute the MiniBooNE results.

(Photo: Fermilab)

Massachusetts Institute of Technology

PROBING THE NANOPARTICLE: PREDICTING HOW NANOPARTICLES WILL REACT IN THE HUMAN BODY

Researchers at North Carolina State University have developed a method for predicting the ways nanoparticles will interact with biological systems – including the human body. Their work could have implications for improved human and environmental safety in the handling of nanomaterials, as well as applications for drug delivery.

NC State researchers Dr. Jim Riviere, Burroughs Wellcome Distinguished Professor of Pharmacology and director of the university’s Center for Chemical Toxicology Research and Pharmacokinetics, Dr. Nancy Monteiro-Riviere, professor of investigative dermatology and toxicology, and Dr. Xin-Rui Xia, research assistant professor of pharmacology, wanted to create a method for the biological characterization of nanoparticles – a screening tool that would allow other scientists to see how various nanoparticles might react when inside the body.

“We wanted to find a good, biologically relevant way to determine how nanomaterials react with cells,” Riviere says. “When a nanomaterial enters the human body, it immediately binds to various proteins and amino acids. The molecules a particle binds with will determine where it will go.”

This binding process also affects the particle’s behavior inside the body. According to Monteiro-Riviere, the amino acids and proteins that coat a nanoparticle change its shape and surface properties, potentially enhancing or reducing characteristics like toxicity or, in medical applications, the particle’s ability to deliver drugs to targeted cells.

To create their screening tool, the team utilized a series of chemicals to probe the surfaces of various nanoparticles, using techniques previously developed by Xia. A nanoparticle’s size and surface characteristics determine the kinds of materials with which it will bond. Once the size and surface characteristics are known, the researchers can then create “fingerprints” that identify the ways that a particular particle will interact with biological molecules. These fingerprints allow them to predict how that nanoparticle might behave once inside the body.
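As a loose illustration of the fingerprint idea (not the NC State team's actual method), one could summarize each particle as a vector of relative binding affinities measured with different probe chemistries and compare particles by similarity; the probe categories and numbers below are made-up placeholders.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two fingerprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical fingerprints: relative affinity for (hydrophobic, hydrogen-bonding, charged) probes
particle_a = [0.8, 0.3, 0.1]
particle_b = [0.7, 0.4, 0.2]
print(f"fingerprint similarity: {cosine_similarity(particle_a, particle_b):.2f}")
```

On this picture, particles with similar fingerprints would be expected to pick up similar coatings of proteins and amino acids, and hence to end up in similar places in the body.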

The study results appear in the Aug. 15 online edition of Nature Nanotechnology.

“This information will allow us to predict where a particular nanomaterial will end up in the human body, and whether or not it will be taken up by certain cells,” Riviere adds. “That in turn will give us a better idea of which nanoparticles may be useful for drug delivery, and which ones may be hazardous to humans or the environment.”

North Carolina State University

Saturday, August 28, 2010

NANOSCALE DNA SEQUENCING COULD SPUR REVOLUTION IN PERSONAL HEALTH CARE

In experiments with potentially broad health care implications, a research team led by a University of Washington physicist has devised a method that works at a very small scale to sequence DNA quickly and relatively inexpensively.

That could open the door for more effective individualized medicine, for example providing blueprints of genetic predispositions for specific conditions and diseases, such as cancer, diabetes or addiction.

"The hope is that in 10 years people will have all their DNA sequenced and this will lead to personalized, predictive medicine," said Jens Gundlach, a UW physics professor and lead author of a paper describing the new technique published the week of Aug. 16 in the Proceedings of the National Academy of Sciences.

The technique creates a DNA reader that combines biology and nanotechnology, using a protein nanopore, Mycobacterium smegmatis porin A (MspA). The nanopore has an opening 1 billionth of a meter in size, just large enough to measure a single strand of DNA as it passes through.

The scientists placed the pore in a membrane surrounded by potassium-chloride solution. A small voltage was applied to create an ion current flowing through the nanopore, and the current's electrical signature changed depending on the nucleotides traveling through the nanopore. Each of the nucleotides that are the essence of DNA -- cytosine, guanine, adenine and thymine -- produced a distinctive signature.

The team had to solve two major problems. One was to create a short and narrow opening just large enough to allow a single strand of DNA to pass through the nanopore and for only a single DNA molecule to be in the opening at any time. Michael Niederweis at the University of Alabama at Birmingham modified the M. smegmatis bacterium to produce a suitable pore.

The second problem, Gundlach said, was that the nucleotides flowed through the nanopore at a rate of one every millionth of a second, far too fast to sort out the signal from each DNA molecule. To compensate, the researchers attached a section of double-stranded DNA between each nucleotide they wanted to measure. The second strand would briefly catch on the edge of the nanopore, halting the flow of DNA long enough for the single nucleotide to be held within the nanopore DNA reader. After a few milliseconds, the double-stranded section would separate and the DNA flow continued until another double strand was encountered, allowing the next nucleotide to be read.

The delay, though measured in thousandths of a second, is long enough to read the electrical signals from the target nucleotides, Gundlach said.

"We can practically read the DNA sequence from an oscilloscope trace," he said.

Besides Gundlach and Niederweis, other authors are Ian Derrington, Tom Butler, Elizabeth Manrao and Marcus Collins of the UW; and Mikhail Pavlenok at Alabama-Birmingham.

The work was funded by the National Institutes of Health and its National Human Genome Research Institute as part of a program to create technology to sequence a human genome for $1,000 or less. That program began in 2004, when it cost on the order of $10 million to sequence a human-sized genome.

The new research is a major step toward achieving DNA sequencing at a cost of $1,000 or less.

"Our experiments outline a novel and fundamentally very simple sequencing technology that we hope can now be expanded into a mechanized process," Gundlach said.

(Photo: Ian Derrington)

University of Washington

DANGEROUS BACTERIUM HOSTS GENETIC REMNANT OF LIFE'S DISTANT PAST

Within a dangerous stomach bacterium, Yale University researchers have discovered an ancient but functioning genetic remnant from a time before DNA existed, they report in the August 13 issue of the journal Science.

To the surprise of researchers, this RNA complex seems to play a critical role in the ability of the organism to infect human cells, a job carried out almost exclusively by proteins produced from DNA’s instruction manual.

“What these cells are doing is using ancient RNA technology to control modern gene expression,” said Ron Breaker, the Henry Ford II Professor of Molecular, Cellular and Developmental Biology at Yale, investigator for the Howard Hughes Medical Institute and senior author of the study.

In old textbooks, RNA was viewed simply as the chemical intermediary between DNA’s instruction manual and the creation of proteins. However, Breaker’s lab has identified the existence and function of riboswitches, or RNA structures that have the ability to detect molecules and control gene expression – an ability once believed to be possessed solely by proteins. Breaker and many other scientists now believe the first forms of life depended upon such RNA machines, which would have had to find ways to interact and carry out many of the functions proteins do today.

The new paper describes the complex interactions of two small RNA molecules and two larger RNA molecules that together influence the function of a self-splicing ribozyme, a structure many biologists had believed had no role other than to reproduce itself. The new study, however, suggests that in the pathogenic stomach bacterium Clostridium difficile, this RNA structure acts as a sort of sensor to help regulate the expression of genes, probably to help the bacterium manipulate human cells.

“They were thought to be molecular parasites, but it is clear they are being harnessed by cells to do some good for the organism,” Breaker said.

This is the sort of RNA structure that would have been needed for life to exist before the evolution of double-stranded DNA, with its instruction book for proteins that carry out almost all of life’s functions today. If proteins are necessary to carry out life’s functions, scientists need to explain how life arises without DNA’s recipe. The answer to the chicken-or-egg question is RNA machines such as the one identified in the new study, Breaker said.

“A lot of sophisticated RNA gadgetry has gone extinct but this study shows that RNA has more of the power needed to carry out complex biochemistry,” Breaker said. “It makes the spontaneous emergence of life on earth much more palatable.”

Yale University

MODERATE DRINKING, ESPECIALLY WINE, ASSOCIATED WITH BETTER COGNITIVE FUNCTION

A large prospective study of 5033 men and women in the Tromsø Study in northern Norway has reported that moderate wine consumption is independently associated with better performance on cognitive tests.

The subjects (average age 58 and free of stroke) were followed over 7 years, during which they were tested with a range of cognitive function tests. Among women, there was a lower risk of a poor testing score for those who consumed wine at least 4 times over a two-week period, in comparison with those who drank wine less than once during this period. The expected associations between other risk factors and poor cognitive functioning were also seen, i.e. lower testing scores among people who were older, less educated, smokers, and those with depression, diabetes, or hypertension.

It has long been known that "moderate people do moderate things." The authors state the same thing: "A positive effect of wine . . . could also be due to confounders such as socio-economic status and more favourable dietary and other lifestyle habits."

The authors also reported that not drinking was associated with significantly lower cognitive performance in women. As noted by the authors, in any observational study there is the possibility of other lifestyle habits affecting cognitive function, and the present study was not able to adjust for certain ones (such as diet, income, or profession) but did adjust for age, education, weight, depression, and cardiovascular disease as its major risk factors.

The results of this study support findings from previous research on the topic: In the last three decades, the association between moderate alcohol intake and cognitive function has been investigated in 68 studies comprising 145,308 men and women from various populations with various drinking patterns. Most studies show an association between light to moderate alcohol consumption and better cognitive function and reduced risk of dementia, including both vascular dementia and Alzheimer's Disease.

Such effects could relate to the presence in wine of a number of polyphenols (antioxidants) and other micro-elements that may help reduce the risk of cognitive decline with ageing. Mechanisms that have been suggested for alcohol itself being protective against cognitive decline include effects on atherosclerosis (hardening of the arteries), coagulation (thickening and clotting of the blood), and inflammation (of artery walls, improving blood flow).

Boston Medical Center

Friday, August 27, 2010

SINGLE NEURONS CAN DETECT SEQUENCES

Single neurons in the brain are surprisingly good at distinguishing different sequences of incoming information, according to new research by UCL neuroscientists.

The study, published in Science and carried out by researchers based at the Wolfson Institute for Biomedical Research at UCL, shows that single neurons, and indeed even single dendrites, the tiny receiving elements of neurons, can very effectively distinguish between different temporal sequences of incoming information.

This challenges the widely held view that this kind of processing in the brain requires large numbers of neurons working together, as well as demonstrating how the basic components of the brain are exceptionally powerful computing devices in their own right.

First author Tiago Branco said: "In everyday life, we constantly need to use information about sequences of events in order to understand the world around us. For example, language, a collection of different sequences of similar letters or sounds assembled into sentences, is only given meaning by the order in which these sounds or letters are assembled.

"The brain is remarkably good at processing sequences of information from the outside world. For example, modern computers will still struggle to decode a rapidly spoken sequence of words that a 5 year-old child will have no trouble understanding. How the brain does so well at distinguishing one sequence of events from another is not well understood but, until now, the general belief has been that this job is done by large numbers of neurons working in concert with each other."

Using a mouse model, the researchers studied neurons in areas of the brain which are responsible for processing sensory input from the eyes and the face. To probe how these neurons respond to variation in the order of a number of inputs, they used a laser to activate inputs on the dendrites in precisely defined patterns and recorded the resulting electrical responses of the neurons.

Surprisingly, they found that each sequence produced a different response, even when it was delivered to a single dendrite. Furthermore, using theoretical modelling, they were able to show that the likelihood that two sequences can be distinguished from each other is remarkably high.

Senior author Professor Michael Hausser commented: "This research indicates that single neurons are reliable decoders of temporal sequences of inputs, and that they can play a significant role in sorting and interpreting the enormous barrage of inputs received by the brain.

"This new property of neurons and dendrites adds an important new element to the "toolkit" for computation in the brain. This feature is likely to be widespread across many brain areas and indeed many different animal species, including humans."

(Photo: Tiago Branco)

UCL

ASTRONAUT MUSCLES WASTE IN SPACE

Astronaut muscles waste away on long space flights, reducing their capacity for physical work by more than 40%, according to research published online in the Journal of Physiology.

This is the equivalent of a 30- to 50-year-old crew member's muscles deteriorating to those of an 80-year-old. The destructive effects of extended weightlessness on skeletal muscle – despite in-flight exercise – pose a significant safety risk for future manned missions to Mars and elsewhere in the Universe.

An American study, led by Robert Fitts of Marquette University (Milwaukee, Wisconsin), was recently published online by The Journal of Physiology and will be in the September printed issue. It comes at a time of renewed interest in Mars and increased evidence of early life on the planet. NASA currently estimates it would take a crew 10 months to reach Mars, with a 1 year stay, or a total mission of approximately 3 years.

Fitts, Chair and Professor of Biological Sciences at Marquette, believes that if astronauts were to travel to Mars today, their ability to perform work would be compromised and, in the most affected muscles such as the calf, the decline could approach 50%. Crew members would fatigue more rapidly and have difficulty performing even routine work in a space suit. Even more dangerous would be their return to Earth, where they'd be physically incapable of evacuating quickly in case of an emergency landing.

The study – the first cellular analysis of the effects of long duration space flight on human muscle – took calf biopsies of nine astronauts and cosmonauts before and immediately following 180 days on the International Space Station (ISS). The findings show substantial loss of fibre mass, force and power in this muscle group. Unfortunately starting the journey in better physical condition did not help. Ironically, one of the study's findings was that crew members who began with the biggest muscles also showed the greatest decline.

The results highlight the need to design and test more effective exercise countermeasures on the ISS before embarking on distant space journeys. New exercise programmes will need to employ high resistance and a wide variety of motion to mimic the range of motion and loading experienced on Earth.

Fitts doesn't feel scientists should give up on extended space travel. 'Manned missions to Mars represent the next frontier, as the Western Hemisphere of our planet was 800 years ago,' says Fitts. 'Without exploration we will stagnate and fail to advance our understanding of the Universe.'

In the shorter term, Fitts believes efforts should be on fully utilizing the International Space Station so that better methods to protect muscle and bone can be developed. 'NASA and ESA need to develop a vehicle to replace the shuttle so that at least six crew members can stay on the ISS for 6-9 months,' recommends Fitts. 'Ideally, the vehicle should be able to dock at the ISS for the duration of the mission so that, in an emergency, all crew could evacuate the station.'

Wiley

HYDROGEN CAUSES METAL TO BREAK

Most likely, there is hardly a soul that cannot recall K.I.T.T. – the legendary talking supercar from the US television series “Knight Rider”. A hydrogen turbo motor fuels the fantasy vehicle and propels it on the chase for the bad guys at over 300 miles an hour. In the future, cars may be equipped with hydrogen propulsion not just in the movies, but in real life as well.

In the transportation and energy sectors, hydrogen is viewed as an eventual alternative to fossil-fuel raw materials such as coal, petroleum and natural gas. However, for metals like steel, aluminum and magnesium – which are commonly used in automotive and energy technology – hydrogen is not quite ideal. It can make these metals brittle; the ductility of the metal is reduced and its durability deteriorates. This can lead to sudden failure of parts and components. Besides the fuel tank itself, or parts of the fuel cell, ordinary components like ball bearings could also be affected. These are found not only in the car, but also in almost all industrial machinery.

This lightest of the chemical elements permeates the raw materials of which the vehicle is made not only when filling the tank, but also through various manufacturing processes. Hydrogen can infiltrate the metal lattice through corrosion, or during chromium-plating of car parts. Infiltration may likewise occur during welding, milling or pressing. The result is always the same: the material may tear or break without warning. Costly repairs are the consequence. To prevent cracks and breakage in the future, the researchers at the Fraunhofer Institute for Mechanics of Materials IWM in Freiburg are studying hydrogen-induced embrittlement. Their objective: to find materials and manufacturing processes that are compatible with hydrogen. “With our new special laboratory, we are investigating how and at which speed hydrogen migrates through a metal. We are able to detect the points at which the element accumulates in the material, and where it doesn't,” says Nicholas Winzer, researcher at IWM.

Since the risk potential mostly emanates from the diffusible, and therefore mobile, portion of the hydrogen, it is necessary to separate this from the total hydrogen content. Researchers can release and simultaneously measure the mobile part of the hydrogen by heat treatment, in which samples are continuously heated up. In addition, the experts can measure the rate at which hydrogen is transported through the metal while simultaneously applying mechanical stress to the material samples. They can determine how the hydrogen in the metal behaves when tension is increased. For this purpose, the scientists use special tensile test equipment that permits simultaneous mechanical loading and infiltration with hydrogen. Next, they determine how resistant the material is. “In industry, components have to withstand the combined forces of temperature, mechanical stress and hydrogen. With the new special laboratory, we can provide the necessary analytical procedures,” says Winzer, explaining the special feature of the simultaneous tests.
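As a rough illustration of the kind of quantity such measurements feed into (not the IWM procedure itself), hydrogen transport through a metal is often summarized by an Arrhenius-type diffusion coefficient; the pre-factor and activation energy below are placeholder values, not measured data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diffusion_coefficient(temp_k: float, d0: float = 1e-7, q_j_per_mol: float = 10e3) -> float:
    """Arrhenius diffusion coefficient D = D0 * exp(-Q / (R*T)), in m^2/s."""
    return d0 * math.exp(-q_j_per_mol / (R * temp_k))

def crossing_time(thickness_m: float, temp_k: float) -> float:
    """Characteristic time (s) for hydrogen to diffuse through a sample: t ~ L^2 / D."""
    return thickness_m ** 2 / diffusion_coefficient(temp_k)

# Time to cross a 1 mm thick sample at two temperatures (illustrative only)
for temp in (293.0, 473.0):
    print(f"{temp:.0f} K: ~{crossing_time(1e-3, temp) / 60:.1f} minutes")
```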

The researchers use the results from the laboratory tests for computer simulation, with which they calculate the hydrogen embrittlement in the metals. In doing so, they enlist atomistic and finite-element (FEM) simulation to investigate the interaction between hydrogen and metal on both an atomic and a macroscopic scale. “Through the combination of special laboratory and simulation tools, we have found out which materials are suitable for hydrogen, and how manufacturing processes can be improved. With this knowledge, we can support companies from the industry,” says Dr. Wulf Pfeiffer, head of the process and materials analysis business unit at IWM.

(Photo: Fraunhofer IWM)

Fraunhofer IWM

Thursday, August 26, 2010

LOVE LIFE

“Love stinks!” the J. Geils Band told the world in 1980, and while you can certainly argue whether or not this tender and ineffable spirit of affection has a downside, working hard to find it does. It may even shorten your life.

A new study shows that ratios between males and females affect human longevity. Men who reach sexual maturity in a context in which they far outnumber women live, on average, three months less than men whose competition for a mate isn’t as stiff. The steeper the gender ratio (also known as the operational sex ratio), the sharper the decline in life span.

“At first blush, a quarter of a year may not seem like much, but it is comparable to the effects of, say, taking a daily aspirin, or engaging in moderate exercise,” says Nicholas Christakis, senior author on the study and professor of medicine and medical sociology at Harvard Medical School as well as professor of sociology at Harvard University’s Faculty of Arts and Sciences. “A 65-year-old man is typically expected to live another 15.4 years. Removing three months from this block of time is significant.”
These results are published in the August issue of the journal Demography.

An association between gender ratios and longevity had been established through studies of animals before, but never in humans. To search for a link in people, Christakis collaborated with researchers from the Chinese University of Hong Kong, the University of Wisconsin, and Northwestern University. The researchers looked at two distinct datasets.

First, they examined information from the Wisconsin Longitudinal Study, a long-term project involving individuals who graduated from Wisconsin high schools in 1957. The researchers calculated the gender ratio of each high school graduating class, then ascertained how long the graduates went on to live. After adjusting for a multitude of factors, they discovered that, 50 years later, men from classes with an excess of boys did not live as long as men whose classes were gender-balanced. By one measurement, mortality for a 65-year-old who had experienced a steeper sex ratio decades earlier as a teenager was 1.6 percent higher than for one who hadn’t faced such stiff competition for female attention.

Next, the research team compared Medicare claims data with census data for a complete national sample of more than 7 million men throughout the United States and arrived at similar results (for technical reasons, the study was unable to evaluate results for women who outnumbered men at sexual maturity).

Much attention has been paid to the deleterious social effects of gender imbalances in countries such as China and India, where selective abortion, internal migration, and other factors have in some areas resulted in men outnumbering women by up to 20 percent. Such an environment, already associated with a marked increase in violence and human trafficking, appears to shorten life as well.

The researchers have not investigated mechanisms that might account for this phenomenon, but Christakis suspects that it arises from a combination of social and biological factors. After all, finding a mate can be stressful, and stress as a contributor to health disorders has been well-documented.

Says Christakis, “We literally come to embody the social world around us, and what could be more social than the dynamics of sexual competition?”

(Photo: Kristyn Ulanday/Harvard Staff Photographer)

Harvard University

BU SCIENTISTS STUDY RAT BRAINS TO HELP ROBOTS NAVIGATE

A team led by Boston University College of Arts & Sciences Professor of Psychology Michael Hasselmo has won a $1.5 million grant from the Department of Defense's Office of Naval Research to study rat brains to learn how to help military robots navigate.

The team will develop biologically inspired algorithms for robotic navigation based on recent data on grid cells recorded in the entorhinal cortex of the rat. In contrast to robots, rodents are highly effective at exploring an environment and returning to rewarding locations. This behavior may depend on neural activity selective to location, including the activity of recently discovered grid cells in the entorhinal cortex.

An active area of robotics research concerns the ability of a robot to perform navigation toward selected goals in the environment, and the capacity for a human operator to communicate with a robot about locations and goals. This includes the requirement of a robot to learn a representation of the environment during exploration while accurately recognizing location, termed simultaneous localization and mapping (SLAM).

Grid cells are neurons recorded as a rat explores an environment. The cells respond in an array of locations that can be described as the vertices of tightly packed equilateral triangles, or as a hexagonal grid. Recent models have shown how grid cells can code location based on self-motion information provided by neurons that code head direction or running speed, and have shown how grid cells could arise from oscillations in the entorhinal cortex. Recent imaging data indicates that grid cells may exist in the human cortex.
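One common idealization of such a firing field (not necessarily the model the Hasselmo group will develop) is a sum of three plane waves whose axes are 60 degrees apart, which produces the hexagonal pattern described above; the spacing, orientation and phase below are arbitrary illustrative values.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y), normalized to roughly [0, 1]."""
    k_mag = 4 * np.pi / (np.sqrt(3) * spacing)   # wave-vector magnitude for the chosen grid spacing
    rate = 0.0
    for i in range(3):                            # three axes, 60 degrees apart
        theta = orientation + i * np.pi / 3.0
        rate += np.cos(k_mag * (np.cos(theta) * (x - phase[0]) + np.sin(theta) * (y - phase[1])))
    return (rate + 1.5) / 4.5

# Peaks of this function over a 2-D arena fall on a hexagonal lattice, like grid-cell firing fields.
print(grid_cell_rate(0.0, 0.0))   # 1.0 at a field center
```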

The researchers on this grant will further develop the models based on biological data and use them to guide the development of algorithms for robotic navigation, and for communication of information about spatial location between human operators and robots.

Boston University

COASTAL CREATURES MAY HAVE REDUCED ABILITY TO FIGHT OFF INFECTIONS IN ACIDIFIED OCEANS

Human impact is causing lower oxygen and higher carbon dioxide levels in coastal water bodies. Increased levels of carbon dioxide cause the water to become more acidic, having dramatic effects on the lifestyles of the wildlife that call these regions home. The problems are expected to worsen if steps aren’t taken to reduce greenhouse emissions and minimize nutrient-rich run-off from developed areas along our coastlines.

The ocean is filled with a soup of bacteria and viruses. The animals living in these environments are constantly under assault by pathogens and need to be able to mount an immune response to protect themselves from infection, especially if they have an injury or wound that is openly exposed to the water.

Louis Burnett, professor of biology and director of the Grice Marine Laboratory of the College of Charleston, and Karen Burnett, research associate professor at Grice Marine Laboratory of the College of Charleston, study the effects of low oxygen and high carbon dioxide on organisms’ immune systems. They have found that organisms in these conditions can’t fight off infections as well as animals living in oxygen rich, low carbon dioxide environments.

The researchers examined fish, oysters, crabs and shrimp, and showed that all these animals have a decreased ability to fight off infection by Vibrio bacteria when subjected to low oxygen, high carbon dioxide conditions. It takes only about half as many bacteria to deliver a lethal dose to a creature in a low oxygen, high carbon dioxide environment.

“Our approach is exciting because traditionally physiologists haven’t considered bacteria or disease as a natural environmental barrier, so it’s a pretty open field,” says Louis Burnett.

Apparently, if marine animals are challenged with a pathogen, a large number of their blood cells disappear within a few minutes. The blood cells clump up to attack the pathogen, but also lodge in the gills (the sea critter version of lungs), where the body gets its oxygen. The scientists see evidence that sea animals fighting off infection lower their metabolism, which slows down other important processes like making new proteins.

“Everything we see points to the fact that if an animal mounts a successful immune response, then its gill function and ability to exchange oxygen are reduced by about 40 percent, which is why they seem to be having such problems living in low oxygen conditions,” says Karen Burnett. “If you add high carbon dioxide to that, it gets worse.”

The researchers are now using microarrays to measure changes in gene expression in marine organisms that are exposed to bacteria under low oxygen, high carbon dioxide conditions.

“After exposure to these conditions for only a day, animals at the molecular level have given up in trying to adapt to the situation, and they are going into molecular pathways that indicate cell death,” says Karen Burnett.

The coastal animals the Burnetts study live in environments where natural levels of oxygen and carbon dioxide fluctuate. Theoretically, these animals are already adapted for varied environments, and yet they still struggle with these changing conditions. It’s alarming that deep-water animals may be much more affected by ocean acidification, since they are not used to the ebb and flow of oxygen and carbon dioxide levels.

“Some of the models for how the coastal organisms adapt may help researchers predict how deep water organisms are going to be affected by overall climate change too,” says Louis Burnett.

(Photo: Louis and Karen Burnett)

The American Physiological Society

THE SECRET OF LIFE MAY BE AS SIMPLE AS WHAT HAPPENS BETWEEN THE SHEETS--MICA SHEETS

That age-old question, "where did life on Earth start?" now has a new answer. If the life between the mica sheets hypothesis is correct, life would have originated between sheets of mica that were layered like the pages in a book.

The so-called "life between the sheets" mica hypothesis was developed by Helen Hansma of the University of California, Santa Barbara, with funding from the National Science Foundation (NSF). This hypothesis was originally introduced by Hansma at the 2007 annual meeting of the American Society for Cell Biology, and is now fully described by Hansma in the September 7, 2010 issue of Journal of Theoretical Biology.

According to the "life between the sheets" mica hypothesis, structured compartments that commonly form between layers of mica--a common mineral that cleaves into smooth sheets--may have sheltered molecules that were the progenitors to cells. Provided with the right physical and chemical environment in the structured compartments to survive and evolve, the molecules eventually reorganized into cells, while still sheltered between mica sheets.

Mica chunks embedded in rocks could have provided the right physical and chemical environment for pre-life molecules and developing cells because:

1. Mica compartments could have held, protected and sheltered molecules, and thereby promoted their survival. Also, mica could have provided enough isolation for molecules to evolve without being disturbed and still allow molecules to migrate towards one another and eventually bond together to form large organic molecules. And mica compartments may have provided something akin to a template for the production of a life form composed of compartments, which are now known as cells.

2. Mica sheets are held together by potassium. If high levels of potassium were donated by mica sheets to developing cells, that could account for the high levels of potassium currently found in human cells.

3. Mica chunks embedded in rocks that were sitting in an early ocean would have received an endless supply of energy from waves, the sun, and the occasional sloshing of water into the spaces between the mica sheets. This energy could have pushed the mica sheets into up-and-down motions that could have pushed together molecules sitting between mica sheets, thereby enabling them to bond together.

Because mica surfaces are hospitable to living cells and to all the major classes of large biological molecules, including proteins, nucleic acids, carbohydrates and fats, the "between the sheets" mica hypothesis is consistent with other well-known hypotheses that propose that life originated as RNA, fatty vesicles or primitive metabolisms. Hansma says a "mica world" might have sheltered all the ancient metabolic and fat-vesicle and RNA "worlds."

Hansma also says that mica would provide a better substrate for developing cells than other minerals that have been considered for that role. Why? Because most other minerals would probably have tended to intermittently become either too wet or too dry to support life. By contrast, the spaces between mica sheets would probably have undergone more limited wet/dry cycles that would support life without reaching killing extremes. In addition, many clays that have been considered as potential surfaces for life's origins respond to exposure to water by swelling. By contrast, mica resists swelling and would therefore provide a relatively stable environment for developing cells and biological molecules, even when it did get wet.

Hansma sums up her hypothesis by observing that "mica would provide enough structure and shelter for molecules to evolve but also accommodate the dynamic, ever-changing nature of life."

What's more, Hansma says that "mica is old." Some micas are estimated to be over 4 billion years old. And micas such as biotite have been found in regions containing evidence of the earliest life-forms, which are believed to have existed about 3.8 billion years ago.

Hansma's passion for mica evolved gradually, starting when she began conducting pioneering, NSF-funded research in the lab of her former husband, Paul K. Hansma, to develop techniques for imaging DNA and other biological molecules with the atomic force microscope (AFM), a high-resolution instrument that allows researchers to observe and manipulate features at the molecular and atomic scale.

Says Helen Hansma, "Mica sheets are atomically flat, so we can see DNA molecules on the mica surface without having to cover the DNA with something that makes it look bigger and easier to see. Sometimes we can even see DNA molecules swimming on the surface of mica, under water, in the AFM. Mica sheets are so thin (one nanometer) that there are a million of them in a millimeter-thick piece of mica."

Hansma's "life between the sheets" hypothesis first struck her a few years ago, after she and family members had collected some mica from a Connecticut mine. When she put water on a piece of the mica under her dissecting microscope, she noticed a greenish organic 'crud' at some step edges in the mica. "It occurred to me that this might be a good place for the origins of life--sheltered within these stacks of sheets that can move up and down in response to flowing water, which could have provided the mechanical energy for making and breaking chemical bonds," says Hansma.

Hansma says that recent advances in imaging techniques, including the AFM, made her recent research possible and led to her "between mica sheets" hypothesis. She adds that direct support for her hypothesis might be obtained from additional studies in which mica sheets in an AFM are subjected to its push-and-pull forces while sitting in liquids resembling an early ocean.

(Photo: Helen Greenwood Hansma, University of California, Santa Barbara)

National Science Foundation

SCIENTISTS UNLOCK SECRET OF RABIES TRANSMISSION IN BATS

0 comentarios

Most infectious diseases infect multiple host species, but to date, efforts to quantify the frequency and outcome of cross-species transmission (CST) of these diseases have been severely limited.

This lack of information represents a major gap in knowledge of how diseases emerge, and from which species they will emerge.

A paper published in the journal Science by a team of researchers led by Daniel Streicker of the University of Georgia has begun to close that gap.

Results of a study, conducted by Streicker and co-authors from the U.S. Centers for Disease Control, the University of Tennessee-Knoxville, and Western Michigan University, provide some of the first estimates for any infectious disease of how often CST happens in complex, multi-host communities--and the likelihood of disease in a new host species.

"Some of the deadliest human diseases, including AIDS and malaria, arose in other species and then jumped to humans," said Sam Scheiner of the National Science Foundation (NSF)'s Division of Environmental Biology, which co-funded the research with NSF's Directorate for Geosciences through the joint NIH-NSF Ecology of Infectious Diseases Program.

"Understanding that process," said Scheiner, "is key to predicting and preventing the next big outbreak."

Rabies is an ideal system to answer these questions, believes Streicker.

The disease occurs across the country, affects many different host species, and is known to mutate frequently. Although cases of rabies in humans are rare in the U.S., bats are the most common source of these infections.

To determine the rate of CST, and what outcomes those transmissions had, Streicker and his colleagues used a large dataset, unprecedented in its scope, containing hundreds of rabies viruses from 23 North American bat species.

They sequenced the nucleoprotein gene of each virus sample and used tools from population genetics to quantify how many CST events were expected to occur from any infected individual.

Their analysis showed that, depending on the species involved, a single infected bat may infect between 0 and 1.9 members of a different species; and that, on average, CST occurs only once for every 72.8 transmissions within the same species.
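For a sense of scale, the reported ratio can be turned into a rough per-transmission probability. The short calculation below is only an illustrative back-of-the-envelope figure based on the numbers quoted above; it is not the population-genetic method the team actually used.

```python
# Illustrative calculation only: convert the reported ratio of within-species
# transmissions per cross-species transmission (CST) event into a rough
# per-transmission probability. Not the authors' population-genetic analysis.
within_per_cst = 72.8                     # ~1 CST per 72.8 within-species transmissions
p_cst = 1.0 / (within_per_cst + 1.0)      # fraction of all transmissions that cross species
print(f"Roughly {p_cst:.1%} of transmission events cross species")   # about 1.4%
```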

"What's really important is that molecular sequence data, an increasingly cheap and available resource, can be used to quantify CST," said Streicker.

Scientist Sonia Altizer of UGA agrees.

"This is a breakthrough," said Altizer. "The team defined, for the first time, a framework for quantifying the rates of CST across a network of host species that could be applied to other wildlife pathogens, and they developed novel methods to do it."

The researchers also looked at the factors that could determine the frequency of CST, using extensive data about each bat species, such as foraging behavior, geographic range and genetics.

"There's a popular idea that because of their potential for rapid evolution, the emergence of these types of viruses is limited more by ecological constraints than by genetic similarity between donor and recipient hosts," said Streicker. "We wanted to see if that was the case."

He found, instead, that rabies viruses are much more likely to jump between closely related bat species than between ones that diverged in the distant past.

Overlapping geographic range was also associated with CST, but to a lesser extent.

"CST and viral establishment do not occur at random, but instead are highly constrained by host-associated barriers," Streicker said. "Contrary to popular belief, rapid evolution of the virus isn't enough to overcome the genetic differences between hosts."

Streicker believes that what he and colleagues have learned about bat rabies will be influential in understanding the ecology, evolution and emergence of many wildlife viruses of public health and conservation importance.

"The basic knowledge we've gained will be key to developing new intervention strategies for diseases that can jump from wildlife to humans."

Streicker is continuing his work with rabies and bats with funding for a three-year study from NSF.

He and Altizer, in collaboration with investigators at the CDC, University of Michigan and the Peruvian Ministries of Health and Agriculture, will explore how human activities affect the transmission of the rabies virus in vampire bats in Peru--and how those changes might alter the risk of rabies infection for humans, domesticated animals, and wildlife.

(Photo: Ivan Kuzmin)

National Science Foundation

MEASURING THE SPEED OF THOUGHT

0 comentarios
If the eyes are the window to the soul, psychologists hoping to solve the mystery of why our neural impulses do not always trigger an immediate response could find the answer in the flick of the eye.

The reasons why the speed of human responses to a given event can be so variable - even in laboratory-controlled conditions where factors such as alertness and vigilance are held constant - remain a mystery to scientists studying the connection between our brains and behaviour.

For example, when driving, a red traffic light does not always prompt the driver to press the brake as quickly as one might expect.

Now BBSRC-funded researchers from the University of Bristol aim to provide a comprehensive account of response time variability, in a bid to explain at a functional level where the variability originates and what neural processes and anatomical structures are involved.

Professor Iain Gilchrist from Bristol is a cognitive neuroscientist who has just been awarded a BBSRC Research Development fellowship to explore the neural basis of response time variability.

"In my research group we focus on the study of the eye movement response," says Prof Gilchrist, also Director of Bristol Neuroscience. "Eye movements are interesting for a number of reasons. First, they are important for visual perception: we only see fine detailed information when the eyes point directly at a location. Second, eye movements are ubiquitous: we make more eye movements in a day than heart beats. Third, we have a detailed knowledge of the neurophysiology of eye movements control which allows us to link functional and neural explanations."

For the last ten years, researchers at Bristol University have focused on identifying functional explanations for eye movement responses using mathematical models and behavioural experiments.

The next major step for this research will be to investigate the brain processes that account for this variability. Functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) are both methods for imaging the human brain while participants carry out a task, allowing the brain areas involved in the task to be identified.

The fellowship will allow Prof Gilchrist and his team to introduce these methods to his research to study the neural basis of response time variability. The fMRI work will also be one of the first projects to be carried out at the new Clinical Research and Imaging Centre, a pioneering collaboration between the University of Bristol and University Hospitals Bristol NHS Foundation Trust providing a £6.4M facility focused on world-class translational research across the scientific disciplines.

The behavioural experiments will be simple; human participants will be asked to move their eyes as quickly as possible to look directly at suddenly appearing visual targets. The movements of their eyes will be measured to determine when they responded. At the same time brain activity will be recorded to determine the time course and pattern of changes in the brain that determine response time variability.
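As a concrete illustration of what "response time variability" means in such an experiment, the sketch below computes saccadic reaction times and their spread from hypothetical eye-tracking timestamps. It is not the Bristol group's analysis pipeline, and the numbers are invented.

```python
import statistics

# Hypothetical trials: (target onset time, saccade onset time) in milliseconds.
trials = [(0, 182), (0, 154), (0, 221), (0, 167), (0, 240), (0, 176)]

# Saccadic reaction time = delay between the target appearing and the eye starting to move.
reaction_times = [saccade - onset for onset, saccade in trials]

mean_rt = statistics.mean(reaction_times)
sd_rt = statistics.stdev(reaction_times)   # the trial-to-trial variability of interest
print(f"mean RT = {mean_rt:.0f} ms, SD = {sd_rt:.0f} ms")
```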

Biotechnology and Biological Sciences Research Council

DOGS' WIDE RANGE OF PHYSICAL TRAITS CONTROLLED BY SMALL NUMBER OF GENETIC REGIONS

0 comentarios

Sure, dogs are special. You might not be aware, however, that studying their genomes can lead to advances in human health. So next time you gaze soulfully into a dog’s eyes or scratch behind its ears, take note of the length of its nose or the size of its body. Although such attributes can vary wildly among different breeds, a team of investigators co-led by researchers at Stanford University School of Medicine, Cornell University and the National Human Genome Research Institute has found that they are determined by only a few genetic regions.

The discovery shows how studying genetic differences among dog breeds may ultimately help us understand human biomedical traits, such as height, hair color and body weight, that are usually influenced by the net impact of hundreds of different genes in our species. The key idea is that identifying the dozen or so genomic regions in which dog breeds differ will provide critical clues as to where researchers might find mutations important to human health and disease.

The study describes the most comprehensive genetic analysis of dogs to date, in which the researchers genotyped more than 900 individual dogs and assessed nearly 60 specific physical traits, and found that only a few genetic regions determine much of a dog’s appearance.

“We’ve found that only six or seven locations in the dog genome are necessary to explain about 80 percent of the differences in height and weight among dog breeds,” said Carlos Bustamante, PhD, professor of genetics at Stanford. “In humans these are controlled by hundreds if not thousands of variants.”

The research is published in the Aug. 10 issue of Public Library of Science-Biology. Bustamante is a co-senior author of the study; Stanford research associate Adam Boyko, PhD, is one of three co-first authors. Elaine Ostrander, PhD, chief of the Cancer Genetics Branch of the National Human Genome Research Institute, is the other senior author. Bustamante and Boyko began the work while they were at Cornell.

The work is a product of an intensive collaboration called the CanMap project, which involves several groups around the country including NHGRI, Cornell, the University of California-Los Angeles and now Stanford. The CanMap groups are using the dog as a model system to identify genomic regions responsible for many key physical characteristics. Although a few individual relationships, including an association between small body size and a gene called IGF-1, have been previously reported by the groups, many others were identified for the first time in this new analysis.

Dogs have been our companions and protectors for thousands of years. During this time, dogs adapted to living near human settlements largely through natural selection for being able to survive among people. But recently we humans decided to take things into our own hands. Driven sometimes by a love of novelty and other times by usefulness, our relentless breeding campaigns have left us with the Great Dane and the Chihuahua, the collie and the bulldog, and many more. As a result of our meddling, the dog is now the most physically diverse land animal.

“This dizzying array of morphological variants has happened extraordinarily quickly in terms of evolutionary timescales, due to extraordinarily strong selection by humans,” said Bustamante. “Most dog breeds are only a couple of hundred years old.”

All told, about 57 phenotypic traits were used to visually differentiate one breed from another, including body size, snout length and ear type. The CanMap project set out to identify which regions of the dog genome contribute to each of these traits. The researchers didn’t know whether the differences in appearance from breed to breed resulted from many genetic mutations, each making a small contribution to a dog’s appearance, or from only a few, powerful changes.

To answer the question, the NHGRI team genotyped more than 60,000 single genetic changes called SNPs (for single nucleotide polymorphisms) in 915 dogs. The dogs included representatives of 80 domestic breeds, 83 wild canids such as wolves, foxes and coyotes, and 10 Egyptian village dogs — domesticated but of no particular breed.

The CanMap researchers used the SNPs to identify chunks of DNA shared among individual dogs of the same breed. They found that while purebred dogs tended to share large stretches of DNA with other members of their breed, the wild dogs and village mongrels were more variable. They then looked to see which regions varied with specific physical traits from breed to breed.

The researchers found that — in contrast to humans — many physical traits in dogs are determined by very few genetic regions. For example, a dog with version A of the “snout length” region may have a long, slender muzzle, while version B confers a more standard nose and C an abnormally short schnoz. And let’s say X, Y and Z in the “leg length” region bestow a range of heights from short to tall. That would mean that in this example an A/X dog would have a slender muzzle and short legs like a dachshund. C/Y might be a bulldog, while B/Z would be more like a Labrador. This mixing and matching of chunks of DNA is how breeders were able to come up with so many different breeds in a relatively short amount of time.
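A toy example makes the combinatorics concrete. The snippet below uses hypothetical loci and variant labels (echoing the A/X, C/Y, B/Z example above, not the actual CanMap regions) to show how a few large-effect regions can generate many distinct body plans.

```python
from itertools import product

# Hypothetical large-effect regions and their common variants (illustrative only).
snout = {"A": "long, slender muzzle", "B": "standard muzzle", "C": "very short muzzle"}
legs  = {"X": "short legs",           "Y": "medium legs",     "Z": "long legs"}

for s, l in product(snout, legs):
    print(f"{s}/{l}: {snout[s]} + {legs[l]}")   # 9 distinct combinations from just 2 regions

# With six regions and three common variants each, 3**6 = 729 combinations are possible,
# which is why so few loci can account for so much of the variation among breeds.
print(3 ** 6)
```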

Determining the differences between dog breeds may seem inconsequential, but it has important implications for human health.

“Understanding the genetic bases of complex traits in humans is difficult because many different genes can influence a particular trait,” explained Bustamante. “Having model systems, such as mice and dogs, is critical for making sense of the biology. For example, one of the strongest associations in human genetics is between a common variant in a gene called HMGA2 and height. In our study, we also see a strong association with body size and HMGA2 (just as we see at IGF-1 in humans, mice and dogs and body-size variation within each species). This suggests that studying what underlies the HMGA2 association in dogs could help us understand the relationship in humans. In this way, dogs are a fantastic model system since they complement mouse and human genetics.”

In the future, the researchers plan to investigate whether dog behavioral traits can be linked to specific genomic regions, and how these regions may be important in mammalian behavior.

(Photo: Stanford U.)

Stanford University

POPPING CELLS SURPRISE LIVING CIRCUITS CREATORS

0 comentarios
Under the microscope, the bacteria start dividing normally, two cells become four and then eight and so on. But then individual cells begin "popping," like circus balloons being struck by darts.

This phenomenon, which surprised the Duke University bioengineers who captured it on video, turns out to be an example of a more generalized occurrence that must be considered by scientists creating living, synthetic circuits out of bacteria. Even when given the same orders, no two cells will behave the same.

The researchers believe this accidental finding of a circuit they call "ePop" can help increase the efficiency and power of future synthetic biology circuits.

Synthetic circuits are created by genetically altering colonies of bacteria to produce a myriad of useful proteins, enzymes or chemicals in a coordinated way. The circuits can even be reprogrammed to deliver different types of drugs or to selectively kill cancer cells. Scientists in this emerging field of synthetic biology have operated under the assumption that when identical snippets of engineered DNA - known as plasmids -- are inserted into cells, each cell will respond in the same way.

"In the past, synthetic biologists have often assumed that the components of the circuit would act in a predictable fashion every time and that the cells carrying the circuit would just serve as a passive reactor," said Lingchong You, an assistant professor of biomedical engineering and member of Duke's Institute for Genome Sciences & Policy. "In essence, they have taken a circuit-centric view for the design and optimization process. This notion is helpful in making the design process more convenient."

But the cells in this study unexpectedly began popping when the colony reached a certain density of cells because of an unintended consequence of introducing plasmids.

Biochemistry graduate student Philippe Marguet said the research team looked at many factors to try to explain how the bacteria sensed the size of their colonies. "In the end, it turns out that the (number of copies of) plasmid increases with cell density. This is the critical link that enables the cells to sense their density and to commit suicide at sufficiently high densities."

"We ran computer models and experiments to show that this is indeed the case," Marguet said. "Our results underscore the importance of the amount of plasmids and the potential impact of hidden interactions on the behavior of engineered gene circuits."

The results of the team's experiments were published online Aug. 9 in the journal PLoS One.

Researchers can reprogram populations of genetically altered bacteria to direct their actions in much the same way that a computer program directs a computer. In this analogy, the plasmids are the software, the cell the computer. One of these plasmids tells cells to commit suicide if the number of cells in a population gets too high.

However, in the ePop circuit, which made use of the common Escherichia coli (E. coli) bacteria, the cell death, or popping, took place without the suicide gene. The researchers believe that when the plasmid is inserted into bacteria, it can be expressed at different levels in different cells. When over-expressed in a particular cell, it leads to the cell's demise. When enough of the cells are so affected, the population of cells in the colony decreases.
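A toy simulation can convey the proposed feedback: plasmid copy number rises with cell density, over-amplification kills individual cells, and the population crashes once it becomes dense. The parameter values and functional forms below are assumptions chosen for illustration, not the Duke team's published model.

```python
# Toy model of the ePop feedback (illustrative assumptions, not the published model).
carrying_capacity = 1000.0
growth_rate = 0.8        # per time step
n = 10.0                 # starting population size (arbitrary units)

for t in range(30):
    density = n / carrying_capacity
    copies = 10 + 200 * density                  # assumed: plasmid copies rise with density
    p_lysis = min(1.0, (copies / 150.0) ** 4)    # assumed: very high copy number -> "popping"
    births = growth_rate * n * (1.0 - density)   # logistic growth
    deaths = p_lysis * n
    n = max(n + births - deaths, 0.0)
    print(f"t={t:2d}  cells={n:7.1f}  copies per cell={copies:5.1f}")
```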

"Perhaps the confluence of the conditions for significant plasmid amplification was not seen in previous experiments," You said. "In this regard, ePop can be valuable as a probe of cell physiology to find out what environmental and genetic conditions lead to this amplification. As a probe, ePop has the advantage of being easily observable and highly sensitive and it has the ability to provide new information on complex interactions between the plasmid and the host cell."

The goal, You said, is to get to the point where scientists have a complete understanding of each component of a circuit, so that when a new plasmid is added, all of its effects can be observed.

Duke University

THE JELLYFISH-LIKE SALP: MOST EFFICIENT FILTER-FEEDER IN THE DEEP, SCIENTISTS DISCOVER

0 comentarios

What if trains, planes and automobiles all were powered simply by the air through which they move? What if their exhaust and by-products helped the environment?

Such an energy-efficient, self-propelling mechanism already exists in nature.

The salp, a small, barrel-shaped organism that resembles a streamlined jellyfish, gets everything it needs from ocean waters to feed and propel itself.

Scientists believe its waste material may help remove carbon dioxide (CO2) from the upper ocean and the atmosphere.

Now researchers at the Woods Hole Oceanographic Institution (WHOI) and MIT have found that the half-inch to 5-inch-long creatures are even more efficient than had been believed.

"This innovative research is providing an understanding of how a key organism in marine food webs affects important biogeochemical processes," said David Garrison, director of the National Science Foundation (NSF)'s biological oceanography program, which funded the research.

Reporting in the journal Proceedings of the National Academy of Sciences (PNAS), the scientists have found that mid-ocean-dwelling salps are capable of capturing and eating extremely small organisms as well as larger ones, rendering them even hardier--and perhaps more plentiful--than had been believed.

"We had long thought that salps were about the most efficient filter-feeders in the ocean," said Larry Madin, WHOI Director of Research and one of the paper's authors.

"But these results extend their impact down to the smallest available size fraction, showing they consume particles spanning four orders of magnitude in size. This is like eating everything from a mouse to a horse."

Salps capture food particles, mostly phytoplankton, with an internal mucus filter net. Until now, it was thought that this net captured only particles larger than the 1.5-micron-wide holes in its mesh; smaller particles would slip through.

But a mathematical model suggested salps somehow might be capturing food particles smaller than that, said Kelly Sutherland, who co-authored the PNAS paper after her PhD research at MIT and WHOI.

In the laboratory at WHOI, Sutherland and her colleagues offered salps food particles of three sizes: smaller, around the same size as, and larger than the mesh openings.

"We found that more small particles were captured than expected," said Sutherland, now a post-doctoral researcher at Caltech. "When exposed to ocean-like particle concentrations, 80 percent of the particles that were captured were the smallest particles offered in the experiment."

The finding helps explain how salps--which can exist either singly or in "chains" that may contain a hundred or more--are able to survive in the open ocean where the supply of larger food particles is low.

"Their ability to filter the smallest particles may allow them to survive where other grazers can't," said Madin.

Perhaps most significantly, the result enhances the importance of the salps' role in carbon cycling. As they eat small, as well as large, particles, "they consume the entire 'microbial loop' and pack it into large, dense fecal pellets," Madin says.

The larger and denser the carbon-containing pellets, the sooner they sink to the ocean bottom. "This removes carbon from the surface waters," said Sutherland, "and brings it to a depth where you won't see it again for years to centuries."
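The link between pellet size and export speed can be illustrated with a rough Stokes-law estimate. The sizes and densities below are assumed round numbers, not measurements from the study.

```python
# Rough Stokes-law sketch: the sinking speed of a small sphere scales with radius squared,
# so repackaging micron-sized cells into half-millimetre pellets speeds export enormously.
# All parameter values are illustrative assumptions.
def sinking_speed(radius_m, particle_density=1100.0, water_density=1025.0,
                  viscosity=1.0e-3, g=9.81):
    """Terminal sinking speed (m/s) of a small sphere in seawater, via Stokes' law."""
    return 2.0 * (particle_density - water_density) * g * radius_m**2 / (9.0 * viscosity)

tiny_cell = sinking_speed(0.5e-6)     # ~1-micron picoplankton cell
pellet = sinking_speed(250e-6)        # ~0.5-mm fecal pellet
print(f"tiny cell:    {tiny_cell * 86400:.4f} m per day")
print(f"fecal pellet: {pellet * 86400:.0f} m per day")
```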

And the more carbon that sinks to the bottom, the more space there is for the upper ocean to accumulate carbon, hence limiting the amount that rises into the atmosphere as CO2, said paper co-author Roman Stocker of MIT.

"The most important aspect of this work is the very effective shortcut that salps introduce in the process of particle aggregation," Stocker said. "Typically, aggregation of particles proceeds slowly, by steps, from tiny particles coagulating into slightly larger ones."

"Now, the efficient foraging of salps on particles as small as a fraction of a micrometer introduces a substantial shortcut in this process, since digestion and excretion package these tiny particles into much larger particles, which thus sink a lot faster."

This process starts with the mesh made of fine mucus fibers inside the salp's hollow body.

Salps, which can live for weeks or months, swim and eat in rhythmic pulses, each of which draws seawater in through an opening at the front end of the animal. The mesh captures the food particles, then rolls into a strand and goes into the gut, where it is digested.

"It was assumed that very small cells or particles were eaten mainly by other microscopic consumers, like protozoans, or by a few specialized metazoan grazers like appendicularians," said Madin.

"This research indicates that salps can eat much smaller organisms, like bacteria and the smallest phytoplankton, organisms that are numerous and widely distributed in the ocean."

The work, also funded by the WHOI Ocean Life Institute, "implies that salps are more efficient vacuum cleaners than we thought," said Stocker.

"Their amazing performance relies on a feat of bioengineering--the production of a nanometer-scale mucus net--the biomechanics of which remain a mystery."

(Photo: Kelly Sutherland and Larry Madin, WHOI)

National Science Foundation

GLOBAL TROPICAL FORESTS THREATENED BY 2100

0 comentarios

By 2100 only 18% to 45% of the plants and animals making up ecosystems in global, humid tropical forests may remain as we know them today, according to a new study led by Greg Asner at the Carnegie Institution’s Department of Global Ecology.

The research combined new deforestation and selective logging data with climate-change projections. It is the first study to consider these combined effects for all humid tropical forest ecosystems and can help conservationists pinpoint where their efforts will be most effective. The study is published in the August 5, 2010, issue of Conservation Letters.

“This is the first global compilation of projected ecosystem impacts for humid tropical forests affected by these combined forces,” remarked Asner. “For those areas of the globe projected to suffer most from climate change, land managers could focus their efforts on reducing the pressure from deforestation, thereby helping species adjust to climate change, or enhancing their ability to move in time to keep pace with it. On the flip side, deforested regions of the world that are projected to experience fewer effects from climate change could be targeted for restoration.”

Tropical forests hold more than half of all the plant and animal species on Earth. But the combined effect of climate change, forest clear-cutting, and logging may force them to adapt, move, or die. The scientists looked at land use and climate change by integrating global deforestation and logging maps from satellite imagery and high-resolution data with projected future vegetation changes from 16 different global climate models. They then ran scenarios on how different types of species could be geographically reshuffled by 2100. They used the reorganization of plant classes, such as tropical broadleaf evergreen trees and tropical drought-deciduous trees, plus different kinds of grasses, as surrogates for biodiversity changes.

For Central and South America, climate change could alter about two-thirds of the humid tropical forests’ biodiversity (the variety and abundance of plants and animals in an ecosystem). Combine that scenario with current patterns of land-use change, and the Amazon Basin alone could see changes in biodiversity over 80% of the region.

Most of the changes in the Congo are likely to come from selective logging and climate change, which could negatively affect between 35% and 74% of that region. At the continental scale, about 70% of Africa’s tropical forest biodiversity would likely be affected if current practices are not curtailed.

In Asia and the central and southern Pacific islands, deforestation and logging are the primary drivers of ecosystem changes. Model projections suggest that climate change might play a lesser role there than in Latin America or Africa. That said, the research showed that between 60% and 77% of the area is susceptible to biodiversity losses via massive ongoing land-use changes in the region.

“This study is the strongest evidence yet that the world’s natural ecosystems will undergo profound changes—including severe alterations in their species composition—through the combined influence of climate change and land use,” remarked Daniel Nepstad, senior scientist at the Woods Hole Research Center. “Conservation of the world’s biota, as we know it, will depend upon rapid, steep declines in greenhouse gas emissions.”

(Photo: Carnegie I.)

Carnegie Institution

STRINGING TOGETHER A PICTURE OF SUPERCONDUCTORS

0 comentarios

For decades, physicists have been trying to reconcile the two major theories that describe physical behavior. The first, Einstein’s theory of general relativity, uses gravity — forces of attraction — to explain the behavior of objects with large masses, such as falling trees or orbiting planets. However, at the atomic and subatomic level, particles with negligible masses are better described using another theory: quantum mechanics.

A “theory of everything” that marries general relativity and quantum mechanics would encompass all physical interactions, no matter the size of the object. One of the most popular candidates for a unified theory is string theory, first developed in the late 1960s and early 1970s.

String theory holds that electrons and quarks (the building blocks of larger particles) are one-dimensional oscillating strings, not the dimensionless objects they are traditionally thought to be.

Physicists are divided on whether string theory is a viable theory of everything, but many agree that it offers a new way to look at physical phenomena that have otherwise proven difficult to describe. In the past decade, physicists have used string theory to build a connection between quantum and gravitational mechanics, known as gauge/gravity duality.

MIT physicists, led by Hong Liu and John McGreevy, have now used that connection to describe a specific physical phenomenon — the behavior of a type of high-temperature superconductor, or a material that conducts electricity with no resistance. The research, published in the Aug. 5 online edition of Science, is one of the first to show that gauge/gravity duality can shed light on a material’s puzzling physical behavior.

So far, the team has described a few aspects of the behavior of a class of superconducting materials called cuprates. However, the researchers hope their work could lead to more general theories that describe other materials and, eventually, predict their behavior. “That’s the ultimate theoretical goal, and we haven’t really achieved that,” says Liu.

MIT graduate student Nabil Iqbal and recent PhD recipients Thomas Faulkner and David Vegh are also authors of the paper.

In 1986, physicists discovered that cuprates (ceramic compounds that contain copper) can superconduct at relatively high temperatures (up to 135 degrees Celsius above absolute zero).

At the atomic level, cuprates are classified as a “many-body system” — essentially a vast collection of electrons that interact with each other. Such systems are usually described using quantum mechanics. However, so far, physicists have found it difficult to describe cuprates, because their behavior is so different from other materials. Understanding that behavior could help physicists find new materials that superconduct at even higher temperatures. These new materials would have potentially limitless applications.

Unlike most materials, cuprates do not behave as Fermi liquids, the standard quantum-mechanical description of how electrons in metals behave at very low temperatures (close to absolute zero, or -273 degrees Celsius). Instead, cuprates become superconductors. Just above the temperature at which they begin to superconduct, they enter a state called the “strange metal” state.

In this study, the researchers focused on two properties that distinguish those cuprate strange metals from Fermi liquids. In ordinary Fermi liquids, electrical resistivity and the rates of electron scattering (deflection from their original course caused by interactions with each other) are both proportional to the temperature squared. However, in cuprates (and other superconducting non-Fermi liquids), electron scattering and resistivity are proportional to the temperature. “There’s really no theory of how to explain that,” says Liu.
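The contrast can be written as two simple scaling laws. The snippet below is purely schematic, with made-up prefactors, and is not the gauge/gravity calculation described in the paper: a Fermi liquid's resistivity grows as the square of temperature, while a strange metal's resistivity grows linearly with temperature.

```python
# Schematic scaling comparison (illustrative prefactors, not the MIT calculation).
def fermi_liquid_resistivity(T, rho0=0.0, A=1.0e-3):
    return rho0 + A * T**2        # quadratic in temperature

def strange_metal_resistivity(T, rho0=0.0, B=0.1):
    return rho0 + B * T           # linear in temperature

for T in (10, 50, 100, 200):      # temperatures in kelvin
    print(T, fermi_liquid_resistivity(T), strange_metal_resistivity(T))
```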

Using gauge/gravity duality — the connection between quantum and gravitational mechanics — the MIT team identified a system that has the same unusual properties as strange metals, but could be explained by gravitational mechanics. In this case, the model they used was a gravitational system with a black hole. “It’s a mathematical abstraction which we hope may shed light on the physics of the real system,” says Liu. In their model, they can study behavior at high and low energy (determined by how the excitation energy of a single electron compares to the average energy of an electron in the system), and it turns out that at low energy, the black-hole model exhibits many of the same unusual traits seen in non-Fermi liquids such as cuprates.

For example, in both systems, when an electron at the lowest possible energy level is excited (by a photon or another particle), the resulting interaction between the electron and the hole left behind cannot be described as a quasiparticle (as it can in ordinary metals), because the electron excitation decays so quickly. (The electrons decay so quickly because their scattering rate is proportional to the temperature.) Furthermore, the electrical resistance of the black-hole system is directly proportional to temperature — just as it is in cuprates.

Gauge/gravity duality offers a “map” that correlates certain features of the black-hole model to corresponding features of strange metals. Therefore, once the physicists calculated the features of the model, using general relativity, those values could be translated to the corresponding values in the strange-metal system. For example, the value of an electromagnetic field in the gravitational system could correspond to the density of electrons in the quantum system.

Physicists have previously used gauge/gravity duality to describe some characteristics of quark gluon plasma, the “hot soup” of elementary particles that existed in the first millionths of a second after the Big Bang. However, this is the first time it has been used to give insight into a type of condensed matter (solids and liquids are condensed matter).

For that reason, the paper should have a significant impact in theoretical physics, says Joseph Polchinski, a theoretical physicist at the University of California at Santa Barbara. “Whenever people have systems they can’t understand in other ways, this might be a tool to try to understand it,” he says.

The MIT team believes the approach could shed light on a group of rare metal compounds known as heavy fermion metals, whose electrons behave as if their masses were 100 to 1,000 times greater than those in ordinary metals. They also display some of the same non-Fermi liquid behavior seen in the strange metal phase of cuprates.

(Photo: Wikimedia commons)

MIT
