Thursday, July 8, 2010



A sensitive measuring device must not be dropped - because this usually destroys the precision of the instrument. A team of researchers including scientists from the Max Planck Institute of Quantum Optics has done exactly this, however. And the researchers want to use this experience to make the measuring instrument even more sensitive.

The team, headed by physicists from the University of Hanover, dropped a piece of apparatus, in which they generated a weightless Bose-Einstein condensate (BEC), to the bottom of a drop tower at the University of Bremen. The particles in a BEC lose their individuality and can be considered to be a 'super-particle'. The researchers want to use such an ultra-cold quantum gas at zero gravity to construct a very sensitive measuring device for the Earth's gravitational field - in order to find deposits of minerals, and also to settle fundamental issues in physics.

In a vacuum, a feather falls as quickly as a lead ball - something presented to students as irrefutable. "However, the equivalence principle is only a postulate that needs to be tested," says Ernst Maria Rasel, professor at the University of Hanover. According to the equivalence principle, the heavy mass with which bodies attract each other corresponds to the inertial mass, which resists an accelerating force. This means that in a vacuum all bodies fall at the same rate. Physicists want to use a measuring device that measures gravity extremely accurately to investigate whether this hypothesis can really be elevated to a physical law. Ernst Maria Rasel's team has now taken a first step in this direction.

The researchers generated a Bose-Einstein condensate (BEC) in zero gravity and observed, for more than a second, how the atomic cloud behaves in free fall. To this end, they installed an atom chip developed by researchers working with Theodor W. Hänsch, Director at the Max Planck Institute of Quantum Optics, together with solenoids, lasers, a camera and the necessary energy supply, into a cylindrical capsule about as high and wide as a door. After they had loaded a cloud of several million rubidium atoms onto the atom chip, they dropped the complete apparatus 146 metres into the depths. A tower at the Center of Applied Space Technology and Microgravity of the University of Bremen specializes in precisely such experiments.

As the capsule fell for four seconds in the drop tower, the researchers generated the BEC on the atom chip, initially by remote control: strong magnetic fields and lasers hold the particles on the chip and cool them. At a few millionths of a degree above absolute zero - minus 273.15 degrees Celsius - the particles have lost almost all of their energy and assume a new physical state: all atoms are now in the quantum mechanical ground state, so that they can no longer be distinguished as individual particles in the quantum gas.

"They behave completely coherently, practically like a heap of atoms that assumes the properties of a single huge atom," says Tilo Steinmetz, who was involved in the experiment as a researcher from the Max Planck Institute of Quantum Optics. Since the laws of quantum mechanics say that every particle can also be considered to be a wave, it is possible to describe what is happening in a different way: A wave packet of matter forms in which the atoms no longer stay at fixed locations - they are delocalized. This grouping is maintained until an energetic push, however small, mixes it up.
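Why such extreme cooling is needed can be sketched with a back-of-the-envelope calculation: condensation sets in once the thermal de Broglie wavelength of the atoms grows comparable to their spacing, so that the individual wave packets overlap. A minimal sketch in Python (the 100-nanokelvin temperature is an illustrative assumption, not a figure from this experiment):

```python
import math

# Thermal de Broglie wavelength: lambda = h / sqrt(2 * pi * m * k_B * T).
# A BEC can form once this wavelength becomes comparable to the spacing
# between atoms, so their wave packets overlap and merge.
H = 6.626e-34           # J*s, Planck constant
K_B = 1.381e-23         # J/K, Boltzmann constant
M_RB87 = 87 * 1.66e-27  # kg, approximate mass of a rubidium-87 atom

def de_broglie_wavelength(temperature_kelvin):
    """Thermal de Broglie wavelength of a Rb-87 atom, in metres."""
    return H / math.sqrt(2 * math.pi * M_RB87 * K_B * temperature_kelvin)

# At 100 nanokelvin (an illustrative ultra-cold temperature) the wavelength
# reaches a sizeable fraction of a micrometre - the scale of the spacing
# between atoms in a dilute cloud:
print(de_broglie_wavelength(100e-9))
```

Since the wavelength scales as 1/sqrt(T), every factor of 100 in cooling stretches the wave packets tenfold, which is why only temperatures within millionths of a degree of absolute zero let the atoms lose their individuality.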

"We generate a BEC in less than a second on our atom chip. With conventional laboratory apparatus, this takes up to one minute," says Tilo Steinmetz. In addition, an experiment on an atom chip requires significantly less electrical power. "It is thus ideal for use in a drop tower capsule, where energy supply and cooling present a logistical challenge," says Steinmetz.

As soon as the atoms on the chip had merged into the super-particle, the researchers carefully loosened the hold of the trap and released the BEC. The camera in the capsule then enabled them to observe how the condensate spread. This expansion reacts extremely sensitively to external fields - to differences in the Earth's gravitational field, for example. These differences exist because the gravitation at a given point on Earth depends on the local density of the Earth's crust. The longer the Bose-Einstein condensate expands - that is, the longer it floats in zero gravity - the more clearly these differences make themselves felt. With the experiment in the drop tower alone, the researchers extended the time available for a measurement more than tenfold compared to a laboratory experiment. This could help drastically improve the accuracy of measurement data in the future.

The differences can be measured in an atom interferometer: the quantum gas - that is, the wave packet of matter - is split into two parts that move through space-time along different paths in the gravitational field. Gravitation acts like an optical medium whose refractive index bends the waves. As soon as the two parts reunite, they interfere, just as waves on a water surface do when they run into each other. The interference pattern depends on how differently the two matter waves have propagated. Comparing matter waves of different composition then amounts to a test of the equivalence principle with matter waves. The physicists in Ernst Maria Rasel's group now want to construct such an atom interferometer for the capsule of the Bremen drop tower.
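The gain from longer free fall can be made concrete with the standard phase formula for a light-pulse atom interferometer, phi = k_eff * g * T^2, where T is the time between the light pulses. The numbers below (a two-photon transition on the rubidium D2 line) are illustrative assumptions, not parameters reported for this experiment:

```python
import math

# Phase accumulated by a light-pulse atom interferometer in a gravitational
# field: phi = k_eff * g * T**2. Because phi grows with T squared, extending
# the free-fall (interrogation) time T pays off quadratically in sensitivity.
WAVELENGTH = 780e-9                     # m, rubidium D2 line (assumed)
K_EFF = 2 * (2 * math.pi / WAVELENGTH)  # 1/m, effective two-photon wavevector
G = 9.81                                # m/s^2, local gravitational acceleration

def interferometer_phase(T):
    """Interferometer phase (radians) for interrogation time T in seconds."""
    return K_EFF * G * T ** 2

# A tenfold longer interrogation time gives a roughly hundredfold larger
# phase, hence far better resolution of small changes in g:
print(interferometer_phase(0.1) / interferometer_phase(0.01))
```

This quadratic scaling is why the more-than-tenfold extension of the measurement time in the drop tower matters so much for mapping tiny variations in the Earth's gravitational field.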
"Ultimately, we would like to perform such experiments in space," says Ernst Maria Rasel. The equivalence principle could also be tested there. To this end, the researchers must drop clouds of different atoms to Earth for as long as possible. They could then find out whether all bodies really fall with the same speed. And the longer the atom clouds remain in zero gravity - that is, the further they fall - the more chance there is of clarifying this.

(Photo: ZARM / University of Bremen)

Max Planck Institute


Researchers have found that increasing the levels of certain proteins in the blood vessels of mice relaxed the vessels, lowering the animals' blood pressure. The study opens new avenues of research that may lead to new treatments for hypertension.

"The paper demonstrates that cytochrome P450 plays an important role in the management of high blood pressure, a disease of enormous public health concern," said Darryl Zeldin, M.D., acting clinical director of the National Institute of Environmental Health Sciences (NIEHS) and senior author on the paper.

According to the Centers for Disease Control and Prevention, about 1 in 3 adults in the United States has high blood pressure, which increases the risk for heart disease and stroke, the first and third leading causes of death in the United States.

The study, published online in The FASEB Journal, was conducted by researchers at NIEHS who teamed with investigators at the University of North Carolina at Chapel Hill (UNC), Medical College of Wisconsin, Milwaukee, and Oregon Health and Science University, Portland.

The researchers created animal models that had a human cytochrome P450 (CYP450 or P450) in the cells that line their blood vessels. The mice with the P450 generated more substances called epoxyeicosatrienoic acids or EETs, known for their role in protecting the cardiovascular system. EETs relax and dilate the blood vessels and fight inflammation.

"We found that when the animals were exposed to substances known to increase blood pressure, the animals with the P450 had lower blood pressure and less damage to the kidneys compared to normal mice," said Craig R. Lee, Pharm.D., Ph.D., assistant professor at UNC and lead author on the paper. "We hope that these studies will advance the development of new treatments for high blood pressure."

"This is a great example of a basic finding that improves our understanding of a metabolic pathway that can be used to develop improved treatments for those suffering from a common disease like hypertension," said Linda Birnbaum, Ph.D., director of the NIEHS and the National Toxicology Program.

National Institutes of Health


New research reported at the American Headache Society's 52nd Annual Scientific Meeting in Los Angeles this week shows that sleep deprivation leads to changes in the levels of key proteins that facilitate events involved in the underlying pathology of migraine.

Paul L. Durham, Ph.D. and his team at Missouri State University's Center for Biomedical & Life Sciences sought to understand the mechanisms by which sleep disturbance increases the risk of migraine and may even trigger it.

"Previous clinical data support a relationship between sleep quality and migraine," said Dr. Durham, "so we used an established model of sleep deprivation to measure levels of proteins that lower the activation threshold of peripheral and central nerves involved in pain transmission during migraine. We found that REM sleep deprivation caused increased expression of the proteins p38, PKA, and P2X3, which are known to play an important role in initiating and sustaining chronic pain."

"So little is known about the biological mechanisms that underlie how certain factors trigger a migraine attack," said David Dodick, M.D., president of the AHS. "This is important work and this Missouri State team should be applauded for beginning to shed light on an area desperately in need of investigation."

The work was supported by Merck & Co.

More than 200 scientific papers and posters are being presented during the AHS meeting, which is expected to draw some 500 migraine and headache health professionals, including doctors, researchers and specialists.

American Headache Society



Carbon 14 dating has recently enabled an international team of researchers to establish an absolute chronology of Dynastic Egypt (approximately 2700-1100 BC) for the first time. The analysis of short-lived organic samples archaeologically attributed to a specific reign or period of Egyptian history has confirmed some previous chronological estimates but called others into question. These results are published in Science of 18 June 2010.

For more than 150 years, explorers and researchers throughout the world have been striving to gain a clearer understanding of one of the most fascinating civilizations: Ancient Egypt. A relative chronology of the kings who succeeded one another to the Egyptian throne has gradually been established through the study of epigraphic, historical or archaeological documents. However, determining an absolute chronology was much more difficult because at the beginning of each new reign, the clock went back to zero. Astrophysical data had already made it possible to envisage some temporal reference points, but such data was not sufficient to date each of the Egyptian dynasties precisely.

In order to determine the absolute chronology of this historical period, research teams in laboratories worldwide, including the Laboratoire de Mesure du Carbone 14 (CEA, CNRS, IRD, IRSN, Ministère de la Culture et de la Communication), collected 211 specimens of Egyptian artifacts from numerous European and American museums. Seeds, baskets, textiles, plants and fruits archaeologically attributed to a specific Egyptian reign or period were dated using Carbon 14. "The Department of Egyptian Antiquities at the Louvre Museum in Paris supplied us with samples of basket-ware attributed to the reign of Thoutmosis III, one of the most important periods of Ancient Egypt", explains Anita Quiles, Ph.D. student at the Laboratoire de Mesure du Carbone 14. Some of the dating was performed using this laboratory's ARTEMIS device, the only accelerator mass spectrometer in France.

These analyses, combined with the known or supposed duration of each reign and the order of succession, have made it possible to establish the first complete and accurate chronology of the ancient Egyptian dynasties.

The chronology thus obtained is in line with most previous findings. However, it requires some historical rectifications. For example, the Old Kingdom appears to be older than the chronological estimates proposed hitherto. The results also suggest that the reign of Djoser during the Old Kingdom started between 2691 and 2625 BC, and that the New Kingdom started between 1570 and 1544 BC. A remarkable source of information for Egyptologists, this chronology will also contribute to a more accurate temporal framework for surrounding civilizations, such as Nubia or the Near East.

Carbon 14 is a radioactive carbon isotope. The half-life of 14C - the time after which the number of atoms is halved - is 5,730 years. 14C forms in the upper atmosphere, which is bombarded by cosmic radiation. In the environment, an equilibrium is established between the production of 14C and its disappearance through radioactive decay. The equilibrium value is one radioactive 14C atom per 1,000 billion non-radioactive 12C atoms. This ratio is found in living organisms, as a result of photosynthesis in plants and of the food chain in animals. When an organism dies, it no longer incorporates 14C, and its quantity of 14C atoms declines through radioactive decay while that of 12C remains constant. Dating is based on comparing the 14C/12C ratio of a sample with that of a standard; from this ratio the age of the sample can be deduced, going back about 50,000 years.
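The dating arithmetic described above fits in a few lines: the age follows from the measured 14C/12C ratio and the half-life via t = -(T_half / ln 2) * ln(ratio). A minimal sketch (the function and variable names are ours, for illustration):

```python
import math

HALF_LIFE_C14 = 5730.0  # years; time for half of the 14C atoms to decay

def c14_age(ratio_to_modern):
    """Age in years, from the sample's 14C/12C ratio relative to the modern standard."""
    decay_constant = math.log(2) / HALF_LIFE_C14  # per year
    return -math.log(ratio_to_modern) / decay_constant

# A sample retaining half the modern 14C level is one half-life old
# (about 5,730 years); a quarter corresponds to two half-lives:
print(c14_age(0.5))
print(c14_age(0.25))
```

In practice, laboratory results like these raw radiocarbon ages must still be calibrated against known fluctuations in atmospheric 14C before they can be matched to historical reigns.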

(Photo: © Anita Quiles/CEA)

Centre National de la Recherche Scientifique



In nature, random signals often fall mysteriously in step. Fireflies flashing sporadically in early evening soon flash together, and the same harmonic behavior can be seen in chirping crickets, firing neurons, swinging clock pendulums and now, it turns out, rupturing earthquake faults.

Scientists have well established that big earthquakes can trigger other big quakes by transferring stress along a single fault, as successive earthquakes in Turkey and Indonesia have shown. But some powerful quakes can set off other big quakes on faults tens of kilometers away, with just a tiny nudge, says a new paper. Christopher Scholz, a seismologist at Columbia University’s Lamont-Doherty Earth Observatory, explains how: the faults are already synchronized, he says.

Scholz argues in the most recent issue of the Bulletin of the Seismological Society of America that when a fault breaks, it may sometimes gently prod a neighboring fault also on the verge of fracturing. The paper finds evidence for synchronized, or “phase locked,” faults in southern California’s Mojave Desert, the mountains of central Nevada, and the south of Iceland. Drawing on earthquake patterns as far back as 15,000 years, the paper identifies strings of related earthquakes, and explains the physics of how faults separated by up to 50 kilometers, and rupturing every few thousand years, might align themselves to rupture almost simultaneously.

“All of a sudden bang, bang, bang, a whole bunch of faults break at the same time,” says Scholz. “Now that we know that some faults may act in consort, our basic concept of seismic hazard changes. When a large earthquake happens, it may no longer mean that the immediate future risk is lower, but higher.”

The Landers quake may have triggered another big quake seven years later, at Hector Mine near Joshua Tree National Park.

The idea of independent events synchronizing themselves goes back to the Age of Discovery and the pendulum clock, invented as scientists and navigators were searching for a device to measure longitude at sea. In 1665, Christiaan Huygens, the Dutch mathematician who invented the pendulum clock (a dead end, it turned out, in solving the longitude problem) first described how the pendulums of two clocks hanging from the same wall became synchronized. Known as entrainment, or coupled oscillation, this phenomenon is caused by the motion of the two pendulums communicating through the beam supporting the clocks.

Entrainment can also happen when faults lie relatively close, between 10 and 50 kilometers apart, and are moving at comparable speeds, Scholz says. As faults break successively over time, their cycles may eventually fall in sync, a process described in the paper by the mathematical “Kuramoto Model.”
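The Kuramoto model the paper invokes can be illustrated with a toy simulation: oscillators (here standing in for fault cycles) with similar natural frequencies and weak mutual coupling drift into phase lock over time. All parameters below are illustrative choices, not values from Scholz's paper:

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt):
    """One Euler step of the all-to-all Kuramoto model:
    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    return [th + (w + K / n * sum(math.sin(phases[j] - th) for j in range(n))) * dt
            for th, w in zip(phases, omegas)]

def order_parameter(phases):
    """Synchrony measure r in [0, 1]: r ~ 0 for incoherent phases, r = 1 for lockstep."""
    n = len(phases)
    re = sum(math.cos(th) for th in phases) / n
    im = sum(math.sin(th) for th in phases) / n
    return math.hypot(re, im)

random.seed(1)
n = 20
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]   # random initial phases
omegas = [1.0 + random.gauss(0, 0.05) for _ in range(n)]      # similar "slip rates"
r0 = order_parameter(phases)
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, K=1.0, dt=0.05)
print(r0, order_parameter(phases))  # synchrony rises toward 1
```

With coupling well above the critical strength for this narrow frequency spread, the initially scattered phases end up nearly aligned - the analogue of neighboring faults coming to rupture almost simultaneously.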

The paper provides real-world examples from places where geologists and seismologists have compiled a long record of past quakes. In the Mojave Desert, the Camp Rock fault, a secondary fault off the San Andreas, ruptured in 1992, causing a magnitude 7.3 quake in the town of Landers, killing one child. Seven years later, the Pisgah fault, 24 kilometers away, broke, causing a magnitude 7.1 quake at Hector Mine, inside the Twentynine Palms Marine Corps Base.

When a fault ruptures in a large earthquake, the movement releases stresses that may have built up over millennia. But the movement also transfers a small amount of that stress, usually a fraction of a percent, to nearby faults. In order for that tiny added stress to trigger a large earthquake on a nearby fault, that fault had to already be very near its breaking point, says Scholz. For the two faults to have been simultaneously near their breaking points requires them to be synchronized in their seismic cycles.

Paleoseismology - the study of the physical signs left by past earthquakes - shows that the Mojave faults rupture every 5,000 years or so, so the relatively short seven-year lag between the Landers and Hector Mine quakes suggested to Scholz that the timing could not be random. When he looked at the paleoseismological record, he saw that both faults had ruptured together before, about 5,500 years ago and about 10,000 years ago. He noticed a similar trend with the nearby Lenwood and Helendale faults, which had ruptured together 1,000 years ago and 9,000 years ago. And the two fault pairs happened to be moving at virtually the same pace: 1 millimeter and 0.8 millimeters per year, respectively.

He noticed a similar trend in Nevada. In the summer of 1954, the Rainbow Mountain fault system was hit by five earthquakes ranging in magnitude from 5.5 to 6.8. The action culminated on Dec. 16 with a 7.1 quake on Fairview Peak and a 6.8 quake four minutes later on the Dixie Valley fault, 40 kilometers away. Again, the triggering stress was a small fraction of a percent. Paleoseismic evidence showed that similar groups of faults nearby had produced clusters of earthquakes every 3,000 years or so over the last 12,000 years.

The same pattern emerged in Iceland. In June 2000, two quakes of magnitudes 6.5 and 6.4 struck within four days of each other on parallel faults 14 kilometers apart. In 1896, five large quakes struck on neighboring faults within 11 days of each other, with similar clusters occurring in 1784 and from 1732 to 1734.

Scholz says his hypothesis of synchronized faults could make it easier to assess some earthquake hazards by showing that faults moving at similar speeds, and within roughly 50 kilometers of each other, may break at similar times, while faults moving at greatly different speeds, and located relatively far apart, will not.

However, seismologists have yet to come up with a reliable method for predicting imminent earthquakes; the best they can do so far is to identify dangerous areas, and roughly estimate how often quakes of certain sizes may strike.

Ross Stein, a geophysicist at the U.S. Geological Survey, who was not involved in the study, questioned the paper’s wider significance. There is “good” evidence for historic earthquake sequences, and “possible” evidence for prehistoric sequences, he said, but those quakes make up a minority of earthquake events.

(Photo: Southern California Earthquake Data Center)

Columbia University



Materials do funny things at the nanoscale. A metal oxide complex called lanthanum strontium manganite is ferromagnetic in large quantities. But scaled to nanometer thickness, it becomes an insulator and loses much of its ferromagnetism. Same material, different behavior.

Using cutting-edge spectroscopy at atomic resolutions, researchers led by David A. Muller, professor of applied and engineering physics, have figured out why this happens, and how to grow ultra-thin manganite films while retaining their magnetic properties. Perfecting such a technique could pave the way for manganites and other oxides to replace silicon in thin-film electronics, memory storage and other technologies.

The work is detailed in a paper published online June 14 in the journal Proceedings of the National Academy of Sciences.

"A number of research groups have grown these thin layers before, and their results suggested that there is a 15-atomic-layer critical thickness, below which you could not get it conducting," said postdoctoral associate Lena Fitting Kourkoutis, the paper's first author. "But we show that we can go much lower to a handful of atomic layers and still keep it conducting."

The key is understanding how to grow perfect, defect-free manganite sheets. The chemical composition has to be exactly right, and even the slightest break in the crystalline lattice of the atomic layers can ruin the films' conductivity. These defects don't matter as much on a larger scale.

To examine manganite samples grown by their collaborators in Japan, the scientists used a technique called electron energy loss spectroscopy, performed in a scanning transmission electron microscope. They employed a technique (described in a 2008 Science paper) called aberration correction, which allows them extreme precision for imaging the composition of films only atoms thick.

Manganites have good potential for the emerging field of spintronics, which exploits materials' electron spin and magnetic moment for use in memory storage technologies.

(Photo: Lena Fitting Kourkoutis/Muller lab)

Cornell University



The University of Bristol Innocence Project has submitted an application to the Criminal Cases Review Commission (CCRC) — the independent public body set up to investigate possible miscarriages of justice — on behalf of Neil Hurley, who is currently serving a life sentence for the murder of Sharon Pritchard.

Sharon Pritchard, Mr Hurley's ex-partner and the mother of their two children, was found dead in a playing field of Croeserw Primary School, South Wales, on 30 August 1993.

Mr Hurley was arrested a few days after Sharon Pritchard's murder. There was no physical evidence linking him to the crime; he was convicted primarily on the basis of his acrimonious relationship with the victim and an alleged history of violence towards her. At trial, it emerged that two vital suspects had been overlooked during the police investigation, one of whom arrived home on the morning of the murder with his clothing covered in mud and blood, none of which was subjected to any forensic testing.

Subsequent to Mr Hurley's conviction, a possible alibi has emerged that might prove he was at home at the time of the murder. A number of witnesses who testified against Mr Hurley at trial have also retracted their evidence, claiming that they were coerced by police officers into giving false evidence against him. In addition, a blood-stained sock was recovered half a mile from where Sharon Pritchard's body was found. Blood-grouping analysis conducted on the sock yielded results that matched neither the deceased nor Mr Hurley, yet the sock was deemed by the forensic scientist to be 'irrelevant to the crime'.

Mr Hurley’s case was allocated to the University of Bristol Innocence Project by the Innocence Network UK (INUK), the umbrella organisation for 26 member innocence projects in UK universities, shortly after his third application to the Criminal Cases Review Commission failed. The Criminal Cases Review Commission is the official body that reviews alleged miscarriages of justice in England, Wales and Northern Ireland and refers cases thought to have a real possibility of being overturned back to the appeal courts.

The investigation by the University of Bristol Innocence Project has involved several teams of students over the last five years. They found that a number of police officers involved in Sharon Pritchard’s murder investigation were also alleged to have caused several high-profile miscarriages of justice. Two key police officers in Mr Hurley’s case have also recently been convicted of conspiracy to commit misconduct in a public office and conspiracy to commit fraud.

More recently, the possibility of exoneration through DNA testing was identified when Gabe Tan, Casework Manager, uncovered over a hundred exhibits recovered from the crime scene, the victim’s body, Mr Hurley and other suspects that were never subjected to any form of DNA testing.

At the time of the original police investigation in 1993, DNA testing was already available, but the technique was still in its infancy and not routinely used by the police. The main forensic technique employed in the investigation was rudimentary blood-group testing, which was highly limited because it could not be applied to exhibits that did not contain blood. Even for items that did contain blood, many of the samples were either too degraded or insufficient in quantity for blood-grouping analysis to be conducted successfully.

These limitations of blood-grouping analysis can be circumvented by advanced DNA testing today. Over the last decade, new DNA techniques such as Low Copy Number and Touch DNA have made it possible to obtain profiles from degraded and minute quantities of biological material where standard DNA tests might fail. These tests could potentially exonerate Mr Hurley.

Almost 13 years on from his first application to the Criminal Cases Review Commission, Mr Hurley remains in prison three years past his tariff date - the date on which he could have been released on parole - while the means of potentially validating his claim of innocence still wait to be pursued.

Dr Michael Naughton, the Founder and Director of the Innocence Network UK (INUK) and the University of Bristol Innocence Project said: ‘It is a matter of public concern that the DNA tests that can prove whether Mr Hurley is innocent or guilty could have been commissioned by the Criminal Cases Review Commission when he made his first application in 1997. It is, perhaps, indicative of the fundamental failures of the Criminal Cases Review Commission’s method of review that it required the efforts of innocence project students working on a pro bono basis to unearth the potential of DNA testing in Mr Hurley's case.’

The University of Bristol Innocence Project urges the Criminal Cases Review Commission to conduct a speedy and full investigation into Mr Hurley’s latest application and to undertake without delay DNA testing of the exhibits from the crime scene that might settle his claim of innocence.

(Photo: Bristol U.)

University of Bristol



If concerns for global climate change and ever-increasing costs weren’t enough, the disastrous Gulf oil spill makes an even more compelling case for the development of transportation fuels that are renewable, can be produced in a sustainable fashion, and do not put the environment at risk. Liquid fuels derived from plant biomass have the potential to be used as direct replacements for gasoline, diesel and jet fuels if cost-effective means of commercial production can be found.

Researchers with the U.S. Department of Energy (DOE)’s Joint BioEnergy Institute (JBEI) have identified a trio of bacterial enzymes that can catalyze key steps in the conversion of plant sugars into hydrocarbon compounds for the production of green transportation fuels.

Harry Beller, an environmental microbiologist who directs the Biofuels Pathways department for JBEI's Fuels Synthesis Division, led a study in which a three-gene cluster from the bacterium Micrococcus luteus was introduced into the bacterium Escherichia coli. The enzymes produced by this trio of genes enabled the E. coli to synthesize long-chain alkene hydrocarbons from glucose. These long-chain alkenes can then be reduced in size - a process called "cracking" - to obtain shorter hydrocarbons that are compatible with today's engines and favored for the production of advanced lignocellulosic biofuels.

“In order to engineer microorganisms to make biofuels efficiently, we need to know the applicable gene sequences and specific metabolic steps involved in the biosynthesis pathway,” Beller says. “We have now identified three genes encoding enzymes that are essential for the bacterial synthesis of alkenes. With this information we were able to convert an E. coli strain that normally cannot make long-chain alkenes into an alkene producer.”

Working with Beller on this study were Ee-Been Goh and Jay Keasling. The three were the co-authors of a paper that appeared earlier this year in the journal Applied and Environmental Microbiology, titled “Genes Involved in Long-Chain Alkene Biosynthesis in Micrococcus luteus.”

It has long been known that certain types of bacteria are able to synthesize aliphatic hydrocarbons, which makes them promising sources of the enzymes needed to convert lignocellulose into advanced biofuels. However, until recently, little was known about the bacterial biosynthesis of non-isoprenoid hydrocarbons beyond a hypothesis that fatty acids are precursors. JBEI researchers in the Fuels Synthesis Division, which is headed by co-author Keasling, are using the tools of synthetic biology and mathematical models of metabolism and gene regulation to engineer new microbes that can quickly and efficiently produce advanced biofuel molecules. E. coli is one of the model organisms being used in this effort because it is a well-studied microbe that is exceptionally amenable to genetic manipulation.

“We chose to work with M. luteus because a close bacterial relative was well-documented to synthesize alkenes and because a draft genome sequence of M. luteus was available,” Beller says. “The first thing we did was to confirm that M. luteus also produces alkenes.”

Beller and his colleagues worked from a hypothesis that known enzymes capable of catalyzing both decarboxylation and condensation should be good models for the kind of enzymes that might catalyze alkene synthesis from fatty acids. Using condensing enzymes as models, the scientists identified several candidate genes in M. luteus, including Mlut_13230. When this gene was expressed in E. coli together with the two adjacent genes, Mlut_13240 and Mlut_13250, the trio of enzymes they encode catalyzed the synthesis of alkenes from glucose. Observations were made both in vivo and in vitro.

“This group of enzymes can be used to make aliphatic hydrocarbons in an appropriate microbial host but the resulting alkenes are too long to be used directly as liquid fuels,” Beller says. “However, these long-chain alkenes can be cracked – a technique routinely used in oil refineries – to create hydrocarbons of an appropriate length for diesel fuel.”

The next step, Beller says, is to learn more about how these three enzymes work, particularly Mlut_13230 (also called OleA), which catalyzes the key step in the alkene biosynthesis pathway - the condensation of fatty acids.

“We’re also studying other pathways that can produce aliphatic hydrocarbons of an appropriate length for diesel fuels without the need for cracking,” Beller says. “Nature has devised a number of biocatalysts to produce hydrocarbons, and our goal is to learn more about them for the production of green transportation fuels.”

The draft genome sequence of M. luteus was prepared at DOE’s Joint Genome Institute in Walnut Creek, CA, which carries out advanced genomics research in support of DOE missions related to clean energy generation, and environmental characterization and cleanup.

(Photo: Centers for Disease Control and Prevention)

Lawrence Berkeley National Laboratory



New research published today (17 June) by scientists funded by the Biotechnology and Biological Sciences Research Council (BBSRC) shows that malaria is tens of thousands of years older than previously thought. An international team, led by researchers at Imperial College London, has found that the potentially deadly tropical disease evolved alongside anatomically modern humans and moved with our ancestors as they migrated out of Africa around 60-80,000 years ago. The research is published in the journal Current Biology.

The findings and the techniques in the study could be important in informing current control strategies aimed at reducing the prevalence of malaria. There are an estimated 230 million cases each year, causing between 1 and 3 million deaths, and around 1.4 billion people are considered to be at risk of infection.

Dr Francois Balloux from the Medical Research Council (MRC) Centre for Outbreak Analysis and Modelling at Imperial College London was lead researcher on the project. He said: "Most recent work to understand how malaria has spread across the tropics has worked on the premise that the disease arose alongside the development of agriculture around 10,000 years ago. Our research shows that the malaria parasite has evolved and spread alongside humans and is at least as old as the human expansion out of Africa 60-80,000 years ago."

The international team worked with the largest collection of malaria parasites ever assembled. By characterising the parasites through DNA sequencing, the researchers were able to track the progress of malaria across the tropics and to estimate the age of the parasite. They found a clear pattern of decreasing genetic diversity with increasing distance from sub-Saharan Africa, closely mirroring the equivalent data for humans, which is strong evidence of co-evolution and co-migration.
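The kind of diversity-versus-distance analysis described above can be sketched in a few lines. The sampling sites and diversity values below are invented purely for illustration; real studies fit a measure such as expected heterozygosity against great-circle distance from a putative African origin.

```python
# Hedged sketch: Pearson correlation between genetic diversity and distance
# from sub-Saharan Africa. All numbers below are made up for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical sampling sites: distance (km) from an assumed African origin
# and a made-up diversity index for the local parasite population.
distance_km = [0, 2000, 5000, 8000, 12000, 16000]
diversity = [0.80, 0.74, 0.66, 0.55, 0.48, 0.41]

r = pearson_r(distance_km, diversity)
print(f"Pearson r = {r:.2f}")  # strongly negative: diversity falls with distance
```

A strongly negative correlation of this sort, mirrored in human genetic data, is the signature of serial founder effects along a shared migration route.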

Dr Balloux said: "The genetic sequencing of the malaria parasite shows a geographic spread pattern with striking similarities to studies on humans. This points to a shared geographic origin, age and route of spread around the world. This understanding is important because despite the prevalence and deadly impact of malaria little research has previously been done to understand the genetic variation of the parasite. The genetic diversity of malaria parasites is central to their threat as it helps them to overcome the immune system and to develop drug resistance, making this research vital in informing new and more effective control strategies."

(Photo: © Centers for Disease Control and Prevention)

Biotechnology and Biological Sciences Research Council



Toxic seas may have been responsible for delaying the evolution of life on Earth by 1 billion years, experts at Newcastle University have revealed.

The study, published online in Nature Geoscience, reveals for the first time a chemical ‘layering’ of the ocean which may have delayed the evolution of our earliest animal ancestors.

Using novel geochemical techniques developed by Newcastle University’s Dr Simon Poulton, the team found that beneath oxygenated surface waters, mid-depth oceanic waters were rich in sulphide about 1.8 billion years ago, conditions that may have persisted until oxygenation of the deep ocean more than 1 billion years later.

These widespread sulphidic conditions close to the continents, coupled with deeper waters that remained oxygen-free and iron-rich, would have placed major restrictions on both the timing and pace of biological evolution.

Dr Poulton, who led the research, explained: “It has traditionally been assumed that the first rise in atmospheric oxygen eventually led to oxygenation of the deep ocean around 1.8 billion years ago.

“This assumption has been called into question over recent years, and here we show that the ocean remained oxygen-free but became rich in toxic hydrogen sulphide over an area that extended more than 100 km from the continents. It took a second major rise in atmospheric oxygen around 580 million years ago to oxygenate the deep ocean.

“This has major implications as it would have potentially restricted the evolution of higher life forms that require oxygen, explaining why animals appear so suddenly, relatively late in the geological record.”

Between 2.4 and 1.8 billion years ago, the Earth underwent a major upheaval, sparked by the first great rise in atmospheric oxygen 2.4 billion years ago, when oxygen climbed from virtually nothing to around 5 per cent of present levels.

What has remained unclear, however, is the response of ocean chemistry to rising atmospheric oxygen – a vital piece of the evolutionary jigsaw because it is here that early life evolved.

Dr Poulton adds: “What we have done with this study is to provide the first detailed evaluation of changes in ocean chemistry with water depth in the global ocean at this critical time.

“Earth scientists will need to consider the consequences of this oceanic structure when trying to piece together the co-evolution of life and the environment.”

(Photo: Newcastle U.)

Newcastle University



Science has long puzzled over why a baby's brain is particularly flexible and why it changes so easily. Is it because babies have to learn a lot? A group of researchers from the Bernstein Network Computational Neuroscience, the Max Planck Institute for Dynamics and Self-Organization in Göttingen, the Friedrich Schiller University in Jena and Princeton University (USA) have now put forward a new explanation: maybe it is because the brain still has to grow.

Using a combination of experiments, mathematical models and computer simulations they showed that neuronal connections in the visual cortex of cats are restructured during the growth phase and that this restructuring can be explained by self-organisational processes. The study was headed by Matthias Kaschube, former researcher at the Max Planck Institute for Dynamics and Self-Organization and now at Princeton University (USA).

The brain is continuously changing. Neuronal structures are not hard-wired, but are modified with every learning step and every experience. Certain areas of the brain of a newborn baby are particularly flexible, however. In animal experiments, the development of the visual cortex can be strongly influenced in the first months of life, for example, by different visual stimuli.

Nerve cells in the visual cortex of fully grown animals divide up the processing of information from the eyes: some "see" only the left eye, others only the right. Cells with the same left- or right-eye specialisation lie close together in small groups, called columns. The researchers showed that during growth these structures are not simply inflated: the columns do not become larger, but their number increases. Nor do new columns form from new nerve cells; the number of nerve cells remains almost unchanged, and a large part of the growth of the visual cortex can be attributed to an increase in the number of non-neuronal cells. Instead, the changes can be explained by existing cells switching their preference for the right or the left eye. A further observation also points to such a restructuring: the arrangement of the columns changes. While the pattern initially looks stripy, the stripes dissolve over time and the pattern becomes more irregular.

"This is an enormous achievement by the brain - undertaking such a restructuring while continuing to function," says Wolfgang Keil, scientist at the Max Planck Institute for Dynamics and Self-Organization in Göttingen and first author of the study. "There is no engineer behind this doing the planning; the process must organise itself." The researchers used mathematical models and computer simulations to investigate how the brain might achieve this restructuring. On the one hand, the brain tries to keep neighbourhood relations in the visual cortex as uniform as possible. On the other, the development of the visual cortex is shaped by the visual process itself: cells that have once been stimulated more strongly by the left or the right eye try to maintain that preference. The researchers' model explains the formation of columns by taking both tendencies into account. The scientists showed that when the simulated tissue grows while the size of the columns is held constant, the columns in the computer model change exactly as observed in the experimental studies on the visual cortex of the cat: the stripes dissolve into a zigzag pattern and become more irregular. In this way, the researchers provide a mathematical basis that realistically describes how the visual cortex could restructure itself during the growth phase.
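The two competing tendencies described above can be caricatured in a toy simulation. This is a hedged sketch, not the authors' actual model: a ring of cells each carries an ocular-preference value in [-1, 1], a smoothing term keeps neighbours similar, and a commitment term lets each cell strengthen whichever eye already drives it. Tissue growth and constant column size, central to the real study, are not modelled here.

```python
import random

# Toy sketch (not the published model): each cell i carries an ocular
# preference o[i] in [-1, 1] (-1 = left eye, +1 = right eye). Two competing
# tendencies drive the update, as described in the article:
#  - a smoothing term keeps neighbourhood relations uniform,
#  - a commitment term lets a cell strengthen the eye that already drives it.

def step(o, eta_smooth=0.2, eta_commit=0.1):
    n = len(o)
    new = []
    for i in range(n):
        local = (o[(i - 1) % n] + o[(i + 1) % n]) / 2  # ring topology
        v = o[i] + eta_smooth * (local - o[i])          # be like the neighbours
        v += eta_commit * (1 if v >= 0 else -1)         # commit to one eye
        new.append(max(-1.0, min(1.0, v)))              # clamp to [-1, 1]
    return new

random.seed(0)
o = [random.uniform(-0.1, 0.1) for _ in range(40)]      # near-unspecialised start
for _ in range(200):
    o = step(o)

# After many steps the map settles into committed left/right domains;
# sign changes between adjacent cells mark the borders between columns.
borders = sum(1 for a, b in zip(o, o[1:]) if a * b < 0)
print("column borders:", borders)
```

Even this crude rule shows the key qualitative behaviour: weakly specialised cells sort themselves into committed domains without any external blueprint, purely through local interactions.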

(Photo: Wolfgang Keil)

Max Planck Institute



In humans and other primates, the prefrontal cortex is the seat of high-level functions such as learning, decision making, and planning. Neuroscientists have long wondered whether neurons in that part of the brain are specialized for one type of task or if they are “generalists” — that is, able to participate in many functions. A new study from MIT’s Picower Institute for Learning and Memory comes down in favor of the generalist theory.

MIT professor Earl Miller and others in his lab showed that when they trained monkeys to perform two different categorization tasks, about half of the neurons involved could switch between the two. The findings, reported in the June 10 issue of the journal Neuron, suggest that neurons of the prefrontal cortex have a much greater ability to adapt to different cognitive demands than neurons in other parts of the brain. These results support ideas about the malleability of neurons — nervous-system cells that process and transmit information — that Miller first proposed a decade ago.

Miller, the Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, says he’s not surprised by the findings. “We have a lot of mental flexibility,” he says. “We can change our topic of conversation, we can change what we’re thinking about. Some part of the brain has to have that flexibility on a neural level.”

Most neuroscientists who study brain activity in monkeys train the animals on only one task, so until now it had been impossible to reveal whether single neurons in the prefrontal cortex could be involved in more than one job.

In previous studies, Miller has shown that when monkeys are trained to categorize animals by distinguishing cats from dogs, some neurons in the prefrontal cortex become tuned to the concept of “cat” while others respond to the idea of “dog.”

This time, Miller, postdoctoral fellow Jason Cromer, and research scientist Jefferson Roy trained the monkeys to perform two different categorization tasks — distinguishing cats from dogs and sedans from sports cars. They recorded activity from about 500 neurons in the monkeys’ prefrontal cortex as the animals switched back and forth between the tasks.

Although they found that some neurons were more attuned to car images and others to animal images, they also identified many neurons that were active during both tasks. In fact, these “multitasking” neurons were best at making correct identifications in both categories.

The findings suggest that neurons in the prefrontal cortex have a unique ability to adapt to different tasks, says Miller. In other parts of the brain, earlier research has shown, most neurons are highly specialized. Neurons in the visual cortex, for example, are programmed to respond to very specific inputs, such as a vertical line or a certain color. Some have even been shown to fire only in response to one particular face.

“Our results suggest that the prefrontal cortex is different from the sensory cortex and the motor cortex. It’s highly plastic,” says Miller. “That’s important, because it means the human brain has the capacity to absorb a lot of information.”

The Neuron study focused on two categorization tasks, but Miller hopes to run another study in which the monkeys learn a third task involving some other cognitive function. That could give another hint about how much information our brains can handle, says David Freedman, an assistant professor of neurobiology at the University of Chicago.

“We’re very good at learning dozens, hundreds, or even thousands of categories,” he says. “You wonder if there is some limit, or would these neurons be as flexible as we are as observers?”

Freedman says he would also be interested to see whether the same prefrontal-cortex neurons can multitask between activities that involve different kinds of sensory inputs — for example, a visual task and an auditory task.

Meanwhile, Miller has a study under way that he believes could demonstrate a biological basis for the impaired categorization ability often seen in people with autism. Autistic children often have a hard time understanding that two slightly different objects — for example, a red toothbrush and a blue toothbrush — both belong to the same category.

Miller theorizes that an evolutionarily older part of the brain, known as the basal ganglia, gathers information about new objects, and the prefrontal cortex learns how to categorize them. “The basal ganglia learn the pieces of the puzzle, and the prefrontal cortex puts the pieces together,” he says.

In his current study, Miller is monitoring brain activity in monkeys as they learn a categorization task. He expects to find a sharp peak in prefrontal-cortex activity at the moment when the monkeys learn that certain objects belong to the same category.

Eventually, he hopes to show that in autism, the balance between those two brain regions is thrown off: there is either too much activity in the basal ganglia or not enough in the prefrontal cortex.

(Photo: MIT)




