Thursday, October 7, 2010



The forest in the Amazon Basin produces its own rain. During the wet season, the aerosol particles that serve as nuclei for cloud condensation and precipitation there consist mainly of organic material released by the rainforest itself. This has been demonstrated by scientists from the Max Planck Institute for Chemistry in Mainz, who can now draw conclusions about the mechanisms of this ecosystem: the high organic content indicates that during the rainy season the Amazon Basin acts as a largely self-contained biogeochemical reactor. The results could also help scientists construct more accurate climate models, which could in turn be used to analyse anthropogenic influence on cloud formation and precipitation.

The air above the Amazon rainforest is cleaner than almost anywhere else on earth. This allows climate scientists to investigate cloud condensation under natural conditions. The results of this study may serve as a point of reference for future analyses of anthropogenic influence on cloud evolution and precipitation. Scientists from the Max Planck Institute for Chemistry have now made a valuable contribution to this. Together with international partners, they have, for the first time, used scanning electron microscopy and mass spectrometry during the rainy season to characterise the chemical composition of aerosols, tiny suspended particles in the air above the Amazon rainforest.

Submicron particles, which have a diameter smaller than a thousandth of a millimetre, serve as cloud condensation nuclei. About 85 per cent of their mass consists of secondary organic aerosol components. These form from volatile organic compounds that are released by the forest ecosystem, converted into low-volatility products through photochemical reactions, and then condensed into particles. The remainder of the submicron fraction consists primarily of salts, minerals and soot, transported from the Atlantic and Africa by the winds.

More than 80 % of supermicron particles, which have a diameter of more than a micrometre, are made up of primary biological aerosol material, such as fungal spores, pollen and plant debris, and are directly released into the air from the rainforest. They serve as ice nuclei and are very important for the evolution of precipitation.

The fact that aerosols above the Amazon rainforest are almost entirely of biogenic origin tells the scientists a lot about the ecosystem. "The Brazilian rainforest during the rainy season can be described as a bioreactor", says Ulrich Pöschl, who played a leading part in the study. Water vapour rises from the forest and condenses on aerosols, which are then transported up to a height of 18 kilometres. Water droplets and ice crystals grow in the clouds thus formed until they fall to the ground again as precipitation. Part of the precipitation evaporates, and the rest irrigates the Amazon flora. As they grow, the plants continue to release organic material into the atmosphere, on which new clouds form.

The findings about the natural process of cloud condensation provide scientists with information about what differentiates it from cloud condensation provoked by human activities. "We are already able to say that the number of cloud droplets over the Amazon rainforest is aerosol-limited, which means that it depends on the number of aerosol particles that are released by the ecosystem", explains Ulrich Pöschl. In densely populated areas, and in the Amazon during the dry season, where exhaust fumes from traffic, industry and slash-and-burn agriculture saturate the air with aerosols, it depends instead on the velocity of the updraft that transports the particles upwards.

The scientists plan to continue investigating the atmosphere above the Amazon rainforest in the coming years. For this purpose, they are erecting a 300-metre-high observation tower near Manaus, Brazil, where the current studies were carried out. "There, we want to conduct further long-term and comprehensive observations", says Ulrich Pöschl. Besides the aerosols, the scientists also want to take a closer look at the carbon and nitrogen cycles. "If we can better understand the natural ecosystem of the rainforest, we will be able to describe the influence that we humans exert on the climate more reliably."

(Photo: MPI for Chemistry)

Max Planck Society



Every year, about 30 billion metric tons of carbon dioxide are pumped into the Earth’s atmosphere from power plants, cars and other industrial sources that rely on fossil fuels. Scientists who want to mitigate carbon dioxide’s effects on global climate have started experimenting with storing the gas underground, a process known as carbon sequestration. However, there are still many unknowns surrounding the safety and effectiveness of that strategy.

MIT engineer Angela Belcher is now taking a new approach that would not only remove carbon dioxide from the environment, but also turn it into something useful: solid carbonates that could be used for building construction.

“We want to capture carbon dioxide and not put it underground, but turn it into something that will be stable for hundreds of thousands of years,” says Belcher, the W.M. Keck Professor of Energy.

By genetically engineering ordinary baker’s yeast, Belcher and two of her graduate students, Roberto Barbero and Elizabeth Wood, have created a process that can convert carbon dioxide into carbonates that could be used as building materials. Their process, which has been tested in the lab, can produce about two pounds of carbonate for every pound of carbon dioxide captured. Next, they hope to scale up the process so it could be used in a power plant or industrial factory.
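The roughly two-to-one yield is consistent with simple stoichiometry. A minimal back-of-the-envelope sketch, assuming the product is calcium carbonate (our assumption, suggested by the abalone-shell analogy but not stated in the article):

```python
# Back-of-the-envelope check of the roughly two-to-one yield reported above.
# Assumption (ours, not the article's): the mineralised product is calcium
# carbonate (CaCO3), and one mole of captured CO2 yields at most one mole
# of carbonate.

M_CO2 = 44.01     # molar mass of carbon dioxide, g/mol
M_CACO3 = 100.09  # molar mass of calcium carbonate, g/mol

ratio = M_CACO3 / M_CO2  # mass of carbonate per unit mass of CO2 captured
print(f"{ratio:.2f} lb CaCO3 per lb CO2")  # about 2.27
```

So a theoretical ceiling of about 2.3 pounds of carbonate per pound of CO2, in line with the reported figure of about two pounds.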

To create the yeast-powered process, Belcher drew inspiration from marine animals that build their own rock-solid shells from carbon dioxide and mineral ions dissolved in seawater. (Her 1997 PhD thesis focused on the abalone, a sea snail that produces exceptionally strong shells made of calcium carbonate.)

Funded by the Italian energy company Eni, the new MIT process for turning carbon dioxide into carbonates requires two steps. The first step is capturing carbon dioxide in water. Second, the dissolved carbon dioxide is combined with mineral ions to form solid carbonates.

Yeast don’t normally do any of those reactions on their own, so Belcher and her students had to engineer them to express genes found in organisms such as the abalone. Those genes code for enzymes and other proteins that help move carbon dioxide through the mineralization process. The researchers also used computer modeling and other methods to identify novel proteins that can aid in the mineralization process.

“We’re trying to mimic natural biological processes,” says Belcher. But, “we don’t necessarily want to make the exact same structure that an abalone does.”

Some companies have commercialized a process that captures carbon dioxide and converts it to solid material, but those efforts rely on a chemical process to capture carbon dioxide. The MIT team’s biological system captures carbon dioxide at a higher rate, says Barbero. Another advantage of the biological system is that it requires no heating or cooling, and no toxic chemicals.

Next, the team plans to try scaling up the process to handle the huge volumes of carbon dioxide produced at fossil-fuel-burning power plants. If the process is successfully industrialized, a potential source for the mineral ions needed for the reaction could be the briny water produced as a byproduct of desalination, says Barbero.

(Photo: Patrick Gillooly)




Uranium supplies will not limit the expansion of nuclear power in the U.S. or around the world for the foreseeable future, according to a major new interdisciplinary study produced under the auspices of the MIT Energy Initiative.

The study challenges conventional assumptions about nuclear energy. It suggests that nuclear power using today’s reactor technology with a once-through fuel cycle can play a significant part in displacing the world’s carbon-emitting fossil-fuel plants, and thus help to reduce the potential for global climate change. But determining the best fuel cycle for the next generation of nuclear power plants will require more research, the report concludes.

The report focuses on what is known as the “nuclear fuel cycle” — a concept that encompasses both the kind of fuel used to power a reactor (currently, most of the world’s reactors run on mined uranium that has been enriched, while a few run on plutonium) and what happens to the fuel after it has been used (either stored on site or disposed of underground — a “once-through” cycle — or reprocessed to yield new reactor fuel).

Ernest J. Moniz, director of the MIT Energy Initiative and co-chair of the new study, says the report’s conclusion that uranium supplies will not limit growth of the industry runs contrary to the view that had prevailed for decades — one that guided decisions about which technologies were viable. “The failure to understand the extent of the uranium resource was a very big deal” for determining which fuel cycles were developed and the schedule of their development, he says.

In the United States, the idea of a limited uranium supply prompted decades of planning aimed at ultimately developing “fast spectrum” reactors to breed plutonium. Such systems convert non-fissile forms of uranium — that is, not capable of sustaining a nuclear reaction — into different fissile elements, including plutonium, that could be used to fuel other reactors. Thus, via fuel recycling, they create a much greater supply of reactor fuel than could be obtained by relying only on fuel made directly from processed uranium ore.

But it would take a conventional light-water reactor (LWR) 30 years just to provide the plutonium to start one such breeder reactor, and so far, such systems have not been found to be economically viable.

The new study suggests an alternative: an enriched-uranium-initiated breeder reactor in which additional natural or depleted uranium (a remnant of the enrichment process) is added to the reactor core at the same rate that nuclear materials are consumed. No excess nuclear materials are produced, making this a much simpler and more efficient self-sustaining fuel cycle.

There’s an additional benefit to this concept that would provide a built-in protection against nuclear weapons proliferation: Large amounts of separated plutonium, a nuclear-weapons material, are needed to start the breeder reactors in the traditional fuel cycle. In contrast, the starting uranium fuel could not be used for a weapon. On the downside, however, there are little hard data on whether such a cycle would really be practical and economically competitive.

One of the report’s major conclusions is that more research is needed before such decisions can be made.

One reason the study came to such different conclusions from previous research is because it looked at the various components — from mining to reactor operation to waste disposal — holistically, explains Mujid Kazimi, the TEPCO Professor of Nuclear Engineering at MIT and co-chair of the study. “When you look at the whole thing together, you start seeing things that were not obvious before,” he says.

The report — the latest in a series of broad-based MITEI studies of different aspects of energy — was produced by 10 faculty members, three contributing authors and eight student research assistants, with guidance from a 13-member expert advisory panel from industry, academia and nonprofit organizations.

It was funded by the Electric Power Research Institute, Idaho National Laboratory, Nuclear Energy Institute, Areva, GE-Hitachi, Westinghouse, Energy Solutions, and Nuclear Assurance Corporation.

“There has been very little research on the fuel cycle for about 30 years,” says Charles Forsberg, MIT research scientist in nuclear engineering and executive director of the study. “People hadn’t gone back and looked at the underlying assumptions.”

In this study, Kazimi says, “what we found was that, at any reasonable expected growth of nuclear power over this century, the availability of uranium will not be a constraint.”

The report also concludes that in the United States, significant changes are needed in the planning and implementation of spent-fuel storage and disposal options, including the creation of a new quasi-governmental body to oversee the process. Planning for how to deal with the spent fuel should be closely integrated with studies of the optimal fuel cycle, the authors suggest.

The report strongly recommends interim storage of spent nuclear fuel for a century or so, preferably in regional consolidated sites, as the best option. This allows the fuel to cool and, most importantly, preserves future fuel-cycle choices: the fuel could eventually be sent to a geological repository or reprocessed for energy-resource and/or waste-management benefits. The optimal choice will reflect future conditions, such as the scale of nuclear-power deployment and the state of technology and its costs.

Ultimately, how to treat the spent fuel depends on the outcome of research, Moniz says. “Today, we would argue that we do not know whether spent fuel is a waste product or a resource,” he says. If the world continues to build once-through LWRs, it can be treated as waste and simply disposed of in a geological repository, but if the industry in the U.S. and worldwide switches to self-sustaining uranium breeder reactors, then spent fuel will become an important resource, providing the raw material to be enriched and produce new fuel.

The report also strongly supports the present U.S. government policy of providing loan guarantees for the first several new nuclear plants to be built under newly revised licensing rules. Positive experience with “first-mover” plants — the first of these new U.S. plants built after the current long hiatus — could reduce or eliminate financing premiums for nuclear-plant construction. Once those premiums are eliminated, Forsberg says, “we think nuclear power is economically competitive” with coal power, currently the cheapest option for utilities.

The potential for using nuclear power to reduce greenhouse-gas emissions is significant, the study suggests. In the U.S., nuclear power now represents 70 percent of all zero-carbon electricity production. While no new U.S. plants have been ordered in 30 years, 27 new license applications have been submitted since new regulations were instituted to streamline the process. Meanwhile, China, India and other nations have accelerated construction of new plants.

One key message of the report is that it’s time to really study the underlying basis of nuclear-plant technology — what kind of fuel goes in, what comes out, and what happens to it — before focusing too much money and effort on the engineering details of specific power-plant designs. “You want to start with the things that drive all your choices,” Forsberg says. “People had not looked at these options enough.”

(Photo: Christine Daniloff)




A star golfer misses a critical putt; a brilliant student fails to ace a test; a savvy salesperson blows a key presentation. Each of these people has suffered the same bump in mental processing: They have just choked under pressure.

It’s tempting to dismiss such failures as “just nerves.” But to University of Chicago psychologist Sian Beilock, they are preventable results of information logjams in the brain. By studying how the brain works when we are doing our best — and when we choke — Beilock has formulated practical ideas about how to overcome performance lapses at critical moments.

Beilock’s research is the basis of her new book, Choke: What the Secrets of the Brain Reveal About Getting it Right When You Have To, published Sept. 21 by Simon and Schuster, Free Press.

“Choking is suboptimal performance, not just poor performance. It’s a performance that is inferior to what you can do and have done in the past and occurs when you feel pressure to get everything right,” said Beilock, an associate professor in psychology.

Some of the most spectacular and memorable moments of choking occur in sports when the whole world is watching. Many remember golfer Greg Norman’s choke at the 1996 U.S. Masters. Norman had played brilliantly for the first three days of the tournament, taking a huge lead. But on the final day, his performance took a dive, and he ended the Masters five shots out of first place.

Choking in such cases happens when the polished programs executed by the brains of extremely accomplished athletes go awry. In Choke, Beilock recounts famous examples of these malfunctions in the context of brain science to tell the story of why people choke and what can be done to alleviate it.

Thinking too much about what you are doing, because you are worried about losing the lead (as in Norman’s case) or worrying about failing in general, can lead to “paralysis by analysis.” In a nutshell, paralysis by analysis occurs when people try to control every aspect of what they are doing in an attempt to ensure success. Unfortunately, this increased control can backfire, disrupting what was once a fluid, flawless performance.

“My research team and I have found that highly skilled golfers are more likely to hole a simple 3-foot putt when we give them the tools to stop analyzing their shot, to stop thinking,” Beilock said. “Highly practiced putts run better when you don’t try to control every aspect of performance.”

Even a simple trick of singing helps prevent portions of the brain that might interfere with performance from taking over, Beilock’s research shows. Whistling can help at work. “If the tasks are automatic and you have done them a thousand times in the past, a mild distraction such as whistling can help them run off more smoothly under pressure.”

The brain also can work to sabotage performance in ways other than paralysis by analysis. For instance, pressure-filled situations can deplete a part of the brain’s processing power known as working memory, which is critical to many everyday activities.

Beilock’s work has shown the importance of working memory in helping people perform their best, in academics and in business. Working memory is lodged in the prefrontal cortex and is a sort of mental scratch pad that is temporary storage for information relevant to the task at hand, whether that task is doing a math problem at the board or responding to tough, on-the-spot questions from a client. Talented people often have the most working memory, but when worries creep up, the working memory they normally use to succeed becomes overburdened. People lose the brain power necessary to excel.

One example is the phenomenon of “stereotype threat.” This is when otherwise talented people don’t perform up to their abilities because they are worried about confirming popular cultural myths that contend, for instance, that boys and girls naturally perform differently in math or that a person’s race determines his or her test performance.

In Choke, Beilock describes research demonstrating that high-achieving people underperform when they are worried about confirming a stereotype about the racial group or gender to which they belong. These worries deplete the working memory necessary for success. The perceptions take hold early in schooling and can be either reinforced or abolished by powerful role models.

In one study, researchers gave standardized tests to black and white students, both before and after President Obama was elected. Black test takers performed worse than white test takers before the election. Immediately after Obama’s election, however, their performance improved so much that their scores were nearly equal to those of white test takers. When black students can overcome the worries brought on by stereotypes, because they see someone like President Obama who directly counters myths about racial variation in intelligence, their performance improves.

Beilock and her colleagues also have shown that when first-grade girls believe that boys are better than girls at math, they perform more poorly on math tests. One big source of this belief? The girls’ female teachers. It turns out that elementary school teachers are often highly anxious about their own math abilities, and this anxiety is modeled from teacher to student. When the teachers serve as positive role models in math, their male and female students perform equally well.

Meditation and practice can help

Even when a student is not a member of a stereotyped group, tests can be challenging for the brightest people, who can clutch if anxiety taps out their mental resources. In that instance, relaxation techniques can help.

In tests in her lab, Beilock and her research team gave people with no meditation experience 10 minutes of meditation training before they took a high-stakes test. Students with the meditation preparation scored 87 (a B+), versus 82 (a B-) for those without it. This difference in performance occurred despite the fact that all the students were of equal ability.

Stress can also undermine performance in the world of business, where competing for sales, giving high-stakes presentations or even meeting your boss in the elevator are occasions when choking can squander opportunities.

Practice helps people navigate through these tosses on life’s ocean. But, more importantly, practicing under stress — even a moderate amount — helps a person feel comfortable when they find themselves standing in the line of fire, Beilock said. The experience of having dealt with stress makes those situations seem like old hat. The goal is to close the gap between practice and performance.

A person also can overcome anxiety by thinking about what to say, not what not to say, said Beilock, who added that staying positive is always a good idea.

“Think about the journey, not the outcome,” Beilock advised. “Remind yourself that you have the background to succeed and that you are in control of the situation. This can be the confidence boost you need to ace your pitch or to succeed in other ways when facing life’s challenges.”

(Photo: Jason Smith)

University of Chicago



Neuroscientists at the University of Bristol have discovered a new form of synaptic interaction in the brain involved in memory function which could open up the possibility of a new treatment for Alzheimer’s disease.

Synaptic plasticity, one of the neurochemical foundations of learning and memory, is predominantly controlled by NMDA receptors. One of the hallmarks of Alzheimer’s disease is a neurological dysfunction caused by nerve cell damage, which in turn is caused by the over-activation of NMDA receptors.

Currently, clinically available treatments for the disease are drugs that stimulate the actions of acetylcholine – a neurotransmitter in the nervous system – or inhibit the function of NMDA receptors.

In a study published in Nature Neuroscience, a team led by Professor Kei Cho from Bristol’s Faculty of Medicine and Dentistry details how it discovered a novel interaction between NMDA receptors and muscarinic acetylcholine receptors (mAChRs), whereby activating the latter can depress NMDA receptor function. This form of synaptic plasticity was located in the hippocampus, a centre of learning and memory in the brain.

Their findings suggest that this molecular mechanism could be targeted in the development of new drugs for the treatment of Alzheimer’s disease: stimulating mAChRs would reduce the strength of synaptic transmission mediated by the activation of NMDA receptors. This could help counter the excitotoxicity caused by NMDA receptors, thereby potentially reducing the widespread cellular damage seen in the brains of people with Alzheimer’s disease.

However, Prof Cho cautioned that while there are several mAChR agonists available for research purposes, they would require further evaluation using various Alzheimer's disease models, including clinical trials, before they could be readily translated into a usable treatment.

Professor Graham Collingridge FRS, Director of the MRC Centre for Synaptic Plasticity and a member of the research team, said: “Basic research such as this is vital if we want to understand the causes of Alzheimer’s disease and other devastating brain disorders,” yet added a note of caution, saying “we are still a long way from finding a cure; more investment in basic research, not less as advocated by Government, is desperately needed.”

Professor Stafford Lightman, co-founder of Bristol Neuroscience, added: "This is yet another example of how Bristol Neuroscience has brought together teams from different faculties within the University of Bristol to tackle some of the most important challenges facing society."

(Photo: Bristol U.)

University of Bristol


Most hip fractures due to osteoporosis follow a pattern: the patient falls, and the bones around the hip joint shatter into pieces. But 2 to 3 years ago, orthopedic surgeons began seeing an increase in unusual breaks that snapped the thighbone in two, often with no warning.

Such “atypical femur fractures” are associated with long-term use of a widely prescribed class of drugs for osteoporosis, an expert panel led by Elizabeth Shane, MD, professor of medicine at the College of Physicians and Surgeons, has now found. In the most comprehensive scientific report to date on the topic, the task force, convened by the American Society of Bone and Mineral Research, reviewed all available case reports of “atypical femur fractures” in the United States and found that 94 percent of patients (291 out of 310 cases) had taken the drugs, most for more than five years.

The finding, as well as other data from a large U.S. healthcare system, indicates a strong association between the drugs, called bisphosphonates, and atypical fractures, Dr. Shane says, though she emphasizes that the fractures are rare.

“Less than 1 percent of hip and thigh fractures are atypical, and millions of people have taken bisphosphonates and have not had these fractures,” Dr. Shane says. “We don’t want people to be afraid and stop taking their medications. For people with osteoporosis, the drugs’ benefits in preventing common, but equally devastating, fractures far outweigh the risk of a rare, atypical one.”

Health professionals and patients should be alert to the possibility of atypical femur fractures, though, and the ASBMR task force calls for changes in the drugs’ labels to raise awareness about the fractures and list their warning signs.

“A dull or aching pain in the groin or thigh, especially in a patient who has taken bisphosphonates for five years or more, should raise concern for an impending atypical femur fracture,” Dr. Shane says. Among the cases reviewed by the task force, more than half of patients reported groin or thigh pain for weeks or months before the fracture occurred. More than a quarter of patients with one break also later suffered a fracture in their other leg. Therefore, it is crucial for physicians to X-ray both femurs when a patient breaks a leg or shows signs of an impending fracture.

The task force also is calling for a change in the labeling for bisphosphonates to reflect the association with atypical fractures and their warning signs, better diagnostic codes to improve the quality of case reports, and an international registry of patients to track cases and facilitate future research.

Though the association between bisphosphonates and atypical fractures is strong, Dr. Shane says it is not certain that the drugs directly cause the breaks. The drugs may make the bone more brittle with long-term use, or other drugs taken concurrently may contribute in some patients. More research is needed to determine what causes the breaks and who is at highest risk.

Columbia University Medical Center



Plants picked up to 150 years ago by Victorian collectors and held by the million in herbarium collections across the world could become a powerful - and much needed - new source of data for studying climate change, according to research led by the University of East Anglia and published in the British Ecological Society's Journal of Ecology.

The scarcity of reliable long-term data on phenology – the study of natural climate-driven events such as the timing of trees coming into leaf or plants flowering each spring – has hindered scientists' understanding of how species respond to climate change.

But new research by a team of ecologists from UEA, the University of Kent, the University of Sussex and the Royal Botanic Gardens, Kew shows that plants pressed up to 150 years ago tell the same story as field-based observations of flowering made much more recently: warmer springs result in earlier flowering.

The team examined 77 specimens of the early spider orchid (Ophrys sphegodes) collected between 1848 and 1958 and held at the Royal Botanic Gardens, Kew and the Natural History Museum in London. Because each specimen contains details of when and where it was picked, the researchers were able to match this with Meteorological Office records to examine how mean spring temperatures affected the orchids' flowering.

They then compared these data with field observations of peak flowering of the same orchid species in the Castle Hill National Nature Reserve, East Sussex, from 1975 to 2006, and found that the response of flowering time to temperature was identical in the herbarium specimens and the field data. In both the pressed plants and the field observations, the orchid flowered six days earlier for every 1 °C rise in mean spring temperature.
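The relationship the team found is linear, so it can be sketched in a few lines. This is an illustrative model only: the six-days-per-degree slope comes from the study, but the function name and example anomaly are ours.

```python
# Minimal sketch of the linear phenology relationship reported above:
# Ophrys sphegodes flowers six days earlier for every 1 °C rise in mean
# spring temperature. The slope is from the study; everything else here
# is illustrative.

DAYS_EARLIER_PER_DEGREE = 6.0  # slope found in both herbarium and field data

def flowering_shift_days(spring_temp_anomaly_c: float) -> float:
    """Days by which peak flowering moves relative to a baseline spring
    (negative = earlier) for a given mean spring temperature anomaly in °C."""
    return -DAYS_EARLIER_PER_DEGREE * spring_temp_anomaly_c

# A spring 1.5 °C warmer than baseline: flowering about 9 days earlier.
print(flowering_shift_days(1.5))  # -9.0
```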

The results are the first direct proof that pressed plants in herbarium collections can be used to study relationships between phenology and climate change when field-based data are not available, as is almost always the case.

According to the study's lead author Karen Robbirt, a PhD student in UEA’s School of Biological Sciences: “The results of our study are exciting because the flowering response to spring temperature was so strikingly close in the two independent sources of data. This suggests that pressed plant collections may provide valuable additional information for climate-change studies.

“We found that the flowering response to spring temperature has remained constant, despite the accelerated increase in temperatures since the 1970s. This gives us some confidence in our ability to predict the effects of further warming on flowering times.”

The study opens up important new uses for the 2.5 billion plant and animal specimens held in natural history collections in museums and herbaria. Some specimens date back to the time of Linnaeus (who devised our system of naming plants and animals) 250 years ago.

Co-author Prof Anthony Davy, of the School of Biological Sciences, said: “There is an enormous wealth of untapped information locked within our museums and herbaria that can contribute to our ability to predict the effects of future climate change on many plant species. Importantly it may well be possible to extend similar principles to museum collections of insects and animals.”

Phenology – or the timing of natural events – is an important means of studying the impact of climate change on plants and animals.

“Recent climate change has undoubtedly affected the timing of development and seasonal events in many groups of organisms. Understanding the effects of recent climate change is a vital step towards predicting the consequences of future change. But only by elucidating the responses of individual species will we be able to predict the potentially disruptive effects of accelerating climate change on species interactions,” said Prof Davy.

Detecting phenological trends in relation to long-term climate change is not straightforward and relies on scarce long-term studies. “We need information collected over a long period to enable us confidently to identify trends that could be due to climate change. Unfortunately most field studies are relatively brief, so there are very few long-term field data available,” explained Prof Davy.

(Photo: K. Robbirt)

University of East Anglia



Shopping on the internet or working from home could be increasing carbon emissions rather than helping to reduce them, a new report claims.

The research reveals that people who shop online must order more than 25 items at a time; otherwise the environmental impact is likely to be worse than that of traditional shopping.

It also highlights that working from home can increase home energy use by as much as 30 per cent, and can lead to people moving further from the workplace, stretching urban sprawl and increasing pollution.

The Institution of Engineering and Technology (IET) report looks at the ‘rebound’ effects of activities that are commonly thought to be green. Rebound effects are the unintended consequences of policies designed to reduce emissions which, on closer analysis, can simply shift the production of emissions elsewhere or lessen the positive impact.

Professor Phil Blythe, Chair of the IET Transport Policy Panel and Professor of Intelligent Transport Systems at Newcastle University, which produced the report, said: “We hear a lot about the environmental benefits achieved as a result of working from home. However, on closer inspection it does appear that any environmental benefits are marginal.”

The report highlights that buying goods online can provide carbon savings, but only if the conditions are right. The study found that environmental savings can be achieved if online shopping replaces 3.5 traditional shopping trips, if 25 orders are delivered at the same time, or if the distance travelled to where the purchase would otherwise be made is more than 50 km.

Shopping online does not offer net environmental benefits unless these criteria are met.
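The report's break-even conditions above can be expressed as a simple check. The thresholds come from the text; the function itself is a hypothetical helper written for illustration, not something from the IET report.

```python
# Illustrative check of the report's break-even criteria for online shopping.
# Thresholds (25 orders, 3.5 trips, 50 km) are taken from the report as quoted;
# the function name and interface are assumptions for this sketch.

def online_shopping_saves(orders_per_delivery: int,
                          trips_replaced: float,
                          shop_distance_km: float) -> bool:
    """Return True if any one of the break-even conditions is met:
    - the delivery bundles at least 25 orders, or
    - it replaces at least 3.5 traditional shopping trips, or
    - the shop would have been more than 50 km away.
    """
    return (orders_per_delivery >= 25
            or trips_replaced >= 3.5
            or shop_distance_km > 50)

print(online_shopping_saves(5, 1.0, 10))   # single small order, nearby shop: False
print(online_shopping_saves(30, 1.0, 10))  # bulk delivery of 30 orders: True
```

Because the conditions are alternatives, meeting any single one is enough for a net saving under the report's analysis.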

The report also highlights that the top 20 per cent of British households spend almost nine times as much on transport costs (such as air travel) as the bottom 20 per cent.

Professor Phil Blythe says: “Our report highlights two important messages for policy makers. Firstly, climate change is a real threat to our planet, so we must not get overwhelmed by the task and use rebound effects as an excuse not to act.

“Secondly, policy makers must do their homework to ensure that rebound effects do not negate the positive benefits of their policy initiatives and simply move carbon emissions from one sector to another.”

(Photo: Newcastle U.)

Newcastle University



After almost six months of operation, experiments at the LHC are starting to see signs of potentially new and interesting effects. In results announced by the CMS collaboration in Geneva, correlations have been observed between particles produced in 7 TeV proton-proton collisions.

Having re-measured known physics in time for the summer conferences, the LHC experiments are now starting to probe new ground. ATLAS recently extended limits on excited quarks, while the LHCb detector has demonstrated its capacity by observing atom-like particles built from beauty quarks and antiquarks.

In some of the LHC’s proton-proton collisions, a hundred or more particles can be produced. The CMS collaboration has studied such collisions by measuring angular correlations between the particles as they fly away from the point of impact, and this has revealed that some of the particles are intimately linked in a way not seen before in proton collisions.
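The kind of measurement described above can be sketched schematically: for each pair of particles in an event, one computes the separation in pseudorapidity (η) and azimuthal angle (φ) and histograms the results. The sketch below is a generic illustration of two-particle angular correlations, not the CMS collaboration's actual analysis code.

```python
import math
from itertools import combinations

# Schematic two-particle angular correlation: for every pair of particles in
# an event, compute the separation in pseudorapidity (eta) and azimuthal
# angle (phi). A generic illustration, not the CMS analysis itself.

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into (-pi, pi]."""
    d = (phi1 - phi2) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def pair_separations(particles):
    """particles: list of (eta, phi) tuples; returns (d_eta, d_phi) per pair."""
    return [(p1[0] - p2[0], delta_phi(p1[1], p2[1]))
            for p1, p2 in combinations(particles, 2)]

# Example: three particles from one (made-up) event
event = [(0.5, 0.1), (-1.2, 3.0), (2.0, -2.5)]
for deta, dphi in pair_separations(event):
    print(f"delta_eta={deta:+.2f}, delta_phi={dphi:+.2f}")
```

In a real analysis these pair separations are accumulated over many events into a two-dimensional histogram in (Δη, Δφ); the CMS result concerns structure seen in such distributions.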

The effect is subtle, and many detailed cross-checks and studies have been performed to ensure that it is real. It bears some similarity to effects seen in collisions of nuclei at the RHIC facility at the US Brookhaven National Laboratory, which have been interpreted as possibly due to the creation of hot, dense matter in the collisions. Nevertheless, the CMS collaboration has stressed that several potential explanations remain to be considered. The collaboration’s presentation to the physics community at CERN today therefore focussed on the experimental evidence, in the interest of fostering a broader discussion on the subject.

“Now we need more data to analyse fully what’s going on, and to take our first steps into the vast landscape of new physics we hope the LHC will open up,” said CMS Spokesperson Guido Tonelli.

UK CMS Spokesperson Professor Geoff Hall from Imperial College London added: “It is encouraging to see our complex detector showing up new subtle effects such as these correlations in proton-proton collisions soon after startup at high energies in March 2010. It demonstrates, at a very early stage of the experiment, the power of the detectors and the analysis techniques which have been constructed over almost twenty years. The UK has contributed substantially to the tracking detector at the heart of CMS which precisely measures charged particle trajectories, and has made this result possible. Each of about ten points on every track is measured with a spatial resolution of about 10-20 micrometres using silicon sensors and high speed electronic readout.”

(Photo: Copyright CMS/CERN)

Science and Technology Facilities Council




Selected Science News. Copyright 2008. All Rights Reserved.