Saturday, October 30, 2010


Discovery of a molecular switch that turns off the natural process of skin pigmentation may lead to a novel way of protecting the skin – activating the tanning process without exposure to cancer-causing UV radiation. In their report in the journal Genes & Development, researchers from the Massachusetts General Hospital (MGH) Cutaneous Biology Research Center (CBRC) describe how blocking the action of this switch – an enzyme called PDE-4D3 – in the skin of mice led to a significant increase in melanin production.

"The primary goal of inducing melanin production in human skin would be prevention of skin cancer, since all the common forms are known to be associated with UV exposure, " explains David Fisher, MD, PhD, director of the hospital's Department of Dermatology and an investigator at the MGH CBRC, who led the study. "Not only would increased melanin directly block UV radiation, but an alternative way to activate the tanning response could help dissuade people from sun tanning or indoor tanning, both of which are known to raise skin cancer risk."

In 2006 Fisher's group showed that the metabolic pathway leading to UV-induced pigmentation is controlled by cyclic AMP (cAMP), a molecule known to regulate many important cellular processes by carrying messages from the cell surface to internal target molecules. Using a strain of transgenic mice with red hair and melanocytes in their epidermis – common mice have none of these melanin-producing cells in the outer skin layer – they found that inducing cAMP production in the animals' skin led to significant pigmentation. But since the drug used in that study cannot penetrate human skin, they needed to investigate an alternative approach.

Because most drugs act by blocking rather than stimulating their target molecules, better defining the pathway leading from UV exposure to melanin production could identify a step limiting melanin expression that, if suppressed, would increase production of the pigment. The strength and duration of the signals carried by cAMP are controlled by PDE enzymes, which break down the molecule after its message is delivered. Detailed analysis of the melanin expression pathway identified PDE-4D3 as the regulator of cAMP activity in melanocytes. The transcription factor activated by cAMP induces production of both melanin and PDE-4D3, and the enzyme in turn modulates the pigmentation process by breaking down cAMP.

The researchers confirmed the role of PDE-4D3 in controlling melanin expression by applying several agents that block PDE production to the skin of the transgenic mice with epidermal melanocytes. After five days of treatment, the animals' skin had darkened significantly, while treatment of control mice with no epidermal melanocytes produced no effect.

"Although PDE enzymes degrade cAMP within all cells, different members of this enzyme family are active in different types of cells," Fisher explains. "We showed that PDE-4D3 is particularly important within melanocytes, and while the enzyme may have a role in other cells, a blocking drug that is applied directly to the skin would probably have limited effects in other tissues." Additional research is needed to identify drugs that penetrate human skin and safely block PDE-4D3, he notes, and his team has already starting searching for such agents.

Massachusetts General Hospital



How do astronomers weigh a star that's trillions of miles away and way too big to fit on a bathroom scale? In most cases they can't, although they can get a best estimate using computer models of stellar structure.

New work by astrophysicist David Kipping shows that in special cases, we can weigh a star directly. If the star has a planet, and that planet has a moon, and both of them cross in front of their star, then we can measure their sizes and orbits to learn about the star.

"I often get asked how astronomers weigh stars. We've just added a new technique to our toolbox for that purpose," said Kipping, a predoctoral fellow at the Harvard-Smithsonian Center for Astrophysics.

Astronomers have found more than 90 planets that cross in front of, or transit, their stars. By measuring the amount of starlight that's blocked, they can calculate how big the planet is relative to the star. But they can't know exactly how big the planet is unless they know the actual size of the star. Computer models give a very good estimate but in science, real measurements are best.

Kipping realized that if a transiting planet has a moon big enough for us to see (by also blocking starlight), then the planet-moon-star system could be measured in a way that lets us calculate exactly how large and massive all three bodies are.

"Basically, we measure the orbits of the planet around the star and the moon around the planet. Then through Kepler's Laws of Motion, it's possible to calculate the mass of the star," explained Kipping.

The process isn't easy and requires several steps. By measuring how the star's light dims when planet and moon transit, astronomers learn three key numbers: 1) the orbital periods of the moon and planet, 2) the size of their orbits relative to the star, and 3) the size of planet and moon relative to the star.

Plugging those numbers into Kepler's Third Law yields the densities of the star and planet. Since density is mass divided by volume, the relative densities and relative sizes give the relative masses. Finally, scientists measure the star's wobble due to the planet's gravitational tug, known as the radial velocity. Combining the measured velocity with the relative masses, they can calculate the mass of the star directly.
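The first step above can be sketched in a few lines: rearranging Kepler's Third Law lets the stellar density follow directly from two transit observables, the planet's orbital period and the scaled semi-major axis a/R*. The numbers below are purely illustrative (roughly Sun-Earth values), not from any real system in the study.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def stellar_density(period_s, a_over_rstar):
    """Kepler's Third Law rearranged: rho_star = (3*pi / (G * P**2)) * (a/R_star)**3."""
    return (3.0 * math.pi / (G * period_s ** 2)) * a_over_rstar ** 3

# Illustrative, roughly Sun-Earth numbers: P = 1 year, a/R_star ~ 215
rho_star = stellar_density(365.25 * 86400, 215.0)  # ~1.4e3 kg/m^3, near the Sun's mean density
```

The same rearrangement applied to the moon's orbit around the planet gives the planet's density, which is why the moon is essential to the method.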

"If there was no moon, this whole exercise would be impossible," stated Kipping. "No moon means we can't work out the density of the planet, so the whole thing grinds to a halt."

Kipping hasn't put his method into practice yet, since no star is known to have both a planet and moon that transit. However, NASA's Kepler spacecraft should discover several such systems.

"When they're found, we'll be ready to weigh them," said Kipping.

(Photo: David A. Aguilar (CfA))



If you're having a bad day, you may want to avoid listening to commercials for Lululemon or Coca-Cola – or for any retailer or product whose name bears a similarly repetitive phonetic sound.

University of Alberta marketing professor Jennifer Argo recently published a study in the Journal of Marketing indicating that hearing brand names containing these types of repetitive sounds can influence our mood, and thus our decisions about whether to frequent an establishment or buy its products.

Argo, along with her colleagues, conducted a number of studies testing brand names, including one in which identical samples of ice cream were given two different names: one containing a repetitive sound and one without. The researchers introduced the identical products to test subjects one at a time, citing the name of each sample aloud during the product description. Despite the same ice cream being used, the majority of respondents chose the brand with the repetitive-sounding name.

In other studies, which gave people choices over everything from dessert types in one to cell phone options in another, the researchers found similar results in the respondents' selections. In these cases, subjects chose based on an affective (emotional) response. Argo says that an audible repetition needs to be present – findings that are key for marketers, advertisers and store managers.

"Based on the results, it would say that tv and radio advertisements are critical to this strategy," Argo said. "But the employees are also critical. Before customers order, a server can remind the name of the restaurant they're at. Sales people can talk with customers and mention the brand name."

In all six trials Argo's group conducted, the invented brand names differed only minutely between variations, such as "zanozan" versus "zanovum". Argo noted that, in all cases, such small variations, even of a single letter, had a huge impact on the person's choice and how they responded.

Alas, too much sound repetition can also be a bad thing, as can developing a name that does not follow a natural linguistic sound, for example, "ranthfanth". In these cases, she says, respondents displayed negative affect when these conditions were present.

"You can't deviate too much from our language, otherwise it will backfire on you," said Argo.

Argo, whose studies often deal with subjects related to consumer awareness, notes that there is one loophole to the brand/sound strategy: the device is less effective if the person is already positively affected. Argo's advice for someone practising retail therapy would be to "plug your ears; don't let anyone talk to you." Overall, Argo notes that people need to be aware of the influence that a brand name may have on mood and choice and that marketing strategists have gone to great lengths in choosing the moniker for their product.

"The companies have spent millions of dollars choosing their brands and their brand names and they've been picked explicitly to have an influence on consumers," she said. "We show that it can get you at the affective level."

University of Alberta

Friday, October 29, 2010


In a laboratory at Ohio State University, an ongoing experiment is studying why batteries lose their ability to hold a charge as they age -- specifically lithium-ion batteries, which have generated a lot of buzz for their potential to power the electric cars of the future.
(NC&T/API) Preliminary results presented at the AVS 57th International Symposium & Exhibition suggest that the irreversible changes inside a dead battery start at the nanoscale.

Yann Guezennec and Giorgio Rizzoni of OSU developed new experimental facilities and procedures to charge and discharge commercially-available Li-ion batteries thousands of times over many months in a variety of conditions designed to mimic how these batteries are actually used by hybrid and all-electric vehicles. Some of the batteries were run in hot temperatures like those in Arizona; others in colder conditions similar to those in Alaska.

To understand the results of this testing, Bharat Bhushan, Suresh Babu, and Lei Raymond Cao studied the materials inside of the batteries to help determine how this aging manifests itself in the structure of the electrode materials.

When the batteries died, the scientists dissected them and used a technique called infrared thermal imaging to search for problem areas in each electrode, a 1.5-meter-long strip of metal tape coated with oxide and rolled up like a jelly roll. They then took a closer look at these problem areas using a variety of techniques with different length-scale resolutions (e.g. scanning electron microscopy, atomic force microscopy, scanning spreading resistance microscopy, Kelvin probe microscopy, transmission electron microscopy) and discovered that the finely structured nanomaterials on these electrodes that allow the battery to rapidly charge and discharge had coarsened in size.

Additional studies of the aged batteries, using neutron depth profiling, revealed that a fraction of the lithium that is responsible, in ion form, for shuttling electric charge between electrodes during charging and discharging, was no longer available for charge transfer, but was irreversibly lost from the cathode to the anode.

"We can clearly see that an aged sample versus and unaged sample has much lower lithium concentration in the cathode," said Rizzoni, director of the Center for Automotive Research at OSU. "It has essentially combined with anode material in an irreversible way."

This research is being performed by the Center for Automotive Research at OSU in collaboration with Oak Ridge National Laboratory and the National Institute of Standards and Technology.

The researchers suspect, but cannot yet prove, that the coarsening of the cathode may be behind this loss of lithium. If this theory turns out to be correct, it could point battery manufacturers in the right direction for making durable batteries with longer lifetimes.

American Institute of Physics


Lithium-ion batteries have become ubiquitous in today's consumer electronics -- powering our laptops, phones, and iPods. Research funded by DARPA is pushing the limits of this technology and trying to create some of the tiniest batteries on Earth, the largest of which would be no bigger than a grain of sand.

These tiny energy storage devices could one day be used to power the electronics and mechanical components of tiny micro- to nano-scale devices.

Jane Chang, an engineer at the University of California, Los Angeles, is designing one component of these batteries: the electrolyte that allows charge to flow between electrodes. She presented her results at the AVS 57th International Symposium & Exhibition.

"We're trying to achieve the same power densities, the same energy densities as traditional lithium ion batteries, but we need to make the footprint much smaller," says Chang.

To reach this goal, Chang is thinking in three dimensions in collaboration with Bruce Dunn and other researchers at UCLA. She's coating well-ordered micro-pillars or nano-wires -- fabricated to maximize the surface-to-volume ratio, and thus the potential energy density -- with electrolyte, the conductive material that allows current to flow in a battery.

Using atomic layer deposition -- a slow but precise process that allows layers of material only an atom thick to be sprayed on a surface -- she has successfully applied the solid electrolyte lithium aluminosilicate to these nanomaterials.

The research is still in its early stages: other components of these 3D microbatteries, such as the electrodes, have also been developed, but they have yet to be assembled and integrated to make a functioning battery.

American Institute of Physics


Much of medicine is based on what is considered the strongest possible evidence: the placebo-controlled trial. A paper published in the October 19 issue of Annals of Internal Medicine – entitled "What's In Placebos: Who Knows?" – calls into question this foundation upon which much of medicine rests, by showing that there is no standard behind the standard – no standard for the placebo.

The thinking behind relying on placebo-controlled trials is this: to be sure a treatment itself is effective, one needs to compare people whose only difference is whether or not they are taking the drug. Both groups should equally think they are on the drug – to protect against effects of factors like expectation. So study participants are allocated "randomly" to the drug or a "placebo" – a pill that might be mistaken for the active drug but is inert.

But, according to the paper's author, Beatrice Golomb, MD, PhD, associate professor of medicine at the University of California, San Diego School of Medicine, this standard has a fundamental problem, "there isn't anything actually known to be physiologically inert. On top of that, there are no regulations about what goes into placebos, and what is in them is often determined by the makers of the drug being studied, who have a vested interest in the outcome. And there has been no expectation that placebos' composition be disclosed. At least then readers of the study might make up their own mind about whether the ingredients in the placebo might affect the interpretation of the study."

Golomb pointed out these limitations to the placebo in a pair of letters to the journal Nature 15 years ago.

"A positive or negative effect of the placebo can lead to the misleading appearance of a negative or positive effect of the drug," she said. "And an effect in the same direction as the drug can lead a true effect of the drug to be lost. These concerns aren't just theoretical. Where the composition has been disclosed, the ingredients of the placebo have in some instances had a likely impact on the result of the study – in either direction (obscuring a real effect, or creating a spurious one). In the cases we know about, this is not because of any willful manipulation, but because it can in fact be difficult to come up with a placebo that does not have some kind of problem."

Since 15 years have elapsed, the situation might have improved. Therefore, Golomb and her colleagues analyzed just how often randomized trials published in the past two years in each of the top four general medical journals actually disclosed the makeup of placebos.

The answer is not reassuring, according to the researchers, who found that the placebo ingredients for pills were disclosed in fewer than 10 percent of cases. (The nature of the "control" was significantly more likely to be stated for other types of treatments – like injections, acupuncture, or surgery – where people are more likely to question what "placebo" actually means.)

"How often study results are affected by what's in the placebo is hard to say – because, as this study showed, most of the time we have no idea what the placebo is," Golomb concluded.

University of California, San Diego

Thursday, October 28, 2010



The European Space Agency (ESA) Rosetta spacecraft recently beamed back to Earth a dramatic set of close-up images as it flew past the asteroid Lutetia, on its way to a comet rendezvous in 2014. But even before Rosetta made its close encounter with the 100-kilometer sized asteroid, astronomers using three of the world’s largest telescopes, including the W. M. Keck Observatory, were busy making their own assessment of the asteroid’s shape and size, as well as searching for its satellites. Their pre-flyby images are being compared this week with those from Rosetta at a meeting of the Division for Planetary Sciences of the American Astronomical Society in Pasadena, California, revealing that the ground-based images are amazingly accurate.

These telescopes all use adaptive optics (AO), which removes the blurring caused by the Earth’s atmosphere. Using something like a fun-house mirror in reverse, AO allows clear pictures to be made, from Earth’s surface, of distant astronomical objects that were impossible to see previously. “Adaptive optics has set in motion an astronomical revolution, bringing new worlds into better view, ranging from asteroids that were previously unresolved pinpoints of light, to the discovery of new planets in other solar systems,” said Dr. William Merline of Southwest Research Institute (SwRI) in Boulder, Colorado, lead scientist of the international team that made the observations, funded by NASA and the National Science Foundation. Two of the telescopes are atop Mauna Kea in Hawaii: the W.M. Keck telescope with its 10-meter mirror and the Gemini telescope, equipped with an 8-meter mirror. The 8-meter Very Large Telescope (VLT) of the European Southern Observatory in Chile was also used.

“We carefully evaluated the size and shape of Lutetia, and pinned down the orientation of its spin pole using telescopes on Earth, prior to the flyby,” reported Dr. Jack Drummond, an astronomer at Starfire Optical Range in Albuquerque, New Mexico, where AO was first developed in the early 1990s and can be considered the cradle of adaptive optics. Drummond is an expert in turning AO images into models of asteroids, detailing their shapes and sizes. He is the lead author on the first of two papers predicting the appearance of Lutetia, which are now in press in the journal Astronomy and Astrophysics. Drummond adds, “after the many years developing these techniques at Starfire, it is gratifying to see how well they work when put to this kind of test.”

Rosetta’s Lutetia encounter provided a rare chance to combine the strengths of spacecraft and ground-based approaches to understand the complex shapes and elusive sizes of asteroids. For decades, astronomers have watched Lutetia change brightness as it rotates. Before adaptive optics, such “lightcurve” studies were the only way astronomers could infer the shape of a body like Lutetia. “A sphere will have a flat lightcurve, an egg will have a lightcurve that goes up and down smoothly like an ocean swell, but an irregular potato-shape will look like your EKG on a bad day!”, says Dr. Al Conrad of Keck Observatory, where many of the observations were made.

While lightcurves provide approximate shape, they cannot provide fine detail nor absolute scale. “AO has dramatically improved our ability to determine asteroid shapes from the ground by providing both of these missing ingredients,” said team member Dr. Benoit Carry of Paris Observatory, who led the efforts to produce the “shape model”, derived by combining AO images with decades of lightcurve observations taken on smaller telescopes. “We dubbed this new technique KOALA, for Knitted Occultation, Adaptive Optics, and Lightcurve Analysis. With it, we can make much improved use of our own data and of previous studies,” adds Carry, who leads the second paper and worked to develop KOALA in collaboration with Dr. Mikko Kaasalainen of Tampere University, Finland. The results were provided to the Rosetta mission teams ahead of the flyby to assist in planning.

Such ground-based imaging can help prepare for spacecraft flybys in other ways as well. “We determined that the spin pole of the asteroid was highly inclined, and almost in its orbital plane, much like that of Uranus,” says Carry. “We predicted that Rosetta would see only the northern hemisphere and that the southern hemisphere would be dark and cold,” he said --- predictions borne out by the flyby data.

“This encounter enables us to verify, validate, and calibrate our method of combining AO data with lightcurve studies,” noted Merline. “Our goal is to apply this technique to many other asteroids to find their sizes and shapes. The validation from these flyby images gives us confidence that we can do so. We can observe about 200 asteroids in this manner now, and that number will increase as larger telescopes are built,” he adds.

“The tremendous power of the Keck telescope, when coupled with AO, is demonstrated superbly in the Lutetia data,” says Conrad. Because asteroids have no active geology, such as volcanoes or tectonics, their shapes result from collision with other, smaller, asteroids. “Details of shape, such as flat facets or apparent concavities, help reveal the history of asteroid collisions,” he adds. The importance of collisions can be seen in the crater-marked surface of the Moon, reminding us that asteroids continue to pose a threat to Earth.

The AO images from large telescopes, used in concert with lightcurves and the spacecraft images, go beyond validation, however. By combining all data, Lutetia’s shape could be accurately determined, allowing astronomers to compute its volume. Moreover, measurements of the gravitational tug from Lutetia on the spacecraft as it flew past the asteroid will yield a very accurate mass. Mass and volume, taken together, will provide the density of Lutetia. Density is the concept of how much something weighs for its size. For example, two wrapped birthday gifts of the same size, one of Styrofoam™ and one of lead, would invoke very different speculations from a recipient. Asteroid compositions could potentially span the full range from ice to rock to iron. Different compositions of an asteroid could be distinguished by different densities. Astronomers use this “birthday present” approach, combined with information from studies of the brightness and color of the surface, to infer the type of rock (or ice or metal) that makes up an asteroid.
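The "birthday present" arithmetic above is straightforward: density is mass divided by volume, with the volume coming from the shape model and the mass from tracking the spacecraft's deflection. A minimal sketch, using purely illustrative numbers for a 100-kilometer body (not the actual flyby measurements):

```python
import math

def mean_density(mass_kg, diameter_m):
    """Density = mass / volume, for a sphere of the given diameter."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    return mass_kg / volume

# Purely illustrative: a 100-km body weighed at 1.7e18 kg
rho = mean_density(1.7e18, 100e3)
# Compare against rough benchmarks: ice ~1000, rock ~3000, iron ~8000 kg/m^3
```

A result near 3000 kg/m^3 would point toward a rocky rather than icy or metallic composition, which is exactly the discrimination the astronomers are after.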

Lutetia was first categorized as an M-type asteroid in the 1970s by team member Dr. Clark Chapman, also of SwRI. “Some people think that M-type asteroids must be metallic, but it has always been known that some might be rocky, like the so-called enstatite chondrite meteorites,” says Chapman. “While Lutetia is not metallic, its composition is still a mystery and it may even be unlike anything we have in our meteorite collections. We hope that getting an accurate density will help us answer this question,” he says. The spacecraft data will be combined with past and future ground-based observations, and the secret of Lutetia’s composition may be revealed in the coming months.

In addition to studying its size and shape, the scientists also searched for satellites of Lutetia, but none were found. “We wanted to know if satellites were present because they would at the same time provide new objects for study during the flyby, but would also pose a risk to the spacecraft that could be avoided with prior knowledge,” says Merline.

(Photo: Dr. Benoit Carry (Paris Observatory), Dr. Al Conrad (Keck Observatory), and Dr. William Merline (Southwest Research Institute))

W. M. Keck Observatory



A team of Rice University and Lockheed Martin scientists has discovered a way to use simple silicon to radically increase the capacity of lithium-ion batteries.

Sibani Lisa Biswal, an assistant professor of chemical and biomolecular engineering, revealed how she, colleague Michael Wong, a professor of chemical and biomolecular engineering and of chemistry, and Steven Sinsabaugh, a Lockheed Martin Fellow, are enhancing the inherent ability of silicon to absorb lithium ions.

Their work was introduced at Rice's Buckyball Discovery Conference, part of a yearlong celebration of the 25th anniversary of the Nobel Prize-winning discovery of the buckminsterfullerene, or carbon 60, molecule. It could become a key component for electric car batteries and large-capacity energy storage, they said.

"The anode, or negative, side of today's batteries is made of graphite, which works. It's everywhere," Wong said. "But it's maxed out. You can't stuff any more lithium into graphite than we already have."

Silicon has the highest theoretical capacity of any material for storing lithium, but there's a serious drawback to its use. "It can sop up a lot of lithium, about 10 times more than carbon, which seems fantastic," Wong said. "But after a couple of cycles of swelling and shrinking, it's going to crack."

Other labs have tried to solve the problem with carpets of silicon nanowires that absorb lithium like a mop soaks up water, but the Rice team took a different tack.

With Madhuri Thakur, a post-doctoral researcher in Rice's Chemical and Biomolecular Engineering Department, and Mark Isaacson of Lockheed Martin, Biswal, Wong and Sinsabaugh found that putting micron-sized pores into the surface of a silicon wafer gives the material sufficient room to expand. While common lithium-ion batteries hold about 300 milliamp hours per gram of carbon-based anode material, they determined the treated silicon could theoretically store more than 10 times that amount.
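The roughly tenfold gap between silicon and graphite follows from Faraday's law: theoretical gravimetric capacity scales with the number of lithium atoms stored per host atom divided by the host's molar mass. A minimal sketch; the fully lithiated phases Li22Si5 and LiC6 are standard textbook reference points, not figures from this study:

```python
F = 96485.0  # Faraday constant, C/mol

def capacity_mah_per_g(li_per_host_atom, host_molar_mass_g):
    """Theoretical gravimetric capacity = x * F / (3.6 * M), in mAh per gram of host."""
    return li_per_host_atom * F / (3.6 * host_molar_mass_g)

silicon = capacity_mah_per_g(4.4, 28.086)         # Li22Si5: ~4200 mAh/g
graphite = capacity_mah_per_g(1.0 / 6.0, 12.011)  # LiC6: ~372 mAh/g
```

The graphite figure is close to the ~300 mAh/g practical value quoted above, and the silicon figure shows why researchers tolerate its swelling problem.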

Sinsabaugh described the breakthrough as one of the first fruits of the Lockheed Martin Advanced Nanotechnology Center of Excellence at Rice (LANCER). He said the project began three years ago when he met Biswal at Rice and compared notes. "She was working on porous silicon, and I knew silicon nanostructures were being looked at for battery anodes. We put two and two together," he said.

Nanopores are simpler to create than silicon nanowires, Biswal said. The pores, a micron wide and from 10 to 50 microns long, form when positive and negative charge is applied to the sides of a silicon wafer, which is then bathed in a hydrofluoric solvent. "The hydrogen and fluoride atoms separate," she said. "The fluorine attacks one side of the silicon, forming the pores. They form vertically because of the positive and negative bias."

The treated silicon, she said, "looks like Swiss cheese."

The straightforward process makes it highly adaptable for manufacturing, she said. "We don't require some of the difficult processing steps they do -- the high vacuums and having to wash the nanotubes. Bulk etching is much simpler to process.

"The other advantage is that we've seen fairly long lifetimes. Our current batteries have 200-250 cycles, much longer than nanowire batteries," said Biswal.

They said putting pores in silicon requires a real balancing act: the more space is dedicated to the holes, the less material is available to store lithium. And if the silicon expands to the point where the pore walls touch, the material could degrade.

The researchers are confident that cheap, plentiful silicon combined with ease of manufacture could help push their idea into the mainstream.

"We are very excited about the potential of this work," Sinsabaugh said. "This material has the potential to significantly increase the performance of lithium-ion batteries, which are used in a wide range of commercial, military and aerospace applications
Biswal and Wong plan to study the mechanism by which silicon absorbs lithium and how and why it breaks down. "Our goal is to develop a model of the strain that silicon undergoes in cycling lithium," Wong said. "Once we understand that, we'll have a much better idea of how to maximize its potential."

(Photo: Biswal Lab/Rice University)

Rice University



Oil recovery using carbon dioxide could lead to a North Sea oil bonanza worth £150 billion ($240 billion) – but only if the current infrastructure is enhanced now, according to a new study published today by a world-leading energy expert.

A new calculation by Durham University of the net worth of the UK oil field shows that using carbon dioxide (CO2) to enhance the recovery from our existing North Sea oil fields could yield an extra three billion barrels of oil over the next 20 years. Three billion barrels of oil could power, heat and transport the UK for two years with every other form of energy switched off.

Importantly, at a time of rising CO2 emissions, the enhanced oil recovery process is just about carbon neutral, with as much carbon being put back into the ground as is taken out.

The technique could yield an enormous amount of oil revenue at a time of public service cuts and developing the infrastructure would put the UK in the driving seat for developing enhanced recovery off-shore oil production around the world. It would also allow the UK to develop its carbon storage techniques in line with the UK government's commitments on emissions reductions.

The study, funded by DONG Energy (UK) Ltd. and Ikon Science Ltd., was presented, October 14th 2010, at a conference on Carbon Capture and Storage (CCS), at the Institution of Mechanical Engineers, London. The new figures are conservative estimates and extend a previous calculation that predicted a 2.7 billion barrel yield from selected fields in the North Sea.

The UK Government's Energy Statement, published in April 2010, outlines the continued role that fossil fuels will have to play in the UK energy mix. CO2 enhanced oil recovery in the UK would secure supplies for the next 20 years.

Jon Gluyas, a Professor in CCS & Geo-Energy, Department of Earth Sciences, Durham University, who has calculated the new figures, said: "Time is running out to make best use of our precious remaining oil reserves because we're losing vital infrastructure as the oil fields decline and are abandoned. Once the infrastructure is removed, we will never go back and the opportunity will be wasted.

"We need to act now to develop the capture and transportation infrastructure to take the CO2 to where it is needed. This would be a world-leading industry using new technology to deliver carbon dioxide to the North Sea oil fields. We must begin to do this as soon as possible before it becomes too expensive to do so.

"My figures are at the low end of expectations but they show that developing this technology could lead to a huge rejuvenation of the North Sea. The industrial CO2 output from Aberdeen to Hull is all you need to deliver this enhanced oil recovery."

Carbon dioxide is emitted into the atmosphere when fossil fuels are burnt and the UK Government plans to collect it from power stations in the UK. Capturing and storing carbon dioxide is seen as a way to prevent global warming and ocean acidification. Old oil and gas fields, such as those in the North Sea, are considered to be likely stores.

Enhanced oil recovery using carbon dioxide (CO2 EOR) adds further value to the potential merits of CCS.

Oil is usually recovered by flushing oil wells through with water at pressure. Since the 1970s oil fields in West Texas, USA, have been successfully exploited using carbon dioxide. CO2 is pumped as a fluid into oil fields at elevated pressure and helps sweep the oil to the production wells by contacting parts of the reservoirs not accessed by water injection; the result is much greater oil production.

Experience from the USA shows that an extra four to twelve per cent of the oil in place can be extracted using CO2-EOR. Professor Gluyas calculated the total oil in place in the UK fields and the potential UK gain in barrels and revenue from existing reserves using the American model.

David Hanstock, a founding director of Progressive Energy and director of COOTS Ltd, which is developing an offshore CO2 transport and storage infrastructure in the North Sea, said: "The UK has significant storage capacity potential for captured carbon dioxide in North Sea oil and gas fields.

"There is a unique opportunity to develop a new offshore industry using our considerable experience in offshore engineering. This would give us a technical lead on injecting and monitoring CO2 that we could then export to the wider world to establish the UK as a world leader in carbon capture and storage technology."

Professor Gluyas added: "Enhanced recovery of oil in the North Sea oil fields can secure our energy supplies for the next fifty years. The extra 3 billion barrels of oil that could be produced by enhanced CO2 recovery would make us self-sufficient and would add around £60bn in revenue to the Treasury.

"Priming the system now would mean we have 10-15 years to develop CO2 recycling and sufficient time to help us bridge to a future serviced by renewable energy."
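As a sanity check, the headline numbers quoted in the article can be combined with simple arithmetic (an illustrative sketch only; the study's field-by-field figures are not reproduced here):

```python
# Back-of-the-envelope check of the figures quoted in the article.

extra_barrels = 3e9          # "extra 3 billion barrels" from CO2-EOR
treasury_revenue_gbp = 60e9  # "around £60bn in revenue to the Treasury"

# Implied Treasury take per barrel
revenue_per_barrel = treasury_revenue_gbp / extra_barrels  # £20 per barrel

# US experience gives a 4-12 per cent recovery factor on oil in place,
# so the oil in place consistent with a 3-billion-barrel yield is:
oil_in_place_low = extra_barrels / 0.12   # ~25 billion barrels (best case)
oil_in_place_high = extra_barrels / 0.04  # ~75 billion barrels (worst case)

print(f"Implied Treasury revenue: £{revenue_per_barrel:.0f} per barrel")
print(f"Implied oil in place: {oil_in_place_low / 1e9:.0f} to "
      f"{oil_in_place_high / 1e9:.0f} billion barrels")
```

At the quoted 4-12 per cent recovery factor, a 3-billion-barrel yield therefore implies tens of billions of barrels of oil originally in place across the fields considered.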

(Photo: Durham U.)

Durham University



Scientists from the MARUM Center for Marine Environmental Sciences and the Max Planck Institute for Marine Microbiology in Bremen on board the German research vessel Meteor have discovered a new hydrothermal vent 500 kilometres south-west of the Azores.

The vent with chimneys as high as one meter and fluids with temperatures up to 300 degrees Celsius was found at one thousand metres water depth in the middle of the Atlantic Ocean. The discovery of the new deep-sea vent is remarkable because the area in which it was found has been intensively studied during previous research cruises. The MARUM and Max Planck researchers describe their discovery in their video blog.

The Bremen scientists were able to find the hydrothermal vent by using the new, latest-generation multibeam echosounder on board the research vessel Meteor, which allows imaging of the water column above the ocean floor with previously unattained precision. The scientists saw a plume of gas bubbles in the water column at a site about 5 kilometres away from the known large vent field Menez Gwen that they were working on. A dive with the remote-controlled submarine MARUM-QUEST revealed the new hydrothermal site with smokers and animals typically found at vents on the Mid-Atlantic Ridge.

Since the discovery of the new vent, the scientists have been intensively searching the water column with the multibeam echosounder. To their astonishment, they have already found at least five other sites with gas plumes. Some even lie outside the volcanically active spreading zone in areas where hydrothermal activity was previously not assumed to occur.

"Our results indicate that many more of these small active sites exist along the Mid-Atlantic Ridge than previously assumed," said Dr. Nicole Dubilier, the chief scientist of the expedition. "This could change our understanding of the contribution of hydrothermal activity to the thermal budget of the oceans. Our discovery is also exciting because it could provide the answer to a long-standing mystery: We do not know how animals travel between the large hydrothermal vents, which are often separated from each other by hundreds to thousands of kilometres. They may be using these smaller sites as stepping stones for their dispersal."

Research on deep-sea hydrothermal vents in the Atlantic is the objective of the 30 marine scientists from Hamburg, Bremen, Kiel, Portugal, and France who have been on board the German research vessel Meteor since September 6th. The expedition to the submarine volcano Menez Gwen near the Azores is financed by MARUM, the Center for Marine Environmental Sciences in Bremen. "One of the questions that the team would like to answer is why the hydrothermal sources in this area emit so much methane - a very potent greenhouse gas," says chief scientist Nicole Dubilier, who is also a member of the Steering Committee of the Census of Marine Life Vents and Seeps project ChEss (Chemosynthetic Ecosystem Science). "Another important focus of the research is the deep-sea mussels that live at the hydrothermal vents and host symbiotic bacteria in their gills. The mussels obtain their nutrition from these bacteria."

(Photo: MARUM)

Max Planck Institute



How can watching primitive fish rot away reveal answers to the fundamental questions of how, when and why our earliest vertebrate ancestors evolved?

An innovative experiment at the University of Leicester that involved studying rotting fish has helped to create a clearer picture of what our early ancestors would have looked like.

The scientists wanted to examine the decaying process in order to understand the decomposition of soft-body parts in fish. This in turn will help them reconstruct an image of creatures that existed 500 million years ago.

Their findings have been published in the journal Proceedings of the Royal Society B. The work was funded by the Natural Environment Research Council (NERC).

The researchers, from the Department of Geology at the University of Leicester, studied the way primitive fish, such as hagfishes and lampreys, decompose to gain an impression of our early ancestry.

The team at Leicester (Rob Sansom, Sarah Gabbott and Mark Purnell) explain: “Our earliest fish-like relatives left fossil remains which have the potential to show us how the group to which we belong evolved from worm-like relatives. But there is a major problem – people are familiar with bones and teeth as fossils, but do not perhaps realise that before these inventions our ancestors were entirely soft-bodied creatures. Eyes, organs, guts and muscles all decompose very quickly after death, and as any forensic scientist knows, recognising rotted anatomy is difficult.

“Fossils from 500 million years ago provide our only direct evidence of how our earliest vertebrate ancestors evolved from the simple worm-like animals”.

Fossils from the early phase of vertebrate evolution are very rare because, being entirely soft-bodied, these animals normally rotted away completely after death, leaving nothing behind. But very occasionally their remains became preserved as fossils, giving us a tantalising glimpse of our early vertebrate relatives.

However, as Rob Sansom explains, correctly reading and reconstructing what our ancestors looked like from these semi-rotted remains is tricky. “Interpreting half-a-billion-year-old fossils is challenging enough in itself, but even more so when the remains comprise only the decayed soft parts, which may look quite different to how they would have done in life”.

Sarah Gabbott, one of the leaders of the study, admits that at first it may be difficult to see why spending hundreds of hours studying the stinking carcasses of rotting fish helps us to unlock our evolutionary history, but she points out that the results have been critical to correctly reading fossils from this phase in our history. “In a way our experiments are similar to those going on at the infamous ‘body farms’ in the USA, where human cadavers are left to decompose so that forensic scientists can determine time and cause of death. But, as palaeontologists we want to uncover what an animal which lived 500 million years ago looked like before it died”.

“Our macabre experiments are grisly and smelly but they have revealed, for the first time, what characteristic vertebrate features look like when they are partially decomposed”.

These rare fish-like fossils are recognised as part of our evolutionary history because they possess characteristic anatomical features, such as a tail, eyes and the precursor of a backbone. Mark Purnell explains further: “Our experiments have provided us with a set of photofit-like images allowing us to decipher and correctly identify features in fossils. Our ability to flesh out what our earliest vertebrate ancestors looked like and correctly place them on the Tree of Life is critical to understanding whether our earliest relatives evolved in a burst of complexity or gradually over millions of years”.

The results, published in Proceedings of the Royal Society B, show that some of the characteristic anatomical features of early vertebrate fossils were badly affected by decomposition, and in some cases may have rotted away completely. Knowing how decomposition affected the fossils means reconstructions of our earliest ancestors will be more scientifically accurate.

(Photo: U. Leicester)

University of Leicester



Some tiny crustaceans living in clear-water alpine ponds high in Washington state's Olympic Mountains have learned how to cope with the sun's damaging ultraviolet rays without sunblock -- and with very little natural pigmentation to protect them.

In fact, in laboratory tests these water fleas, about the size of fruit flies, withstood UV rays much better than the same species of flea taken from a pond less than a mile away, where the water was murkier and thus offered protection.

"The ponds pretty much look the same to us, but the environments are very different for the animals that live there," said Brooks Miner, a University of Washington doctoral student in biology, whose findings are published Oct. 13 in the online edition of the United Kingdom's Proceedings of the Royal Society B. Coauthor is Benjamin Kerr, a UW assistant professor of biology and Miner's doctoral adviser.

"Through their behavior, the Daphnia in the darker ponds can choose a deeper environment where there is no ultraviolet radiation, but the animals in the clear ponds don't have that choice," he said.

Miner conducted field sampling of water and zooplankton from several ponds at 4,200 to 4,800 feet of elevation in the Seven Lakes Basin of Olympic National Park. The samples were gathered from July through September in 2006 through 2009, after winter snow had melted and the water was beginning to warm enough for zooplankton such as the species Daphnia melanica to survive.

Daphnia from ponds below the tree line were protected from the sun because the water was made darker when surrounding vegetation fell into the water, starting a process similar to what happens in a steeping cup of tea. Water in ponds above the tree line was nearly crystal clear, Miner said, so the fleas there did not receive the same protection from the surrounding water. When the fleas were subjected to UV light in the laboratory, those from the clear ponds survived the best.

The biggest surprise, Miner said, was that the water fleas had very little melanin, a protective pigment found in most animals. He noted that fleas from other habitats that have melanin grow at a slower rate than those without it, so the water fleas in the Olympic Mountains apparently evolved less-costly means to deal with UV radiation.

"It could be that they evolved to use other strategies because the ultraviolet isn't as intense here," he said.

Miner sampled some ponds with very clear water that had no Daphnia at all. That, he suggested, could be because UV radiation in those ponds was too intense.

The next phase of his research, part of his doctoral thesis, will examine whether populations that are exposed to high levels of UV radiation but lack melanin have better-developed systems to repair damage to their DNA.

The work, funded in part by the National Science Foundation, will help in understanding how different populations adapt to UV radiation exposure, Miner said. It also could be instrumental in understanding how to maintain the health of aquatic ecosystems in the face of increasing human population and climate change.

(Photo: Anna Coogan)

University of Washington



World leaders gathering in Japan at the UN's biodiversity summit must agree to put at least 25 percent of the Earth's land and 15 percent of the oceans under protection by 2020 if they are to be successful in their efforts to solve the current global environmental crisis, a new analysis by Conservation International showed.

Putting a larger area of the planet under protection is crucial to secure important biodiversity and the delivery of vital services from nature to people. Natural habitats – and the species and genetic resources they harbor – support the global economy and billions of people who directly depend on them for immediate needs, like food, income and shelter. Currently, about 13 percent of the world's terrestrial areas and less than 1 percent of the open oceans are protected.

The analysis shows that at least 17 percent of the Earth's land is necessary to protect priority areas for known biodiversity and an additional 6-11 percent is needed to ensure adequate storage of carbon in natural ecosystems. The analysis clarifies that protected areas are not just strict nature reserves, but can also refer to areas managed for multiple uses, such as recreation, sustainable economic activities or for their unique beauty and cultural value.
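The way the analysis's land figures stack up can be shown with a line or two of arithmetic (an illustrative sketch based on the numbers quoted above, not Conservation International's own calculation):

```python
# How the quoted land-protection percentages combine.
biodiversity_pct = 17                        # priority areas for known biodiversity
carbon_extra_low, carbon_extra_high = 6, 11  # additional land needed for carbon storage

total_low = biodiversity_pct + carbon_extra_low    # 23 percent of land
total_high = biodiversity_pct + carbon_extra_high  # 28 percent of land

# The "at least 25 percent" headline figure sits inside this 23-28 percent band.
current_protected = 13              # terrestrial area protected today
shortfall = 25 - current_protected  # at least 12 percentage points more land needed
```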

When world leaders met at the Convention on Biological Diversity (CBD), they discussed a set of 20 targets to slow biodiversity loss over the coming decade – one of them being about the need to put areas under protection. The numbers discussed are around 15-20 percent for land and a yet to be determined percentage for oceans.

"The current targets are clearly inadequate in protecting biodiversity and ensuring key services to people. Science shows us that we need more places to be protected and where the key places are," said Conservation International's Frank Larsen, lead author of the analysis. "There is also evidence that the costs of expanding protected areas are compensated by the many benefits, including new jobs and people's ability to withstand the effects of climate change."

Lina Barrera, Conservation International's Director of Biodiversity and Ecosystem Services Policy, added: "The problem is that most of the costs are local, while most of the benefits are global, so politicians do not see much incentive to make things happen. This is the time to be brave and get real about the need to put us on the path for a more sustainable future."

According to the analysis, protecting 25 percent of the land and 15 percent of the oceans is still a preliminary and conservative estimate. Among ecosystem services, it accounts only for carbon storage; when other important services – like water supply, crop pollination and fisheries – are added, the numbers will be higher. Also, in regions highly impacted by environmental degradation, protected areas are likely to be the only intact natural environments that remain.

(Photo: Conservation I.)

Conservation International


Does thinking about time or money make you happier? A new study published in Psychological Science, a journal of the Association for Psychological Science, finds that people who are made to think about time plan to spend more of their time with the people in their lives while people who think about money fill their schedules with work, work, and—you guessed it—more work.

To find out how thinking about time or money makes people feel, Cassie Mogilner of the University of Pennsylvania designed an experiment, carried out online with adults from all over the United States, in which they concentrated on money or time. In this experiment, volunteers were asked to unscramble a series of sentences. Some participants were presented with sentences containing words related to time (e.g., “clock” and “day”), whereas others’ sentences contained words related to money (e.g., “wealth” and “dollar”). Next all participants were asked how they planned to spend their next 24 hours. The ones who had been primed to think about time planned to spend more time socializing. People who’d been primed to think about money planned to spend more time working.

She also carried out the experiment on low-income people and found that having them think about time had the same effect, but having them think about money did not. This may mean that low-income people are already concerned about, and therefore highly focused on, money, Mogilner speculates.

But Mogilner wanted to test the effect in the real world, seeing how people actually spent their time. So her research team approached people going into a café on campus to ask them to take part in a questionnaire, which included the word-scrambling task that primed them with thoughts of time or money. These individuals were then watched to see how they spent their time in the café—whether they chatted with people there or on a cell phone, or whether they worked. When they left the café, they filled out a second questionnaire about how happy and satisfied they felt. The results were similar: People who were primed to think about time spent more time socializing and were happier, while people who were primed with money spent more time with their noses buried in books and were less happy when they emerged.

Although focusing on money motivates people to work more, passing the hours working does not generally make one happy. Spending time with loved ones does, and thinking about time might motivate people to pursue these social connections. “There is so much discussion and focus on money, optimal ways to spend and save it, and the relationship between money and happiness,” says Mogilner. “We’re often ignoring the ultimately more important resource, which is time.” She does not suggest that people stop working altogether, but she does say that people need to be reminded to make time for friends and family.

Association for Psychological Science


Humans are overloading ecosystems with nitrogen through the burning of fossil fuels and an increase in nitrogen-producing industrial and agricultural activities, according to a new study. While nitrogen is an element that is essential to life, it is an environmental scourge at high levels.

According to the study, excess nitrogen that is contributed by human activities pollutes fresh waters and coastal zones, and may contribute to climate change. Nevertheless, such ecological damage could be reduced by the adoption of time-honored sustainable practices.

Appearing in the October 8, 2010 edition of Science and conducted by an international team of researchers, the study was partially funded by the National Science Foundation.

The nitrogen cycle--which has existed for billions of years--transforms non-biologically useful forms of nitrogen found in the atmosphere into various biologically useful forms of nitrogen that are needed by living things to create proteins, DNA and RNA, and by plants to grow and photosynthesize. The transformation of non-biologically useful forms of nitrogen to useful forms of nitrogen is known as nitrogen fixation.

Mostly mediated by bacteria that live in legume plant roots and soils, nitrogen fixation and other components of the nitrogen cycle weave and wind through the atmosphere, plants, subsurface plant roots, and soils; the nitrogen cycle involves many natural feedback relationships between plants and microorganisms.

According to the Science paper, since pre-biotic times, the nitrogen cycle has gone through several major phases. The cycle was initially controlled by slow volcanic processes and lightning and then by anaerobic organisms as biological activity started. By about 2.5 billion years ago, as molecular oxygen appeared on Earth, a linked suite of microbial processes evolved to form the modern nitrogen cycle.

But by the start of the 20th century, human contributions to the nitrogen cycle had begun skyrocketing. "In fact, no phenomenon has probably impacted the nitrogen cycle more than human inputs of nitrogen into the cycle in the last 2.5 billion years," says Paul Falkowski of Rutgers University, a member of the research team.

"Altogether, human activities currently contribute twice as much terrestrial nitrogen fixation as natural sources, and provide around 45 percent of the total biologically useful nitrogen produced annually on Earth," says Falkowski. Much of the human contribution of nitrogen to ecosystems comes from an 800 percent increase in the use of nitrogen fertilizers from 1960 to 2000.
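Falkowski's two percentages fit together as follows (an illustrative partition, assuming both figures refer to the same annual budget of newly fixed nitrogen):

```python
# Partition of annual biologically useful nitrogen fixation, as percentages
# of the global total, using the two figures quoted in the article.
human = 45.0                     # human activities: ~45% of the total

# Human fixation is "twice as much ... as natural [terrestrial] sources",
# so natural terrestrial fixation is half the human share:
natural_terrestrial = human / 2  # 22.5% of the total

# The remainder is natural fixation elsewhere (largely in the oceans):
natural_other = 100 - human - natural_terrestrial  # 32.5% of the total
```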

Another problem: much of the nitrogen fertilizer used worldwide is applied inefficiently. As a result, about 60 percent of the nitrogen contained in applied fertilizer is never incorporated into plants and is free to wash out of root zones, polluting rivers, lakes, aquifers and coastal areas through eutrophication. (Eutrophication is a process, caused by excess nutrients, that depletes oxygen in water bodies and ultimately leads to the death of animal life.)

In addition, some reactions involving nitrogen release nitrous oxide into the atmosphere. Nitrous oxide is a greenhouse gas with 300 times (per molecule) the warming potential of carbon dioxide. It also destroys stratospheric ozone, which protects the Earth from harmful ultraviolet (UV-B) radiation.
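The per-molecule figure translates almost unchanged into the per-mass warming potential more commonly quoted, because nitrous oxide and carbon dioxide happen to have nearly identical molar masses (a side calculation for context, not from the paper itself):

```python
# Molar masses from standard atomic weights (g/mol)
m_n2o = 2 * 14.007 + 15.999  # N2O: ~44.01 g/mol
m_co2 = 12.011 + 2 * 15.999  # CO2: ~44.01 g/mol

gwp_per_molecule = 300  # figure quoted in the article

# Converting a per-molecule ratio to a per-mass ratio just rescales by the
# molar masses, which here nearly cancel:
gwp_per_mass = gwp_per_molecule * (m_co2 / m_n2o)  # ~300, close to the ~298
                                                   # per-mass value often cited
```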

"Natural feedbacks driven by microorganisms will likely produce a new steady-state over time scales of many decades," says Falkowski. "Through this steady state, excess nitrogen added from human sources will be removed at rates equivalent to rates of addition, without accumulating."

But meanwhile, the Earth's population is approaching 7 billion people, so pressures for food production continue to increase. "There is no way to feed people without fixing huge amounts of nitrogen from the atmosphere, and that nitrogen is presently applied to crop plants very ineffectively," says Falkowski.

So unless promising interventions are taken, the damage done by humans to the Earth's nitrogen cycle will persist for decades or centuries. These promising interventions, which would be designed to reduce the need to use fertilizers that add nitrogen to ecological systems, could include:

* Using systematic crop rotations that would supply nitrogen that would otherwise be provided by fertilizers;
* Optimizing the timing and amounts of fertilizer applications, adopting selected breeding techniques or developing genetically engineered varieties of plants that would increase the efficiency of nitrogen use;
* Using traditional breeding techniques to boost the ability of economically important varieties of wheat, barley and rye to interact favorably with the microbial communities associated with plant root systems and do so in ways that enhance the efficiency of nitrogen use.

"While the processes of eutrophication have been recognized for many years, only recently have scientists been able to begin placing the anthropogenic processes in the context of an understanding of the broader biogeochemical cycles of the planet," says Robert Burnap, an NSF program director. "This is an important article because it concisely develops this understanding and also provides reasonable predictions regarding the economic and policy dimensions of the problem."

National Science Foundation

Wednesday, October 27, 2010



A new species of small carnivore, known as Durrell’s vontsira (Salanoia durrelli) has been identified by researchers from the Durrell Wildlife Conservation Trust, the Natural History Museum, London, Nature Heritage, Jersey, and Conservation International (CI). The small, cat-sized, speckled brown carnivore from the marshes of the Lac Alaotra wetlands in central eastern Madagascar weighs just over half a kilogramme and belongs to a family of carnivores only known from Madagascar. It is likely to be one of the most threatened carnivores in the world. The findings are outlined in the latest issue of the taxonomic journal Systematics and Biodiversity.

The carnivore was first seen swimming in a lake by researchers from the Durrell Wildlife Conservation Trust on a field trip surveying bamboo lemurs (Hapalemur griseus alaotrensis) in 2004. After briefly examining the animal, the team suspected they had witnessed a new species and so took photographs. By examining brown-tailed vontsira (Salanoia concolor) specimens in the Natural History Museum’s collections, Museum zoologists confirmed the animal was a new species. The brown-tailed vontsira is the closest relative of the new species, which zoologists named in honour of the conservationist and writer Gerald Durrell, who died 15 years ago.

Fidimalala Bruno Ralainasolo, a conservation biologist working for Durrell Wildlife Conservation Trust who originally captured the new carnivore, commented ‘We have known for some time that a carnivore lives in the Lac Alaotra marshes, but we’ve always assumed it was a brown-tailed vontsira that is also found in the eastern rainforests. However, differences in its skull, teeth, and paws have shown that this animal is clearly a different species with adaptations to life in an aquatic environment. It is a very exciting discovery and we decided to honour our founder, the world renowned conservationist Gerald Durrell, by naming this new species after him. However, the future of the species is very uncertain because the Lac Alaotra marshes are extremely threatened by agricultural expansion, burning and invasive plants and fish. It is a highly significant site for wildlife and the resources it provides people, and Durrell Wildlife Conservation Trust is working closely with local communities to ensure its sustainable use and to conserve Durrell’s vontsira and other important species.’

Paula Jenkins, Natural History Museum zoologist, said, ‘We know very little about the small mongoose-like vontsiras because they are rarely seen or studied in the field. This research is a fantastic example of the importance and relevance that Museum collections have for contemporary scientific research. Though people may know that museums such as the Natural History Museum hold reference collections, few people are aware how critical these collections are to our understanding of the world today.’

Stephan M Funk of Nature Heritage, formerly at Durrell Wildlife Conservation Trust and co-author of the paper, said ‘Population genetics and evolution of the Durrell’s vontsira and related species remain badly understood, highlighting the importance of future research. More important, however, is the protection of the wetlands around Lac Alaotra, which remain highly threatened.’

The habitat of Durrell’s vontsira has been suffering from a number of threats over the past decades, from introduced fish to silting and pollution from fertiliser and pesticides. While the conservation status of the new species remains to be formally evaluated, it is likely to be threatened as a result of small population size, restricted distribution and the impact of habitat degradation.

Remarkably, Lac Alaotra hit the headlines only a few months ago when the extinction of the Alaotra grebe (Tachybaptus rufolavatus) was announced. Now a new species has been described from the very area where the last Alaotra grebe was seen.

Frank Hawkins of Conservation International, co-author of the paper describing the species, said ‘This species is probably the carnivore with one of the smallest ranges in the world, and likely to be one of the most threatened. The Lac Alaotra wetlands are under considerable pressure, and only urgent conservation work to make this species a flagship for conservation will prevent its extinction.’

(Photo: © Durrell Wildlife Conservation Trust)

Conservation International


Geologists studying the Jan. 12 Haiti earthquake say the risk of destructive tsunamis is higher than expected in places such as Kingston, Istanbul and Los Angeles.

Like Haiti's capital, these cities all lie near the coast and near an active geologic feature called a strike-slip fault where two tectonic plates slide past each other like two hands rubbing against each other.

Until now, geologists did not consider the tsunami risk to be very high in these places because when these faults rupture, they usually do not vertically displace the seafloor much, which is how most tsunamis are generated. This latest research suggests even a moderate earthquake on a strike-slip fault can generate tsunamis through submarine landslides, raising the overall tsunami risk in these places.

"The scary part about that is you do not need a large earthquake to trigger a large tsunami," said Matt Hornbach, research associate at The University of Texas at Austin's Institute for Geophysics and lead author on a paper describing the research in the Oct. 10 online edition of the journal Nature Geoscience.

"Organizations that issue tsunami warnings usually look for large earthquakes on thrust faults," said Hornbach. "Now we see you don't necessarily need those things. A moderate earthquake on a strike-slip fault can still be cause for alarm."

Within minutes after the magnitude 7 Haiti earthquake, a series of tsunami waves, some as high as 9 feet (3 meters), crashed into parts of the shoreline. A few weeks later, a team of scientists from the U.S. and Haiti conducted geological field surveys of sites on and offshore near the quake's epicenter.

The scientists determined the tsunamis were generated primarily by weak sediment at the shore that collapsed and slid along the seafloor, displacing the overlying water. Combined with newly discovered evidence of historic tsunamis, the survey revealed a third of all tsunamis in the area are generated in this way. Geologists had previously estimated only about 3 percent of tsunamis globally are generated through submarine landslides.

"We found that tsunamis around Haiti are about 10 times more likely to be generated in this way than we would have expected," said Hornbach.
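The factor of ten follows directly from the two fractions quoted above (a quick consistency check, not an independent estimate):

```python
# Fractions of tsunamis generated by submarine landslides, from the article.
local_fraction = 1 / 3   # a third of tsunamis in the area around Haiti
global_fraction = 0.03   # previous global estimate: about 3 percent

# Ratio of local to global likelihood
ratio = local_fraction / global_fraction  # ~11, i.e. roughly ten times
```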

In addition to Hornbach, team members from The University of Texas at Austin include: Paul Mann, Fred Taylor, Cliff Frohlich, Sean Gulick and Marcy Davis. The team also includes researchers from Queens College, City University of New York; U.S. Geological Survey, University of Missouri; Lamont-Doherty Earth Observatory of Columbia University; University of California, Santa Barbara; Bureau of Mines and Energy (Haiti); and Universite d'Etat de Haiti.

The researchers gathered data on faults beneath the seafloor and land, vertical movement of the land, bathymetry (underwater topography) of the seafloor and evidence of tsunami waves. They worked on foot, on a small inflatable boat and on the 165-foot research vessel Endeavor.

This research was funded by a Rapid Response grant from the National Science Foundation and The University of Texas at Austin's Jackson School of Geosciences.

With additional funding from the Society of Exploration Geophysicists' Geoscientists Without Borders program, Hornbach and others are now conducting a new research project in nearby Jamaica to assess the tsunami threat there.

"The geology of Kingston, Jamaica is nearly identical to Port-au-Prince, Haiti," said Hornbach. "It's primed and ready to go and they need to prepare for it. The good news is, they have a leg up because they're aware of the problem."

The University of Texas at Austin



More brilliant X-rays and more cost-effective methods for developing new energy sources and advanced manufacturing processes are just some of the benefits that may come from a novel technology, proven at the theoretical level by a consortium of British and European laser scientists.

The research, led by scientists at STFC's Central Laser Facility (CLF), is published in Nature Physics (October 10 2010).

A team of scientists from the Instituto Superior Tecnico in Lisbon, Imperial College London, and the Universities of St Andrews, Lancaster and Strathclyde, as well as STFC’s Central Laser Facility staff, has demonstrated the feasibility of a groundbreaking method called Raman amplification, which can compress long laser pulses to 1000 times shorter while making them 300 times more intense. This means that current very expensive and complex laser set-ups could eventually be replaced with smaller, more cost-effective systems. That would make many technologies, including laser-based methods for generating X-rays, far more accessible and easier to mass-produce. This latest development is another step in laser scientists’ quest, ongoing since the invention of the laser 50 years ago, to build the ever more powerful lasers that new technologies increasingly demand.
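The two factors quoted above also imply an energy budget. As a rough sketch (assuming the beam cross-section is unchanged, so intensity scales as energy divided by pulse duration; that assumption is ours, not the paper's):

```python
# Pulses come out 1000x shorter but only 300x more intense, so under the
# constant-cross-section assumption the implied energy-transfer efficiency
# from the long pump pulse to the short amplified pulse is:
compression = 1000     # pulse-duration reduction factor
intensity_gain = 300   # peak-intensity increase factor

efficiency = intensity_gain / compression
print(f"{efficiency:.0%}")  # prints "30%"
```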

The technique has been examined over a two-year period, using some of the world’s most powerful supercomputers, to test every possible aspect of the theory. "In the past, studies have been carried out to test the theory, but only using simplified models which do not include all of the relevant phenomena. Our new model has shown that, in most cases, the amplified laser beam breaks up into ‘spikes’, making it difficult to focus the beam to a small spot," said Dr Raoul Trines from STFC’s Central Laser Facility. "But for a few special cases, the amplified laser pulse is of excellent quality, enabling exceptionally tight focusing of the beam."

Professor John Collier, Director of STFC’s Central Laser Facility, said: "This year’s celebration of 50 years of the laser is a poignant reminder that we need to start thinking about the next generation of laser technology. We have come to rely on lasers so much in our daily lives, for everything from high speed internet connections to medical techniques, that we can’t afford to pause even for a moment in developing laser techniques further, because these new techniques take years to develop and test."

The next step is to apply the theory to an actual high-power laser and fine-tune the method through rigorous experimental testing.

(Photo: STFC)

Science and Technology Facilities Council

Tuesday, October 26, 2010



A study has gained new insight into the minds of dogs, discovering that those that are anxious when left alone also tend to show ‘pessimistic’ behaviour.

The research, by academics at the University of Bristol and funded by the RSPCA, is published in Current Biology (12 October). The study provides an important insight into dogs’ emotions, and enhances our understanding of why behavioural responses to separation occur.

Professor Mike Mendl, Head of the Animal Welfare and Behaviour research group at Bristol University’s School of Clinical Veterinary Science, who led the research, said: “We all have a tendency to think that our pets and other animals experience emotions similar to our own, but we have no way of knowing directly because emotions are essentially private. However, we can use findings from human psychology research to develop new ways of measuring animal emotion.

“We know that people’s emotional states affect their judgements and that happy people are more likely to judge an ambiguous situation positively. What our study has shown is that this applies similarly to dogs – that a ‘glass-half-full’ dog is less likely to be anxious when left alone than one with a more ‘pessimistic’ nature.”

In order to study ‘pessimistic’ or ‘optimistic’ decisions, dogs at two UK animal re-homing centres were trained that when a bowl was placed at one location in a room (the ‘positive’ position) it would contain food, but when placed at another location (the ‘negative’ position) it would be empty. The bowl was then placed at ambiguous locations between the positive and negative positions.
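The test just described can be sketched as a simple scoring rule. Everything concrete below (the latency measure, the linear scale, the clamping) is an illustrative assumption, not the study's published protocol:

```python
# Hypothetical scoring for a judgement-bias test: a dog's run latency to an
# ambiguously placed bowl is scaled between its latencies to the rewarded
# ('positive') and unrewarded ('negative') positions.
def optimism_score(latency_ambiguous, latency_positive, latency_negative):
    """Return 1.0 for fully 'optimistic' (as fast as to the positive
    position) down to 0.0 for fully 'pessimistic' (as slow as to the
    negative position); latencies are in seconds."""
    span = latency_negative - latency_positive
    if span <= 0:
        raise ValueError("expected slower runs to the negative position")
    score = (latency_negative - latency_ambiguous) / span
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# Example: 4 s to the ambiguous bowl, vs. 2 s (positive) and 10 s (negative)
print(optimism_score(4.0, 2.0, 10.0))  # prints 0.75
```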

Professor Mendl explained: “Dogs that ran fast to these ambiguous locations, as if expecting the positive food reward, were classed as making relatively ‘optimistic’ decisions. Interestingly, these dogs tended to be the ones who also showed least anxiety-like behaviour when left alone for a short time.

“Around half of dogs in the UK may at some point perform separation-related behaviours - toileting, barking and destroying objects around the home - when they’re apart from their owners. Our study suggests that dogs showing these types of behaviour also appear to make more pessimistic judgements generally.”

Dr Samantha Gaines, Deputy Head of the Companion Animals Department from RSPCA, said: “Many dogs are relinquished each year because they show separation-related behaviour. Some owners think that dogs showing anxious behaviour in response to separation are fine, and do not seek treatment for their pets. This research suggests that at least some of these dogs may have underlying negative emotional states, and owners are encouraged to seek treatment to enhance the welfare of their dogs and minimise the need to relinquish their pet. Some dogs may also be more prone to develop these behaviours, and should be re-homed with appropriate owners.”

(Photo: U. Bristol)

University of Bristol



The same types of bacteria found in arterial plaque, which causes atherosclerosis, are also found in the mouth and gut, according to the first general survey of all bacteria found in arterial plaque and in samples from the mouth and gut.

The study, conducted by researchers from Cornell and University of Gothenburg, Sweden and published online in the Proceedings of the National Academy of Sciences, may one day offer new ways to detect heart disease and could lead to drugs that target these bacteria to reduce arterial plaque and atherosclerosis.

In atherosclerosis, a fatty material called plaque collects along the walls of arteries, where it thickens, hardens (forming calcium deposits) and may eventually block the arteries.

"Our survey shows that bacteria are pretty good at getting out of the mouth and gut and into the blood stream," said Ruth Ley, Cornell assistant professor of microbiology and a senior author of the study with Frederik Bäckhed, a cardiovascular researcher from the University of Gothenburg. Omry Koren, a postdoctoral researcher in Ley's lab, is the paper's lead author.

The study used samples from the mouth and feces (to determine bacteria in the gastrointestinal tract) from 15 heart disease patients who had plaque removed from their arteries. Samples were also taken from a control group of 15 healthy individuals who matched the heart disease patients by sex and age.

The findings show that such bacteria as Veillonella and Streptococcus were the most abundant microbiota found in plaque. Furthermore, when a lot of these two types of bacteria were found in the mouth, the researchers found a corresponding abundance of the same bacteria in the arterial plaque.

Previous studies have provided evidence that bacteria from the mouth and gut may play a role in the formation of arterial plaque, leading to coronary disease. Studies have shown that mice, whose immune systems were compromised in such a way that they could not detect bacteria, were resistant to atherosclerosis, leading researchers to suspect that arterial plaque forms partly due to the body's immune reactions to bacteria.

Ley and colleagues found a positive correlation between amounts of bacteria and leukocytes (white blood cells) in arterial plaque, supporting the theory that higher levels of arterial plaque lead to an immune response and inflammation.

Also, Chryseomonas bacteria were found in all samples of atherosclerotic plaque, suggesting that the bacteria may contribute to its development. In addition, Streptococcus in the mouth and gut were positively correlated with HDL ("good") cholesterol, while a number of other gut bacteria were positively correlated with LDL ("bad") and total cholesterol.

The study, which used high throughput DNA sequencing techniques to get more comprehensive data than in previous research, was funded by the Swedish Research Council, National Institutes of Health, National Science Foundation and others.

(Photo: Ian Hewson)

Cornell University



Researchers at UT Southwestern Medical Center have created an experimental vaccine against beta-amyloid, the small protein that forms plaques in the brain and is believed to contribute to the development of Alzheimer’s disease.

Compared with similar so-called DNA vaccines that the UT Southwestern researchers tested in an animal study, the new experimental vaccine stimulated more than 10 times as many antibodies that bind to and eliminate beta-amyloid. The results appeared in the journal Vaccine.

Future studies will focus on determining the safety of the vaccine and whether it protects mental function in animals, said Dr. Roger Rosenberg, director of the Alzheimer’s Disease Center at UT Southwestern and senior author of the study.

“The antibody is specific; it binds to plaque in the brain. It doesn’t bind to brain tissue that does not contain plaque,” Dr. Rosenberg said. “This approach shows promise in generating enough antibodies to be useful clinically in treating patients.”

A traditional vaccine – an injection of beta-amyloid protein itself into the arm – has been shown in other research to trigger an immune response, including the production of antibodies and other bodily defenses against beta-amyloid. However, the immune response to this type of vaccine sometimes caused significant brain swelling, so Dr. Rosenberg and his colleagues focused on developing a nontraditional DNA vaccine.

The DNA vaccine does not contain beta-amyloid itself but instead a piece of the beta-amyloid gene that codes for the protein. In the current study, the researchers coated tiny gold beads with the beta-amyloid DNA and injected them into the skin of the animals’ ears. Once in the body, the DNA stimulated an immune response, including antibodies to beta-amyloid.

The next step in the research is to test long-term safety in animals, Dr. Rosenberg said.

“After seven years developing this vaccine, we are hopeful it will not show any significant toxicity, and that we will be able to develop it for human use,” he said.

(Photo: UTSMC)

UT Southwestern Medical Center


A research team that includes UC Berkeley Graduate School of Education Professor Marcia Linn is again offering proof that the mathematical skills of boys and girls, as well as men and women, are substantially equal.

Linn and her fellow researchers examined existing studies between 1990 and 2007 that looked mainly at grade- and high-school students and published the results in the current online edition of the journal Psychological Bulletin, an American Psychological Association publication. Linn has been part of three other large studies on gender differences in mathematics and/or science achievement.

One portion of the new study looked systematically at 242 articles that assessed the math skills of nearly 1.3 million people. A second portion of the new study examined the results of several large, long-term scientific studies, such as the National Assessment of Educational Progress.

In both cases, the difference between the two sexes was so close as to be meaningless, according to Linn and co-authors Sara Lindberg, a postdoctoral fellow in women’s health at the University of Wisconsin School of Medicine and Public Health, and Janet Hyde, a professor of psychology and women’s studies at the University of Wisconsin-Madison.

The idea that both genders have equal math abilities is now widely accepted among educational researchers, Linn says, but remains surprising to many teachers and parents, who may guide girls away from courses and careers in the sciences and engineering.

Scientists now know that stereotypes affect performance, according to Hyde. “There is lots of evidence that what we call ‘stereotype threat’ can hold women back in math,” says Hyde. “If, before a test, you imply that the women should expect to do a little worse than the men, that hurts performance.”

The authors hope the new results will slow the trend toward single-sex schools, which are sometimes justified on the basis of differential math skills. The results may also affect standardized tests, which gained clout with the passage of No Child Left Behind and tend to emphasize lower-level math skills like multiplication, the authors say.

"We found that the tests used for high-stakes testing have too few items that require complex problem solving,” says Linn. “These tests deter teachers from emphasizing problems that are important in everyday situations such as interpreting data about health treatment alternatives."

The new findings reinforce a recent study that ranked gender dead last among nine factors, including parental education, family income and school effectiveness, in influencing the math performance of 10-year-olds.

Women have made significant advances in technical fields. Half of medical-school students are female, as are 48 percent of undergraduate math majors. However, progress in physics and engineering is much slower.

“Teachers, by encouraging girls in mathematics, can help reduce stereotypes that inhibit persistence in mathematics and science," says Linn, “and they can use this evidence to guide girls to take courses such as physics and computer science that build on mathematics ability.”

UC Berkeley



University of Florida researchers presenting new fossil evidence of an exceptionally well-preserved 55-million-year-old North American mammal have found it shares a common ancestor with rodents and primates, including humans.

The study, which appeared in the Oct. 11 online edition of the Zoological Journal of the Linnean Society, describes the cranial anatomy of the extinct mammal, Labidolemur kayi. High resolution CT scans of the specimens allowed researchers to study minute details in the skull, including bone structures smaller than one-tenth of a millimeter. Similarities in bone features with other mammals show L. kayi’s living relatives are rodents, rabbits, flying lemurs, tree shrews and primates.

Researchers said the new information will aid future studies to better understand the origin of primates.

“The specimens are among the only skulls of apatemyids known that aren’t squashed completely flat,” said study co-author Jonathan Bloch, an associate curator of vertebrate paleontology at the Florida Museum of Natural History on the UF campus. “They’re preserved in three dimensions, which allows us to look at the morphology of the bones in a way that we never could before.”

Scientists have disputed the relationships of Apatemyidae, the family that includes L. kayi, for more than a century because of their unusual physical characteristics. With can-opener-shaped upper front teeth and two unusually long fingers, apatemyids have been compared to a variety of animals, from opossums to woodpeckers.

“There are only a few examples in the history of mammals where you get such an incredibly odd ecological adaptation,” Bloch said.

Like a woodpecker’s method of feeding, L. kayi used percussive foraging, or tapping on trees, to locate insects. It stood less than a foot tall, was capable of jumping between trees and looked like a squirrel with a couple of really long fingers, similar to the aye-aye, a lemur native to Madagascar, Bloch said.

Apatemyids have been preserved for tens of millions of years and are well known from Europe and North America.

The skeletons analyzed in the publication were recovered from freshwater limestone in the Bighorn Basin by co-author Peter Houde of New Mexico State University. Located just east of Yellowstone National Park in Wyoming, the site is known as one of the best in the world for studying the evolution of mammals during the 10 million years following the extinction of the dinosaurs, Bloch said.

Mary Silcox, first author of the study and a research associate at the Florida Museum, said scans of the specimens began about 10 years ago, during her postdoctoral work at The Pennsylvania State University.

“It’s not like medical CT, it’s actually an industrial CT scanner,” said Silcox, an assistant professor of anthropology at the University of Toronto Scarborough. “Because this is a small animal, we needed to be able to study it at a very high resolution. The high resolution CT data were a critical part.”

Doug Boyer of Stony Brook University is also a co-author of the study, part of the team’s larger research to understand the relationships of apatemyids to other mammals. Bloch and colleagues are currently writing a detailed analysis of L. kayi’s skeleton.

John Wible, curator of mammals at the Carnegie Museum of Natural History and one of the researchers who reviewed the study, said it provides valuable information for understanding the evolutionary relationships of mammals.

“It is now clear that any assessment of the origins of primates in the future will have to include apatemyids,” Wible said. “Apatemyids are not some freakish dead-end, but significant members of our own history.”

(Photo: Kristen Grace/Florida Museum of Natural History)

University of Florida



No matter how you slice it, watermelon has a lot going for it –– sweet, low calorie, high fiber, nutrient rich –– and now, there's more. Evidence from a pilot study led by food scientists at The Florida State University suggests that watermelon can be an effective natural weapon against prehypertension, a precursor to cardiovascular disease.

It is the first investigation of its kind in humans. FSU Assistant Professor Arturo Figueroa and Professor Bahram H. Arjmandi found that when six grams of the amino acid L-citrulline/L-arginine from watermelon extract was administered daily for six weeks, there was improved arterial function and consequently lowered aortic blood pressure in all nine of their prehypertensive subjects (four men and five postmenopausal women, ages 51-57).

"We are the first to document improved aortic hemodynamics in prehypertensive but otherwise healthy middle-aged men and women receiving therapeutic doses of watermelon," Figueroa said. "These findings suggest that this 'functional food' has a vasodilatory effect, and one that may prevent prehypertension from progressing to full-blown hypertension, a major risk factor for heart attacks and strokes.

"Given the encouraging evidence generated by this preliminary study, we hope to continue the research and include a much larger group of participants in the next round," he said.

Why watermelon?

"Watermelon is the richest edible natural source of L-citrulline, which is closely related to L-arginine, the amino acid required for the formation of nitric oxide essential to the regulation of vascular tone and healthy blood pressure," Figueroa said.

Once in the body, the L-citrulline is converted into L-arginine. Simply consuming L-arginine as a dietary supplement isn't an option for many hypertensive adults, said Figueroa, because it can cause nausea, gastrointestinal tract discomfort, and diarrhea.

In contrast, watermelon is well tolerated. Participants in the Florida State pilot study reported no adverse effects. And, in addition to the vascular benefits of citrulline, watermelon provides abundant vitamin A, B6, C, fiber, potassium and lycopene, a powerful antioxidant. Watermelon may even help to reduce serum glucose levels, according to Arjmandi.

"Cardiovascular disease (CVD) continues to be the leading cause of death in the United States," Arjmandi said. "Generally, Americans have been more concerned about their blood cholesterol levels and dietary cholesterol intakes rather than their overall cardiovascular health risk factors leading to CVD, such as obesity and vascular dysfunction characterized by arterial stiffening and thickness –– issues that functional foods such as watermelon can help to mitigate.

"By functional foods," said Arjmandi, "we mean those foods scientifically shown to have health-promoting or disease-preventing properties, above and beyond the other intrinsically healthy nutrients they also supply."

Figueroa said oral L-citrulline supplementation might allow a reduced dosage of antihypertensive drugs necessary to control blood pressure.

"Even better, it may prevent the progression from prehypertension to hypertension in the first place," he said.

While watermelon or watermelon extract is the best natural source for L-citrulline, it is also available in the synthetic form in pills, which Figueroa used in a previous study of younger, male subjects. That investigation showed that four weeks of L-citrulline slowed or weakened the increase in aortic blood pressure in response to cold exposure. It was an important finding, said Figueroa, since there is a greater occurrence of myocardial infarction associated with hypertension during the cold winter months.

"Individuals with increased blood pressure and arterial stiffness –– especially those who are older and those with chronic diseases such as type 2 diabetes –– would benefit from L-citrulline in either the synthetic or natural (watermelon) form," Figueroa said. "The optimal dose appears to be four to six grams a day."

Approximately 60 percent of U.S. adults are prehypertensive or hypertensive. Prehypertension is characterized by systolic blood pressure readings of 120-139 millimeters of mercury (mm Hg) over diastolic pressure of 80-89 mm Hg. "Systolic" refers to the blood pressure when the heart is contracting. "Diastolic" reflects the blood pressure when the heart is in a period of relaxation and expansion.
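The thresholds in the paragraph above translate directly into a classification rule. A minimal sketch, assuming the standard category names for readings below and above the quoted prehypertensive band:

```python
# Blood-pressure categories using the ranges quoted above
# (prehypertension: systolic 120-139 mm Hg or diastolic 80-89 mm Hg).
def bp_category(systolic, diastolic):
    """Classify a reading given in mm Hg."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertensive"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertensive"
    return "normal"

print(bp_category(118, 76))  # prints "normal"
print(bp_category(128, 84))  # prints "prehypertensive"
print(bp_category(150, 95))  # prints "hypertensive"
```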

(Photo: FSU)

Florida State University



A team of scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory and the Carnegie Institution of Washington has succeeded in "watching" nanoparticles grow in real time.

The revolutionary technique allows researchers to learn about the early stages of nanoparticle generation, long a mystery due to inadequate probing methods, and could lead to improved performance of the nanomaterials in applications including solar cells, sensing and more.

"Nanocrystal growth is the foundation of nanotechnology," said lead researcher Yugang Sun, an Argonne chemist. "Understanding it will allow scientists to more precisely tailor new and fascinating nanoparticle properties."

The way that nanoparticles look and behave depends on their architecture: size, shape, texture and surface chemistry. This, in turn, depends very much on the conditions under which they are grown.

"Accurately controlling nanoparticles is very difficult," Sun explained. "It's even harder to reproduce the same nanoparticles from batch to batch, because we still don't know all the conditions for the recipe. Temperature, pressure, humidity, impurities—they all affect growth, and we keep discovering more factors."

In order to understand how nanoparticles grow, the scientists needed to actually watch them in the act. The problem was that electron microscopy, the usual method for seeing down into the atomic level of nanoparticles, requires a vacuum. But many kinds of nanocrystals have to grow in a liquid medium—and the vacuum in an electron microscope makes this impossible. A special thin cell allows a tiny amount of liquid to be analyzed in an electron microscope, but it still limited the researchers to a liquid layer just 100 nanometers thick, which is significantly different from the real conditions for nanoparticle synthesis.

To solve this conundrum, Sun found he needed to use the very high-energy X-rays provided at Sector 1 of Argonne’s Advanced Photon Source (APS), which adjoins the laboratory’s Center for Nanoscale Materials, where he works. The pattern of X-rays scattered by the sample allowed the researchers to reconstruct the earliest stages of nanocrystal growth second by second.

"This technique yields a treasure trove of information, especially on the nucleation and growth steps of the crystals, that we had never been able to get before," said Sun.

The intensity of the X-rays does affect the growth of the nanocrystals, Sun said, but the effects only became significant after an especially long reaction time. "Getting a clear image of the growth process will allow us to control samples to get better results, and eventually, new nanomaterials that will have a wide range of applications,” Sun explained.

The nanomaterials could be used in photovoltaic solar cells, chemical and biological sensors and even imaging. For example, noble metal nanoplates can absorb near-infrared light, so they can be used to enhance contrast in images. In one possible case, an injection of specially tailored nanoparticles near a cancer patient's tumor site could increase the imaging contrast between normal and cancerous cells so that doctors can accurately map the tumor.

"The key to this breakthrough was the unique ability for us to work with scientists from the Advanced Photon Source, the Center for Nanoscale Materials and the Electron Microscopy Center—all in one place," Sun said.

(Photo: Yugang Sun et al., Argonne National Laboratory & Carnegie Institution of Washington)

Argonne National Laboratory




Selected Science News. Copyright 2008 All Rights Reserved