Friday, August 14, 2009

PERCEIVING TOUCH AND YOUR SELF OUTSIDE OF YOUR BODY

When you feel you are being touched, usually someone or something is physically touching you and you perceive that your "self" is located in the same place as your body. In new research published in the open-access, peer-reviewed journal PLoS ONE, neuroscientists at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, investigated the relationship between bodily self-consciousness and the way touch stimuli are spatially represented in humans. They found that sensations of touch can be mislocalised towards where a "virtual" body is seen. These findings could open new avenues for the animation of virtual worlds and machines.

In their previous research, Professor Olaf Blanke's lab at the EPFL found that the consciousness of one's own body (the sense of self-identification and self-location) can be altered in healthy people under certain experimental conditions, yielding similar sensations to those felt in out-of-body experiences. In this new study, Aspell and colleagues in Blanke's lab used a crossmodal congruency task to determine whether there is a change in touch perception during this illusion.

A number of earlier studies showed that if a rubber hand is positioned such that it extends from a person's arm while her actual hand is hidden from view, and both her real hand and the rubber hand are stroked at the same time, she seems to feel the touch in the location where she sees the rubber hand being touched. This effect and the experienced 'ownership' of the rubber hand is the "rubber hand illusion."

Aspell, a postdoctoral researcher, along with graduate student Bigna Lenggenhager and Professor Olaf Blanke sought to expand on this research to see whether there are changes in touch perception when humans experience ownership of a whole virtual body. They designed a novel behavioural task in which the experimental participants had to try to detect where on their body vibrations were occurring. At the same time, they viewed their own body via a head-mounted display connected to a camera filming the participant's back from two metres away. The participants had to ignore light flashes that appeared on their body near the vibrators. To induce the feeling that they were located in the position where they viewed their body (i.e. two metres in front of them), participants were stroked on their backs with a stick. This induced a "full body illusion" in which a person perceives herself as being located outside the confines of her own body.

By measuring how strongly the light flashes interfered with the perception of the vibrations, the researchers were able to show that the mapping of touch sensations was altered during the full body illusion. The mapping of touch in space was shifted towards the virtual body when subjects felt themselves to be located where the virtual body was seen.
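In crossmodal congruency tasks of this kind, interference is typically quantified as the difference in mean reaction time between trials where the visual distractor and the tactile target appear at incongruent versus congruent locations. The short Python sketch below illustrates that computation; the function name and the reaction-time numbers are invented for illustration and are not data from this study.

def crossmodal_congruency_effect(rt_congruent, rt_incongruent):
    """CCE = mean RT on incongruent trials minus mean RT on congruent
    trials; a larger CCE means the light flashes interfered more with
    locating the vibrotactile targets."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical reaction times in milliseconds.
baseline = crossmodal_congruency_effect([412, 398, 430], [455, 471, 448])
illusion = crossmodal_congruency_effect([405, 420, 415], [502, 489, 510])

# If the CCE grows when the distractor lights are seen on the virtual
# body, touch is being remapped toward where the body is seen.
print(f"CCE baseline: {baseline:.0f} ms, during illusion: {illusion:.0f} ms")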

This study demonstrates that changes in self-consciousness ('where am I located?' and 'what is my body?') are accompanied by changes in where touch sensations are experienced in space. Importantly, these data reveal that brain mechanisms of multisensory processing are crucial for the "I" of conscious experience and can be scientifically manipulated in order to animate and incarnate virtual humans, robots, and machines.

(Photo: Aspell JE, Lenggenhager B, Blanke O (2009) Keeping in Touch with One's Self: Multisensory Mechanisms of Self-Consciousness. PLoS ONE 4(8): e6488. doi: 10.1371/journal.pone.0006488)

PLoS ONE

GASOLINE-DIESEL 'COCKTAIL': A POTENT RECIPE FOR CLEANER, MORE EFFICIENT ENGINES

Diesel and gasoline fuel sources both bring unique assets and liabilities to powering internal combustion engines. But what if an engine could be programmed to harvest the best properties of both fuel sources at once, on the fly, by blending the fuels within the combustion chamber?

The answer, based on tests by the University of Wisconsin-Madison engine research group headed by Rolf Reitz, would be a diesel engine that produces significantly lower pollutant emissions than conventional engines, with an average of 20 percent greater fuel efficiency as well. These dramatic results came from a novel technique Reitz describes as "fast-response fuel blending," in which an engine's fuel injection is programmed to produce the optimal gasoline-diesel mix based on real-time operating conditions.

Under heavy-load operating conditions for a diesel truck, the fuel mix in Reitz' fueling strategy might be as high as 85 percent gasoline to 15 percent diesel; under lighter loads, the percentage of diesel would increase to a roughly 50-50 mix. Normally this type of blend wouldn't ignite in a diesel engine, because gasoline is less reactive than diesel and burns less easily. But in Reitz' strategy, precisely timed injections of just the right amount of diesel fuel provide the kick-start for ignition.
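As a rough illustration of what such load-dependent blending might look like in code, the Python sketch below interpolates the gasoline share between the two operating points quoted above. The linear interpolation and the function name are assumptions made for illustration, not Reitz's actual control strategy.

def gasoline_fraction(load):
    """Gasoline share of the in-cylinder fuel mix for an engine load
    from 0.0 (light) to 1.0 (heavy); anchor points from the article."""
    light_mix, heavy_mix = 0.50, 0.85
    load = max(0.0, min(1.0, load))
    return light_mix + (heavy_mix - light_mix) * load

for load in (0.0, 0.5, 1.0):
    g = gasoline_fraction(load)
    print(f"load {load:.0%}: {g:.0%} gasoline / {1 - g:.0%} diesel")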

"You can think of the diesel spray as a collection of liquid spark plugs, essentially, that ignite the gasoline," says Reitz, the Wisconsin Distinguished Professor of Mechanical Engineering. "The new strategy changes the fuel properties by blending the two fuels within the combustion chamber to precisely control the combustion process, based on when and how much diesel fuel is injected."

Reitz presented his findings at the 15th U.S. Department of Energy (DOE) Diesel Engine-Efficiency and Emissions Research Conference in Detroit. He estimates that if all cars and trucks were to achieve the efficiency levels demonstrated in the project, transportation-based U.S. oil consumption could be cut by one-third.

"That's roughly the amount that we import from the Persian Gulf," says Reitz.

Two remarkable things happen in the gasoline-diesel mix, Reitz says. First, the engine operates at much lower combustion temperatures because of the improved control — as much as 40 percent lower than conventional engines — which leads to far less energy loss from the engine through heat transfer. Second, the customized fuel preparation controls the chemistry for optimal combustion. That translates into less unburned fuel energy lost in the exhaust, and also fewer pollutant emissions being produced by the combustion process. In addition, the system can use relatively inexpensive low-pressure fuel injection (commonly used in gasoline engines), instead of the high-pressure injection required by conventional diesel engines.

Development of the blending strategy was guided by advanced computer simulation models. These computer predictions were then put to the test using a Caterpillar heavy-duty diesel engine at the UW-Madison Engine Research Center. The results were "really exciting," says Reitz, confirming the predicted benefits of blended fuel combustion. The best results achieved 53 percent thermal efficiency in the experimental test engine, exceeding that of even the most efficient diesel engine currently in the world -- a massive turbocharged two-stroke used in the maritime shipping industry, which reaches 50 percent thermal efficiency.

"For a small engine to even approach these massive engine efficiencies is remarkable," Reitz says. "Even more striking, the blending strategy could also be applied to automotive gasoline engines, which usually average a much lower 25 percent thermal efficiency. Here, the potential for fuel economy improvement would even be larger than in diesel truck engines."

Thermal efficiency is the percentage of the fuel's energy that is actually converted into work powering the engine, rather than being lost to heat transfer, exhaust or other losses.
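As a minimal worked example of that definition, consider a hypothetical cycle in which 100 MJ of fuel energy yields 53 MJ of useful work, matching the 53 percent figure reported above; the numbers are illustrative only.

def thermal_efficiency(work_out_mj, fuel_energy_mj):
    """Fraction of the fuel's chemical energy converted to useful work."""
    return work_out_mj / fuel_energy_mj

print(f"{thermal_efficiency(53.0, 100.0):.0%}")  # -> 53%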

"What's more important than fuel efficiency, especially for the trucking industry, is that we are meeting the EPA's 2010 emissions regulations quite easily," Reitz says.

That is a major commercial concern as the bar set by the U.S. Environmental Protection Agency is quite high, with regulations designed to cut about 90 percent of all particulate matter (soot) and 80 percent of all nitrogen oxides (NOx) out of diesel emissions.

Some companies have pulled out of the truck engine market altogether in the face of the stringent new standards. Many other companies are looking to alternatives such as selective catalytic reduction, in which the chemical urea (a second "fuel") is injected into the exhaust stream to reduce NOx emissions. Others propose using large amounts of recirculated exhaust gas to lower the combustion temperature and thereby reduce NOx; in that case, ultra-high-pressure fuel injection is needed to reduce soot formation in the combustion chamber.

Those processes are expensive and logistically complicated, Reitz says. Both primarily address cleaning up emissions, not fuel efficiency. The new in-cylinder fuel blending strategy is less expensive and less complex, uses widely available fuels and addresses both emissions and fuel efficiency at the same time.

Reitz says there is ample reason to believe the fuel-blending technology would work just as well in cars, because dual-fuel combustion works with lower-pressure, less expensive fuel injectors than those used in diesel trucks. Applying this technology to vehicles would require separate tanks for diesel and gasoline -- but selective catalytic reduction likewise requires carrying urea in a separate tank. The big-picture implications for reduced oil consumption are even more compelling, Reitz says. The United States consumes about 21 million barrels of oil per day, about 65 percent (13.5 million barrels) of which is used in transportation. If this new blended fuel process could convert both diesel and gasoline engines to 53 percent thermal efficiency from current levels, the nation could reduce oil consumption by 4 million barrels per day, or one-third of all oil destined for transportation.
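The article's big-picture arithmetic can be checked directly; the figures in the snippet below are taken from the text.

total = 21.0      # million barrels of U.S. oil consumption per day
transport = 13.5  # million barrels/day used in transportation (~65%)
savings = 4.0     # million barrels/day if engines reached 53% efficiency

print(f"transport share of consumption: {transport / total:.0%}")  # ~64%
print(f"savings vs transport use: {savings / transport:.0%}")      # ~30%, about one-third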

Computer modeling and simulation provided the blueprint for optimizing the fuel blending, a process that would have taken years through trial-and-error testing. Reitz used a modeling approach developed in his lab that relies on genetic algorithms, which borrow the principles of natural selection in the biological world to determine the "fittest" variables for engine performance.
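The sketch below shows a toy genetic algorithm of the general kind described, in Python: a population of candidate parameter sets is repeatedly scored, selected, recombined and mutated. The fitness function here is a stand-in (the real work scored candidates against engine-simulation models), so this is a sketch of the technique, not the lab's implementation.

import random

def fitness(params):
    # Stand-in objective: pretend the optimum lies at (0.5, 0.5).
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, n_params=2, generations=50, mutation=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                 # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(genes) for genes in zip(a, b)]  # crossover
            child = [min(1.0, max(0.0, g + random.gauss(0, mutation)))
                     for g in child]                    # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())  # converges toward [0.5, 0.5], the "fittest" variables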

University of Wisconsin

FROM GRAPHENE TO GRAPHANE, NOW THE POSSIBILITIES ARE ENDLESS

Ever since graphene was discovered in 2004, this one-atom-thick, super-strong, carbon-based electrical conductor has been billed as a "wonder material" that some physicists think could one day replace silicon in computer chips.

But graphene, which consists of carbon atoms arranged in a honeycomb lattice, has a major drawback when it comes to applications in electronics – it conducts electricity almost too well, making it hard to create graphene-based transistors that are suitable for integrated circuits.

In August's Physics World, Kostya Novoselov -- a condensed-matter physicist from the Manchester University group that discovered graphene -- explains how graphane, the group's newly created insulating equivalent of graphene, may prove more versatile still.

Graphane has the same honeycomb structure as graphene, except that it is "spray-painted" with hydrogen atoms that attach themselves to the carbon. The resulting bonds between the hydrogen and carbon atoms effectively tie down the electrons that make graphene so conducting. Yet graphane retains the thinness, super-strength, flexibility and density of its older chemical cousin.

One advantage of graphane is that it could make it easier to produce the tiny strips of graphene needed for electronic circuits. Such structures are currently made rather crudely by taking a sheet of the material and effectively burning away everything except the bit you need. Instead, such strips could be made by simply coating the whole of a graphene sheet -- except for the strip itself -- with hydrogen. The narrow bit left free of hydrogen is your conducting graphene strip, surrounded by a much bigger graphane region that electrons cannot pass through.

As if this were not enough, the physicists in Manchester have found that by gradually binding hydrogen to graphene they can drive the transformation of a conducting material into an insulating one and watch what happens in between.

Perhaps most importantly of all, the discovery of graphane opens the floodgates to further chemical modifications of graphene. With metallic graphene at one end and insulating graphane at the other, can we fill in the divide between them with, say, graphene-based semiconductors, or by substituting fluorine for the hydrogen?

As Professor Novoselov writes, "Being able to control the resistivity, optical transmittance and a material's work function would all be important for photonic devices like solar cells and liquid-crystal displays, for example, and altering mechanical properties and surface potential is at the heart of designing composite materials. Chemical modification of graphene – with graphane as its first example – uncovers a whole new dimension of research. The capabilities are practically endless."

Institute of Physics

SOME EVIDENCE THAT DIETS HIGH IN CALCIUM AND DAIRY PRODUCTS IN CHILDHOOD MAY LOWER MORTALITY

A study by researchers in Bristol and Brisbane, published in the journal Heart, provides suggestive evidence that children whose diets were high in calcium and dairy products may go on to have lower mortality rates than those whose diets were not.

A 65-year follow-up of a study into the eating habits of families, carried out in the 1930s, found that dairy products and a diet high in calcium were associated with how long people lived.

Heart disease risk factors and thickening of the arteries begin in childhood, and existing evidence on the effects of dairy consumption is limited. Some dairy products, such as whole milk, butter and cheese, have a high content of saturated fatty acids and cholesterol, and some studies indicate that consumption of these in adulthood contributes to heart disease.

Although the practice of giving extra milk to school children has been common in Europe since the early 20th century, debate continues over the long-term health effects of doing so and over what intake levels are safe.

Researchers from the Queensland Institute of Medical Research, Brisbane, Australia, and the Department of Social Medicine at the University of Bristol, carried out a 65-year follow-up of a study done in the 1930s.

In 1937-39, children from 1,343 families in England and Scotland took part in a study of family food consumption assessed from 7-day household food inventories. The data came from the Carnegie (“Boyd Orr”) survey of diet and health in pre-war Britain.

The international team of researchers managed to ascertain what had happened to 4,374 of these children between 1948 and 2005. By 2005, 1,468 (34%) of them had died, including 378 deaths due to coronary heart disease and 121 deaths due to stroke.
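The quoted proportion can be verified directly from the figures in the text:

deaths, cohort = 1468, 4374
chd, stroke = 378, 121
print(f"all-cause deaths: {deaths / cohort:.0%} of the cohort")  # -> 34%
print(f"CHD + stroke deaths: {chd + stroke}")                    # -> 499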

The researchers looked at two main outcomes – deaths due to stroke and cardiovascular disease – and analysed associations between total dairy intake and mortality, and associations between individual dairy food groups and mortality.

The researchers found that there was no clear evidence that intake of dairy products was associated with coronary heart disease or stroke deaths.

However, childhood calcium intake was inversely associated with stroke mortality but not heart disease mortality.

Children who were in the group that had the highest calcium intake and dairy product consumption were found to have lower mortality rates than those in the lower intake groups.

The authors conclude: “Children whose family diet in the 1930s was high in calcium were at reduced risk of death from stroke. Furthermore, childhood diets rich in dairy or calcium were associated with lower all-cause mortality in adulthood. Replication in other study populations is needed because other factors, such as socioeconomic differences, may explain part of these findings.”

(Photo: Bristol U.)

University of Bristol

HYDROCARBONS IN DEEP EARTH?

The oil and gas that fuels our homes and cars started out as living organisms that died, were compressed, and heated under heavy layers of sediments in the Earth’s crust. Scientists have debated for years whether some of these hydrocarbons could also have been created deeper in the Earth and formed without organic matter. Now for the first time, scientists have found that ethane and heavier hydrocarbons can be synthesized under the pressure-temperature conditions of the upper mantle -- the layer of Earth under the crust and on top of the core.

The research was conducted by scientists at the Carnegie Institution’s Geophysical Laboratory, with colleagues from Russia and Sweden, and is published in the July 26 advance online issue of Nature Geoscience.

Methane (CH4) is the main constituent of natural gas, while ethane (C2H6) is used as a petrochemical feedstock. Both of these hydrocarbons, and others associated with fuel, are called saturated hydrocarbons because they have simple, single bonds and are saturated with hydrogen. Using a diamond anvil cell and a laser heat source, the scientists first subjected methane to pressures exceeding 20 thousand times the atmospheric pressure at sea level and temperatures ranging from 1,300 °F to over 2,240 °F. These conditions mimic those found 40 to 95 miles deep inside the Earth. The methane reacted and formed ethane, propane, butane, molecular hydrogen, and graphite. The scientists then subjected ethane to the same conditions and it produced methane. The transformations suggest heavier hydrocarbons could exist deep down. The reversibility implies that the synthesis of saturated hydrocarbons is thermodynamically controlled and does not require organic matter.
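For readers who prefer SI units, the reported conditions convert roughly as follows; the conversion factors are standard and the input figures come from the article.

ATM_TO_GPA = 101325 / 1e9  # 1 atmosphere in gigapascals

def fahrenheit_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

pressure_gpa = 20_000 * ATM_TO_GPA     # ~2 GPa
t_low_k = fahrenheit_to_kelvin(1300)   # ~978 K (about 705 C)
t_high_k = fahrenheit_to_kelvin(2240)  # ~1500 K (about 1227 C)
print(f"{pressure_gpa:.1f} GPa, {t_low_k:.0f}-{t_high_k:.0f} K")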

The scientists ruled out the possibility that catalysts used as part of the experimental apparatus were at work, but they acknowledge that catalysts could be involved in the deep Earth with its mix of compounds.

“We were intrigued by previous experiments and theoretical predictions,” remarked Carnegie’s Alexander Goncharov, a coauthor. “Experiments reported some years ago subjected methane to high pressures and temperatures and found that heavier hydrocarbons formed from methane under very similar pressure and temperature conditions. However, the molecules could not be identified and a distribution was likely. We overcame this problem with our improved laser-heating technique, with which we could cook larger volumes more uniformly. And we found that methane can be produced from ethane.”

The hydrocarbon products did not change for many hours, but the tell-tale chemical signatures began to fade after a few days.

Professor Kutcherov, a coauthor, put the finding into context: “The notion that hydrocarbons generated in the mantle migrate into the Earth’s crust and contribute to oil-and-gas reservoirs was promoted in Russia and Ukraine many years ago. The synthesis and stability of the compounds studied here as well as heavier hydrocarbons over the full range of conditions within the Earth’s mantle now need to be explored. In addition, the extent to which this ‘reduced’ carbon survives migration into the crust needs to be established (e.g., without being oxidized to CO2). These and related questions demonstrate the need for a new experimental and theoretical program to study the fate of carbon in the deep Earth.”

(Photo: Carnegie I.)

Carnegie Institution of Washington

SCIENTISTS UNLOCK OPTICAL & CHEMICAL SECRETS OF JEWELED BEETLES

A small green beetle may have some interesting lessons to teach scientists about optics and liquid crystals—complex mechanisms the insect uses to create a shell so strikingly beautiful that for centuries it was used in jewelry.

In an article published in the July 24 issue of the journal Science, researchers provide a detailed analysis of how the jeweled beetle Chrysina gloriosa creates its striking colors using a unique helical structure that reflects light of two specific colors -- and of only one polarization: left circular polarization. The reflecting structures used by the beetle consist predominantly of three different polygonal shapes whose percentages vary with the curvature of the insect’s shell.

"Iridescent beetles, butterflies, certain sea organisms and many birds derive their unique colors from the interaction of light with physical structures on their external surfaces,” said Mohan Srinivasarao, a professor in the School of Polymer, Textile and Fiber Engineering at the Georgia Institute of Technology. “Understanding how these structures give rise to the stunning colors we see in nature could benefit the quest for miniature optical devices and photonics.”

With support from the National Science Foundation, Srinivasarao and colleagues Vivek Sharma, Matija Crne and Jung Ok Park used two different microscopy techniques to study the surface structures on the shell of the beetle. What they found confirmed earlier suggestions that the colors are produced from liquid crystalline material, which self-assembles into a complex arrangement of polygonal shapes each less than 10 microns in size.

"When we looked at the beetle’s surface, we found tiles in the shapes mostly of hexagons, pentagons and heptagons,” Srinivasarao said. “These patterns arise, we think, because of the nature of the cholesteric liquid crystal and how the liquid crystal phase structures itself at the interface between air and fluid. We think these patterns result because the liquid crystal must have defects on the surface when exposed to air, and those defects create the patterns in the beetle’s shell or exoskeleton.”

Because of simple geometric restrictions, the percentage of each shape depends on the curvature of that particular section of the shell. “This is really a pattern formation issue,” said Srinivasarao. “It is difficult to pack only hexagons onto a curved surface. On flat surfaces, there are fewer defects in the form of five- and seven-sided cells.”

In addition, the five- and seven-sided cells normally appear in pairs, an issue also dictated by the geometric difficulties of packing the shapes onto curved surfaces. The researchers found very similar structures in the ten different beetles purchased from an insect supply house.
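The pairing of five- and seven-sided cells follows from a standard geometric argument that the article does not spell out. For any polygonal tiling of a closed, roughly spherical surface in which three cells meet at each vertex, Euler's formula combined with edge counting gives, in LaTeX notation,

    V - E + F = 2 \quad\Longrightarrow\quad \sum_{n} (6 - n)\, f_n = 12,

where f_n is the number of n-sided cells. Hexagons (n = 6) contribute nothing to the sum, so hexagons alone can never tile a closed curved shell; a pentagon contributes +1 and a heptagon -1, so a pentagon-heptagon pair cancels out and can appear wherever the local curvature demands it -- consistent with the paired five- and seven-sided cells the researchers observed.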

Liquid crystalline materials are valuable industrially, used in displays for laptop computers, portable music players and other devices. They are also used in children’s thermometers, where temperature affects the color of light reflected from the material, indicating whether or not a child has a fever.

While the structures are determined genetically, their final form depends on the living conditions the beetle experiences during its growth and development, Srinivasarao noted.

The fact that these jeweled beetles reflect circularly polarized light was identified in the early 1900s by the Nobel Prize-winning physicist A.A. Michelson, who hypothesized that the circular polarization might result from a “screw structure” within the insect’s cuticle, but did not elaborate further. The solidified structures produced from a cholesteric liquid crystal and its defects on the beetle’s shell reflect bright green light at a wavelength of 530 nanometers mixed with yellow light at a wavelength of 580 nanometers.

"The most dramatic way to get saturated color is through what this beetle does with the circularly-polarized light,” Srinivasarao said. “The reflection is very metallic and angle-dependent, and this is due to the helical pitch of the cholesteric liquid crystal.”

Sunlight normally contains light in equal quantities with a left circular polarization and a right circular polarization. The jeweled beetle’s exoskeleton, however, reflects only light with a left circular polarization. Only a few members of the scarab family of beetles reflect both polarizations.

How the beetles benefit from the specific color and polarization isn’t known for sure, but scientists speculate that the optical properties may confuse predators, causing them to misjudge the location of the insects—or suggest that they may not be good to eat. The colors may also help the insects find mates.

In future research, Srinivasarao hopes to study other insects that use complex structures to create unique colors. He believes that scientists still have a lot to learn by studying the optical structures of beetles and other insects.

"We are just now starting to catch up with what these beetles have been doing for many, many years,” he said. “There are hundreds of thousand of species, and the way they generate color is just stunning—especially since it is all done with water-based systems, mostly based on the biopolymer chitin. This is self-assembly at several levels, and we need to learn a lot more to duplicate what these insects do.”

(Photo: Georgia Tech/Gary Meek)

Georgia Institute of Technology

RESEARCHERS TO STUDY REBIRTH OF AN ISLAND AFTER VOLCANIC ERUPTION

When Alaska's Kasatochi Volcano erupted on Aug. 7, 2008, it virtually sterilized Kasatochi Island, covering the small Aleutian island with a layer of ash and other volcanic material several meters thick. The eruption also provided a rare research opportunity: the chance to see how an ecosystem develops from the very first species to colonize the island.

A team of researchers organized by the U.S. Geological Survey and the U.S. Fish and Wildlife Service will visit Kasatochi to look for signs of life on the island, almost exactly one year after the catastrophic eruption. The interdisciplinary research team will spend four days surveying the island, using the USFWS research vessel Tiglax as an operational base for the on-site research.

"Since volcanism plays such a big role in shaping the Aleutians, we hope to end up with a better understanding of how disturbances such as volcanic eruptions shape the ecology of these islands," says Tony DeGange, a USGS biologist and one of the research team coordinators. "There hasn't been a study quite like this done in Alaska where scientists are taking such a comprehensive ecological view of the impact of an eruption and its resulting response and recovery."

Researchers expect that insects and birds will be the first animal species that recolonize the island. In preparation for the August survey, biologists set up monitoring and sampling equipment on Kasatochi earlier this summer, including insect traps for Derek Sikes, curator of insects at the University of Alaska Museum of the North. Sikes visited Kasatochi in June 2008 for a one-day survey of the insect fauna on the island before the eruption. He will be part of the research team that visits the island next week.

"Work in similar systems shows that flying- and wind-borne insects and spiders form a fairly constant rain during the summer months," says Sikes, adding that some of these species survive by preying or scavenging on other arthropods. "We'll be looking for spiders, which are all predators, and ground beetles, which are mostly predators, as well as other species associated with bird droppings or vertebrate carrion."

An opportunity like this is extremely rare, according to Sikes. The most comparable example is the emergence of Surtsey Island off the coast of Iceland in 1963, when undersea volcanic eruptions reached the surface. That island was declared a United Nations World Heritage Site for its role as a pristine natural laboratory. Even today, access to Surtsey remains restricted to a small number of researchers each year, who study the species that have colonized the island in the decades since it emerged.

According to the USFWS, the Kasatochi study is unique in that it takes place in an isolated marine ecosystem for which there are pre-eruption ecological data for the island and its nearby marine waters, including data from the Alaska Maritime National Wildlife Refuge dating from the mid-1990s and from Sikes' 2008 field work on the island.

(Photo: Derek Sikes)

University of Alaska Fairbanks

BEEP, BEEP, OOPS, WHAT WAS I DOING?

Based on a study of 84 students divided into four separate experiments, University of Oregon researchers found that students with high memory storage capacity were clearly better able to ignore distractions and stay focused on their assigned tasks.

Principal investigator Edward K. Vogel, a UO professor of psychology, compares working memory to a computer's random-access memory (RAM) rather than to the size of its hard drive -- the more RAM, the better the processing ability. With more RAM, he said, students were better able to ignore distractions. This notion surfaced in a 2005 paper in Nature by Vogel and colleagues in the Oregon Visual Working Memory & Attention Lab.

In experiments with some variations in approach -- detailed in the July 8 issue of the Journal of Neuroscience -- students' brain activity was monitored using electroencephalography (EEG) while they studied images on a computer screen, recognized a shape with a missing component, and then identified the object after it moved, either simply to another location or amid distractions. Using a "task-irrelevant probe" -- a 50-millisecond flash of light -- Vogel and Keisuke Fukuda, a doctoral student of Vogel's and the study's lead author, were able to determine exactly where a subject's attention was focused.
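Change-detection experiments of the kind used in Vogel's lab conventionally estimate an individual's memory storage capacity with the standard "Cowan's K" formula. The article does not give the formula, so the short Python sketch below is offered as background, with invented numbers rather than data from this study.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    """K = N * (H - FA): estimated items held in memory out of N shown."""
    return set_size * (hit_rate - false_alarm_rate)

# A subject shown 4 items who detects 80% of changes with 10% false
# alarms is estimated to hold about 2.8 items in working memory.
print(cowans_k(4, 0.80, 0.10))  # -> 2.8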

All of the subjects were able to quickly and accurately identify the targets when the objects moved around the screen, but as distracting components were added, some maintained accuracy while others were diverted and their performance on the assigned tasks slipped. (Vogel describes his research in a video at: http://www.youtube.com/watch?v=cmToeopS9bY.)

Vogel is quick to say that the findings don't necessarily signify problems for an easily distracted person, although people who hold their focus more intensely tend to have higher fluid intelligence; they score higher on achievement tests, do better in math and learn second languages more easily than peers who are captured by interruptions. Vogel currently is working with other UO researchers to explore whether the easily distracted indeed have a positive side, such as greater artistic creativity and imagination.

The new research, funded by the National Science Foundation, zeroed in on the brain's prefrontal cortex -- a region linked to executive function and under scrutiny for its association with many neurological disorders -- and the intraparietal sulcus (IPS), which is involved in perceptual-motor coordination, including eye movements.

The IPS, Vogel said, acts as a pointer system that seeks out goal-related cues, and it possibly is the gateway for memory circuitry in the brain.

"Our attention is the continual interplay between what our goals are and what the environment is trying to dictate to us," Vogel said. "Often, to be able to complete complex and important goal-directed behavior, we need to be able to ignore salient but irrelevant things, such as advertisements flashing around an article you are trying to read on a computer screen. We found that some people are really good at overriding attention capture, and other people have a difficult time unhooking from it and are really susceptible to irrelevant stimuli."

Vogel theorizes that people who are good at staying on focus have a good gatekeeper, much like a bouncer or ticket-taker hired to allow only approved people into a nightclub or concert. Understanding how to improve the gatekeeper component, he said, could lead to therapies that help easily distracted people better process what information is allowed in initially, rather than attempting to teach people how to force more information into their memory banks.

(Photo: U. Oregon)

University of Oregon

WASTEWATER PRODUCES ELECTRICITY AND DESALINATES WATER

Clean water for drinking, washing and industrial uses is a scarce resource in some parts of the world. Its availability in the future will be even more problematic. Many locations already desalinate water using either a reverse osmosis process -- one that pushes water under high pressure through membranes that allow water to pass but not salt -- or an electrodialysis process that uses electricity to draw salt ions out of water through a membrane. Both methods require large amounts of energy.

"Water desalination can be accomplished without electrical energy input or high water pressure by using a source of organic matter as the fuel to desalinate water," the researchers report in a recent online issue of Environmental Science and Technology.

"The big selling point is that it currently takes a lot of electricity to desalinate water and using the microbial desalination cells, we could actually desalinate water and produce electricity while removing organic material from wastewater," said Bruce Logan, Kappe Professor of Environmental Engineering, Penn State.

The team modified a microbial fuel cell -- a device that uses naturally occurring bacteria to convert wastewater into clean water while producing electricity -- so that it could also desalinate salty water.

"Our main intent was to show that using bacteria we can produce sufficient current to do this," said Logan. "However, it took 200 milliliters of an artificial wastewater -- acetic acid in water -- to desalinate 3 milliliters of salty water. This is not a practical system yet as it is not optimized, but it is proof of concept."

A typical microbial fuel cell consists of two chambers, one filled with wastewater or other nutrients and the other with water, each containing an electrode. Naturally occurring bacteria in the wastewater consume the organic material and produce electricity.

The researchers, who also included Xiaoxin Cao, Xia Huang, Peng Liang, Kang Xiao, Yinjun Zhou and Xiaoyuan Zhang, at Tsinghua University, Beijing, changed the microbial fuel cell by adding a third chamber between the two existing chambers and placing certain ion specific membranes -- membranes that allow either positive or negative ions through, but not both -- between the central chamber and the positive and negative electrodes. Salty water to be desalinated is placed in the central chamber.

Seawater contains about 35 grams of salt per liter, and brackish water about 5 grams per liter. Salt not only dissolves in water, it dissociates into positive and negative ions. When the bacteria in the cell consume the wastewater, they release charged ions -- protons -- into the water. These protons cannot pass the anion membrane, so negative ions move from the salty water into the wastewater chamber. At the other electrode protons are consumed, so positively charged ions move from the salty water into the other electrode chamber, desalinating the water in the middle chamber.
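That charge balance ties salt removal directly to the current the bacteria generate: each electron passed through the external circuit corresponds to one anion and one cation leaving the middle chamber. A back-of-envelope Python estimate using Faraday's constant, with illustrative rather than measured figures:

FARADAY = 96485          # coulombs per mole of charge
MOLAR_MASS_NACL = 58.44  # grams per mole

def salt_removed_grams(current_amps, seconds):
    # One mole of electrons moves one mole of NaCl out of the middle chamber.
    moles_of_charge = current_amps * seconds / FARADAY
    return moles_of_charge * MOLAR_MASS_NACL

# 10 mA sustained for one day removes roughly half a gram of salt.
print(f"{salt_removed_grams(0.010, 86400):.2f} g")  # ~0.52 g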

The desalination cell releases ions into the outer chambers that help to improve the efficiency of electricity generation compared to microbial fuel cells.

"When we try to use microbial fuel cells to generate electricity, the conductivity of the wastewater is very low," said Logan. "If we could add salt it would work better. Rather than just add in salt, however in places where brackish or salt water is already abundant, we could use the process to additionally desalinate salty water, clean the wastewater and dump it and the resulting salt back into the ocean."

Because the salt in the water helps the cell generate electricity, as the central chamber becomes less salty the conductivity drops, and both desalination and electricity production decrease -- which is why only about 90 percent of the salt is removed. Still, a 90 percent decrease in salt would bring seawater down to 3.5 grams of salt per liter, less salty than brackish water; brackish water treated the same way would be left with only 0.5 grams of salt per liter.

Another problem with the current cell is that as protons are produced at one electrode and consumed at the other electrode, these chambers become more acidic and alkaline. Mixing water from the two chambers together when they are discharged would once again produce neutral, salty water, so the acidity and alkalinity are not an environmental problem assuming the cleaned wastewater is dumped into brackish water or seawater. However, the bacteria that run the cell might have a problem living in highly acidic environments.

For this experiment, the researchers periodically added a pH buffer to avoid the acid problem, but acidification will need to be addressed if the system is to produce reasonable amounts of desalinated water.

(Photo: David Jones, Penn State)

Penn State University
