Thursday, December 3, 2009



Scientists have gained new insight into the makeup of ancient meteorites called Carbonaceous Chondrites, in research published in the October edition of the journal Earth and Planetary Science Letters.

Carbonaceous Chondrites are made up of the dust that formed the solar nebula, which is the cloud of dust and gas that made up our early solar system before rocky planets such as Earth and Mars were formed. The asteroids are 'chemically primitive', which means that none of the chemical elements of which they are composed have been moved around, taken out or added since they formed 4.56 billion years ago. This makes Carbonaceous Chondrites valuable for understanding what conditions were like in the early solar system.

In the new study, Imperial College London researchers reveal that the particles which make up Carbonaceous Chondrites are much finer than previously thought – each being approximately 10 to 100 nanometers in size. These tiny grains severely restricted the flow of water through the rock.

This explains why soluble elements such as sodium and chlorine are still present in Carbonaceous Chondrites that have fallen to Earth, in spite of the presence of water. Water would normally be expected to dissolve soluble elements and transport them out of the rock.

Dr Phil Bland, the lead author of the research, from the Department of Earth Science and Engineering at Imperial, explains:

“We couldn’t understand why Carbonaceous Chondrites didn’t seem to follow the same geological rules as other rocks in space and on Earth. In previous studies, computer models predicted that water should have dissolved and transported the soluble material, and yet the geological evidence clearly showed that this was not the case.”

The new research shows that the dust particles which made up the Carbonaceous Chondrites were so small that water particles clung to them and could not move. This prevented the soluble material from being transported and deposited elsewhere.

Dr Bland says: “When you pour water through fine sand particles it gets trapped. The same thing happened inside Carbonaceous Chondrites, where the particles were so fine that the water was bound to it. We calculated that water particles could have flowed less than a millimetre every one million years.

“Studying the geology of Carbonaceous Chondrites is helping us to see what conditions were like in the early solar system. We think the fine-grain particles that make up Carbonaceous Chondrites were some of the last remnants of the original cloud of dust and gas in the solar nebula. We believe that ice formed around each tiny grain before they all clumped together to make rocks that eventually turned into carbonaceous chondrite asteroids,” adds Dr Bland.

The researchers came to their conclusions by studying the size of grains in Carbonaceous Chondrite fragments using powerful microscopes. The resulting data was input into computer models to determine how the water particles would have flowed through Carbonaceous Chondrite rock.
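The link between grain size and water flow can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only and is not the authors' model: it uses the standard Kozeny-Carman relation, with an assumed porosity and pressure gradient, to show how shrinking grains from 10 microns down to 100 nanometers cuts permeability by four orders of magnitude.

```python
# Illustrative sketch (not the study's actual model): the Kozeny-Carman
# relation ties permeability to grain diameter, so halving the grain
# size quarters the permeability.
def kozeny_carman_permeability(d, phi=0.2):
    """Permeability (m^2) for grain diameter d (m) and porosity phi (assumed)."""
    return d**2 * phi**3 / (180.0 * (1.0 - phi)**2)

def darcy_velocity(k, grad_p, mu=1.0e-3):
    """Darcy flux (m/s) for permeability k and pressure gradient grad_p (Pa/m)."""
    return k * grad_p / mu

for d in (1e-7, 1e-5):  # 100 nm grains vs 10-micron grains
    k = kozeny_carman_permeability(d)
    v = darcy_velocity(k, grad_p=1.0)  # assumed gradient of 1 Pa/m
    print(f"d = {d:.0e} m  ->  k = {k:.2e} m^2, flux = {v:.2e} m/s")
```

Because permeability scales with the square of grain diameter, the nanometre-scale grains the team measured imply fluxes many orders of magnitude below those of ordinary sand, consistent with water being effectively immobilized inside the rock.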

(Photo: ICL)

Imperial College London



In everything from computer processor chips to car engines to electric powerplants, the need to get rid of excess heat creates a major source of inefficiency. But new research points the way to a technology that might make it possible to harvest much of that wasted heat and turn it into usable electricity.

That kind of waste-energy harvesting might, for example, lead to cellphones with double the talk time, laptop computers that can operate twice as long before needing to be plugged in, or power plants that put out more electricity for a given amount of fuel, says Peter Hagelstein, co-author of a paper on the new concept appearing this month in the Journal of Applied Physics.

Hagelstein, an associate professor of electrical engineering at MIT, says existing solid-state devices to convert heat into electricity are not very efficient. The new research, carried out with graduate student Dennis Wu as part of his doctoral thesis, aimed to find how close realistic technology could come to achieving the theoretical limits for the efficiency of such conversion.

Theory says that such energy conversion can never exceed a specific value called the Carnot Limit, based on a 19th-century formula for determining the maximum efficiency that any device can achieve in converting heat into work. But current commercial thermoelectric devices only achieve about one-tenth of that limit, Hagelstein says. In experiments involving a different new technology, thermal diodes, Hagelstein worked with Yan Kucherov, now a consultant for the Naval Research Laboratory, and coworkers to demonstrate efficiency as high as 40 percent of the Carnot Limit. Moreover, the calculations show that this new kind of system could ultimately reach as much as 90 percent of that ceiling.
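The fractions of the Carnot Limit quoted above can be put on a common footing with a short calculation. The temperatures in the sketch below (600 K hot side, 300 K cold side) are assumed for illustration and are not figures from the paper.

```python
def carnot_limit(t_hot, t_cold):
    """Maximum efficiency of any heat engine operating between two
    absolute temperatures (kelvin)."""
    return 1.0 - t_cold / t_hot

# Assumed illustrative temperatures, not values from the paper.
eta_max = carnot_limit(600.0, 300.0)

print(f"Carnot limit:                 {eta_max:.0%}")
print(f"Typical thermoelectric (~10%): {0.10 * eta_max:.0%}")
print(f"Thermal-diode demo (40%):      {0.40 * eta_max:.0%}")
print(f"Projected ceiling (90%):       {0.90 * eta_max:.0%}")
```

Under these assumptions, moving from one-tenth of the Carnot Limit to 90 percent of it would raise absolute conversion efficiency from 5 percent to 45 percent.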

Hagelstein, Wu and others started from scratch rather than trying to improve the performance of existing devices. They carried out their analysis using a very simple system in which power was generated by a single quantum-dot device — a type of semiconductor in which the electrons and holes, which carry the electrical charges in the device, are very tightly confined in all three dimensions. By controlling all aspects of the device, they hoped to better understand how to design the ideal thermal-to-electric converter.

Hagelstein says that with present systems it’s possible to efficiently convert heat into electricity, but with very little power. It’s also possible to get plenty of electrical power — what is known as high-throughput power — from a less efficient, and therefore larger and more expensive system. “It’s a tradeoff. You either get high efficiency or high throughput,” says Hagelstein. But the team found that using their new system, it would be possible to get both at once, he says.

A key to the improved throughput was reducing the separation between the hot surface and the conversion device. A recent paper by MIT professor Gang Chen reported on an analysis showing that heat transfer could take place between very closely spaced surfaces at a rate that is orders of magnitude higher than predicted by theory. The new report takes that finding a step further, showing how the heat can not only be transferred, but converted into electricity so that it can be harnessed.

A company called MTPV Corp. (for Micron-gap Thermal Photo-Voltaics), founded by Robert DiMatteo SM ’96, MBA ’06, is already working on the development of “a new technology closely related to the work described in this paper,” Hagelstein says.

DiMatteo says he hopes eventually to commercialize Hagelstein’s new idea. In the meantime, he says the technology now being developed by his company, which he expects to have on the market next year, could produce a tenfold improvement in throughput power over existing photovoltaic devices, while the further advance described in this new paper could make an additional tenfold or greater improvement possible. The work described in this paper “is potentially a major finding,” he says.

DiMatteo says that worldwide, about 60 percent of all the energy produced by burning fuels or generated in powerplants is wasted, mostly as excess heat, and that this technology could “make it possible to reclaim a significant fraction of that wasted energy.”

When this work began around 2002, Hagelstein says, such devices “clearly could not be built. We started this as purely a theoretical exercise.” But developments since then have brought it much closer to reality.

While it may take a few years for the necessary technology for building affordable quantum-dot devices to reach commercialization, Hagelstein says, “there’s no reason, in principle, you couldn’t get another order of magnitude or more” improvement in throughput power, as well as an improvement in efficiency.

“There’s a gold mine in waste heat, if you could convert it,” he says. The first applications are likely to be in high-value systems such as computer chips, he says, but ultimately it could be useful in a wide variety of applications, including cars, planes and boats. “A lot of heat is generated to go places, and a lot is lost. If you could recover that, your transportation technology is going to work better.”

(Photo: MIT)




Researchers in MIT’s Department of Civil and Environmental Engineering believe they have pinpointed a pathway by which arsenic may be contaminating the drinking water in Bangladesh, a phenomenon that has puzzled scientists, world health agencies and the Bangladeshi government for nearly 30 years. The research suggests that human alteration to the landscape, the construction of villages with ponds, and the adoption of irrigated agriculture are responsible for the current pattern of arsenic concentration underground. The findings also indicate that drinking-water wells drilled to a greater depth would likely provide clean water.

Bangladesh is the seventh most populous country in the world, and tens of millions of its citizens have been exposed to arsenic in their water over the past several decades. As many as 3,000 Bangladeshis die from arsenic-induced cancer each year, and today approximately 2 million people in the country live with arsenic poisoning, which manifests as skin lesions and neurological disorders, and causes cardiovascular and pulmonary diseases and cancer. Allan H. Smith, a professor of epidemiology at the University of California, Berkeley, calls it “the largest mass poisoning of a population in history.”

This pervasive incidence of arsenic poisoning and its link to drinking water were first identified in the early 1980s. This was not long after the population began switching from surface water sources like rivers to groundwater from tube wells — part of a national effort to decrease the incidence of bacterial illnesses caused by contaminated drinking water. But most of the tube wells have been drilled to less than 100 feet, where they draw water directly from the arsenic-contaminated shallow aquifer. Scientists have struggled to understand how the arsenic, which is naturally occurring in the underground sediment of the Ganges Delta, is getting into the groundwater.

By 2002, a research team led by Charles Harvey, the Doherty Associate Professor of Civil and Environmental Engineering at MIT, had determined that microbial metabolism of organic carbon was mobilizing the arsenic off the soils and sediments, and that crop irrigation was almost certainly playing a role in the process. But the exact sources of the contaminated water have remained elusive, until now.

In a paper appearing online in Nature Geoscience Nov. 15, Harvey, former graduate students Rebecca Neumann and Khandaker Ashfaque and co-authors explain that ponds are the source of the organic carbon that presently mobilizes the arsenic in their 6-square-mile test site. The carbon settles to the bottom of the ponds, then seeps underground where microbes metabolize it. This creates the chemical conditions that cause arsenic to dissolve off the sediments and soils and into the groundwater.

“Our research shows that water from the ponds carries degradable organic carbon into the shallow aquifer. Groundwater flow, drawn by irrigation pumping, transports that pond water to the depth where dissolved arsenic concentrations are greatest and where it is then pumped up into the irrigation and drinking wells,” says Harvey, whose research was funded by the National Science Foundation and the Singapore-MIT Alliance for Research and Technology. “The other interesting thing we found in our test area is that the rice fields are a sink of arsenic — more arsenic goes in with the irrigation water than comes out in the groundwater.”

Scott Fendorf, a professor at Stanford University who studies arsenic content in soils and sediments along the Mekong River in Cambodia, says Harvey’s previous research, published in 2002, “transformed the scientific community’s outlook on the problem.” The current work, he adds, has two big ramifications: “It shows that human modifications are impacting the arsenic content in the groundwater; and that while the rice cropping system appears to be buffering the arsenic, the ponds excavated to provide fill to build up the villages are having a negative impact on the release of arsenic.”

Ashfaque and Neumann did extensive fieldwork in Bangladesh over a seven-year period, studying the hydrologic behavior and chemical nature of rice fields and ponds, developing an understanding of the surface and underground water flow patterns, and creating a 3-D model to track rice field and pond water as it traveled into and through the subsurface.

“When we compared the chemical signatures of the different water sources in our study area to the signatures of the aquifer water, we saw that water with high arsenic content originates from the human-built ponds, and water with lower arsenic content originates from the rice fields,” says Neumann. “It’s likely that these same processes are occurring at other sites, and it suggests that the problem could be alleviated by digging deeper drinking water wells below the influence of the ponds or by locating shallow drinking wells under rice fields.” The researchers suggest that irrigation wells remain at the shallow level.

Harvey is making plans for a multi-year study that would provide deep wells — deeper than 450 feet — for two villages in Bangladesh whose inhabitants suffer from arsenic poisoning. There they would combine continual testing of the well water and hydrogeological modeling of the groundwater system with a study of how the clean water affects the villagers’ health, placing special emphasis on the neurological development of children.

“There are all sorts of studies to show how arsenic hurts people. We’re trying to turn it around and show how removal of the arsenic will help them,” says Harvey.

(Photo: Christine Daniloff)




No one knows why massive formations of banded iron — some ultimately hundreds of kilometers long, like a sleeping giant’s suspenders — mysteriously began precipitating on Earth’s surface about 3.5 billion years ago. Or why, almost 2 billion years later, the precipitation ceased.

Because these deposits carry information about early Earth’s surface conditions and climate changes, as well as provide much of modern industry’s iron resources, interested researchers have cast a wide net in trying to explain why and how these bands formed. But attempts to explain their existence based on seasonal variations, surface temperature changes and episodic seawater mixing all have foundered on assumptions requiring the unexplained oscillations of external forces.

None of these theories could satisfactorily explain all the observations made by geologists, particularly the existence of alternating structural bands of silica-rich layers with iron-rich layers in these deposits.

A new approach proposed in the October issue of Nature Geoscience by Sandia National Laboratories principal investigator Yifeng Wang and colleagues elsewhere may have the answer.

A key component of the process, the researchers found in computer simulations, may have been the absence of aluminum in early oceanic rocks. That absence chemically favored the formation of banded iron formations. The continual enrichment of oceanic crust by aluminum as Earth evolved ultimately ended the era of iron band formation.

A complete thermodynamic explanation by the research team suggests that iron- and silicon-rich fluids were generated by hydrothermal action on the seafloor. The team’s calculations show that the formation of bands was generated by internal interactions of the chemical system, rather than from external forcing by unexplained changes such as ocean surface temperature variations.

“This concept of the self-organizational origin of banded iron formations is very important,” said Wang. “It allows us to explain a lot of things about them, like their occurrence and band thickness.”

Wang’s Ph.D. advisor, Enrique Merino at Indiana University, may have been the first to consider banded iron formations as formed through self-organization, Wang said: “We started to work on the issue about 15 years ago.” But difficulties in pinning down an actual mechanism persisted.

“Last year, Huifang Xu [at the University of Wisconsin at Madison] and I happened to talk about his work on astrobiology and then we talked about banded iron formations,” said Yifeng. “After that, I got interested again in the topic. Luckily, I came across a very recent publication on silicic acid interactions with metals. With these new data, I did thermodynamic calculations. I looked at the results and talked to both Huifang and Enrique. The whole banded-iron-formation puzzle started to fit together nicely.”

Merino and Xu coauthored the paper with Wang, along with Hironomi Konishi, also at the University of Wisconsin at Madison.

“Our work has two interesting implications,” said Wang. “The Earth’s surface can be divided into four interrelated parts: atmosphere, hydrosphere, biosphere and lithosphere. Our work shows that the lithosphere, that is, the solid rock part, plays a very important role in regulating the surface evolution of the Earth. This may have an implication on the studies of other planets such as Mars. Our work also shows that to understand such evolution requires a careful consideration of nonlinear interactions among different components in the system. Such consideration is important for prediction of modern climatic cycles.”

“After all,” he said, “Earth’s system is inherently complex and the involved processes couple with each other in nonlinear fashion.”

(Photo: Randy Montoya)

Sandia National Laboratories



Idaho National Laboratory (INL) scientists have set a new world record with next-generation particle fuel for use in high temperature gas reactors (HTGRs).

The Advanced Gas Reactor (AGR) Fuel Program, initiated by the Department of Energy in 2002, used INL’s unique Advanced Test Reactor (ATR) in a nearly three-year experiment to subject more than 300,000 nuclear fuel particles to an intense neutron field and temperatures around 1,250 degrees Celsius.

INL researchers say the fuel experiment set the record for particle fuel by consuming approximately 19 percent of its low-enriched uranium — more than double the previous record set by similar experiments run by German scientists in the 1980s and more than three times that achieved by current light water reactor (LWR) fuel. Additionally, none of the fuel particles experienced failure since entering the extreme neutron irradiation test environment of the ATR in December 2006.
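The stated ratios can be turned into rough bounds on the earlier records. The sketch below infers those bounds purely from the comparisons in the article (“more than double,” “more than three times”); the actual German and light water reactor burnup figures are not given here, so only upper bounds can be derived.

```python
# Burnup is expressed as the fraction of the low-enriched uranium
# consumed (the article's 19 percent figure).
agr1_burnup = 0.19

# Inferred upper bounds only: since AGR-1 was "more than double" the
# German record and "more than three times" LWR fuel, those earlier
# figures must lie below these values.
german_record_max = agr1_burnup / 2
lwr_fuel_max = agr1_burnup / 3

print(f"AGR-1 burnup:            {agr1_burnup:.1%}")
print(f"German record is below:  {german_record_max:.1%}")
print(f"LWR fuel is below:       {lwr_fuel_max:.1%}")
```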

"This level of performance is a major accomplishment," said Dr. David Petti, Director of the Very High Temperature Reactor Technology Development Office at the U.S. Department of Energy's INL.

The purpose of the fuel program is to develop this particle fuel, produce experimental data that demonstrates to the Nuclear Regulatory Commission that the fuel is robust and safe, and re-establish a U.S. fuel manufacturing capability for high temperature gas reactors. INL has been working with Babcock and Wilcox Inc., General Atomics, and Oak Ridge National Laboratory (ORNL) to establish standards and procedures for the manufacture of commercial-scale HTGR fuel. The overarching goal of the AGR Fuel Program is to qualify coated nuclear fuel particles for use in HTGRs such as the Next Generation Nuclear Plant (NGNP). Developing particle fuel capable of achieving very high burnup levels will also reduce the amount of used fuel that is generated by HTGRs.

"An important part of our mission is the development and exploration of advanced nuclear science and technology,” said Dr. Warren F. "Pete" Miller, assistant secretary for Nuclear Energy. “This achievement is an important step as we work to enable the next generation of reactors, decrease fossil fuel use in industrial applications, make fuel cycles more sustainable and reduce proliferation risks."

"AGR-1" is the first of eight similar experiments that aim to confirm the designs, fabrication processes and performance characteristics of such fuel. Future AGR fuel tests will include particle fuel produced on a prototypic industrial scale to further prove the irradiation performance of the NGNP-specific fuel design. The 18-foot-long AGR-1 experiment was inserted in INL's ATR core and allowed each of six capsules containing the particle fuel specimens to be monitored and controlled separately. Inside the ATR core, the fuel specimens were subjected to neutron irradiation many times higher than what they would experience inside an HTGR or a current light water reactor, allowing INL researchers to gain irradiation performance data for nuclear fuel and materials in a shorter time. The team is monitoring the AGR fuel for a number of factors including "burn-up," a measurement of the percentage of uranium fuel that has undergone fission reactions.

Although the experiment has now left the ATR, researchers still have more work to do before the AGR-1 test campaign will be finished. Post irradiation examination (PIE) will begin at INL and ORNL facilities and allow scientists to examine the fuel up close so that the fuel and its layers of coatings can be evaluated for degradation patterns and other characteristics. In addition, controlled higher temperature testing in furnaces is planned to determine the safety performance of the fuel under postulated accident conditions. These activities will last another two years.

The Next Generation Nuclear Plant Program aims to use a high-temperature gas reactor to produce high-temperature process heat and hydrogen used by many industrial facilities in daily operations and to support the broader goal of developing the next generation of nuclear power systems that provide abundant carbon-free electricity on a 24/7 basis. Excellent fuel irradiation performance must be demonstrated before high-temperature gas reactors can be licensed and co-located with these complementary industrial facilities. Reaching this world record peak burnup of 19 percent without any particle failure demonstrates the robustness of this particle fuel design.

(Photo: INL)

Idaho National Laboratory


Sleep deprivation adversely affects automatic, accurate responses and can lead to potentially devastating errors, a finding of particular concern among firefighters, police officers, soldiers and others who work in a sleep-deprived state, University of Texas at Austin researchers say.

Psychology professors Todd Maddox and David Schnyer found moderate sleep deprivation causes some people to shift from a faster and more accurate process of information categorization (information-integration) to a more controlled, explicit process (rule-based), resulting in negative effects on performance.

The researchers examined sleep deprivation effects on information-integration, a cognitive operation that relies heavily on implicit split-second, gut-feeling decisions.

"It's important to understand this domain of procedural learning because information-integration—the fast and accurate strategy—is critical in situations when soldiers need to make split-second decisions about whether a potential target is an enemy soldier, a civilian or one of their own," Maddox said.

The study examined information-integration tasks performed by 49 cadets at the United States Military Academy at West Point over the course of two days. The participants performed the task twice, separated by a 24-hour period, with or without sleep between sessions. Twenty-one cadets were placed in a sleep-deprivation group and 28 well-rested participants served as controls. The results revealed that moderate sleep deprivation can lead to an immediate short-term loss of information-integration thought processes.

Performance in the control group improved by 4.3 percentage points from the end of day one to the beginning of day two (accuracy increased from 74 percent to 78.3 percent); performance in the sleep-deprived group declined by 2.4 percentage points (accuracy decreased from 73.1 percent to 70.7 percent) over the same interval. The decline was much larger for those participants who shifted from an information-integration to a rule-based approach.
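The reported figures are differences in accuracy, that is, percentage points rather than relative percent changes. A two-line check recomputes them from the stated accuracies:

```python
# Recompute the reported changes from the stated accuracy figures.
# These are percentage-point differences, not relative percent changes.
control = (74.0, 78.3)   # (end of day one, start of day two)
deprived = (73.1, 70.7)

def change_points(before, after):
    """Difference in accuracy, in percentage points."""
    return round(after - before, 1)

print(change_points(*control))   # 4.3
print(change_points(*deprived))  # -2.4
```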

According to the findings, people who rely more on rule-based (over-thinking) strategies are more vulnerable to the ill effects of sleep deprivation. This is the first study that has explored this domain of procedural learning, Schnyer said.

Maddox and Schnyer were surprised to find the adverse effects of sleep deprivation on information processing varied among individuals. Schnyer believes this finding has implications for training purposes for high-pressure, life-and-death jobs, particularly the Army.

University of Texas




Selected Science News. Copyright 2008 All Rights Reserved