Thursday, July 1, 2010



A new process for storing and generating hydrogen to run fuel cells in cars has been invented by chemical engineers at Purdue University.

The process, given the name hydrothermolysis, uses a powdered chemical called ammonia borane, which has one of the highest hydrogen contents of all solid materials, said Arvind Varma, R. Games Slayter Distinguished Professor of Chemical Engineering and head of the School of Chemical Engineering.

"This is the first process to provide exceptionally high hydrogen yield values at near the fuel-cell operating temperatures without using a catalyst, making it promising for hydrogen-powered vehicles," he said. "We have a proof of concept."

The new process combines hydrolysis and thermolysis, two hydrogen-generating processes that are not practical by themselves for vehicle applications.

Research findings were presented June 15 during the International Symposium on Chemical Reaction Engineering in Philadelphia. The research also is detailed in a paper appearing online in the AIChE Journal, published by the American Institute of Chemical Engineers, and will be published in an upcoming issue of the journal.

Ammonia borane contains 19.6 percent hydrogen, a high weight percentage that means a relatively small quantity and volume of the material are needed to store large amounts of hydrogen, Varma said.
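The 19.6 percent figure can be checked directly from standard atomic masses; the quick sanity check below uses textbook values, which are not taken from the article itself:

```python
# Hydrogen weight fraction of ammonia borane (NH3BH3), checking the 19.6% figure.
# Atomic masses (g/mol) are standard periodic-table values.
M_H, M_B, M_N = 1.008, 10.811, 14.007

molar_mass = M_N + M_B + 6 * M_H          # NH3BH3 carries 6 hydrogen atoms
h_fraction = 6 * M_H / molar_mass

print(f"{h_fraction:.1%}")  # ~19.6%
```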

"The key is how to efficiently release the hydrogen from this compound, and that is what we have discovered," he said.

The paper was written by former Purdue doctoral student Moiz Diwan, now a senior research engineer at Abbott Laboratories in Chicago; Purdue postdoctoral researcher Hyun Tae Hwang; doctoral student Ahmad Al-Kukhun; and Varma. Purdue has filed a patent application on the technology.

In hydrolysis, water is combined with ammonia borane and the process requires a catalyst to generate hydrogen, while in thermolysis the material must be heated to more than 170 degrees Celsius, or more than 330 degrees Fahrenheit, to release sufficient quantities of hydrogen.

However, fuel cells that will be used in cars operate at about 85 degrees Celsius (185 degrees Fahrenheit). Hydrogen fuel cells generate electricity to run an electric motor.
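The two temperature thresholds quoted above are straightforward unit conversions:

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(c_to_f(170))  # 338.0 -- the "more than 330 F" thermolysis threshold
print(c_to_f(85))   # 185.0 -- typical fuel-cell operating temperature
```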

The new process also promises to harness waste heat from fuel cells to operate the hydrogen generation reactor, Varma said.

The researchers conducted experiments using a reactor vessel operating at the same temperature as fuel cells. The process requires maintaining the reactor at a pressure of less than 200 pounds per square inch, far lower than the 5,000 psi required for current hydrogen-powered test vehicles that use compressed hydrogen gas stored in tanks.

In some experiments, the researchers used water containing a form of hydrogen called deuterium. Using water containing deuterium instead of hydrogen enabled the researchers to trace how much hydrogen is generated from the hydrolysis reaction and how much from the thermolysis reaction, details critical to understanding the process.
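The tracing logic can be sketched as a simple atom count. If the water is deuterated, hydrogen atoms released by hydrolysis show up as D while atoms released by thermolysis of the ammonia borane show up as H, so the D/H split in the product gas apportions the yield between the two routes. The gas composition below is an invented illustration, not a measured value:

```python
# Illustrative mass balance for the deuterium-labelling idea (hypothetical numbers).
n_H2, n_HD, n_D2 = 0.60, 0.30, 0.10   # assumed mole fractions of product gas

d_atoms = n_HD + 2 * n_D2             # atoms contributed by the water (hydrolysis)
total_atoms = 2 * (n_H2 + n_HD + n_D2)

hydrolysis_share = d_atoms / total_atoms
print(f"hydrolysis contribution: {hydrolysis_share:.0%}")  # 25% in this example
```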

At the optimum conditions, hydrogen from the hydrothermolysis approach amounted to about 14 percent of the total weight of the ammonia borane and water used in the process. This is significantly higher than the hydrogen yields from other experimental systems reported in the scientific literature, Varma said.

"This is important because the U.S. Department of Energy has set a 2015 target of 5.5 weight percent hydrogen for hydrogen storage systems, meaning available hydrogen should be at least 5.5 percent of a system's total weight," he said. "If you're only yielding, say, 7 percent hydrogen from the material, you're not going to make this 5.5 percent requirement once you consider the combined weight of the entire system, which includes the reactor, tubing, the ammonia borane, water, valves and other required equipment."
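Varma's point can be put as a short calculation: a material-level yield gets diluted by the mass of the surrounding hardware. The material fraction used here is an invented illustration, not a measured figure:

```python
# System-level hydrogen yield = material yield x material's share of system mass.
material_yield = 0.07      # 7 wt% hydrogen from the storage material alone
material_fraction = 0.60   # assumed: storage material as a fraction of system mass

system_yield = material_yield * material_fraction
print(f"system-level yield: {system_yield:.1%}")   # 4.2% -- misses the 5.5% target

# Minimum material fraction needed to hit the DOE 5.5 wt% target at 7% material yield:
needed = 0.055 / material_yield
print(f"material must be {needed:.0%} of system mass")  # ~79%
```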

The researchers determined that a concentration of 77 percent ammonia borane is ideal for maximum hydrogen yield using the new process.

The research has been funded by the U.S. Department of Energy through a grant administered by the Energy Center in Purdue's Discovery Park.

Future work on hydrothermolysis will explore scaling up the reactor to the size required for a vehicle to drive 350 miles before refueling. Additional research also is needed to develop recycling technologies for turning waste residues produced in the process back into ammonia borane.

The technology may also be used to produce hydrogen for fuel cells to recharge batteries in portable electronics, such as notebook computers, cell phones, personal digital assistants, digital cameras, handheld medical diagnostic devices and defibrillators.

"The recycling isn't important for small-scale applications, such as portable electronics, but is needed before the process becomes practical for cars," Varma said.

(Photo: Purdue University/Andrew Hancock)

Purdue University


Hay fever sufferers will learn if the answer to their annual summer discomfort could already be available on supermarket shelves or even lurking in their fridge.

Experts at the Institute of Food Research, the University of East Anglia, and the Norfolk and Norwich University Hospital are investigating whether yoghurt-type drinks can bring relief to hay fever sufferers this summer.

A team of six Norwich Research Park researchers led by Professor Claudio Nicoletti from IFR will embark on the year-long study.

Professor Nicoletti has already completed a pilot study with IFR colleagues on a small group of people with seasonal allergic rhinitis (hay fever). In the first human study of its kind, they found that yoghurt-type drinks containing Lactobacillus casei Shirota (LcS) can modify the immune system's response to grass pollen.

The research is now being widened in collaboration with Dr Andrew Wilson, an expert in allergy and respiratory medicine at the university's school of Medicine Health Policy and Practice. The researchers are inviting volunteers who are troubled by hay fever to join the study to see if the drinks have an impact on the clinical symptoms of the problem.

It is estimated that hay fever affects over 600 million people, and numbers are rising. There is no known cure.

Hay fever, which can lead to asthma, causes significant discomfort, interrupts sleep and impairs concentration at school or work. It is estimated that UK businesses will lose around £324M through lost working days this summer alone.

"In this study we want to see if the immunological changes we previously discovered translate into a real reduction in the clinical symptoms of hayfever," said Professor Nicoletti. "We will also analyze the mechanisms involved."

Dr Wilson said: "The benefits of a reasonably cheap and self-administered non-drug 'treatment' are clear."

"Our study will also provide evidence on the viability of the many health claims around these products, which could result in clear guidance for the general public."

The team are starting their work immediately to coincide with the peak grass pollen season.

Biotechnology and Biological Sciences Research Council



Satellite imagery captured hundreds of miles from the Earth’s surface is being used to analyse the flood risks of some of the world’s largest regions, using data that researchers hope could become freely available in efforts to provide a more immediate response to natural disasters.

Using river flow and rain forecasts from global monitoring stations and space imagery that takes in vast areas of 400 by 400 km, geographical scientists from the University of Bristol are developing sophisticated flood forecasting models that map past and present global water levels and could predict future flow patterns to a new degree of accuracy.

The effective management of natural resources and natural disasters requires the use of near-real time data. However, existing technologies used to track changing water levels are spatially limited owing to the declining number of global monitoring stations and the heavy cost implications of collecting more frequent, timely satellite data.

A much cheaper alternative, as is now being applied by Bristol experts, is to use low spatial resolution data, which captures water level data more frequently and across larger expanses. The algorithm in development, a computerised problem-solving program, will allow for the automated retrieval of information relating to fluctuating water levels. When combined with a global flood forecasting model, this should provide improved estimates of future changes in river flows.
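One common way to combine a model forecast with a noisier satellite retrieval is a variance-weighted update, as in data assimilation. The sketch below is only an illustration of that general idea, not the Bristol algorithm, and all numbers are invented:

```python
def blend(forecast, obs, var_forecast, var_obs):
    """Variance-weighted average of forecast and observation (scalar Kalman-style update)."""
    gain = var_forecast / (var_forecast + var_obs)
    return forecast + gain * (obs - forecast)

# Hypothetical water levels (m): trust the forecast more when the satellite
# retrieval is coarse, so the update only nudges toward the observation.
level = blend(forecast=3.2, obs=3.8, var_forecast=0.04, var_obs=0.16)
print(round(level, 2))  # 3.32
```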

“The outcomes of undertaking this work would be better flood forecasts at large scales. And if this proves the utility of high temporal wide swath imagery for smaller scales, it will mean a quantum leap in the volume and resolution of data that can support flood modelling,” says Bristol’s Dr Guy Schumann, from the School for Geographical Sciences. “This is undoubtedly needed and will support planned satellite missions targeted at monitoring hydrological change.”

Conceptual analysis of the Tewkesbury floods of 2007 has already confirmed the feasibility of the proposed models, by accurately mapping the predictive algorithms onto spatial imagery of the water levels at different times during the floods.

The next step of the project will be to apply the same methodology, combining space-borne radar imagery and large-scale flood forecasting, to areas surrounding the Mississippi, the Amazon, the Nile, the Danube, the Po and the river system of Bangladesh.

The results will provide a benchmark data set of global floodplain water levels which, combined with satellite data, could ultimately be made freely available via a search-engine facility such as Google Earth, and provide a new resource in efforts to combat climate change.

(Photo: U. Bristol)

University of Bristol



Your mother was right: Fish really is “brain food.” And it seems that even pre-humans living as far back as 2 million years ago somehow knew it.

A team of researchers that included Johns Hopkins University geologist Naomi Levin has found that early hominids living in what is now northern Kenya ate a wider variety of foods than previously thought, including fish and aquatic animals such as turtles and crocodiles. Rich in protein and nutrients, these foods may have played a key role in the development of a larger, more human-like brain in our early forebears, which some anthropologists believe happened around 2 million years ago, according to the researchers’ study.

“Considering that growing a bigger brain requires many nutrients and calories, anthropologists have posited that adding meat to their diet was key to the development of a larger brain,” said Levin, an assistant professor in the Morton K. Blaustein Department of Earth and Planetary Sciences at Johns Hopkins’ Krieger School of Arts and Sciences. “Before now, we have never had such a wealth of data that actually demonstrates the wide variety of animal resources that early humans accessed.” Levin served as the main geologist on the team, which included scientists from the United States, South Africa, Kenya, Australia and the United Kingdom.

A paper on the study was published recently in Proceedings of the National Academy of Sciences and offers first-ever evidence of such dietary variety among early pre-humans.

In 2004, the team discovered a 1.95 million-year-old site in northern Kenya and spent four years excavating it, yielding thousands of fossilized tools and bones. According to the paper's lead author, David Braun of the University of Cape Town (South Africa), the site provided the right conditions to preserve those valuable artifacts.

“At sites of this age, we often consider ourselves lucky if we find any bone associated with stone tools. But here, we found everything from small bird bones to hippopotamus leg bones,” Braun said.

The preservation of the artifacts was so remarkable, in fact, that it allowed the team to meticulously and accurately reconstruct the environment, identifying numerous fossilized plant remains and extinct species that seem to be a sign that these early humans lived in a wet — and possibly even a marshy — environment.

“Results from stable isotopic analysis of the fossil teeth helped refine our picture of the paleoenvironment of the site, telling us that the majority of mammals at the site subsisted on grassy, well-watered resources,” Levin said. “Today, the Turkana region in northern Kenya is an extremely dry and harsh environment. So, clearly, the environment of this butchery site was very different 1.95 million years ago — this spot was much wetter and lush.”

Using a variety of techniques, the team was able to conclude that the hominids butchered at least 10 individual animals — including turtles, fish, crocodiles and antelopes — on the site for use as meals. Cut marks found on the bones indicate that the hominids used simple, sharp-edged stone tools to butcher their prey.

“It’s not clear to us how early humans acquired or processed the butchered meat, but it’s likely that it was eaten raw,” Levin said.

The team theorizes that the wet and marshy environment gave early pre-humans a way to increase the protein in their diets (and grow larger brains!) while possibly avoiding contact with larger carnivores, such as hyenas and lions.

(Photo: Johns Hopkins University)

Johns Hopkins University



Crayfish make surprisingly complex, cost-benefit calculations, finds a University of Maryland study - opening the door to a new line of research that may help unravel the cellular brain activity involved in human decisions.

The Maryland psychologists conclude that crayfish make an excellent, practical model for identifying the specific neural circuitry and neurochemistry of decision making. They believe their study is the first to isolate individual crayfish neurons involved in value-based decisions. Currently, there's no direct way to do this with a human brain.

The study will be published in the Proceedings of the Royal Society B, and is being released online by the journal today.

"Matching individual neurons to the decision making processes in the human brain is simply impractical for now," explains University of Maryland psychologist Jens Herberholz, the study's senior author.

"History has shown that findings made in the invertebrate nervous systems often translate to more complex organisms. It's unlikely to be exactly the same, but it can inform our understanding of the human brain, nonetheless. The basic organization of neurons and the underlying neurochemistry are similar, involving serotonin and dopamine, for example."

Herberholz adds that his lab's work may inform ongoing studies in rodents and primates. "Combining the findings from different animal models is the only practical approach to work out the complexities of human decision making at the cellular level."

The experiments offered the crayfish stark decisions - a choice between finding their next meal and becoming a meal for an apparent predator. In deciding on a course of action, they carefully weighed the risk of attack against the expected reward, Herberholz says.

Using a non-invasive method that allowed the crustaceans to freely move, the researchers offered juvenile Louisiana Red Swamp crayfish a simultaneous threat and reward: ahead lay the scent of food, but also the apparent approach of a predator.

In some cases, the "predator" (actually a shadow) appeared to be moving swiftly, in others slowly. To up the ante, the researchers also varied the intensity of the odor of food.

How would the animals react? Did the risk of being eaten outweigh their desire to feed? Should they "freeze" - in effect, play dead, hoping the predator would pass by, while the crayfish remained close to its meal - or move away from both the predator and food?

To make a quick escape, the crayfish flip their tails and swim backwards, an action preceded by a strong, measurable electric neural impulse. The specially designed tanks could non-invasively pick up and record these electrical signals. This allowed the researchers to identify the activation patterns of specific neurons during the decision-making process.

Although tail-flipping is a very effective escape strategy against natural predators, it adds critical distance between a foraging animal and its next meal.

The crayfish took decisive action in a matter of milliseconds. When faced with very fast shadows, they were significantly more likely to freeze than tail-flip away.

The researchers conclude that there is little incentive for retreat when the predator appears to be moving too rapidly for escape, and the crayfish would lose its own opportunity to eat. This was also true when the food odor was the strongest, raising the benefit of staying close to the expected reward. A strong predator stimulus, however, was able to override an attractive food signal, and crayfish decided to flip away under these conditions.
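The trade-off described above can be caricatured as a tiny expected-value comparison. This is a toy model with invented payoffs, not the researchers' analysis, but it reproduces the reported pattern (freeze when the shadow is fast or the food odor is strong; flip away otherwise):

```python
def choose(predator_speed, food_odor):
    """Return 'freeze' or 'flip' by comparing rough expected values (toy model).

    predator_speed and food_odor are normalized to 0..1.
    """
    escape_chance = max(0.0, 1.0 - predator_speed)   # a fast shadow is hard to outrun
    value_flip = escape_chance * 1.0                  # survive, but forfeit the meal
    value_freeze = 0.5 + food_odor * 0.5              # stay near the reward, accept risk
    return "freeze" if value_freeze >= value_flip else "flip"

print(choose(predator_speed=0.9, food_odor=0.8))  # freeze: too fast to escape anyway
print(choose(predator_speed=0.2, food_odor=0.1))  # flip: escape is cheap, reward weak
```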

"Our results indicate that when the respective values of tail-flipping and freezing change, the crayfish adjust their choices accordingly, thus preserving adaptive action selection," the researchers write. "We have now shown that crayfish, similar to organisms of higher complexity, integrate different sensory stimuli that are present in their environment, and they select a behavioural output according to the current values for each choice."

The next step is to identify the specific cellular and neurochemical mechanisms involved in crayfish decisions, which is more feasible in an animal with fewer and accessible neurons, Herberholz says. That research is now underway.

(Photo: David D. Yager/Jens Herberholz)

University of Maryland



An international team of scientists from the MINOS experiment at the Fermi National Accelerator Laboratory (Fermilab) announced the world’s most precise measurement to date of the parameters that govern antineutrino oscillations, the back-and-forth transformations of antineutrinos from one type to another. This result provides information about the difference in mass between different antineutrino types. The measurement showed an unexpected variance in the values for neutrinos and antineutrinos. This mass difference parameter, called Δm2 (“delta m squared”), is smaller by approximately 40 percent for neutrinos than for antineutrinos.

However, there is still a five percent probability that Δm2 is actually the same for neutrinos and antineutrinos. With such a level of uncertainty, MINOS physicists need more data and analysis to know for certain if the variance is real.

Neutrinos and antineutrinos behave differently in many respects, but the MINOS results, presented today at the Neutrino 2010 conference in Athens, Greece and in a colloquium at Fermilab, are the first observation of a potential fundamental difference that established physical theory could not explain.

“Everything we know up to now about neutrinos would tell you that our measured mass difference parameters should be very similar for neutrinos and antineutrinos,” said MINOS co-spokesperson Rob Plunkett. “If this result holds up, it would signal a fundamentally new property of the neutrino-antineutrino system. The implications of this difference for the physics of the universe would be profound.”

“We do know that a difference of this size in the behaviour of neutrinos and antineutrinos could not be explained by current theory,” said MINOS co-spokesperson Jenny Thomas of University College London. "While the neutrinos and antineutrinos do behave differently on their journey through the Earth, the Standard Model predicts the effect is immeasurably small in the MINOS experiment. Clearly, more antineutrino running is essential to clarify whether this effect is just due to a statistical fluctuation or the first hint of new physics.”

The NUMI beam is capable of producing intense beams of either antineutrinos or neutrinos. This capability allowed the experimenters to measure the mass difference parameters separately for neutrinos and antineutrinos. The measurement also relies on the unique characteristics of the MINOS detector, particularly its magnetic field, which allows the detector to separate the positively and negatively charged muons resulting from interactions of antineutrinos and neutrinos, respectively. MINOS scientists have also updated their measurement of the standard oscillation parameters for muon neutrinos, providing an extremely precise value of Δm2.

Muon antineutrinos are produced in a beam originating in Fermilab's Main Injector. The antineutrinos’ extremely rare interactions with matter allow most of them to pass through the Earth unperturbed. A small number, however, interact in the MINOS detector, located 735 km away from Fermilab in Soudan, Minnesota. During their journey, which lasts 2.5 milliseconds, the particles oscillate in a process governed by a difference between their mass states.
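The standard two-flavor oscillation formula, P(survival) = 1 − sin²(2θ)·sin²(1.27·Δm²·L/E) with Δm² in eV², L in km and E in GeV, shows how the mass difference governs the process over the 735 km baseline. The Δm², mixing and energy values below are illustrative assumptions, not the MINOS measurement:

```python
import math

def survival(dm2_eV2, sin2_2theta, L_km, E_GeV):
    """Two-flavor muon-neutrino survival probability."""
    return 1 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

L = 735.0  # Fermilab -> Soudan baseline (km), as given above
print(round(survival(2.4e-3, 1.0, L, 3.0), 3))  # survival probability at 3 GeV

# Sanity check on the quoted travel time: 735 km at the speed of light.
print(round(735e3 / 299792458 * 1e3, 2), "ms")  # ~2.45 ms
```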

(Photo: Fermilab)

Science and Technology Facilities Council



Scientists at the Carnegie Institution’s Geophysical Laboratory, with colleagues, have discovered a much higher water content in the Moon’s interior than previous studies had suggested. Their research suggests that the water, which is a component of the lunar rocks, was preserved from the hot magma that was present when the Moon began to form some 4.5 billion years ago, and that it is likely widespread in the Moon’s interior.

The research is published in the on-line early edition of the Proceedings of the National Academy of Sciences the week of June 14.

“For over 40 years we thought the Moon was dry. The bulk water content of the Moon was estimated to be less than 1 ppb, which would make the Moon at least six orders of magnitude drier than the interiors of Earth and Mars,” remarked lead author Francis McCubbin. “In our study we looked at hydroxyl in the mineral apatite—the only hydrous mineral in the assemblage of minerals we examined in two Apollo samples and a lunar meteorite.”

Summarizing his team’s method and results, McCubbin explained, “In the lab of colleague Erik Hauri at Carnegie's Department of Terrestrial Magnetism, we used secondary ion mass spectrometry (SIMS), which can detect elements in the parts-per-million range. Combining these measurements with models that characterize how the material crystallized as the Moon cooled during formation, we found that the minimum water content ranged from 64 parts per billion to 5 parts per million—at least two orders of magnitude greater than previous results.”

The prevailing belief is that the Moon came from a giant-impact event, when a Mars-sized object hit the Earth and the ejected material coalesced into the Moon. From two of the samples, the Carnegie scientists determined that water was likely present very early in the formation history as the hot magma started to cool and crystallize. This result means that water is native to the Moon.

The previous studies showing water on the Moon analyzed volcanic glasses. These researchers looked within KREEP-rich rocks, which come from the last stages of crystallization. KREEP rocks are enriched in potassium (K), rare earth elements (REE), phosphorus (P), and other heat-producing elements such as uranium and thorium. “Since water is insoluble in the main silicates that crystallized, we believed that it should have concentrated in the KREEP,” explained coauthor Andrew Steele. “That’s why we selected it to analyze.”

The researchers specifically studied hydroxyl, a group consisting of an oxygen atom bonded to a hydrogen atom, in the mineral apatite—the only hydrous mineral in the assemblage. After initial analyses, the scientists excluded one of the Apollo samples from further study because it was unlikely to yield good information about magmatic water content. They concentrated on the other Apollo sample and the lunar meteorite to determine water in the lunar interior.

“It is gratifying to see this proof of the OH contents in lunar apatite,” remarked lunar scientist Bradley Jolliff of Washington University in St. Louis. “The concentrations are very low and, accordingly, they have been until recently nearly impossible to detect. We can now finally begin to consider the implications—and the origin—of water in the interior of the Moon.”

(Photo: LBT Collaboration / R. Cerisola)

Carnegie Institution



Scientists everywhere are trying to study the electrical properties of single molecules. With controlled stretching of such molecules, Cornell researchers have demonstrated that single-molecule devices can serve as powerful new tools for fundamental science experiments. Their work has resulted in detailed tests of long-existing theories on how electrons interact at the nanoscale.

The work, led by professor of physics Dan Ralph, is published in the June 10 online edition of the journal Science. First author is Joshua Parks, a former graduate student in Ralph's lab.

The scientists studied particular cobalt-based molecules with so-called intrinsic spin -- a quantized amount of angular momentum. Theories first postulated in the 1980s predicted that molecular spin would alter the interaction between electrons in the molecule and conduction electrons surrounding it, and that this interaction would determine how easily electrons flow through the molecule. Before now, these theories had not been tested in detail because of the difficulties involved in making devices with controlled spins.

Understanding single-molecule electronics requires expertise in both chemistry and physics, and Cornell's team has specialists in both.

"People know about high-spin molecules, but no one has been able to bring together the chemistry and physics to make controlled contact with these high-spin molecules," Ralph said.

The researchers made their observations by stretching individual spin-containing molecules between two electrodes and analyzing their electrical properties. They watched electrons flow through the cobalt complex, cooled to extremely low temperatures, while slowly pulling on the ends to stretch it. At a particular point, it became more difficult to pass current through the molecule. The researchers had subtly changed the magnetic properties of the molecule by making it less symmetric.

After releasing the tension, the molecule returned to its original shape and began passing current more easily -- showing that the molecule had not been harmed. Measurements as a function of temperature, magnetic field and the extent of stretching gave the team new insights into exactly how molecular spin influences electron interactions and electron flow.

The effects of high spin on the electrical properties of nanoscale devices were entirely theoretical issues before the Cornell work, Ralph said. By making devices containing individual high-spin molecules and using stretching to control the spin, the Cornell team proved that such devices can serve as a powerful laboratory for addressing these fundamental scientific questions.

(Photo: Joshua Parks)

Cornell University


Say the words “stem cells” and most people envision new therapies that replace brain cells lost to disease or worn-out hearts.

In atherosclerosis, however, too many stem cells are a bad thing, according to a new study from researchers at Columbia University Medical Center and published online in Science.

In their study, excess numbers of stem cells in the bone marrow of mice accelerated the disease’s progression. The researchers found that large numbers of bone marrow stem cells create excessive numbers of white blood cells, which flock to cholesterol deposits on the artery wall, enlarging and inflaming them.

The role of bone marrow stem cells was a surprise, says the study’s lead investigator, Alan Tall, MD, the Tilden-Weger-Bieler Professor of Medicine, professor of physiology and cellular biophysics, and director of the Cardiovascular Research Initiative at Columbia University Medical Center.

“It’s been known for decades that a high white cell count is associated with atherosclerosis, but it’s been assumed that the white blood cells were simply a sign of inflammation,” Dr. Tall says. “No one had actually investigated the connection.”

With two researchers in his lab – Laurent Yvan-Charvet, PhD, and Tamara Pagler, PhD – Dr. Tall traced the increase in white blood cells back to a proliferation of blood cell-producing stem cells in the bone marrow. The rapid proliferation of bone marrow stem cells appears to be triggered by excess cholesterol in their cell membranes, which makes them hyper-responsive to growth factors.

The finding changes the traditional view of atherosclerosis as a disease that happens only in the blood vessels. “Our study says atherosclerosis is more complex than that, and we need to look beyond the artery wall to understand the disease more fully,” Dr. Tall says.

Reducing the number of bone marrow stem cells could help prevent atherosclerosis and heart disease, so the researchers also looked for factors that control bone marrow stem cell proliferation.

In another surprise, they found that HDL (a.k.a. “good cholesterol”) – already well-known for its role in removing LDL (“bad” cholesterol) from arteries – also helps prevent atherosclerosis by suppressing bone marrow stem cell proliferation.

Though the treatment of atherosclerosis has been revolutionized by statins, which reduce LDL, large numbers of people are still at risk of heart disease because of low HDL.

HDL can be raised through exercise, moderate alcohol consumption, and drugs such as niacin, but interventions specifically targeted at HDL have not yet been directly investigated for their ability to prevent heart disease.

New treatments may be on the horizon as HDL-raising therapies have moved again to the forefront of atherosclerosis clinical research. Several large clinical studies are ongoing, and results should be available in the next few years. And two other recent studies, also published online in Science in May, have discovered new molecules that regulate HDL, a finding that may lead to a new class of HDL-raising drugs.

Measuring the effectiveness of HDL stimulated by these new drugs remains a challenge in drug development, and the finding that HDL may suppress stem cell proliferation and decrease white cell counts gives researchers a new method for assessing the efficacy of new HDL-raising drugs.

Columbia University




Selected Science News. Copyright 2008. All Rights Reserved.