Wednesday, September 23, 2009



Scientists at the Naval Research Laboratory are developing a new technology for underwater acoustics that uses flashes of laser light to remotely create sound underwater. The new acoustic source has the potential to expand and improve both Naval and commercial underwater acoustic applications, including undersea communications, navigation, and acoustic imaging. Dr. Ted Jones, a physicist in the Plasma Physics Division, is leading a team of researchers from the Plasma Physics, Acoustics, and Marine Geosciences Divisions in developing this acoustic source.

Efficient conversion of light into sound can be achieved by concentrating the light sufficiently to ionize a small amount of water, which then absorbs laser energy and superheats. The result is a small explosion of steam, which can generate a 220-decibel pulse of sound. Very intense laser light alters the optical properties of water so that the water itself acts like a focusing lens, allowing nonlinear self-focusing (NSF) to take place. In addition, the slightly different colors of the laser, which travel at different speeds in water due to group velocity dispersion (GVD), can be arranged so that the pulse also compresses in time as it travels through water, further concentrating the light. By using a combination of GVD and NSF, controlled underwater compression of optical pulses can be attained.
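For a rough sense of scale, underwater sound levels are conventionally referenced to 1 micropascal; assuming that convention (the article does not state its reference level), a 220-decibel pulse can be converted to a peak pressure like this:

```python
def db_to_pressure_pa(level_db, p_ref_pa=1e-6):
    """Convert a sound pressure level in dB (re p_ref) to pascals.

    Underwater acoustics conventionally uses a reference pressure
    of 1 micropascal (1e-6 Pa); this is an assumed convention here.
    """
    return p_ref_pa * 10 ** (level_db / 20.0)

# A 220 dB re 1 uPa pulse corresponds to a peak pressure of:
pressure = db_to_pressure_pa(220)
print(f"{pressure:.0f} Pa")  # 100000 Pa
```

That works out to roughly 100 kilopascals, on the order of one atmosphere of peak pressure.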

The driving laser pulse has the ability to travel through both air and water, so that a compact laser on either an underwater or airborne platform can be used for remote acoustic generation. Since GVD and NSF effects are much stronger in water than air, a properly tailored laser has the ability to travel many hundreds of meters through air, remaining relatively unchanged, then quickly compress upon entry into the water. Atmospheric laser propagation is useful for applications where airborne lasers produce underwater acoustic signals without any required hardware in the water, such as undersea communications from aircraft.

Also, commercially available, high-repetition-rate pulsed lasers, steered by a rapidly movable mirror, can generate arbitrary arrays of phased acoustic sources. On a compact underwater platform with an acoustic receiver, such a setup can rapidly generate oblique-angle acoustic scattering data, for imaging and identifying underwater objects. This would be a significant addition to traditional direct backscattering acoustic data.

(Photo: NRL)

U.S. Navy



You've heard about flower power. What about tree power? It turns out that it's there, in small but measurable quantities. There's enough power in trees for University of Washington researchers to run an electronic circuit, according to results to be published in an upcoming issue of the Institute of Electrical and Electronics Engineers' Transactions on Nanotechnology.

"As far as we know this is the first peer-reviewed paper of someone powering something entirely by sticking electrodes into a tree," said co-author Babak Parviz, a UW associate professor of electrical engineering.

A study last year from the Massachusetts Institute of Technology found that plants generate a voltage of up to 200 millivolts when one electrode is placed in a plant and the other in the surrounding soil. Those researchers have since started a company developing forest sensors that exploit this new power source.

The UW team sought to further academic research in the field of tree power by building circuits to run off that energy. They successfully ran a circuit solely off tree power for the first time.

Co-author Carlton Himes, a UW undergraduate student, spent last summer exploring likely sites. Hooking nails to trees and connecting a voltmeter, he found that bigleaf maples, common on the UW campus, generate a steady voltage of up to a few hundred millivolts.

The UW team next built a device that could run on the available power. Co-author Brian Otis, a UW assistant professor of electrical engineering, led the development of a boost converter, a device that takes a low incoming voltage and boosts it to a higher output voltage. His team's custom boost converter works for input voltages as low as 20 millivolts (a millivolt is one-thousandth of a volt), an input voltage lower than any existing such device. It produces an output voltage of 1.1 volts, enough to run low-power sensors.
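The article does not describe the converter's topology, but for a standard boost converter in continuous conduction the ideal relation V_out = V_in / (1 - D) gives a feel for how demanding a 20-millivolt-to-1.1-volt conversion is. This sketch assumes that idealized, lossless relation:

```python
def ideal_boost_duty_cycle(v_in, v_out):
    """Duty cycle D for an ideal (lossless) boost converter,
    using the continuous-conduction relation V_out = V_in / (1 - D)."""
    return 1.0 - v_in / v_out

# 20 mV in, 1.1 V out -- the converter's switch must be "on"
# for about 98% of each switching cycle:
d = ideal_boost_duty_cycle(0.020, 1.1)
print(f"duty cycle ~ {d:.3f}")  # duty cycle ~ 0.982
```

A real converter at these voltages would be dominated by losses, so the actual circuit is surely more subtle; this is only the ideal-case arithmetic.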

The UW circuit is built from components fabricated at the 130-nanometer scale, and it consumes on average just 10 nanowatts of power during operation (a nanowatt is one billionth of a watt).

"Normal electronics are not going to run on the types of voltages and currents that we get out of a tree. But the nanoscale is not just in size, but also in the energy and power consumption," Parviz said.

"As new generations of technology come online," he added, "I think it's warranted to look back at what's doable or what's not doable in terms of a power source."

Despite using special low-power devices, the boost converter and other electronics would spend most of their time in sleep mode in order to conserve energy, creating a complication.

"If everything goes to sleep, the system will never wake up," Otis said.

To solve this problem, Otis' team built a clock that runs continuously on 1 nanowatt, about a thousandth the power required to run a wristwatch, and when turned on operates at 350 millivolts, about a quarter the voltage of an AA battery. The low-power clock produces an electrical pulse once every few seconds, allowing a periodic wakeup of the system.
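For scale, a back-of-the-envelope calculation of the energy drawn between wake-up pulses; the pulse interval is an assumed value, since the article says only "once every few seconds":

```python
clock_power_w = 1e-9      # 1 nanowatt, from the article
pulse_interval_s = 3.0    # "once every few seconds" (assumed value)

# Energy drawn from the tree between wake-up pulses:
energy_per_cycle_j = clock_power_w * pulse_interval_s
print(energy_per_cycle_j)  # ~3e-9 J, about 3 nanojoules per cycle
```

A few nanojoules per cycle is comfortably within what a few hundred millivolts of tree-generated potential can supply through the boost converter.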

The tree-power phenomenon is different from the popular potato or lemon experiment, in which two different metals react with the food to create an electric potential difference that causes a current to flow.

"We specifically didn't want to confuse this effect with the potato effect, so we used the same metal for both electrodes," Parviz said.

Tree power is unlikely to replace solar power for most applications, Parviz admits. But the system could provide a low-cost option for powering tree sensors that might be used to detect environmental conditions or forest fires. The electronic output could also be used to gauge a tree's health.

"It's not exactly established where these voltages come from. But there seems to be some signaling in trees, similar to what happens in the human body but with slower speed," Parviz said. "I'm interested in applying our results as a way of investigating what the tree is doing. When you go to the doctor, the first thing that they measure is your pulse. We don't really have something similar for trees."

(Photo: University of Washington)

University of Washington



Biologists at the University of California, Riverside report new evidence for evolutionary change recorded in both the fossil record and the genomes (or genetic blueprints) of living organisms, providing fresh support for Charles Darwin’s theory of evolution.

The researchers were able to correlate the progressive loss of enamel in the fossil record with a simultaneous molecular decay of a gene, called the enamelin gene, that is involved in enamel formation in mammals.

Enamel is the hardest substance in the vertebrate body, and most mammals have teeth capped with it.

Examples exist, however, of mammals without mineralized teeth (e.g., baleen whales, anteaters, pangolins) and of mammals with teeth that lack enamel (e.g., sloths, aardvarks, and pygmy sperm whales). Further, the fossil record documents when enamel was lost in these lineages.

“The fossil record is almost entirely limited to hard tissues such as bones and teeth,” said Mark Springer, a professor of biology, who led the study. “Given this limitation, there are very few opportunities to examine the co-evolution of genes in the genome of living organisms and morphological features preserved in the fossil record.”

In 2007, Springer, along with Robert Meredith and John Gatesy in the Department of Biology at UC Riverside, initiated a study of enamelless mammals that focused on the enamelin gene. They predicted that these species would still have copies of the gene that codes for the tooth-specific enamelin protein, but that these copies would show evidence of molecular decay.

“Mammals without enamel are descended from ancestral forms that had teeth with enamel,” Springer said. “We predicted that enamel-specific genes such as enamelin would show evidence in living organisms of molecular decay because these genes are vestigial and no longer necessary for survival.”

Now his lab has found evidence of such molecular “cavities” in the genomes of living organisms. Using modern gene sequencing technology, Meredith discovered mutations in the enamelin gene that disrupt how the enamelin protein is coded, resulting in obliteration of the genetic blueprint for the enamelin protein.
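The kind of disruption described here, frameshift and nonsense mutations that garble a protein's coding sequence, can be illustrated with a toy scan for premature in-frame stop codons. The sequences below are invented for illustration, not actual enamelin data:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def premature_stops(coding_seq):
    """Return base positions of in-frame stop codons that occur
    before the final codon of a coding sequence."""
    codons = [coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3)]
    return [i * 3 for i, c in enumerate(codons[:-1]) if c in STOP_CODONS]

# Invented example: deleting one base after the start codon shifts
# the reading frame and brings a TGA stop codon into frame early.
intact  = "ATGGTGAGCAAATAA"
mutated = "ATGTGAGCAAATAA"   # single-base deletion (frameshift)

print(premature_stops(intact))   # [] -- no premature stops
print(premature_stops(mutated))  # [3] -- truncated "protein"
```

In a real pseudogene analysis the disruptions accumulate over millions of years; this sketch only shows why a single deletion is enough to obliterate the blueprint downstream of it.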

Results of the study appeared in the Sept. 4 issue of the open-access journal PLoS Genetics.

Darwin argued that all organisms are descended from one or a few organisms and that natural selection drives evolutionary change. The fossil record demonstrates that the first mammals had teeth with enamel. Mammals without enamel therefore must have descended from mammals with enamel-covered teeth.

“We could therefore predict that nonfunctional vestiges of the genes that code for enamel should be found in mammals that lack enamel,” Springer said. “When we made our predictions, however, we did not have sequences for the enamelin gene in toothless and enamelless mammals. Since then my lab has worked on obtaining these sequences so we could test our prediction.”

Previous studies in evolutionary biology have provided only limited evidence linking morphological degeneration in the fossil record to molecular decay in the genome. The study led by Springer takes advantage of the hardness of enamel and teeth to provide more robust evidence for the linkage.

“The molecular counterpart to vestigial organs is pseudogenes that are descended from formerly functional genes,” Springer explained. “In our research we clearly see the parallel evolution of enamel loss in the fossil record and the molecular decay of the enamelin gene into a pseudogene in representatives of four different orders of mammals that have lost enamel.”

Broadly, the research involved the following steps: First, Meredith collected the DNA sequences for the enamelin gene in different mammals. Next, the researchers analyzed sequences using a variety of molecular evolutionary methods, including new approaches developed by Springer’s group. Finally, the group used the results of their analyses to test previous hypotheses and generate new ones.

“Currently, we are actively engaged in deciphering the evolutionary history of other genes that are involved in enamel formation,” Springer said.

(Photo: John Gatesy and Carl Buell)



Our minds might be able to find the right word quicker than Google and yet we rarely pause to think about how language shapes everything we do.

One person who has delved a little bit deeper is Applied Linguistics Professor Vivian Cook, of Newcastle University, who has been intrigued by the subtle nuances and ever-evolving nature of the English language for the past thirty years.

He suggests that the words we use say much more about us than we might realise. For example, we can do a great deal to appear younger or older than we actually are, but our instinctive use of language often gives away our real age.

He devised a simple test for his new book, It’s all in a Word (published 17 September 2009), which shows how the words we choose can betray our age.

“Some words do not die out, only the people who use them,” explained Professor Cook. “To a certain extent we are labelled by the words of our generation and carry them with us, but this explanation does not always work.

“For example, to me a word like ‘chap’ is very much an older-generation word, but it has been going strong since 1716 and is still used by many people today, of all ages.”

Professor Cook suggests that many of us adopt the words we think are suitable for our years and therefore constantly adapt our vocabulary as we grow older.

And it’s not just what we say about ourselves, but how we address others that says a great deal about how much respect we have for them and how comfortable we feel around them.

“In politics, where being in touch with the public is seen as vitally important, we can learn a lot from how politicians are addressed by both the media and their colleagues,” said Professor Cook. “When Tony Blair was in Number 10, no one ever referred to him as ‘Anthony’, preferring the more informal ‘Tony’, but I can’t imagine anyone calling Gordon Brown ‘Gordie’.”

Shortening someone’s name or adding ‘ie’ or ‘y’ on the end gives away your attitude towards them. Short forms show familiarity, while adding an ‘ie’ or ‘y’ can show a layer of condescension.

In his book, Professor Cook’s ‘Mateyness Scale’ asks readers to think about how accessible politicians, both past and present, appear by rating them according to how they are often referred to by a particular version of their name. For example, is the former American president George W (distant), Dubya (fairly friendly) or Georgie (friendly) Bush?

The words we use can also often define our heritage long after we have moved to another part of the country. Whether we hoy, cob or bung something when we throw it shows roots in the North East, the Midlands, or the South East/West respectively.

Names for alleyways also vary widely across the UK, ranging from a snicket (West Midlands) to a wynd, lane or close (Scotland), a jetty (East Midlands) and a folly (East Anglia), among others.

“Surprisingly little is known about everyday dialect words such as these, as most researchers have collected the vocabulary of people living in the countryside, known affectionately as NORMs – non-mobile, older, rural males – rather than ordinary people living in towns,” explained Professor Cook.

As a Southerner moving up to Newcastle, Professor Cook said it took him some time to adjust to stotties (a kind of bun), chares (alleyways) and slippy (slippery) pavements, not to mention the weather forecaster who warns viewers to look out for skitey bits (ice patches).

His book covers many different aspects of words, ranging from their forms to their meanings and how new words are created, to how they organise society and help us think.

“English is a voracious language,” said Professor Cook. “For centuries it has gobbled up words and meanings from all kinds of sources and cultures as well as being a magnet for originality and invention.

“However much we know about words, there’s always something new to learn, which makes it so fascinating. Words and language are crucial in everything we do and the underlying message of this book is to encourage a wider interest in vocabulary.”

It’s all in a Word shows how English has travelled across the world and what language says about us, including practical linguistics tips such as how to learn new vocabulary and a bluffer’s guide to sounding like a wine expert.

(Photo: Newcastle U.)

Newcastle University



Every movement and every thought requires the passing of specific information between networks of nerve cells. To improve a skill or to learn something new entails making cell contacts more efficient or more numerous. Scientists at the Max Planck Institute of Neurobiology in Martinsried, together with an international team of researchers, have now shown that certain cells in the brain, the astrocytes, actively influence this information exchange.

Until now, astrocytes were thought to play their main role in the development and nutrition of the brain's nerve cells. The new findings improve our understanding of how the brain learns and remembers. They could also aid basic research into diseases such as epilepsy and amyotrophic lateral sclerosis (ALS).

To live is to learn: even fruit flies can learn to avoid detrimental odors, and in humans, too, most abilities are based on what we learn through practice and experience. Thus we are able both to perform fundamental processes such as walking and speaking and to master complex tasks such as logical reasoning and social interaction.

In order to learn something, i.e. to process new information, nerve cells grow new connections or strengthen existing contact points. At such contact points, the synapses, information is passed from one cell to the next. Once a synapse is created, new information has a means to be passed on, and the information is learned. Enhancing an acquired skill through practice is then accomplished by strengthening the synapses involved: incoming information elicits a much stronger response in the downstream nerve cell when passing through a strengthened synapse than through a "normal" one.

At the cellular level, this can be envisioned as follows. At a synapse, the two communicating nerve cells do not come into direct contact but are separated by a small gap. When incoming information reaches the synapse, the neurotransmitter glutamate is released into the gap. These transmitter molecules cross the gap and bind to special receptors on the downstream nerve cell, which in turn prompts the downstream cell to pass on the information. In a strengthened synapse, the informing cell releases more glutamate into the synaptic gap and/or the informed cell is more efficient at binding the glutamate. As a result, information transmission is significantly enhanced.

In the brain, parts of nerve cells and single synapses are often enclosed by star-shaped cells, the astrocytes. So far, astrocytes were mainly thought to aid nerve cells - for example by supporting them or by promoting the maturation of synapses. Scientists of the Max Planck Institute of Neurobiology and an international team of researchers have now shown that astrocytes also have another, much more active role in the brain: They affect the synapses' ability to strengthen, and thus help to facilitate the learning process.

By removing the glutamate transmitter from the synaptic gap via so-called transporters, astrocytes regulate the availability of glutamate. "These transporters are somewhat like small vacuum cleaners", says Ruediger Klein, the supervisor of the study. "They suck surplus glutamate from the gap, which prevents, for example, glutamate spilling over from one synapse to the next." The existence of this "glutamate vacuum cleaner" was already known to science. What was previously unknown, and what the scientists have now shown, is that the astrocyte and the downstream nerve cell communicate with each other and thereby regulate the number of glutamate-eliminating transporters.

This communication was found while the neurobiologists were examining the signaling molecule ephrinA3 and its binding partner EphA4 in mice. Ephrins and Eph-receptors are regularly involved when cells recognize or influence each other. Astrocytes, for example, promote synapse maturation via ephrinA3/EphA4 interaction. "Yet it came as a surprise to find an effect working also in the other direction", Ruediger Klein remembers. The scientists found that if a nerve cell is lacking the EphA4-receptor, the neighboring astrocyte increases its transporter numbers. The resulting superabundant transporters eliminate so much glutamate from the synapse that its strengthening becomes impossible, a sure disadvantage for the ability to learn.

The importance of the ephrinA3/EphA4 signaling pathway was further emphasized by control studies. If the signaling molecule ephrinA3 was absent in an astrocyte, synaptic strengthening was impaired due to the lack of glutamate - just as happened when EphA4 was missing. In contrast, if ephrinA3 was experimentally increased, the number of astrocyte transporters decreased. As a result, glutamate accumulated in the synaptic gap, which in turn quickly led to cell damage and malfunction of the affected synapses.

"We are currently investigating the mechanisms through which ephrinA3/EphA4 affect transporter production", explains Ruediger Klein. The scientists' aim is to better understand the transporters' function - an important task, as malfunctioning of the astrocyte transporters is known to play a role in neurological and neurodegenerative diseases such as epilepsy and amyotrophic lateral sclerosis (ALS).

(Photo: Max Planck Institute of Neurobiology / Schorner, Klein & Paixão)

Max Planck Institute



Scientists have used a new approach to sharpen the understanding of one of the most uncertain of mankind’s influences on climate—the effects of atmospheric “haze,” the tiny airborne particles from pollution, biomass burning, and other sources.

The new observations-based study led by NOAA confirms that the particles (“aerosols”) have the net effect of cooling the planet—in agreement with previous understanding—but arrives at the answer in a completely new way that is more straightforward, and has narrowed the uncertainties of the estimate. The findings appear in this week’s Journal of Geophysical Research - Atmospheres.

The researchers, led by NOAA scientist Daniel M. Murphy of NOAA’s Earth System Research Laboratory in Boulder, Colo., applied fundamental conservation of energy principles to construct a global energy “budget” of the climate’s “credits” and “debits”—heating and cooling processes—since 1950, using only observations and straightforward calculations without the more complicated algorithms of global climate models. They then calculated the cooling effect of the aerosols as the only missing term in the budget, arriving at an estimate of 1.1 watts per square meter. A watt is a unit of power.

The results support the 2007 assessment by the Intergovernmental Panel on Climate Change (IPCC) that estimated aerosol cooling at 1.2 watts per square meter. But the new study places that estimate on more solid ground and rules out the larger cooling effects that were previously thought to be possible.

“The agreement boosts our confidence in both the models and the new approach,” said Murphy. “Plus, we’ve been able to pin down the amount of cooling by aerosols better than ever.”

The narrower bounds on aerosol effects will help in predicting climate change and accounting for climate change to date.

In balancing the budget for the processes perturbing the heating and cooling of the Earth, Murphy and colleagues found that since 1950, the planet released about 20 percent of the warming influence of heat-trapping greenhouse gases to outer space as infrared energy. Volcanic emissions lingering in the stratosphere offset about 20 percent of the heating by bouncing solar radiation back to space before it reached the surface. Cooling from the lower-atmosphere aerosols produced by humans balanced 50 percent of the heating. Only the remaining 10 percent of greenhouse-gas warming actually went into heating the Earth, and almost all of it went into the ocean.
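The bookkeeping in the paragraph above can be written out directly. The percentages are those reported in the article, expressed as fractions of the total greenhouse-gas warming influence since 1950:

```python
# Fractions of the total greenhouse-gas warming influence since 1950,
# as reported in the article (normalized to 1.0).
ghg_forcing = 1.0

radiated_to_space = 0.20   # escaped as infrared energy
volcanic_offset   = 0.20   # stratospheric volcanic aerosols
aerosol_cooling   = 0.50   # human-produced lower-atmosphere aerosols

# The aerosol term was calculated as the one missing piece of the
# budget; here we close the budget the other way, solving for the
# residual that actually heated the Earth (mostly the ocean):
residual_heating = (ghg_forcing - radiated_to_space
                    - volcanic_offset - aerosol_cooling)
print(residual_heating)  # ~0.10, i.e. the remaining 10 percent
```

This is only the fractional bookkeeping; the study's actual result was the absolute aerosol term, 1.1 watts per square meter.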

The new study tackled what the IPCC has identified as one of the most uncertain aspects of the human impacts on climate. Aerosols, which can be either solid or liquid, have complex effects on climate. Sulfate particles formed from pollution can cool the Earth directly by reflecting sunlight. Soot from biomass burning absorbs sunlight and warms the Earth. Aerosols can also affect the formation and properties of clouds, altering their influence on climate. The net effect of all these direct and indirect factors is a cooling by aerosols, which has partially offset the warming by greenhouse gases.

(Photo: NOAA)




Like the robotic rovers Spirit and Opportunity, which wheeled tirelessly across the dusty surface of Mars, a new robot spent most of July traveling across the muddy ocean bottom, about 40 kilometers (25 miles) off the California coast. This robot, the Benthic Rover, has been providing scientists with an entirely new view of life on the deep seafloor. It will also give scientists a way to document the effects of climate change on the deep sea. The Rover is the result of four years of hard work by a team of engineers and scientists led by MBARI project engineer Alana Sherman and marine biologist Ken Smith.

About the size and weight of a small compact car, the Benthic Rover moves very slowly across the seafloor, taking photographs of the animals and sediment in its path. Every three to five meters (10 to 16 feet) the Rover stops and makes a series of measurements on the community of organisms living in the seafloor sediment. These measurements will help scientists understand one of the ongoing mysteries of the ocean—how animals on the deep seafloor find enough food to survive.

Most life in the deep sea feeds on particles of organic debris, known as marine snow, which drift slowly down from the sunlit surface layers of the ocean. But even after decades of research, marine biologists have not been able to figure out how the small amount of nutrition in marine snow can support the large numbers of organisms that live on and in seafloor sediment.

The Benthic Rover carries two experimental chambers called "benthic respirometers" that are inserted a few centimeters into the seafloor to measure how much oxygen is being consumed by the community of organisms within the sediment. This, in turn, allows scientists to calculate how much food the organisms are consuming. At the same time, optical sensors on the Rover scan the seafloor to measure how much food has arrived recently from the surface waters.

MBARI researchers have been working on the Benthic Rover since 2005, overcoming many challenges along the way. The most obvious challenge was designing the Rover to survive at depths where the pressure of seawater is about 420 kilograms per square centimeter (6,000 pounds per square inch). To withstand this pressure, the engineers had to shield the Rover's electronics and batteries inside custom-made titanium pressure spheres.

To keep the Rover from sinking into the soft seafloor mud, the engineers outfitted the vehicle with large yellow blocks of buoyant foam that will not collapse under extreme pressure. This foam gives the Rover, which weighs about 1,400 kilograms (3,000 pounds) in air, a weight of only about 45 kilograms (100 pounds) in seawater.
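The quoted in-air and in-water weights imply, via Archimedes' principle, how much seawater the foam-clad vehicle displaces. The seawater density below is a typical assumed value, not a figure from the article:

```python
mass_in_air_kg = 1400.0            # from the article
apparent_mass_in_water_kg = 45.0   # from the article
seawater_density_kg_m3 = 1025.0    # typical value (assumed)

# Archimedes: apparent weight = weight in air - weight of displaced water,
# so the displaced seawater must account for the difference.
displaced_mass_kg = mass_in_air_kg - apparent_mass_in_water_kg
volume_m3 = displaced_mass_kg / seawater_density_kg_m3
print(f"displaced volume ~ {volume_m3:.2f} m^3")  # ~1.32 m^3
```

In other words, the Rover and its foam together displace a bit over a cubic meter of seawater, which is what trims a 1,400-kilogram vehicle down to a gentle 45-kilogram footprint on the mud.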

Other engineering challenges required less high-tech solutions. In constructing the Rover's tractor-like treads, the design team used a decidedly low-tech material—commercial conveyor belts. After watching the Benthic Rover on the seafloor using MBARI's remotely operated vehicles (ROVs), however, the researchers discovered that the belts were picking up mud and depositing it in front of the vehicle, where it was contaminating the scientific measurements. In response, the team came up with a low-tech but effective solution: they removed the heads from two push brooms and bolted them onto the vehicle so that the stiff bristles would clean off the treads as they rotated.

The team also discovered that whenever the Rover moved, it stirred up a cloud of sediment like the cloud of dust that follows the character "Pig-Pen" in the Charlie Brown comic strip. This mud could have affected the Rover's measurements. To reduce this risk, the engineers programmed the Rover to move very, very slowly—about one meter (3 feet) a minute. The Rover is also programmed to sense the direction of the prevailing current, and only move in an up-current direction, so that any stirred-up mud will be carried away from the front of the vehicle.

In its basic configuration, the Benthic Rover is designed to operate on batteries, without any human input. However, during its month-long journey this summer, the Rover was connected by a long extension cord to a newly-completed underwater observatory. This observatory, known as the Monterey Accelerated Research System (MARS), provided power for the robot, as well as a high-speed data link back to shore.

According to Sherman, "Hooking up the Rover to the observatory opened up a whole new world of interactivity. Usually when we deploy the Rover, we have little or no communication with the vehicle. We drop it overboard, cross our fingers, and hope that it works." In this case, however, the observatory connection allowed MBARI researchers to fine tune the Rover's performance and view its data, videos, and still images in real time. Sherman recalls, "One weekend I was at home, with my laptop on the kitchen table, controlling the vehicle and watching the live video from 900 meters below the surface of Monterey Bay. It was amazing!"

Later this fall, the Rover will be sent back down to the undersea observatory site in Monterey Bay for a two-month deployment. Next year the team hopes to take the Rover out to a site about 220 km (140 miles) offshore of Central California. They will let the Rover sink 4,000 meters down to the seafloor, where it will make measurements on its own for six months. The team would also like to take the Rover to Antarctica, to study the unique seafloor ecosystems there. The Rover may also be hooked up to a proposed deep-water observatory several hundred miles off the coast of Washington state.

In addition to answering some key questions of oceanography, the Benthic Rover will help researchers study the effects of climate change in the ocean. As the Earth's atmosphere and oceans become warmer, even life in the deep sea will be affected. The Benthic Rover, and its possible successors, will help researchers understand how deep-sea communities are changing over time.

Just as the rovers Spirit and Opportunity gave us dramatic new perspectives on the planet Mars, so the Benthic Rover is giving researchers new perspectives of a dark world that is in some ways more mysterious than the surface of the distant red planet.

(Photo: © 2007 MBARI)



In the 2,000 or so years since the Roman Empire employed a naturally occurring form of cement to build a vast system of concrete aqueducts and other large edifices, researchers have analyzed the molecular structure of natural materials and created entirely new building materials such as steel, which has a well-documented crystalline structure at the atomic scale.

Oddly enough, the three-dimensional crystalline structure of cement hydrate - the paste that forms and quickly hardens when cement powder is mixed with water - has eluded scientific attempts at decoding, despite the fact that concrete is the most prevalent man-made material on earth and the focus of a multibillion-dollar industry that is under pressure to clean up its act. The manufacture of cement is responsible for about 5 percent of all carbon dioxide emissions worldwide, and new emission standards proposed by the U.S. Environmental Protection Agency could push the cement industry to the developing world.

"Cement is so widely used as a building material that nobody is going to replace it anytime soon. But it has a carbon dioxide problem, so a basic understanding of this material could be very timely," said MIT Professor Sidney Yip, co-author of a paper published online in the Proceedings of the National Academy of Sciences (PNAS) during the week of Sept. 7 that announces the decoding of the three-dimensional structure of the basic unit of cement hydrate by a group of MIT researchers who have adopted the team name of Liquid Stone.

“We believe this work is a first step toward a consistent model of the molecular structure of cement hydrate, and we hope the scientific community will work with it,” said Yip, who is in MIT's Department of Nuclear Science and Engineering (NSE). “In every field there are breakthroughs that help move the research frontier forward. One example is Watson and Crick's discovery of the basic structure of DNA. That structural model put biology on very sound footing.”

Scientists have long believed that at the atomic level, cement hydrate (or calcium-silica-hydrate) closely resembles the rare mineral tobermorite, which has an ordered geometry consisting of layers of infinitely long chains of three-armed silica molecules (called silica tetrahedra) interspersed with neat layers of calcium oxide.

But the MIT team found that the calcium-silicate-hydrate in cement isn't really a crystal. It's a hybrid that shares some characteristics with crystalline structures and some with the amorphous structure of frozen liquids, such as glass.

At the atomic scale, tobermorite and other minerals resemble the regular, layered geometric patterns of kilim rugs, with horizontal layers of triangles interspersed with layers of colored stripes. But a two-dimensional look at a unit of cement hydrate would show layers of triangles (the silica tetrahedra) with every third, sixth or ninth triangle turned up or down along the horizontal axis, reaching into the layer of calcium oxide above or below.

And it is in these messy areas - where breaks in the silica tetrahedra create small voids in the corresponding layers of calcium oxide - that water molecules attach, giving cement its robust quality. Those erstwhile "flaws" in the otherwise regular geometric structure provide some give to the building material at the atomic scale that transfers up to the macro scale. When under stress, the cement hydrate has the flexibility to stretch or compress just a little, rather than snapping.

"We've known for several years that at the nano scale, cement hydrates pack together tightly like oranges in a grocer's pyramid. Now, we've finally been able to look inside the orange to find its fundamental signature. I call it the DNA of concrete," said Franz-Josef Ulm, the Macomber Professor in the Department of Civil and Environmental Engineering (CEE), a co-author of the paper. "Whereas water weakens a material like tobermorite or jennite, it strengthens the cement hydrate. The 'disorder' or complexity of its chemistry creates a heterogenic, robust structure.

"Now that we have a validated molecular model, we can manipulate the chemical structure to design concrete for strength and environmental qualities, such as the ability to withstand higher pressure or temperature," said Ulm.

CEE Visiting Professor Roland Pellenq, director of research at the Interdisciplinary Center of Nanosciences at Marseille, which is part of the French National Center of Scientific Research and Marseille University, pinned down the exact chemical shape and structure of C-S-H using atomistic modeling on 260 co-processors and a statistical method called the grand canonical Monte Carlo simulation.

As its name suggests, the simulation requires a bit of gambling to find the answer. Pellenq first removed all water molecules from the basic unit of tobermorite, watched the geometry collapse, then returned the water molecules singly, then doubly and so on, removing them each time to allow the geometry to reshape as it would naturally. After he added the 104th water molecule, the correct atomic weight of C-S-H was reached, and Pellenq knew he had an accurate model for the geometric structure of the basic unit of cement hydrate.
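The add-and-remove-water procedure described above is the essence of a grand canonical Monte Carlo simulation: the number of molecules is allowed to fluctuate, with trial insertions and deletions accepted or rejected probabilistically until the system settles at its natural water content. The sketch below is a heavily simplified, schematic illustration of that idea (the actual study used full atomistic force fields and energies, not the toy `energy_of` stand-in here):

```python
import math
import random

def gcmc_particle_count(mu, volume, beta, energy_of, steps=20000, seed=1):
    """Toy grand canonical Monte Carlo loop.

    The particle count n fluctuates: each step attempts either an
    insertion or a deletion, accepted with the standard GCMC
    probabilities. `energy_of(n)` is a placeholder for the full
    atomistic energy of a configuration with n molecules; mu is the
    chemical potential and beta = 1/kT.
    """
    n = 0
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < 0.5:
            # Trial insertion: accept with min(1, V/(n+1) * exp(beta*(mu - dE)))
            dE = energy_of(n + 1) - energy_of(n)
            acc = (volume / (n + 1)) * math.exp(beta * (mu - dE))
            if rng.random() < min(1.0, acc):
                n += 1
        elif n > 0:
            # Trial deletion: accept with min(1, n/V * exp(-beta*(mu + dE)))
            dE = energy_of(n - 1) - energy_of(n)
            acc = (n / volume) * math.exp(beta * (-mu - dE))
            if rng.random() < min(1.0, acc):
                n -= 1
    return n
```

For a non-interacting system (`energy_of` returning 0), the average count settles near `volume * exp(beta * mu)`; with a real interaction energy, the loop instead converges on the physically favored water content, which is the role the simulation played in finding the 104-molecule unit.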

The team then used that atomistic model to perform six tests that validated its accuracy.

"This gives us a starting point for experiments to improve the mechanical properties and durability of concrete. For instance, we can now start replacing silica in our model with other materials," said Pellenq.



A molecular cloud is a cloud of gas that acts as a stellar nursery. When a molecular cloud collapses, only a small fraction of the cloud's material forms stars. Scientists aren't sure why.

Gravity favors star formation by drawing material together, so some additional force must hinder the process. Magnetic fields and turbulence are the two leading candidates. (A magnetic field is produced by moving electrical charges. Stars and most planets, including Earth, exhibit magnetic fields.) Magnetic fields channel flowing gas, making it hard to draw gas in from all directions, while turbulence stirs the gas and induces an outward pressure that counteracts gravity.

"The relative importance of magnetic fields versus turbulence is a matter of much debate," said astronomer Hua-bai Li of the Harvard-Smithsonian Center for Astrophysics. "Our findings serve as the first observational constraint on this issue."

Li and his team studied 25 dense patches, or cloud cores, each one about a light-year in size. The cores, which act as seeds from which stars form, were located within molecular clouds as much as 6,500 light-years from Earth. (A light-year is the distance light travels in a year, or about 6 trillion miles.)

The researchers studied polarized light, which has electric and magnetic components that are aligned in specific directions. (Some sunglasses work by blocking light with specific polarization.) From the polarization, they measured the magnetic fields within each cloud core and compared them to the fields in the surrounding, tenuous nebula.

The magnetic fields tended to line up in the same direction, even though the relative size scales (1-light-year cores versus 1,000-light-year nebulas) and densities differed by orders of magnitude. Since turbulence would tend to churn the nebula and mix up magnetic field directions, their findings show that magnetic fields dominate turbulence in influencing star birth.
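One simple way to quantify "lining up" is to compare each core's field orientation with that of its parent nebula. Because a polarization-derived field direction is axial (an angle θ and θ + 180° describe the same line), differences are folded into the range 0–90°; randomly oriented fields would average near 45°, while aligned fields average much lower. A minimal sketch of that comparison, using made-up illustrative angles rather than the survey's actual data:

```python
def axial_offset_deg(theta1, theta2):
    """Smallest angle between two field orientations, in degrees.

    Orientations are axial: theta and theta + 180 describe the same
    line, so the offset is always folded into 0..90 degrees.
    """
    d = abs(theta1 - theta2) % 180.0
    return min(d, 180.0 - d)

# Hypothetical position angles (degrees) for four core fields and the
# fields of their surrounding nebulas -- illustrative values only.
core_angles   = [12.0, 175.0, 95.0, 44.0]
nebula_angles = [ 8.0,   2.0, 88.0, 50.0]

offsets = [axial_offset_deg(c, n) for c, n in zip(core_angles, nebula_angles)]
mean_offset = sum(offsets) / len(offsets)
```

Note that 175° and 2° count as nearly aligned (a 7° offset), which is why the axial folding matters: a naive subtraction would wrongly report them as almost perpendicular.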

"Our result shows that molecular cloud cores located near each other are connected not only by gravity but also by magnetic fields," said Li. "This shows that computer simulations modeling star formation must take strong magnetic fields into account."

In the broader picture, this discovery aids our understanding of how stars form and, therefore, how the universe has come to look the way it is today.

Harvard-Smithsonian Center for Astrophysics




Selected Science News. Copyright 2008 All Rights Reserved