Thursday, September 30, 2010

RICE STUDY EXAMINES HOW BACTERIA ACQUIRE IMMUNITY

In a new study, Rice University scientists bring the latest tools of computational biology to bear in examining how the processes of natural selection and evolution influence the way bacteria acquire immunity from disease.

The study is available online from Physical Review Letters. It builds upon one of the major discoveries made possible by molecular genetics in the past decade -- the revelation that bacteria and similar single-celled organisms have an acquired immune system.

"From a purely scientific perspective, this research is teaching us things we couldn't have imagined just a few years ago, but there's an applied interest in this work as well," said Michael Deem, the John W. Cox Professor in Biochemical and Genetic Engineering and professor of physics and astronomy at Rice. "It is believed, for instance, that the bacterial immune system uses a process akin to RNA interference to silence the disease genes it recognizes, and biotechnology companies may find it useful to develop this as a tool for silencing particular genes."

The new study by Deem and graduate student Jiankui He focused on a portion of the bacterial genome called the "CRISPR," which stands for "clustered regularly interspaced short palindromic repeats." The CRISPR region contains two types of DNA sequences. One type -- the short, repeating patterns that first attracted scientific interest -- is what led to the CRISPR name. But scientists more recently learned that the second type -- originally thought of as mere DNA "spacers" between the repeats -- is what the organism uses to recognize disease.

"Bacteria get attacked by viruses called phages, and the CRISPR contain genetic sequences from phages," Deem said. "The CRISPR system is both inheritable and programmable, meaning that some sequences may be there when the organism is first created, and new ones may also be added when new phages attack the organism during its life cycle."

The repeating sequences appear to be a kind of bookend or flag that the organism uses to determine where a snippet from a phage begins and ends. The CRISPR will often have between 30 and 50 of these snippets of phage sequences. Previous studies have found that once a bacterium has a phage sequence in its CRISPR, it can degrade any DNA or RNA that matches that sequence -- meaning it can fend off attacks from any phage whose genes match those in its CRISPR.
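
That recognition step boils down to a simple lookup, which can be sketched in a few lines of illustrative code (the function and the sequences below are invented for the example, not taken from the study):

```python
def is_immune(spacers, phage_genome):
    """A bacterium can degrade phage genetic material if any spacer
    stored in its CRISPR matches a stretch of the phage genome."""
    return any(spacer in phage_genome for spacer in spacers)

# Illustrative sequences, not real phage data:
crispr_spacers = ["GATTACA", "CCGGAA"]
print(is_immune(crispr_spacers, "TTTGATTACATTT"))  # first spacer matches
print(is_immune(crispr_spacers, "AAAAAAAA"))       # no spacer matches
```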

"What we wanted to explore was how the history of a bacterium's exposure to phages influences what's in the CRISPR," Deem said. "In other words, how is an organism's previous exposure to viruses reflected in its own genome?"

From earlier published studies, Deem and He knew that phage sequences are added to the CRISPR sequentially. So, in a CRISPR system containing 30 snippets, the newest one would be in position one, at the front of the line. In another study in 2007, researchers examining the CRISPR of whole populations of bacteria noticed some statistical irregularities. They found that the likelihood of two different organisms having the same snippet in their CRISPR increased exponentially with distance from position one. So, in an organism with 30 snippets, the phage gene in position 30 was the most likely to be conserved across all the bacteria in the population.

To use the power of computers to examine why this happens, Deem and He needed a mathematical description of what was happening over time to both the bacterial and phage populations. The equations they created reflect the way the bacterial and phage populations interact via the CRISPR.

"Each population is trying to expand, and selective pressure is constantly being applied on both sides," Deem said. "You can see how this plays out in the CRISPR over time. There's a diverse assortment of genes in the first spacer, but the second spacer has been in there longer, so more selective pressure has been applied to it. Because bacteria that carry the dominant viral strain's sequence in their CRISPR are more likely to survive than those that don't, they tend to squeeze out their more vulnerable neighbors. At position N, the farthest away from position one, selection has been at work the longest, so the genes we find there are the most common and the ones that tended to afford the most overall protection to the organism."
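
The dynamic Deem describes can be seen in a toy simulation. The real study uses population-dynamics equations; the sketch below is a deliberate simplification with made-up rates (death probability 0.3, acquisition probability 0.5, a new dominant phage every five generations), but it reproduces the qualitative story: new spacers enter at position one and push older ones back, while repeated selective sweeps make the deeper positions increasingly uniform across the population.

```python
import random

def simulate_crispr(pop_size=200, n_spacers=8, generations=300, seed=1):
    rng = random.Random(seed)
    pop = [[-1] * n_spacers for _ in range(pop_size)]  # -1 marks an empty spacer slot
    phage = 0
    for gen in range(generations):
        if gen % 5 == 0:
            phage += 1  # every few generations a new dominant phage strain appears
        # selection: cells lacking the dominant phage's spacer usually die
        survivors = [b for b in pop if phage in b or rng.random() < 0.3]
        next_pop = []
        for b in survivors:
            child = b[:]
            if phage not in child and rng.random() < 0.5:
                # acquisition: the new spacer goes in at position one,
                # pushing older spacers back; the oldest drops off the end
                child = [phage] + child[:-1]
            next_pop.append(child)
        while len(next_pop) < pop_size:  # survivors clone to refill the population
            next_pop.append(rng.choice(next_pop)[:])
        pop = next_pop
    return pop

pop = simulate_crispr()
# distinct spacers at each position, front (newest) to back (oldest);
# in typical runs the front positions are more diverse than the oldest position
diversity = [len({b[i] for b in pop}) for i in range(8)]
```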

In addition to interest from biotechnology firms, Deem said the workings of the CRISPR are of interest to drugmakers who are investigating new types of antibiotics.

Rice University

CHILDREN UNDER 4 AND CHILDREN WITH AUTISM DON'T YAWN CONTAGIOUSLY

If someone near you yawns, do you yawn, too? About half of adults yawn after someone else does in a phenomenon called contagious yawning. Now a new study has found that most children aren't susceptible to contagious yawning until they're about 4 years old—and that children with autism are less likely to yawn contagiously than others.

The study, conducted by researchers at the University of Connecticut, appears in the September/October 2010 issue of the journal Child Development.

To determine the extent to which children at various stages of social development are likely to yawn contagiously, the researchers studied 120 typically developing 1- to 6-year-olds. Although babies begin to yawn spontaneously even before they leave the womb, most of the children in this study didn't show signs of contagious yawning until they were 4.

The team also studied about 30 6- to 15-year-olds with autism spectrum disorders (ASD), comparing them to two other groups of typically developing children with the same mental and chronological ages. The children with ASD were less likely to yawn contagiously than their typically developing peers, the researchers found. And children with diagnoses that imply more severe autistic symptoms were much less likely to yawn contagiously than those with milder diagnoses.

"Given that contagious yawning may be a sign of empathy, this study suggests that empathy—and the mimicry that may underlie it—develops slowly over the first few years of life, and that children with ASD may miss subtle cues that tie them emotionally to others," according to the researchers. This study may provide guidance for approaches to working with children with ASD so that they focus more on such cues.

Society for Research in Child Development

EARTH - THE EARLY YEARS


Plate tectonics may not have operated on a younger, hotter Earth, according to new research from the University of Bristol carried out on preserved remnants of ancient continental crust in the Hudson Bay region of Canada.

The processes that formed and shaped the early Earth’s crust are still poorly understood, and the onset of plate tectonics has been estimated as being as early as the Hadean (4.6-3.8 billion years ago) or as late as the Neoproterozoic (1,000-542 million years ago).

The Hudson Bay region represents an excellent opportunity to investigate crustal formation during the Precambrian (that is from the formation of the Earth, almost 4.6 billion years ago, until 542 million years ago). It records tectonic events leading to the formation and stabilisation of the Laurentian continent (most of modern-day North America) and also preserves crust with ages spanning around 2 billion years.

David Thompson, a PhD student in Bristol’s Department of Earth Sciences, and colleagues from the University of Calgary and the Geological Survey of Canada analysed the structure and thickness of the crust using seismic waves to investigate which tectonic processes were operating on the younger Earth.

They found that the crust between 3.9 and 2.7 billion years old had a relatively simple structure with no clear signs that it was subjected to the kind of plate tectonic processes operating now. The crust that formed about 1.8 billion years ago, however, showed evidence of the collision of two continents, similar to the collision between India and Asia still ongoing today.

The researchers then compared the observed structure of the Hudson Bay crust with continental crust of a similar age found elsewhere in the world. They concluded that modern-style plate tectonics may not have been established until as late as 1.8 billion years ago.

(Photo: NASA)

RESEARCHERS GIVE ROBOTS THE CAPABILITY FOR DECEPTIVE BEHAVIOR


A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it's actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception.

"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.

The results of robot experiments and theoretical and cognitive deception modeling were published online on Sept. 3 in the International Journal of Social Robotics. Because the researchers explored the phenomenon of robot deception from a general perspective, the study's results apply to both robot-robot and human-robot interactions. This research was funded by the Office of Naval Research.

In the future, robots capable of deception may be valuable in several areas, including military and search-and-rescue operations. A search-and-rescue robot may need to deceive in order to calm a panicking victim or gain the victim's cooperation. Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.

"Most social robots will probably rarely use deception, but it's still an important tool in the robot's interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception," said the study's co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.

For this study, the researchers focused on the actions, beliefs and communications of a robot attempting to hide from another robot to develop programs that successfully produced deceptive behavior. Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception. Wagner and Arkin used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception -- there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
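
The two conditions lend themselves to a compact sketch. The code below is an illustration of the idea only, not the researchers' actual algorithms (which draw on interdependence theory and game theory over learned outcome matrices): conflict means no joint outcome is best for both parties at once, and deception must improve the deceiver's expected payoff.

```python
# payoff[h][s] = (hider_payoff, seeker_payoff) for hider action h, seeker action s
def has_conflict(payoff):
    """Conflict: no single joint outcome maximizes both agents' payoffs."""
    cells = [cell for row in payoff for cell in row]
    best_h = max(h for h, _ in cells)
    best_s = max(s for _, s in cells)
    return not any(h == best_h and s == best_s for h, s in cells)

def should_deceive(payoff, honest_value, deceptive_value):
    """Deceive only if the situation is conflicting AND deception pays."""
    return has_conflict(payoff) and deceptive_value > honest_value

# Hide-and-seek: the hider wins exactly when the seeker loses, so there is
# conflict, and here a false trail raises the hider's expected payoff.
hide_and_seek = [[(0, 1), (1, 0)],
                 [(1, 0), (0, 1)]]
print(should_deceive(hide_and_seek, honest_value=0.5, deceptive_value=0.75))
```

In a fully cooperative game, where some joint outcome is best for everyone, the first condition fails and the rule says not to deceive at all.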

Once a situation was deemed to warrant deception, the robot carried out a deceptive act by providing a false communication to benefit itself. The technique developed by the Georgia Tech researchers based a robot's deceptive action selection on its understanding of the individual robot it was attempting to deceive.

To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.
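
The protocol reduces to a simple pattern, sketched below with invented names (the real robots dealt with noisy sensing and actuation, which this ignores): the hider leaves a marker trail pointing at a decoy location and hides elsewhere, and a naive seeker that trusts the markers goes to the wrong place.

```python
import random

def hide_with_false_trail(locations, rng):
    """Knock over markers toward a decoy location, then hide somewhere else."""
    decoy = rng.choice(locations)
    actual = rng.choice([loc for loc in locations if loc != decoy])
    knocked_markers = decoy  # the trail of fallen markers points at the decoy
    return knocked_markers, actual

def seek(knocked_markers):
    """A naive seeker trusts the markers and searches where they point."""
    return knocked_markers

rng = random.Random(0)
trail, hiding_spot = hide_with_false_trail(["left", "middle", "right"], rng)
assert seek(trail) != hiding_spot  # the seeker is sent to the decoy every time
```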

"The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left," explained Wagner.

The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot’s inability to knock over the correct markers to produce the desired deceptive communication.

"The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment," said Wagner. "The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot."

While there may be advantages to creating robots with the capacity for deception, there are also ethical implications that need to be considered to ensure that these creations are consistent with the overall expectations and well-being of society, according to the researchers.

"We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects," explained Arkin. "We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems."

(Photo: GIT)

Georgia Institute of Technology

NEW CU-BOULDER RESEARCH SHEDS LIGHT ON WHY OUR BRAINS GET TRIPPED UP WHEN WE'RE ANXIOUS


A new University of Colorado at Boulder study sheds light on the brain mechanisms that allow us to make choices and ultimately could be helpful in improving treatments for the millions of people who suffer from the effects of anxiety disorders.

In the study, CU-Boulder psychology Professor Yuko Munakata and her research colleagues found that "neural inhibition," a process that occurs when one nerve cell suppresses activity in another, is a critical aspect in our ability to make choices.

"The breakthrough here is that this helps us clarify the question of what is happening in the brain when we make choices, like when we choose our words," Munakata said. "Understanding more about how we make choices, how the brain is doing this and what the mechanisms are, could allow scientists to develop new treatments for things such as anxiety disorders."

Researchers have long struggled to determine why people with anxiety can be paralyzed when it comes to decision-making involving many potential options. Munakata believes the reason is that people with anxiety have decreased neural inhibition in their brain, which leads to difficulty making choices.

"A lot of the pieces have been there," she said. "What's new in this work is bringing all of this together to say here's how we can fit all of these pieces of information together in a coherent framework explaining why it's especially hard for people with anxiety to make decisions and why it links to neural inhibitors."

A paper on the findings titled "Neural inhibition enables selection during language processing" appeared in the Aug. 30 Proceedings of the National Academy of Sciences. CU-Boulder professors Tim Curran, Marie Banich and Randall O'Reilly, graduate students Hannah Snyder and Erika Nyhus and undergraduate honors thesis student Natalie Hutchison co-authored the paper.

In the study, they tested the idea that neural inhibition in the brain plays a big role in decision-making by creating a computer model of the brain called a neural network simulation.

"We found that if we increased the amount of inhibition in this simulated brain then our system got much better at making hard choices," said Hannah Snyder, a psychology graduate student who worked with Munakata on the study. "If we decreased inhibition in the brain, then the simulation had much more trouble making choices."
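A toy version of that effect can be run in a few lines. This is an illustrative mutual-inhibition network with arbitrary parameters, far simpler than the neural network simulation the team actually built: each unit is driven by its input and inhibited by its competitors, and raising the inhibition strength turns a near-tie between two options into a clear winner.

```python
def compete(inputs, inhibition, steps=400, dt=0.1):
    """Rate units with mutual inhibition; activity cannot go below zero."""
    a = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(a)
        a = [max(0.0, x + dt * (inp - x - inhibition * (total - x)))
             for x, inp in zip(a, inputs)]
    return a

# Two options with nearly equal support, like two candidate words:
weak = compete([1.0, 0.95], inhibition=0.2)
strong = compete([1.0, 0.95], inhibition=1.5)
# Weak inhibition leaves the two options nearly tied;
# strong inhibition suppresses the loser and lets the slightly better option win.
```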

Through their model they looked at the brain mechanisms involved when we choose words. They then tested the model's predictions on people by asking them to think of the first verb that comes to mind when they are presented with a noun.

"We know that making decisions, in this case choosing our words, taps into this left-front region of the brain, called the left ventrolateral prefrontal cortex," Munakata said. "We wanted to figure out what is happening in that part of the brain that lets us make these choices. Our idea here, which we have shown through the word-choosing model, is that there's a fight between neurons in this area of the brain that lets us choose our words."

They then tested the model's predictions that more neural inhibition in the brain makes it easier to make choices by examining the effects of increased and decreased inhibition in people's brains. They increased inhibition by using a drug called midazolam and found that people got much better at making hard choices. It didn't affect other aspects of their thinking, but rather only the area of making choices. They investigated the effects of decreased inhibition by looking at people with anxiety.

"We found that the worse their anxiety was, the worse they were at making decisions, and the activity in their left ventrolateral prefrontal cortex was less typical," Munakata said.

There are two ways in which the research could be helpful in improving treatments for anxiety, according to Snyder. While specific medications that increase neural inhibition are currently used to treat the emotional symptoms of anxiety disorders, the findings suggest that they might also be helpful in treating the difficulty those suffering from anxiety have in selecting one option when there are too many choices.

"Secondly, a more precise understanding of what aspects of cognition patients are struggling with could be extremely valuable in designing effective approaches to therapy for each patient," she said. "For example, if someone with an anxiety disorder has difficulty selecting among multiple options, he or she might benefit from learning how to structure their environment to avoid choice overload."

The work was done in CU-Boulder's Center for Determinants of Executive Function and Dysfunction, which brings together researchers from different areas of expertise on campus and beyond including experts on drug studies, neuroimaging and anxiety. The center is funded by the National Institute of Mental Health.

(Photo: Marie Banich)

University of Colorado at Boulder

AMINO ACIDS COULD BE PRODUCED WITHIN IMPACTING COMETS, BRINGING LIFE TO EARTH


Life on Earth as we know it really could be from out of this world. New research from Lawrence Livermore National Laboratory scientists shows that comets that crashed into Earth millions of years ago could have produced amino acids – the building blocks of life.

Amino acids are critical to life and serve as the building blocks of proteins, which are linear chains of amino acids.

In the Sept. 12 online edition of the journal Nature Chemistry, LLNL’s Nir Goldman and colleagues found that simple molecules found within comets (such as water, ammonia, methanol and carbon dioxide) just might have been instigators of life on Earth. His team discovered that the sudden compression and heating of cometary ices crashing into Earth can produce complexes resembling the amino acid glycine.

Origins of life research initially focused on the production of amino acids from organic materials already present on the planet. However, further research showed that Earth’s early atmosphere consisted mainly of carbon dioxide, nitrogen and water. Shock-heating experiments and calculations eventually showed that synthesis of the organic molecules necessary for amino acid production would not occur in this type of environment.

“There’s a possibility that the production or delivery of prebiotic molecules came from extraterrestrial sources,” Goldman said. “On early Earth, we know that there was a heavy bombardment of comets and asteroids delivering up to several orders of magnitude greater mass of organics than what likely was already here.”

Comets range in size from about 1.6 to 56 kilometers across. Comets of these sizes passing through the Earth’s atmosphere are heated externally but remain cool internally. Upon impact with the planetary surface, a shock wave is generated by the sudden compression.

Shock waves can create sudden, intense pressures and temperatures, which could affect chemical reactions within a comet before it interacts with the ambient planetary environment. The previous general consensus was that the delivery or production of amino acids from these impact events was improbable, because the extensive heating from the impact (thousands of kelvins; 1,000 kelvins is about 1,340 degrees Fahrenheit) would destroy any potential life-building molecules.

However, Goldman and his colleagues studied how a collision, where an extraterrestrial ice impacts a planet with a glancing blow, could generate much lower temperatures.

“Under this situation, organic materials could potentially be synthesized within the comet’s interior during shock compression and survive the high pressures and temperatures,” Goldman said. “Once the compressed material expands, stable amino acids could survive interactions with the planet’s atmosphere or ocean. These processes could result in concentrations of prebiotic organic species ‘shock-synthesized’ on Earth from materials that originated in the outer regions of space.”

Using molecular dynamic simulations, the LLNL team studied shock compression in a prototypical astrophysical ice mixture (similar to a comet crashing into Earth) to extremely high pressures and temperatures. They found that as the material decompresses, protein-building amino acids are likely to form.

(Photo: Liam Krauss/LLNL)

Lawrence Livermore National Laboratory

SOLAR FUNNEL


New antenna made of carbon nanotubes could make photovoltaic cells more efficient by concentrating solar energy.

Solar cells are usually grouped in large arrays, often on rooftops, because each cell can generate only a limited amount of power. However, not every building has enough space for a huge expanse of solar panels.

Using carbon nanotubes (hollow tubes of carbon atoms), MIT chemical engineers have found a way to concentrate solar energy 100 times more than a regular photovoltaic cell. Such nanotubes could form antennas that capture and focus light energy, potentially allowing much smaller and more powerful solar arrays.

“Instead of having your whole roof be a photovoltaic cell, you could have little spots that were tiny photovoltaic cells, with antennas that would drive photons into them,” says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering and leader of the research team.

Strano and his students describe their new carbon nanotube antenna, or “solar funnel,” in the Sept. 12 online edition of the journal Nature Materials. Lead authors of the paper are postdoctoral associate Jae-Hee Han and graduate student Geraldine Paulus.

Their new antennas might also be useful for any other application that requires light to be concentrated, such as night-vision goggles or telescopes. The work was funded by a National Science Foundation Career Award, a Sloan Fellowship, the MIT-Dupont Alliance and the Korea Research Foundation.

Solar panels generate electricity by converting photons (packets of light energy) into an electric current. Strano’s nanotube antenna boosts the number of photons that can be captured and transforms the light into energy that can be funneled into a solar cell.

The antenna consists of a fibrous rope about 10 micrometers (millionths of a meter) long and four micrometers thick, containing about 30 million carbon nanotubes. Strano’s team built, for the first time, a fiber made of two layers of nanotubes with different electrical properties — specifically, different bandgaps.

In any material, electrons can exist at different energy levels. When a photon strikes the surface, it excites an electron to a higher energy level, which is specific to the material. The interaction between the energized electron and the hole it leaves behind is called an exciton, and the difference in energy levels between the hole and the electron is known as the bandgap.

The inner layer of the antenna contains nanotubes with a small bandgap, and nanotubes in the outer layer have a higher bandgap. That’s important because excitons like to flow from high to low energy. In this case, that means the excitons in the outer layer flow to the inner layer, where they can exist in a lower (but still excited) energy state.

Therefore, when light energy strikes the material, all of the excitons flow to the center of the fiber, where they are concentrated. Strano and his team have not yet built a photovoltaic device using the antenna, but they plan to. In such a device, the antenna would concentrate photons before the photovoltaic cell converts them to an electrical current. This could be done by constructing the antenna around a core of semiconducting material.
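
The energetics of that funneling can be illustrated with a back-of-the-envelope two-level model (my own sketch, with assumed numbers, not a calculation from the paper): hops from the high-bandgap outer layer into the low-bandgap core are easy, while hops back out are suppressed by a Boltzmann factor, so at equilibrium nearly all excitons sit in the core.

```python
import math

def core_fraction(delta_e_ev, kT_ev=0.025):
    """Equilibrium fraction of excitons in the lower-energy core, from
    detailed balance between inward and Boltzmann-suppressed outward hops.
    kT_ev = 0.025 eV corresponds to room temperature."""
    return 1.0 / (1.0 + math.exp(-delta_e_ev / kT_ev))

print(core_fraction(0.0))  # equal bandgaps: excitons split 50/50
print(core_fraction(0.2))  # an assumed 0.2 eV offset: nearly all in the core
```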

The interface between the semiconductor and the nanotubes would separate the electron from the hole, with electrons being collected at one electrode touching the inner semiconductor, and holes collected at an electrode touching the nanotubes. This system would then generate electric current. The efficiency of such a solar cell would depend on the materials used for the electrode, according to the researchers.

Strano’s team is the first to construct nanotube fibers in which they can control the properties of different layers, an achievement made possible by recent advances in separating nanotubes with different properties. “It shows how far the field has really come over the last decade,” says Michael Arnold, professor of materials science and engineering at the University of Wisconsin at Madison.

Solar cells that incorporate carbon nanotubes could become a good lower-cost alternative to traditional silicon solar cells, says Arnold. “What needs to be shown next is whether the excitons in the inner shell can be harvested and converted to electrical energy,” he says.

While the cost of carbon nanotubes was once prohibitive, it has been coming down in recent years as chemical companies build up their manufacturing capacity. “At some point in the near future, carbon nanotubes will likely be sold for pennies per pound, as polymers are sold,” says Strano. “With this cost, the addition to a solar cell might be negligible compared to the fabrication and raw material cost of the cell itself, just as coatings and polymer components are small parts of the cost of a photovoltaic cell.”

Strano’s team is now working on ways to minimize the energy lost as excitons flow through the fiber, and on ways to generate more than one exciton per photon. The nanotube bundles described in the Nature Materials paper lose about 13 percent of the energy they absorb, but the team is working on new antennas that would lose only 1 percent.

(Photo: Geraldine Paulus)

MIT

JOHNS HOPKINS NEUROSCIENTIST'S GOAL: A PROSTHETIC LIMB WITH FEELING


Back in 1980 when The Empire Strikes Back hit the big screen, it seemed like the most fantastic of science fiction scenarios: Luke Skywalker getting a fully functional bionic arm to replace the one he had lost to arch enemy Darth Vader.

Thirty years later, such a device is more the stuff of fact and less of fiction, as increasingly sophisticated artificial limbs are being developed that allow users a startlingly life-like range of motion and fine motor control.

Johns Hopkins neuroscientist Steven Hsiao, however, isn’t satisfied that a prosthetic limb simply allows its user to move. He wants to provide the user the ability to feel what the artificial limb is touching, such as the texture and shape of a quarter, or the comforting perception of holding hands. Accomplishing these goals requires understanding how the brain processes the multitude of sensations that come in daily through our fingers and hands.

Using a $600,000 grant administered through the federal stimulus act, Hsiao is leading a team that is working to decode those sensations, which could lead to the development of truly “bionic” hands and arms that use sensitive electronics to activate neurons in the touch centers of the cerebral cortex.

“The truth is, it is still a huge mystery how we humans use our hands to move about in the world and interact with our environment,” said Hsiao, of the Zanvyl Krieger Mind/Brain Institute. “How we reach into our pockets and grab our car keys or some change without looking requires that the brain analyze the inputs from our hands and extract information about the size, shape and texture of objects. How the brain accomplishes this amazing feat is what we want to find out and understand.”

Hsiao hypothesizes that our brains do this by transforming the inputs from receptors in our fingers and hands into “neural code” that the brain then matches against a stored, central “databank” of memories of those objects. When a match occurs, the brain is able to perceive and recognize what the hand is feeling, experiencing and doing.

In recent studies, Hsiao’s team found that neurons in the area of the brain that respond to touch are able to “code for” (understand) orientation of bars pressed against the skin, the speed and direction of motion, and curved edges of objects. In their stimulus-funded study, Hsiao’s team will investigate the detailed neural codes for more complex shapes, and will delve into how the perception of motion in the visual system is integrated with the perception of tactile motion.

The team will do this by first investigating how complex shapes are processed in the somatosensory cortex (the part of the brain that responds to touch) and second, by studying the responses of individual neurons in an area that has traditionally been associated with visual motion but appears to also have neurons that respond to tactile motion (motion of things moving across your skin).

“The practical goal of all of this is to find ways to restore normal sensory function to patients whose hands have been damaged, or to amputees with prosthetic or robotic arms and hands,” Hsiao said. “It would be fantastic if we could use electric stimulation to activate the same brain pathways and neural codes that are normally used in the brain. I believe that these neural coding studies will provide a basic understanding of how signals should be fed back into the brain to produce the rich percepts that we normally receive from our hands.”

(Photo: JHU)

Johns Hopkins University

NERVE CELLS IN OPTIC FLOW


Generally speaking, animals and humans maintain their sense of balance in their three-dimensional environment without difficulty. In addition to the vestibular system, their navigation is often aided by the eyes. Every movement causes the environment to move past the eyes in a characteristic way. On the basis of this "optic flow", the nerve cells then calculate the organism's self-motion. Scientists at the Max Planck Institute of Neurobiology have now shown how nerve cells succeed in calculating self-motion while confronted with differing backgrounds. Until now, none of the established models of optical processing had been able to cope with this requirement.

In one respect, humans and flies bear a close resemblance to each other: both rely heavily on their eyesight for navigation. Despite the fact that the visual impressions are constantly changing, this navigation is exceptionally proficient. For example, if I walk past a whitewashed wall, tiny asperities pass my eyes in the opposite direction and reassure me that I am indeed moving forwards. If, on the other hand, I pass a billboard pasted with bright posters, a great variety of colours and structural changes flow past me as I move. Although the visual information is very different, my nerve cells are able to confirm in both cases that I am moving forwards at a certain pace. Something that, at first glance, appears to be rather mundane thus turns out to be a remarkable feat of our brain.

In order to understand how nerve cells process such different optical information, neurobiologists study the brains of flies. Using the fly brain as a model has some obvious advantages: Flies are experts in optical motion processing, yet their brains contain comparatively few nerve cells. This allows scientists to examine the function of every nerve cell in a network. In laboratory experiments, the flies are presented with moving striped patterns while the reactions of individual nerve cells are measured. The knowledge gained from these experiments led to models that describe well how, and to which stimuli, a nerve cell reacts and what information it relays to the next cells in line. However, once the patterns shown to the flies varied too greatly in complexity, the models' predictions failed.

"These models considered only the input-output relationship of nerve cells, but ignored anything that happened within the cell," explains Alexander Borst, whose department at the Max Planck Institute of Neurobiology in Martinsried studies optic processing in the fly brain. His PhD student Franz Weber has now shown that these cell-internal activities cannot be ignored. Together with Christian Machens from the Ecole Normale Superieure in Paris, Weber developed a model that not only takes the input-output function into account, but also makes provisions for the biophysical properties of the cell.

Franz Weber presented the flies with patterns of dots with different dot densities. In the course of the experiments, he established that the nerve cells show essentially the same reaction to high and low dot densities. This is astonishing, given that a pattern with a lower number of dots provides a nerve cell with considerably less visual motion information than one with a high dot density (bringing us back to the whitewashed wall and the billboard). The cells evidently compensate for the differences in incoming information by means of an internal amplifier. With this signal enhancement in mind, the scientists devised a new model. It now provides a reliable description of the nerve cells' behaviour within the network - no matter how complex the surrounding world turns out to be.
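The compensation idea can be made concrete with a toy model. The sketch below is purely illustrative and is not the published Weber/Machens model: it assumes each visible dot contributes a unit motion signal, and that an "internal amplifier" divides the summed input by the number of dots so that the cell's output tracks velocity rather than dot density.

```python
# Toy sketch of density-invariant motion encoding (illustrative only; not
# the published model). Assumption: each dot contributes a unit motion
# signal, and an internal gain scales the summed input by 1/N.

def raw_motion_input(velocity, n_dots):
    """Summed local motion signal: more dots means more raw input."""
    return velocity * n_dots

def cell_response(velocity, n_dots):
    """Hypothetical divisive normalization restores invariance."""
    gain = 1.0 / n_dots
    return gain * raw_motion_input(velocity, n_dots)

sparse = cell_response(velocity=2.0, n_dots=5)    # the whitewashed wall
dense = cell_response(velocity=2.0, n_dots=500)   # the billboard
assert abs(sparse - dense) < 1e-9  # same estimate of self-motion
```

With this gain in place, a sparse and a dense scene moving at the same speed produce the same response, mirroring what Weber measured in the fly.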

(Photo: Max Planck Institute of Neurobiology / Schorner)

Max Planck Institute

Wednesday, September 29, 2010

UA ENGINEERS BUILD LUNAR VEGETABLE GARDEN

Researchers are demonstrating that plants from Earth could be grown without soil on the moon or Mars, setting the table for astronauts who would find potatoes, peanuts, tomatoes, peppers and other vegetables awaiting their arrival.

The first extraterrestrials to inhabit the moon probably won't be little green men, but they could be little green plants.

Researchers at the University of Arizona Controlled Environment Agriculture Center, known as CEAC, are demonstrating that plants from Earth could be grown hydroponically (without soil) on the moon or Mars, setting the table for astronauts who would find potatoes, peanuts, tomatoes, peppers and other vegetables awaiting their arrival.

The research team has built a prototype lunar greenhouse in the CEAC Extreme Climate Lab at UA's Campus Agricultural Center. It represents the last 18 feet of one of several tubular structures that would be part of a proposed lunar base. The tubes would be buried beneath the moon's surface to protect the plants and astronauts from deadly solar flares, micrometeorites and cosmic rays.

The membrane-covered module can be collapsed to a 4-foot-wide disk for interplanetary travel. It contains water-cooled sodium vapor lamps and long envelopes that would be loaded with seeds, ready to sprout hydroponically.

"We can deploy the module and have the water flowing to the lamps in just ten minutes," said Phil Sadler, president of Sadler Machine Co., which designed and built the lunar greenhouse. "About 30 days later, you have vegetables."

Standing beside the growth chamber, which was overflowing with greenery despite the windowless CEAC lab, principal investigator and CEAC Director Gene Giacomelli said, "You can think of this as a robotic mechanism that is providing food, oxygen and fresh drinking water."

Giacomelli, a professor of agricultural and biosystems engineering and a member of the UA's BIO5 Institute, said that although this robot is built around living green plants – instead of the carbon fiber or steel usually associated with engineering devices – it still requires all the components common to any autonomous robotic system.

These components, which include sensors that gather data, algorithms to analyze that data and a control system to optimize performance, are being designed by assistant professor Roberto Furfaro of systems and industrial engineering, and associate professor Murat Kacira of agricultural and biosystems engineering.

"We want the system to operate itself," Kacira said. "However, we're also trying to devise a remote decision-support system that would allow an operator on Earth to intervene. The system can build its own analysis and predictions, but we want to have access to the data and the control system."

This is similar to the way a CEAC food-production system has been operating at the South Pole for the past six years.

The South Pole Growth Chamber, where many ideas now used in the lunar greenhouse were developed, was also designed and fabricated by Sadler Machine Co. It provides fresh food to the South Pole research station, which is physically cut off from the outside world for six to eight months each year.

In addition to food, the growth chamber provides a valuable psychological boost for scientists who overwinter at the station.

"There's only 5 percent humidity and all you can normally smell is diesel fuel and body odor," Sadler said. But now researchers can go into the growth chamber and smell vegetables and flowers and see living green things, breaking the monotony of thousands of square miles of ice and snow surrounding their completely man-made environment.

Lane Patterson, a master's student in agricultural and biosystems engineering and primary systems operations manager of CEAC's lunar greenhouse lab, also works for Raytheon Polar Services, which provides operations support for the South Pole Growth Chamber.

"If I need to be there in the chamber looking over an operator's shoulder, that's possible with the web camera," Patterson explained. "But if I need to make an adjustment to the chamber without the operator's assistance, I can do that electronically via computer communication."

Recycling and efficient use of resources are just as important to the South Pole operation as they will be on the moon, Sadler noted. A dozen 1,000-watt sodium-vapor lights generate a lot of heat, which is siphoned away by each lamp's cooling system and used to heat the station. "Energy is expensive there," Sadler said. "It's about $35 a gallon for diesel fuel."

In fact, efficient use of resources is just as important for hydroponic greenhouses anywhere on the globe, Giacomelli emphasized. "All that we learn from the life support system in the prototype lunar greenhouse can be applied right here on Earth," he added.

"On another planet, you need to minimize your labor, recycle all you can and operate as efficiently as possible," he said. "If I ask the manager of a hydroponic greenhouse in Willcox [Ariz.] what's most important, he or she will tell me those same things – recycle, minimize labor, minimize resource use."

Carbon dioxide is fed into the prototype greenhouse from pressurized tanks, but astronauts would provide CO2 at the lunar base just by breathing. Similarly, water for the plants would be extracted from astronaut urine, and the water-cooled electric lights might be replaced by fiber optic cable – essentially light pipes – which would channel sunlight from the surface to the plants underground.

The lunar greenhouse contains approximately 220 pounds of wet plant material that can provide 53 quarts of potable water and about three-quarters of a pound of oxygen during a 24-hour period, while consuming about 100 kilowatt-hours of electricity and a pound of carbon dioxide.
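The figures above invite a quick sanity check. This is a minimal back-of-envelope sketch: it treats the quoted electricity figure as energy (kilowatt-hours) over the 24-hour period, which is an assumption, and uses 0.946 liters per quart.

```python
# Daily resource budget for the prototype, from the figures quoted above.
# Assumption: the "100 kilowatts" figure is kilowatt-hours per 24 h.

QUART_L = 0.946  # liters per US quart

daily = {
    "potable_water_l": 53 * QUART_L,  # from ~220 lb of wet plant material
    "oxygen_lb": 0.75,
    "electricity_kwh": 100.0,         # assumed kWh over the 24-hour run
    "co2_consumed_lb": 1.0,
}

water_per_kwh = daily["potable_water_l"] / daily["electricity_kwh"]
print(f"{daily['potable_water_l']:.1f} L water/day, "
      f"{water_per_kwh:.2f} L per kWh")
```

Roughly half a liter of drinking water per kilowatt-hour under these assumptions, which is the kind of efficiency figure a mission planner (or a Willcox greenhouse manager) would want to track.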

"We turned the greenhouse on about eight months ago to see how it would operate and that test run will be completed on Sept. 30," Giacomelli said.

NASA is funding that research under a $70,000 Ralph Steckler Space Grant Colonization Research and Technology Development Opportunity, which CEAC obtained with help from UA's Lunar and Planetary Laboratory. The Steckler grants are designed to support research that could lead to space colonization, a better understanding of the lunar environment and creation of technologies that will support space colonies.

CEAC now is applying for Phase II of this grant, which would provide an additional $225,000 for two years.

Although NASA funds the test run, "everything you see in this room – the greenhouse module, lights, water system – came out of Phil Sadler's pocket," Giacomelli said. "I paid for the student help and pay the bills for the research space. Obviously, we think this is important work."

The UA researchers and Sadler Machine also are collaborating with two Italian firms on this project: Thales Alenia Space, a company that builds hardware for the International Space Station, and Aero Sekur, which builds inflatable structures.

Giacomelli said the research also could lead to plant colonization in another traditionally hostile environment – large urban centers.

"There's great interest in providing locally grown, fresh food in cities, for growing food right where masses of people are living," Giacomelli said. "It's the idea of growing high-quality fresh food that only has to be transported very short distances. There also would be a sense of agriculture returning to the everyday lives of urban dwellers."

"I think that idea is as exciting as establishing plant colonies on the moon."

(Photo: Norma Jean Gargasz / UANews)

University of Arizona

ENGINEERS MAKE ARTIFICIAL SKIN OUT OF NANOWIRES

Engineers at UC Berkeley have developed a pressure-sensitive electronic material from semiconductor nanowires that could one day give new meaning to the term "thin-skinned."

"The idea is to have a material that functions like the human skin, which means incorporating the ability to feel and touch objects," said Ali Javey, associate professor of electrical engineering and computer sciences and head of the UC Berkeley research team developing the artificial skin.

The artificial skin, dubbed "e-skin" by the UC Berkeley researchers, is described in a Sept. 12 paper in the advanced online publication of the journal Nature Materials. It is the first such material made out of inorganic single crystalline semiconductors.

A touch-sensitive artificial skin would help overcome a key challenge in robotics: adapting the amount of force needed to hold and manipulate a wide range of objects.

"Humans generally know how to hold a fragile egg without breaking it," said Javey, who is also a member of the Berkeley Sensor and Actuator Center and a faculty scientist at the Lawrence Berkeley National Laboratory Materials Sciences Division. "If we ever wanted a robot that could unload the dishes, for instance, we’d want to make sure it doesn’t break the wine glasses in the process. But we’d also want the robot to be able to grip a stock pot without dropping it."

A longer term goal would be to use the e-skin to restore the sense of touch to patients with prosthetic limbs, which would require significant advances in the integration of electronic sensors with the human nervous system.

Previous attempts to develop an artificial skin relied upon organic materials because they are flexible and easier to process.

"The problem is that organic materials are poor semiconductors, which means electronic devices made out of them would often require high voltages to operate the circuitry," said Javey. "Inorganic materials, such as crystalline silicon, on the other hand, have excellent electrical properties and can operate on low power. They are also more chemically stable. But historically, they have been inflexible and easy to crack. In this regard, works by various groups, including ours, have recently shown that miniaturized strips or wires of inorganics can be made highly flexible — ideal for high performance, mechanically bendable electronics and sensors."

The UC Berkeley engineers utilized an innovative fabrication technique that works somewhat like a lint roller in reverse. Instead of picking up fibers, nanowire "hairs" are deposited.

The researchers started by growing the germanium/silicon nanowires on a cylindrical drum, which was then rolled onto a sticky substrate. The substrate used was a polyimide film, but the researchers said the technique can work with a variety of materials, including other plastics, paper or glass. As the drum rolled, the nanowires were deposited, or “printed,” onto the substrate in an orderly fashion, forming the basis from which thin, flexible sheets of electronic materials could be built.

In another complementary approach utilized by the researchers, the nanowires were first grown on a flat source substrate, and then transferred to the polyimide film by a direction-rubbing process.

For the e-skin, the engineers printed the nanowires onto an 18-by-19 pixel square matrix measuring 7 centimeters on each side. Each pixel contained a transistor made up of hundreds of semiconductor nanowires. Nanowire transistors were then integrated with a pressure sensitive rubber on top to provide the sensing functionality. The matrix required less than 5 volts of power to operate and maintained its robustness after being subjected to more than 2,000 bending cycles.

The researchers demonstrated the ability of the e-skin to detect pressure from 0 to 15 kilopascals, a range comparable to the force used for such daily activities as typing on a keyboard or holding an object. In a nod to their home institution, the researchers successfully mapped out the letter C in Cal.
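The readout logic implied by this description can be sketched in a few lines. This is an illustrative sketch, not the authors' actual acquisition code: the 18-by-19 grid and the 0-15 kilopascal range come from the article, while the random stand-in readout and the 5 kPa touch threshold are assumptions.

```python
# Sketch of scanning a pressure-sensor pixel matrix like the e-skin's.
# Grid size and pressure range are from the article; the readout function
# and threshold are hypothetical stand-ins.

import random

ROWS, COLS = 18, 19   # pixel matrix described above
TOUCH_KPA = 5.0       # illustrative detection threshold

def read_matrix():
    """Stand-in for hardware readout: pressure in kPa per pixel."""
    return [[random.uniform(0.0, 15.0) for _ in range(COLS)]
            for _ in range(ROWS)]

def touch_map(frame, threshold=TOUCH_KPA):
    """Binarize the frame: 1 where pressure exceeds the threshold."""
    return [[1 if p > threshold else 0 for p in row] for row in frame]

frame = read_matrix()
binary = touch_map(frame)
print(sum(map(sum, binary)), "of", ROWS * COLS, "pixels pressed")
```

Mapping a letter "C", as the researchers did, amounts to pressing that shape onto the matrix and rendering the resulting binary map.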

"This is the first truly macroscale integration of ordered nanowire materials for a functional system — in this case, an electronic skin," said study lead author Kuniharu Takei, post-doctoral fellow in electrical engineering and computer sciences. "It’s a technique that can be potentially scaled up. The limit now to the size of the e-skin we developed is the size of the processing tools we are using."

(Photo: Ali Javey and Kuniharu Takei, UC Berkeley)

University of California, Berkeley

PERCEPTION OF EMOTION IS CULTURE-SPECIFIC

Want to know how a Japanese person is feeling? Pay attention to the tone of his voice, not his face. That’s what other Japanese people would do, anyway. A new study examines how Dutch and Japanese people assess others’ emotions and finds that Dutch people pay attention to the facial expression more than Japanese people do.

“As humans are social animals, it’s important for humans to understand the emotional state of other people to maintain good relationships,” says Akihiro Tanaka of Waseda Institute for Advanced Study in Japan. “When a man is smiling, probably he is happy, and when he is crying, probably he’s sad.” Most of the research on understanding the emotional state of others has been done on facial expression; Tanaka and his colleagues in Japan and the Netherlands wanted to know how vocal tone and facial expressions work together to give you a sense of someone else’s emotion.

For the study, Tanaka and colleagues made a video of actors saying a phrase with a neutral meaning—“Is that so?”—two ways: angrily and happily. This was done in both Japanese and Dutch. Then they edited the videos so that they also had recordings of someone saying the phrase angrily but with a happy face, and happily with an angry face. Volunteers watched the videos in their native language and in the other language and were asked whether the person was happy or angry. They found that Japanese participants paid attention to the voice more than Dutch people did—even when they were instructed to judge the emotion by the faces and to ignore the voice. The results are published in Psychological Science, a journal of the Association for Psychological Science.

This makes sense if you look at the differences between the way Dutch and Japanese people communicate, Tanaka speculates. “I think Japanese people tend to hide their negative emotions by smiling, but it’s more difficult to hide negative emotions in the voice.” Therefore, Japanese people may be used to listening for emotional cues. This could lead to confusion when a Dutch person, who is used to the voice and the face matching, talks with a Japanese person; they may see a smiling face and think everything is fine, while failing to notice the upset tone in the voice. “Our findings can contribute to better communication between different cultures,” Tanaka says.

Association for Psychological Science

RANDOM NUMBERS GAME WITH QUANTUM DICE

Behind every coincidence lies a plan - in the world of classical physics, at least. In principle, every event, including the fall of dice or the outcome of a game of roulette, can be explained in mathematical terms. Researchers at the Max Planck Institute for the Science of Light in Erlangen have constructed a device that works on the principle of true randomness. With the help of quantum physics, their machine generates random numbers that cannot be predicted in advance. The researchers exploit the fact that measurements based on quantum physics can only produce a special result with a certain degree of probability, that is, randomly. True random numbers are needed for the secure encryption of data and to enable the reliable simulation of economic processes and changes in the climate.

The phenomenon we commonly refer to as chance is merely a question of a lack of knowledge. If we knew the location, speed and other classical characteristics of all of the particles in the universe with absolute certainty, we would be able to predict almost all processes in the world of everyday experience. It would even be possible to predict the outcome of a puzzle or lottery numbers. Even when designed for this purpose, computer programs provide results that are far from random: "They merely simulate randomness, but with the help of suitable tests and a sufficient volume of data, a pattern can usually be identified," says Christoph Marquardt. In response to this problem, a group of researchers working with Gerd Leuchs and Christoph Marquardt at the Max Planck Institute for the Science of Light and the University of Erlangen-Nuremberg, together with Ulrik Andersen from the Technical University of Denmark, has developed a generator for true random numbers.

True randomness only exists in the world of quantum mechanics. A quantum particle will remain in one place or another and move at one speed or another with a certain degree of probability. "We exploit this randomness of quantum-mechanical processes to generate random numbers," says Christoph Marquardt.

The scientists use vacuum fluctuations as their quantum dice. Such fluctuations are another characteristic of the quantum world: even "nothing" is not truly empty there. Even in absolute darkness, the energy of half a photon is present and, although it remains invisible, it leaves tracks that are detectable in sophisticated measurements: these tracks take the form of quantum noise. This completely random noise only arises when the physicists look for it, that is, when they carry out a measurement.

To make the quantum noise visible, the scientists resorted once again to the quantum physics box of tricks: they split a strong laser beam into two equal parts using a beam splitter. A beam splitter has two input ports and two output ports. The researchers covered the second input port to block light from entering. The vacuum fluctuations were still there, however, and they influenced the two partial output beams. The physicists then sent the partial beams to detectors and measured the intensity of the photon stream. Each photon produced an electron, and the resulting electrical current was registered by the detectors.

When the scientists subtract the measurement curves produced by the two detectors from each other, they are not left with nothing. What remains is the quantum noise. "During measurement the quantum-mechanical wave function is converted into a measured value," says Christian Gabriel, who carried out the experiment with the random generator with his colleagues at the Max Planck Institute in Erlangen: "The statistics are predefined but the intensity measured remains a matter of pure chance." When plotted in a Gaussian bell-shaped curve, the weakest values arise frequently while the strongest occur rarely. The researchers divided the bell-shaped curve of the intensity spread into sections with areas of equal size and assigned a number to each section.
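The equal-area binning step can be sketched as follows. This is a hedged illustration, not the Erlangen group's code: the Gaussian samples here are pseudorandom stand-ins for the measured quantum noise, and the choice of eight sections is arbitrary.

```python
# Sketch of the binning described above: Gaussian noise samples are mapped
# to digits by slicing the bell curve into sections of equal probability
# mass. The noise source here is a pseudorandom stand-in, not real
# quantum noise, and the 8-section choice is an assumption.

import math
import random

SECTIONS = 8  # assumed number of equal-area slices

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def sample_to_digit(x):
    """Equal-area binning: the CDF maps x uniformly onto [0, 1)."""
    u = gaussian_cdf(x)
    return min(int(u * SECTIONS), SECTIONS - 1)

samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]
digits = [sample_to_digit(x) for x in samples]
assert set(digits) <= set(range(SECTIONS))
```

Because the CDF of any distribution maps its own samples onto a uniform distribution, each of the eight digits occurs with probability 1/8, even though values near the peak of the bell curve are far more common than values in its tails.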

Needless to say, the researchers did not decipher this quantum mechanics puzzle to pass the time during their coffee breaks. "True random numbers are difficult to generate but they are needed for a lot of applications," says Gerd Leuchs, Director of the Max Planck Institute for the Science of Light in Erlangen. Security technology, in particular, needs random combinations of numbers to encode bank data for transfer. Random numbers can also be used to simulate complex processes whose outcome depends on probabilities. For example, economists use such Monte Carlo simulations to predict market developments and meteorologists use them to model changes in the weather and climate.

There is a good reason why the Erlangen-based physicists chose to produce the random numbers using highly complex vacuum fluctuations rather than other random quantum processes. When physicists observe the velocity distribution of electrons or the quantum noise of a laser, for example, the random quantum noise is usually overlaid with classical noise, which is not random. "When we want to measure the quantum noise of a laser beam, we also observe classical noise that originates, for example, from a shaking mirror," says Christoffer Wittmann, who also worked on the experiment. In principle, the vibration of the mirror can be calculated as a classical physical process, and it therefore spoils the pure game of chance.

"Admittedly, we also get a certain amount of classical noise from the measurement electronics," says Wolfgang Mauerer, who studied this aspect of the experiment. "But we know our system very well and can calculate this noise very accurately and remove it." The vacuum fluctuations not only let the physicists tap into pure quantum noise; they also guarantee that no one else can listen in. "The vacuum fluctuations provide unique random numbers," says Christoph Marquardt. With other quantum processes, this proof is more difficult to provide, and the danger arises that a data spy will obtain a copy of the numbers. "This is precisely what we want to avoid in the case of random numbers for data keys," says Marquardt.

Although the quantum dice are based on mysterious phenomena from the quantum world that are entirely counterintuitive to everyday experience, the physicists do not require particularly sophisticated equipment to observe them. The technical components of their random generator can be found among the basic equipment used in many laser laboratories. "We do not need either a particularly good laser or particularly expensive detectors for the set-up," explains Christian Gabriel. This is, no doubt, one of the reasons why companies have already expressed interest in acquiring this technology for commercial use.

(Photo: MPI for the Physics of Light)

Max Planck Institute

MALE MATURITY SHAPED BY EARLY NUTRITION

It seems the old nature versus nurture debate can’t be won. But a new Northwestern University study of men in the Philippines makes a strong case for nurture’s role in male-female differences -- suggesting that rapid weight gain in the first six months of life predicts earlier puberty for boys.

Males who experienced rapid growth as babies -- an indication that they were not nutritionally stressed -- also were taller, had more muscle and were stronger, and had higher testosterone levels as young adults. They had sex for the first time at a younger age and were more likely to report having had sex in the past month, resulting in more lifetime sex partners.

The researchers think that testosterone may hold the key to understanding these long-term effects.

“Most people are unaware that male infants in the first six months of life produce testosterone at approximately the same level as an adult male,” said study author Christopher W. Kuzawa, associate professor of anthropology in the Weinberg College of Arts and Sciences and Institute for Policy Research fellow at Northwestern. “We looked at weight gain during this particular window of early life development, because testosterone is very high at this age and helps shape the differences between males and females.”

The study provides more evidence that genes alone do not shape our fate.

“The environment has a very strong hand in how we turn out,” Kuzawa said. “And this study extends that idea to the realm of sex differences and male biology.”

The study found that men, on average, tend to be taller and more muscular than women, and that the magnitude of that difference appears to be shaped by nutrition within the first six months of an infant male’s life.

“There is a perennial question about how important heredity is versus the environment as shapers of who we turn out to be,” said Kuzawa. “In the last 20 years, a lot has been learned about a process called developmental plasticity -- how the body responds early in life to things like nutrition and stress. Early experiences can have a permanent effect on how the body develops, and this effect can linger into adulthood. There is a lot of evidence that this can influence risk of diseases like heart attack, diabetes and hypertension -- really important diseases.”

Kuzawa and his collaborators applied the same framework in this study and found evidence that male characteristics -- such as height, muscle mass and testosterone levels as opposed to disease characteristics -- also relate back to early life developmental plasticity.

“Another way to look at it is that the differences between the sexes are not hard wired, but are responsive to the environment, and in particular to nutrition,” Kuzawa said.

Testosterone has long been known to increase muscle mass and to put a person on a higher growth trajectory toward greater height. The Northwestern study suggests that the age of puberty is also influenced by events in the first six months of life.

The study, which was funded by the National Science Foundation and the Wenner Gren Foundation, was conducted among a group of 770 Filipino males aged 20 to 22 who have been followed their entire lives. Since 1983 a team of researchers in the United States and the Philippines (including Kuzawa for about the last 10 years) has been working to understand how early life nutrition influences adult health, such as risk for cardiovascular disease and diabetes.

“Rapid Weight Gain After Birth Predicts Life History and Reproductive Strategy in Filipino Males” was published Sept. 13 in the Proceedings of the National Academy of Sciences. The study’s co-authors are Thomas W. McDade, associate professor of anthropology and Institute for Policy Research fellow, Northwestern University, Linda S. Adair, University of North Carolina at Chapel Hill, and Nanette Lee, University of San Carlos of the Office of Population Studies in Cebu City, Philippines.

Northwestern University

INFANTS' PERIPHERAL VISION IS CLUTTERED

Our eyes are windows to the world, but what is the visual experience of infants? We know that infant vision tends to be blurrier than adults'. Now researchers from UC Davis, UC Berkeley and Stanford University have discovered that they also have much poorer peripheral vision.

"Infants only perceive what they stare at, and they perceive a jumbled, tangled mass of features in their periphery," said Faraz Farzin, a postdoctoral researcher at Stanford University who conducted the work as a graduate student at the UC Davis Center for Mind and Brain.

At play is a phenomenon that brain scientists call "crowding," in which the clutter of images in the peripheral vision makes it difficult to recognize an individual object. Adults also suffer from crowding, but much less than infants, Farzin said.

The results were published online this month in the journal Psychological Science.

Farzin and co-authors Susan Rivera, associate professor at the UC Davis Department of Psychology, Center for Mind and Brain and the MIND Institute; and David Whitney, associate professor of psychology at UC Berkeley and a research associate at the Center for Mind and Brain, used eye-tracking to find what babies will turn their eyes to look at in their periphery.

They showed the infants, six to 15 months of age, pairs of images of faces, one upright and one upside-down, with or without flanking images around them. The infants could discriminate the upright face in their peripheral vision, but not when it was crowded by flanking images, they found.

The finding has both theoretical and practical value, Farzin said.

"Identifying and reaching for a specific toy in a pile is no easy feat for a baby," she said. Knowing the visual limits of a typically developing baby also helps psychologists to identify when visual development is lagging.

The study was supported by grants from the National Institutes of Health and the National Science Foundation.

University of California, Davis

Tuesday, September 28, 2010

HOME'S ELECTRICAL WIRING ACTS AS ANTENNA TO RECEIVE LOW-POWER SENSOR DATA

If these walls had ears, they might tell a homeowner some interesting things. Like when water is dripping into an attic crawl space, or where an open window is letting hot air escape during winter.

The walls do have ears, thanks to a device that uses a home's electrical wiring as a giant antenna. Sensors developed by researchers at the University of Washington and the Georgia Institute of Technology use residential wiring to transmit information to and from almost anywhere in the home, allowing for wireless sensors that run for decades on a single watch battery. The technology, which could be used in home automation or medical monitoring, will be presented this month at the Ubiquitous Computing conference in Copenhagen, Denmark.

Low-cost sensors recording a building's temperature, humidity, light level or air quality are central to the concept of a smart, energy-efficient home that automatically adapts to its surroundings. But that concept has yet to become a reality.

"When you look at home sensing, and home automation in general, it hasn't really taken off," said principal investigator Shwetak Patel, a UW assistant professor of computer science and of electrical engineering. "Existing technology is still power hungry, and not as easy to deploy as you would want it to be."

That's largely because today's wireless devices either transmit a signal only several feet, Patel said, or consume so much energy they need frequent battery replacements.

"Here, we can imagine this having an out-of-the-box experience where the device already has a battery in it, and it's ready to go and run for many years," Patel said. Users could easily sprinkle dozens of sensors throughout the home, even behind walls or in hard-to-reach places like attics or crawl spaces.

Patel's team has devised a way to use copper electrical wiring as a giant antenna to receive wireless signals at a set frequency. A low-power sensor placed within 10 to 15 feet of electrical wiring can use the antenna to send data to a single base station plugged in anywhere in the home.

The device is called Sensor Nodes Utilizing Powerline Infrastructure, or SNUPI. It originated when Patel and co-author Erich Stuntebeck were doctoral students at Georgia Tech and worked with thesis adviser Gregory Abowd to develop a method using electrical wiring to receive wireless signals in a home. They discovered that home wiring is a remarkably efficient antenna at 27 megahertz. Since then, Patel's team at the UW has built the actual sensors and refined this method. Other co-authors are UW's Gabe Cohn, Jagdish Pandey and Brian Otis.

Cohn, a UW doctoral student in electrical engineering, was lead student researcher and tested the system. In a 3,000-square-foot house he tried five locations in each room and found that only 5 percent of the house was out of the system's range, compared to 23 percent when using over-the-air communication at the same power level. Cohn also discovered some surprising twists -- that the sensors can transmit near bathtubs because the electrical grounding wire is typically tied to the copper plumbing pipes, that a lamp cord plugged into an outlet acts as part of the antenna, and that outdoor wiring can extend the sensors' range outside the home.

While traditional wireless systems have trouble sending signals through walls, this system actually does better around walls that contain electrical wiring.

Most significantly, SNUPI uses less than 1 percent as much power for data transmission as the next most efficient system.

"Existing nodes consumed the vast majority of their power, more than 90 percent, in wireless communication," Cohn said. "We've flipped that. Most of our power is consumed in the computation, because we made the power for wireless communication almost negligible."

The existing prototype uses UW-built custom electronics and consumes less than 1 milliwatt of power when transmitting, with less than 10 percent of that devoted to communication. Depending on the attached sensor, the device could run continuously for 50 years, much longer than the decade-long shelf life of its battery.
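The arithmetic behind such lifetimes is easy to sketch. The following back-of-envelope calculation is illustrative only: the article does not specify the battery, so a standard 3 V, 225 mAh coin cell, a 0.1 percent transmit duty cycle, and a microwatt of sleep current are assumed here.

```python
# Illustrative battery-life arithmetic (the article does not specify the
# battery chemistry; a 3 V, 225 mAh coin cell is assumed here).
def runtime_years(capacity_mah, voltage_v, avg_power_w):
    """Years of continuous operation from a battery, ignoring self-discharge."""
    energy_j = capacity_mah / 1000 * 3600 * voltage_v  # mAh -> joules
    seconds = energy_j / avg_power_w
    return seconds / (365.25 * 24 * 3600)

# Transmitting at 1 mW continuously would drain the cell in under a month...
always_on = runtime_years(225, 3.0, 1e-3)

# ...but a sensor that reports only briefly (assume a 0.1% duty cycle and
# ~1 microwatt of sleep current) averages just a few microwatts.
avg_power = 1e-3 * 0.001 + 1e-6
duty_cycled = runtime_years(225, 3.0, avg_power)

print(f"always on: {always_on:.2f} years, duty-cycled: {duty_cycled:.0f} years")
```

The point is that the radio's peak draw matters far less than its average: duty-cycling a milliwatt-level transmitter down to brief, infrequent reports pushes average power into the microwatt range, where the battery's shelf life, not its capacity, becomes the limit.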

"Basically, the battery will start to decompose before it runs out of power," Patel said.

Longer-term applications might consider using more costly medical-grade batteries, which have a longer shelf life. The team is also looking to reduce the power consumption even further so no battery would be needed. They say they're already near the point where solar energy or body motion could provide enough energy.

The researchers are commercializing the base technology, which they believe could be used as a platform for a variety of sensing systems.

Another potential application is in health care. Medical monitoring needs a compact device that can sense pulse, blood pressure or other properties and beam the information back to a central database, without requiring patients to replace the batteries.

The technology does not interfere with electricity flow or with other emerging systems that use electrical wiring to transmit Ethernet signals between devices plugged into two outlets.

University of Washington

LINK TO AUTISM IN BOYS FOUND IN MISSING DNA


New research from the Centre for Addiction and Mental Health (CAMH) and The Hospital for Sick Children (SickKids), both in Toronto, Canada, provides further clues as to why Autism Spectrum Disorder (ASD) affects four times more males than females. The scientists discovered that males with specific alterations of DNA on the sole X-chromosome they carry are at high risk of developing ASD. The research is published in the September 15 issue of Science Translational Medicine.

ASD is a neurological disorder that affects brain functioning, resulting in challenges with communication and social interaction, unusual patterns of behaviour, and often, intellectual deficits. ASD affects one in every 120 children and a startling one in 70 boys. Though not all of the causes of ASD are yet known, research has increasingly pointed towards genetic factors. In recent years, several genes involved in ASD have been successfully identified.

The research team was led by Dr. John B. Vincent, Senior Scientist and head of CAMH's Molecular Neuropsychiatry and Development Laboratory and Dr. Stephen Scherer, Senior Scientist and Director of The Centre for Applied Genomics at SickKids, and Director of the McLaughlin Centre at the University of Toronto. The scientists analyzed the gene sequences of 2,000 individuals with ASD, along with others with an intellectual disability, and compared the results to thousands of population controls. They found that about one per cent of boys with ASD had mutations in the PTCHD1 gene on the X-chromosome. Similar mutations were not found in thousands of male controls. Also, sisters carrying the same mutation are seemingly unaffected.

"We believe that the PTCHD1 gene has a role in a neurobiological pathway that delivers information to cells during brain development – this specific mutation may disrupt crucial developmental processes, contributing to the onset of autism," said Dr. Vincent. "Our discovery will facilitate early detection, which will, in turn, increase the likelihood of successful interventions."

"The male gender bias in autism has intrigued us for years and now we have an indicator that starts to explain why this may be," says Dr. Scherer. "Boys are boys because they inherit one X-chromosome from their mother and one Y-chromosome from their father. If a boy's X-chromosome is missing the PTCHD1 gene or other nearby DNA sequences, they will be at high risk of developing ASD or intellectual disability. Girls are different in that, even if they are missing one PTCHD1 gene, by nature they always carry a second X-chromosome, shielding them from ASD." Scherer adds, "While these women are protected, autism could appear in future generations of boys in their families."

Researchers hope further investigation into the PTCHD1 gene will also indicate potential avenues for new therapy.

(Photo: CAMH)

CAMH

TOWARD RESOLVING DARWIN'S 'ABOMINABLE MYSTERY'


What, in nature, drives the incredible diversity of flowers? This question has sparked debate since Darwin described flower diversification as an 'abominable mystery.' The answer has become a lot clearer, according to scientists at the University of Calgary, whose research on the subject is published today in the online edition of the journal Ecology Letters.

Drs. Jana Vamosi and Steven Vamosi of the Department of Biological Sciences have found through extensive statistical analysis that the size of the geographical area is the most important factor when it comes to biodiversity of a particular flowering plant family.

The researchers were looking at the underlying forces at work spurring diversity -- why, for example, some flowering plant families, such as the orchids, contain some 22,000 species, while others, like the buffaloberry family, contain only about forty. In other words, what factors have produced today's biodiversity?

"Our research found that the most important factor is available area. The number of species in a lineage is most keenly determined by the size of the continent (or continents) that it occupies," says Jana Vamosi.

Steven Vamosi adds that while the findings of this research mostly shed light on what produces the world's diversity, they may help explain patterns of extinction as well.

"The next step is to determine if patterns of extinction risk mirror those observed for diversification, specifically to contrast the relative influence of available area and traits," he says.

Typically, when it comes to explaining the biodiversity of flowering plants, biologists' opinions fall into three different camps: family traits (for example, a showy flower versus a plain flower), environment (tropical versus arid climate) or sheer luck in geography (a seed makes its way to a new continent and expands the geographical range of a family).

But the Vamosi research demonstrates that geography isn't the only answer: traits of the family came in a close second. Traits that may encourage greater diversity are known as "key innovations," and scientists have hypothesized that some families possess more species because they are herbs, possess fleshy fruits (such as an apple or peach), or have flowers with a more complex morphology. Zygomorphy (when a flower can be divided into two equal mirror images along only one plane) is thought to restrict the types of pollinators that can take nectar and pollen from the flower. Flies, for instance, won't often visit zygomorphic flowers. Bees, on the other hand, adore them.

"Although geography may play a primary role, a close second is the flower morphology of the plants in a particular family," says Jana Vamosi. "So essentially all camps may claim partial victory because morphological traits should be considered in the context of geographical area."

(Photo: Riley Brandt)

University of Calgary

CARBON NANOTUBES TWICE AS STRONG AS ONCE THOUGHT

Carbon nanotubes — those tiny particles poised to revolutionize electronics, medicine, and other areas — are much bigger in the strength department than anyone ever thought, scientists are reporting. New studies on the strength of these submicroscopic cylinders of carbon indicate that on an ounce-for-ounce basis they are at least 117 times stronger than steel and 30 times stronger than Kevlar, the material used in bulletproof vests and other products. The findings, which could expand commercial and industrial applications of nanotube materials, appear in the monthly journal ACS Nano.

Stephen Cronin and colleagues point out that nanotubes — barely 1/50,000th the width of a human hair — have been renowned for exceptional strength, high electrical conductivity, and other properties. Nanotubes can stretch considerably, like toffee, before breaking. This makes them ideal for a variety of futuristic applications, even, if science fiction ever becomes reality, as cables in "space elevators" that lift objects from the Earth's surface into orbit.

To resolve uncertainties about the actual strength of nanotubes, the scientists applied immense tension to individual carbon nanotubes of different lengths and widths. They found that nanotubes could be stretched by up to 14 percent of their normal length without breaking, which is more than twice the strain reported in previous studies. The finding establishes "a new lower limit for the ultimate strength of carbon nanotubes," the article noted.

ACS

UNDERSTANDING BEHAVIORAL PATTERNS: WHY BIRD FLOCKS MOVE IN UNISON


Animal flocks, be it honeybees, fish, ants or birds, often move in surprising synchronicity and seemingly make unanimous decisions at a moment's notice, a phenomenon which has remained puzzling to many researchers.

New research published in New Journal of Physics (co-owned by the Institute of Physics and German Physical Society), uses a particle model to explain the collective decision making process of flocks of birds landing on foraging flights.

Using a simple self-propelled particle (SPP) system, which sees the birds represented by particles with such parameters as position and velocity, the researchers from Budapest, Hungary, find that the collective switching from the flying to the landing state overrides the individual landing intentions of each bird.

In the absence of a decision-making leader, the collective shift to land is heavily influenced by perturbations the individual birds are subject to, such as the birds' flying position within the flock. This can be compared to an avalanche in a pile of sand, which would occur eventually even for perfectly symmetric and carefully placed grains, but in reality happens much sooner because of increasing, non-linear fluctuations.
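A minimal self-propelled-particle sketch illustrates the idea. The model below is a toy version written for this article, not the Budapest group's actual equations: each bird's "intention to land" is pulled toward the average over a random sample of flockmates, plus a small individual urge and some noise, and the collective switch emerges with no leader.

```python
import random

# Toy SPP-style model of collective landing (illustrative only; parameter
# names and update rules are not the authors' exact equations).
def simulate(n=100, neighbors=10, bias=0.02, steps=200, seed=1):
    rng = random.Random(seed)
    intent = [0.0] * n               # 0 = flying, 1 = committed to landing
    for _ in range(steps):
        new = []
        for i in range(n):
            sample = rng.sample(range(n), neighbors)
            mean = sum(intent[j] for j in sample) / neighbors
            # follow the neighbors, plus a small individual urge to land
            x = mean + bias + rng.gauss(0, 0.01)
            new.append(min(1.0, max(0.0, x)))
        intent = new
    return sum(intent) / n           # flock-average landing intention

final = simulate()
print(f"mean landing intention after 200 steps: {final:.2f}")
```

Because every bird copies its neighbors, small perturbations propagate through the whole flock rather than staying with the individual that felt them, so the group's switch to landing is sharper than any one bird's individual drift.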

As the researchers explain, "Our main motivation was to better understand something which is puzzling and out there in nature, especially in cases involving the stopping or starting of a collective behavioural pattern in a group of people or animals.

"We propose a simple model for a system whose members have the tendency to follow the others both in space and in their state of mind concerning a decision about stopping an activity. This is a very general model, which can be applied to similar situations."

Possible applications include collectively flying unmanned aerial vehicles, initiating a desired motion pattern in crowds or groups of animals, and even finance, where the results could be used to interpret collective effects on selling or buying shares on the stock market.

(Photo: IoP)

IoP

LEARNING TO LIVE ON LAND: HOW SOME EARLY PLANTS OVERCAME AN EVOLUTIONARY HURDLE


The diversity of life that can be seen in environments ranging from the rainforests of the Amazon to the spring blooms of the Mojave Desert is awe-inspiring. But this diversity would not be possible if the ancestors of modern plants had just stayed in the water with their green algal cousins. Moving onto dry land required major lifestyle changes to adapt to this new "hostile" environment, and in turn helped change global climate and atmospheric conditions into those we recognize today. By absorbing carbon while making food, and releasing oxygen, early plants shaped ecosystems into a more hospitable environment, paving the way for animals to make a parallel journey onto land.

New research by Dr. Linda Graham and colleagues at the University of Wisconsin, Madison focuses on this transition and adaptive changes in the uptake of carbon-based compounds, such as sugars. This work, which is published in the September issue of the American Journal of Botany, suggests a basis for incorporating evolutionary/paleontological information into global carbon cycling models.

All plants descended from a group of ancestral green algae, whose modern representatives thrive in aqueous environments. The simplest of modern land plants—several groups of bryophytes—are the closest living relatives to the first plants to colonize land. By comparing green algae and bryophytes, Graham and her co-researchers obtained insight into the evolutionary hurdles that plants needed to overcome to transition successfully to life on land, and how early plants' success influenced carbon cycling.

The researchers quantified and compared growth responses to exogenously (externally) supplied sugars in two green algae, Cylindrocystis brebissoni and Mougeotia sp., and one peat moss species, Sphagnum compactum. They found that sugar/carbon uptake in peat moss was not restricted to the products of photosynthesis. Rather, addition of sugars to the growth media increased biomass almost 40-fold. This ability to utilize sugars not only from photosynthesis but also from the environment is called mixotrophy, which was not previously thought to play a significant role in the growth of mosses. The two green algae also responded to external sugar, though less so than the peat moss.

Peat mosses "store a large percentage of global soil carbon, thereby helping to stabilize Earth's atmospheric chemistry and climate," stated Graham.

This has far-ranging implications for global carbon cycling, because previous work examining the response of mosses to carbon availability assumed that carbon dioxide was the only carbon source available to peat mosses and ancestral plants. The new results indicate that efforts to model global atmospheric and climate changes, both in the present and millions of years ago during the colonization of land, should take the mixotrophic behavior of early diverging plants into account.

Graham and her co-researchers have enjoyed a cross-hemispheric partnership, from Wisconsin north to Canada and south to Chile, and look forward to comparing the biology of Northern and Southern hemisphere peat mosses. In particular, they would like to "explore in more depth the role of sugars in the establishment of ecologically important microbial symbioses, particularly nitrogen-fixing cyanobacteria living with peat mosses," explained Graham.

(Photo: Lee Wilcox, University of Wisconsin, Madison, Wisconsin)

American Journal of Botany

Monday, September 27, 2010

NEW SUPERCOMPUTER SEES WELL ENOUGH TO DRIVE A CAR SOMEDAY

Navigating our way down the street is something most of us take for granted; we seem to recognize cars, other people, trees and lampposts instantaneously and without much thought. In fact, visually interpreting our environment as quickly as we do is an astonishing feat requiring an enormous number of computations—which is just one reason that coming up with a computer-driven system that can mimic the human brain in visually recognizing objects has proven so difficult.

Now Eugenio Culurciello of Yale’s School of Engineering & Applied Science has developed a supercomputer based on the human visual system that operates much more quickly and efficiently than ever before. Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it. Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One idea — the one Culurciello and LeCun are focusing on — is a system that would allow cars to drive themselves. In order to recognize the various objects encountered on the road — such as other cars, people, stoplights, sidewalks, not to mention the road itself — NeuFlow processes tens of megapixel images in real time.

The system is also extremely efficient, simultaneously running more than 100 billion operations per second using only a few watts (that’s less than the power a cell phone uses) to accomplish what it takes bench-top computers with multiple graphic processors more than 300 watts to achieve.
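Taking the quoted figures at face value, the efficiency gap is easy to quantify. The calculation below assumes "a few watts" means roughly 3 W, a number not stated precisely in the article.

```python
# Rough efficiency comparison using the figures quoted above (assuming ~3 W
# for NeuFlow versus 300 W for a multi-GPU bench-top system delivering
# comparable throughput on the same vision task).
ops_per_second = 100e9
neuflow_watts, gpu_watts = 3.0, 300.0

neuflow_eff = ops_per_second / neuflow_watts   # operations per joule
print(f"NeuFlow: {neuflow_eff / 1e9:.0f} Gops/W, "
      f"about {gpu_watts / neuflow_watts:.0f}x the energy efficiency "
      f"of the bench-top system at equal throughput")
```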

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted-living settings, where it could monitor motion and call for help should an elderly person fall, for example.

Yale University

RESEARCH SHOWS RADIOMETRIC DATING STILL RELIABLE (AGAIN)


Recent puzzling observations of tiny variations in nuclear decay rates have led some to question the science of using decay rates to determine the relative ages of rocks and organic materials. Scientists from the National Institute of Standards and Technology (NIST), working with researchers from Purdue University, the University of Tennessee, Oak Ridge National Laboratory and Wabash College, tested the hypothesis that solar radiation might affect the rate at which radioactive elements decay and found no detectable effect.

Atoms of radioactive isotopes are unstable and decay over time by shooting off particles at a fixed rate, transmuting the material into a more stable substance. For instance, half the mass of carbon-14, an unstable isotope of carbon, will decay into nitrogen-14 over a period of 5,730 years. The unswerving regularity of this decay allows scientists to determine the age of extremely old organic materials—such as remains of Paleolithic campfires—with a fair degree of precision. The decay of uranium-238, which has a half-life of nearly 4.5 billion years, enabled geologists to determine the age of the Earth.
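The dating arithmetic follows directly from the exponential decay law the paragraph describes. A short sketch, using the carbon-14 half-life quoted above:

```python
import math

# Age from the standard decay law: N(t) = N0 * 2**(-t / t_half),
# so t = -t_half * log2(N / N0).
def age_from_fraction(fraction_remaining, half_life_years=5730):
    """Years elapsed, given the fraction of the parent isotope remaining."""
    return -half_life_years * math.log2(fraction_remaining)

print(age_from_fraction(0.5))          # one half-life: 5730.0 years
print(round(age_from_fraction(0.25)))  # two half-lives: 11460 years
```

The same formula, with uranium-238's 4.5-billion-year half-life substituted, underlies the dating of rocks; the whole method rests on the decay rate being the fixed constant that the NIST experiment set out to test.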

Many scientists, including Marie and Pierre Curie, Ernest Rutherford and George de Hevesy, have attempted to influence the rate of radioactive decay by radically changing the pressure, temperature, magnetic field, acceleration, or radiation environment of the source. No experiment to date has detected any change in rates of decay.

Recently, however, researchers at Purdue University observed a small (a fraction of a percent), transitory deviation in radioactive decay at the time of a huge solar flare. Data from laboratories in New York and Germany also have shown similarly tiny deviations over the course of a year. This has led some to suggest that Earth's distance from the sun, which varies during the year and affects the planet's exposure to solar neutrinos, might be related to these anomalies.

Researchers from NIST and Purdue tested this by comparing radioactive gold-198 in two shapes, spheres and thin foils, with the same mass and activity. Gold-198 releases neutrinos as it decays. The team reasoned that if neutrinos are affecting the decay rate, the atoms in the spheres should decay more slowly than the atoms in the foil because the neutrinos emitted by the atoms in the spheres would have a greater chance of interacting with their neighboring atoms. The maximum neutrino flux in the sample in their experiments was several times greater than the flux of neutrinos from the sun. The researchers followed the gamma-ray emission rate of each source for several weeks and found no difference between the decay rate of the spheres and the corresponding foils.

According to NIST scientist emeritus Richard Lindstrom, the variations observed in other experiments may have been due to environmental conditions interfering with the instruments themselves.

"There are always more unknowns in your measurements than you can think of," Lindstrom says.

(Photo: ©Zoltan Pataki/courtesy Shutterstock)

National Institute of Standards and Technology (NIST)

AEROBIC EXERCISE RELIEVES INSOMNIA

The millions of middle-aged and older adults who suffer from insomnia have a new drug-free prescription for a more restful night’s sleep. Regular aerobic exercise improves the quality of sleep, mood and vitality, according to a small but significant new study from Northwestern Medicine.

The study is the first to examine the effect of aerobic exercise on middle-aged and older adults with a diagnosis of insomnia. About 50 percent of people in these age groups complain of chronic insomnia symptoms.

The aerobic exercise trial resulted in the most dramatic improvement in patients’ reported quality of sleep, including sleep duration, compared to any other non-pharmacological intervention.

“This is relevant to a huge portion of the population,” said Phyllis Zee, M.D., director of the Sleep Disorders Center at Northwestern Medicine and senior author of a paper to be published in the October issue of Sleep Medicine. The lead author is Kathryn Reid, research assistant professor at Feinberg.

“Insomnia increases with age,” Zee said. “Around middle age, sleep begins to change dramatically. It is essential that we identify behavioral ways to improve sleep. Now we have promising results showing aerobic exercise is a simple strategy to help people sleep better and feel more vigorous.”

The drug-free strategy also is desirable, because it eliminates the potential of a sleeping medication interacting with other drugs a person may be taking, Reid said.

Sleep is an essential part of a healthy lifestyle, like nutrition and exercise, noted Zee, a professor of neurology, neurobiology, and physiology at Northwestern University Feinberg School of Medicine and a physician at Northwestern Memorial Hospital.

“By improving a person’s sleep, you can improve their physical and mental health,” Zee said. “Sleep is a barometer of health, like someone’s temperature. It should be the fifth vital sign. If a person says he or she isn’t sleeping well, we know they are more likely to be in poor health with problems managing their hypertension or diabetes.”

The study included 23 sedentary adults, primarily women, aged 55 and older, who had difficulty falling asleep and/or staying asleep and impaired daytime functioning. Women have the highest prevalence of insomnia. After a conditioning period, the aerobic physical activity group exercised either for two 20-minute sessions four times per week or for one 30-to-40-minute session four times per week, in both cases for 16 weeks. Participants worked at 75 percent of their maximum heart rate on at least two activities, including walking or using a stationary bicycle or treadmill.

Participants in the non-physical activity group participated in recreational or educational activities, such as a cooking class or a museum lecture, which met for about 45 minutes three to five times per week for 16 weeks.

Both groups received education about good sleep hygiene, which includes sleeping in a cool, dark and quiet room, going to bed at the same time every night, and not staying in bed too long if you can't fall asleep.

Exercise improved the participants’ self-reported sleep quality, elevating them from a diagnosis of poor sleeper to good sleeper. They also reported fewer depressive symptoms, more vitality and less daytime sleepiness.

“Better sleep gave them pep, that magical ingredient that makes you want to get up and get out into the world to do things,” Reid said.

The participants’ scores on the Pittsburgh Sleep Quality Index dropped an average of 4.8 points. (A higher score indicates worse sleep.) In a prior study using t’ai chi as a sleep intervention, for example, participants’ average scores dropped 1.8 points.

“Exercise is good for metabolism, weight management and cardiovascular health and now it’s good for sleep,” Zee said.

The research was funded by the National Institute on Aging.

Northwestern University

Friday, September 24, 2010

AVOIDING AN ASTEROID COLLISION


Though it was once believed that all asteroids are giant pieces of solid rock, later hypotheses have it that some are actually a collection of small gravel-sized rocks, held together by gravity. If one of these "rubble piles" spins fast enough, it's speculated that pieces could separate from it through centrifugal force and form a second collection — in effect, a second asteroid.

Now researchers at Tel Aviv University, in collaboration with an international group of scientists, have proved the existence of these theoretical "separated asteroid" pairs.

Ph.D. student David Polishook of Tel Aviv University's Department of Geophysics and Planetary Sciences and his supervisor Dr. Noah Brosch of the university's School of Physics and Astronomy say the research has not only verified a theory, but could have greater implications if an asteroid passes close to Earth. Instead of a solid mountain colliding with Earth's surface, says Dr. Brosch, the planet would be pelted with the innumerable pebbles and rocks that comprise it, like a shotgun blast instead of a single cannonball. This knowledge could guide the defensive tactics to be taken if an asteroid were on track to collide with the Earth.

A large part of the research for the study, recently published in the journal Nature, was done at Tel Aviv University's Wise Observatory, located deep in the Negev Desert — the first and only modern astronomical observatory in the Middle East.

According to Dr. Brosch, separated asteroids are composed of small pebbles glued together by gravitational attraction. Their paths are affected by the gravitational pull of major planets, but the radiation of the sun, he says, can also have an immense impact. As the asteroid absorbs sunlight and re-emits the heat, its rotation gradually speeds up. When it reaches a certain speed, a piece will break off to form a separate asteroid.
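The breakup condition can be put on a back-of-envelope footing. For a rubble pile modeled as a uniform, strengthless sphere, loose material at the equator is shed once centrifugal acceleration exceeds surface gravity; the density value below is an assumption chosen for illustration.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Breakup condition for a strengthless rubble pile: material at the equator
# is shed once centrifugal acceleration (omega^2 * R) exceeds surface gravity
# (G * M / R^2). For a uniform sphere of density rho, the radius cancels and
# the critical spin period is T = sqrt(3 * pi / (G * rho)).
def critical_period_hours(density_kg_m3):
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600

# For a typical rubble-pile density of ~2 g/cm^3 the limit is roughly 2.3 h.
print(f"{critical_period_hours(2000):.2f} hours")
```

Notably, the answer is independent of the asteroid's size, which is why a single "spin barrier" of roughly two hours shows up across rubble piles of very different radii.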

The phenomenon can be compared to a figure skater on the ice. "The faster they spin, the harder it is for them to keep their arms close to their bodies," explains Dr. Brosch.

As a result, asteroid pairs are formed, characterized by the trajectory of their rotation around the sun. Though they may be millions of miles apart, the two asteroids share the same orbit. Dr. Brosch says this demonstrates that they come from the same original asteroid source.

During the course of the study, Polishook and an international group of astronomers studied 35 asteroid pairs. Traditionally, measuring bodies in the solar system involves studying photographic images. But the small size and extreme distance of the asteroids forced researchers to measure these pairs in an innovative way.

Instead, researchers measured the light reflected from each member of the asteroid pairs. The results proved that in each asteroid pair, one body was formed from the other. The smaller asteroid, he explains, was always less than forty percent of the size of the bigger asteroid. These findings fit precisely into a theory developed at the University of Colorado at Boulder, which concluded that no more than forty percent of the original asteroid can split off.

With this study, says Dr. Brosch, researchers have been able to prove the connection between two separate spinning asteroids and demonstrate the existence of asteroids that exist in paired relationships.

(Photo: TAU)

Tel Aviv University
