Thursday, September 30, 2010

RICE STUDY EXAMINES HOW BACTERIA ACQUIRE IMMUNITY

In a new study, Rice University scientists bring the latest tools of computational biology to bear in examining how the processes of natural selection and evolution influence the way bacteria acquire immunity from disease.

The study is available online from Physical Review Letters. It builds upon one of the major discoveries made possible by molecular genetics in the past decade -- the revelation that bacteria and similar single-celled organisms have an acquired immune system.

"From a purely scientific perspective, this research is teaching us things we couldn't have imagined just a few years ago, but there's an applied interest in this work as well," said Michael Deem, the John W. Cox Professor in Biochemical and Genetic Engineering and professor of physics and astronomy at Rice. "It is believed, for instance, that the bacterial immune system uses a process akin to RNA interference to silence the disease genes it recognizes, and biotechnology companies may find it useful to develop this as a tool for silencing particular genes."

The new study by Deem and graduate student Jiankui He focused on a portion of the bacterial genome called the "CRISPR," which stands for "clustered regularly interspaced short palindromic repeats." The CRISPR contains two types of DNA sequences. One type -- short, repeating patterns that first attracted scientific interest -- is what led to the CRISPR name. But scientists more recently learned that the second type -- originally thought of as DNA "spacers" between the repeats -- is what the organism uses to recognize disease.

"Bacteria get attacked by viruses called phages, and the CRISPR contain genetic sequences from phages," Deem said. "The CRISPR system is both inheritable and programmable, meaning that some sequences may be there when the organism is first created, and new ones may also be added when new phages attack the organism during its life cycle."

The repeating sequences appear to be a kind of bookend or flag that the organism uses to determine where a snippet from a phage begins and ends. The CRISPR will often have between 30 and 50 of these snippets of phage sequences. Previous studies have found that once a bacterium has a phage sequence in its CRISPR, it can degrade any DNA or RNA that matches that sequence -- meaning it can fend off attacks from any phage whose genes match those in its CRISPR.

"What we wanted to explore was how the history of a bacterium's exposure to phages influences what's in the CRISPR," Deem said. "In other words, how is an organism's previous exposure to viruses reflected in its own genome?"

From earlier published studies, Deem and He knew that phage sequences were added to the CRISPR sequentially. So, in a CRISPR system containing 30 snippets, the newest one would be in position one, at the front of the line. In another study in 2007, researchers examining the CRISPR of whole populations of bacteria noticed some statistical irregularities. They found that the likelihood of two different organisms having the same snippet in their CRISPR increased exponentially with distance from position one. So, in the organism with 30 snippets, the phage gene in position 30 was the most likely to be conserved time and again across all the bacteria in the population.

To use the power of computers to examine why this happens, Deem and He needed a mathematical description of what was happening over time to both the bacterial and phage populations. The equations they created reflect the way the bacterial and phage populations interact via the CRISPR.

"Each population is trying to expand, and selective pressure is constantly being applied on both sides," Deem said. "You can see how this plays out in the CRISPR over time. There's a diverse assortment of genes in the first spacer, but the second spacer has been in there longer, so there's been more selective pressure applied to that spacer. Because bacteria that contain the dominant viral strain in their CRISPR are more likely to survive than those that don't, they tend to squeeze out their neighbors that are more vulnerable. At position N, the farthest way from position one, selection has been at work the longest, so the genes we find there were the most common and the ones that tended to afford the most overall protection to the organism."

In addition to interest from biotechnology firms, Deem said the workings of the CRISPR are of interest to drugmakers who are investigating new types of antibiotics.

Rice University

CHILDREN UNDER 4 AND CHILDREN WITH AUTISM DON'T YAWN CONTAGIOUSLY

If someone near you yawns, do you yawn, too? About half of adults yawn after someone else does in a phenomenon called contagious yawning. Now a new study has found that most children aren't susceptible to contagious yawning until they're about 4 years old—and that children with autism are less likely to yawn contagiously than others.

The study, conducted by researchers at the University of Connecticut, appears in the September/October 2010 issue of the journal Child Development.

To determine the extent to which children at various stages of social development are likely to yawn contagiously, the researchers studied 120 typically developing 1- to 6-year-olds. Although babies begin to yawn spontaneously even before they leave the womb, most of the children in this study didn't show signs of contagious yawning until they were 4.

The team also studied about 30 6- to 15-year-olds with autism spectrum disorders (ASD), comparing them to two other groups of typically developing children with the same mental and chronological ages. The children with ASD were less likely to yawn contagiously than their typically developing peers, the researchers found. And children with diagnoses that imply more severe autistic symptoms were much less likely to yawn contagiously than those with milder diagnoses.

"Given that contagious yawning may be a sign of empathy, this study suggests that empathy—and the mimicry that may underlie it—develops slowly over the first few years of life, and that children with ASD may miss subtle cues that tie them emotionally to others," according to the researchers. This study may provide guidance for approaches to working with children with ASD so that they focus more on such cues.

Society for Research in Child Development

EARTH - THE EARLY YEARS

Plate tectonics may not have operated on a younger and hotter Earth, according to new research from the University of Bristol carried out on preserved remnants of ancient continental crust in the Hudson Bay region of Canada.

The processes that formed and shaped the early Earth’s crust are still poorly understood, and the onset of plate tectonics has been estimated as being as early as the Hadean (4.6-3.8 billion years ago) or as late as the Neoproterozoic (1,000-542 million years ago).

The Hudson Bay region represents an excellent opportunity to investigate crustal formation during the Precambrian (that is from the formation of the Earth, almost 4.6 billion years ago, until 542 million years ago). It records tectonic events leading to the formation and stabilisation of the Laurentian continent (most of modern-day North America) and also preserves crust with ages spanning around 2 billion years.

David Thompson, a PhD student in Bristol’s Department of Earth Sciences, and colleagues from the University of Calgary and the Geological Survey of Canada analysed the structure and thickness of the crust using seismic waves to investigate what tectonic processes were operating on the younger Earth.
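
The article does not spell out the seismic method, but crustal structure in studies of this kind is commonly estimated from the delay of P-to-S waves converted at the Moho (a receiver-function style analysis). A minimal sketch of that standard relation, with purely illustrative velocities rather than values from the Hudson Bay work, is:

```python
import math

def crustal_thickness_km(t_ps, vp=6.5, vs=3.7, p=0.06):
    """Crustal thickness from the Moho P-to-S conversion delay t_ps (seconds).

    Standard receiver-function relation:
        H = t_ps / (sqrt(1/Vs^2 - p^2) - sqrt(1/Vp^2 - p^2))
    with Vp, Vs in km/s and ray parameter p in s/km. The default values are
    illustrative, not those used in the Hudson Bay study.
    """
    return t_ps / (math.sqrt(1.0 / vs**2 - p**2) - math.sqrt(1.0 / vp**2 - p**2))

# A Ps delay of ~5 s maps to roughly 40 km of crust for these assumed velocities.
print(f"{crustal_thickness_km(5.0):.0f} km")
```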

They found that the crust between 3.9 and 2.7 billion years old had a relatively simple structure with no clear signs that it was subjected to the kind of plate tectonic processes operating now. The crust that formed about 1.8 billion years ago, however, showed evidence of the collision of two continents, similar to the collision between India and Asia still ongoing today.

The researchers then compared the observed structure of the Hudson Bay crust with continental crust of a similar age found elsewhere in the world. They concluded that modern-style plate tectonics may not have been established until as late as 1.8 billion years ago.

(Photo: NASA)

RESEARCHERS GIVE ROBOTS THE CAPABILITY FOR DECEPTIVE BEHAVIOR

A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it's actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception.

"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.

The results of robot experiments and theoretical and cognitive deception modeling were published online on Sept. 3 in the International Journal of Social Robotics. Because the researchers explored the phenomena of robot deception from a general perspective, the study's results apply to robot-robot and human-robot interactions. This research was funded by the Office of Naval Research.

In the future, robots capable of deception may be valuable for several different areas, including military and search and rescue operations. A search and rescue robot may need to deceive in order to calm a panicking victim or gain his or her cooperation. Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.

"Most social robots will probably rarely use deception, but it's still an important tool in the robot's interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception," said the study's co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.

For this study, the researchers focused on the actions, beliefs and communications of a robot attempting to hide from another robot to develop programs that successfully produced deceptive behavior. Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception. Wagner and Arkin used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception -- there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
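
Wagner and Arkin's published algorithm is not reproduced here, but a toy version of that two-condition check, with hypothetical actions and payoff numbers standing in for the interdependence-theory outcome matrices, might look like this:

```python
def deception_warranted(payoffs):
    """Toy two-condition test: deceive only if (1) the interaction is conflicting,
    i.e. the deceiver and the seeker prefer different outcomes, and (2) inducing
    a false belief in the seeker actually benefits the deceiver.
    The payoff numbers passed in below are hypothetical, not from the study."""
    best_for_deceiver = max(payoffs, key=lambda k: payoffs[k][0])
    best_for_seeker = max(payoffs, key=lambda k: payoffs[k][1])
    conflict = best_for_deceiver != best_for_seeker

    truthful = payoffs[("hide_left", "seek_left")][0]    # seeker believes the true signal
    deceptive = payoffs[("hide_left", "seek_right")][0]  # seeker follows a false trail
    benefit = deceptive > truthful
    return conflict and benefit

# (deceiver_action, seeker_action) -> (deceiver_payoff, seeker_payoff)
payoffs = {
    ("hide_left", "seek_left"):   (0, 1),   # deceiver is caught
    ("hide_left", "seek_right"):  (1, 0),   # deceiver escapes
    ("hide_right", "seek_left"):  (1, 0),
    ("hide_right", "seek_right"): (0, 1),
}
print(deception_warranted(payoffs))   # True: there is conflict, and deception pays
```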

Once a situation was deemed to warrant deception, the robot carried out a deceptive act by providing a false communication to benefit itself. The technique developed by the Georgia Tech researchers based a robot's deceptive action selection on its understanding of the individual robot it was attempting to deceive.

To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.

"The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left," explained Wagner.

The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot’s inability to knock over the correct markers to produce the desired deceptive communication.

"The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment," said Wagner. "The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot."

While there may be advantages to creating robots with the capacity for deception, there are also ethical implications that need to be considered to ensure that these creations are consistent with the overall expectations and well-being of society, according to the researchers.

"We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects," explained Arkin. "We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems."

(Photo: GIT)

Georgia Institute of Technology

NEW CU-BOULDER RESEARCH SHEDS LIGHT ON WHY OUR BRAINS GET TRIPPED UP WHEN WE'RE ANXIOUS

A new University of Colorado at Boulder study sheds light on the brain mechanisms that allow us to make choices and ultimately could be helpful in improving treatments for the millions of people who suffer from the effects of anxiety disorders.

In the study, CU-Boulder psychology Professor Yuko Munakata and her research colleagues found that "neural inhibition," a process that occurs when one nerve cell suppresses activity in another, is a critical aspect in our ability to make choices.

"The breakthrough here is that this helps us clarify the question of what is happening in the brain when we make choices, like when we choose our words," Munakata said. "Understanding more about how we make choices, how the brain is doing this and what the mechanisms are, could allow scientists to develop new treatments for things such as anxiety disorders."

Researchers have long struggled to determine why people with anxiety can be paralyzed when it comes to decision-making involving many potential options. Munakata believes the reason is that people with anxiety have decreased neural inhibition in their brain, which leads to difficulty making choices.

"A lot of the pieces have been there," she said. "What's new in this work is bringing all of this together to say here's how we can fit all of these pieces of information together in a coherent framework explaining why it's especially hard for people with anxiety to make decisions and why it links to neural inhibitors."

A paper on the findings titled "Neural inhibition enables selection during language processing" appeared in the Aug. 30 Proceedings of the National Academy of Sciences. CU-Boulder professors Tim Curran, Marie Banich and Randall O'Reilly, graduate students Hannah Snyder and Erika Nyhus and undergraduate honors thesis student Natalie Hutchison co-authored the paper.

In the study, they tested the idea that neural inhibition in the brain plays a big role in decision-making by creating a computer model of the brain called a neural network simulation.

"We found that if we increased the amount of inhibition in this simulated brain then our system got much better at making hard choices," said Hannah Snyder, a psychology graduate student who worked with Munakata on the study. "If we decreased inhibition in the brain, then the simulation had much more trouble making choices."

Through their model they looked at the brain mechanisms involved when we choose words. They then tested the model's predictions on people by asking them to think of the first verb that comes to mind when they are presented with a noun.
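
The published neural network simulation is not shown in this article, but a minimal competing-accumulator toy (all parameters invented for illustration) captures the qualitative claim: with weak mutual inhibition, two nearly equally supported candidate words keep competing and no clear winner emerges, while stronger inhibition lets one word win quickly.

```python
import numpy as np

def steps_to_select(inhibition, inputs, margin=0.5, leak=0.1, dt=0.1, max_steps=3000):
    """Toy competing-accumulator model, not the published simulation.

    Each unit is driven by its input, decays with `leak`, and suppresses its
    rivals with strength `inhibition`. "Selection" happens when the leading
    unit pulls ahead of the runner-up by `margin`; returns the steps needed,
    or max_steps if the competition never resolves."""
    a = np.zeros(len(inputs))
    for step in range(max_steps):
        rivals = a.sum() - a                      # total activity of the other units
        a += dt * (inputs - leak * a - inhibition * rivals)
        a = np.clip(a, 0.0, None)
        top, runner_up = np.sort(a)[-2:][::-1]
        if top - runner_up >= margin:
            return step
    return max_steps

inputs = np.array([0.52, 0.50])        # a "hard" choice: two nearly equal candidate words
for inhibition in (0.05, 0.3, 0.6):
    steps = steps_to_select(inhibition, inputs)
    outcome = f"resolved after {steps} steps" if steps < 3000 else "never resolved"
    print(f"inhibition={inhibition}: {outcome}")
```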

"We know that making decisions, in this case choosing our words, taps into this left-front region of the brain, called the left ventrolateral prefrontal cortex," Munakata said. "We wanted to figure out what is happening in that part of the brain that lets us make these choices. Our idea here, which we have shown through the word-choosing model, is that there's a fight between neurons in this area of the brain that lets us choose our words."

They then tested the model's predictions that more neural inhibition in the brain makes it easier to make choices by examining the effects of increased and decreased inhibition in people's brains. They increased inhibition by using a drug called midazolam and found that people got much better at making hard choices. It didn't affect other aspects of their thinking, only their ability to make choices. They investigated the effects of decreased inhibition by looking at people with anxiety.

"We found that the worse their anxiety was, the worse they were at making decisions, and the activity in their left ventrolateral prefrontal cortex was less typical," Munakata said.

There are two ways in which the research could be helpful in improving treatments for anxiety, according to Snyder. First, while specific medications that increase neural inhibition are currently used to treat the emotional symptoms of anxiety disorders, the findings suggest that they might also help with the difficulty people with anxiety have in selecting one option when there are too many choices.

"Secondly, a more precise understanding of what aspects of cognition patients are struggling with could be extremely valuable in designing effective approaches to therapy for each patient," she said. "For example, if someone with an anxiety disorder has difficulty selecting among multiple options, he or she might benefit from learning how to structure their environment to avoid choice overload."

The work was done in CU-Boulder's Center for Determinants of Executive Function and Dysfunction, which brings together researchers from different areas of expertise on campus and beyond including experts on drug studies, neuroimaging and anxiety. The center is funded by the National Institute of Mental Health.

(Photo: Marie Banich)

University of Colorado at Boulder

AMINO ACIDS COULD BE PRODUCED WITHIN IMPACTING COMETS, BRINGING LIFE TO EARTH

Life on Earth as we know it really could be from out of this world. New research from Lawrence Livermore National Laboratory scientists shows that comets that crashed into Earth millions of years ago could have produced amino acids – the building blocks of life.

Amino acids are critical to life and serve as the building blocks of proteins, which are linear chains of amino acids.

In the Sept. 12 online edition of the journal Nature Chemistry, LLNL’s Nir Goldman and colleagues reported that simple molecules found within comets (such as water, ammonia, methylene and carbon dioxide) just might have been instigators of life on Earth. His team discovered that the sudden compression and heating of cometary ices crashing into Earth can produce complexes resembling the amino acid glycine.

Origins-of-life research initially focused on the production of amino acids from organic materials already present on the planet. However, further research showed that Earth’s early atmosphere consisted mainly of carbon dioxide, nitrogen and water. Shock-heating experiments and calculations eventually showed that synthesis of the organic molecules necessary for amino acid production would not occur in this type of environment.

“There’s a possibility that the production or delivery of prebiotic molecules came from extraterrestrial sources,” Goldman said. “On early Earth, we know that there was a heavy bombardment of comets and asteroids delivering up to several orders of magnitude greater mass of organics than what likely was already here.”

Comets range in size from 1.6 kilometers up to 56 kilometers. Comets of these sizes passing through the Earth’s atmosphere are heated externally but remain cool internally. Upon impact with the planetary surface, a shock wave is generated due to the sudden compression.

Shock waves can create sudden, intense pressures and temperatures, which could affect chemical reactions within a comet before it interacts with the ambient planetary environment. The previous general consensus was that the delivery or production of amino acids from these impact events was improbable, because the extensive heating from the impact -- thousands of kelvins (1,000 kelvins is roughly 1,340 degrees Fahrenheit) -- would destroy any potential life-building molecules.

However, Goldman and his colleagues studied how an oblique collision, in which an icy extraterrestrial body strikes a planet with a glancing blow, could generate much lower temperatures.

“Under this situation, organic materials could potentially be synthesized within the comet’s interior during shock compression and survive the high pressures and temperatures,” Goldman said. “Once the compressed material expands, stable amino acids could survive interactions with the planet’s atmosphere or ocean. These processes could result in concentrations of prebiotic organic species ’shock-synthesized’ on Earth from materials that originated in the outer regions of space.”

Using molecular dynamics simulations, the LLNL team studied shock compression of a prototypical astrophysical ice mixture (similar to a comet crashing into Earth) to extremely high pressures and temperatures. They found that as the material decompresses, protein-building amino acids are likely to form.
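
Those molecular dynamics simulations are far beyond a blog snippet, but a back-of-the-envelope sketch shows why the impact geometry matters so much: in a glancing blow only the velocity component normal to the surface drives the shock, so the peak pressure, and with it the heating, drops sharply. The Hugoniot parameters below are rough assumed values for water ice, not numbers from the paper.

```python
import math

# Back-of-the-envelope sketch of why a glancing blow is gentler. Illustrative
# numbers only -- not the molecular dynamics performed in the LLNL study.
RHO_ICE = 917.0       # kg/m^3, density of water ice
C0, S = 1.3e3, 1.6    # rough linear-Hugoniot parameters assumed for ice

def peak_shock_pressure(impact_speed, angle_from_surface_deg):
    """Planar-impact estimate: only the velocity component normal to the surface
    drives the shock, and the particle velocity is taken as roughly half of it."""
    v_normal = impact_speed * math.sin(math.radians(angle_from_surface_deg))
    u_p = 0.5 * v_normal                  # particle velocity (m/s)
    u_s = C0 + S * u_p                    # shock velocity from the linear Hugoniot
    return RHO_ICE * u_s * u_p            # peak pressure in Pa

for angle in (90, 45, 15):                # head-on versus increasingly grazing impacts
    p_gpa = peak_shock_pressure(20e3, angle) / 1e9
    print(f"{angle:>2} deg impact at 20 km/s -> ~{p_gpa:.0f} GPa peak shock pressure")
```

Since shock heating grows with shock strength, the grazing geometry keeps the comet's interior far cooler, which is the regime the team's simulations explored.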

(Photo: Liam Krauss/LLNL)

Lawrence Livermore National Laboratory

SOLAR FUNNEL

New antenna made of carbon nanotubes could make photovoltaic cells more efficient by concentrating solar energy.

Solar cells are usually grouped in large arrays, often on rooftops, because each cell can generate only a limited amount of power. However, not every building has enough space for a huge expanse of solar panels.

Using carbon nanotubes (hollow tubes of carbon atoms), MIT chemical engineers have found a way to concentrate solar energy 100 times more than a regular photovoltaic cell. Such nanotubes could form antennas that capture and focus light energy, potentially allowing much smaller and more powerful solar arrays.

“Instead of having your whole roof be a photovoltaic cell, you could have little spots that were tiny photovoltaic cells, with antennas that would drive photons into them,” says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering and leader of the research team.

Strano and his students describe their new carbon nanotube antenna, or “solar funnel,” in the Sept. 12 online edition of the journal Nature Materials. Lead authors of the paper are postdoctoral associate Jae-Hee Han and graduate student Geraldine Paulus.

Their new antennas might also be useful for any other application that requires light to be concentrated, such as night-vision goggles or telescopes. The work was funded by a National Science Foundation Career Award, a Sloan Fellowship, the MIT-Dupont Alliance and the Korea Research Foundation.

Solar panels generate electricity by converting photons (packets of light energy) into an electric current. Strano’s nanotube antenna boosts the number of photons that can be captured and transforms the light into energy that can be funneled into a solar cell.

The antenna consists of a fibrous rope about 10 micrometers (millionths of a meter) long and four micrometers thick, containing about 30 million carbon nanotubes. Strano’s team built, for the first time, a fiber made of two layers of nanotubes with different electrical properties — specifically, different bandgaps.

In any material, electrons can exist at different energy levels. When a photon strikes the surface, it excites an electron to a higher energy level, which is specific to the material. The interaction between the energized electron and the hole it leaves behind is called an exciton, and the difference in energy levels between the hole and the electron is known as the bandgap.

The inner layer of the antenna contains nanotubes with a small bandgap, and nanotubes in the outer layer have a higher bandgap. That’s important because excitons like to flow from high to low energy. In this case, that means the excitons in the outer layer flow to the inner layer, where they can exist in a lower (but still excited) energy state.

Therefore, when light energy strikes the material, all of the excitons flow to the center of the fiber, where they are concentrated. Strano and his team have not yet built a photovoltaic device using the antenna, but they plan to. In such a device, the antenna would concentrate photons before the photovoltaic cell converts them to an electrical current. This could be done by constructing the antenna around a core of semiconducting material.
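
A toy Monte Carlo sketch of the funnel idea (the layer geometry and hop probabilities are invented for illustration, not measured values): excitons hop between nanotubes at random, but hops from the low-bandgap core back out to the high-bandgap shell are energetically uphill and therefore rare, so the population piles up in the core.

```python
import random

random.seed(0)

# Toy exciton funnel: an outer high-bandgap shell and an inner low-bandgap core.
# Hop probabilities are illustrative, not measured values from the paper.
N_EXCITONS = 10_000
STEPS = 50
P_SHELL_TO_CORE = 0.5    # downhill in energy: easy
P_CORE_TO_SHELL = 0.02   # uphill in energy: strongly suppressed

in_core_count = 0
for _ in range(N_EXCITONS):
    in_core = random.random() < 0.3            # most excitons start in the thicker outer shell
    for _ in range(STEPS):
        if in_core:
            in_core = random.random() >= P_CORE_TO_SHELL   # usually stays in the core
        else:
            in_core = random.random() < P_SHELL_TO_CORE    # often funnels inward
    in_core_count += in_core

print(f"excitons ending up in the core: {in_core_count / N_EXCITONS:.0%}")   # ~95% or more
```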

The interface between the semiconductor and the nanotubes would separate the electron from the hole, with electrons being collected at one electrode touching the inner semiconductor, and holes collected at an electrode touching the nanotubes. This system would then generate electric current. The efficiency of such a solar cell would depend on the materials used for the electrode, according to the researchers.

Strano’s team is the first to construct nanotube fibers in which they can control the properties of different layers, an achievement made possible by recent advances in separating nanotubes with different properties. “It shows how far the field has really come over the last decade,” says Michael Arnold, professor of materials science and engineering at the University of Wisconsin at Madison.

Solar cells that incorporate carbon nanotubes could become a good lower-cost alternative to traditional silicon solar cells, says Arnold. “What needs to be shown next is whether the excitons in the inner shell can be harvested and converted to electrical energy,” he says.

While the cost of carbon nanotubes was once prohibitive, it has been coming down in recent years as chemical companies build up their manufacturing capacity. “At some point in the near future, carbon nanotubes will likely be sold for pennies per pound, as polymers are sold,” says Strano. “With this cost, the addition to a solar cell might be negligible compared to the fabrication and raw material cost of the cell itself, just as coatings and polymer components are small parts of the cost of a photovoltaic cell.”

Strano’s team is now working on ways to minimize the energy lost as excitons flow through the fiber, and on ways to generate more than one exciton per photon. The nanotube bundles described in the Nature Materials paper lose about 13 percent of the energy they absorb, but the team is working on new antennas that would lose only 1 percent.

(Photo: Geraldine Paulus)

MIT

JOHNS HOPKINS NEUROSCIENTIST'S GOAL: A PROSTHETIC LIMB WITH FEELING

Back in 1980 when The Empire Strikes Back hit the big screen, it seemed like the most fantastic of science fiction scenarios: Luke Skywalker getting a fully functional bionic arm to replace the one he had lost to arch enemy Darth Vader.

Thirty years later, such a device is more the stuff of fact and less of fiction, as increasingly sophisticated artificial limbs are being developed that allow users a startlingly life-like range of motion and fine motor control.

Johns Hopkins neuroscientist Steven Hsiao, however, isn’t satisfied that a prosthetic limb simply allows its user to move. He wants to provide the user the ability to feel what the artificial limb is touching, such as the texture and shape of a quarter, or the comforting perception of holding hands. Accomplishing these goals requires understanding how the brain processes the multitude of sensations that come in daily through our fingers and hands.

Using a $600,000 grant administered through the federal stimulus act, Hsiao is leading a team that is working to decode those sensations, which could lead to the development of truly “bionic” hands and arms that use sensitive electronics to activate neurons in the touch centers of the cerebral cortex.

“The truth is, it is still a huge mystery how we humans use our hands to move about in the world and interact with our environment,” said Hsiao, of the Zanvyl Krieger Mind/Brain Institute. “How we reach into our pockets and grab our car keys or some change without looking requires that the brain analyze the inputs from our hands and extract information about the size, shape and texture of objects. How the brain accomplishes this amazing feat is what we want to find out and understand.”

Hsiao hypothesizes that our brains do this by transforming the inputs from receptors in our fingers and hands into “neural code” that the brain then matches against a stored, central “databank” of memories of those objects. When a match occurs, the brain is able to perceive and recognize what the hand is feeling, experiencing and doing.
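
That hypothesis is essentially template matching. Here is a minimal sketch of matching an incoming population response against a stored databank, using made-up response vectors and object labels rather than anything recorded in the lab:

```python
import numpy as np

# Toy "databank": each known object is a remembered population-response vector.
# The vectors and labels are invented for illustration.
databank = {
    "quarter": np.array([0.9, 0.1, 0.8, 0.2]),
    "car key": np.array([0.2, 0.9, 0.3, 0.7]),
    "pen":     np.array([0.5, 0.5, 0.1, 0.9]),
}

def recognize(response):
    """Return the stored object whose remembered response is closest (Euclidean distance)."""
    return min(databank, key=lambda name: np.linalg.norm(databank[name] - response))

# A noisy touch of a quarter is still matched to the right template.
noisy_touch = np.array([0.8, 0.2, 0.7, 0.3])
print(recognize(noisy_touch))   # -> quarter
```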

In recent studies, Hsiao’s team found that neurons in the area of the brain that respond to touch are able to “code for” (understand) orientation of bars pressed against the skin, the speed and direction of motion, and curved edges of objects. In their stimulus-funded study, Hsiao’s team will investigate the detailed neural codes for more complex shapes, and will delve into how the perception of motion in the visual system is integrated with the perception of tactile motion.

The team will do this by first investigating how complex shapes are processed in the somatosensory cortex (the part of the brain that responds to touch) and second, by studying the responses of individual neurons in an area that has traditionally been associated with visual motion but appears to also have neurons that respond to tactile motion (motion of things moving across your skin).

“The practical goal of all of this is to find ways to restore normal sensory function to patients whose hands have been damaged, or to amputees with prosthetic or robotic arms and hands,” Hsiao said. “It would be fantastic if we could use electric stimulation to activate the same brain pathways and neural codes that are normally used in the brain. I believe that these neural coding studies will provide a basic understanding of how signals should be fed back into the brain to produce the rich percepts that we normally receive from our hands.”

(Photo: JHU)

Johns Hopkins University

NERVE CELLS IN OPTIC FLOW

Generally speaking, animals and humans maintain their sense of balance in their three-dimensional environment without difficulty. In addition to the vestibular system, their navigation is often aided by the eyes. Every movement causes the environment to move past the eyes in a characteristic way. On the basis of this "optic flow", the nerve cells then calculate the organism's self-motion. Scientists at the Max Planck Institute of Neurobiology have now shown how nerve cells succeed in calculating self-motion while confronted with differing backgrounds. So far, none of the established models of optical processing has been able to cope with this requirement.

In one respect, humans and flies bear a close resemblance to each other: both rely heavily on their eyesight for navigation. Despite the fact that the visual impressions are constantly changing, this navigation is exceptionally proficient. For example, if I walk past a whitewashed wall, tiny asperities pass my eyes in the opposite direction and reassure me that I am indeed moving forwards. If, on the other hand, I pass a billboard pasted with bright posters, a great variety of colours and structural changes flow past me as I move. Although the visual information is very different, my nerve cells are able to confirm in both cases that I am moving forwards at a certain pace. Something that, at first glance, appears to be rather mundane thus turns out to be a remarkable feat of our brain.

In order to understand how nerve cells process such different optical information, neurobiologists study the brains of flies. Using the fly brain as a model has some obvious advantages: flies are experts in optical motion processing, yet their brains contain comparatively few nerve cells. This allows scientists to examine the function of every nerve cell in a network. In laboratory experiments, the flies are presented with moving striped patterns while the reactions of individual nerve cells are measured. The knowledge gained from these experiments resulted in models that serve well to show how and to which stimuli a nerve cell reacts and what information it relays to the next cells in line. However, once the patterns the flies saw became too complex, the models' predictions failed.

"These models considered only the input-output relationship of nerve cells, but ignored anything that happened within the cell", explains Alexander Borst, who examines optic processing in the fly brain with his department at the Max Planck Institute of Neurobiology in Martinsried. That these cell-internal activities cannot be ignored was now shown by his PhD student, Franz Weber. Together with Christian Machens from the Ecole Normale Superieure in Paris, Weber developed a model that not only takes the input-output function into account, but that also makes provisions for the biophysical properties of the cell.

Franz Weber presented the flies with patterns of dots with different dot densities. In the course of the experiments, he established that the nerve cells show essentially the same reaction to high and low dot densities. This is astonishing, given that a pattern with a lower number of dots provides a nerve cell with considerably less visual motion information than one with a high dot density (bringing us back to the whitewashed wall and the billboard). The cells evidently compensate for the differences in incoming information by means of an internal amplifier. With this signal enhancement in mind, the scientists devised a new model. It now provides a reliable description of the nerve cells' behaviour within the network - no matter how complex the surrounding world turns out to be.
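
The new model itself is not given in this article, but the qualitative point can be sketched with a classic correlation-type motion detector followed by divisive normalization, a crude stand-in for the cell-internal amplification the study describes (everything below is illustrative and is not the model developed by Weber and Machens):

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_response(density, n_pixels=200, n_frames=80, speed=1):
    """Correlation-type (Reichardt-like) motion detector with divisive normalization.

    A random dot pattern of the given density drifts rightward by `speed` pixels
    per frame. Each elementary detector multiplies a one-frame-delayed signal from
    one pixel with the current signal of its right-hand neighbour; subtracting the
    mirror-image detector gives direction selectivity. Dividing by the overall
    input -- a crude "internal amplifier" -- removes most of the dependence on dot
    density. Parameters are illustrative only."""
    pattern = (rng.random(n_pixels) < density).astype(float)
    raw, total_input = 0.0, 0.0
    for t in range(1, n_frames):
        frame = np.roll(pattern, speed * t)
        delayed = np.roll(pattern, speed * (t - 1))
        rightward = delayed[:-1] * frame[1:]
        leftward = delayed[1:] * frame[:-1]
        raw += np.sum(rightward - leftward)
        total_input += np.sum(frame)
    return raw, raw / (total_input + 1e-9)

for density in (0.05, 0.25):        # sparse "whitewashed wall" vs. busy "billboard"
    raw, normalized = motion_response(density)
    print(f"dot density {density}: raw output {raw:.0f}, normalized output {normalized:.2f}")
```

In this sketch the raw output grows roughly in proportion to the number of dots, while the normalized output changes far less, which is the kind of density-invariant behaviour the recordings showed.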

(Photo: Max Planck Institute of Neurobiology / Schorner)

Max Planck Institute
