Wednesday, July 15, 2009

NOË TAKES NOVEL PHILOSOPHICAL VIEW OF CONSCIOUSNESS


Most scientists would say the brain houses all of our capacity for consciousness: no functioning brain, no consciousness. René Descartes, the father of modern philosophy, disagreed, and so do the followers of many religions. For them, the seat of consciousness is a mind or soul, and isn’t physical at all.

Berkeley philosopher Alva Noë argues that this debate, between neurons and the immaterial soul, misses a third option. Consciousness, he writes in Out of Our Heads: Why You Are Not Your Brain, and Other Lessons From the Biology of Consciousness, is not immaterial, but it’s also not just something inside our brains. It is something we do — a porous and seamless exchange between us and other people, us and the things we see and hear, us and the technology and tools we use.

Noë, a member of Berkeley’s philosophy department as well as the Institute for Cognitive and Brain Sciences and the Center for New Media, argues that while neuroscience is indispensable for understanding consciousness, it can’t be the whole story. The explanation that brain scientists give for our capacity to think, feel and perceive — that it takes place exclusively inside our skulls — is bad science.

“I like to think of consciousness in terms of how the world shows up for us,” Noë says, seated in his Moses Hall office. “I experience a world that shows up for me as meaningful, full of things that count: the people walking by outside the window, the airplane going overhead. If there is graffiti on the wall in a language I don’t understand, I might appreciate it simply for the way it looks. Meaning shows up for me not only because of images on my retina, but because of context I bring to what I see.”

In Out of Our Heads Noë argues that we have too narrowly defined the study of consciousness. Neuroscience, while an important field of study, does not take into account the ways that our environments, cultures and everyday interactions with one another help form our conscious selves.

“We need a science that is contextual, that will look at the way brains are embodied in animals, which are in turn situated in environments with histories and cultures,” Noë says. “It needs to be much more nuanced and indeed more humanistic. It needs to be more like history, or evolutionary biology, than it is like molecular biology.”

Looking at consciousness this way stresses the link between our brains and our animal selves, Noë says, and moves beyond the more limited neuroscientific understanding of consciousness.

Noë’s metaphor for human consciousness is a dance: it happens inside us but depends upon our attunement to the world. The world outside of us plays a role in how we perceive, just as the musicians and the audience play a role in a dancer’s performance.

He takes this metaphor very seriously. Noë is philosopher-in-residence with the Frankfurt, Germany-based Forsythe Company. Over the next year Noë and the dance company’s founder, William Forsythe, will meet to develop a series of lecture-performances that incorporate conversation about philosophy, neuroscience, dance and art.

Noë says the collaboration has nourished his inquiry into how the body is a part of how we think and perceive.

“Performers have to have quite sophisticated working theories about how audiences think and perceive in order to engage audiences,” he says. “I believe that to a surprising degree the aims of art and philosophy are the same, and that therefore it’s quite a natural thing to try to do philosophy with a dance company.”

Noë quite purposefully wrote Out of Our Heads for a mainstream audience, hoping to reach a variety of readers. He has more than met his goal. Since the book came out earlier this year, Noë has spoken at venues as varied as Kepler’s Books in Menlo Park, Powell’s Books in Portland, Google, and Boston University’s Philosophy of Neuroscience group, and his book has been reviewed by outlets as varied as the Washington Post, San Francisco Chronicle, and Salon.com.

“It’s been fantastic,” he says. “I’ve been very moved by the depth of interest that people have in these questions. I feel like there’s a certain cultural urgency about them.”

(Photo: UC Berkeley)

University of California, Berkeley

READING THE BRAIN WITHOUT POKING IT


Experimental devices that read brain signals have helped paralyzed people use computers and may let amputees control bionic limbs. But existing devices use tiny electrodes that poke into the brain. Now, a University of Utah study shows that brain signals controlling arm movements can be detected accurately using new microelectrodes that sit on the brain but don't penetrate it.

"The unique thing about this technology is that it provides lots of information out of the brain without having to put the electrodes into the brain," says Bradley Greger, an assistant professor of bioengineering and coauthor of the study. "That lets neurosurgeons put this device under the skull but over brain areas where it would be risky to place penetrating electrodes: areas that control speech, memory and other cognitive functions."

For example, the new array of microelectrodes someday might be placed over the brain's speech center in patients who cannot communicate because they are paralyzed by spinal injury, stroke, Lou Gehrig's disease or other disorders, he adds. The electrodes would send speech signals to a computer that would convert the thoughts to audible words.

For people who have lost a limb or are paralyzed, "this device should allow a high level of control over a prosthetic limb or computer interface," Greger says. "It will enable amputees or people with severe paralysis to interact with their environment using a prosthetic arm or a computer interface that decodes signals from the brain."

The findings represent "a modest step" toward use of the new microelectrodes in systems that convert the thoughts of amputees and paralyzed people into signals that control lifelike prosthetic limbs, computers or other devices to assist people with disabilities, says University of Utah neurosurgeon Paul A. House, the study's lead author.

"The most optimistic case would be a few years before you would have a dedicated system," he says, noting more work is needed to refine computer software that interprets brain signals so they can be converted into actions, like moving an arm.

Such technology already has been developed in experimental form using small arrays of penetrating electrodes that stick into the brain. The University of Utah pioneered development of the 100-electrode Utah Electrode Array used to read signals from the brain cells of paralyzed people. In experiments in Massachusetts, researchers used the small, brain-penetrating electrode array to help paralyzed people move a computer cursor, operate a robotic arm and communicate.

Meanwhile, researchers at the University of Utah and elsewhere are working on a $55 million Pentagon project to develop a lifelike bionic arm that war veterans and other amputees would control with their thoughts, just like a real arm. Scientists are debating whether the prosthetic devices should be controlled from nerve signals collected by electrodes in or on the brain, or by electrodes planted in the residual limb.

The new study was funded partly by the Defense Advanced Research Projects Agency's bionic arm project, and by the National Science Foundation and Blackrock Microsystems, which provided the system to record brain waves.

House and Greger conducted the research with Spencer Kellis, a doctoral student in electrical and computer engineering; Kyle Thomson, a doctoral student in bioengineering; and Richard Brown, professor of electrical and computer engineering and dean of the university's College of Engineering.

Not only are the existing, penetrating electrode arrays undesirable for use over critical brain areas that control speech and memory, but the electrodes likely wear out faster if they are penetrating brain tissue rather than sitting atop it, Greger and House say. Nonpenetrating electrodes may allow a longer life for devices that will help disabled people use their own thoughts to control computers, robotic limbs or other machines.

"If you're going to have your skull opened up, would you like something put in that is going to last three years or 10 years?" Greger asks.

"No one has proven that this technology will last longer," House says. "But we are very optimistic that by being less invasive, it certainly should last longer and provide a more durable interface with the brain."

The new kind of array is called a microECoG - because it involves tiny or "micro" versions of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago.

For patients with severe epileptic seizures that are not controlled by medication, surgeons remove part of the skull or cranium and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The large electrodes - each several millimeters in diameter - do not penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.

ECoG and microECoG represent an intermediate step between electrodes that poke into the brain and EEG (electroencephalography), in which electrodes are placed on the scalp. Because of distortion as brain signals pass through the skull and as patients move, EEG isn't considered adequate for helping disabled people control devices.

The regular-size ECoG electrodes are too large to detect many of the discrete nerve impulses controlling the arms or other body movements. So the researchers designed and tested microECoGs in two severe epilepsy patients who already were undergoing craniotomies.

The epilepsy patients were having conventional ECoG electrodes placed on their brains anyway, so they allowed House to place the microECoG electrode arrays at the same time because "they were brave enough and kind enough to help us develop the technology for people who are paralyzed or have amputations," Greger says.

The researchers tested how well the microelectrodes could detect nerve signals from the brain that control arm movements. The two epilepsy patients sat up in their hospital beds and used one arm to move a wireless computer "mouse" over a high-quality electronic draftsman's tablet in front of them. The patients were told to reach their arm to one of two targets: one was forward to the left and the other was forward to the right.

The patients' arm movements were recorded on the tablet and fed into a computer, which also analyzed the signals coming from the microelectrodes placed on the area of each patient's brain controlling arm and hand movement.

The study showed that the microECoG electrodes could be used to distinguish brain signals ordering the arm to reach to the right or left, based on differences such as the power or amplitude of the brain waves.
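
As a rough illustration of that kind of decoding, the sketch below classifies left-versus-right reaches from per-trial spectral power on a handful of channels. The 70-150 Hz band, the array shapes and the linear classifier are illustrative assumptions, not the study's actual analysis pipeline:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def band_power(trials, fs, lo=70.0, hi=150.0):
        """Mean spectral power in a frequency band, per trial and channel.

        trials -- array of shape (n_trials, n_channels, n_samples)
        fs     -- sampling rate in Hz
        """
        spectrum = np.fft.rfft(trials, axis=-1)
        freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
        band = (freqs >= lo) & (freqs <= hi)
        return (np.abs(spectrum[..., band]) ** 2).mean(axis=-1)

    # Synthetic stand-in for recorded epochs: 40 reaches, 16 channels, 1 s at 1 kHz.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 16, 1000))
    labels = rng.integers(0, 2, size=40)        # 0 = left target, 1 = right target

    features = band_power(trials, fs=1000.0)    # one power value per channel
    classifier = LinearDiscriminantAnalysis().fit(features, labels)
    print(classifier.score(features, labels))   # near chance on random data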

The microelectrodes were formed in grid-like arrays embedded in rubbery clear silicone. The arrays were placed over parts of the brain controlling one arm and hand.

The first patient received two identical arrays, each with 16 microelectrodes arranged in a four-by-four square. Individual electrodes were spaced 1 millimeter apart (about one-25th of an inch). Patient 1 had the ECoG and microECoG implants for a few weeks. The findings indicated the electrodes were so close that neighboring microelectrodes picked up the same signals.

So, months later, the second patient received one array containing about 30 electrodes, each 2 millimeters apart. This patient wore the electrode for several days.

"We were trying to understand how to get the most information out of the brain," says Greger. The study indicates optimal spacing is 2 to 3 millimeters between electrodes, he adds.

Once the researchers develop more refined software to decode in real time the brain signals detected by microECoG, they will test it by asking severe epilepsy patients to control a "virtual reality arm" in a computer using their thoughts.

(Photo: Neurosurgical Focus and University of Utah Department of Neurosurgery)

University of Utah

SCIENTISTS CREATE FIRST ELECTRONIC QUANTUM PROCESSOR


A team led by Yale University researchers has created the first rudimentary solid-state quantum processor, taking another step toward the ultimate dream of building a quantum computer.

They also used the two-qubit superconducting chip to successfully run elementary algorithms, such as a simple search, demonstrating quantum information processing with a solid-state device for the first time. Their findings appear in Nature’s advance online publication.

“Our processor can perform only a few very simple quantum tasks, which have been demonstrated before with single nuclei, atoms and photons,” said Robert Schoelkopf, the William A. Norton Professor of Applied Physics & Physics at Yale. “But this is the first time they’ve been possible in an all-electronic device that looks and feels much more like a regular microprocessor.”

Working with a group of theoretical physicists led by Steven Girvin, the Eugene Higgins Professor of Physics & Applied Physics, the team manufactured two artificial atoms, or qubits (“quantum bits”). While each qubit is actually made up of a billion aluminum atoms, it acts like a single atom that can occupy two different energy states. These states are akin to the “1” and “0” or “on” and “off” states of regular bits employed by conventional computers. Because of the counterintuitive laws of quantum mechanics, however, scientists can effectively place qubits in a “superposition” of multiple states at the same time, allowing for greater information storage and processing power.
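
A minimal numerical sketch of that idea (just the state-vector arithmetic, not a model of the Yale hardware): a qubit can be written as a two-component complex vector, and a Hadamard gate places it in an equal superposition of its two states.

    import numpy as np

    zero = np.array([1, 0], dtype=complex)   # the "0" / "off" basis state
    one = np.array([0, 1], dtype=complex)    # the "1" / "on" basis state

    # A Hadamard gate puts a single qubit into an equal superposition.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    superposed = H @ zero                    # amplitude 1/sqrt(2) on each basis state

    # Two superposed qubits jointly occupy all four basis states at once.
    pair = np.kron(superposed, superposed)
    print(np.abs(pair) ** 2)                 # [0.25 0.25 0.25 0.25]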

For example, imagine having four phone numbers, including one for a friend, but not knowing which number belongs to that friend. You would typically have to try two or three numbers before you dialed the right one. A quantum processor, on the other hand, can find the right number in only one try.

“Instead of having to place a phone call to one number, then another number, you use quantum mechanics to speed up the process,” Schoelkopf said. “It’s like being able to place one phone call that simultaneously tests all four numbers, but only goes through to the right one.”
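
The search in that analogy is the kind performed by Grover's algorithm. Below is a state-vector sketch of one Grover iteration over four entries, with the "friend" placed at an arbitrarily chosen index for illustration:

    import numpy as np

    N = 4         # four phone numbers, indices 0 through 3
    marked = 2    # the friend's number (an arbitrary illustrative choice)

    # Start in an equal superposition over all four entries (two qubits).
    state = np.ones(N) / np.sqrt(N)

    # Oracle: flip the sign of the amplitude on the marked entry.
    oracle = np.eye(N)
    oracle[marked, marked] = -1

    # Diffusion operator: inversion of each amplitude about the mean.
    diffusion = 2 * np.full((N, N), 1.0 / N) - np.eye(N)

    # For N = 4, a single Grover iteration finds the marked entry with certainty.
    state = diffusion @ (oracle @ state)
    print(np.abs(state) ** 2)    # probability 1.0 on the marked index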

These sorts of computations, though simple, have not been possible using solid-state qubits until now in part because scientists could not get the qubits to last long enough. While the first qubits of a decade ago were able to maintain specific quantum states for about a nanosecond, Schoelkopf and his team are now able to maintain theirs for a microsecond—a thousand times longer, which is enough to run the simple algorithms. To perform their operations, the qubits communicate with one another using a “quantum bus”—photons that transmit information through wires connecting the qubits—previously developed by the Yale group.

The key that made the two-qubit processor possible was getting the qubits to switch “on” and “off” abruptly, so that they exchanged information quickly and only when the researchers wanted them to, said Leonardo DiCarlo, a postdoctoral associate in applied physics at Yale’s School of Engineering & Applied Science and lead author of the paper.

Next, the team will work to increase the amount of time the qubits maintain their quantum states so they can run more complex algorithms. They will also work to connect more qubits to the quantum bus. The processing power increases exponentially with each qubit added, Schoelkopf said, so the potential for more advanced quantum computing is enormous. But he cautions it will still be some time before quantum computers are being used to solve complex problems.

“We’re still far away from building a practical quantum computer, but this is a major step forward.”

(Photo: Blake Johnson/Yale University)

Yale University

FIRST EVER WORLDWIDE CENSUS OF CARIBOU AND REINDEER REVEALS A DRAMATIC DECLINE


Caribou and reindeer numbers worldwide have plunged almost 60 per cent in the last three decades.

The dramatic revelation, published recently in the journal Global Change Biology, came out of the first-ever comprehensive census carried out by biologists at the University of Alberta.

Study co-author and PhD student Liv Vors says global warming and industrial development are responsible for driving this dramatic decline in caribou and reindeer numbers around the world.

"With Peary caribou on Canada's high arctic islands, for example, there were probably at least 50,000 of them 60 years ago; now there are fewer than 1,000," said Vors.

An earlier spring green-up seems to be wreaking havoc with migrating caribou and reindeer. This highly nutritious flush of growth is especially important for pregnant females, who need it to produce high-quality milk for their newborns.

"Now that spring arrives earlier in the Arctic than it used to, the plants may be past their prime by the time the caribou reach the calving grounds," said Vors. "They don't appear to have adjusted their timing of migration."

Vors says more freezing rain during the winter months forms a layer of ice on the ground, which restricts access to lichen and other forage and forces migrating animals to travel farther to find food. During the summer, an increased insect presence interrupts feeding patterns.

"Caribou and reindeer that are harassed by biting and parasitic insects spend less time feeding and more time running around to shake off these pests," she said. "Less feeding equals less body fat, which is crucial to surviving winter."

Hunting likely played a role in the decline of non-migratory woodland caribou in the early part of the 20th century, when new roads gave greater access into the boreal forest and small groups of woodland caribou were frequently shot on sight. Vors says, though, that it is no longer legal for non-Aboriginal hunters to hunt woodland caribou. For Aboriginal populations who depend on caribou and reindeer for sustenance, the decline of caribou is leading to both economic and spiritual hardships.

"If people cannot hunt caribou, they must purchase meat, which is prohibitively expensive in the Arctic," said Vors. "Caribou and reindeer also play significant roles in northern spirituality and mythology. Some values that caribou and reindeer sustain include education in traditional ways of life, kinship and bonding through hunting caribou and reindeer and the bequest of these herds to future generations."

(Photo: U. Alberta)

University of Alberta

THINKING OF YOU

Human beings constantly make inferences about other people's state of mind, usually without even realizing they are doing it. Cognitive scientists call this ability "theory of mind," and until recently, not much has been known about the brain mechanisms underlying it.

A new paper by MIT neuroscientists suggests that the process does not involve actually imagining yourself in the other person's position, as some scientists have theorized. Instead, humans carry an abstract model of how other people's minds work, which they can apply to others' situations to predict how they feel, even if they have never had the same experience.

The study also offers evidence that theory of mind is seated in specific brain regions, even in the congenitally blind - people whose brains have never received any visual input, a major source of information about other people's state of mind.

The work appeared this week in the online edition of the Proceedings of the National Academy of Sciences.

Humans use theory of mind every time they evaluate someone else's mental state and determine what they know, what they want, and why they are happy or sad, angry or scared.

"It's something we do all the time in our everyday interactions," says Marina Bedny, postdoctoral associate in MIT's Department of Brain and Cognitive Sciences and lead author of the paper.

Though theory of mind is an old concept that philosophers, including Descartes, have long studied, very little is known about how it works. Two theories predominate, according to Rebecca Saxe, MIT assistant professor of brain and cognitive sciences and senior author of the paper.

The first theory, known as simulation, suggests that when people try to figure out others' mental reactions to an event, they imagine themselves in the same situation. In other words, "we're trying to achieve a matching of the experience we've had to the experience the other person is having," says Saxe.

The second theory proposes that the human brain uses an abstract model of how minds work, analogous to the model we have of how the physical world works. This model would allow people to understand others' minds without having the same experiences, just as we know that an egg dropped from a 10-story building will crack when it hits the ground, even if we have never tried it.

Saxe and Bedny decided to approach the problem by studying congenitally blind people, who have never had visual input. If the simulation theory were correct, it would be expected that blind people could not reason about the visual experiences of others the same way that sighted people do, because they cannot mentally re-create the experience of seeing something.

For example, though a blind person could understand the experience of seeing a love letter from a boyfriend and feeling happy - one of the examples the researchers used in their study - she would have no memories of having that exact experience herself.

However, the researchers found that blind people performed just as well in predicting the feelings of other people as sighted people did, and used the same brain regions to do it, suggesting that simulation is not necessary and the brain is using an abstract model of others' mental states.

The new paper also addresses a related question: How much does the location of higher-order brain functions such as theory of mind depend on genetic preprogramming and how much is determined by sensory experience?

Several studies have shown that under certain circumstances, the brain is capable of re-organizing itself in response to sensory input, or lack thereof. For example, in congenitally blind people, the cortex that normally processes basic visual information can be taken over for language processing.

Because sighted people often gain insight into others' emotions through vision (by seeing facial expressions, for example), some theories suggested that blind people would use different brain regions when performing theory of mind tasks.

However, the team's fMRI (functional magnetic resonance imaging) brain scans revealed no differences between brain regions activated in blind and sighted people as they predicted others' mental states. This offers evidence that the organization of higher-level cognitive functions, such as theory of mind, is not determined by sensory experience, according to Saxe. It's an open question whether brain organization in these regions is genetically preprogrammed, or depends on other aspects of experience, like language.

Massachusetts Institute of Technology

EXTENDING THE SHELF LIFE OF ANTIBODY DRUGS


A new computer model developed at MIT can help solve a problem that has plagued drug companies trying to develop promising new treatments made of antibodies: Such drugs have a relatively short shelf life because they tend to clump together, rendering them ineffective.

Antibodies are the most rapidly growing class of human drugs, with the potential to treat cancer, arthritis and other chronic inflammatory and infectious diseases. About 200 such drugs are now in clinical trials, and a few are already on the market.

Patients can administer these drugs to themselves, but this requires high doses - and the drugs must therefore be stored at high concentrations. However, under these conditions the drugs tend to clump, or aggregate. Even if they are stored at lower concentrations and administered by a doctor intravenously, they often have stability issues. Addressing such issues typically takes place later in the drug development process, and the cost - both in time and money - is often high.

Currently there is no straightforward way to address these storage issues early in the development process.

"Drugs are usually developed with the criteria of how effective they'll be, and how well they'll bind to whatever target they're supposed to bind," says Bernhardt Trout, professor of chemical engineering and leader of the MIT team. "The problem is there are all of these issues down the line that were never taken into account."

Trout and his colleagues, including Bernhard Helk of Novartis, have developed a computer model that can help designers identify which parts of an antibody are most likely to attract other molecules, allowing them to alter the antibodies to prevent such clumping. The model, which the researchers aim to incorporate in the drug discovery process, is described in a paper appearing in the online edition of the Proceedings of the National Academy of Sciences the week of June 29.

Most of the aggregation seen in antibodies is due to interactions between exposed hydrophobic (water-fearing) regions of the proteins.

Trout's new model, known as SAP (spatial aggregation propensity), offers a dynamic, three-dimensional simulation of antibody molecules. Unlike static representations such as those provided by X-ray crystallography, the new model can reveal hydrophobic regions and also indicates how much those regions are exposed when the molecule is in solution. The other important aspect of the model is that it selects out regions responsible for aggregation, as opposed to just single sites.
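
In spirit, the calculation scores each site by summing exposure-weighted hydrophobicity over a spatial patch. The sketch below is a heavily simplified, residue-level version with assumed inputs; the published method works atom by atom and averages solvent exposure over molecular-dynamics snapshots rather than a single structure:

    import numpy as np

    # Kyte-Doolittle hydrophobicity scale, a stand-in for the scale used
    # in the actual SAP method.
    KD = {"ALA": 1.8, "ARG": -4.5, "ASN": -3.5, "ASP": -3.5, "CYS": 2.5,
          "GLN": -3.5, "GLU": -3.5, "GLY": -0.4, "HIS": -3.2, "ILE": 4.5,
          "LEU": 3.8, "LYS": -3.9, "MET": 1.9, "PHE": 2.8, "PRO": -1.6,
          "SER": -0.8, "THR": -0.7, "TRP": -0.9, "TYR": -1.3, "VAL": 4.2}

    def sap_scores(residues, coords, exposure, radius=5.0):
        """Simplified spatial aggregation propensity, one score per residue.

        residues -- three-letter residue names, length n
        coords   -- (n, 3) residue positions, e.g. C-alpha coordinates in angstroms
        exposure -- (n,) fractional solvent exposure, each value in [0, 1]
        """
        coords = np.asarray(coords, dtype=float)
        hydro = np.array([KD[r] for r in residues])
        weights = np.asarray(exposure, dtype=float) * hydro
        scores = np.empty(len(residues))
        for i in range(len(residues)):
            # Sum exposure-weighted hydrophobicity over the surrounding patch.
            patch = np.linalg.norm(coords - coords[i], axis=1) <= radius
            scores[i] = weights[patch].sum()
        return scores

Residues sitting in the highest-scoring (most hydrophobic, most exposed) patches are the natural candidates for the stabilizing mutations described next.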

Once the hydrophobic regions are known, researchers can mutate the amino acids in those regions to decrease hydrophobicity and make the molecule more stable. Using the model, the team produced mutated antibodies with greatly enhanced stability (up to 50 percent more than the original antibodies), and the mutations had no adverse effect on their function.

(Photo: Donna Coveney)

Massachusetts Institute of Technology
