Tuesday, November 16, 2010

GIRL POWER: NO MALE? NO PROBLEM FOR FEMALE BOA CONSTRICTOR


In a finding that upends decades of scientific theory on reptile reproduction, researchers at North Carolina State University have discovered that female boa constrictors can squeeze out babies without mating.

More strikingly, the finding shows that the babies produced from this asexual reproduction have attributes previously believed to be impossible.

Large litters of all-female babies produced by the “super mom” boa constrictor show absolutely no male influence – no genetic fingerprint that a male was involved in the reproductive process. All the female babies also retained their mother’s rare recessive color mutation.

This is the first time asexual reproduction, known in the scientific world as parthenogenesis, has been attributed to boa constrictors, says Dr. Warren Booth, an NC State postdoctoral researcher in entomology and the lead author of a paper describing the study. He adds that the results may force scientists to re-examine reptile reproduction, especially among more primitive snake species like boa constrictors.

The study is published online in Biology Letters, a Royal Society journal.

Snake sex chromosomes are a bit different from those in mammals – male snakes’ cells have two Z chromosomes, while female snakes’ cells have a Z and a W chromosome. Yet in the study, all the female babies produced by asexual reproduction had WW chromosomes, a phenomenon Booth says had not been seen before and was believed to be impossible. Only through complex manipulation in lab settings could such WW females be produced – and even then only in fish and amphibians, Booth says.
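For readers unfamiliar with ZW sex determination, the short Python sketch below (purely illustrative; the exact cytological mechanism in these boas is not described here) contrasts the sex-chromosome combinations possible from an ordinary mating with those possible when a ZW mother's egg genome is duplicated without any paternal input.

```python
import itertools

# Snake sex chromosomes: males carry ZZ, females carry ZW.
MOTHER = ("Z", "W")
FATHER = ("Z", "Z")

def sexual_offspring(mother, father):
    """Genotypes possible when each parent contributes one sex chromosome."""
    return {"".join(sorted(pair)) for pair in itertools.product(mother, father)}

def parthenogenetic_offspring(mother):
    """Genotypes if a single maternal chromosome is duplicated, with no paternal input.
    (A simplified stand-in for mechanisms such as terminal fusion automixis.)"""
    return {c + c for c in mother}

print(sexual_offspring(MOTHER, FATHER))     # {'WZ', 'ZZ'}: ordinary male and female offspring
print(parthenogenetic_offspring(MOTHER))    # {'WW', 'ZZ'}: only a fatherless process yields WW
```

Every baby in the boa litters carried WW, a combination no ZZ father can contribute to, which is why the genotype alone rules out a male role.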

Adding to the oddity is the fact that within two years, the same boa mother produced not one, but two different snake broods of all-female, WW-chromosome babies that had the mother’s rare color mutation. One brood contained 12 babies and the second contained 10 babies. And it wasn’t because she lacked options: Male snakes were present and courted the female before she gave birth to the rare babies. And the versatile super-mom had previously had babies the “old-fashioned way” by mating with a male well before her two asexual reproduction experiences.

Booth doubts that the rare births were caused by environmental changes. He notes that while environmental stresses have been associated with asexual reproduction in some fish and other animals, no changes occurred in the mother boa’s environment or routine.

It’s possible that this one snake is some sort of genetic freak of nature, but Booth says that asexual reproduction in snakes could be more common than people think.

“Reproducing both ways could be an evolutionary ‘get-out-of-jail-free card’ for snakes,” Booth says. “If suitable males are absent, why waste those expensive eggs when you have the potential to put out some half-clones of yourself? Then, when a suitable mate is available, revert back to sexual reproduction.”

A reptile keeper and snake breeder, Booth now owns one of the young females from the study. When the all-female snake babies reach sexual maturity in a few years, Booth will be interested to see if they mate with a male, produce babies without a mate, or – like their mother – do both. In any case, these WW-chromosomed females will continue their version of “girl power,” as any baby they produce will also be female.

(Photo: NCSU)

North Carolina State University

LANGUAGE APPEARS TO SHAPE OUR IMPLICIT PREFERENCES

The language we speak may influence not only our thoughts, but our implicit preferences as well. That's the conclusion of a study by psychologists at Harvard University, who found that bilingual individuals' opinions of different ethnic groups were affected by the language in which they took a test examining their biases and predilections.

The paper appears in the Journal of Experimental Social Psychology.

"Charlemagne is reputed to have said that to speak another language is to possess another soul," says co-author Oludamini Ogunnaike, a graduate student at Harvard. "This study suggests that language is much more than a medium for expressing thoughts and feelings. Our work hints that language creates and shapes our thoughts and feelings as well."

Implicit attitudes, positive or negative associations people may be unaware they possess, have been shown to predict behavior towards members of social groups. Recent research has shown that these attitudes are quite malleable, susceptible to factors such as the weather, popular culture and, now, the language people speak.

"Can we shift something as fundamental as what we like and dislike by changing the language in which our preferences are elicited?" asks co-author Mahzarin R. Banaji, the Richard Clarke Cabot Professor of Social Ethics at Harvard. "If the answer is yes, that gives more support to the idea that language is an important shaper of attitudes."

Ogunnaike, Banaji, and Yarrow Dunham, now at the University of California, Merced, used the well-known Implicit Association Test (IAT), where participants rapidly categorize words that flash on a computer screen or are played through headphones. The test gives participants only a fraction of a second to categorize words, not enough to think about their answers.

"The IAT bypasses a large part of conscious cognition and taps into something we're not aware of and can't easily control," Banaji says.

The researchers administered the IAT in two different settings: once in Morocco, with bilinguals in Arabic and French, and again in the U.S. with Latinos who speak both English and Spanish.

In Morocco, participants who took the IAT in Arabic showed greater preference for other Moroccans. When they took the test in French, that difference disappeared. Similarly, in the U.S., participants who took the test in Spanish showed a greater preference for other Hispanics. But again, in English, that preference disappeared.

"It was quite shocking to see that a person could take the same test, within a brief period of time, and show such different results," Ogunnaike says. "It's like asking your friend if he likes ice cream in English, and then turning around and asking him again in French and getting a different answer."

In the Moroccan test, participants see "Moroccan" names (such as Hassan or Fatimah) or "French" names (such as Jean or Marie) flash on a monitor, along with "good" words (such as happy or nice) or "bad" words (such as hate or mean). They might press one key when they see a Moroccan name or a good word, and another when they see a French name or a bad word; the key assignments are then switched so that "Moroccan" and "bad" share one key and "French" and "good" share the other.
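As a rough illustration of how such a test is scored, the sketch below computes a simplified version of the IAT's D score from invented reaction times; the published scoring algorithm includes additional steps (error penalties, trimming) that are omitted here.

```python
from statistics import mean, pstdev

# Hypothetical reaction times (ms) for one participant.
# "Congruent" block: Moroccan names share a key with good words.
# "Incongruent" block: Moroccan names share a key with bad words.
congruent   = [612, 580, 655, 598, 630, 574, 601, 643]
incongruent = [702, 688, 731, 676, 719, 694, 708, 685]

def iat_d_score(congruent, incongruent):
    """Simplified IAT D score: mean latency difference divided by the pooled
    standard deviation of all trials. Positive values indicate faster responses
    when the in-group shares a key with 'good'."""
    pooled_sd = pstdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

print(round(iat_d_score(congruent, incongruent), 2))
```

Under the study's finding, the same bilingual participant would be expected to show a larger score of this kind when tested in Arabic or Spanish than in French or English.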

Linguist Benjamin Lee Whorf first posited in the 1930s that language is so powerful that it can determine thought. Mainstream psychology has taken the more skeptical view that while language may affect thought processes, it doesn't influence thought itself. This new study suggests that Whorf's idea, when not caricatured, may generate interesting hypotheses that researchers can continue to test.

"These results challenge our views of attitudes as stable," Banaji says. "There still remain big questions about just how fixed or flexible they are, and language may provide a window through which we will learn about their nature."

Harvard University

TRANSPARENT CONDUCTIVE MATERIAL COULD LEAD TO POWER-GENERATING WINDOWS

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and Los Alamos National Laboratory have fabricated transparent thin films capable of absorbing light and generating electric charge over a relatively large area. The material, described in the journal Chemistry of Materials, could be used to develop transparent solar panels or even windows that absorb solar energy to generate electricity.

The material consists of a semiconducting polymer doped with carbon-rich fullerenes. Under carefully controlled conditions, the material self-assembles to form a reproducible pattern of micron-size hexagon-shaped cells over a relatively large area (up to several millimeters).

“Though such honeycomb-patterned thin films have previously been made using conventional polymers like polystyrene, this is the first report of such a material that blends semiconductors and fullerenes to absorb light and efficiently generate charge and charge separation,” said lead scientist Mircea Cotlet, a physical chemist at Brookhaven’s Center for Functional Nanomaterials (CFN).

Furthermore, the material remains largely transparent because the polymer chains pack densely only at the edges of the hexagons, while remaining loosely packed and spread very thin across the centers. “The densely packed edges strongly absorb light and may also facilitate conducting electricity,” Cotlet explained, “while the centers do not absorb much light and are relatively transparent.”

“Combining these traits and achieving large-scale patterning could enable a wide range of practical applications, such as energy-generating solar windows, transparent solar panels, and new kinds of optical displays,” said co-author Zhihua Xu, a materials scientist at the CFN.

“Imagine a house with windows made of this kind of material, which, combined with a solar roof, would cut its electricity costs significantly. This is pretty exciting,” Cotlet said.

The scientists fabricated the honeycomb thin films by creating a flow of micrometer-size water droplets across a thin layer of the polymer/fullerene blend solution. These water droplets self-assembled into large arrays within the polymer solution. As the solvent completely evaporated, the polymer formed a hexagonal honeycomb pattern over a large area.

“This is a cost-effective method, with potential to be scaled up from the laboratory to industrial-scale production,” Xu said.

The scientists verified the uniformity of the honeycomb structure with various scanning probe and electron microscopy techniques, and tested the optical properties and charge generation at various parts of the honeycomb structure (edges, centers, and nodes where individual cells connect) using time-resolved confocal fluorescence microscopy.

The scientists also found that the rate of solvent evaporation determines the degree of polymer packing, which in turn governs the rate of charge transport through the material.

“The slower the solvent evaporates, the more tightly packed the polymer, and the better the charge transport,” Cotlet said.

“Our work provides a deeper understanding of the optical properties of the honeycomb structure. The next step will be to use these honeycomb thin films to fabricate transparent and flexible organic solar cells and other devices,” he said.

Brookhaven National Laboratory

WATER FLOWING THROUGH ICE SHEETS ACCELERATES WARMING, COULD SPEED UP ICE FLOW


Melt water flowing through ice sheets via crevasses, fractures and large drains called moulins can carry warmth into ice sheet interiors, greatly accelerating the thermal response of an ice sheet to climate change, according to a new study involving the University of Colorado at Boulder.

The new study showed that ice sheets like the Greenland Ice Sheet can respond to such warming on the order of decades rather than the centuries projected by conventional thermal models. Ice flows more readily as it warms, so a warming climate can speed up the flow of ice sheets much faster than previously thought, the study authors said.

"We are finding that once such water flow is initiated through a new section of ice sheet, it can warm rather significantly and quickly, sometimes in just 10 years, " said lead author Thomas Phillips, a research scientist with Cooperative Institute for Research in Environmental Sciences. CIRES is a joint institute between CU-Boulder and the National Oceanic and Atmospheric Administration.

Phillips, along with CU-Boulder civil, environmental and architectural engineering Professor Harihar Rajaram and CIRES Director Konrad Steffen described their results in a paper published online in Geophysical Research Letters.

Conventional thermal models of ice sheets do not factor in water within the ice sheet as a warming agent; instead, they primarily consider heating of the ice sheet by warmer air at its surface. Without water, ice warms slowly in response to the increased surface temperatures from climate change, a process that often takes centuries to millennia.

But the Greenland ice sheet is not one solid, smooth mass of ice. As the ice flows towards the coast, grating on bedrock, crevasses and new fractures form in the upper 100 feet of the ice sheet. Melt water flowing through these openings can create "ice caves" and networks of "pipes" that carry water through the ice and spread warmth, the authors concluded.

To quantify the influence of melt water, the scientists modeled what would happen to the ice sheet temperature if water flowed through it for eight weeks every summer -- about the length of the active melt season. The result was a significantly faster-than-expected increase in ice sheet warming, which could take place on the order of years to decades depending on the spacing of crevasses and other "pipes" that bring warmer water into the ice sheet in summer.
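To make the idea concrete, here is a minimal one-dimensional sketch, not the authors' model: ordinary heat conduction through an ice column, plus an assumed heat-exchange term that nudges the wetted layers toward the melting point during an eight-week melt season. All parameter values are illustrative guesses.

```python
import numpy as np

# Minimal 1-D ice-column temperature model (illustrative only; parameter values are assumptions).
nz, dz = 100, 10.0             # 100 layers, 10 m each (1 km column)
kappa = 1.1e-6                 # thermal diffusivity of ice, m^2/s
dt = 86400.0                   # one-day time step, s
years = 50
T = np.full(nz, -20.0)         # initial ice temperature, deg C
T_surface = -20.0              # fixed surface temperature
T_water = 0.0                  # temperature of meltwater in crevasses/moulins
exchange_rate = 1.0 / (30 * 86400.0)   # assumed relaxation rate toward 0 C where water flows (1/s)
water_layers = slice(0, 40)    # assume water reaches the upper 400 m via crevasses and moulins

for step in range(int(years * 365)):
    day_of_year = step % 365
    melt_season = day_of_year < 56                      # ~8 weeks of water flow per year
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2  # conduction (interior points)
    T += dt * kappa * lap
    if melt_season:
        # Heat exchange with flowing water: relax wetted layers toward 0 deg C.
        T[water_layers] += dt * exchange_rate * (T_water - T[water_layers])
    T[0] = T_surface                                    # surface boundary condition
    T[-1] = T[-2]                                       # no-flux base (ignores geothermal heat)

print("Mean temperature of upper 400 m after %d years: %.1f C" % (years, T[:40].mean()))
```

With these assumed numbers the wetted layers approach 0 °C within a few years of simulated time; setting exchange_rate to zero leaves the column warming only by slow conduction from the surface, which is essentially the contrast the authors describe.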

"The key difference between our model and previous models is that we include heat exchange between water flowing through the ice sheet and the ice," said Rajaram.

Several factors contributed to the warming and the resulting acceleration of ice flow, including the fact that water flowing into the ice sheet can stay liquid even through the winter, slowing seasonal cooling. In addition, warmer ice sheets are more susceptible to increased water flow, including the basal lubrication that allows ice to slide more readily over bedrock.

A third factor is melt water cascading downward into the ice, which warms the surrounding ice. In this process the water can refreeze, creating additional cracks in the more vulnerable warm ice, according to the study.

Taken together, the interactions between water, temperature and ice velocity spell even more rapid changes to ice sheets in a changing climate than currently anticipated, the authors concluded. After comparing observed temperature profiles from Greenland with the new model described in the paper, they found that the observations could not be explained unless the warming effect of meltwater was taken into account.

"The fact that the ice temperatures warm rather quickly is really the key piece that's been overlooked in models currently being used to determine how Greenland responds to climate warming," Steffen said. "However, this process is not the ‘death knell' for the ice sheet. Even under such conditions, it would still take thousands of years for the Greenland ice sheet to disappear, Steffen said.

(Photo: Konrad Steffen, CIRES)

University of Colorado at Boulder

UV LIGHT NEARLY DOUBLES VACUUM’S EFFECTIVENESS IN REDUCING CARPET MICROBES


New research suggests that the addition of ultraviolet light to the brushing and suction of a vacuum cleaner can almost double the removal of potentially infectious microorganisms from a carpet’s surface when compared to vacuuming alone.

Researchers say the findings suggest that incorporating the germicidal properties of UV light into vacuuming might have promise in reducing allergens and pathogens from carpets, as well.

“What this tells us is there is a commercial vacuum with UV technology that’s effective at reducing surface microbes. This has promise for public health, but we need more data,” said Timothy Buckley, associate professor and chair of environmental health sciences at Ohio State University and senior author of the study.

“Carpets are notorious as a source for exposure to a lot of bad stuff, including chemicals, allergens and microbes. We need tools that are effective and practical to reduce the associated public health risk. This vacuum technology appears to be a step in the right direction.”

The research appears online in the journal Environmental Science & Technology.

For this study, Buckley and colleagues tested a commercially available upright vacuum cleaner, evaluating separately and in combination the standard beater-bar, or rotating brush, as well as a lamp that emits germicidal radiation.

UV-C light with a wavelength of 253.7 nanometers has been studied extensively for its disinfection properties in water, air and food and on a variety of surfaces. This is the first study of its effects on carpet surfaces.

The Ohio State research group selected multiple 3-by-3-foot sections of carpeting of different types from three settings: a commercial tight-loop carpet in a university conference room, and medium Berber carpet with longer, denser loops in the common room of an apartment complex and in a single-family home.

Researchers collected samples from each carpet section using contact plates that were pressed onto the flooring to lift microbes from the carpet surfaces. They collected samples from various locations on each test site to obtain a representative sample of the species present on the carpets.

After sampling, the plates were incubated for 24 hours in a lab and the number of colonies was counted. The plates contained growth media particularly suited for fungi commonly found in indoor environments, including Penicillium and Zygomycetes.

Each treatment was tested separately by collecting multiple samples from each 3-by-3-foot section before and after treatment: vacuuming alone, the application of UV-C light alone, or a combination of UV-C light and vacuuming. In each case, the carpets were vacuumed at a speed of 1.8 feet per second for two minutes.

Overall, vacuuming alone reduced microbes by 78 percent, UV-C light alone produced a 60 percent reduction in microbes, and the combination of beater-bar vacuuming and UV-C light reduced microbes on the carpet surfaces by 87 percent. When looking at the microbe quantities, the researchers found that, on average, vacuuming alone removed 7.3 colony-forming units of microbes per contact plate and the UV-C light removed 6.6 colony-forming units per plate. The combination of UV-C light and vacuuming yielded the largest reduction in colony-forming units: 13 per plate.
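A quick back-of-the-envelope check of that arithmetic, using the per-plate averages reported above (illustrative only):

```python
# Average colony-forming units (CFU) removed per contact plate, as reported in the article.
vacuum_only = 7.3
uv_only = 6.6
combined = 13.0

print("Sum of individual effects:", vacuum_only + uv_only)            # 13.9, close to the 13 observed
print("Combined vs. vacuum alone: %.1fx" % (combined / vacuum_only))  # roughly 1.8x, i.e. nearly double
```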

“We concluded that the combined UV-C-equipped vacuum produced approximately the sum of the individual effects, and therefore the UV-C was responsible for an approximate doubling of the vacuum cleaner’s effectiveness in reducing the surface-bound microbial load,” Buckley said.

Surfaces in residential settings, and especially carpets, are seen as potentially posing health risks because they are reservoirs for the accumulation of a variety of contaminants. Those most susceptible to infection, including the elderly, asthmatics, the very young and people with compromised immune systems, might be at particular risk because they spend most of their time indoors, Buckley noted.

“The best next step would be to test this UV-C vacuum technology in some environments that are high risk, where we could sample for specific pathogens,” Buckley said. “The home environment would be particularly important, because that’s where people spend the lion’s share of their time and are likely to be in close contact with carpet.”

Most natural UV-C rays from the sun are absorbed in the atmosphere, but long-term exposure to artificial UV-C sources can cause skin and eye damage. The vacuum has been engineered to prevent exposure to harmful radiation from the UV-C lamp, Buckley said.

Upright vacuum retail prices generally range from about $100 to $900. The equipment in this study falls within that range.

(Photo: OSU)

Ohio State University

PHANTOM IMAGES STORED IN FLEXIBLE NETWORK THROUGHOUT BRAIN


Brain research over the past 30 years has shown that if a part of the brain controlling movement or sensation or language is lost because of a stroke or injury, other parts of the brain can take over the lost function – often as well as the region that was lost.

New research at the University of California, Berkeley, shows that this holds true for memory and attention as well, though — at least for memory — the intact brain helps out only when needed and conducts business as usual when it's not.

These results support the hypothesis that memory is not stored in one place, but rather, is distributed in many regions of the brain, which means that damage to one storage area is easier to compensate for.

"It's not just specific regions, but a whole network, that's supporting memory," said Bradley Voytek, a UC Berkeley postdoctoral fellow in the Helen Wills Neuroscience Institute and first author of two recent journal articles describing EEG (electroencephalogram) studies of people with strokes. Voytek recently completed his Ph.D. in neuroscience at UC Berkeley.

"The view has always been, if you lose point A, point B will be on all the time to take over," said co-author Dr. Robert Knight, UC Berkeley professor of psychology and head of the Wills Institute. "Brad has shown that's not true. It actually only comes on if it's needed.

"Most of the time, it acts like a normal piece of brain tissue. It only kicks into hyperdrive when the bad part of the brain is particularly challenged, and it does it in less than a second. This is a remarkably fluid neural plasticity, but it isn't the standard 'B took over for A,' it's really 'B will take over if and when needed.'"

One of the papers, published Nov. 3 in the online edition of Neuron, describes a study of stroke patients who have lost partial function in their prefrontal cortex, the area at the top front of each hemisphere of the brain that governs memory and attention.

Voytek put electrodes on the scalps of six stroke patients as well as six controls with normal prefrontal cortex function, and showed each patient a series of pictures to test his or her ability to remember images for a brief time, so-called visual working memory. Visual working memory is what allows us to compare two objects, keeping one in memory while we look at another, as when we choose the ripest of two bananas.

"We presented each subject with a really quick flash of a visual stimulus and then showed them a second one a little while later, and they had to say whether it was the same as the first," Voytek explained. "The idea is that you're building a representation of your visual world somehow in your brain — and we don't know how that happens — so that later you can compare this internal phantom representation you're holding in your mind to a real world visual stimulus, something you actually see. These patients can't do that as well."

EEGs provide millisecond measurements of brain activity, though they do not pinpoint active areas as precisely as other techniques, such as functional magnetic resonance imaging (fMRI). On the other hand, fMRI averages brain activity over seconds, making it impossible to distinguish split-second brain processes or even tell which occur first.

The neuroscientists discovered that when images were shown in the visual field opposite the lesion (the left half of the visual field is processed by the right hemisphere, and vice versa), the damaged prefrontal cortex did not respond, but the intact prefrontal cortex on the same side as the image responded within 300 to 600 milliseconds.

"EEG, which is very good for looking at the timing of activity in the brain, showed that part of the brain is compensating on a subsecond basis," Voytek said. "It is very rapid compensation: Within a second of challenging the bad side, the intact side of the brain is coming online to pick up the slack."

"This has implications for what physicians measure to see if there's effective recovery after stroke," Knight said, "and suggests that you can take advantage of this to train the area you would like to take over from a damaged area instead of just globally training the brain."

In a second paper that appeared online Oct. 4 in the journal Proceedings of the National Academy of Sciences, Voytek and Knight looked at visual working memory in patients with damage not only to the prefrontal cortex, but also to the basal ganglia. The basal ganglia are a pair of regions directly below the brain's cortex that are involved in motor control and learning and that are impaired in patients with Parkinson's disease.

The patients with stroke damage to the prefrontal cortex had, as suspected, problems when images were presented in the visual field opposite the lesion. Those with basal ganglia damage, however, had problems with visual working memory no matter which part of the visual field the image appeared in.

"The PNAS paper shows that the basal ganglia lesions cause a more broad network deficit, whereas the prefrontal cortex lesions cause a more within-hemisphere deficit in memory," Voytek said. "This demonstrates, again, that memory is a network phenomenon rather than a specifically regional phenomenon."

"If you take out one basal ganglia, the logic would be that you would be Parkinsonian on half your body. But you're not," Knight said. "One basal ganglia on one side is able to somehow control fluid movement on both sides."

"Brad's data show that for cognitive control, it's just the opposite. One small basal ganglia lesion on one side has global effects on both sides of your body," he added. "This really points out that for this deep subcortical basal ganglia area, you need all of it to function normally. I don't think anybody would have really suspected that."

Knight hopes to conduct follow up studies using direct recordings from electrodes in the brain to further explore the various brain regions involved in visual memory and other types of memory and attention governed by the prefrontal cortex.

"Cognition and memory are the highest forms of human behavior," Knight said. "It is not just about raising or lowering your hand, or whether you can or cannot see. These are the things that make us human, and that is what makes it so interesting for us."

Other coauthors of the Neuron paper are Matar Davis and Elena Yago of UC Berkeley's Helen Wills Neuroscience Institute; Francisco Barceló of the Institut Universitari d'Investigació en Ciències de la Salut at the Universitat de les Illes Balears in Palma de Mallorca, Spain; and Edward K. Vogel of the University of Oregon in Eugene.

(Photo: Bradley Voytek. Robert Knight/UC Berkeley)

University of California, Berkeley

NEUTRON STARS MAY BE TOO WEAK TO POWER SOME GAMMA-RAY BURSTS


A gamma-ray burst is an immensely powerful blast of high-energy light thought to be generated by a collapsing star in a distant galaxy, but what this collapse leaves behind has been a matter of debate.

A new analysis of four extremely bright bursts observed by NASA's Fermi satellite suggests that the remnant from a long-duration gamma-ray burst is most likely a black hole rather than a rapidly spinning, highly magnetized neutron star, or magnetar, because such a burst emits more energy than is theoretically possible from a magnetar.

"Some of the events we have been finding seem to be pushing right up against this total limit for a neutron star progenitor system," said S. Bradley Cenko, a post-doctoral fellow from the University of California, Berkeley.

Long-duration gamma-ray bursts (GRBs) are presumed to be created by the explosive collapse in distant galaxies of massive stars. The explosion is visible from Earth because the light is emitted in a narrow cone, like a beam from a lighthouse. First discovered in 1967 by satellites looking for nuclear blasts on Earth, gamma-ray bursts have been the focus of several satellite missions, most recently NASA's Fermi gamma-ray space telescope, launched in 2008, and NASA's Swift satellite, launched in 2004.

With accumulating observations, astronomers have been able to create models of how the collapse of a rapidly rotating, massive star can accelerate matter to nearly the speed of light and collimate it into two oppositely directed, tightly focused beams along the spin axis. They have also studied how these particles generate gamma rays and other emissions.

The two leading candidates for powering these long-duration bursts are a magnetar and a black hole, sometimes referred to as a collapsar. In both cases, material from the star falls inward and is catapulted out by the spinning neutron star or black hole. What distinguishes these models is that magnetar-powered bursts cannot be as powerful as black hole-powered bursts.

"The question we have been trying to answer is: What is the true energy release from these events?" Cenko said. "We can measure all the light emitted – very high energy gamma rays, and, at later times, X-ray, optical and radio afterglow emissions – but that doesn't provide a very good estimate, because GRBs emit in relatively narrow jets. We have to get an idea of the geometry of this outflow, that is, how collimated the jets are."

Previous studies have shown that light measured in the afterglow begins to drop steeply at a certain point, and the sooner this drop-off, called a jet break, the narrower the jet. Typically, the gamma-ray burst itself lasts from a few seconds to as long as 100 seconds, but the afterglow, produced when the jets interact with gas and dust surrounding the star, emits visible light for a couple of weeks and radio radiation for several months.

While Swift has observed hundreds of bursts in the past five years and notified astronomers within seconds of detection, the instruments aboard the satellite detect mostly medium-sized bursts that are not as highly collimated and that have a jet break many days or weeks after the burst.

Fermi's Large Area Telescope, however, is sensitive to very bright bursts with jet breaks within several days of the burst, making follow-up observations easier with Swift's X-ray and ultraviolet-optical telescopes and the ground-based Very Large Array, a radio telescope operated by the National Radio Astronomy Observatory (NRAO).

Fermi detects few extremely bright bursts, however – only four in 2009, Cenko said – and does not notify astronomers for nearly a day afterward. Once alerted, however, Cenko's team was able to observe the optical, X-ray and radio afterglow of these four events, find the jet break and use this information, along with the star's distance calculated from its redshift, to estimate the total energy output.

If the energy from these bright bursts were emitted in all directions, it would be equivalent to the mass of the sun being converted instantaneously into pure energy. Because the gamma-ray burst was focused in a cone only a few degrees wide, however, the energies for all four bursts were about 100 to 1,000 times less than this.
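A back-of-the-envelope version of that geometric correction, with assumed jet angles rather than the paper's measured values: the "all directions" figure is roughly the Sun's rest-mass energy, E = Mc^2, and the true energy is smaller by the beaming fraction f_b = 1 - cos(theta) for a two-sided jet with half-opening angle theta.

```python
import math

M_SUN = 1.989e30        # kg
C = 2.998e8             # m/s

E_iso = M_SUN * C**2    # isotropic-equivalent energy ~ the Sun converted entirely to energy (J)

def beamed_energy(E_iso, theta_deg):
    """True energy after correcting for a two-sided jet with half-opening angle theta_deg."""
    f_b = 1.0 - math.cos(math.radians(theta_deg))
    return f_b * E_iso

for theta in (3.0, 8.0):             # assumed half-opening angles of a few degrees
    E_true = beamed_energy(E_iso, theta)
    print("theta = %.0f deg: E_true = %.1e J (about %.0fx below E_iso)"
          % (theta, E_true, E_iso / E_true))

# The article cites a theoretical magnetar cap of roughly a hundredth of the solar figure.
print("Approximate magnetar limit: %.1e J" % (E_iso / 100))
```

Half-opening angles of a few degrees give reductions in the 100-to-1,000 range, which is why pinning down the jet geometry matters so much for the total energy budget.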

Theoretical models of how these beams are produced place a limit on how much energy a magnetar can generate in one of these explosive bursts: about 100 times less than if one converted the sun entirely into energy. Several of these bright bursts exceed this limit.

"The magnetar model is in serious trouble for such incredibly powerful events," noted coauthor Alex Filippenko, UC Berkeley professor of astronomy. "Even if the magnetar energy limit is not strictly violated, the tremendous efficiency required by this process strains credulity."

"In the future, we will be trying to make more precise measurements and be looking for more events to rule out a neutron star model," Cenko said.

(Photo: Tony Piro/Caltech)

University of California, Berkeley

STUDY REVEALS WHY BRAIN HAS LIMITED CAPACITY FOR REPAIR AFTER STROKE, IDS NEW DRUG TARGET

Stroke is the leading cause of adult disability, due to the brain's limited capacity for recovery. Physical rehabilitation is the only current treatment following a stroke, and there are no medications available to help promote neurological recovery.

Now, a new UCLA study published in the Nov. 11 issue of the journal Nature offers insights into a major limitation in the brain's ability to recover function after a stroke and identifies a promising medical therapy to help overcome this limitation.

Researchers interested in how the brain repairs itself already know that when the brain suffers a stroke, it becomes excitable: brain cells fire excessively and then die off. The UCLA researchers found that a rise in a chemical signaling system known as "tonic inhibition" immediately after a stroke reduces this excitability.

But while this "damping down" initially helps limit the spread of stroke damage, the increased tonic inhibition and reduced brain excitability persist for weeks, eventually becoming detrimental to the brain's recovery.

Based on this finding, the researchers identified a new way to "turn off" this inhibitory response in order to promote stroke recovery and determined the window of time in which this and other brain-repair therapies after stroke should be administered. These findings offer new targets for drug development to promote stroke recovery.

"It was surprising to find that the level of tonic inhibition was increased for so long after stroke and that there was an inflection point where the increased level eventually hindered the brain from recovering," said Dr. Tom Carmichael, associate professor of neurology at the David Geffen School of Medicine at UCLA and a member of the UCLA Stroke Center. "It was also surprising that we could easily manipulate tonic inhibition in the brain after stroke to restore it back to a normal, 'non-stroke' level and, in doing this, enhance behavioral recovery."

Other studies have looked in general terms at excitatory signaling or at a type of inhibitory signaling after stroke known as "phasic inhibition." However, this previous work focused on the direct connections between brain cells.

The UCLA research is the first to examine tonic inhibition in stroke, focusing on the chemical system, which does not directly link brain cells together but instead senses the overall activity level in the brain and sets the thresholds for when brain cells will fire off new signals.
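To illustrate what setting the threshold for firing means in practice, here is a minimal leaky integrate-and-fire sketch, not the authors' model and with invented parameter values, showing how an added tonic inhibitory conductance progressively damps and then silences a model neuron's output.

```python
def firing_rate(g_tonic, I_input=2.0, t_max=2.0, dt=1e-4):
    """Leaky integrate-and-fire neuron (dimensionless units) with a tonic
    inhibitory conductance g_tonic. Returns average spikes per second."""
    E_leak, E_inh, V_thresh, V_reset = 0.0, -0.2, 1.0, 0.0
    tau = 0.02                       # membrane time constant, s
    V, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        dV = (-(V - E_leak) - g_tonic * (V - E_inh) + I_input) / tau
        V += dt * dV
        if V >= V_thresh:
            spikes += 1
            V = V_reset
    return spikes / t_max

for g in (0.0, 0.5, 1.0):            # increasing levels of tonic inhibition
    print("g_tonic = %.1f -> %.0f spikes/s" % (g, firing_rate(g)))
```

With the same input drive, the neuron fires briskly with no tonic inhibition, more slowly at a moderate level, and not at all once the inhibitory conductance is large enough, which is the kind of persistent damping the study links to impaired recovery.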

By studying stroke and stroke recovery in mice, the researchers found that stroke reduces the normal clearance of an inhibitory brain chemical, leaving neurons in the tissue bordering the stroke less excitable. By applying specific blockers of this inhibitory chemical, they could then "turn off the switch."

The resulting enhanced brain excitability immediately improved behavioral recovery after stroke. As a result, these findings identified the potential for a new target in the brain for effective stroke recovery treatments.

"An important element in stroke treatment is the timing of drug delivery," added Carmichael. "We found that blocking tonic inhibition too early could produce cell death, but by delaying treatment to three days after stroke, it promoted functional recovery without altering the stroke size."

The next stage of research will be to validate the findings in other preclinical models of stroke, and then to design clinical trials in humans. Pharmaceutical companies have been active in this area of neuroscience, and some promising candidate drugs for human use already exist.

University of California, Los Angeles

META-FLEX: YOUR NEW BRAND FOR INVISIBILITY CLOTHING


Flexible smart materials that can manipulate light to shield objects from view have been much theorised, but now researchers in Scotland have made a practical breakthrough that brings the possibility of an invisibility cardigan, or any other item of invisibility clothing, one step closer.

Two challenges stand in the way of smart flexible materials that can cloak objects from visible light: making meta-atoms small enough to interact with visible light, and fabricating metamaterials that can be detached from the hard surfaces they are developed on so they can be used in more flexible constructs.

Research published on Thursday 4 November 2010 in the New Journal of Physics (co-owned by the Institute of Physics and the German Physical Society) details how Meta-flex, a new material designed by researchers from the University of St Andrews, overcomes both of these challenges.

Although cloaks that shield objects from terahertz and near-infrared waves have already been designed, a flexible material that cloaks objects from visible light poses a greater challenge: visible light's smaller wavelength means the metamaterial's constituent parts, the meta-atoms, must be small enough to interact with it.

These tiny meta-atoms have been designed before, but traditionally they have been realized only on flat, hard surfaces, making them rigid constructs impractical for clothing or for other applications that would benefit from flexibility, such as superlenses.

The research team, led by EPSRC Career Acceleration Fellow Dr Andrea Di Falco, has developed an elaborate technique that frees the meta-atoms from the hard surface ('substrate') they are constructed on. The researchers predict that stacking the freed layers together could create an independent, flexible material suitable for a wide range of applications.

Di Falco says, "Metamaterials give us the ultimate handle on manipulating the behaviour of light. The impact of our new material Meta-flex is ubiquitous. It could be possible to use Meta-flex for creating smart fabrics and, in the paper, we show how easy it is to place Meta-flex on disposable contact lenses, showing how flexible superlenses could be used for visual prostheses."

(Photo: IoP)

The Institute of Physics
