Friday, July 31, 2009

EXPERIMENTS SHOW 'ARTIFICIAL GRAVITY' CAN PREVENT MUSCLE LOSS IN SPACE

When the Apollo 11 crew got back from the moon 40 years ago, they showed no ill effects from the seven days they had spent in weightlessness. But as American astronauts and Soviet cosmonauts began conducting longer-duration space flights, scientists noticed a disturbing trend: the longer humans stay in zero gravity, the more muscle they lose. Space travelers exposed to weightlessness for a year or more — on a mission to Mars, for example — could wind up crippled on their return to Earth, unable to walk or even sit up.

Now, researchers at the University of Texas Medical Branch at Galveston have conducted the first human experiments using a device intended to counteract this effect — a NASA centrifuge that spins a test subject with his or her feet outward 30 times a minute, creating an effect similar to standing against a force two and a half times that of gravity. Working with volunteers kept in bed for three weeks to simulate zero-gravity conditions, they found that just one hour a day on the centrifuge was enough to maintain normal rates of muscle protein synthesis.
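As a rough plausibility check (my arithmetic, not figures from the UTMB study), the centripetal load on a spinning rider is a = ω²r, so the arm length needed to produce about 2.5 g at the feet at 30 revolutions per minute works out to roughly 2.5 meters, consistent with a short-radius centrifuge:

```python
import math

# Back-of-envelope check (illustrative, not from the study): solve
# a = omega^2 * r for the radius that gives ~2.5 g at 30 rpm.
g = 9.81                            # standard gravity, m/s^2
omega = 30 * 2 * math.pi / 60       # 30 rpm converted to rad/s
radius = 2.5 * g / omega**2         # arm length at the feet
print(f"required radius ~ {radius:.2f} m")   # ~2.5 m
```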

"This gives us a potential countermeasure that we might be able to use on extended space flights and solve a lot of the problems with muscle wasting," said UTMB associate professor Douglas Paddon-Jones, senior author of a paper on the centrifuge research in the July issue of the Journal of Applied Physiology. "This small amount of loading, one hour a day of essentially standing up, maintained the potential for muscle growth."

Fifteen healthy male volunteers participated in the study, carried out in UTMB's General Clinical Research Center. All spent 21 days lying in a slightly head-down position that previous investigations have shown produces effects on muscles like those of weightlessness. Eight rode the centrifuge daily; the other seven served as controls. Measurements of protein synthesis and breakdown in thigh and calf muscle were taken at the beginning and end of the investigation, using muscle biopsies and blood samples. The results showed that members of the centrifuge group continued to make thigh muscle protein at a normal rate, while the control group's muscle synthesis rate dropped by almost half.

Paddon-Jones cautioned that the rate of muscle protein synthesis alone does not necessarily predict changes in muscle function. But, he pointed out, it was still a strong indicator that a relatively brief intervention could have a positive effect in preventing zero-gravity muscle loss — one that might also be applied on Earth.

"We've studied elderly inpatients here at UTMB — 95 percent of the time they're completely inactive, and in three days they lose more than a kilogram of muscle," Paddon-Jones said. "A human centrifuge may not be the answer, but we are interested in seeing if something as simple as increasing the amount of time our patients spend standing and moving can slow down this process. This NASA research is one of a series of important studies that we hope to ultimately translate to a clinical population."

University of Texas Medical Branch

NEW DISCOVERY SUGGESTS TREES EVOLVED CAMOUFLAGE DEFENSE AGAINST LONG EXTINCT PREDATOR

Many animal species such as snakes, insects and fish have evolved camouflage defences to deter attack from their predators. However, research published in New Phytologist has discovered that trees in New Zealand evolved a similar defence to protect themselves from now-extinct giant birds, providing the first evidence of this strategy in plant life.

"Plants are attacked by a bewildering array of herbivores and in response they have evolved a variety of defences to deter predators such as thorns and noxious chemicals," said lead researcher Dr Kevin Burns from Victoria University of Wellington, New Zealand. "In contrast animals often use colours to hide from predators or advertise defences, but until now there has been little evidence of colour based defences in plants."

Dr Burns' team studied the leaves of Pseudopanax crassifolius, a heteroblastic tree in the family Araliaceae that is native to New Zealand. The species goes through several strange colour transitions as it grows from germination to maturity, and the reason for these changes is now thought to be a defence strategy against an extinct predator, the moa.

Before the arrival of humans, New Zealand had no native land mammals but was home to moa: giant flightless birds, closely related to the modern ostrich, that were the dominant browsing herbivores in the food chain. Moa were hunted to extinction about 750 years ago.

P. crassifolius has several defences that the team suggests are linked to the historic presence of moa. Seedlings produce small, narrow leaves that appear mottled to the human eye, while saplings produce larger, more elongated leaves with thorn-like teeth along their margins.

The mottled colours of seedling leaves resemble leaf litter, which would have made them difficult for a moa to distinguish. The unusual colouring may also obscure leaf outlines, helping camouflage the leaves against the dappled light of the forest floor.

Moa also lacked teeth and swallowed leaves whole by placing them in their bills and snapping their heads forward. The long, rigid leaves produced by P. crassifolius would have been difficult for a moa to swallow. The maximum browsing height of the largest known moa was approximately 300 cm; once P. crassifolius grows above this height, it produces leaves that are ordinary in size, shape and colour, lacking any defence.

To test whether these defences were linked to the presence of moa, the team compared P. crassifolius leaves with samples from a similar species, P. chathamicus, found on the Chatham Islands, 800 kilometres east of New Zealand. Unlike New Zealand, the islands never had large browsers such as moa, so the plant life there did not evolve defences against them.

"The Chatham island species displays less morphological changes between adults and juveniles," said Burns. "If these colouring changes developed in response to the presence of moa in New Zealand they are reduced when they have evolved in the absence of moa."

Wiley InterScience

CHIMPS, LIKE HUMANS, FOCUS ON FACES

A chimp's attention is captured by faces more effectively than by bananas. A series of experiments described in BioMed Central's open access journal Frontiers in Zoology suggests that the apes are wired to respond to faces in a similar manner to humans.

Masaki Tomonaga and Tomoko Imura from the Primate Research Institute at Kyoto University, Japan, tested the effects of a series of different images on chimps' reaction times. Tomonaga said, "It is well known that faces are processed in a different manner from other types of complex visual stimuli. Recent studies of face perception in humans clarified that faces represent special stimuli with regard to visuospatial attention as well. That is, they are able to capture our attention. We've shown that chimps share this tendency to notice and pay attention to faces in preference to other objects."

The researchers gave chimps the option of playing a game for food. If the chimps chose to, they could approach a computer screen where an image would be displayed, followed by a target. If the chimps pressed the target, they would receive a reward. In one set of experiments, the image was displayed on one side of the screen, followed by the target either on the same side or on the previously blank side. Reaction times improved when the target appeared where the image had been. The chimps were then presented with two images side by side, one of which was a chimpanzee face. When the target appeared behind the face, reaction times were better than when it appeared behind the other object – showing that attention had indeed been drawn to the face side of the screen.

Chimpanzee faces were shown to attract attention more effectively than bananas and other objects such as flowers, houses or trains. This effect was reduced when the faces were inverted, suggesting that it is the specific configuration of an upright face that catches the eye. According to Tomonaga, "This attentional capture was also observed when upright human faces were presented, indicating that this effect is not limited to their own species".

(Photo: Tomonaga et al., Frontiers in Zoology)

Frontiers in Zoology

ANCIENT HUMANS LEFT EVIDENCE OF A PARTY THAT ENDED 4,000 YEARS AGO

The party ended more than 4,000 years ago, but the remnants remain in the gourds and squashes that served as dishware. For the first time, University of Missouri researchers have studied the residues from gourd and squash artifacts that date back to 2200 B.C. and recovered starch grains from manioc, potato, chili pepper, arrowroot and algarrobo. The starches provide clues about the foods consumed at feasts and document the earliest evidence of the consumption of algarrobo and arrowroot in Peru.

"Archaeological starch grain research allows us to gain a better understanding of how ancient humans used plants, the types of food they ate, and how that food was prepared," said Neil Duncan, doctoral student of anthropology in the MU College of Arts and Science and lead author of the study that was published in the Proceedings of the National Academy of Science (PNAS). "This is the first study to analyze residue from bottle gourd or squash artifacts. Squash and bottle gourds had a variety of uses 4,000 years ago, including being used as dishes, net floats and symbolic containers. Residue analysis can help determine the specific use."

In the study, researchers recovered starch grains from squash and gourd artifacts by a method that currently is used to recover microfossils from stone tools and ceramics. First, the artifact was placed in a special water bath to loosen and remove adhering residue. Then the artifact's interior surface was lightly brushed to remove any remaining residue. The residues were collected, and starch grains were isolated from each of these sediments.

"The starch residues of edible plants found on the artifacts and the special archaeological context from which these artifacts were recovered suggest that the artifacts were used in a ritual setting for the serving and production of food," Duncan said. "The method used in this study could be used in other areas and time periods in which gourds and squash rinds are preserved."

Scientists believe the Buena Vista site, where the starch grains were recovered, served as a small ceremonial center in the central Chillon Valley. The social and ritual use of food is not well understood during this time period in Peru, but this research will enhance the potential for understanding, Duncan said.

The study, "Gourd and squash artifacts yield starch grains of feasting foods from preceramic Peru," is coauthored by Duncan; Deborah Pearsall, professor of anthropology; and Robert Benfer, emeritus professor of anthropology.

(Photo: U. Missouri)

University of Missouri

UCLA SCIENTISTS PRESENT FIRST GENETIC EVIDENCE FOR WHY PLACEBOS WORK

Placebos are a sham — usually mere sugar pills designed to represent "no treatment" in a clinical treatment study. The effectiveness of the actual medication is compared with the placebo to determine if the medication works.

And yet, for some people, the placebo works nearly as well as the medication. How well placebos work varies widely among individuals. Why that is so, and why they work at all, remains a mystery, thought to be based on some combination of biological and psychological factors.

Now, researchers at UCLA have found a new explanation: genetics. Dr. Andrew Leuchter, a professor of psychiatry at the UCLA Semel Institute for Neuroscience and Human Behavior, and colleagues report that in people suffering from major depressive disorder, or MDD, genes that influence the brain's reward pathways may modulate the response to placebos. The research appears in the August edition of the Journal of Clinical Psychopharmacology.

Placebos are thought to act by stimulating the brain's central reward pathways through the release of a class of neurotransmitters called monoamines, specifically dopamine and norepinephrine. These are the brain chemicals that make us "feel good." Because monoamine signaling is under strong genetic control, the scientists reasoned that common genetic variations between individuals — called genetic polymorphisms — could influence the placebo response.

Researchers took blood samples from 84 people diagnosed with MDD; 32 were given medication and 52 a placebo. The researchers looked at the polymorphisms in genes that coded for two enzymes that regulate monoamine levels: catechol-O-methyltransferase (COMT) and monoamine oxidase A (MAO-A). Subjects with the highest enzyme activity within the MAO-A polymorphism had a significantly lower placebo response than those with other genotypes. With respect to COMT, those with lower enzyme activity within this polymorphism had a lower placebo response.

"Our findings suggest that patients with MDD who have specific MAO-A and COMT genotypes may be biologically advantaged or disadvantaged in mounting a placebo response, because of the activity of these two enzymes," said Leuchter, who directs the Laboratory of Brain, Behavior and Pharmacology at the UCLA Semel Institute.

"To our knowledge, this is the first study to examine the association between MAO-A and COMT polymorphisms and a response to placebo in people who suffer from major depressive disorder," he said.

Leuchter noted that this is not the sole explanation for a response to a placebo, which is likely to be caused by many factors, both biological and psychosocial. "But the data suggests that individual differences in response to placebo are significantly influenced by individual genotypes," he said.

Including the influence of genotype in the design of clinical trials could facilitate more powerful testing of future treatments, Leuchter said.

UCLA

NEW FINDINGS ON THE BIRTH OF THE SOLAR SYSTEM

A team of international astrophysicists, including Dr Maria Lugaro from Monash University, has discovered a new explanation for the early composition of our solar system.

The team has found that radioactive nuclei found in the earliest meteorites, dating back billions of years, could have been delivered by a nearby dying giant star six times the mass of the sun.

Dr Lugaro said the findings could change our current ideas on the origin of the solar system.

"We have known about the early presence of these radioactive nuclei in meteorites since the 1960s, but we do not know where they originated from. The presence of the radioactive nuclei has been previously linked to a nearby supernova explosion, but we are showing now that these nuclei are more compatible with an origin from the winds coming from a large dying star," Dr Lugaro said.

The conclusion was reached by combining stellar observations from telescopes with recently developed theoretical models, run on powerful computers, of how stars evolve and which nuclear reactions occur in their interiors.

"We need to know if the presence of radioactive nuclei in young planetary systems is a common or a special event in our galaxy because their presence affected the evolution of the first large rocks (the parent bodies of asteroids and meteorites) in the solar system. These are believed to be the source of much of earth's water, which is essential for life," Dr Lugaro said.

"Within one million years of the formation of the solar system the radioactive nuclei decayed inside the rocks where they were trapped, releasing high-energy photons, which caused the rocks to heat. Since much of earth's water is believed to have originated from these first rocks, the possibility of life on earth depends on their heating history and, in turn, on the presence of radioactive nuclei." Dr Lugaro said.

"What we need to do now is investigate the probability that a dying giant star could have actually been nearby our then young solar system and polluted it with radioactive nuclei. This will inform us on the place where the solar system was born, on the probability that other young planetary system also are polluted with radioactive nuclei, and, eventually, on the probability of having water on terrestrial planets in other planetary systems."

(Photo: Gabriel Perez Diaz)

Monash University

Thursday, July 30, 2009

GRAPHENE'S VERSATILITY PROMISES NEW APPLICATIONS

Since its discovery just a few years ago, graphene has climbed to the top of the heap of new super-materials poised to transform the electronics and nanotechnology landscape.

As N.J. Tao, a researcher at the Biodesign Institute of Arizona State University, explains, this two-dimensional honeycomb structure of carbon atoms is exceptionally strong and versatile. Its unusual properties make it ideal for applications that are pushing the existing limits of microchips, chemical sensing instruments, biosensors, ultracapacitance devices, flexible displays and other innovations.

In the latest issue of Nature Nanotechnology, Tao describes the first direct measurement of a fundamental property of graphene, known as quantum capacitance, using an electrochemical gate method. A better understanding of this crucial variable should prove invaluable to other investigators participating in what amounts to a gold rush of graphene research.

Although theoretical work on single atomic layer graphene-like structures has been going on for decades, the discovery of real graphene came as a shock.

"When they found it was a stable material at room temperature," Tao says, "everyone was surprised."

As it happens, minute traces of graphene are shed whenever a pencil line is drawn, though producing an isolated 2-D sheet of the material has proven trickier. Graphene is remarkable in terms of thinness and resiliency. A one-atom-thick graphene sheet large enough to cover a football field would weigh less than a gram. It is also the strongest material in nature – roughly 200 times the strength of steel. Most of the excitement, however, has to do with the unusual electronic properties of the material.

Graphene displays outstanding electron transport, permitting electricity to flow rapidly and more or less unimpeded through the material. In fact, electrons have been shown to behave as massless particles similar to photons, zipping across a graphene layer without scattering. This property is critical for many device applications and has prompted speculation that graphene could eventually supplant silicon as the substance of choice for computer chips, offering the prospect of ultrafast computers operating at terahertz speeds, rocketing past current gigahertz chip technology.

Yet, despite encouraging progress, a thorough understanding of graphene's electronic properties has remained elusive. Tao stresses that quantum capacitance measurements are an essential part of this understanding.

Capacitance is a measure of a material's ability to store electrical charge. In classical physics, capacitance is limited by the mutual repulsion of like electrical charges, such as electrons. The more charge you put into a device, the more energy you must expend to contain it, in order to overcome that repulsion.

However, another kind of capacitance exists, and it dominates the overall capacitance in a two-dimensional material like graphene. This quantum capacitance is a consequence of the Pauli exclusion principle, which states that two fermions – a class of common particles including protons, neutrons and electrons – cannot occupy the same quantum state at the same time. Once a quantum state is filled, subsequent fermions are forced to occupy successively higher energy states.

As Tao explains, "it's just like in a building, where people are forced to go to the second floor once the first level is occupied."
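For readers who want a sense of scale, a minimal sketch follows using the standard textbook expression for the quantum capacitance of ideal graphene (this formula and the parameter values are general physics, not taken from Tao's paper):

```python
import math

# Quantum capacitance per unit area of ideal graphene:
#   C_Q = 2 e^2 sqrt(pi * n) / (pi * hbar * v_F)
# where n is the carrier density in m^-2. Textbook result, offered
# here only as an order-of-magnitude illustration.
e = 1.602e-19        # elementary charge, C
hbar = 1.055e-34     # reduced Planck constant, J s
v_F = 1.0e6          # Fermi velocity in graphene, m/s

def quantum_capacitance(n: float) -> float:
    """Return C_Q in F/m^2 for carrier density n in m^-2."""
    return 2 * e**2 * math.sqrt(math.pi * n) / (math.pi * hbar * v_F)

cq = quantum_capacitance(1e16)              # n = 1e12 cm^-2
print(f"C_Q ~ {cq * 100:.1f} uF/cm^2")      # ~2.7 uF/cm^2
```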

In the current study, two electrodes were attached to graphene, and a voltage was applied across the material's two-dimensional surface by means of a third electrode acting as a gate. In Tao's experiments, graphene's ability to store charge according to the laws of quantum capacitance was subjected to detailed measurement.

The results show that graphene's capacitance is very small. Further, the quantum capacitance of graphene did not precisely match theoretical predictions for ideal graphene, because experimental samples of graphene contain charged impurities, which alter the behavior relative to what theory predicts.

Tao stresses the importance of these charged impurities and what they may mean for the development of graphene devices. Such impurities were already known to affect electron mobility in graphene, though their effect on quantum capacitance has only now been revealed. Low capacitance is particularly desirable for chemical sensing devices and biosensors because it yields a higher signal-to-noise ratio, providing extremely fine-tuned resolution of chemical or biological agents.

Improvements to graphene will allow its electrical behavior to more closely approximate theory. This can be accomplished by adding counter ions to balance the charges resulting from impurities, thereby further lowering capacitance.

The sensitivity of graphene's single atomic layer geometry and low capacitance promise a significant boost for biosensor applications. Such applications are a central topic of interest for Tao, who directs the Biodesign Institute's Center for Bioelectronics and Biosensors.

As Tao explains, any biological substance that interacts with graphene's single atom surface layer can be detected, causing a huge change in the properties of the electrons.

One possible biosensor application under consideration would involve functionalizing graphene's surface with antibodies, in order to precisely study their interaction with specific antigens. Such graphene-based biosensors could detect individual binding events, given a suitable sample. For other applications, adding impurities to graphene could raise overall interfacial capacitance. Ultracapacitors made of graphene composites would be capable of storing much larger amounts of renewable energy from solar, wind or wave energy than current technologies permit.

Because of graphene's planar geometry, it may be more compatible with conventional electronic devices than other materials, including the much-vaunted carbon nanotubes.

"You can imagine an atomic sheet, cut into different shapes to create different device properties," Tao says.

Since the discovery of graphene, the hunt has been on for similar two-dimensional crystal lattices, though so far, graphene remains a precious oddity.

(Photo: ASU)

Arizona State University

BY MANIPULATING OXYGEN, SCIENTISTS COAX BACTERIA INTO A WAVE

Bacteria know that they are too small to make an impact individually. So they wait, they multiply, and then they engage in behaviors that are only successful when all cells participate in unison. There are hundreds of behaviors that bacteria carry out in such communities. Now researchers at Rockefeller University have discovered one that has never been observed or described before in a living system.

In research published in the May 12 issue of Physical Review Letters, Albert J. Libchaber, head of the Laboratory of Experimental Condensed Matter Physics, and his colleagues, including first author Carine Douarche, a postdoctoral associate in the lab, show that when oxygen penetrates a sample of oxygen-deprived Escherichia coli bacteria, they do something that no living community had been seen to do before: The bacteria accumulate and form a solitary propagating wave that moves with constant velocity and without changing shape. But while the front is moving, each bacterium in it isn’t moving at all.

“It’s like a soliton,” says Douarche. “A self-reinforcing solitary wave.”

Unlike the undulating pattern of an ocean wave, which flattens or topples over as it approaches the shore, a soliton is a solitary, self-sustaining wave that behaves like a particle. For example, when two solitons collide, they merge into one and then separate into two with the same shape and velocity as before the collision. The first soliton was observed in 1834 at a canal in Scotland by John Scott Russell, a scientist who was so fascinated with what he saw that he followed it on horseback for miles and then set up a 30-foot water tank in his yard where he successfully simulated it, sparking considerable controversy.
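The shape-preserving, particle-like behavior described here is exactly what defines a soliton mathematically. As a purely illustrative aside (this is the canonical Korteweg-de Vries soliton from textbook mathematics, not the bacterial model from this study), the sketch below verifies numerically that the profile at a later time is just the earlier profile translated, with its shape intact:

```python
import numpy as np

# Canonical KdV soliton: u(x, t) = (c/2) * sech^2(sqrt(c)/2 * (x - c*t)).
# It travels at speed c without changing shape.
def soliton(x: np.ndarray, t: float, c: float = 1.0) -> np.ndarray:
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

x = np.linspace(-10, 30, 2001)          # grid spacing 0.02
early = soliton(x, t=0.0)
late = soliton(x, t=20.0)

# The t=20 profile equals the t=0 profile shifted by c*t = 20:
shift = int(round(20 / (x[1] - x[0])))  # 1000 grid points
print(np.allclose(late[shift:], early[:-shift]))   # True
```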

The work began when Libchaber, Douarche and their colleagues placed E. coli bacteria in a sealed square chamber and measured the oxygen concentration and the density of bacteria every two hours until the bacteria consumed all the oxygen. (Bacteria, unlike humans, don’t die when starved for oxygen, but switch to a nonmotile state from which they can be revived.) The researchers then cracked the seals of the chamber, allowing oxygen to flow in.

The result: The motionless bacteria, which had spread out uniformly, began to move; first those around the perimeter, nearest to the seals, and then those further away. A few hours later, the bacteria began to spatially segregate into two domains of moving and nonmoving bacteria and pile up into a ring at the border of low-oxygen and no-oxygen. There they formed a solitary wave that propagated slowly but steadily toward the center of the chamber without changing its shape.

The effect, which lasted for more than 15 hours and covered a considerable distance (for bacteria), could not be explained by the expression of new proteins or by the addition of energy to the system. Instead, the creation of the front depends on the dispersion of the active bacteria and on the time it takes for oxygen-starved bacteria to stop moving completely, about 15 minutes. The former allows the bacteria to propagate at a constant velocity, while the latter keeps the front from changing shape.

However, a propagating front of bacteria wasn’t all that was created. “To me, the biggest surprise was that the bacteria control the flow of oxygen in the regime,” says Libchaber. “There’s a propagating front of bacteria, but there is a propagating front of oxygen, too. And the bacteria, by absorbing the oxygen, control it very precisely.”

Oxygen, Libchaber explains, is one of the fastest-diffusing molecules, spreading from regions of high concentration to regions of low concentration; a freely diffusing front covers distance in proportion to the square root of elapsed time. But that is not what they observed. Rather, oxygen penetrated the chamber very slowly and linearly. Equal time, equal distance. "This pattern is not due to biology," says Libchaber. "It has to do with the laws of physics. And it is organized in such an elegant way that the only thing it tells us is that we have a lot to learn from bacteria."

(Photo: Rockefeller U.)

Rockefeller University

HUSH LITTLE BABY... LINKING GENES, BRAIN, AND BEHAVIOR IN CHILDREN

It comes as no surprise that some babies are more difficult to soothe than others, but frustrated parents may be relieved to know that this is not necessarily an indication of their parenting skills. According to a new report in Psychological Science, a journal of the Association for Psychological Science, children's temperament may be due in part to a combination of a certain gene and a specific pattern of brain activity.

The pattern of brain activity in the frontal cortex of the brain has been associated with various types of temperament in children. For example, infants who have more activity in the left frontal cortex are characterized as temperamentally "easy" and are easily calmed down. Conversely, infants with greater activity in the right half of the frontal cortex are temperamentally "negative" and are easily distressed and more difficult to soothe.

In this study, Louis Schmidt from McMaster University and his colleagues investigated the interaction between brain activity and the DRD4 gene to see if it predicted children's temperament. In a number of previous studies, the longer version (or allele) of this gene had been linked to increased sensory responsiveness, risk-seeking behavior, and attention problems in children. In the present study, brain activity was measured in 9-month-old infants via electroencephalography (EEG) recordings. When the children were 48 months old, their mothers completed questionnaires regarding their behavior and DNA samples were taken from the children for analysis of the DRD4 gene.

The results reveal interesting relations among brain activity, behavior, and the DRD4 gene. Among children who exhibited more activity in the left frontal cortex at 9 months, those who had the long version of the DRD4 gene were more soothable at 48 months than those who possessed the shorter version of the gene. However, the children with the long version of the DRD4 gene who had more activity in the right frontal cortex were the least soothable and exhibited more attention problems compared to the other children.

These findings indicate that the long version of the DRD4 gene may act as a moderator of children's temperament. The authors note that the "results suggest that it is possible that the DRD4 long allele plays different roles (for better and for worse) in child temperament" depending on internal conditions (the environment inside the body), and conclude that the pattern of brain activity (that is, greater activation in the left or right frontal cortex) may influence whether this gene is a protective factor or a risk factor for soothability and attention problems. The authors caution that other factors likely interact with these two measures in predicting children's temperament.

Association for Psychological Science

GENETIC DISCOVERY MAY DETERMINE ALZHEIMER'S DISEASE RISK AND AGE OF DISEASE ONSET

A newly identified gene appears to be highly predictive of not only the risk of developing Alzheimer's disease, but also the approximate age at which the disease will begin to manifest itself, according to researchers at Duke University Medical Center.

This new gene may be the most highly predictive gene discovered to date in Alzheimer's disease.

In findings presented today at the International Conference on Alzheimer's Disease, the gene TOMM40 was found to predict the age of Alzheimer's disease development within a five- to seven-year window among people over age 60.

"If borne out through additional research, a doctor could evaluate a patient based on age, especially among those over age 60, their APOE genotype and their TOMM40 status, to calculate an estimated disease risk and age of onset," said Allen Roses, MD, director of the recently established Deane Drug Discovery Institute and the study's lead author.

Roses earlier uncovered the association of apolipoprotein E (APOE) genotypes, particularly APOE4, with the risk and lower age of onset for Alzheimer's disease. This discovery remains one of the most confirmed genetic associations for any complex disease.
"It now looks fairly clear that there are two major genes -- APOE4 and TOMM40 -- and together they account an estimated 85-90 percent of the genetic effect," Roses said.

APOE4 accounts genetically for 50 percent of late onset cases of Alzheimer's disease but the other half remained a mystery. Genome-wide screening and other new techniques have been used repeatedly without success. Roses' team employed a different approach.

The APOE gene is present in all people and is characterized by three variants, numbered two through four. The Duke researchers found that a variant of TOMM40 apparently evolved differently when attached to the APOE3 version of the gene than when attached to the APOE4 version.

Roses said the genetic association with Alzheimer's disease age of onset now goes beyond just APOE4. The researchers found that TOMM40 linked to APOE3 had either short or long repeated sequences, while all APOE4-linked repeat sequences were long. The study concluded that a longer version of TOMM40 attached to either APOE3 or APOE4 is significantly associated with an earlier disease onset, while the short repeat sequences were associated with a later onset of disease.

"Genome-wide screening detects big blocks of DNA inherited together, but it doesn't tell us all the differences within that block," Roses said. "We conducted a phylogenetic analysis to explore the evolution of the DNA and to see what changes take place on the backbone of other changes."

The technology has not been widely used in human genetics, Roses explained. "From all the genome-wide scans that were performed over the past four years, it was apparent that the variance within APOE could not account for the extremely high statistical significance which characterized this small block of genes, including APOE and TOMM40, which were inherited in a block."

"If someone gets APOE4 from their mother and APOE3 from their father, they also get TOMM40 as a linked caboose," Roses said. "If the TOMM40 is a short version of the gene attached to APOE3, then that person has a better chance of getting Alzheimer's disease very late, after age 80. But if it's a long TOMM40 they have a better chance of getting the disease before age 80."

The Duke team now plans to validate the association of the APOE genotypes and TOMM40 with age of disease onset and to determine how well these genes predict age of onset. They are planning a prospective, five-year study combined with a drug trial aimed at prevention or delay of disease onset.

The prevalence of Alzheimer's disease is predicted to quadruple world-wide by 2050 to more than 107 million cases, meaning that 1 in 85 persons will be living with the disease. It has been estimated that delaying disease onset by one or two years will decrease the disease burden in 2050 by 9.5 million or 23 million cases, respectively.
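A quick arithmetic check of the figures quoted above (my own calculation, not from the study) shows how they fit together:

```python
# 107 million cases at a prevalence of 1 in 85 implies a 2050 world
# population of about nine billion; a quadrupling by 2050 implies
# roughly 27 million cases today.
cases_2050 = 107e6
print(f"implied 2050 population: {cases_2050 * 85 / 1e9:.1f} billion")  # ~9.1
print(f"implied current cases:   {cases_2050 / 4 / 1e6:.0f} million")   # ~27
```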

The research team also plans to use this type of phylogenetic analysis to uncover genetic associations in other diseases, including autism, diabetes and chronic obstructive pulmonary disease (COPD).

Duke University Medical Center

ADULT BRAIN CAN CHANGE WITHIN SECONDS

The adult brain can recalibrate its responses to sensory input within seconds, drawing on connections between brain regions that normally remain silent. The brain's tendency to call upon these connections could help explain the curious phenomenon of "referred sensations," in which a person with an amputated arm "feels" sensations in the missing limb when he or she is touched on the face. Scientists believe this happens because the part of the brain that normally receives input from the arm begins "referring" to signals coming from a nearby brain region that receives information from the face.

"We found these referred sensations in the visual cortex, too," said senior author Nancy Kanwisher of the McGovern Institute for Brain Research at MIT, referring to the findings of a paper being published in the July 15 issue of the Journal of Neuroscience. "When we temporarily deprived part of the visual cortex from receiving input, subjects reported seeing squares distorted as rectangles. We were surprised to find these referred visual sensations happening as fast as we could measure, within two seconds."

Many scientists think that this kind of reorganized response to sensory information reflects a rewiring in the brain, or a growth of new connections.

"But these distortions happened too quickly to result from structural changes in the cortex," Kanwisher explained. "So we think the connections were already there but were silent, and that the brain is constantly recalibrating the connections through short-term plasticity mechanisms."

First author Daniel Dilks, a postdoctoral researcher in Kanwisher's lab, first found the square-to-rectangle distortion in a patient who suffered a stroke that deprived a portion of his visual cortex of input. The stroke created a blind region in his field of vision. When a square object was placed outside this blind region, the patient perceived it as a rectangle stretching into the blind area - a result of the deprived neurons now responding to a neighboring part of the visual field.

"But the patient's cortex had been deprived of visual information for a long time, so we did not know how quickly the adult visual cortex could change following deprivation," Dilks said. "To find out, we took advantage of the natural blind spot in each eye, using a simple perceptual test in healthy volunteers with normal vision."

Blind spots occur because the retina has no photoreceptors where the optic nerve exits the eye, so the visual cortex receives no stimulation from that point. We do not perceive our blind spots because the left eye sees what is in the right eye's blind area, and vice versa. Even when one eye is closed, we are not normally aware of a gap in our visual field.

It takes a perceptual test to reveal the blind spot, which involves covering one eye and moving an object towards the blind spot until it "disappears" from view.
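For the curious, the demonstration is easy to reproduce at home. A minimal sketch of the geometry follows; the 15-degree figure is approximate textbook physiology for the location of the optic disc, not a number from this study:

```python
import math

# The blind spot sits roughly 15 degrees to the temporal side of
# fixation, so with one eye covered, a mark placed at a lateral
# offset of d * tan(15 deg) should vanish at viewing distance d.
def blind_spot_offset(viewing_distance_m: float,
                      angle_deg: float = 15.0) -> float:
    return viewing_distance_m * math.tan(math.radians(angle_deg))

print(f"{blind_spot_offset(0.40) * 100:.1f} cm at 40 cm")   # ~10.7 cm
```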

Dilks and colleagues used this test to see how soon after the cortex is deprived of information volunteers begin to perceive shape distortions. They presented different-sized rectangles just outside the subjects' blind spot and asked the subjects to judge their height and width at different time points after one eye was patched.

The volunteers perceived the rectangles elongating just two seconds after their eye was covered - much quicker than expected. When the eye patch was removed, the distortions vanished just as fast as they had appeared.

"So the visual cortex changes its response almost immediately to sensory deprivation and to new input," Kanwisher explained. "Our study shows the stunning ability of the brain to adapt to moment-to-moment changes in experience even in adulthood."

MIT

MIT TEAM AIMS TO TAILOR SURGICAL GLUES FOR SPECIFIC APPLICATIONS

Today's surgical glues are, in effect, one-size-fits-all. MIT researchers aim to change that with glues tailored to specific tissues. In a recent issue of Advanced Materials, they identified for the first time how one kind of glue material bonds to tissue and how that adhesion varies depending on the tissue involved, from the intestine to the lung. They then showed that by adjusting certain properties of the materials it was possible to create a range of adhesives optimized for specific tissues and applications.

"The delineation of tissue-specific mechanisms for material adhesion leads the way for tailoring materials to individual needs and applications. This exciting work may well change the clinical use and continued evolution of soft-tissue sealants and adhesive materials," said Elazer R. Edelman, principal investigator and MIT's Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology.

Adhesive sealants could improve patient care and reduce healthcare costs by cutting medical complications after surgery, such as leakage through incisions, and by improving wound healing, according to Natalie Artzi, a postdoctoral associate who led the research in Edelman's lab.

Although there is already a billion-dollar market for such adhesives, "they haven't reached their true potential," Artzi said. Existing materials have limitations that often force doctors to compromise between adhesion strength and tissue reaction. For example, said Artzi, for a given tissue, the material may be adhesive but release toxins that could affect healing. Alternatively, the material could be quite tissue compatible, but degrade quickly, becoming non-adhesive. If the glue doesn't work, a doctor must switch to sutures or staples.

The problem, according to the MIT team, is that while surgical adhesives rely on intimate interactions between the adhesive and the tissue in question, the properties of the target tissue have been largely ignored in designing adhesives. Instead, "one general formulation is proposed for application to the full range of soft tissues across diverse clinical applications," Artzi and colleagues wrote in their Advanced Materials paper.

The new work characterized a variety of interactions between one kind of glue (hydrogels composed of polyethylene glycol and dextran aldehyde, or PEG:dextran for short) and tissue from a rat's heart, lung, liver and duodenum (the first section of the intestine). The team found, for example, that the glue worked well with tissue from the duodenum, but poorly with that from the lung.

They then went on to "identify the functional groups in the material that are responsible for adhesion with tissue functional groups, and created a model to optimize adhesion for each tissue," Artzi said. In particular, the paper explains how the variation of chemical reactive groups in the material can be matched to the variable density of corresponding reactive groups on different tissues to regulate the tissue-material interaction.

The team will use these findings to "develop a platform of adhesive materials" for specific tissues. Although it could take three to five years before the work translates into a product, "the concept is there," she concluded.

(Photo: Donna Coveney)

MIT

Wednesday, July 29, 2009

SMALLER PLANTS PUNCH ABOVE THEIR WEIGHT IN THE FOREST

New findings from Queen’s University biologists show that in the plant world, bigger isn’t necessarily better.

“Until now most of the thinking has suggested that to be a good competitor in the forest, you have to be a big plant,” says Queen’s Biology professor Lonnie Aarssen. “But our research shows it’s virtually the other way around.”

Previous studies revealed that larger plant species monopolize sunlight, water and other resources, limiting the number of smaller plant species that can exist around them. But new research has proven that this is not generally the case in natural vegetation.

In the Queen’s project, biology student Laura Keating targeted the largest individuals or “host plants” of 16 woody plant species growing in the Okanagan Valley, British Columbia. The research team calculated the number and variety of plants that neighboured each large host plant. They then randomly selected plots without host plants and calculated the plant species there as well. The research showed that the massive trees have no effect on the number of species with which they coexist.

“Think of the plants like professional boxers,” says Professor Aarssen. “To win the fight, you need more than a solid punch; you need to be able to tolerate all the punches you’re going to take. The winner may be the competitor with the superior ‘staying power’.”

Smaller plants have many advantages over their overbearing neighbours, Professor Aarssen notes. Larger species generate physical space niches under their canopies where smaller species thrive. Smaller plants are much more effective than large trees at utilizing available resources. They also produce seeds at a much younger age and higher rate than their bigger counterparts, and establish much more quickly – thus competing with the seedlings of larger species.

“A growing body of literature is calling for re-evaluation of traditional views on the role of plant size in affecting competitive ability, community assembly and species coexistence,” he adds.

(Photo: Queen’s U.)

Queen’s University

RESEARCHERS FIND EARLY MARKERS OF ALZHEIMER'S DISEASE

A large study of patients with mild cognitive impairment revealed that results from cognitive tests and brain scans can work as an early warning system for the subsequent development of Alzheimer's disease.

The research found that among 85 participants in the study with mild cognitive impairment, those with low scores on a memory recall test and low glucose metabolism in particular brain regions, as detected through positron emission tomography (PET), had a 15-fold greater risk of developing Alzheimer's disease within two years, compared with the others in the study.

The results, reported by researchers at the University of California, Berkeley, on Tuesday, July 14, at the Alzheimer's Association 2009 International Conference on Alzheimer's Disease in Vienna, Austria, are a major step forward in the march toward earlier diagnoses of the debilitating disease.

"Not all people with mild cognitive impairment go on to develop Alzheimer's, so it would be extremely useful to be able to identify those who are at greater risk of converting using a clinical test or biological measurement," said the study's lead author, Susan Landau, a post-doctoral fellow at UC Berkeley's Helen Wills Neuroscience Institute and the Lawrence Berkeley National Laboratory.

"The field, in general, is moving toward ways to select people during earlier stages of Alzheimer's disease, including those who show no outward signs of cognitive impairment," said Dr. William Jagust, a faculty member of UC Berkeley's Helen Wills Neuroscience Institute and principal investigator of the study. "By the time a patient is diagnosed with Alzheimer's disease, there is usually little one can do to stop or reverse the decline. Researchers are trying to determine whether treating patients before severe symptoms appear will be more effective, and that requires better diagnostic tools than what is currently available."

In the latest study, researchers compared a variety of measurements that had previously shown promise as early detectors of Alzheimer's. The measurements included scores on the Auditory Verbal Learning Test; the volume of the hippocampus, the part of the brain associated with the formation of new memory; the presence of the apolipoprotein E4 gene, which has been linked to increased risk of Alzheimer's; certain proteins found in the cerebrospinal fluid; and glucose metabolism detected in PET brain scans. A low rate of glucose metabolism in a particular brain region is considered a sign of poor neural function, most likely due to the loss of synapses in that area.

"What's really novel about our study is that we evaluated all of these biomarkers in the same subjects, so we could more easily compare the predictive value of any one measure over the others," said Landau. "The Auditory-Verbal Learning Test, which measures memory recall ability, and the PET scans measuring glucose metabolism were the two markers that clearly stood out over the others."

The researchers pointed out that other measurements - in particular, hippocampus volume and the cerebrospinal fluid markers - also showed promise in predicting disease progression. However, when considering all the measurements together, PET scans and memory recall ability were the most consistent predictors. The researchers expect to have more complete information about which measures serve as the best predictors in a year as they continue to gather data for this ongoing study.

An earlier study led by Jagust, a professor with joint appointments at UC Berkeley's School of Public Health and the Lawrence Berkeley National Laboratory, found that PET scans and magnetic resonance imaging (MRI) could detect neurological changes in asymptomatic people who subsequently developed dementia or mental impairment, although it was too soon to say if those people would go on to develop Alzheimer's.

The research is part of the nationwide Alzheimer's Disease Neuroimaging Initiative, a 60-center study funded by the National Institute on Aging. The ultimate goal of the initiative is to find a biomarker that predicts which individuals will later develop Alzheimer's disease. Ideally, this marker would be identifiable very early, even in people who do not yet show signs of mental impairment.

(Photo: Cindee Madison and Susan Landau, UC Berkeley)

University of California, Berkeley

MAINTAINING OR INCREASING PHYSICAL ACTIVITY SLOWS COGNITIVE DECLINE IN ELDERS

Elders who maintained or increased their level of physical activity showed significantly less cognitive decline over seven years than those who were not active or whose activity levels declined during that time, according to a study led by researchers at the San Francisco VA Medical Center and the University of California, San Francisco.

Elders whose activity rates fluctuated at times but who ultimately stayed physically active over the course of the study also showed less decline.

The results were reported at the 2009 Alzheimer’s Association International Conference on Alzheimer’s Disease in Vienna, Austria by lead author Deborah E. Barnes, PhD, MPH, a geriatric researcher at SFVAMC and an assistant professor of psychiatry at UCSF.

The study looked at the self-reported activity levels of 3,075 white and African-American non-demented elders, age 70 to 79, living in Memphis, Tenn., or Pittsburgh, Pa. Half were women, and one fourth had not graduated from high school. Study participants reported the number of minutes they typically walked per week at the beginning of the study and then at intervals of two, four, and seven years.

At each time point, the participants were classified as sedentary (zero minutes per week), low activity (less than 150 minutes per week), or high activity (150 minutes per week or more), based on the Surgeon General’s exercise guidelines. Their level of cognitive function was also measured at each time point using the Modified Mini-Mental State Exam, a standard test with a maximum score of 100.

After seven years, the adjusted scores of the participants who were consistently sedentary declined by an average of 0.62 points per year, while the scores of those whose activity level decreased over seven years declined by an average of 0.54 points per year.

In contrast, participants whose activity levels increased either consistently or in a fluctuating pattern over time showed a decline of only 0.44 points per year. The scores of those who were consistently active declined only 0.40 points annually.
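To make the classification and the reported rates concrete, here is a small illustrative helper (the thresholds and decline rates come from the study as described above; the code itself is mine, not the researchers'):

```python
# Adjusted cognitive decline, in points per year on the Modified
# Mini-Mental State Exam, by activity trajectory (from the study).
RATES = {
    "sedentary": 0.62,
    "decreasing": 0.54,
    "increasing or fluctuating": 0.44,
    "consistently active": 0.40,
}

def activity_level(minutes_per_week: float) -> str:
    """Classify weekly walking minutes with the Surgeon General's cutoffs."""
    if minutes_per_week == 0:
        return "sedentary"
    return "high activity" if minutes_per_week >= 150 else "low activity"

def projected_score(start: float, group: str, years: float) -> float:
    """Project a test score forward using the reported average rates."""
    return start - RATES[group] * years

print(activity_level(120))                                # low activity
print(f"{projected_score(95.0, 'sedentary', 7.0):.2f}")   # 95 - 0.62*7 = 90.66
```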

The results indicate that overall activity level is not as important as a long-term commitment to being active, according to Barnes. “If you follow the Surgeon General’s guidelines of exercising 30 minutes a day, five days a week some of the time but not all of the time, you’re probably OK as long as you resume being active, preferably as soon as possible,” she says. “The fact that people in the fluctuating activity group ranked with the increasingly active and consistently active groups was something of a surprise to us, albeit a pleasant one.”

Barnes says that the physical benefits of exercise and activity are well known, and notes a growing body of evidence showing that physical activity also improves mental function. "However, until now, little was known about the impact of changes in physical activity levels on rates of cognitive decline," she says. "This tells us that it's important to maintain or increase your activity level in order to better sustain cognitive function as you age."

(Photo: UCSF)

University of California, San Francisco

HUMAN SPERM CREATED FROM EMBRYONIC STEM CELLS

Human sperm have been created using embryonic stem cells for the first time in a scientific development which will lead researchers to a better understanding of the causes of infertility.

Researchers led by Professor Karim Nayernia at Newcastle University and the NorthEast England Stem Cell Institute (NESCI) have developed a new technique which has made the creation of human sperm possible in the laboratory.

The work is published (8th July 2009) in the academic journal Stem Cells and Development.

The NorthEast England Stem Cell Institute (NESCI) is a collaboration between Newcastle and Durham Universities, Newcastle NHS Foundation Trust and other partners.

Professor Nayernia says: “This is an important development as it will allow researchers to study in detail how sperm forms and lead to a better understanding of infertility in men – why it happens and what is causing it. This understanding could help us develop new ways to help couples suffering infertility so they can have a child which is genetically their own.”

“It will also allow scientists to study how cells involved in reproduction are affected by toxins, for example, why young boys with leukaemia who undergo chemotherapy can become infertile for life – and possibly lead us to a solution.”

The team also believe that studying the process of forming sperm could lead to a better understanding of how genetic diseases are passed on.

In the technique developed at Newcastle, stem cells with XY chromosomes (male) were developed into germline stem cells, which were then prompted to complete meiosis - cell division with halving of the chromosome set. These were shown to produce fully mature sperm, known scientifically as In Vitro Derived sperm (IVD sperm).

In contrast, stem cells with XX chromosomes (female) were prompted to form early-stage sperm cells, spermatogonia, but did not progress further. This demonstrates to researchers that the genes on a Y chromosome are essential for meiosis and for sperm maturation.

The IVD sperm will not and cannot be used for fertility treatment. As well as being prohibited by UK law, the research team say fertilization of human eggs and implantation of embryos would hold no scientific merit for them as they want to study the process as a model for research.

“While we can understand that some people may have concerns, this does not mean that humans can be produced ‘in a dish’ and we have no intention of doing this. This work is a way of investigating why some people are infertile and the reasons behind it. If we have a better understanding of what’s going on it could lead to new ways of treating infertility,” adds Professor Nayernia.

The Newcastle University team have developed a method for establishing early stage sperm from human embryonic stem cells in the laboratory.

The embryonic stem cells were cultured in a new medium containing a vitamin A derivative (retinoic acid), in a new technique established by the team. With this treatment, the cells differentiated into germline stem cells.

These expressed a protein which was stained with a green fluorescent marker, and they were separated out by fluorescence-activated cell sorting (FACS) using a laser.

After further differentiation, these in vitro derived germline stem cells expressed markers which are specific to primordial germ cells, spermatogonial stem cells, meiotic (spermatocytes) and post meiotic germ cells (spermatids and sperm).

These results indicated maturation of the primordial germ cells to haploid male gametes – called IVD sperm – characterised by containing half a chromosome set (23 chromosomes).

(Photo: Newcastle U.)

Newcastle University

NEW THEORY ON WHY MALE, FEMALE LEMURS SAME SIZE

When it comes to investigating mysteries, Sherlock Holmes has nothing on Rice University biologist Amy Dunham. In a newly published paper, Dunham offers a new theory for one of primatology's long-standing mysteries: Why are male and female lemurs the same size?

In most primate species, males have evolved to be much larger than females. Size is an advantage for males that guard females to keep other males from mating with them, and evolutionary biologists have long wondered why lemurs evolved differently. Some theories have suggested that environment played a role or that lemur social development was altered due to the extinction of predatory birds.

"Scientifically, this is quite a big question that researchers have debated for over 20 years," said Dunham, assistant professor of ecology and evolutionary biology. "I actually started doing research on lemurs as an undergraduate, working in Ranomafana (National Park in Madagascar), and the question about size monomorphism has bugged me since then."

In a paper featured on the cover of this month's Journal of Evolutionary Biology, Dunham offers one of the first new theories on lemur monomorphism in more than a decade.

After an exhaustive review of the observational work done on lemurs, Dunham came to the conclusion that male lemurs do guard their mates, just like other primates. But unlike gorillas and other primates that fight for mating rights with females, male lemurs have evolved to passively guard their mates.

They do this by depositing a solid plug inside the female's reproductive tract just as they finish mating. The plug is deposited as a liquid protein but quickly hardens and stays in place for a day or two. Since many female lemurs are sexually responsive to males for only one day out of the entire year, the plug serves the purpose of preventing other males from mating with the female, while also freeing the male to mate with other females during the brief time they are available.

"If the female has a short receptivity period, as most lemurs do, then we hypothesize that this is likely to be an advantageous strategy," said Dunham, who co-authored the paper with Rice evolutionary biologist Volker Rudolf.

To test their hypothesis, Dunham and Rudolf examined 62 primate species and found that copulatory plugs were most likely to occur in species where female sexual receptivity was very brief and where males and females were the same size. This was true both for lemur species and for a few other species, like South American squirrel monkeys.

"Our idea needs further testing because it's new, but it's more parsimonious than some of the old theories, and we're very excited about looking into it further," Dunham said. "We've made some explicit predictions about the conditions where this strategy should be favored, so there are plenty of ways it can be tested."

Dunham said she hopes to travel to Madagascar within the next year to begin gathering data for a new project that will examine the impacts of climate change on lemur populations.

Lemurs evolved on Madagascar in isolation from other primates for 65 million years, and they are well-known for having odd traits not found elsewhere among primates. For example, some lemurs hibernate, storing fat in their tails, and all have toothcombs -- teeth that are perfectly shaped for grooming. Lemurs also differ from other primates in another key respect, one that has stymied primatologists for years: the females are usually the dominant sex.

Dunham's investigations into the long-standing mystery of female dominance among lemurs led her to put forward another important theory last year. Published in the journal Animal Behaviour, the theory suggests that female lemurs tend to dominate males because the females do all of the work of rearing the young and therefore have more will to fight and win.

"Game theory predicts that when the fighting abilities of two contestants are comparable, the outcome will depend upon the value that each contestant places on the resources they are fighting over," she said. "In this case, the females clearly have more at stake, but the only reason the females are in a position to compete for dominance is because they're roughly the same size and strength as the males."
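
That prediction can be made concrete with a toy asymmetric hawk-dove game, sketched in Python below. The payoff values and the best-response loop are illustrative assumptions, not numbers from Dunham and Rudolf's analysis:

    # Toy asymmetric hawk-dove contest: equal fighting ability (shared
    # injury cost C) but unequal resource valuations. All numbers invented.
    def payoff(v_own, own, other, c=60.0):
        """Payoff to a contestant who values the resource at v_own."""
        if own == "hawk" and other == "hawk":
            return (v_own - c) / 2.0  # escalated fight: win half the time, risk injury
        if own == "hawk" and other == "dove":
            return v_own              # opponent retreats
        if own == "dove" and other == "hawk":
            return 0.0                # retreat empty-handed
        return v_own / 2.0            # both display and share

    def best_response(v_own, other):
        return max(("hawk", "dove"), key=lambda s: payoff(v_own, s, other))

    v_female, v_male = 100.0, 40.0    # the female has more at stake
    female, male = "dove", "dove"
    for _ in range(10):               # iterate best responses to a fixed point
        female, male = best_response(v_female, male), best_response(v_male, female)
    print(female, male)  # -> hawk dove

With equal size and strength but a higher valuation, the female keeps escalating while the male's best response is to back down, matching the intuition in the quote.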

Rice University

GLOBAL WARMING: OUR BEST GUESS IS LIKELY WRONG

0 comentarios
No one knows exactly how much Earth's climate will warm due to carbon emissions, but a new study this week suggests scientists' best predictions about global warming might be incorrect.

The study, published online today in Nature Geoscience, found that climate models explain only about half of the heating that occurred during a well-documented period of rapid global warming in Earth's ancient past. It analyzes published records from a period of rapid climatic warming about 55 million years ago known as the Palaeocene-Eocene thermal maximum, or PETM.

"In a nutshell, theoretical models cannot explain what we observe in the geological record," said oceanographer Gerald Dickens, a co-author of the study and professor of Earth science at Rice University. "There appears to be something fundamentally wrong with the way temperature and carbon are linked in climate models."

During the PETM, for reasons that are still unknown, the amount of carbon in Earth's atmosphere rose rapidly. For this reason, the PETM, which has been identified in hundreds of sediment core samples worldwide, is probably the best ancient climate analogue for present-day Earth.

In addition to rapidly rising levels of atmospheric carbon, global surface temperatures rose dramatically during the PETM. Average temperatures worldwide rose by about 7 degrees Celsius -- about 13 degrees Fahrenheit -- in the relatively short geological span of about 10,000 years.

Many of the findings come from studies of core samples drilled from the deep seafloor over the past two decades. When oceanographers study these samples, they can see changes in the carbon cycle during the PETM.

"You go along a core and everything's the same, the same, the same, and then suddenly you pass this time line and the carbon chemistry is completely different," Dickens said. "This has been documented time and again at sites all over the world."

Based on findings related to oceanic acidity levels during the PETM and on calculations about the cycling of carbon among the oceans, air, plants and soil, Dickens and co-authors Richard Zeebe of the University of Hawaii and James Zachos of the University of California-Santa Cruz determined that the level of carbon dioxide in the atmosphere increased by about 70 percent during the PETM.

That's significant because it does not represent a doubling of atmospheric carbon dioxide. Since the start of the industrial revolution, carbon dioxide levels are believed to have risen by about one-third, largely due to the burning of fossil fuels. If present rates of fossil-fuel consumption continue, the doubling of carbon dioxide from fossil fuels will occur sometime within the next century or two.

Doubling of atmospheric carbon dioxide is an oft-talked-about threshold, and today's climate models include accepted values for the climate's sensitivity to doubling. Using these accepted values and the PETM carbon data, the researchers found that the models could only explain about half of the warming that Earth experienced 55 million years ago.
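
A rough back-of-the-envelope version of that comparison can be sketched in Python. The sensitivity figure of about 3 degrees Celsius per doubling is a commonly cited illustrative value, not a number taken from the paper, and warming from carbon dioxide alone is approximated here as scaling with the logarithm of the concentration ratio:

    import math

    # Hedged sketch of the mismatch described in the study. The ~3 C per
    # doubling sensitivity is an illustrative, commonly cited value.
    sensitivity_per_doubling = 3.0             # degrees C per CO2 doubling
    co2_increase = 0.70                        # ~70% PETM rise inferred by the authors
    doublings = math.log2(1.0 + co2_increase)  # ~0.77 of a doubling
    expected = sensitivity_per_doubling * doublings
    print(f"{expected:.1f} C expected from CO2 alone vs ~7 C observed")
    # -> about 2.3 C, well short of the ~7 C recorded in PETM sediments

Even at the high end of accepted sensitivity values, the carbon dioxide rise accounts for only about half of the 7-degree warming, which is precisely the gap the study highlights.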

The conclusion, Dickens said, is that something other than carbon dioxide caused much of the heating during the PETM. "Some feedback loop or other processes that aren't accounted for in these models -- the same ones used by the IPCC for current best estimates of 21st Century warming -- caused a substantial portion of the warming that occurred during the PETM."

Rice University

ROBOT FISH TO HELP UNDERSTAND HOW REAL FISH SWIM AGAINST THE FLOW

0 comentarios

A major EU grant worth €1.8M has been awarded to a consortium of five European research institutions including the University of Bath to build a robot that will help researchers understand how fish can swim upstream.

The consortium, led by the Tallinn University of Technology (TUT), with partners Riga Technical University (RTU), Italian Institute of Technology (IIT) and the Universities of Verona (UV) and Bath (UB), will be using biology to inspire the design of a swimming robot that can react to changes in current or flow, such as a fish might encounter in a fast-flowing stream or near the seashore.

The robot could be used to film and study the diverse marine life near the seashore, where conventional propeller-driven submersible robots have difficulty manoeuvring due to the shallow water, kelp, and currents created by waves.

What makes the project unique is that the researchers will be trying to mimic the sense organ found in fish, called the lateral line, which allows the fish to detect the flow of water around it and react to it.

The robot and its propulsion system will be designed by TUT in Estonia and RTU in Latvia; researchers at IIT in Lecce, Italy, will be developing a sensor to mimic the fish’s lateral line that will detect the flow of water over the robot’s body.

The fish’s complex nervous system will be emulated by computer software, developed by the University of Verona, which will allow the robot to interpret changes in the surrounding flow and adjust its swimming behaviour to compensate.
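
In spirit, this is a sensor-driven feedback loop. The Python sketch below is a minimal, hypothetical stand-in (the sensor layout, gains and proportional control rule are all invented; the FILOSE software will be far more sophisticated):

    # Minimal lateral-line-style feedback sketch; all names and gains invented.
    from dataclasses import dataclass

    @dataclass
    class LateralLine:
        left: list    # pressure readings along the left flank
        right: list   # pressure readings along the right flank

    def control_step(sense, k_turn=0.5, k_speed=1.2):
        """Return (tail_offset, beat_frequency) for one sensor frame."""
        mean_left = sum(sense.left) / len(sense.left)
        mean_right = sum(sense.right) / len(sense.right)
        # Steer toward the higher-pressure (upstream) side to hold station.
        tail_offset = k_turn * (mean_left - mean_right)
        # Beat the tail faster as the overall flow strengthens.
        flow_strength = (mean_left + mean_right) / 2.0
        return tail_offset, k_speed * flow_strength

    frame = LateralLine(left=[1.8, 2.1, 2.0], right=[1.2, 1.1, 1.3])
    print(control_step(frame))  # -> steers left and raises the beat rate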

A second team from the IIT, based in Genoa, will be designing the computer hardware and electronics to interpret the lateral line information and generally control the robot.

The researchers from the Ocean Technologies Lab at Bath in the University’s Department of Mechanical Engineering will be leading the fish biology for the project, looking at how fish respond to changes in flow.

Dr William Megill, Lecturer in Biomimetics at the University of Bath, explained: “Currently, most aquatic robots can’t manoeuvre very well in the shallow water near the shore because they just get smashed against the rocks by the force of the waves.

“However, even in a tsunami, fish manage to sense and swim against the current so that they stay in the water, rather than ending up on the beach.

“So this project is interesting on two levels - firstly we want to understand more about how the fish manages to react to changes in current, and secondly we want to create a robot that mimics this artificially.”

When the robot hits the water in a few years’ time, it will provide a tool to safely study and monitor the wave-swept nearshore environment, which will find uses in biological research, demining activities, pollution control, and general monitoring of the world’s most productive ecosystems.

The FILOSE (Robotic FIsh LOcomotion and SEnsing) project is financed by the European Union’s Seventh Framework Programme (FP7).

(Photo: Mike Johnston)

University of Bath

SCIENTISTS DISCOVER LIGHT FORCE WITH “PUSH” POWER

0 comentarios

A team of Yale University researchers has discovered a "repulsive" light force that can be used to manipulate components on silicon microchips, meaning future nanodevices could be controlled by light rather than electricity.

The team previously discovered an "attractive" force of light and showed how it could be manipulated to move components in semiconducting micro- and nano-electromechanical systems - tiny mechanical switches on a chip. The scientists have now uncovered a complementary repulsive force. Researchers had theorized the existence of both the attractive and repulsive forces since 2005, but the latter had remained unproven until now. The team, led by Hong Tang, assistant professor at Yale's School of Engineering & Applied Science, reports its findings in the July 13 advance online publication of Nature Photonics.

"This completes the picture," Tang said. "We've shown that this is indeed a bipolar light force with both an attractive and repulsive component."

The attractive and repulsive light forces Tang's team discovered are separate from the force created by light's radiation pressure, which pushes against an object as light shines on it. Instead, they push out or pull in sideways from the direction the light travels.

Previously, the engineers used the attractive force they discovered to move components on the silicon chip in one direction, such as pulling on a nanoscale switch to open it, but were unable to push it in the opposite direction.

Using both forces means they can now have complete control and can manipulate components in both directions. "We've demonstrated that these are tunable forces we can engineer," Tang said.

In order to create the repulsive force, or the "push," on a silicon chip, the team split a beam of infrared light into two separate beams and forced each one to travel a different length of silicon nanowire, called a waveguide. As a result, the two light beams became out of phase with one another, creating a repulsive force with an intensity that can be controlled - the more out of phase the two light beams, the stronger the force.
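
A toy numerical picture of that phase dependence is sketched below in Python; the cosine form and unit magnitude are an illustrative guess at the qualitative behaviour described, not the device physics in the paper:

    import math

    # Toy model of a bipolar guided-light force: attractive when the beams
    # are in phase, repulsive when fully out of phase. Illustrative only.
    def optical_force(phase_diff, f_max=1.0):
        """Negative = attractive, positive = repulsive (arbitrary units)."""
        return -f_max * math.cos(phase_diff)

    for deg in (0, 60, 120, 180):
        f = optical_force(math.radians(deg))
        kind = "attractive" if f < 0 else "repulsive"
        print(f"{deg:3d} deg: {f:+.2f} ({kind})")
    # ->   0 deg: -1.00 (attractive)
    #      60 deg: -0.50 (attractive)
    #     120 deg: +0.50 (repulsive)
    #     180 deg: +1.00 (repulsive)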

"We can control how the light beams interact," said Mo Li, a postdoctoral associate in electrical engineering at Yale and lead author of the paper. "This is not possible in free space-it is only possible when light is confined in the nanoscale waveguides that are placed so close to each other on the chip."

"The light force is intriguing because it works in the opposite way as charged objects," said Wolfram Pernice, another postdoctoral fellow in Tang's group. "Opposite charges attract each other, whereas out-of-phase light beams repel each other in this case."

These light forces may one day control telecommunications devices that would require far less power but would be much faster than today's conventional counterparts, Tang said. An added benefit of using light rather than electricity is that it can be routed through a circuit with almost no interference in signal, and it eliminates the need to lay down large numbers of electrical wires.

(Photo: Hong Tang/Yale University)

Yale University

HOW NOISE AND NERVOUS SYSTEM GET IN WAY OF READING SKILLS

0 comentarios

A child’s brain has to work overtime in a noisy classroom to do its typical but very important job of distinguishing sounds whose subtle differences are key to success with language and reading.

But that simply is too much to ask of the nervous system of a subset of poor readers whose hearing is fine, but whose brains have trouble differentiating the “ba,” “da” and “ga” sounds in a noisy environment, according to a new Northwestern University study.

“The ‘b,’ ‘d’ and ‘g’ consonants have rapidly changing acoustic information that the nervous system has to resolve to eventually match up sounds with letters on the page,” said Nina Kraus, Hugh Knowles Professor of Communication Sciences and Neurobiology and director of Northwestern's Auditory Neuroscience Laboratory, where the work was performed.

In other words, the brain’s unconscious faulty interpretation of sounds makes a big difference in how words ultimately will be read. “What your ear hears and what your brain interprets are not the same thing,” Kraus stressed.

The Northwestern study is the first to demonstrate an unambiguous relationship between reading ability and the neural encoding of speech sounds that previous work has shown to present phonological challenges for poor readers.

The research offers an unparalleled look at how noise affects the nervous system’s transcription of three little sounds that mean so much to literacy.

The online version of the study was published July 13 by the Proceedings of the National Academy of Sciences (PNAS) (http://www.pnas.org/papbyrecent.shtml).

The new Northwestern study, like much of the research that comes out of the Kraus lab, focuses on what is happening in the brainstem, an evolutionarily ancient part of the brain that scientists in the not-too-distant past believed simply relayed sensory information from the ear to the cortex.

As such, much of the earlier research relating brain transcription errors to poor reading has focused on the cortex -- associated with high-level functions and cognitive processing.

Focusing earlier in the sensory system, the study demonstrates that technology developed during the last decade in the Kraus lab now offers a neural metric sensitive enough to pick up how the nervous system represents differences in acoustic sounds in individual subjects rather than, as in cortical-response studies, in groups of people. Importantly, this metric reflects the negative influence of background noise on sound encoding in the brain.

“There are numerous reasons for reading problems or for difficulty hearing speech in noisy situations, and we now have a metric that is practically applicable for measuring sound transcription deficits in individual children,” said Kraus, the senior author of the study. “Auditory training and reducing background noise in classrooms, our research suggests, may provide significant benefit to poor readers.”

For the study, electrodes were attached to the scalps of children with good and poor speech-in-noise perception skills. Sounds were delivered through earphones to measure the nervous system’s ability to distinguish between “ba,” “da” and “ga.” In another part of the study, sentences were presented in increasingly noisy environments, and children were asked to repeat what they heard.

“In essence, the kids were called upon to do what they would do in a classroom, which is to try to understand what the kid next to them is saying while there is a cacophony of sounds, a rustling of papers, a scraping of chairs,” Kraus said.

In a typical neural system there is a clear distinction in how “ba,” “da” and “ga” are represented. The information is more accurately transcribed in good readers and children who are good at extracting speech presented in background noise.

“So if a poor reader is having difficulty making sound-to-meaning associations with the ‘ba,’ ‘da’ and ‘ga’ speech sounds, it will show up in the objective measure we used in our study,” Kraus said.

Reflecting the interaction of cognitive and sensory processes, the brainstem response is not voluntary.

“The brainstem response is just what the brain does based on our auditory experience throughout our lives, but especially during development,” Kraus said. “The way the brain responds to sound will reflect what language you speak, whether you’ve had musical experience and how you have used sounds.”

The Auditory Neuroscience Lab has been a frontrunner in research that has helped establish the relationship between sound encoding in the brainstem and how this process is affected by an individual’s experience throughout the lifespan. In related research with significant implications, recent studies from the Kraus lab show that the process of hearing speech in noise is enhanced in musicians.

“The very transcription processes that are deficient in poor readers are enhanced in people with musical experience,” Kraus said. “It makes sense for training programs for poor readers to involve music as well as speech sounds.”

Northwestern University

Tuesday, July 28, 2009

NIH LAUNCHES THE HUMAN CONNECTOME PROJECT TO UNRAVEL THE BRAIN’S CONNECTIONS

0 comentarios
The National Institutes of Health Blueprint for Neuroscience Research is launching a $30 million project that will use cutting-edge brain imaging technologies to map the circuitry of the healthy adult human brain. By systematically collecting brain imaging data from hundreds of subjects, the Human Connectome Project (HCP) will yield insight into how brain connections underlie brain function, and will open up new lines of inquiry for human neuroscience.

Investigators have been invited to submit detailed proposals to carry out the HCP, which will be funded at up to $6 million per year for five years. The HCP is the first of three Blueprint Grand Challenges, projects that address major questions and issues in neuroscience research.

The Blueprint Grand Challenges are intended to promote major leaps in the understanding of brain function, and in approaches for treating brain disorders. The three Blueprint Grand Challenges to be launched in 2009 and 2010 address:

-The connectivity of the adult, human brain
-Targeted drug development for neurological diseases
-The neural basis of chronic pain disorders

"The HCP is truly a grand and critical challenge: to map the wiring diagram of the entire, living human brain. Mapping the circuits and linking these circuits to the full spectrum of brain function in health and disease is an old challenge but one that can finally be addressed rigorously by combining powerful, emerging technologies," says Thomas Insel, M.D., director of the National Institute of Mental Health (NIMH), which is part of the NIH Blueprint.

Scientists have studied the relationship between the structure and function of the human brain since the 1800s. Some parts of the brain serve basic functions such as movement, sensation, emotion, learning and memory. Others are more important for uniquely human functions such as abstract thinking. The connections between brain regions are important for shaping and coordinating these functions, but scientists know little about how different parts of the human brain connect.

"Neuroscientists have only a piecemeal understanding of brain connectivity. If we knew more about the connections within the brain — and especially their susceptibility to change — we would know more about brain dysfunction in aging, mental health disorders, addiction and neurological disease," says Story Landis, Ph.D., director of the National Institute of Neurological Disorders and Stroke (NINDS), also part of the NIH Blueprint.

For example, there is evidence that the growth of abnormal brain connections during early life contributes to autism and schizophrenia. Changes in connectivity also appear to occur when neurons degenerate, either as a consequence of normal aging or of diseases such as Alzheimer’s.

In addition to brain imaging, the HCP will involve collection of DNA samples, demographic information and behavioral data from the subjects. Together, these data could hint at how brain connectivity is influenced by genetics and the environment, and in turn, how individual differences in brain connectivity relate to individual differences in behavior. Primarily, however, the data will serve as a baseline for future studies. These data will be freely available to the research community.

The complexity of the brain and a lack of adequate imaging technology have hampered past research on human brain connectivity. The brain is estimated to contain more than 100 billion neurons that form trillions of connections with each other. Neurons can connect across distant regions of the brain by extending long, slender projections called axons — but the trajectories that axons take within the human brain are almost entirely uncharted.

In the HCP, researchers will optimize and combine state-of-the-art brain imaging technologies to probe axonal pathways and other brain connections. In recent years, sophisticated versions of magnetic resonance imaging (MRI) have emerged that are capable of looking beyond the brain’s gross anatomy to find functional connections. Functional MRI (fMRI), for example, uses changes in blood flow and oxygen consumption within the brain as markers for neuronal activity, and can highlight the brain circuits that become active during different behaviors. Three imaging techniques are suggested, but are not required, for carrying out the HCP:

-High angular resolution diffusion imaging with magnetic resonance (HARDI), which detects the diffusion of water along fibrous tissue, and can be used to visualize axon bundles.
-Resting state fMRI (R-fMRI), which detects fluctuations in brain activity while a person is at rest, and can be used to look for coordinated networks within the brain.
-Electroencephalography (EEG) and magnetoencephalography (MEG) combined with fMRI (E/M fMRI), which adds information about the brain’s electrical activity to the fMRI signal. In this procedure, the person performs a task so that the brain regions associated with that task become active.

Since this is the first time that researchers will combine these brain imaging technologies to systematically map the brain’s connections, the HCP will support development of new data models, informatics and analytic tools to help researchers make the most of the data. Funds will be provided for building an on-line platform to disseminate HCP data and tools, and for engaging and educating the research community about how to use these data and tools.

"Human connectomics has been gaining momentum in the research community for a few years," says Michael Huerta, Ph.D., associate director of NIMH and the lead NIH contact for the HCP. "The data, the imaging tools and the analytical tools produced through the HCP will play a major role in launching connectomics as a field."

The field of neuroscience emerged in the late 19th century, when scientists observed individual brain cells for the first time. Since then, researchers have made breathtaking progress in understanding the anatomy, cell biology, physiology and chemistry of the brain in both health and disease. Yet many fundamental questions remain unanswered, including how brain function translates into mental function and why brain function declines with age. Advances in neuroimaging, genomics, computational neuroscience and engineering have put us on the brink of another great era in neuroscience, when we can expect to make unprecedented discoveries regarding normal brain activity, disorders of the brain and our very sense of self.

National Institutes of Health

NOAA SCIENTISTS FIND TSUNAMI SHADOW VISIBLE FROM SPACE

0 comentarios

For the first time, NOAA scientists have demonstrated that tsunamis in the open ocean can change sea surface texture in a way that can be measured by satellite-borne radars. The finding could one day help save lives through improved detection and forecasting of tsunami intensity and direction at the ocean surface.

“We’ve found that roughness of the surface water provides a good measure of the true strength of the tsunami along its entire leading edge. This is the first time that we can see tsunami propagation in this way across the open ocean,” said lead author Oleg Godin of NOAA’s Earth System Research Laboratory and the Cooperative Institute for Research in Environmental Sciences, in Boulder, Colo.

Large tsunamis crossing the open ocean stir up and darken the surface waters along the leading edge of the wave, according to the study. The rougher water forms a long, shadow-like strip parallel to the wave and proportional to the strength of the tsunami. That shadow can be measured by orbiting radars and may one day help scientists improve early warning systems. The research is published online this week in the journal Natural Hazards and Earth System Sciences.

The new research challenges the traditional belief that tsunamis are too subtle in the open ocean to be seen at the surface. The findings confirm a theory, developed by Godin and published in 2002-05, that tsunamis in the deep ocean can be detected remotely through changes in surface roughness.

In 1994, a tsunami shadow was captured by video from shore moments before the wave struck Hawaii. That observation and earlier written documentation of a shadow that accompanied a deadly tsunami on April 1, 1946, inspired Godin to develop his theory. He tested the theory during the deadly December 26, 2004, Indian Ocean tsunami, the result of the Sumatra-Andaman earthquake.

Godin and colleagues analyzed altimeter measurements of the 2004 tsunami from NASA’s Jason-1 satellite. The data revealed clear evidence of an increased surface roughness along the leading edge of the tsunami as it passed across the Indian Ocean between two and six degrees south latitude.

Tsunamis can be detected in several ways. One detection method uses a buoy system that warns coastal communities in the United States of an approaching tsunami. NOAA’s Deep-ocean Assessment and Reporting of Tsunamis (DART) early warning system uses sensors on the ocean floor to measure changes in pressure at each location. The DART network of 39 stations extends around the perimeter of the Pacific Ocean and along the western edge of the North Atlantic Ocean and Gulf of Mexico. The technology provides accurate, real-time information on the amplitude, over time, of an approaching tsunami. NOAA's tsunami warning centers then use this information to forecast the tsunami's impact on coastlines.

A second method uses space-borne altimeters to detect tsunamis by measuring small changes in sea surface height. Only a handful of these instruments are in orbit and the observations are limited to points along a line.

The new study presents a third way to detect tsunamis — by changes in the texture of the surface water across a wide span of the open ocean.

Godin’s research confirmed his theory that a tsunami wave roughens the surface water through air-sea interaction. First the leading edge of the tsunami wave stirs up the surface winds. Those same winds, which become more chaotic than the wave itself, then churn the surface waters along the slope of the wave.

Because rough water is darker than smooth water, a contrast forms between the dark, rough water of the wave and the bright, smooth water on either side of it. Common scientific instruments, called microwave radars and radiometers, are able to detect this contrast, known as a tsunami shadow.
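
In the simplest terms, detection means finding a contiguous band in a brightness profile that is noticeably darker than its surroundings. The Python sketch below is a hypothetical illustration (the profile values and the 15 percent threshold are invented, and real radar processing is far more involved):

    # Toy "tsunami shadow" detector: find the first band darker than the
    # background mean by a given fraction. All values are invented.
    def find_dark_band(profile, drop=0.15):
        """Return (start, end) indices of the first dark band, or None."""
        background = sum(profile) / len(profile)
        cutoff = background * (1.0 - drop)
        start = None
        for i, brightness in enumerate(profile):
            if brightness < cutoff and start is None:
                start = i
            elif brightness >= cutoff and start is not None:
                return (start, i)
        return (start, len(profile)) if start is not None else None

    # Bright, smooth sea surface with a rough, dark strip in the middle.
    profile = [1.00, 1.02, 0.99, 0.70, 0.68, 0.71, 1.01, 1.00, 0.98]
    print(find_dark_band(profile))  # -> (3, 6)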

When orbiting the Earth, microwave radars and radiometers can observe a band of ocean surface hundreds of kilometers wide and thousands of kilometers long. If programmed correctly to observe sea surface roughness, they could potentially map an entire tsunami, said Godin.

(Photo: NOAA)

National Oceanic and Atmospheric Administration

STEALTHY GENE NETWORK MAKES BRAIN TUMORS FLOURISH

0 comentarios
The brain tumor afflicting Sen. Edward Kennedy – a glioblastoma – is the most aggressive and wily form of brain cancer. It has foiled researchers’ decades-long efforts to thwart its explosive growth in the brain. The lethal tumor – the most common brain tumor in humans – nimbly alters its genes like a quick-change artist to elude treatments to destroy it.

But scientists from Northwestern University Feinberg School of Medicine have discovered the formidable tumor’s soft underbelly. They have identified a network of 31 mutated genes that stealthily work together to create the perfect molecular landscape to allow the tumor to flourish and mushroom to the size of an apple in just a few months.

Northwestern researchers have also identified a new gene, Annexin A7, a vital guard whose job is to halt tumor growth and whose level in the tumor predicts how long a glioblastoma patient will survive. The genetic landscape of glioblastomas eliminates Annexin A7 by wiping out its home base, chromosome 10.

The discoveries help researchers understand the tumor's vulnerabilities and offer new targets for therapies to treat the disease.

"These 31 genes are the kingpins in what you could call an organized crime network of genes that enable the tumor to grow with breathtaking speed," said Markus Bredel, M.D., director of the Northwestern Brain Tumor Institute research program, assistant professor of neurological surgery at the Feinberg School and the principal investigator of the two studies reporting these new findings. "These 31 genes are highly connected to and affect hundreds of other genes involved in this process." Bredel also is a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

The studies are published in the July 15 issue of the Journal of the American Medical Association.

It was no small task for Bredel to identify the kingpins of the network. Glioblastomas are among the most biologically complex cancers, involving changes in thousands of genes. Leukemia, by contrast, involves changes in just a few genes.

Bredel said the way to identify the key players was to determine which genes had the most connections to other genes in the network. "We don't care about the gangster on the street. We wanted to find the big bosses," Bredel said. "If you knock them out, then you have a big effect on all the other genes in the network."
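
In network terms, that means ranking genes by how many connections they have. The minimal Python sketch below illustrates the idea with simple degree counts; the edges are fabricated placeholders (only EGFR and ANXA7, the gene for Annexin A7, are names from the study), and the actual analysis drew on far richer molecular profiles:

    # Hub-finding by connectivity: count each gene's interactions and rank.
    # Edges are invented for illustration.
    from collections import Counter

    interactions = [
        ("EGFR", "ANXA7"), ("EGFR", "GENE_A"), ("EGFR", "GENE_B"),
        ("EGFR", "GENE_C"), ("ANXA7", "GENE_A"), ("GENE_B", "GENE_D"),
    ]

    degree = Counter()
    for a, b in interactions:
        degree[a] += 1
        degree[b] += 1

    # The most-connected genes are the candidate "big bosses."
    for gene, links in degree.most_common(3):
        print(gene, links)
    # -> EGFR 4, then ANXA7 and GENE_A with 2 each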

To accomplish that, Bredel and colleagues looked at the molecular levels and genetic profiles of more than 500 brain tumors of patients from around the country as well as the clinical profiles of the patients. Bredel's first study reports on the 31 key mutated genes he discovered that comprise the tumor’s primary network. These genes represent a recurrent pattern of the most important mutations in the tumor.

"If many of those genes were mutated, the more aggressive the tumor and the less time the patient would survive," Bredel said. These are called hub genes because they are at the hub of all the mutated gene interaction.

In the second study, Bredel reports on the interaction of two of the 31 genes that are most frequently and concurrently affected by genetic alterations. One of those genes is EGFR (epidermal growth factor receptor), a well-known player in many cancers and known as an oncogene. EGFR has physiological importance in normal development. In nearly half of the glioblastoma patients, EGFR is mutated and abnormally activated, as if its dial is cranked permanently to "high."

Bredel discovered that the other gene, Annexin A7, is a vital guard whose job is to halt tumor growth by regulating the EGFR gene. Bredel found Annexin A7 was lost or diminished in many of the patients' malignant brain tumors. The reason was its home base -- chromosome 10 -- had been wiped out in about 75% of the tumors.

The study showed the presence and quantity of Annexin A7 in the malignant brain tumor accurately predicts how long a glioblastoma patient will survive. The more Annexin A7, the more restrictions on the tumor growth and the longer the survival. Bredel said the identification of Annexin A7 as a major regulator of EGFR provides a biological reason for the frequent parallel loss of chromosome 10 and gain of the EGFR gene in glioblastomas.

In the laboratory, Bredel tested the relationship between EGFR and Annexin A7 to try to understand exactly how they affected the tumor. Working with brain tumor cells, Bredel increased the level of EGFR and, in parallel, knocked out Annexin A7. The tumor cells grew much faster compared to tumor cells in which only one gene was modified.

"It's like a 'buy two, get one free' sale," Bredel said. "The tumor says, 'I'm making a good deal. I'm going to buy those two mutations because it's going to be very rewarding for me'."

With one mutation that increased EGFR and the other that eliminated chromosome 10 and Annexin A7, the tumor was free to proliferate unchecked.

“Understanding the key role of Annexin A7 in malignant brain tumors offers the opportunity for a new therapeutic target,” Bredel said. "The challenge is now that we've established that this gene is important, how can we modulate it through molecular cancer therapy?"

"We want to extend the survival of the patients, transform this hyper-acute disease into a more chronic tumor disease," Bredel said. "Maybe someday, a glioblastoma patient will be able to live for 10 or 20 years after a diagnosis."

Northwestern University

CALTECH, JPL SCIENTISTS SAY THAT MICROBIAL MATS BUILT 3.4-BILLION-YEAR-OLD STROMATOLITES

0 comentarios

Stromatolites are dome- or column-like sedimentary rock structures that are formed in shallow water, layer by layer, over long periods of geologic time. Now, researchers from the California Institute of Technology (Caltech) and the Jet Propulsion Laboratory (JPL) have provided evidence that some of the most ancient stromatolites on our planet were built with the help of communities of equally ancient microorganisms, a finding that "adds unexpected depth to our understanding of the earliest record of life on Earth," notes JPL astrobiologist Abigail Allwood, a visitor in geology at Caltech.

Their research, published in a recent issue of the Proceedings of the National Academy of Sciences (PNAS), might also provide a new avenue for exploration in the search for signs of life on Mars.

"Stromatolites grow by accreting sediment in shallow water," says John Grotzinger, the Fletcher Jones Professor of Geology at Caltech. "They get molded into these wave forms and, over time, the waves turn into discrete columns that propagate upward, like little knobs sticking up."

Geologists have long known that the large majority of the relatively young stromatolites they study—those half a billion years old or so—have a biological origin; they're formed with the help of layers of microbes that grow in a thin film on the seafloor.

How? The microbes' surface is coated in a mucilaginous substance to which sediment particles rolling past get stuck. "It has a strong flypaper effect," says Grotzinger. In addition, the microbes sprout a tangle of filaments that almost seem to grab the particles as they move along.

"The end result," says Grotzinger, "is that wherever the mat is, sediment gets trapped."

Thus it has become accepted that a dark band in a young stromatolite is indicative of organic material, he adds. "It's matter left behind where there once was a mat."

But when you look back 3.45 billion years, to the early Archean period of geologic history, things aren't quite so simple.

"Because stromatolites from this period of time have been around longer, more geologic processing has happened," Grotzinger says. Pushed deeper toward the center of Earth as time went by, these stromatolites were exposed to increasing, unrelenting heat. This is a problem when it comes to examining the stromatolites' potential biological beginnings, he explains, because heat degrades organic matter. "The hydrocarbons are driven off," he says. "What's left behind is a residue of nothing but carbon."

This is why there has been an ongoing debate among geologists as to whether the carbon found in these ancient rocks is diagnostic of life.

Proving the existence of life in younger rocks is fairly simple—all you have to do is extract the organic matter, and show that it came from the microorganisms. But there's no such cut-and-dried method for analyzing the older stromatolites. "When the rocks are old and have been heated up and beaten up," says Grotzinger, "all you have to look at is their texture and morphology."

Which is exactly what Allwood and Grotzinger did with samples gathered at the Strelley Pool stromatolite formation in Western Australia. The samples, says Grotzinger, were "incredibly well preserved." Dark lines of what was potentially organic matter were "clearly associated with the lamination, just like we see in younger rocks. That sort of relationship would be hard to explain without a biological mechanism."

"We already knew from our earlier work that we had an assemblage of stromatolites that was most plausibly interpreted as a microbial reef built by Early Archean microorganisms," adds Allwood, "but direct evidence of actual microorganisms was lacking in these ancient, altered rocks. There were no microfossils, no organic material, not even any of the microtextural hallmarks typically associated with microbially mediated sedimentary rocks."

So Allwood set about trying to find other types of evidence to test the biological hypothesis. To do so, she looked at what she calls the "microscale textures and fabrics in the rocks, patterns of textural variation through the stromatolites and—importantly—organic layers that looked like actual fossilized organic remnants of microbial mats within the stromatolites."

What she saw were "discrete, matlike layers of organic material that contoured the stromatolites from edge to edge, following steep slopes and continuing along low areas without thickening." She also found pieces of microbial mat incorporated into storm deposits, which disproved the idea that the organic material had been introduced into the rock more recently, rather than being laid down with the original sediment. "In addition," Allwood notes, "Raman spectroscopy showed that the organics had been 'cooked' to the same burial temperature as the host rock, again indicating the organics are not young contaminants."

Allwood says she, Grotzinger, and their team have collected enough evidence that it's no longer any "great leap" to accept these stromatolites as biological in origin. "I think the more we dig at these stromatolites, the more evidence we'll find of Early Archean life and the nature of Earth's early ecosystems," she says.

That's no small feat, since it's been difficult to prove that life existed at all that far back in the geologic record. "Recently there has been increasing but still indirect evidence suggesting life existed back then, but direct evidence of microorganisms, at the microscale, remained elusive due to poor preservation of the rocks," Allwood notes. "I think most people probably thought that these Early Archean rocks were too poorly preserved to yield such information."

The implications of the findings don't stop at life on Earth.

"One of my motivations for understanding stromatolites," Allwood says, "is the knowledge that if microbial communities once flourished on Mars, of all the traces they might leave in the rock record for us to discover, stromatolite and microbial reefs are arguably the most easily preserved and readily detected. Moreover, they're particularly likely to form in evaporative, mineral-precipitating settings such as those that have been identified on Mars. But to be able to interpret stromatolitic structures, we need a much more detailed understanding of how they form."

(Photo: Abigail Allwood)

California Institute of Technology
