Thursday, September 16, 2010


Scientists have discovered that a protein that helps make cells stick together also keeps them from dividing excessively, a hallmark of cancer progression. The discovery could lead to new ways to control cancer.

The findings, arising from a collaboration between Aaron Putzke, assistant professor of biology at Hope College in Holland, Mich., and Joel H. Rothman, a biology professor and chair of the Department of Molecular, Cellular and Developmental Biology at UC Santa Barbara, were described in a paper published in the Proceedings of the National Academy of Sciences, a widely cited interdisciplinary scientific journal.

"When we develop from an egg, cells divide many times, generating the vast number of cells present in an adult," Rothman said. "It is not only critical for cells to know when to divide, but when to stop dividing."

Putzke added: "Without this brake on division, cells would keep dividing and we might end up with arms that reach the ground or ears that flap in the wind. Or, even more seriously, with cancer."

It is equally important, according to Rothman, for cells to stick to each other so that they can work together in communities, rather than as free agents with no regard for the others around them. Cancer cells do not stick together properly, allowing them to break free from the community and spread the disease.

Working with a tiny roundworm known as C. elegans, a major experimental model animal in biomedical science, the scientists discovered that a protein called Fer, which acts with other proteins to glue cells together, also prevents them from dividing excessively. When Fer is removed from cells, the researchers found, the cells keep proliferating. Their experiments may mimic what happens in certain human cancers.

"Other studies have shown that Fer protein levels are altered in cancers such as prostate cancer and myeloid leukemia," said Putzke. The Fer protein was well known for its role in sticking cells together, and the obvious conclusion was that the role in cancer progression was related to its function in cellular glue. But Putzke noted that altered Fer levels might actually be associated with tumors for a very different reason. "Our results suggest that Fer may act in cancer not by changing cell adhesion but by allowing cells to divide unchecked."

The research showed that the Fer protein normally acts by constraining a cell signaling system known as the Wnt pathway (pronounced "wint"), which causes cells to multiply. When the Fer protein is absent, the Wnt signal becomes overzealous, overriding the normal brakes on cell division and causing cells to multiply when they shouldn't.

"Cell stickiness and the brake on multiplication of cells are coordinated during normal human development and both are affected in cancer," said Rothman, adding that the results are a reminder that obvious conclusions are not always the correct ones. "A complex set of events goes wrong in cancer cells. It may well be that Fer is an important player in cancer because it acts both in cell adhesion and in stopping cells from dividing."

University of California, Santa Barbara



Ants are usually regarded as the unwanted guests at a picnic. But a recent study of California seed harvester ants (Pogonomyrmex californicus) examining their metabolic rate in relation to colony size may lead to a better appreciation for the social, six-legged insects, whose colonies researchers say provide a theoretical framework for understanding cellular networks.

A team of researchers led by James Waters of Arizona State University in Tempe, Ariz., conducted a series of experiments designed to measure the components of ant metabolism, such as oxygen and carbon dioxide, in individual ants and in colonies of ants. The team studied 13 colonies of seed harvester ants taken from a nearby desert and housed in the university’s research lab. Using flow-through respirometry and factors such as growth rates, patterns of movement, behavior and size, the team measured standard metabolic rates (i.e., energy expenditures) of functioning colonies as well as of individual ants.

The researchers found that the metabolic rate of seed harvester ant colonies could not be predicted simply by summing the metabolic by-products of all individual colony members. In fact, the colony as a whole produced only 75% of the by-products that its members would produce if each ant lived alone. Thus, the colonies’ metabolism was less than the sum of all the individual ants’ metabolisms.

The team also found that the larger the colony, the lower its overall metabolic rate. “Larger colonies consumed less energy per mass than smaller colonies,” said Mr. Waters. “Size affects the scaling of metabolic rate for the whole colony.”

Colony size appeared to influence patterns of behavior and the amount of energy individual ants spent. “In smaller colonies, more ants were moving fast, and there was a more even distribution of fast-moving ants,” said Mr. Waters. “But in larger colonies, there were more ants that moved more slowly, and fewer that were moving really fast.”

That the distribution of individual walking speeds became less uniform as colony size increased suggests that disparities in effort among individuals increased with colony size.

The 0.75 scaling exponent for colony metabolic rate strikes Mr. Waters as important because it indicates that colony metabolism is influenced in a way similar to what most individual organisms experience.

“As creatures go from small to large, their mass-specific metabolic rate decreases. It’s a broad pattern in biology,” he said. “When you graph these patterns, you can see how metabolism decreases as a creature gets bigger, and the exponent is usually near 0.75.”
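The pattern Mr. Waters describes is known in biology as Kleiber's law, and it can be sketched in a few lines of Python. Everything below is illustrative only: the coefficient and masses are arbitrary placeholders, not values from the study, and the roughly one-third ratio in the final line illustrates (but does not reproduce) the "less than the sum of its parts" effect the team measured.

```python
# Illustrative only: a metabolic rate B that scales as B = b0 * M**0.75
# makes the mass-specific rate B/M = b0 * M**-0.25 fall as mass M grows.

def metabolic_rate(mass, b0=1.0, exponent=0.75):
    """Whole-organism (or whole-colony) metabolic rate."""
    return b0 * mass ** exponent

def mass_specific_rate(mass, b0=1.0, exponent=0.75):
    """Metabolic rate per unit of body (or colony) mass."""
    return metabolic_rate(mass, b0, exponent) / mass

# Larger means slower per unit mass:
assert mass_specific_rate(1000.0) < mass_specific_rate(10.0)

# A colony of n unit-mass ants consumes less than n solitary ants would:
n = 100
colony_rate = metabolic_rate(n)               # 100**0.75, about 31.6
sum_of_individuals = n * metabolic_rate(1.0)  # 100 * 1.0
print(colony_rate / sum_of_individuals)       # about 0.32
```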

Yet a colony of ants experienced this decline as though it were a single “super-organism”. Mr. Waters noted that the team isn’t sure why this is so, but he has a few ideas.

“Ants need to stay in contact with each other in a colony, and it’s possible that in larger colonies, certain ants take on the role of a network hub to keep the other ants in the colony more in touch with each other,” he said. “That would relax the demand placed on the other ants.”

He added that a larger size might afford a colony a division of labor not possible in a smaller colony. Individuals in a smaller colony would have to work harder to satisfy basic energy demands.

According to Mr. Waters, because ant colonies behave metabolically like individual organisms, studying how a colony’s size changes its metabolism could offer useful insight for developing theories about medication dosage in humans.

“It’s hard to figure out how size affects metabolic rate in individuals because it’s not easy to change an individual’s size,” he said. “With an ant colony, it’s as easy as adding or removing individual ants.”

This is not to say that ant colonies function like individual humans. Rather, ant colonies could serve as a model for testing theories about the role of networks among cells in human metabolism.

“We’ve got this pattern where the larger an organism is, the slower its metabolism, and we don’t really understand why,” said Mr. Waters. “It’s important to find out because we really don’t have any sort of theoretical basis for deciding the right dose of medication. We can do charts on weight, and we can run tests on animals, but it’s really more alchemy than science.”

(Photo: APS)

The American Physiological Society



In what could one day change the definition of "power plant," researchers at the Technion-Israel Institute of Technology have manipulated the photosynthetic process of plants in a way that may enable the energy produced in the process to be harnessed for later use as electricity.

Published in the Proceedings of the National Academy of Sciences (PNAS) – and called a “must-read” by the Faculty of 1000 Biology service (comprised of more than 2300 of the world's leading scientists) – the achievement is a first step in a process that could some day create true green energy, or in the words of the Faculty of 1000, “the greenest of the green.”

The research team, led by Prof. Gadi Schuster, dean of the Technion Faculty of Biology, and Prof. Noam Adir from the Schulich Faculty of Chemistry, studied a key protein in the process of moving electrons along the photosynthesis production line. In its natural state, this protein extracts electrons from water and moves them through a cell membrane in plants. The membrane then isolates this biological flow of electricity, preventing it from escaping into side processes.

By altering one amino acid out of the hundreds found in the protein from positive to negative, the researchers changed the direction of electron emission so that the energy produced in the process can be harnessed for later use. This modified protein “exports” electrons at a high enough frequency to produce a useful quantity of energy. The positive-to-negative change harms neither the protein’s function nor the plant’s development, making it possible to obtain large amounts of the protein at minimal cost.

The next step, say the researchers, is to engineer a mechanism that will be able to convert the biochemical energy into electricity, as used in everyday processes.

“This will not replace power stations,” says Prof. Schuster. “But in the future, it might supply useable amounts of clean electricity, especially in places with infrastructure problems that traditional electricity cannot reach. We hope to reach the stage in which a few leaves, for example – tobacco leaves – can supply electricity for a number of hours exactly like a photoelectric board.”

Technion graduate students Shirley Larom and Faris Salama also contributed to these research findings, for which the Technion has registered a patent.

(Photo: ATS)

American Technion Society



Computational scientists and geophysicists at the University of Texas at Austin and the California Institute of Technology (Caltech) have developed new computer algorithms that for the first time allow for the simultaneous modeling of the earth's mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, to produce an unprecedented view of plate tectonics and the forces that drive it.

A paper describing the whole-earth model and its underlying algorithms was published in the August 27 issue of the journal Science and also featured on the cover.

The work "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics, director of the Caltech Seismological Laboratory, and a coauthor of the Science paper.

To create the new model, computational scientists at Texas's Institute for Computational Engineering and Sciences (ICES)—a team that included Omar Ghattas, the John A. and Katherine G. Jackson Chair in Computational Geosciences and professor of geological sciences and mechanical engineering, and research associates Georg Stadler and Carsten Burstedde—pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).

Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions. "AMR methods adaptively create finer resolution only where it's needed," explains Ghattas. "This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach.”
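Ghattas's description of AMR can be illustrated with a toy one-dimensional sketch. This is hypothetical code, unrelated to the ICES implementation: cells are split recursively only where an error indicator says the function varies rapidly, so the sharp feature gets a fine grid while smooth regions stay coarse.

```python
# Toy 1-D adaptive mesh refinement: refine only where an error
# indicator (deviation from the linear interpolant) is large.
import math

def refine(a, b, f, tol, depth=0, max_depth=10):
    """Recursively split [a, b] until f is nearly linear on each cell."""
    mid = 0.5 * (a + b)
    # Error indicator: how far f(mid) is from the linear interpolant.
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))
    if err < tol or depth >= max_depth:
        return [a, b]
    left = refine(a, mid, f, tol, depth + 1, max_depth)
    right = refine(mid, b, f, tol, depth + 1, max_depth)
    return left + right[1:]  # merge, dropping the duplicated midpoint

# A function with a sharp front near x = 0.3, smooth elsewhere.
f = lambda x: math.tanh(50 * (x - 0.3))
grid = refine(0.0, 1.0, f, tol=1e-2)
# Grid points cluster near x = 0.3; a uniform grid at the same finest
# resolution (depth 10) would need 2**10 + 1 = 1025 points.
print(len(grid), "adaptive points instead of 1025 uniform points")
```

The same idea, carried out in three dimensions across thousands of processors, is what required the new scalable algorithms described below.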

"The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers," he adds. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that, Burstedde says, "allow for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today."

With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. According to Stadler, the AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the Ranger supercomputer at the National Science Foundation (NSF)-supported Texas Advanced Computing Center.

A key to the model was the incorporation of data on a multitude of scales. "Many natural processes display a multitude of phenomena on a wide range of scales, from small to large," Gurnis explains. For example, at the largest scale—that of the whole earth—the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones. "The individual fault zones play a critical role in how the whole planet works," he says, "and if you can't simulate the fault zones, you can't simulate plate movement"—and, in turn, you can't simulate the dynamics of the whole planet.

In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries. Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity—or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear—with simple changes producing unexpected and complex effects.

"Normally, when you hit a baseball with a bat, the properties of the bat don't change—it won't turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem," says Gurnis. "If the system is too nonlinear, the earth becomes too mushy; if it's not nonlinear enough, plates won't move. We need to hit the 'sweet spot.'"

After crunching through the data for 100,000 hours of processing time per run, the model returned an estimate of the motion of both large tectonic plates and smaller microplates—including their speed and direction. The results were remarkably close to observed plate movements.

In fact, the investigators discovered that anomalous rapid motion of microplates emerged from the global simulations. "In the western Pacific," Gurnis says, "we have some of the most rapid tectonic motions seen anywhere on Earth, in a process called 'trench rollback.' For the first time, we found that these small-scale tectonic motions emerged from the global models, opening a new frontier in geophysics."

One surprising result from the model relates to the energy released from plates in earthquake zones. "It had been thought that the majority of energy associated with plate tectonics is released when plates bend, but it turns out that's much less important than previously thought," Gurnis says. "Instead, we found that much of the energy dissipation occurs in the earth's deep interior. We never saw this when we looked on smaller scales."

(Photo: Georg Stadler, Institute for Computational Engineering & Sciences, UT Austin)




"Please hold absolutely still": This instruction is crucial for patients being examined by magnetic resonance imaging (MRI). It is the only way to obtain clear images for diagnosis. Until now, it has therefore been almost impossible to image moving organs using MRI. Max Planck researchers from Göttingen have now succeeded in significantly reducing the time required for recording images - to just one fiftieth of a second. With this breakthrough, the dynamics of organs and joints can be filmed "live" for the first time: movements of the eye and jaw as well as the bending knee and the beating heart. The new MRI method promises to add important information about diseases of the joints and the heart. In many cases MRI examinations may become easier and more comfortable for patients.

A process that required several minutes until well into the 1980s now takes only a matter of seconds: the recording of cross-sectional images of our body by magnetic resonance imaging (MRI). This was enabled by the FLASH (fast low angle shot) method developed by Göttingen scientists Jens Frahm and Axel Haase at the Max Planck Institute for Biophysical Chemistry. FLASH revolutionised MRI and was largely responsible for its establishment as one of the most important modalities in diagnostic imaging. MRI is completely painless and, moreover, extremely safe. Because the technique works with magnetic fields and radio waves, patients are not subjected to any radiation exposure as is the case with X-rays. At present, however, the procedure is still too slow for the examination of rapidly moving organs and joints. For example, to trace the movement of the heart, the measurements must be synchronised with the electrocardiogram (ECG) while the patient holds their breath. Afterwards, the data from different heartbeats have to be combined into a film.

The researchers working with Jens Frahm, head of the non-profit "Biomedizinische NMR Forschungs GmbH", have now succeeded in further accelerating the image acquisition process. The new MRI method developed by Jens Frahm, Martin Uecker and Shuo Zhang reduces the image acquisition time to one fiftieth of a second (20 milliseconds), making it possible to obtain "live recordings" of moving joints and organs at a previously inaccessible temporal resolution and without artefacts. Filming the dynamics of the jaw during opening and closing of the mouth is just as easy as filming the movements involved in speech production or the rapid beating of the heart. "A real-time film of the heart enables us to directly monitor the pumping of the heart muscle and the resulting blood flow - heartbeat by heartbeat and without the patient having to hold their breath," explains Frahm. The scientists believe that the new method could help to improve the diagnosis of conditions such as coronary heart disease and myocardial insufficiency. Another application involves minimally invasive interventions which, thanks to this discovery, could be carried out in future using MRI instead of X-rays. "However, as was the case with FLASH, we must first learn how to use the real-time MRI possibilities for medical purposes," says Frahm. "New challenges therefore also arise for doctors. The technical progress will have to be ‘translated’ into clinical protocols that provide optimum responses to the relevant medical questions."

To achieve the breakthrough to MRI measurement times of only small fractions of a second, several developments had to be successfully combined with each other. Whilst still relying on the FLASH technique, the scientists used a radial encoding of the spatial information, which renders the images insensitive to movements. Mathematics was then required to further reduce the acquisition times. "Considerably fewer data are recorded than are usually necessary for the calculation of an image. We developed a new mathematical reconstruction technique which enables us to calculate a meaningful image from data which are, in fact, incomplete," explains Frahm. In the most extreme case it is possible to calculate an image of comparable quality out of just five percent of the data required for a normal image - which corresponds to a reduction of the measurement time by a factor of 20. As a result, the Göttingen scientists have accelerated MRI by a factor of 10,000 compared with the mid-1980s.
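The quoted figures are mutually consistent, as a back-of-the-envelope check shows (illustrative arithmetic only, using only numbers stated in the article):

```python
# Back-of-the-envelope check of the quoted speed-up factors.
frame_time_s = 0.02          # one image every 20 ms, i.e. 50 frames per second
undersampling = 0.05         # only 5% of the usual data are acquired
print(1 / undersampling)     # 20.0: the quoted measurement-time reduction

# "Accelerated by a factor of 10000" relative to the mid-1980s:
mid_1980s_scan_s = frame_time_s * 10_000
print(mid_1980s_scan_s / 60)  # about 3.3, i.e. the "several minutes" per image
```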

Although these fast MRI measurements can be easily implemented on today’s MRI devices, something of a bottleneck exists when it comes to the availability of sufficiently powerful computers for image reconstruction. Physicist Martin Uecker explains: "The computational effort required is gigantic. For example, if we examine the heart for only a minute in real time, between 2000 and 3000 images arise from a data volume of two gigabytes." Uecker consequently designed the mathematical process in such a way that it is divided into steps that can be calculated in parallel. These complex calculations are carried out using fast graphical processing units that were originally developed for computer games and three-dimensional visualization. "Our computer system requires about 30 minutes at present to process one minute’s worth of film," says Uecker. Therefore, it will take a while until MRI systems are equipped with computers that will enable the immediate calculation and live presentation of the images during the scan. In order to minimise the time their innovation will take to reach practical application, the Göttingen researchers are working in close cooperation with the company Siemens Healthcare.

(Photo: Frahm)

Max Planck Society



Many engineering disciplines rely on supercomputers to simulate complicated physical phenomena — how cracks form in building materials, for instance, or fluids flow through irregular channels. Now, researchers in MIT’s Department of Mechanical Engineering have developed software that can perform such simulations on an ordinary smart phone. Although the current version of the software is for demonstration purposes, the work could lead to applications that let engineers perform complicated calculations in the field, and even to better control systems for vehicles or robotic systems.

The new software works in cases where the general form of a problem is known in advance, but not the particulars. For instance, says Phuong Huynh, a postdoc who worked on the project, a computer simulation of fluid flow around an obstacle in a pipe could depend on a single parameter: the radius of the obstacle. But for a given value of the parameter, calculating the fluid flow could take an hour on a supercomputer with 500 processing units. The researchers’ new software can provide a very good approximation of the same calculation in a matter of seconds.

“This is a very relevant situation,” says David Knezevic, another postdoc in the department who helped lead the project. “Often in engineering contexts, you know a priori that your problem is parameterized, but you don’t know until you get into the field what parameters you’re interested in.”

Each new problem the researchers’ software is called upon to solve requires its own mathematical model. The models, however, take up very little space in memory: A cell phone could hold thousands of them. The software, which is available for download, comes preloaded with models for nine problems, including heat propagation in objects of several different shapes, fluid flow around a spherical obstacle, and the effects of forces applied to a cracked pillar. As the researchers develop models for new classes of problems, they post them on a server, from which they can be downloaded.

But while the models are small, creating them is a complicated process that does in fact require a supercomputer. “We’re not trying to replace a supercomputer,” Knezevic says. “This is a model of computation that works in conjunction with supercomputing. And the supercomputer is indispensable.”

Knezevic, his fellow postdoc Phuong Huynh, Ford Professor of Engineering Anthony T. Patera, and John Peterson of the Texas Advanced Computing Center describe their approach in a forthcoming issue of the journal Computers and Fluids. Once they have identified a parameterized problem, they use a supercomputer to solve it for somewhere between 10 and 50 different sets of values. Those values, however, are carefully chosen to map out a large space of possible solutions to the problem. The model downloaded to a smart phone finds an approximate solution for a new set of parameters by reference to the precomputed solutions.
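The offline/online split described above can be caricatured in a few lines of Python. This is a hypothetical sketch: the researchers' actual method builds reduced-basis approximations with certified error bounds, not the simple one-parameter interpolation shown here, and `expensive_solve` is a stand-in for a full PDE solve.

```python
# Offline: a supercomputer solves the full problem at chosen parameters.
# Online: a phone approximates new parameters from those snapshots.
import bisect

expensive_solve = lambda p: p ** 2          # stand-in for a costly PDE solve
params = [0.1 * k for k in range(1, 11)]    # 10 precomputed parameter values
snapshots = {p: expensive_solve(p) for p in params}  # "offline" stage

def fast_approx(p):
    """Cheap online stage: linear interpolation between snapshots."""
    i = bisect.bisect_left(params, p)
    i = min(max(i, 1), len(params) - 1)
    p0, p1 = params[i - 1], params[i]
    w = (p - p0) / (p1 - p0)
    return (1 - w) * snapshots[p0] + w * snapshots[p1]

p_new = 0.37
approx, exact = fast_approx(p_new), expensive_solve(p_new)
print(abs(approx - exact))  # small error, with no full solve at query time
```

The expensive solves happen once, offline; each online query costs only a table lookup and a weighted average, which is what makes phone-scale evaluation plausible.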

The key to the system, Knezevic says, is the ability to quantify the degree of error in an approximation of a supercomputing calculation, a subject that Patera has been researching for almost a decade. As the researchers build a problem model, they select parameters that will successively minimize error, according to analytic techniques Patera helped develop. The calculation of error bounds is also a feature of the phone application itself. For each approximate solution of a parameterized problem, the app also displays the margin of error. The user can opt to trade speed of computation for a higher margin of error, but the app can generally get the error under 1 percent in less than a second.

While the researchers’ software can calculate the behavior of a physical system on the basis of its parameters, it could prove even more useful by doing the opposite: calculating the parameters of a physical system on the basis of its behavior. Instead of, say, calculating fluid flow around an obstacle based on the obstacle’s size, the software could calculate the size of the obstacle based on measurements of the fluid flow at the end of a pipe. Ordinarily, that would require several different computations on a supercomputer, trying out several different sets of parameters. But if testing, say, 30 options on a supercomputer would take 30 hours, it might take 30 seconds on a phone. Indeed, the researchers have already developed a second application that calculates such “inverse problems.”

In the same way that a simulation of a physical system describes its behavior on the basis of parametric measurements, control systems, of the type that govern, say, automotive brake systems or autonomous robots, determine devices’ behavior on the basis of sensor measurements. Control-systems researchers spend a great deal of energy trying to come up with practical approximations of complex physics in order to make their systems responsive enough to work in real time. But Knezevic, Huynh and Patera’s approach could make those approximations both more accurate and easier to calculate.

Max Gunzberger, Frances Eppes Eminent Professor of Scientific Computing at Florida State University, says that the MIT researchers’ work has a “cuteness aspect” that has already won it some attention. But “once you get over the cuteness factor,” he says, “if you talk about serious science or serious engineering, there’s a potential there.” Gunzberger points out that while the researchers’ demo concentrates on fluid mechanics, “there’s lots of other problems that their approach can be applied to. They built the structure that they themselves or others can start using to solve problems in different application areas.”

(Photo: David Knezevic and Dinh Bao Phuong Huynh)




Cow belches, a major source of greenhouse gases, could be decreased by an unusual feed supplement developed by a Penn State dairy scientist.

In a series of laboratory experiments and a live animal test, an oregano-based supplement not only decreased methane emissions in dairy cows by 40 percent, but also improved milk production, according to Alexander Hristov, an associate professor of dairy nutrition.

The natural methane-reduction supplement could lead to a cleaner environment and more productive dairy operations.

"Cattle are actually a major producer of methane gas and methane is a significant greenhouse gas," Hristov said. "In fact, worldwide, livestock emits 37 percent of anthropogenic methane."

Anthropogenic methane is methane produced by human activities, such as agriculture.

Compared to carbon dioxide, methane has 23 times the potential to create global warming, Hristov said. The Environmental Protection Agency bases the global warming potential of methane on the gas's absorption of infrared radiation, the spectral location of its absorbing wavelengths and the length of time methane remains in the atmosphere.
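That factor is how emissions are commonly converted to a carbon dioxide-equivalent figure. A minimal sketch, using the 23× value quoted above and a made-up emission mass:

```python
# CO2-equivalent conversion using the warming potential quoted above.
GWP_METHANE = 23  # warming potential of methane relative to CO2

def co2_equivalent(methane_tonnes, gwp=GWP_METHANE):
    """Warming impact of a methane emission, expressed in tonnes of CO2."""
    return methane_tonnes * gwp

# A hypothetical herd emitting 10 tonnes of methane per year:
print(co2_equivalent(10))  # 230 tonnes of CO2-equivalent
```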

Methane production is a natural part of the digestive process of cows and other ruminants, such as bison, sheep and goats. When the cow digests food, bacteria in the rumen, the largest chamber of the four-chambered stomach, break the material down into nutrients in a fermentation process. Two of the byproducts of this fermentation are carbon dioxide and methane.

"Any cut in the methane emissions would be beneficial," Hristov said.

Experiments revealed another benefit of the gas-reducing supplement. It increased daily milk production by nearly three pounds of milk for each cow during the trials. The researcher had anticipated the higher milk productivity from the herd.

"Since methane production is an energy loss for the animal, this isn’t really a surprise," Hristov said. "If you decrease energy loss, the cows can use that energy for other processes, such as making milk."

Hristov said that finding a natural solution for methane reduction in cattle has taken him approximately six years. Natural methane reduction measures are preferable to current treatments, such as feed antibiotics.

Hristov first screened hundreds of essential oils, plants and various compounds in the laboratory before arriving at oregano as a possible solution. During the experiments, oregano consistently reduced methane without demonstrating any negative effects.

Following the laboratory experiments, Hristov conducted an experiment to study the effects of oregano on lactating cows at Penn State's dairy barns. He is currently conducting follow-up animal trials to verify the early findings and to further isolate specific compounds involved in the suppression of methane.

Hristov said that some compounds that are found in oregano, including carvacrol, geraniol and thymol, seem to play a more significant role in methane suppression. Identifying the active compounds is important because pure compounds are easier to produce commercially and more economical for farmers to use.

"If the follow-up trials are successful, we will keep trying to identify the active compounds in oregano to produce purer products," said Hristov.

Hristov has filed a provisional patent for this work.

(Photo: Penn State Department of Dairy and Animal Science)

Penn State



Palaeontologists are forever claiming that their latest fossil discovery will 'rewrite evolutionary history'. Is this just boasting or is our 'knowledge' of evolution so feeble that it changes every time we find a new fossil?

A team of researchers at the University of Bristol decided to find out, with investigations of dinosaur and human evolution. Their study, which is published in Proceedings of the Royal Society B, suggests most fossil discoveries do not make a huge difference, confirming, not contradicting, our understanding of evolutionary history.

This is especially true of the fossil record of human origins from their monkey relatives. Even though early human fossils are immensely rare, and new discoveries make a big splash in the scientific literature and in the media, they sit randomly across the evolutionary tree of early humans. In other words, most discoveries of new fossil species simply fill in gaps in the fossil record that we already knew existed.

As Dr James Tarver, leader of the study, said: “Human fossils are very rare, and they are costly to recover because of the time involved and their often remote locations. Scientists may be pushed by their sponsors, or by news reporters, to exaggerate the importance of their new find and make claims that ‘this new species completely changes our understanding’.”

The story of dinosaur evolution is a bit more complicated. New dinosaur fossils are being found in places around the world where they've never been looked for before, such as China, South America and Australia. These fossils are fundamentally challenging existing ideas about dinosaur evolution, which suggests that there are still many new species of dinosaurs out there in the rocks.

“These are important results,” said Professor Michael Benton, another member of the team. “It might seem negative to say that new finds do, or do not, change our views. However, to find that they don’t means that we may be close to saturation in some areas, meaning we know enough of the fossil record in some cases to have a pretty good understanding of that part of the evolutionary tree.”

Professor Phil Donoghue commented further: “We can use these studies as a way of targeting new expeditions. If dinosaurs are poorly understood from a particular part of the world, or if some other group is altogether incompletely known, that’s where we need to devote greater efforts.”

(Photo: Bristol U.)

University of Bristol



One of the most spectacular migrations on Earth is that of the Christmas Island red crab (Gecarcoidea natalis). Acknowledged as one of the wonders of the natural world, every year millions of the crabs simultaneously embark on a five-kilometre breeding migration. Now, scientists have discovered the key to their remarkable athletic feat.

A three-year project conducted by a team led by the late Professor Steve Morris from the University of Bristol’s School of Biological Sciences in collaboration with Professor Simon Webster from Bangor University, has discovered that hormonal changes play a significant role in enabling the crabs to make their journey.

Lucy Turner, a researcher at the University of Bristol, said: “During the wet season on the island, in November or December, and prompted by the arrival of the monsoon rains, millions of the crabs undertake an arduous breeding migration from their home on the high rainforest plateau to the ocean to reproduce. This is a journey of several kilometres - a long way when you are a relatively small land crab (less than 20 cm long).

“Scientists have long been puzzled by what mechanisms enable the necessary changes to take place in the crabs’ physiology to allow this journey to take place, and how they make such a dramatic switch from hypoactivity to hyperactivity.”

The results of this project show that it is Crustacean Hyperglycaemic Hormone (CHH) that enables the crabs to make the most efficient use of the energy stored in their muscles as glycogen, converting it to glucose to fuel the migration.

Professor Webster, an endocrinologist at Bangor University, added: "Their migration is extremely energetically demanding, since the crabs must walk several kilometres over a few days. During the non-migratory period, the crabs are relatively inactive and stay in their burrows on the floor of the rain forest, only emerging for a brief period at dawn, to feed. The behaviour change reflects a fundamental change in the metabolic status of the animal.

"Surprisingly, we found that hyperglycaemic hormone levels were lower in actively migrating crabs than those that were inactive during the dry season. However, studying the crabs running and walking after giving them glucose resolved the puzzle. During the dry season, forced activity resulted in a tremendous release of hormone, within two minutes, irrespective of whether glucose had been administered. However, in the wet season, the glucose completely prevented the release of the exercise-dependent hormone, showing that they were controlled by a negative feedback loop.

“Glucose levels were clearly regulating hormone release at this time. This made sense since it ensures that during migration, glucose is only released from glycogen stores when glucose levels are low, using the crabs’ precious reserves of glycogen, to ensure that they can complete the migration.”

(Photo: Mrinalini at Bangor University)

University of Bristol




Selected Science News. Copyright 2008 All Rights Reserved