Tuesday, August 11, 2009



Harvard researchers at Children’s Hospital Boston have identified a protein in the urine of appendicitis patients that they believe may provide the basis of a quick, noninvasive, accurate, and inexpensive test for the common condition.

Acute inflammation of the vermiform appendix, commonly known as appendicitis, is one of the oldest emergencies in the annals of medicine, but its diagnosis still can be challenging, leading to delays and mistakes that can result in complications and death. But all that may be changing.

In a report released in the online edition of Annals of Emergency Medicine, the researchers at Children’s Proteomics Center say the marker they found is 500 times more abundant in the urine and tissue of those with appendicitis than in healthy individuals.

The little worm-shaped organ, which featured in Leonardo da Vinci’s anatomical drawings and whose inflammation may have been forever preserved in an Egyptian mummy of the Byzantine era, continues to cause one of the most common diseases in children and adults in this century, and remains among the most urgent surgical emergencies worldwide.

In the United States, approximately one in 1,000 people is afflicted by the disease each year. About 3 percent to 30 percent of children arriving at the emergency room with abdominal pain undergo unnecessary appendectomies, while a larger share, about 30 percent to 45 percent of those who actually have the disease, are diagnosed only after the appendix has already ruptured.

“We wanted to develop ways to identify markers and to establish protocols for early diagnosis of disease,” says clinical fellow Alex Kentsis. “As a first step in the process, we thought we could use appendicitis as a test case because it is still a tremendously important problem in medicine for both kids and adults.”

“People come to the emergency room with abdominal pain; they can spend hours there before they get a slot for the CT, and while being there their appendix bursts because it just took so long,” says Hanno Steen, director of the Proteomics Center and assistant professor of pathology. “We wanted to come up with something that’s much faster and much easier; and what could be easier than peeing into a cup and putting a dipstick in and waiting a few minutes to see whether the color changes? That’s basically the motivation to take the technology out; because technology is always associated with significant costs and also takes time.”

Using proteomics, the systematic study of the proteins in tissues, cells, or body fluids, Kentsis, Steen, and Richard Bachur, associate professor of pediatrics and chief of emergency medicine at Children’s Hospital, analyzed the urine samples of 12 children: six healthy and six with appendicitis, the latter both before and after the removal of their appendixes. “We were looking for proteins that were significantly more abundant in urine,” recounts Steen.

“Out of this study and other means we came up with a list of 57 proteins. Then we took 67 urine samples from children with suspected appendicitis. This time we only looked at those proteins that were both the most abundant in some samples and less abundant in others.”

This was possible using state-of-the-art mass spectrometry, which Steen calls “a very fine balance” because it allows researchers to detect proteins across a concentration range of about nine to 12 orders of magnitude, from the most abundant to the least. “That’s equivalent to the difference between the height of an ant and the distance from the Earth to the moon,” says Steen.
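Steen’s analogy can be checked with quick arithmetic; the figures below (an ant of roughly 1 mm, a mean Earth-moon distance of roughly 384,000 km) are rough illustrative values, not from the study:

```python
import math

ant_height_m = 1e-3    # ~1 mm, a rough figure for an ant's height
earth_moon_m = 3.84e8  # ~384,000 km, mean Earth-moon distance

ratio = earth_moon_m / ant_height_m
orders_of_magnitude = math.log10(ratio)
print(f"ratio: {ratio:.2e}, spanning {orders_of_magnitude:.1f} orders of magnitude")
```

The ratio works out to roughly 11.6 orders of magnitude, squarely within the nine-to-12 range Steen describes.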

The team identified seven diagnostic marker candidates. “Out of those, we found this one protein, LRG, which was 500 times more abundant in urine from patients with appendicitis compared to controls,” says Steen. “That’s much more than we expected. It was a dream situation from our perspective. It made the study very easy.”

The protein, LRG, or leucine-rich alpha-2-glycoprotein, appears to be a specific marker of local inflammation. “During the second phase of the study, we attempted to validate this candidate blindly to determine how well it identified patients with appendicitis,” explains Kentsis. The researchers found that LRG had the best performance in terms of both specificity (how reliably it rules out patients who do not have appendicitis) and sensitivity (how reliably it detects patients who do).
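In standard terms, sensitivity is the fraction of true cases a test flags, and specificity is the fraction of non-cases it correctly clears. A minimal sketch, with made-up counts that are not from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true positive rate) and specificity (true negative rate)."""
    sensitivity = tp / (tp + fn)  # share of patients with the disease the test detects
    specificity = tn / (tn + fp)  # share of patients without the disease the test clears
    return sensitivity, specificity

# Hypothetical counts for illustration only:
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=38, fp=2)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}")  # 0.90 and 0.95
```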

The team also looked at the extracted appendix tissue, finding a correlation between the abundance of LRG in the tissue and its abundance in the urine samples. “Patients with very high concentration of the protein in the urine also had very high concentration of the protein in their appendix,” adds Steen.

If the observed performance of LRG is confirmed in studies with a larger number of patients, the use of the protein as a diagnostic marker could be implemented in the emergency departments of hospitals, says Kentsis. “We hope using LRG will improve the accuracy in diagnostic evaluations for appendicitis, and hopefully make them faster, reducing the complications from delayed or wrong diagnosis, and the costs and mortality associated with that.”

It’s unlikely that the new diagnostic marker will be used in the form of a home test anytime soon.

“Having the test at the pharmacy is a dream,” says Steen. “But we first have to confirm our findings doing further and larger patient studies, and we also have to validate it in adults.”

“In a way, you could think of having this test available at any pharmacy as a dipstick, like you buy a pregnancy test,” says Steen, “but you have to be very careful because when you do the pregnancy test, and you get one result or the other, it doesn’t matter much what you do next. But if you have a dipstick test for appendicitis, and you don’t have the specificity or the sensitivity you need, if the test is negative, but it turns out you really do have the disease, you probably won’t go to the doctor, and in the case of appendicitis this would be very dangerous.”

The biggest contribution to the diagnosis of appendicitis in this country, says Kentsis, has been the use of high-resolution imagery such as CT scanners. “Unfortunately, the technology is available in a more limited way in most of the rest of world, and in some places it is not available at all,” he says. “The diagnosis of appendicitis there really depends on the experience and skill of individual doctors, and it remains a challenging diagnosis to make.”

The scientists hope that testing for LRG will make that easier.

(Photo: Hanno Steen/Proteomics Center)

Harvard University



New research at Newcastle University shows that it's not enough to be noble and do a courageous act to be considered a hero.

Studying the reactions of the public to five tales of heroism, researchers at Newcastle University found that off-duty emergency service workers were judged more harshly than their civilian counterparts.

Dr Joan Harvey who led the research, says: “We found that people were making judgements on how heroic a deed is based on whether it was personal – so involving a neighbour or children, whether they could empathise with the situation and whether the person worked for the emergency services or not.

“It seems that people consider someone a hero if they go beyond the call of duty – but in the case of the emergency services, that duty never seems to stop.”

The work is published in the Journal of Risk Research.

The Newcastle University researchers found that people appeared to think that those who worked for the emergency services were trained to deal with difficult situations - even if they were off-duty - so therefore they were in a position to apply their knowledge before acting. Psychologists describe this as being able to cognitively appraise the situation.

In the case of the 9/11 terrorist attacks on the World Trade Center, Dr Harvey says: “The fire-fighters were trained to cope with fire and smoke but they weren’t trained to judge when a building might collapse from 80 floors up. This may influence how the public perceives them - they are considered heroes - and our admiration of them may have increased because they made judgements based on their knowledge but they didn’t have the correct knowledge about the building.”

Given five real-life scenarios to read, members of the public rated the heroism of the acts and indicated whether each was a risk worth taking. A clear difference emerged when there was a successful outcome and people were rescued, and also between the public’s perception of professionals (a fire-fighter and an off-duty police officer) and of lay-people.

In one scenario, an off-duty police officer stops two young men trying to steal a car but as a result is stabbed in the chest with a screwdriver. The public perceived this as a risk not worth taking (a mean score of 3.47 out of 10) and he received average admiration as a hero (5.41).

In another scenario, an accounts clerk rescues two children and a baby from his neighbour’s burning house. The public perceived this as a risk worth taking (8.05) and he received a high level of admiration as a hero (8.90).

(Photo: Newcastle U.)

Newcastle University



Evidence buried in the chromosomes of animals and plants strongly suggests that only one group -- mammals -- has seen its genomes shrink after the dinosaurs' extinction. What's more, that trend continues today, say Indiana University Bloomington scientists in the first issue of a new journal, Genome Biology and Evolution.

The scientists' finding might seem counter-intuitive, given that the last 65 million years have seen mammals expand in diversity and number, not to mention dominance in a wide variety of ecological roles. But it is precisely their success in numbers that could have led to the contraction of their genomes.

"Larger population sizes make natural selection more efficient," said IU Bloomington evolutionary biologist Michael Lynch, who led the study. "If we are correct, we have shown how to bring ancient genomic information together with the paleontological record to learn more about the past."

And the present. Lynch says the data he and his colleagues analyzed suggest human genomes are still undergoing a contraction -- though you shouldn't expect to see noticeable changes in our chromosomes for a few million years yet.

Lynch's group examined the genomes of seven mammals, eight non-mammalian animals and three plants, specifically with regard to the long terminal repeat (LTR) sequences of transposable elements, a curious sort of "jumping" genetic sequence initially dropped into genomes by viruses. IU School of Informatics (Bloomington) bioinformaticians Mina Rho and Haixu Tang oversaw the survey of mammalian and non-mammalian genomes.

Transposable elements often lose their functionality soon after insertion but nevertheless are disturbingly common. In the human genome, for example, transposable elements constitute as much as 45 percent of an individual's total DNA. Long terminal repeat sequences, part of that figure, make up about 8 percent of humans' total DNA.

LTRs come in a range of sizes and ages, and it is the age distribution of LTRs that interested Lynch and his colleagues.

"This study started out as independent observations in the literature," Lynch said. "The data we saw suggested a bulge in the age distribution of transposable elements in humans and mice."

Given enough time, Lynch says, transposable elements are eventually lost from the genome, sometimes by accident and sometimes, perhaps, as the result of natural selection against excess DNA. An LTR is far more likely to survive a few years of cell divisions -- and the chance of obliteration via a DNA replication error -- than 10 million years of cell divisions. Plotting the full range of the studied species' LTRs, young and old, Lynch and his colleagues usually saw a descending curve with lots of new transposable elements and a dramatic drop-off in the number of older elements.
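The descending curve follows from a simple survival argument: if elements insert at a roughly constant rate and each one is lost with some fixed probability per generation, the standing age distribution decays geometrically. A toy model, with made-up rates:

```python
# Toy survival model for transposable elements (rates are hypothetical)
insert_rate = 100  # new LTR elements appearing per generation
loss_prob = 0.05   # chance a given element is lost in any one generation

# Expected number of surviving elements of age t generations:
ages = range(0, 60, 10)
counts = [insert_rate * (1 - loss_prob) ** t for t in ages]
for t, n in zip(ages, counts):
    print(f"age {t:2d} generations: {n:6.1f} expected survivors")
```

The counts fall steadily with age, giving the descending curve seen in most genomes; the hill-shaped mammalian curves depart from this baseline because fewer young elements are present than such a model predicts.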

But not in most mammals. In humans, macaques, cows, dogs and mice, Lynch's group observed a hill-shaped curve, with a peak of middle-aged LTRs and drop-offs both in the number of older and younger LTRs. The shape of the curve is consistent with previously published data for other types of so-called "junk" DNA elements.

The depressed numbers of very young LTRs, Lynch says, strongly suggest a contraction in the overall genome sizes of the mammalian lineages the scientists studied. That could come about in one of two ways, he says. One possibility is the increase in the efficiency of natural selection that accompanies population growth.

"We think that's the most likely explanation," Lynch said. "Another possibility is that natural selection was just stronger, but we doubt it. For that to be the case, natural selection would have to act in the same way on several lineages around the globe simultaneously."

(Photo: Indiana University)

Indiana University



When the legendary science fiction writer Isaac Asimov penned his “Three Laws of Robotics,” he forever changed the way humans think about artificial intelligence, and inspired generations of engineers to take up robotics.

In the current issue of the journal IEEE Intelligent Systems, two engineers propose alternative laws to rewrite our future with robots.

The future they foresee is at once safer, and more realistic.

“When you think about it, our cultural view of robots has always been anti-people, pro-robot,” explained David Woods, professor of integrated systems engineering at Ohio State University. “The philosophy has been, ‘sure, people make mistakes, but robots will be better -- a perfect version of ourselves.’ We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways.”

Asimov’s laws are iconic not only among engineers and science fiction enthusiasts, but the general public as well. The laws often serve as a starting point for discussions about the relationship between humans and robots.

But while evidence suggests that Asimov thought long and hard about his laws when he wrote them, Woods believes that the author did not intend for engineers to create robots that followed those laws to the letter.

“Go back to the original context of the stories,” Woods said, referring to Asimov’s I, Robot among others. “He’s using the three laws as a literary device. The plot is driven by the gaps in the laws -- the situations in which the laws break down. For those laws to be meaningful, robots have to possess a degree of social intelligence and moral intelligence, and Asimov examines what would happen when that intelligence isn’t there.”

“His stories are so compelling because they focus on the gap between our aspirations about robots and our actual capabilities. And that’s the irony, isn’t it? When we envision our future with robots, we focus on our hopes and desires and aspirations about robots -- not reality.”

In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our lack of understanding of how these skills are managed in the human brain. We are far from a time when humans may teach robots a moral code and responsibility.

Woods and his coauthor, Robin Murphy of Texas A&M University, composed three laws that put the responsibility back on humans.

Woods directs the Cognitive Systems Engineering Laboratory at Ohio State, and is an expert in automation safety. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M, and is an expert in both rescue robotics and human-robot interaction.

Together, they composed three laws that focus on the human organizations that develop and deploy robots. They looked for ways to ensure high safety standards.

Here are Asimov’s original three laws:

-A robot may not injure a human being, or through inaction, allow a human being to come to harm.

-A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

-A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And here are the three new laws that Woods and Murphy propose:

-A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.

-A robot must respond to humans as appropriate for their roles.

-A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

The new first law assumes the reality that humans deploy robots. The second assumes that robots will have limited ability to understand human orders, and so they will be designed to respond to an appropriate set of orders from a limited number of humans.

The last law is the most complex, Woods said.

“Robots exist in an open world where you can’t predict everything that’s going to happen. The robot has to have some autonomy in order to act and react in a real situation. It needs to make decisions to protect itself, but it also needs to transfer control to humans when appropriate. You don’t want a robot to drive off a ledge, for instance -- unless a human needs the robot to drive off the ledge. When those situations happen, you need to have smooth transfer of control from the robot to the appropriate human,” Woods said.

“The bottom line is, robots need to be responsive and resilient. They have to be able to protect themselves and also smoothly transfer control to humans when necessary.”

Woods admits that one thing is missing from the new laws: the romance of Asimov’s fiction -- the idea of a perfect, moral robot that sets engineers’ hearts fluttering.

“Our laws are a little more realistic, and therefore a little more boring,” he laughed.

(Photo: OSU)

Ohio State University


It is widely known that the brain begins processing information before it reaches a person’s awareness. But until now, there was no reliable way to determine what specific mental tasks were taking place prior to the point of conscious awareness.

That has changed with the findings of scientists at Rutgers University in Newark and the University of California, Los Angeles who have developed a highly accurate way to peer into the brain to uncover a person’s mental state and what sort of information is being processed before it reaches awareness. With this new window into the brain, scientists now also are provided with the means of developing a more accurate model of the inner functions of the brain.

As reported in a forthcoming (Oct. 2009) issue of Psychological Science, the findings obtained by Stephen José Hanson, psychology professor at Rutgers; Russell A. Poldrack, professor at UCLA; and Yaroslav Halchenko (now a post-doctoral fellow at Dartmouth College) provide direct evidence that a person’s mental state can be predicted with a high degree of accuracy through functional magnetic resonance imaging (fMRI). The research also suggests that a more comprehensive approach is needed for mapping brain activity, and that the widely held belief that localized areas of the brain are responsible for specific mental functions is misleading and incorrect.

The research was funded with grants from the U.S. Office of Naval Research, the James S. McDonnell Foundation and National Science Foundation. The McDonnell Foundation recently awarded Hanson another $1 million for ongoing studies in this area.

Over the last several years, much of neuroimaging has focused on pinpointing areas of the brain that are uniquely responsible for specific mental functions, such as learning, memory, fear and love. But this latest research shows that the brain is more complex than that simple model. In their analysis of global brain activity, the researchers found that different processing tasks have their own distinct pattern of neural connections stretching across the brain, similar to the fingerprints that distinctively identify each of us. Rather than being a static pattern, however, the brain is able to arrange and rearrange the connections based on the mental task being undertaken.

“You can’t just pinpoint a specific area of the brain, for example, and say that is the area responsible for our concept of self or that part is the source of our morality,” says Hanson. “It turns out the brain is much more complex and flexible than that. It has the ability to rearrange neural connections for different functions. By examining the pattern of neural connections, you can predict with a high degree of accuracy what mental processing task a person is doing.”

The findings open up the possibility of categorizing a multitude of mental tasks with their unique pattern of neural circuitry and also represent a potential first, early step in developing a means for identifying higher-level mental functions, such as 'lying' or abstract reasoning. They potentially also could pave the way for earlier diagnosis and better treatment of mental disorders, such as autism and schizophrenia, by offering a means for identifying very subtle abnormalities in brain activity and synchrony.

The research, by showing that specific mental functions correspond not to particular brain areas but rather to unique patterns of neural connections, also provides a more accurate direction for mapping the effective connectivity of the brain. In the effort known as the Connectome Project, researchers aim to provide a complete map of the neural circuitry of the central nervous system.

“What our research shows is that if you want to understand human cognitive function, you need to look at system-wide behavior across the entire brain,” explains Hanson. “You can’t do it by looking at single cells or areas. You need to look at many areas of the brain to even understand the simplest of functions.”

The study involved 130 participants, each of whom performed a different mental task, ranging from reading, to memorizing a list, to making complex decisions about whether to take monetary risks, while being scanned using fMRI. The researchers were able to identify which of eight tasks participants were involved in with more than 80 percent accuracy by analyzing the participants’ fMRI data against classifications developed from the fMRIs of other individuals. The researchers also were able to identify what class of objects (faces, houses, animals, etc.) a person was viewing before he or she could report that information, by analyzing the pattern of brain activity at the back of the brain, where information is processed and then conveyed towards the frontal regions associated with awareness.

“It’s the same principle experienced during a car accident. The car accident actually happens tens of milliseconds before you are aware you have actually been hit,” explains Hanson. “By looking at the back of the brain, we can 'read out,' for example, that a person is looking at dogs and cats before they actually know they are looking at a dog or a cat.”

Unlike most research that has focused on specific areas of the brain, Hanson and his team looked at the pattern of activity across a half million points in the brain. Interestingly, the patterns of neural networks involved in each of the eight tasks on the surface appear very similar. The reason, Hanson explains, is that various mental functions tend to draw on many of the same processes. For example, memorizing a list of words that include the word dog is likely to draw up a memory of a pet, the same as reading a story about a dog would. Using machine learning techniques (a support vector machine), capable of analyzing and categorizing large amounts of data, the researchers were able to identify those slight differences that allowed them to predict the specific mental function of the participants and what information they would report back.
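The classification step can be sketched in miniature. Here a simple nearest-centroid classifier stands in for the study's support vector machine, and the "activity patterns" are synthetic noise around per-task templates rather than fMRI data:

```python
import math
import random

random.seed(0)
n_train, n_voxels, n_tasks = 30, 40, 8  # tiny stand-in for half a million brain points

# Each task gets its own mean "activity pattern"; samples are that pattern plus noise.
templates = [[random.gauss(0, 1) for _ in range(n_voxels)] for _ in range(n_tasks)]

def sample(template):
    return [v + random.gauss(0, 0.5) for v in template]

train = [(sample(t), k) for k, t in enumerate(templates) for _ in range(n_train)]
test = [(sample(t), k) for k, t in enumerate(templates) for _ in range(10)]

def centroid(vectors):
    """Mean pattern of a list of equal-length vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

centroids = [centroid([x for x, y in train if y == k]) for k in range(n_tasks)]

def predict(x):
    """Assign a pattern to the task whose centroid is nearest."""
    return min(range(n_tasks), key=lambda k: math.dist(x, centroids[k]))

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

With well-separated synthetic patterns the toy classifier scores far above chance (12.5 percent for eight tasks), the same logic behind the study's better-than-80-percent result on real data.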

“It’s like looking at two patterns of identical flower arrangements,” says Hanson. “They each may have the same flowers but they will not be arranged exactly in the same manner, consequently leading to slight differences in the overall pattern. Using the pattern analysis methods we have developed, there are clues that can be detected and pulled out.”

As part of their continuing research, Hanson and his team plan to develop a system for identifying neural connectivity abnormalities to assist with the study of such mental disorders as attention-deficit hyperactivity and autism and to produce a handbook for many of the new tools used for pattern analysis and the classification of mental states based on neuroimaging data.

Rutgers, The State University of New Jersey



A well-established physical law describes the transfer of heat between two objects, but some physicists have long predicted that the law should break down when the objects are very close together. Scientists had never been able to confirm, or measure, this breakdown in practice. For the first time, however, MIT researchers have achieved this feat, and determined that the heat transfer can be 1,000 times greater than the law predicts.

The new findings could lead to significant new applications, including better design of the recording heads of the hard disks used for computer data storage, and new kinds of devices for harvesting energy from heat that would otherwise be wasted.

Planck's blackbody radiation law, formulated in 1900 by German physicist Max Planck, describes how energy is dissipated, in the form of different wavelengths of radiation, from an idealized non-reflective black object, called a blackbody. The law says that the relative thermal emission of radiation at different wavelengths follows a precise pattern that varies according to the temperature of the object. The emission from a blackbody is usually considered as the maximum that an object can radiate.
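Planck's law can be written out and evaluated directly; the constants are standard physical constants, and the solar-temperature example is purely illustrative:

```python
import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance B(lambda, T) of an ideal blackbody, in W / (m^2 * sr * m)."""
    a = 2 * h * c**2 / wavelength_m**5
    b = math.expm1(h * c / (wavelength_m * kB * temperature_k))
    return a / b

# Wien's displacement law gives the peak wavelength: lambda_max ~ 2.898e-3 / T
T = 5800.0           # roughly the Sun's surface temperature, K
peak = 2.898e-3 / T  # ~5.0e-7 m, i.e. ~500 nm (green light)
print(f"peak wavelength at {T:.0f} K: {peak * 1e9:.0f} nm")
```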

The law works reliably in most cases, but Planck himself had suggested that when objects are very close together, the predictions of his law would break down. But actually controlling objects to maintain the tiny separations required to demonstrate this phenomenon has proved incredibly difficult.

"Planck was very careful, saying his theory was only valid for large systems," explains Gang Chen, MIT's Carl Richard Soderberg Professor of Power Engineering and director of the Pappalardo Micro and Nano Engineering Laboratories. "So he kind of anticipated this [breakdown], but most people don't know this."

Part of the problem in measuring the way energy is radiated when objects are very close is the mechanical difficulty of maintaining two objects in very close proximity, without letting them actually touch. Chen and his team, graduate student Sheng Shen and Columbia University Professor Arvind Narayaswamy, solved this problem in two ways, as described in a paper to be published in the August issue of the journal Nano Letters (available now online). First, instead of using two flat surfaces and trying to maintain a tiny gap between them, they used a flat surface next to a small round glass bead, whose position is easier to control. "If we use two parallel surfaces, it is very hard to push to nanometer scale without some parts touching each other," Chen explains, but by using a bead there is just a single point of near-contact, which is much easier to maintain. Then, they used the technology of the bi-metallic cantilever from an atomic-force microscope to measure the temperature changes with great precision.

"We tried for many years doing it with parallel plates," Chen says. But with that method, they were unable to sustain separations of closer than about a micron (one millionth of a meter). By using the glass (silica) beads, they were able to get separations as small as 10 nanometers (10 billionths of a meter, or one-hundredth the distance achieved before), and are now working on getting even closer spacings.
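The far-field baseline against which the reported enhancement is measured comes from the Stefan-Boltzmann law for ideal blackbodies; the temperatures and area below are made-up illustrative values, not the experiment's:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def blackbody_exchange(t_hot, t_cold, area_m2):
    """Net far-field radiative transfer between two ideal blackbody surfaces, in W."""
    return SIGMA * area_m2 * (t_hot**4 - t_cold**4)

# Hypothetical numbers for a bead-sized contact area:
far_field_w = blackbody_exchange(t_hot=320.0, t_cold=300.0, area_m2=1e-10)
print(f"far-field limit: {far_field_w:.2e} W")
```

At nanometer separations, the MIT measurements show the actual transfer can exceed such a Planck-limited estimate by up to three orders of magnitude.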

Professor Sir John Pendry of Imperial College London, who has done extensive work in this field, calls the results "very exciting," noting that theorists have long predicted such a breakdown in the formula and the activation of a more powerful mechanism.

"Experimental confirmation has proved elusive because of the extreme difficulty in measuring temperature differences over very small distances," Pendry says. "Gang Chen's experiments provide a beautiful solution to this difficulty and confirm the dominant contribution of near field effects to heat transfer."

In today's magnetic data recording systems - such as the hard disks used in computers - the spacing between the recording head and the disk surface is typically in the 5 to 6 nanometer range, Chen says. The head tends to heat up, and researchers have been looking for ways to manage the heat or even exploit the heating to control the gap. "It's a very important issue for magnetic storage," he says. Such applications could be developed quite rapidly, he says, and some companies have already shown a strong interest in this work.

The new findings could also help in the development of new photovoltaic energy conversion devices, called thermophotovoltaic devices, that harness photons emitted by a heat source, Chen says. "The high photon flux can potentially enable higher efficiency and energy density thermophotovoltaic energy converters, and new energy conversion devices," he says.

The new findings could have "a broad impact," says Shen. People working with devices using small separations will now have a clear understanding that Planck's law "is not a fundamental limitation," as many people now think, he says. But further work is needed to explore even closer spacings, Chen says, because "we don't know exactly what the limit is yet" in terms of how much heat can be dissipated in closely spaced systems. "Current theory will not be valid once we push down to 1 nanometer spacing."

And in addition to practical applications, he says, such experiments "might provide a useful tool to understand some basic physics."

(Photo: Gang Chen)

Massachusetts Institute of Technology



Lawrence, a small town in Kansas, May 20, 2009: A group of scientists and a camera team stand side by side on the local airfield. Their concentration is palpable: a small six-legged visitor is expertly held between three fingers and carefully fitted, using tweezers and adhesive, with a small package. This minute radio transmitter on its abdomen, which weighs half as much as the animal itself, is to accompany the monarch butterfly on the next leg of its journey after the stopover. The scientists, Chip Taylor, director of the organisation Monarch Watch, based at the University of Kansas, and Martin Wikelski, professor of ornithology at the University of Konstanz and director at the Max Planck Institute for Ornithology in Radolfzell, Germany, hope to succeed, for the first time, in getting butterflies fitted with transmitters to fly and in following them in a small aircraft.

The fluttery visitors are first fed a suitable amount of food to give them enough strength to carry the transmitters. After one butterfly had been released, Wikelski and Taylor followed its signals. "Just as we expected, it flew in a north-northeast direction," said Taylor after the flight, "and it covered a distance of 18 kilometres." The monarch butterfly was then located, first from the aircraft and then from a car, and relieved of its extra load. The researchers are hoping for less windy conditions over the next few days so that they can release a larger number of tracked specimens. The collected data will then be entered into the "Movebank", a database of global animal movements initiated by Martin Wikelski, which can be consulted, for example, by national environmental protection agencies and by schools.

The monarch butterfly is the best known of the migratory butterfly species. Every March, millions of individuals fly in large swarms northwards from the mountains of Mexico. By April, they have reached the Rio Grande on the US border, where the females lay the first of their 400 eggs. The caterpillars live off poisonous milkweeds, which make them unfit for consumption by birds, their natural predators; the conspicuous colour of their wings acts as a warning sign to other animals. By early May, the butterflies have arrived in Texas, where this generation’s hazardous and exhausting journey ends with death. The journey, however, is continued by the next generation, which reaches the latitude of the city of Washington. The grandchildren of the Mexican overwinterers hatch and lay their eggs in Pennsylvania, then in Canada at the Great Lakes. The imagines, the adult animals of this generation, achieve an incredible feat: with the help of strong upwinds, they cross Lake Erie and then negotiate the 3,600 kilometres southwards to reach Mexico again in late autumn.

In the past, it has not been possible to observe an individual butterfly in detail on its migratory journey. Thanks to the biologists' successful work, this is no longer the case. "Now we can compare the migrations of whales, birds, bats and insects, and describe trends that will enable us to predict migratory phenomena such as those of the migratory locust."

Despite his extensive experience in observing migrating animal species with the help of small transmitters, Martin Wikelski first wanted to test on a small scale whether the adhesive mixture would fix the metallic transmitter securely to the butterfly's abdomen, whether the butterfly would be able to fly with this considerable load, and whether the signal could be received with directional antennae - hardly tasks suited to a laboratory.

The butterfly house on Mainau Island, a well-known tourist attraction on Lake Constance, provided the best possible conditions for the tests. The unique flower and plant paradise was opened to the public by Count Lennart Bernadotte who wanted to provide access to an oasis of harmony and natural beauty. In keeping with this philosophy, a butterfly house was established there in 1996. Visitors can view over 50 species of magnificently coloured butterflies in the tropical landscape of the butterfly house. The surrounding garden complex also provides a place for egg deposition and feeding for native butterfly species.

Here, the Mainau gardeners collected the monarch butterfly pupae for the test run. As soon as they hatched, Wikelski caught one rather dazed butterfly in the cold morning air. A spot of adhesive was carefully applied to its abdomen, the transmitter fitted on top and the adhesive left to dry, and the butterfly placed on a Lantana camara flower to recover and gather its strength. Exhausted by the procedure, the monarch did not initially attempt to lift off into the air. Eventually, however, it teetered off unsteadily, gradually becoming more confident as it grew used to the unaccustomed extra weight. The National Geographic team filmed this exciting sequence on location.

Those lucky enough to have observed the flight of the millions of painted lady butterflies this past May, which arrive in Germany every spring from North Africa and the Mediterranean, cannot have failed to be fascinated by the contrast between these delicate creatures and their ability to overcome enormous distances and altitudes. The migratory paths of these and other butterfly species that manage to cross the Alps every year can now finally be researched in detail as well.

(Photo: MPI for Ornithology, Radolfzell / Carrie Fudickar)

Max Planck Society



An international team of astronomers, led by Keiichi Ohnaka at the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, has made the highest-resolution images of a dying giant star to date. For the first time, they were able to show how the gas moves in different areas over the surface of a distant star. This was made possible by combining three 1.8-metre telescopes as an interferometer, giving the astronomers the resolving power of a virtual, gigantic 48-metre telescope.

Using the ESO Very Large Telescope Interferometer in Chile, they discovered that the gas in the dying star’s atmosphere moves vigorously up and down in "convection cells", or bubbles, each as large as the star itself. These colossal bubbles are key to pushing material out of the star’s atmosphere and into space before the star explodes as a supernova.

When one looks up at the clear night sky in winter, it is easy to spot a bright, orange star on the shoulder of the constellation Orion (the Hunter), even in light-flooded large cities. This is the star Betelgeuse. It is a gigantic star: placed at the centre of our solar system, it would swallow the inner planets Mercury, Venus, Earth, and Mars and almost reach the orbit of Jupiter. It is also glaringly bright, emitting 100,000 times more light than the Sun. Betelgeuse is a so-called red supergiant and is approaching the end of its short life of several million years. Red supergiants shed a large amount of material made of various molecules and dust, which is recycled into the next generation of stars and planets, possibly ones like the Earth. Betelgeuse is losing material equivalent to the Earth’s mass every year.

How do such giant stars lose mass that would normally be bound to the star by its gravitational pull? This is a long-standing mystery. The best way to tackle the question is to observe material as it is being ejected from a star’s surface, but this is very challenging. Although Betelgeuse is such a huge star, it looks like a mere reddish dot even with today’s largest, 8-10 metre telescopes, because the star is 640 light years away.

Astronomers therefore need a special technique to overcome this problem. By combining two or more telescopes as a so-called interferometer, they can achieve a much higher resolution than the individual telescopes provide. The Very Large Telescope Interferometer (VLTI) on Cerro Paranal in Chile, operated by the European Southern Observatory (ESO), is one of the world’s largest interferometers. A team of astronomers from German, French, and Italian institutions observed Betelgeuse with the AMBER instrument, which operates at near-infrared wavelengths. The resolving power achieved with AMBER is so great that one could recognize a 1-Euro coin placed on the Brandenburg Gate in Berlin from Bonn.
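The coin comparison is easy to check: an interferometer's angular resolution is roughly the observing wavelength divided by the baseline. A minimal back-of-the-envelope sketch, assuming a near-infrared wavelength of about 2.3 micrometres, a 23.25-mm coin, and a Bonn-Berlin distance of roughly 480 km (these specific figures are our assumptions, not from the article):

```python
# Resolving power of the 48-metre virtual telescope vs. the angle a
# 1-Euro coin in Berlin subtends as seen from Bonn.

wavelength_m = 2.3e-6   # assumed near-infrared observing wavelength
baseline_m = 48.0       # virtual telescope size quoted in the article

# Angular resolution of an interferometer is approximately lambda / baseline.
resolution_rad = wavelength_m / baseline_m

# Angle subtended by a 1-Euro coin (23.25 mm diameter) at ~480 km.
coin_diameter_m = 0.02325
distance_m = 480e3
coin_angle_rad = coin_diameter_m / distance_m

print(f"interferometer resolution: {resolution_rad:.2e} rad")
print(f"coin seen from Bonn:       {coin_angle_rad:.2e} rad")
```

Both angles come out near 5 x 10^-8 radians, so the two are indeed comparable, which is presumably where the press-release analogy comes from.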

"Our AMBER observations mark the sharpest images ever made of Betelgeuse", says Keiichi Ohnaka at the MPIfR, the first author of the publication presenting the result. "And for the first time, we have spatially resolved the gas motion in the atmosphere of a star other than the Sun. Thus, we could observe how the gas is moving in different areas over the star’s surface."

The AMBER observations have revealed that the gas in Betelgeuse’s atmosphere moves vigorously up and down. The size of these "bubbles" is also gigantic, as large as the supergiant star itself: a single bubble as large as the orbit of Mars, moving at some 40,000 km/h. While the origin of these bubbles is not yet entirely clear, the AMBER observations have shed new light on how red supergiant stars lose mass: such colossal bubbles can expel material from the surface of the star into space. It also means that the material is not spilling out in a quiet, ordered fashion, but is flung out more violently in arcs or clumps.
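The two figures quoted above give a feel for how slowly such a bubble overturns. A rough estimate, assuming the bubble spans the diameter of Mars's orbit (Mars orbits at about 1.52 astronomical units; that value is our assumption, not from the article) and the gas moves at the quoted 40,000 km/h:

```python
# Time for gas to cross one convective bubble at the quoted speed.

AU_KM = 1.496e8                              # astronomical unit in km
mars_orbit_diameter_km = 2 * 1.52 * AU_KM    # Mars orbits at ~1.52 AU
speed_km_per_h = 40_000.0                    # gas speed from the article

crossing_time_h = mars_orbit_diameter_km / speed_km_per_h
crossing_time_days = crossing_time_h / 24

print(f"bubble crossing time: ~{crossing_time_days:.0f} days")
```

Even at that enormous speed, a single crossing takes over a year, which underlines just how vast these convection cells are.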

The death of the gigantic star, expected within the next few thousand to hundred thousand years, will be accompanied by cosmic fireworks: a supernova, like the famous SN 1987A. As Betelgeuse is much closer to the Earth than SN 1987A was, however, the supernova will be clearly visible to the unaided eye, even in daylight.

(Photo: ESO/L. Calçada)

Max Planck Society



In a paper published this month in Nano Research, Hauge's Rice University team describes a method for making "odako," bundles of single-walled carbon nanotubes (SWNT) named for the traditional Japanese kites they resemble. It may lead to a way to produce meter-long strands of nanotubes, which individually are no wider than a strand of DNA.

Hauge, a distinguished faculty fellow in chemistry at Rice's Richard E. Smalley Institute for Nanoscale Science and Technology, and his co-authors, graduate students Cary Pint and Noe Alvarez, explained that the odako after which the bundles are named are gigantic kites that take many hands to fly, hence the many lines that trail from them.

In this case, the lines are nanotubes, hollow cylinders of pure carbon. Individually, they're thousands of times smaller than a living cell, but Hauge's new method creates bundles of SWNTs that are sometimes measured in centimeters, and he said the process could eventually yield tubes of unlimited length.

Large-scale production of nanotube threads and cables would be a godsend for engineers in almost every field. They could be used in lightweight, superefficient power-transmission lines for next-generation electrical grids, for example, and in ultra-strong and lightning-resistant versions of carbon-fiber materials found in airplanes. Hauge said the SWNT bundles may also prove useful in batteries, fuel cells and microelectronics.

To understand how Hauge makes nanokites, it helps to have a little background on flying carpets.

Last year, Hauge and colleagues found they could make compact bundles of nanotubes starting with the same machinery the U.S. Treasury uses to embed paper money with unique markings that make the currency difficult to counterfeit.

Hauge and his team -- which included senior research fellow Howard Schmidt and Professor Matteo Pasquali, both of Rice's Department of Chemical and Biomolecular Engineering; graduate students Pint and Sean Pheasant; and Kent Coulter of San Antonio's Southwest Research Institute -- used this printing process to create thin layers of iron and aluminum oxide on a Mylar roll. They then removed the layers and ground them into small flakes.

Here's where the process took off. In a mesh cage placed in a furnace, the metallic flakes would lift off and "fly" in a flowing chemical vapor. As they flew, arrays of nanotubes grew vertically from the iron particles in tight, forest-like formations. When the cooking was done and the bundles were viewed under a microscope, they looked remarkably like the pile of a carpet.

While other methods used to grow SWNTs had yielded a paltry 0.5 percent ratio of nanotubes to substrate materials, Hauge's technique brought the yield up to an incredible 400 percent. The process could facilitate large-scale SWNT growth, Pint said.

In the latest research, the team replaced the Mylar with pure carbon. In this setup, the growing nanotubes literally raise the roof, lifting up the iron and aluminum oxide from which they’re sprouting while the other ends stay firmly attached to the carbon. As the bundle of tubes grows higher, the catalyst becomes like a kite, flying in the hydrogen and acetylene breeze that flows through the production chamber.

Hauge and his team hope to follow up their work on flying carpets and nanokites with the holy grail of nanotube growth: a catalyst that will not die, enabling furnaces that churn out continuous threads of material.

"If we could get these growing so they never stop – so that, at some point, you pull one end out of the furnace while the other end is still inside growing – then you should be able to grow meter-long material and start weaving it," he said.

(Photo: Rice U.)

Rice University




Selected Science News. Copyright 2008 All Rights Reserved