Wednesday, February 17, 2010



Could humans one day walk on walls, like Spider-Man? A palm-sized device invented at Cornell that uses water surface tension as an adhesive bond just might make it possible.

The rapid adhesion mechanism could lead to such applications as shoes or gloves that stick and unstick to walls, or Post-it-like notes that can bear loads, according to Paul Steen, professor of chemical and biomolecular engineering, who invented the device with Michael Vogel, a former postdoctoral associate.

The device is the result of inspiration drawn from a beetle native to Florida, which can adhere to a leaf with a force 100 times its own weight, yet also instantly unstick itself. Research behind the device is published online Feb. 1 in Proceedings of the National Academy of Sciences.

The device consists of a flat plate patterned with holes, each on the order of microns (one-millionth of a meter). A bottom plate holds a liquid reservoir, and in the middle is another porous layer. An electric field applied by a common 9-volt battery pumps water through the device and causes droplets to squeeze through the top layer. The surface tension of the exposed droplets makes the device grip another surface -- much the way two wet glass slides stick together.

"In our everyday experience, these forces are relatively weak," Steen said. "But if you make a lot of them and can control them, like the beetle does, you can get strong adhesion forces."

For example, one of the researchers' prototypes was made with about 1,000 holes, each 300 microns across, and it can hold about 30 grams -- more than 70 paper clips. They found that as they scaled down the holes and packed more of them onto the device, the adhesion got stronger. They estimate, then, that a one-square-inch device with millions of 1-micron-sized holes could hold more than 15 pounds.
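That scaling can be checked with a rough back-of-the-envelope calculation. The sketch below is a simplification, not the researchers' model: it assumes each liquid bridge contributes a force on the order of pi times water's surface tension times the droplet diameter (contact-line and Laplace-pressure contributions both scale this way, up to an order-unity constant set to 1 here). Packing more, smaller droplets into the same area then grows the total force in proportion to 1/diameter.

```python
# Rough order-of-magnitude sketch of capillary adhesion scaling.
# Assumption (not the authors' model): each liquid bridge contributes
# a force F ~ c * pi * gamma * d, with d the droplet diameter and c an
# order-unity constant taken as 1 here.

import math

GAMMA = 0.072  # surface tension of water at room temperature, N/m

def total_force(n_droplets, diameter_m, c=1.0):
    """Total adhesion force (N) from n liquid bridges of diameter d."""
    return n_droplets * c * math.pi * GAMMA * diameter_m

# Prototype-like case: ~1,000 holes, 300 microns each
f_proto = total_force(1_000, 300e-6)

# Same total wetted area with 1-micron holes: 300^2 times as many
# droplets, each 300x smaller, so the net force grows ~300x.
f_scaled = total_force(1_000 * 300**2, 1e-6)

print(f"prototype:   {f_proto * 1000:.1f} mN (~{f_proto / 9.81 * 1000:.0f} g)")
print(f"scaled down: {f_scaled / 9.81 * 1000:.0f} g-equivalent")
```

With the constant omitted the absolute numbers land within a small factor of the reported figures, but the point of the sketch is the 1/diameter scaling, which matches the researchers' observation that smaller, more numerous holes adhere more strongly.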

To turn the adhesion off, the electric field is simply reversed, and the water is pulled back through the pores, breaking the tiny "bridges" created between the device and the other surface by the individual droplets.

The research builds on previously published work that demonstrated the efficacy of what's called electro-osmotic pumping between surface-tension-held interfaces, initially using just two larger water droplets.

One of the biggest challenges in making these devices work, Steen said, was keeping the droplets from coalescing, as water droplets tend to do when they get close together. To solve this, they designed their pump to resist water flow while it's turned off.

Steen envisions future prototypes on a grander scale, once the pump mechanism is perfected, and the adhesive bond can be made even stronger. He also imagines covering the droplets with thin membranes -- thin enough to be controlled by the pump but thick enough to eliminate wetting. The encapsulated liquid could exert simultaneous forces, like tiny punches.

"You can think about making a credit card-sized device that you can put in a rock fissure or a door, and break it open with very little voltage," Steen said. "It's a fun thing to think about."

(Photo: Michael Vogel)

Cornell University



Scientists at Georgia Tech and the Ovarian Cancer Institute have further developed a potential new treatment against cancer that uses magnetic nanoparticles to attach to cancer cells, removing them from the body. The treatment, tested in mice in 2008, has now been tested using samples from human cancer patients. The results appear online in the journal Nanomedicine.

“We are primarily interested in developing an effective method to reduce the spread of ovarian cancer cells to other organs,” said John McDonald, professor at the School of Biology at the Georgia Institute of Technology and chief research scientist at the Ovarian Cancer Institute.

The idea came to the research team from the work of Ken Scarberry, then a Ph.D. student at Tech. Scarberry originally conceived of the idea as a means of extracting viruses and virally infected cells. At his advisor’s suggestion, Scarberry began looking at how the system could work with cancer cells.

He published his first paper on the subject in the Journal of the American Chemical Society in July 2008. In that paper he and McDonald showed that by giving the cancer cells of the mice a fluorescent green tag and staining the magnetic nanoparticles red, they were able to apply a magnet and move the green cancer cells to the abdominal region.

Now McDonald and Scarberry, currently a post-doc in McDonald’s lab, have shown that the magnetic technique works with human cancer cells.

“Often, the lethality of cancers is not attributed to the original tumor but to the establishment of distant tumors by cancer cells that exfoliate from the primary tumor,” said Scarberry. “Circulating tumor cells can implant at distant sites and give rise to secondary tumors. Our technique is designed to filter the peritoneal fluid or blood and remove these free floating cancer cells, which should increase longevity by preventing the continued metastatic spread of the cancer.”

In tests, they showed that their technique worked as well at capturing cancer cells from human patient samples as it had previously in mice. The next step is to test how well the technique can increase survivorship in live animal models. If that goes well, they will then test it with humans.

(Photo: GIT)

Georgia Institute of Technology



A recent study at Oregon State University indicates that some past approaches to calculating the impacts of forest fires have grossly overestimated the number of live trees that burn up and the amount of carbon dioxide released into the atmosphere as a result.

The research was done on the Metolius River Watershed in the central Oregon Cascade Range, where about one-third – or 100,000 acres – of the area burned in four large fires in 2002-03. Although some previous studies assumed that 30 percent of the mass of living trees was consumed during forest fires, this study found that only 1-3 percent was consumed.

Some estimates done around that time suggested that the B&B Complex fire in 2003, just one of the four Metolius fires, released 600 percent more carbon emissions than all other energy and fossil fuel use that year in the state of Oregon – but this study concluded that the four fires combined produced only about 2.5 percent of annual statewide carbon emissions.

Even in 2002, the most extreme fire year in recent history, the researchers estimate that all fires across Oregon emitted only about 22 percent of industrial and fossil fuel emissions in the state – and that number is much lower for most years, about 3 percent on average for the 10 years from 1992 to 2001.

The OSU researchers said there are some serious misconceptions about how much of a forest actually burns during fires, a great range of variability, and much less carbon released than previously suggested. Some past analyses of carbon release have been based on studies of Canadian forests that are quite different than many U.S. forests, they said.

“A new appreciation needs to be made of what we’re calling ‘pyrodiversity,’ or wide variation in fire effects and responses,” said Garrett Meigs, a research assistant in OSU’s Department of Forest Ecosystems and Society. “And more studies should account for the full gradient of fire effects.”

The past estimates of fire severity and the amounts of carbon release have often been high and probably overestimated in many cases, said Beverly Law, a professor of forest ecosystems and society at OSU.

“Most of the immediate carbon emissions are not even from the trees but rather the brush, leaf litter and debris on the forest floor, and even below ground,” Law said. “In the past we often did not assess the effects of fire on trees or carbon dynamics very accurately.”

Even when a very severe fire kills almost all of the trees in a patch, the scientists said, the dead trees remain standing; they drop to the forest floor, decay, and release their carbon content only slowly, over several decades. Grasses and shrubs quickly grow back after high-severity fires, offsetting some of the carbon release from the dead and decaying trees. And across most of these Metolius burned areas, the researchers observed generally abundant tree regeneration that will result in a relatively fast recovery of carbon uptake and storage.

“A severe fire does turn a forest from a carbon sink into an atmospheric carbon source in the near-term,” Law said. “It might take 20-30 years in eastern Oregon, where trees grow and decay more slowly, for the forest to begin absorbing more carbon than it gives off, and 5-10 years on the west side of the Cascades.”

Since fire events are episodic in nature while greenhouse gas emissions are continuous and increasing, climate change mitigation strategies focused on human-caused emissions will have more impact than those emphasizing wildfire, the researchers said. And to be accurate, estimates of carbon impacts have to better consider burn severity, non-tree responses, and below-ground processes, they said.

“Even though it looks like everything is burning up in forest fires, that simply isn’t what happens,” Meigs said. “The trees are not vaporized even during a very intense fire. In a low-severity fire many of them are not even killed. And in the Pacific Northwest, the majority of burned area is not stand-replacement fire.”

Fire suppression has resulted in a short-term reduction of greenhouse gases, the researchers said, but on a long-term basis fire will still be an inevitable part of forest ecosystems. Timber harvest also has much more impact on carbon dynamics than fire. Because of this, forest fires will be a relatively minor player in greenhouse gas mitigation strategies compared to other factors, such as human consumption of fossil fuels, they said.

Global warming could cause higher levels of forest fire and associated carbon emissions in the future, the researchers said, although there are many uncertainties about how climate change will affect forests, and no indication that forest fire carbon emissions will become comparable to those caused by fossil fuel use.

(Photo: Garrett Meigs, Oregon State University)

Oregon State University



C. elegans, a tiny worm about a millimeter long, doesn’t have much of a brain, but it has a nervous system — one that comprises 302 nerve cells, or neurons, to be exact. In the 1970s, a team of researchers at Cambridge University decided to create a complete “wiring diagram” of how each of those neurons is connected to the others. Such wiring diagrams have recently been christened “connectomes,” drawing on their similarity to the genome, the total DNA sequence of an organism. The C. elegans connectome, reported in 1986, took more than a dozen years of tedious labor to complete.

Now a handful of researchers scattered across the globe are tackling a much more ambitious project: to find connectomes of brains more like our own. The scientists, including several at MIT, are working on technologies needed to accelerate the slow and laborious process that the C. elegans researchers originally applied to worms. With these technologies, they intend to map the connectomes of our animal cousins, and eventually perhaps even those of humans. Their results could fundamentally alter our understanding of the brain.

Mapping the millions of miles of neuronal “wires” in the brain could help researchers understand how those neurons give rise to intelligence, personality and memory, says Sebastian Seung, professor of computational neuroscience at MIT. For the past three years, Seung and his students have been building tools that they hope will allow researchers to unravel some of those connections. To find connectomes, researchers will need to employ vast computing power to process images of the brain. But first, they need to teach the computers what to look for.

Piecing together connectomes requires analyzing vast numbers of electron microscopic images of brain slices and tracing the tangled connections between neurons, each of which can send projections to other cells several inches away.

At the Max Planck Institute for Medical Research in Heidelberg, Germany, neuroscientists in the laboratory of Winfried Denk have assembled a team of several dozen people to manually trace connections between neurons in the retina. It’s a painstaking process — each neuron takes hours to trace, and each must be traced by as many as 10 people, in order to catch careless errors. Using this manual approach, finding the connectome of just one cubic millimeter of brain would take tens of thousands of work-years, says Viren Jain, who recently completed his PhD in Seung’s lab.

Jain and postdoctoral associate Srinivas Turaga want to speed up the process dramatically by enlisting the help of high-powered computers. To do that, they are teaching the computers to analyze the brain slices using a common computer science technique called machine learning, which allows computers to change their behavior in response to new data.

With machine learning, the researchers teach computers to learn by example. They feed their computer electron micrographs as well as human tracings of these images. The computer then searches for an algorithm that allows it to imitate human performance.

“Instead of specifying the details of how the computer does something, you give it an example of what you want it to do and an algorithm that tries to figure out how to do what you want,” says Jain. After the computer is trained on the human tracings, it is applied to electron micrographs that have not been traced by humans. This new technique represents the first time that computers have been effectively taught to segment images of any kind, not just images of neurons.
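The train-on-examples loop can be illustrated with a toy: generate a synthetic "micrograph" whose bright vertical line stands in for a membrane, take the human labeling of that line as the training examples, fit a simple classifier to pixel patches, and then apply it to a fresh, untraced image. This is a bare-bones logistic regression sketch for illustration only, not the convolutional networks the MIT group actually uses, and all the names and image sizes here are invented.

```python
# Toy sketch of learning to segment by imitating human tracings.
# NOT the authors' method: a plain logistic regression on 3x3 patches
# of a synthetic image, just to show the train-then-apply workflow.

import numpy as np

rng = np.random.default_rng(0)

def make_image(n=32):
    """Synthetic 'micrograph': dark tissue crossed by one bright membrane."""
    img = np.zeros((n, n))
    col = rng.integers(8, n - 8)           # position of the 'membrane'
    img[:, col] = 1.0
    img += 0.1 * rng.standard_normal((n, n))
    labels = np.zeros((n, n), dtype=int)
    labels[:, col] = 1                      # the human 'tracing'
    return img, labels

def patches(img, k=1):
    """Flattened (2k+1)x(2k+1) patch around every interior pixel."""
    n = img.shape[0]
    feats, idx = [], []
    for i in range(k, n - k):
        for j in range(k, n - k):
            feats.append(img[i - k:i + k + 1, j - k:j + k + 1].ravel())
            idx.append((i, j))
    return np.array(feats), idx

def train(X, y, lr=0.5, steps=300):
    """Gradient-descent logistic regression imitating the human labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Train on one 'human-traced' image
img, lab = make_image()
X, idx = patches(img)
y = np.array([lab[i, j] for i, j in idx])
w, b = train(X, y)

# Apply the learned rule to a fresh, untraced image
img2, lab2 = make_image()
X2, idx2 = patches(img2)
pred = (1 / (1 + np.exp(-(X2 @ w + b))) > 0.5).astype(int)
truth = np.array([lab2[i, j] for i, j in idx2])
print("accuracy on unseen image:", (pred == truth).mean())
```

The structure, not the classifier, is the point: human tracings supply the examples, an optimization loop finds parameters that imitate them, and the fitted rule is then run on images no human has traced.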

Jain and Turaga have also invented new ways of evaluating how well the computer imitates humans at the task of tracing. Those measures are crucial for machine learning because the computer, just like students in a class, will not learn the desired task well unless the “exam” properly measures performance.
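One classic way to score a machine tracing against a human one, related in spirit to (but not the same as) the measures Jain and Turaga developed, is the Rand index: for every pair of pixels, check whether the two segmentations agree on "same neuron" versus "different neurons." A minimal sketch:

```python
# Rand index: fraction of pixel pairs on which two segmentations agree
# about same-segment vs. different-segment membership. A classic metric,
# shown here for illustration; the authors' own measures differ.

import numpy as np
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Pairwise agreement between two labelings of the same pixels."""
    a, b = np.ravel(seg_a), np.ravel(seg_b)
    agree = total = 0
    for i, j in combinations(range(len(a)), 2):
        same_a = a[i] == a[j]
        same_b = b[i] == b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

human   = np.array([[1, 1, 2], [1, 2, 2]])   # two 'neurons'
machine = np.array([[1, 1, 2], [1, 1, 2]])   # one pixel labeled wrongly
print("Rand index:", round(rand_index(human, machine), 3))
```

A perfect tracing scores 1.0, and a single mislabeled pixel is penalized once for every pair it participates in, which is why such pairwise measures are sensitive to the split-and-merge errors that matter most when tracing neurons.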

In their early efforts, it took the computer weeks or even months to come up with an accurate neuron-tracing algorithm. However, Jain and Turaga cut that time dramatically when they started using computers equipped with graphics processing cards, allowing them to perform computations 50 to 100 times faster. Now, it takes only days for their computer programs to produce a new tracing algorithm.

Their eventual goal is to use computers to process the bulk of the images needed to create connectomes, but they expect that humans will still need to proofread the computers’ work. Jain and Turaga have reported their advances at the International Conference on Computer Vision and the Neural Information Processing Systems Conference.

Olaf Sporns, a neuroscientist at Indiana University who first proposed diagramming the connectome in 2005, says that he originally did not think it would be possible to create a map of individual connections between single neurons, and thought it would be best to focus on higher-level connections between brain regions.

“Doing such a microscopic level of resolution seemed to be infeasible at the time,” he says. “But now I’m coming around to the idea that something like that may well be possible.” The machine learning technology that Seung and his students are developing could be “a big leap forward” in making that kind of diagram a reality, Sporns adds.

Last year, the National Institutes of Health announced a five-year, $30 million Human Connectome Project to develop new techniques to figure out the connectivity of the human brain. That project is focused mainly on higher level, region-to-region connections. Sporns says he believes that a good draft of higher-level connections could be achieved within the five-year timeline of the NIH project, and that significant progress will also be made toward a neuron-to-neuron map.

Some neuroscientists believe that mapping connectomes could have just as much impact as sequencing the human genome. Much as genetic researchers can now compare individuals’ genes to look for variability that might account for diseases, brain researchers could discover which differences in the wiring diagrams are important in diseases like Alzheimer’s and schizophrenia, says Turaga.

Comparing connectomes as human development unfolds could also be informative, since much human behavior is learned, not encoded in the genome. “Compared to an adult, a baby doesn’t know how to do very much. The brain is slowly refined and becomes more powerful, and the thing that’s refined is the wiring diagram,” says Jain.

Many of the research teams that have begun working on neuron-to-neuron connectome diagrams are starting with small pieces of the whole. These teams include a group at Harvard that’s focusing on the human hippocampus, a brain region involved in memory and learning. Other groups are starting with brain diagrams for smaller animals such as mice and zebrafish.

Though only a handful of labs around the world are working on the connectome right now, Jain and Turaga expect that to change as tools for diagramming the brain improve. “It’s a common pattern in neuroscience: A few people will come up with new technology and pioneer some applications, and then everybody else will start to adopt it,” says Jain.

(Photo: Christine Daniloff)




A team of geographers from The University of Manchester have discovered a group of glaciers in one of Europe's most inhospitable places.

Drs Philip Hughes, Jeff Blackford and PhD student Rose Wilkinson, from the University’s Quaternary Environments and Geoarchaeology Group, found four glaciers in the 'Cursed' mountains of Albania last year.

They were following up Dr Hughes' 2006 expedition, funded by the Royal Geographical Society, to a nearby spot in Montenegro researching features carved into the landscape by past glaciers.

But he got more than he bargained for when he stumbled upon the real thing - a glacier which was until that point completely undiscovered. In 2009, they found four more in Albania.

Some of the findings were published in the December 2009 edition of the journal "Arctic, Antarctic, and Alpine Research", and a new research paper will appear in "Earth Surface Processes and Landforms" this year.

The glaciers sit at the relatively low elevation of 2,000 metres - almost unique for such a southerly latitude. Glaciers at this latitude usually lie much higher, and many survive only on higher mountains further north, such as in the Alps.

The Prokletije mountains - known as the 'cursed' mountains in Albanian - extend from northern Albania and Kosovo to eastern Montenegro in the Western Balkans.

The glaciers - the largest of which is currently the size of six football pitches - vary in size every year according to the amount of winter snowfall and temperatures during the following summer.

However, the geographers think at least eight glaciers were present in neighbouring mountains during the 19th century, correlating with the culmination of the 'Little Ice Age' in the European Alps.

Physical Geography lecturer at the University’s School of Environment and Development, Dr Philip Hughes said: "We were amazed that - as far as we know - no-one, apart from local shepherds, was aware of the existence of these glaciers, and it was tremendously exciting to find them.

"The fact that the mountains were until only recently surrounded by war and lawlessness might explain why they have proved so elusive.

"Only ten years ago, this area was out of bounds and crossing the border from Montenegro into Albania was prohibited.

"Another probable reason why we weren't aware of their existence is that very few people live in these mountains and there's so much late-lying snow and shadow they are not even visible on Google Earth."

Though the region is showing weak signs of recovery, it remains politically precarious.

The researchers hope that one day the glaciers will be enjoyed by visitors to the area - which is comparable to the Alps in terms of its attractiveness and scale.

Dr Jeff Blackford said: "The reason why these glaciers can form at such a low altitude - and so far south - is that there are sufficient quantities of windblown snow and, in particular, avalanching snow.

"Though these remaining glaciers seem to be relatively robust in response to regional climate change, it's clear that there were more glaciers in the area a hundred or so years ago.”

He added: "While more glaciers existed a hundred years ago because of cooler temperatures, it is very difficult to predict the future fate of these remaining glaciers.

“This is because of the strong local controls on climate in the high mountains.

“But if it gets warmer then these glaciers will melt away."

The scientists hope to make it to the Prokletije mountains to continue their research later this year.

PhD student Rose Wilkinson said: "The trip provided me with an important opportunity to progress my research, which looks at how vegetation responds to changes in climate over the past 500 years.

"These glaciers - which have not been studied before - will hopefully create an interesting record of environmental change in this area."

(Photo: U. Manchester)

The University of Manchester



Northwestern University researchers are the first to design a bioactive nanomaterial that promotes the growth of new cartilage in vivo and without the use of expensive growth factors. Minimally invasive, the therapy activates the bone marrow stem cells and produces natural cartilage. No conventional therapy can do this.

The results were published online the week of Feb. 1 by the Proceedings of the National Academy of Sciences (PNAS).

"Unlike bone, cartilage does not grow back, and therefore clinical strategies to regenerate this tissue are of great interest," said Samuel I. Stupp, senior author, Board of Trustees Professor of Chemistry, Materials Science and Engineering, and Medicine, and director of the Institute for BioNanotechnology in Medicine. Countless people -- amateur athletes, professional athletes and people whose joints have just worn out -- learn this all too well when they bring their bad knees, shoulders and elbows to an orthopaedic surgeon.

Damaged cartilage can lead to joint pain and loss of physical function and eventually to osteoarthritis, a disorder with an estimated economic impact approaching $65 billion in the United States. With an aging and increasingly active population, this figure is expected to grow.

"Cartilage does not regenerate in adults. Once you are fully grown you have all the cartilage you'll ever have," said first author Ramille N. Shah, assistant professor of materials science and engineering at the McCormick School of Engineering and Applied Science and assistant professor of orthopaedic surgery at the Feinberg School of Medicine. Shah is also a resident faculty member at the Institute for BioNanotechnology in Medicine.

Type II collagen is the major protein in articular cartilage, the smooth, white connective tissue that covers the ends of bones where they come together to form joints.

"Our material of nanoscopic fibers stimulates stem cells present in bone marrow to produce cartilage containing type II collagen and repair the damaged joint," Shah said. "A procedure called microfracture is the most common technique currently used by doctors, but it tends to produce a cartilage having predominantly type I collagen which is more like scar tissue."

The Northwestern gel is injected as a liquid into the area of the damaged joint, where it then self-assembles and forms a solid. This extracellular matrix, which mimics what cells usually see, binds by molecular design one of the most important growth factors for the repair and regeneration of cartilage. By keeping the growth factor concentrated and localized, the cartilage cells have the opportunity to regenerate.

Together with Nirav A. Shah, a sports medicine orthopaedic surgeon and former orthopaedic resident at Northwestern, the researchers implanted their nanofiber gel in an animal model with cartilage defects.

The animals were treated with microfracture, where tiny holes are made in the bone beneath the damaged cartilage to create a new blood supply to stimulate the growth of new cartilage. The researchers tested various combinations: microfracture alone; microfracture and the nanofiber gel with growth factor added; and microfracture and the nanofiber gel without growth factor added.

They found their technique produced much better results than the microfracture procedure alone and, more importantly, found that addition of the expensive growth factor was not required to get the best results. Instead, because of the molecular design of the gel material, growth factor already present in the body is enough to regenerate cartilage.

The matrix only needed to be present for a month to produce cartilage growth. The matrix, based on self-assembling molecules known as peptide amphiphiles, biodegrades into nutrients and is replaced by natural cartilage.

(Photo: Northwestern U.)

Northwestern University




Selected Science News. Copyright 2008 All Rights Reserved Revolution Two Church theme by Brian Gardner Converted into Blogger Template by Bloganol dot com