Thursday, April 1, 2010

SELF-ASSEMBLED NANOCOMPOSITES BOOST LITHIUM-ION BATTERY ANODES

A new high-performance anode structure based on silicon-carbon nanocomposite materials could significantly improve the performance of lithium-ion batteries used in a wide range of applications from hybrid vehicles to portable electronics.

Produced with a "bottom-up" self-assembly technique, the new structure takes advantage of nanotechnology to fine-tune its materials properties, addressing the shortcomings of earlier silicon-based battery anodes. The simple, low-cost fabrication technique was designed to be easily scaled up and compatible with existing battery manufacturing.

Details of the new self-assembly approach were published online in the journal Nature Materials on March 14.

"Development of a novel approach to producing hierarchical anode or cathode particles with controlled properties opens the door to many new directions for lithium-ion battery technology," said Gleb Yushin, an assistant professor in the School of Materials Science and Engineering at the Georgia Institute of Technology. "This is a significant step toward commercial production of silicon-based anode materials for lithium-ion batteries."

The popular and lightweight batteries work by transferring lithium ions between two electrodes -- a cathode and an anode -- through a liquid electrolyte. The more lithium ions the two electrodes can take up and release during charge and discharge cycles, the larger the battery's capacity.

Existing lithium-ion batteries rely on anodes made from graphite, a form of carbon. Silicon-based anodes theoretically offer as much as a ten-fold capacity improvement over graphite, but they have so far proven too unstable for practical use.
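
The ten-fold figure follows from standard electrochemistry. As a rough check (textbook values, not numbers from the article), the theoretical gravimetric capacity of an anode material is

\[ C = \frac{x\,F}{3.6\,M}\ \text{mAh/g}, \]

where \(x\) is the number of lithium atoms stored per formula unit of the host, \(M\) is the host's molar mass in g/mol, and \(F = 96{,}485\) C/mol is the Faraday constant. Graphite (LiC6: \(x = 1\), \(M = 72.06\)) gives about 372 mAh/g; fully lithiated silicon (Li22Si5: \(x = 4.4\) per silicon atom, \(M = 28.09\)) gives roughly 4,200 mAh/g, about eleven times the graphite value.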

Graphite anodes use particles ranging in size from 15 to 20 microns. If silicon particles of that size are simply substituted for the graphite, expansion and contraction as the lithium ions enter and leave the silicon creates cracks that quickly cause the anode to fail.

The new nanocomposite material solves that degradation problem, potentially allowing battery designers to tap the capacity advantages of silicon. That could facilitate higher power output from a given battery size -- or allow a smaller battery to produce a required amount of power.

"At the nanoscale, we can tune materials properties with much better precision than we can at traditional size scales," said Yushin. "This is an example of where having nanoscale fabrication techniques leads to better materials."

Electrical measurements of the new composite anodes in small coin cells showed they had a capacity more than five times greater than the theoretical capacity of graphite.

Fabrication of the composite anode begins with formation of highly conductive branching structures -- similar to the branches of a tree -- made from carbon black nanoparticles annealed in a high-temperature tube furnace. Silicon nanospheres with diameters of less than 30 nanometers are then formed within the carbon structures using a chemical vapor deposition process. The silicon-carbon composite structures resemble "apples hanging on a tree."

Using graphitic carbon as an electrically conductive binder, the silicon-carbon composites are then self-assembled into rigid spheres that have open, interconnected internal pore channels. The spheres, formed in sizes ranging from 10 to 30 microns, are used to form battery anodes. The relatively large composite powder size -- a thousand times larger than individual silicon nanoparticles -- allows easy powder processing for anode fabrication.

The internal channels in the silicon-carbon spheres serve two purposes. They admit liquid electrolyte to allow rapid entry of lithium ions for quick battery charging, and they provide space to accommodate expansion and contraction of the silicon without cracking the anode. The internal channels and nanometer-scale particles also provide short lithium diffusion paths into the anode, boosting battery power characteristics.
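
The power benefit of nanometer-scale silicon can be made concrete with a textbook scaling argument (not spelled out in the article): the characteristic time for lithium to fill a particle by solid-state diffusion grows as the square of the diffusion length,

\[ \tau \sim \frac{L^2}{D}, \]

where \(D\) is the lithium diffusivity in the host. At fixed \(D\), replacing a 15-micron particle with a 30-nanometer sphere shortens that time by a factor of roughly \((15\,\mu\text{m}/30\,\text{nm})^2 = 250{,}000\).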

The size of the silicon particles is controlled by the duration of the chemical vapor deposition process and the pressure applied to the deposition system. The size of the carbon nanostructure branches and the size of the silicon spheres determine the pore size in the composite.

Production of the silicon-carbon composites could be scaled up as a continuous process amenable to ultra high-volume powder manufacturing, Yushin said. Because the final composite spheres are relatively large when they are fabricated into anodes, the self-assembly technique avoids the potential health risks of handling nanoscale powders, he added.

Once fabricated, the nanocomposite anodes would be used in batteries just like conventional graphite structures. That would allow battery manufacturers to adopt the new anode material without making dramatic changes in production processes.

So far, the researchers have tested the new anode through more than a hundred charge-discharge cycles. Yushin believes the material would remain stable for thousands of cycles because no degradation mechanisms have become apparent.

"If this technology can offer a lower cost on a capacity basis, or lighter weight compared to current techniques, this will help advance the market for lithium batteries," he said. "If we are able to produce less expensive batteries that last for a long time, this could also facilitate the adoption of many 'green' technologies, such as electric vehicles or solar cells."

(Photo: GIT)

Georgia Institute of Technology

STUDY SUGGESTS ENVIRONMENT MAY IMPACT APES' ABILITY TO UNDERSTAND DECLARATIVE COMMUNICATION

When we notice somebody pointing at something, we automatically look in the direction of the gesture. In humans, the ability to understand this type of gesturing (known as declarative communication) may seem to be an automatic response, but it is actually a sign of sophisticated communication behavior. Numerous studies have tried to determine if great apes (for example, chimpanzees and bonobos) are able to understand declarative communication, but results have been mixed.

Psychological scientists Heidi Lyn and William Hopkins from Agnes Scott College and Jamie Russell from the Yerkes National Primate Research Center examined if exposure to different human communicative environments would affect understanding of declarative signals in chimpanzees and bonobos. Three groups of apes were tested in this study. One group consisted of chimpanzees that had been raised in standard laboratory housing; although they had regular contact with humans, these interactions were limited to basic animal-care contexts such as feeding. The other two groups of apes consisted of chimpanzees and bonobos that had been raised in socio-linguistically rich environments, where they were routinely exposed to complex communicative interactions with humans. In the current experiment, the apes participated in an object-choice task — they had to choose between two containers, one of which contained a food reward. The placement of the food in one of the containers was hidden from the apes, and a researcher indicated the correct container by pointing, vocalizing, or both.

The results, reported in Psychological Science, a journal of the Association for Psychological Science, indicate interesting differences between chimps and bonobos raised in socio-linguistically rich environments and chimps raised in standard laboratory housing. The bonobos and chimps that had been reared in the highly communicative environments performed significantly better than chimps that had been reared in standard laboratory settings in the pointing, vocalizing, and pointing-and-vocalizing conditions. Further analysis revealed that the best results occurred when the researcher simultaneously pointed and vocalized towards the correct container. This finding supports earlier studies that suggest visual cues enhance performance on auditory tasks.

These results indicate that apes may have the potential for understanding declarative communication and this potential may be achieved in specific environments. The authors conclude, “Because the ability to acquire declarative comprehension is common to both apes and humans, researchers must look elsewhere for a candidate biological change that allowed for the evolution of human language and cognition.”

Association for Psychological Science

HONEY, I SHRUNK THE RECEIVER

CSIRO and Australian company Sapphicon Semiconductor Pty Ltd have signed an agreement to jointly develop a complete radio receiver on a chip measuring just 5 mm x 5 mm that could eventually be used in mobile phones and other communications technologies.

The development of a low-cost, ultra-high-bandwidth ‘system-on-chip’ device could also replace traditional receivers currently used in radio astronomy applications, many of which are about the size of a bar fridge.

The chip’s first test will be in CSIRO’s Australian SKA Pathfinder (ASKAP) – an array of 36 radio dishes that acts as a single telescope now under construction in Western Australia.

It will be trialled in an innovative radio camera (or “phased array feed”) developed by CSIRO, which sits at the focal centre of each ASKAP dish to receive incoming cosmic radio waves.

“This chip will minimise the size and weight of the phased array feed, reduce cost and power, and facilitate maintenance,” said CSIRO Project Director for ASKAP, Dr David DeBoer.

“In our radio camera, it could revolutionise radio astronomy.”

Sapphicon Semiconductor’s CEO, Andrew Brawley, said the chip will be developed using the company’s Silicon-on-Sapphire CMOS process.

“A sapphire substrate is not ‘lossy’,” Mr Brawley said. “That’s important for an application such as radio astronomy, because it minimises losses in integrated passive components, significantly improving their performance.

“Perhaps the biggest advantage is that so much circuitry – RF, logic, mixed-signal and passives – can be incorporated into the same chip. This is real miniaturisation and could open up whole new product markets,” Mr Brawley said.

The chip is very high bandwidth, able to sample about 600 MHz around a central frequency of 1400 MHz.

International researchers developing the Square Kilometre Array radio telescope are interested in the R&D proposed by CSIRO and Sapphicon. No other group is developing a fully integrated single-chip receiver.

“This sort of technology, which looks to deliver mass-produced components on low-loss substrates consuming little power, is ideal for the SKA, which potentially needs millions of highly sensitive electronic sensors,” said Professor Richard Schilizzi, Director of the SKA Program Development Office in the UK, which coordinates international planning for the SKA.

“CSIRO is to be congratulated on this important step.”

The development project will take about two years to complete and will involve a number of stages of sub-component development and testing.

CSIRO will contribute the intellectual property it has generated over the last five years from research funded by the Commonwealth Government under the second round of the Major National Research Facilities program in 2001.

CSIRO’s SKA Director, Professor Brian Boyle, said the research led to a proof-of-concept implemented in RF-CMOS. “It’s a good demonstration of the value of long-range investment in these technologies,” he said.

Following its collaboration with Sapphicon and CSIRO on earlier proof-of-concept projects, the Centre for Technology Infusion at La Trobe University in Melbourne will also work with Sapphicon in the development of the novel chip.

CSIRO’s Business Development Manager for ASKAP, Dr Carole Jackson, said the development of these devices will require significant Australian electronics design expertise.

“It will encourage training and the diffusion of expertise throughout this industry.”

(Photo: Sapphicon Semiconductor Pty Ltd)

The Commonwealth Scientific and Industrial Research Organization (CSIRO)

WIND RESISTANCE

Wind power has emerged as a viable renewable energy source in recent years — one that proponents say could lessen the threat of global warming. Although the American Wind Energy Association estimates that only about 2 percent of U.S. electricity is currently generated from wind turbines, the U.S. Department of Energy has said that wind power could account for a fifth of the nation’s electricity supply by 2030.

But a new MIT analysis may serve to temper enthusiasm about wind power, at least at very large scales. Ron Prinn, TEPCO Professor of Atmospheric Science, and principal research scientist Chien Wang of the Department of Earth, Atmospheric and Planetary Sciences, used a climate model to analyze the effects of millions of wind turbines that would need to be installed across vast stretches of land and ocean to generate wind power on a global scale. Such a massive deployment could indeed impact the climate, they found, though not necessarily with the desired outcome.

In a paper published online Feb. 22 in Atmospheric Chemistry and Physics, Wang and Prinn suggest that using wind turbines to meet 10 percent of global energy demand in 2100 could cause temperatures to rise by one degree Celsius in the regions on land where the wind farms are installed, with a smaller increase in areas beyond those regions. Their analysis indicates the opposite result for wind turbines installed in water: a drop in temperatures by one degree Celsius over those regions. The researchers also suggest that the intermittency of wind power could require significant and costly backup options, such as natural gas-fired power plants.

Prinn cautioned against interpreting the study as an argument against wind power, urging that it be used to guide future research that explores the downsides of large-scale wind power before significant resources are invested to build vast wind farms. “We’re not pessimistic about wind,” he said. “We haven’t absolutely proven this effect, and we’d rather see that people do further research.”

Daniel Kirk-Davidoff, a chief scientist for MDA Federal Inc., which develops remote sensing technologies, and adjunct professor of meteorology at the University of Maryland, has examined the climate impacts of large-scale wind farms in previous studies. To him, the most promising result of the MIT analysis is that it indicates that the large-scale installation of wind turbines doesn’t appear to slow wind flow so much that it would be impossible to generate a desirable amount of energy. “When you put the wind turbines in, they are generating the kind of power you’d hope for,” he said.

Previous studies have predicted that annual world energy demand will increase from 14 terawatts (trillion watts) in 2002 to 44 terawatts by 2100. In their analysis, Prinn and Wang focus on the impact of using wind turbines to generate five terawatts of electric power, roughly the 10 percent share of projected 2100 demand.

Using a climate model developed by the U.S. National Center for Atmospheric Research, the researchers simulated the aerodynamic effects of large-scale wind farms — located both on land and on the ocean — to analyze how the atmosphere, ocean and land would respond over a 60-year span.

For the land analysis, they simulated the effects of wind farms by using data about how objects similar to turbines, such as undulating hills and clumps of trees, affect surface “roughness,” or friction that can disturb wind flow. After adding this data to the model, the researchers observed that the surface air temperature over the wind farm regions increased by about one degree Celsius, which averages out to an increase of 0.15 degrees Celsius over the entire global surface.
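
The roughness idea can be illustrated with the standard logarithmic wind profile from boundary-layer meteorology. The sketch below, in Python, uses that textbook relation with made-up roughness values; it is not the parameterization actually used in the NCAR model:

import numpy as np

KAPPA = 0.4  # von Karman constant

def wind_speed(z, u_star, z0):
    """Neutral-stability log-law wind profile: u(z) = (u*/kappa) * ln(z/z0)."""
    return (u_star / KAPPA) * np.log(z / z0)

# Illustrative values only: a roughness length z0 of ~0.1 m for open grassland
# versus an effective z0 of ~1 m as a stand-in for a large wind farm, with the
# friction velocity u* held fixed at 0.5 m/s for comparison.
for z0 in (0.1, 1.0):
    print(f"z0 = {z0:.1f} m -> u(100 m) = {wind_speed(100.0, 0.5, z0):.1f} m/s")

Raising the effective roughness length slows the near-surface wind, which is the mechanism the study links to weakened turbulent motion and heat transport.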

According to Prinn and Wang, this temperature increase occurs because the wind turbines affect two processes that play critical roles in determining surface temperature and atmospheric circulation: vertical turbulent motion and horizontal heat transport. Turbulent motion refers to the process by which heat and moisture are transferred from the land or ocean surface to the lower atmosphere. Horizontal heat transport is the process by which steady large-scale winds transport excessive heat away from warm regions, generally in a horizontal direction, and redistribute it to cooler regions. This process is critical for large-scale heat redistribution, whereas the effects of turbulent motion are generally more localized.

In the analysis, the wind turbines on land reduced wind speed, particularly on the downwind side of the wind farms, which reduced the strength of the turbulent motion and horizontal heat transport processes that move heat away from the Earth’s surface. This resulted in less heat being transported to the upper parts of the atmosphere, as well as to other regions farther away from the wind farms. The effect is similar to being at the beach on a windy summer day: If the wind weakened or disappeared, it would get warmer.

In contrast, when examining ocean-based wind farms, Prinn and Wang found that wind turbines cooled the surface by more than one degree Celsius. They cautioned that these results are unreliable, however, because their analysis modeled the effects of the turbines by introducing surface friction in the form of large artificial waves, which they acknowledge is not an accurate representation. A better way of simulating marine-based wind turbines must therefore be developed before reliable conclusions can be drawn.

In addition to changes in temperatures and surface heat fluxes, they also observed changes in large-scale precipitation, particularly at the mid-latitudes in the Northern Hemisphere. Although these changes exceeded 10 percent in some areas, the global total changes were not very large, according to Prinn and Wang.

To investigate the effect of wind variability on the intermittency of wind power generation, the researchers used the climate model to estimate the monthly-mean wind power and electrical generation for each continent, concluding that there are very large and geographically extensive seasonal variations, particularly over North and South America, Africa and the Middle East. They explain that this unreliability means that an electrical generation system with greatly increased use of wind turbines would still require backup generation even if continental-scale power lines enabled electrical transmission from windy to non-windy areas.

Although Prinn and Wang believe their results for the land-based wind farms are robust, Wang called their analysis a “proof-of-concept” study that requires additional theoretical and modeling work, as well as field experiments for complete verification.

Their next step is to address how to simulate ocean-based wind farms more accurately. They plan to collaborate with aeronautical engineers to develop parameters for the climate model that will allow them to simulate turbines in coastal waters.

(Photo: istockphoto)

MIT

PRINCETON SCIENTISTS SAY EINSTEIN'S THEORY APPLIES BEYOND THE SOLAR SYSTEM

A team led by Princeton University scientists has tested Albert Einstein's theory of general relativity to see if it holds true at cosmic scales. And, after two years of analyzing astronomical data, the scientists have concluded that Einstein's theory, which describes the interplay between gravity, space and time, works as well in vast distances as in more local regions of space.

The scientists' analysis of more than 70,000 galaxies demonstrates that the universe -- at least up to a distance of 3.5 billion light years from Earth -- plays by the rules set out by Einstein in his famous theory.

Ever since the physicist Arthur Eddington measured starlight bending around the sun during a 1919 eclipse, confirming a key prediction of Einstein's theory of general relativity, the scientific world has accepted the theory's tenets. But until now, according to the team, no one had tested the theory so thoroughly and robustly at distances and scales that go beyond the solar system.

Reinabelle Reyes, a Princeton graduate student in the Department of Astrophysical Sciences, along with co-authors Rachel Mandelbaum, an associate research scholar, and James Gunn, the Eugene Higgins Professor of Astronomy, outlined their assessment in the March 11 edition of Nature.

Other scientists collaborating on the paper include Tobias Baldauf, Lucas Lombriser and Robert Smith of the University of Zurich and Uros Seljak of the University of California-Berkeley.

The results are important, they said, because they shore up current theories explaining the shape and direction of the universe, including ideas about "dark energy," and dispel some hints from other recent experiments that general relativity may be wrong.

"All of our ideas in astronomy are based on this really enormous extrapolation, so anything we can do to see whether this is right or not on these scales is just enormously important," Gunn said. "It adds another brick to the foundation that underlies what we do."

First published in 1915, Einstein's general theory of relativity remains a pivotal breakthrough in modern physics. It redefined humanity's understanding of the fabric of existence -- gravity, space and time -- and ultimately explained everything from black holes to the Big Bang.

The groundbreaking theory showed that gravity can affect space and time, a key to understanding basic forces of physics and natural phenomena, including the origin of the universe. Shockingly, Einstein said, the flow of time could be affected by the force of gravity: clocks located at a distance from a large gravitational source will run faster than clocks positioned closer to that source. For scientists, the theory provides a basis for their understanding of the universe and the foundation for modern research in cosmology.

In recent years, several alternatives to general relativity have been proposed. These modified theories of gravity depart from general relativity on large scales to circumvent the need for dark energy, an elusive force that must exist for the calculations of general relativity to balance out. But because these theories were designed to match the predictions of general relativity about the expansion history of the universe, a factor that is central to current cosmological work, it has become crucial to know which theory is correct, or at least which best approximates reality.

"We knew we needed to look at the large-scale structure of the universe and the growth of smaller structures composing it over time to find out," Reyes said. The team used data from the Sloan Digital Sky Survey, a long-term, multi-institution telescope project mapping the sky to determine the position and brightness of several hundred million celestial objects.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material, the researchers have shown that Einstein's theory explains the nearby universe better than alternative theories of gravity.

The Princeton scientists studied the effects of gravity on these objects over long periods of time. They observed how this elemental force drives galaxies to clump into larger collections of themselves and how it shapes the expansion of the universe. They also studied the effects of a phenomenon known as "weak" gravitational lensing on galaxies as further evidence.

In weak lensing, matter -- galaxies and groups of galaxies -- that is closer to viewers bends light to change the shape of more distant objects, according to Mandelbaum. The effect is subtle, making viewers feel as if they are looking through a window made of old glass. Studying data collected from telescope surveys of regions showing what the universe looked like 5 billion years ago, the scientists could search for common factors in the distortion of multiple galaxies.

And, because relativity calls for the curvature of space to be equal to the curvature of time, the researchers could calculate whether light was influenced in equal amounts by both, as it should be if general relativity holds true.
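
In the standard notation of cosmological perturbation theory (a textbook framing, not spelled out in the article), those two curvatures are the potentials \(\Psi\) and \(\Phi\) in the perturbed metric

\[ ds^2 = -(1+2\Psi)\,dt^2 + a^2(t)\,(1-2\Phi)\,d\vec{x}^{\,2} . \]

Galaxy motions respond essentially to \(\Psi\) alone, while the deflection of light responds to the sum \(\Phi + \Psi\); general relativity with negligible anisotropic stress predicts \(\Phi = \Psi\), so comparing weak lensing against galaxy clustering and velocities tests that equality directly.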

"This is the first time this test was carried out at all, so it's a proof of concept," Mandelbaum said. "There are other astronomical surveys planned for the next few years. Now that we know this test works, we will be able to use it with better data that will be available soon to more tightly constrain the theory of gravity."

Astronomers made the discovery a decade ago that the expansion of the universe was speeding up. They attributed this acceleration to dark energy, which they hypothesized pervaded otherwise empty space and exerted a repulsive gravitational force. Dark energy could be a cosmological constant, proposed by Einstein in his theory of general relativity, or it could be a new form of energy whose properties evolve with time.

Firming up the predictive powers of Einstein's theory can help scientists better understand whether current models of the universe make sense, the scientists said.

"Any test we can do in building our confidence in applying these very beautiful theoretical things but which have not been tested on these scales is very important," Gunn said. "It certainly helps when you are trying to do complicated things to understand fundamentals. And this is a very, very, very fundamental thing."

(Photo: Brian Wilson)

Princeton University

THINGS WE WANT APPEAR CLOSER THAN THEY ARE, STUDIES SHOW

Tempted by a plate of cookies on the buffet table? Chances are, the goodies are a little farther away than you think they are. But your faulty estimation may give you a little added nudge to head over to the table and have one. (Or two.)

In research published in the January 2010 issue of the journal Psychological Science, psychology professor David Dunning and Emily Balcetis, Ph.D. '06 (now an assistant professor of psychology at New York University), found that when an object is desirable, we perceive it to be closer than it actually is. A $100 bill, for example, may appear just within reach -- while a letter from the IRS, if it were placed at exactly the same distance, may appear farther away.

The phenomenon could be part of an adaptive mechanism that gives us added incentive to pursue the things we want and discourages us from expending energy on things we don't.

In the study, the researchers first tested the effect of thirst (a physical desire) on distance perception. They asked 90 undergraduates -- half of whom had just eaten a serving of pretzels and half of whom hadn't -- to estimate the distance between themselves and a bottle of water. On average, the thirsty group judged the water to be 25 inches away, while the non-thirsty group estimated the distance at 28 inches.

To test the effect on social desires, the researchers then asked two groups of students to judge their distance from objects that had social value (a $100 bill that could be won or a form with positive feedback) and objects that had no value or negative value (a $100 bill that belonged to someone else or a form with critical feedback). Because mood has been shown in previous research to affect aspects of perception, the participants also completed a mood assessment exercise.

As in the first experiment, the desirable objects were thought to be closer than the undesirable ones. Mood, however, showed no effect on distance perception.

In a final set of experiments, the researchers tested whether the results were due to actual differences in perception, or instead to differences in the thought processes that go into reporting the perception.

Instead of asking participants to estimate inches to an object, they asked participants to toss a beanbag as close as possible to it or to walk a set distance toward or away from it. In both cases, the participants acted as though the desirable objects were closer.

The finding makes sense from an evolutionary perspective, said Dunning. "We know that things that are closer are more motivating than things that are farther away. So if you wanted to motivate an organism to go and pick up that thing that's really good for it or that it desires, you'd want an organism that would see that thing as closer."

Understanding how desire and other factors influence perception is also important in everyday life, he said. The way we perceive changes in our health can influence what kind of medical care we seek, for example. "Also interpersonal relationships -- if you're in a marriage, how loud do you think your spouse is yelling at you? Is that a smile or is that a smirk? There are a lot of ways perception might guide people toward a more pleasant or a less pleasant road."

Cornell University

WHY SURPRISES TEMPORARILY BLIND US

New research from Vanderbilt University reveals for the first time how our brains coordinate two types of attention, goal-directed and stimulus-driven, and why we may be temporarily blinded by surprises.

The research was published March 7, 2010, in Nature Neuroscience.

“The simple example of having your reading interrupted by a fire alarm illustrates a fundamental aspect of attention: what ultimately reaches our awareness and guides our behavior depends on the interaction between goal-directed and stimulus-driven attention. For coherent behavior to emerge, you need these two forms of attention to be coordinated,” René Marois, associate professor of psychology and co-author of the new study, said. “We found a brain area, the inferior frontal junction, that may play a primary role in coordinating these two forms of attention.”

The researchers were also interested in what happens to us when our attention is captured by an unexpected event.

“We wanted to understand what caused limitations in our conscious perception when we are surprised,” Christopher Asplund, a graduate student in the Department of Psychology and primary author of the new study, said. “We found that when shown a surprise stimulus, we are temporarily blinded to subsequent events.”

In their study, the research team asked individuals to detect the letter “X” in a stream of letters appearing on a screen while their brain activity was being monitored using functional magnetic resonance imaging, or fMRI. Occasionally, an image of a face would unexpectedly interrupt the stream.

The surprise caused subjects to completely miss the “X” the first couple of times, despite the fact that they were staring directly at the part of the screen on which it appeared. They were eventually able to identify it as successfully as when there was no surprise.

Using fMRI, the researchers found that the inferior frontal junction, a region of the lateral prefrontal cortex, was involved in both the original task and in the reaction to the surprise.

“What we think might be happening is that this brain area is coordinating different attention systems – it has a response both when you are controlling your attention and when you feel as though your attention is jerked away,” Asplund said.

Surprise stimuli trigger what is known as the orienting response in which the heart rate increases, the nervous system is more aroused and we pay intense attention to a new item in our environment. Described by Pavlov in dogs, the orienting response allows one to determine if a new item is a good thing, such as food, or a threat, such as a predator, and to react accordingly.

“What we show is the dark side or negative impact of the orienting response. We found it blinds you to other events that can occur soon after the presentation of the surprise stimulus,” Marois said.

The researchers hypothesize that we may be temporarily blinded by surprise because the surprise stimulus and the subsequent response occupy so much of our processing ability.

“The idea is that we can’t attend to everything at once,” Asplund said. “It seems that the inferior frontal junction is involved in this limitation in attention.”

The new research supports previous work by Marois’ laboratory that found the inferior frontal junction plays the role of an attentional bottleneck – limiting our ability to multitask and attend to many things at once.

“These new findings and our previous findings suggest that this area is centrally involved in the control of attention and may limit our attentional capacities,” Marois said. “It is a very exciting convergence of findings across our studies. We’re conducting studies now to demonstrate whether in fact disruption of activity in this brain region leads to loss of control of attention.”

(Photo: Vanderbilt U.)

Vanderbilt University

COMPUTATIONAL FEAT SPEEDS FINDING OF GENES TO MILLISECONDS INSTEAD OF YEARS

Like a magician who says, “Pick a card, any card,” Stanford University computer scientist Debashis Sahoo, PhD, seemed to be offering some kind of trick when he asked researchers at the Stanford Institute for Stem Cell Biology and Regenerative Medicine to pick any two genes already known to be involved in stem cell development. Finding such genes can take years and hundreds of thousands of dollars, but Sahoo was promising the skeptical stem cell scientists that, in a fraction of a second and for practically zero cost, he could find new genes involved in the same developmental pathway as the two genes provided.

Sahoo went on to show that this amazing feat could actually be performed. The proof-of-principle for his idea, published online March 15 in the Proceedings of the National Academy of Sciences, opens a powerful, mathematical route for conducting stem cell research and shows the power of interdisciplinary collaborations in science. It also demonstrates that using computers to mine existing databases can radically accelerate research in the laboratory. Ultimately, it may lead to advances in diverse areas of medicine such as disease diagnosis or cancer therapy.

Biologists have long used math and statistics in their work. In the simplest case, when looking for genes involved in a certain biological process, they look for genes that have a symmetrical correlation. For instance, if they know gene A is involved in a certain process, they try to determine if gene C is correlated with gene A during the same process.

Four years ago, while studying for his doctorate in electrical engineering with advisor David Dill, PhD, professor of computer science, and co-advisor Sylvia Plevritis, PhD, associate professor of radiology, Sahoo took an immunology class and realized that many of the relationships in biology are not symmetric, but asymmetric. As an analogy, Sahoo noted that trees bearing fruit almost certainly have leaves, but trees outside of the fruiting season may or may not have leaves, depending on the time of year.

Sahoo and Dill realized that these asymmetric relationships could be found by applying Boolean logic, in which the researchers established a series of if/then rules and then searched data for candidates that satisfied all the rules. For example, scientists might know that gene A is very active at the beginning of cell development, and gene C is active much later. By screening large public databases, Sahoo can find the genes that are almost never active when A is active, and almost always active when C is active, in many other types of cells. Researchers can then test to determine whether these genes become active between the early and late stages of development.
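
A minimal sketch of that kind of Boolean screen, in Python, assuming expression values have already been thresholded to on/off states (the gene names, tolerance and toy data are illustrative, not Sahoo's actual algorithm or data):

import numpy as np

def boolean_candidates(expr, gene_a, gene_c, tol=0.05):
    """Return genes that are almost never active when gene_a is active
    and almost always active when gene_c is active.

    expr : dict mapping gene name -> boolean numpy array over samples
           (True = active); tol is the allowed fraction of violations.
    """
    a, c = expr[gene_a], expr[gene_c]
    hits = []
    for gene, b in expr.items():
        if gene in (gene_a, gene_c):
            continue
        on_with_a = b[a].mean() if a.any() else 0.0      # active while A is active
        off_with_c = (~b[c]).mean() if c.any() else 0.0  # inactive while C is active
        if on_with_a <= tol and off_with_c <= tol:
            hits.append(gene)
    return hits

# Toy data: eight samples ordered from early to late development.
expr = {
    "A": np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool),  # early gene
    "C": np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=bool),  # late gene
    "B": np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=bool),  # candidate in between
}
print(boolean_candidates(expr, "A", "C"))  # -> ['B']

Genes that pass such a screen become hypotheses for activity between the early and late markers, which researchers can then test in the laboratory.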

In the paper, lead author Sahoo looked at gene expression patterns in the development of an immunological cell called a B cell. Starting with two known B-cell genes, Sahoo searched through databases with thousands of gene products in milliseconds and found 62 genes that matched the patterns he would expect to see for genes that got turned on in between the activation of the two genes he started with. He then examined databases involving 41 strains of laboratory mice that had been engineered to be deficient in one or more of the 62 genes. Of those 41 strains, 26 had defects in B cell development.

“This was the validation of the method,” Sahoo said. “Biologists are really amazed that, with just a computer algorithm, in milliseconds I can find genes that it takes them a really long time to isolate in the lab.” He added that he was especially gratified that the information comes from databases that are widely available and from which other scientists have already culled information.

Sahoo is now using the technique to find new genes that play a role in developing cancers.

“This shows that computational analysis of existing data can provide clues about where researchers should look next,” he said. “This is something that could have an impact on cancer. It’s exciting.”

(Photo: Stanford U.)

Stanford University

NASA SPOTS SURPRISING SHRIMP BENEATH ANTARCTIC ICE

At a depth of 600 feet beneath the West Antarctic ice sheet, a small shrimp-like creature managed to brighten up an otherwise gray polar day in November 2009. Bob Bindschadler of NASA's Goddard Space Flight Center, Greenbelt, Md., remembers the day well. He and his team were on a joint NASA-National Science Foundation expedition to examine the underside of the ice sheet when they found the pinkish-orange creature swimming beneath the ice.

"We were like little kids huddling around, just oohing and aahing at this little creature swimming around and giving us a little show," said Bindschadler. "It was the thrill of discovery that made us giddy; just totally unexpected."

The complex critter was identified as a Lyssianasid amphipod, about 3 inches in length. It was found beneath the 180-meter (590-foot) thick Ross Ice Shelf in Windless Bight, 20 miles northeast of McMurdo Station. Bindschadler and his team drilled an 8-inch diameter hole through the ice so that Alberto Behar of NASA's Jet Propulsion Laboratory, Pasadena, Calif., could submerge a small camera to obtain what are believed to be the first images of the underbelly of an ice shelf.

"This is the first time we've had a camera able to look back up at the ice. This probe is an upgrade to the original. It has three cameras – down, side and back-looking. The back-looking camera saw the shrimp-like animal," said Behar. The drilling in Windless Bight was part of the team's preparation for upcoming field studies 1000 miles from McMurdo where the Pine Island ice shelf is rapidly thinning and Antarctic ice is swiftly sliding off the continent, raising sea level. Bindschadler and his team want to find out why.

Behar designed the original NASA borehole camera apparatus in 1999. It's now seen six deployments with British, Australian and American science teams in Antarctica, Greenland and Alaska. He'll take this new camera rig to Pine Island with Bindschadler and others, and hopes to eventually probe into Antarctica's mysterious sub-glacial lakes. There he'll attach a fiber-optically tethered micro-submarine with high-resolution camera, "so we can swim within the lake."

The rig, originally developed by NASA, has proven to be invaluable to science teams around the world. "We wouldn't be able to use it in the places we've gone without collaboration with the National Science Foundation and our British and Australian partners, among many others," said Behar. "When we get to Pine Island we'll be able to look at the sea floor. We couldn't do it this time because the cavity was deeper than we expected, but we'll have a kilometer of cable at Pine Island."

It's not unusual to find amphipods and other marine life in Antarctic waters. The complex circulatory system of the surrounding ocean brings warm, salty, nutrient-rich water towards the Antarctic continent, helping to sustain life even in the cold, dark winter. When the Larsen B ice shelf collapsed in 2002, scientists discovered clams and bacterial mats, or large aggregations of bacteria, half a mile below the ocean surface. Even within their average temperature range of -1.8 to 1 degree Celsius (28.7 to 33.8 Fahrenheit), Antarctic waters are teeming with life.

"The ocean flows under ice sheets, and where there is exchange of water with the open ocean, there will be microbes and other food resources for larger animals such as jellyfish and amphipods," according to Peter Wiebe, a biologist from the Woods Hole Oceanographic Institution in Woods Hole, Mass., who studies marine life in the waters around West Antarctica.

But for a group of glaciologists, a familiar face was the last thing they expected to see below the ice and so far from the open ocean. "We thought we were just going into a deep, dark cold water hole and never anticipated we'd see any life," Bindschadler added. "The color was what caught our eyes."

The science team -- with members from the University of Alaska, Fairbanks; the Naval Postgraduate School, Monterey, Calif.; and Moss Landing Marine Laboratories, Monterey -- is now analyzing temperature, salinity and current data from the sub-glacial watering hole to understand if the comfortable conditions for this shrimp-like creature are typical.

NASA-funded scientists have long studied life in extreme environments. From astrobiology to extremophiles and survivophiles, the search for life in harsh places has led to a smorgasbord of discoveries seemingly ripped from the pages of science fiction. The Antarctic amphipod has gotten scientists talking again: if life-forms as complex as these can survive deep within sub-glacial waters, could they survive in other unusual and unfriendly environments in space?

Behar, also known for his work on robotic exploration of Mars, remarked, "The real benefit of these exploration programs is that you go in not knowing what you're going to find and you get surprised. It makes it worth all the trouble putting everything together when you find something new that you didn't expect."

(Photo: NASA-JPL/CalTech/Behar)

NASA
