Friday, April 30, 2010



With just a half second's notice, a driver can swerve to avoid a fatal accident or slam on the brakes to miss hitting a child running after a ball. But first, the driver must perceive the danger.

Research shows that a rapid alert system can help mitigate the risks, fatalities and severe injuries from road accidents, says Prof. Shai Avidan of Tel Aviv University's Faculty of Engineering. He is currently collaborating with researchers from General Motors Research Israel to keep cars on the road and people out of hospitals.

An expert in image processing, Prof. Avidan and his team are working to develop advanced algorithms that will help cameras mounted on GM cars detect threats, alerting drivers to make split-second decisions. His research has been published in leading journals, including the IEEE Transactions on Pattern Analysis and Machine Intelligence, and featured at conferences in the field.

The challenge, says Prof. Avidan, is to develop a system that can recognize people, distinguishing them from other moving objects — and to create a model that can react almost instantaneously. Ultimately, he is hoping computer vision research will make cars smarter, and roads a lot safer.

Cars are not much different from one another. They all have engines, seats, and steering wheels. But new products are adding another dimension by making cars more intelligent. One such product is the smart camera system by MobilEye, an Israeli startup company. Prof. Avidan was part of the MobilEye technical team that developed a system to detect vehicles and track them in real-time.

He is now extending that research to develop the next generation of smart cameras — cameras that are aware of their surroundings. His goal is a camera capable of distinguishing pedestrians from other moving objects that can then warn the driver of an impending accident.

The challenge is in the development of a method that can detect and categorize moving objects reliably and quickly. Prof. Avidan hopes to realize such a method by combining powerful algorithms to recognize and track objects. Such a tool could double check for vehicles in your blind spot, help you swerve when a child runs into the street, or automatically block your door from opening if a cyclist is racing toward you, he says.

Eventually, he hopes cameras will be able to recognize just about anything moving through the physical world, offering a tantalizing vision of applications such as autonomous vehicles. The underlying technology could also be used in computer gaming to track a player's movements, or for surveillance to detect a potential intruder.

Previously, detection systems used radar, which is expensive and not particularly sensitive to human beings. A smart camera fuelled by a powerful chip, on the other hand, could detect the activities of people and animals, and prompt the car to react accordingly, braking more or locking the doors, for example.

To date, Prof. Avidan has demonstrated that his technology works on infrared, greyscale, and color cameras. "Cameras are quite dumb machines unless you know how to extract information from them," he says. "Now, as the price of cameras drop and computer power grows, we'll see more exciting applications that will keep us safe and make our lives more comfortable."

(Photo: TAU)

Tel Aviv University


When we like a product, do we think others will like it, too? And when we believe others like a product, do we like it as well? A new study in the Journal of Consumer Research says these two questions are fundamentally different.

"The answer to the first question (Will others like it?) requires people to start with their own product preferences, which we call projection," write authors Caglar Irmak (University of South Carolina), Beth Vallen (Loyola University), and Sankar Sen (Baruch College). The second question (If others like it, do I?) makes people think first about others' preferences and then decide whether they like the product or not, which is called "introjection."

"We show that different psychological processes underlie projection and introjection," the authors write. "In particular, we demonstrate that providing our own opinion about a product before thinking about others' preferences, as in projection, affirms one's sense of uniqueness." This, in turn, weakens uniqueness motivations and leads consumers to predict that others will like what they themselves like.

On the other hand, thinking about others' preferences before our own (introjection) threatens our sense of uniqueness. "As a result, those who are in high need for uniqueness don't like what other people like," the authors explain.

In their studies, the authors showed participants advertisements for one of two novel technology products that had not yet been introduced to the market. One group of participants, assigned to the projection condition, stated their own preferences for the product and then estimated those of others. Another group, assigned to the introjection condition, estimated the preferences of others and then reported their own. The authors then measured the participants' need for uniqueness.

"If we learn others' preferences before forming our own, we tend to preserve our uniqueness by altering our product preferences accordingly," the authors write. "If, however, we already have an opinion about a product, we are okay with others following us."

University of Chicago



In order to make electric cars a part of everyday life, new vehicle designs and parts are needed. Take wheel hub motors, for instance. One advantage of wheel hub motors is that manufacturers can dispense with the conventional engine bay – the space under the »hood« or »bonnet« – since the motors are attached directly to the wheels of the vehicle. This opens up a wealth of opportunities for car designers when drafting the layout of the vehicle. There are further advantages: dispensing with the transmission and differential eliminates the losses and wear of those mechanical transmission elements, and the direct drive on each individual wheel may improve driving dynamics and driving safety.

Researchers are developing not only individual components, but the total system as well. They assemble the components on their concept car, known as the »Frecc0« or the »Fraunhofer E-Concept Car Type 0« – a scientific test platform. Starting next year, automobile manufacturers and suppliers will also be able to use the »Frecc0« for testing new components. The basis of this demo model is an existing car: The new Artega GT manufactured by Artega Automobil GmbH. The establishment of this platform and the engineering of the wheel hub motor are just two projects among the panoply run by »Fraunhofer System Research for Electromobility«. The research cooperative is focusing on subjects that include vehicle design, energy production, distribution and implementation, energy storage techniques, technical system integration and sociopolitical matters. The federal ministry for education and research BMBF is funding this Fraunhofer initiative with 44 million euro. The goal is to develop prototypes for hybrid and electric vehicles, in order to support the German automotive industry as it makes the crossover to electromobility.

Wheel hub motors were invented back in the 19th century. Ferdinand Porsche used these motors to equip his »Lohner Porsche« at the 1900 World Fair in Paris. Much has been done since then: »We are developing a wheel hub motor that integrates all essential electric and electronic components, especially the power electronics and electronic control systems, into the installation space of the motor. Thus, no external electronics are necessary and the number and scope of the feed lines can be minimized. There is a marked increase in power compared to the wheel hub motors currently available on the market. Moreover, there is an innovative safety and redundancy concept, which guarantees driving safety – even if the system breaks down,« explains Professor Matthias Busse, head of the Fraunhofer Institute for Manufacturing Engineering and Applied Materials Research IFAM. Besides IFAM, researchers from the Fraunhofer Institute for Integrated Systems and Device Technology IISB, for Mechanics of Materials IWM and for Structural Durability and System Reliability LBF are tackling these issues.

Critics find fault with the negative effects of wheel hub motors on vehicle handling. Dr. Hermann Pleteit, IFAM project manager, responds: »The motor is extremely compact. The high power and torque densities cause only a relatively small increase in unsprung mass. And by configuring the chassis accordingly – the damper settings, for example – you can compensate for these effects. There is no impact on driving comfort.«

The researchers are meeting yet another challenge: in contrast to conventional vehicles, electric cars can recapture the energy released during braking and feed it back into the battery – a process the experts call »recuperation«. They are now working to maximize this energy recapture, so that the conventional braking system, which remains fitted, will be needed only in emergency situations.

With the Fraunhofer wheel hub motor, the researchers are implementing Ferdinand Porsche's idea for the cars of the future, and testing these components on the demonstration vehicle.

(Photo: Fraunhofer IFAM)




As the solar wind flows over natural obstructions on the moon, it may charge polar lunar craters to hundreds of volts, according to new calculations by NASA's Lunar Science Institute team.

Polar lunar craters are of interest because of resources, including water ice, which exist there. The moon's orientation to the sun keeps the bottoms of polar craters in permanent shadow, allowing temperatures there to plunge below minus 400 degrees Fahrenheit, cold enough to store volatile material like water for billions of years. "However, our research suggests that, in addition to the wicked cold, explorers and robots at the bottoms of polar lunar craters may have to contend with a complex electrical environment as well, which can affect surface chemistry, static discharge, and dust cling," said William Farrell of NASA's Goddard Space Flight Center, Greenbelt, Md. Farrell is lead author of a paper on this research published March 24 in the Journal of Geophysical Research. The research is part of the Lunar Science Institute's Dynamic Response of the Environment at the moon (DREAM) project.

"This important work by Dr. Farrell and his team is further evidence that our view on the moon has changed dramatically in recent years," said Gregory Schmidt, deputy director of the NASA Lunar Science Institute at NASA's Ames Research Center, Moffett Field, Calif. "It has a dynamic and fascinating environment that we are only beginning to understand."

Solar wind inflow into craters can erode the surface, which affects recently discovered water molecules. Static discharge could short out sensitive equipment, while the sticky and extremely abrasive lunar dust could wear out spacesuits and may be hazardous if tracked inside spacecraft and inhaled over long periods.

The solar wind is a thin gas of electrically charged components of atoms -- negatively charged electrons and positively charged ions -- that is constantly blowing from the surface of the sun into space. Since the moon is only slightly tilted compared to the sun, the solar wind flows almost horizontally over the lunar surface at the poles and along the region where day transitions to night, called the terminator.

The researchers created computer simulations to discover what happens when the solar wind flows over the rims of polar craters. They discovered that in some ways, the solar wind behaves like wind on Earth -- flowing into deep polar valleys and crater floors. Unlike wind on Earth, the dual electron-ion composition of the solar wind may create an unusual electric charge on the side of the mountain or crater wall; that is, on the inside of the rim directly below the solar wind flow.

Since electrons are over 1,000 times lighter than ions, the lighter electrons in the solar wind rush into a lunar crater or valley ahead of the heavy ions, creating a negatively charged region inside the crater. The ions eventually catch up, but rain into the crater at consistently lower concentrations than that of the electrons. This imbalance in the crater makes the inside walls and floor acquire a negative electric charge. The calculations reveal that the electron/ion separation effect is most extreme on a crater's leeward edge – along the inside crater wall and at the crater floor nearest the solar wind flow. Along this inner edge, the heavy ions have the greatest difficulty getting to the surface. Compared to the electrons, they act like a tractor-trailer struggling to follow a motorcycle; they just can't make as sharp a turn over the mountain top as the electrons. "The electrons build up an electron cloud on this leeward edge of the crater wall and floor, which can create an unusually large negative charge of a few hundred volts relative to the dense solar wind flowing over the top," says Farrell.
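The mass disparity Farrell describes can be made concrete with a back-of-the-envelope sketch. The figures below are illustrative assumptions, not values from the study: at a common temperature, the mean thermal speed of a Maxwellian gas scales as one over the square root of particle mass, so solar-wind electrons outpace protons by a factor of roughly forty.

```python
import math

# Illustrative constants and an assumed solar-wind temperature
# (not figures from the NASA study).
M_E = 9.109e-31   # electron mass, kg
M_P = 1.673e-27   # proton mass, kg (the solar wind is mostly protons)
K_B = 1.381e-23   # Boltzmann constant, J/K
T = 1.2e5         # assumed solar-wind temperature, K (~10 eV)

def mean_thermal_speed(mass_kg, temp_k):
    """Mean speed sqrt(8 k T / (pi m)) of a Maxwellian gas."""
    return math.sqrt(8 * K_B * temp_k / (math.pi * mass_kg))

v_e = mean_thermal_speed(M_E, T)
v_p = mean_thermal_speed(M_P, T)
print(f"electron: {v_e / 1e3:.0f} km/s")
print(f"proton:   {v_p / 1e3:.0f} km/s")
print(f"ratio:    {v_e / v_p:.0f}")  # ~sqrt(m_p / m_e) ≈ 43
```

The ratio depends only on the mass ratio, not on the assumed temperature, which is why the electron cloud forms on the leeward wall regardless of solar-wind conditions.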

The negative charge along this leeward edge won't build up indefinitely. Eventually, the attraction between the negatively charged region and positive ions in the solar wind will cause other unusual electric currents to flow. The team believes one possible source for this current could be negatively charged dust that is repelled by the negatively charged surface, becomes levitated and flows away from this highly charged region. "The Apollo astronauts in the orbiting Command Module saw faint rays on the lunar horizon during sunrise that might have been scattered light from electrically lofted dust," said Farrell. "Additionally, the Apollo 17 mission landed at a site similar to a crater environment – the Taurus-Littrow valley. The Lunar Ejecta and Meteorite Experiment left by the Apollo 17 astronauts detected impacts from dust at terminator crossings, where the solar wind flow is nearly horizontal, similar to the situation over polar craters."

Next steps for the team include more complex computer models. "We want to develop a fully three-dimensional model to examine the effects of solar wind expansion around the edges of a mountain. We now examine the vertical expansion, but we want to also know what happens horizontally," said Farrell. As early as 2012, NASA will launch the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission that will orbit the moon and could look for the dust flows predicted by the team's research.

(Photo: NASA/Goddard Space Flight Center)



Richard Borgens and his colleagues from the Center for Paralysis Research at the Purdue School of Veterinary Medicine have a strong record of inventing therapies for treating nerve damage. From Ampyra, which improves walking in multiple sclerosis patients, to a spinal cord stimulator for spinal injury victims, Borgens has had a hand in developing therapies that directly impact patients and their quality of life. Another therapy currently undergoing testing is the use of polyethylene glycol (PEG) to seal and repair damaged spinal cord nerve cells. By repairing the damaged membranes of nerve cells, Borgens and his team can restore the spinal cord's ability to transmit signals to the brain. However, there is one possible clinical drawback: PEG's breakdown products are potentially toxic. Is there a biodegradable, non-toxic compound that is equally effective at targeting and repairing damaged nerve membranes?

Borgens teamed up with physiologist Riyi Shi and chemist Youngnam Cho, who pointed out that some sugars are capable of targeting damaged membranes. Could they find a sugar that restored spinal cord activity as effectively as PEG? Borgens and his team publish their discovery that chitosan can repair damaged nerve cell membranes in the 16 April 2010 issue of The Journal of Experimental Biology.

Having initially tested mannose and found that it did not repair spinal cord nerve membranes, Cho decided to test a modified form of chitin, one of the most common sugars, found in crustacean shells. After converting the chitin into chitosan, Cho isolated a segment of guinea pig spinal cord, compressed a section, applied the chitosan and then added a fluorescent dye that could only enter cells through damaged membranes. If the chitosan repaired the crushed membranes, the spinal cord tissue would remain unstained; if it had failed, the spinal cord neurons would be flooded with the fluorescent dye. Viewing a section of the spinal cord under the microscope, Cho was amazed to see that it was completely dark. None of the dye had entered the nerve cells. Chitosan had repaired the damaged cell membranes.

Next Cho tested whether a dose of chitosan could prevent large molecules from leaking from damaged spinal cord cells. Testing for the presence of the colossal enzyme lactate dehydrogenase (LDH), Borgens admits he was amazed to see that levels of LDH leakage from chitosan treated spinal cord were lower than from undamaged spinal cords. Not only had the sugar repaired membranes at the compression site but also at other sites where the cell membranes were broken due to handling. And when the duo tested for the presence of harmful reactive oxygen species (ROS), released when ATP generating mitochondria are damaged, they found that ROS levels also fell after applying chitosan to the damaged tissue: chitosan probably repairs mitochondrial membranes as well as the nerve cell membranes.

But could chitosan restore the spinal cord's ability to transmit electrical signals to the brain through a damaged region? Measuring the brain's response to nerve signals generated in a guinea pig's hind leg, the duo saw that the signals were unable to reach the brain through a damaged spinal cord. However, 30 minutes after injecting chitosan into the rodents, the signals miraculously returned to the animals' brains. Chitosan was able to repair the damaged spinal cord so that it could carry signals from the animal's body to its brain.

Borgens is extremely excited by this discovery that chitosan is able to locate and repair damaged spinal cord tissue, and is even more enthusiastic about the prospect that nanoparticles of chitosan could also target delivery of neuroprotective drugs directly to the site of injury, 'giving us a dual bang for our buck,' he says.

The Company of Biologists Ltd



The eerie glow that straddles the night-time zodiac in the eastern sky is no longer a mystery. Joshua Childrey first explained it in 1661 as sunlight scattered in our direction by dust particles in the solar system, but the source of that dust was long debated. In a paper published in the April 20 issue of The Astrophysical Journal, David Nesvorny and Peter Jenniskens put a stake through the asteroid hypothesis: more than 85 percent of the dust, they conclude, originated from Jupiter Family comets, not asteroids.

"This is the first fully dynamical model of the zodiacal cloud," says planetary scientist Nesvorny of the Southwest Research Institute in Boulder, Colo. "We find that the dust of asteroids is not stirred up enough over its lifetime to make the zodiacal dust cloud as thick as observed. Only the dust of short-period comets is scattered enough by Jupiter to do so."

This result confirms what meteor astronomer Jenniskens of the SETI Institute in Mountain View, Calif., had long suspected. An expert on meteor showers, he had noticed that most consist of dust moving in orbits similar to those of Jupiter Family comets, but without having active dust-oozing comets associated with them.

Instead, Jenniskens discovered a dormant comet in the Quadrantid meteor shower in 2003 and has since identified a number of other such parent bodies. While most are inactive in their present orbit around the Sun, all broke apart violently at some point in the past few thousand years, creating dust streams that have since migrated into Earth's path.

Nesvorny and Jenniskens, with the help of Harold Levison and William Bottke of the Southwest Research Institute, David Vokrouhlicky of the Institute of Astronomy at Charles University in Prague, and Matthieu Gounelle of the Natural History Museum in Paris, demonstrated that these comet disruptions can account for the observed thickness of the dust layer in the zodiacal cloud.

In doing so, they solved another mystery. It has long been known that snow in Antarctica is laced with micro-meteorites, some 80 to 90 percent of which have a peculiar primitive composition that is rare among the larger meteorites known to originate from asteroids. Nesvorny and Jenniskens suggest that most Antarctic micro-meteorites are in fact pieces of comets. According to their calculations, cometary grains dive into Earth's atmosphere at entry speeds low enough for them to survive, reach the ground, and be picked up later by a curious micro-meteorite hunter.

(Photo: Southwest Research Institute)

Southwest Research Institute

Thursday, April 29, 2010



Evidence that the world's water cycle has already intensified is contained in new research to be published in the American Meteorological Society's Journal of Climate.

The stronger water cycle means arid regions have become drier and high rainfall regions wetter as atmospheric temperature increases.

The study, co-authored by CSIRO scientists Paul Durack and Dr Susan Wijffels, shows the surface ocean beneath rainfall-dominated regions has freshened, whereas ocean regions dominated by evaporation are saltier. The paper also confirms that surface warming of the world’s oceans over the past 50 years has penetrated into the oceans’ interior changing deep-ocean salinity patterns.

"This is further confirmation from the global ocean that the Earth’s water cycle has accelerated," says Mr Durack, a PhD student in the joint CSIRO/University of Tasmania Quantitative Marine Science program.

"These broad-scale patterns of change are qualitatively consistent with simulations reported by the Intergovernmental Panel on Climate Change (IPCC).

"While such changes in salinity would be expected at the ocean surface (where about 80 per cent of surface water exchange occurs), sub-surface measurements indicate much broader, warming-driven changes are extending into the deep ocean," Mr Durack said.

The study finds a clear link between salinity changes at the surface driven by ocean warming and changes in the ocean subsurface which follow the trajectories along which surface water travels into the ocean interior.

The ocean's average surface temperature has risen by around 0.4°C since 1950. As the near-surface atmosphere warms, it can evaporate more water from the surface ocean and move it to new regions to release it as rain and snow. Salinity patterns reflect the contrast between ocean regions where the oceans lose water to the atmosphere and those where it is returned to the surface as salt-free rainwater.

"Observations of rainfall and evaporation over the oceans in the 20th century are very scarce. These new estimates of ocean salinity changes provide a rigorous benchmark to better validate global climate models and start to narrow the wide uncertainties associated with water cycle changes and oceanic processes both in the past and the future – we can use ocean salinity changes as a rain-gauge," Mr Durack said.

Based on historical records and data provided by the Argo Program's world-wide network of ocean profilers – robotic submersible buoys which record and report ocean salinity levels and temperatures to depths of two kilometres – the research was conducted by CSIRO's Wealth from Oceans Flagship and partially funded by the Australian Climate Change Science Program. Australia’s Integrated Marine Observing System is a significant contributor to the global Argo Program.

(Photo: Alicia Navidad)

The Commonwealth Scientific and Industrial Research Organisation (CSIRO)



Researchers from Imperial College London have created a structure that acts like a single pole of a magnet, a feat that has evaded scientists for decades. The researchers say their new Nature Physics study takes them a step closer to isolating a ‘magnetic monopole’.

Magnets have two magnetic poles, north and south. ‘Like’ poles, such as north and north, repel one another and ‘opposite’ poles, such as north and south, attract. Whichever way a magnet is cut, it will always have these two poles.

Scientists have theorised for many years that it must be possible to isolate a ‘magnetic monopole’, either north or south on its own, but until recently researchers have been unable to show this in experiments.

Researchers at Imperial have now enabled tiny nano-sized magnets to behave like magnetic monopoles, by arranging them in a honeycomb structure. In late 2009, various teams of scientists reported they had created monopole-like behaviour in a material called ‘spin ice’. In these materials, monopoles form only at extremely low temperatures of -270 degrees Celsius. The Imperial researchers’ structure contains magnetic monopoles at room temperature.

(Photo: ICL)

Imperial College London



In a world first, xenon gas has been successfully delivered to a newborn baby in a bid to prevent brain injury following a lack of oxygen at birth. This pioneering technique was developed by Professor Marianne Thoresen of the University of Bristol and carried out at St Michael’s Hospital, part of University Hospitals Bristol NHS Foundation Trust.

Every year in the UK, more than 1,000 otherwise healthy babies born at full term die or suffer brain injury caused by a lack of oxygen and/or blood supply at birth. This can lead to lifelong problems such as cerebral palsy.

The use of xenon gas to prevent brain injury was developed by Professor Thoresen with Dr John Dingley of Swansea University, in a study funded by Sparks, the children’s medical research charity.

The University of Bristol and St Michael’s Hospital have pioneered new treatments for brain injury in babies since 1998 when Professor Thoresen first started cooling babies after a lack of oxygen and showed that this technique could reduce damage in the newborn brain.

Professor Thoresen’s original laboratory work from 1995 had shown that cooling after lack of oxygen reduced brain injury in animal models. Clinical trials in humans then proved that mild cooling by only a few degrees for 72 hours is a safe and beneficial treatment. However, cooling only partially reduces disability and does not prevent it in all babies. The search thus began for a second treatment that could be added to cooling to further reduce disability.

Professor Marianne Thoresen said: “Xenon is a very rare and chemically inert anaesthetic gas found in tiny quantities in the air that we breathe. In 2002 John Dingley and I realised the potential xenon and cooling might have in combination to further reduce disability. Over the past eight years, we have shown in the laboratory that xenon adds to the protective effect of cooling on the brain; however we faced the challenge of how to successfully deliver this rare and extremely expensive gas to newborn babies.”

Dr Dingley has been developing equipment in Swansea for xenon anaesthesia in adults for over 10 years and has invented a machine to successfully deliver the gas to babies. His machine takes the exhaled gas, removes any waste products from it and re-circulates it to be breathed again without any loss at all to the outside air. Some types of specialist military diving equipment work in this way but it is very unusual to build a system small enough to work reliably in newborn babies.

Dr Dingley said: “A key design feature of this machine is that it is very efficient, using less than 200ml of xenon per hour – less than the volume of a soft drinks can. Xenon is a precious and finite resource and difficult to extract so it can cost up to £30 per litre. As even newborns breathe many litres of air per hour, any xenon based treatment would be impossibly expensive without an economical delivery method.

“Despite these challenges, the lack of side-effects and brain protecting properties of xenon make it uniquely attractive as a potential treatment to apply alongside cooling in these babies. We are very grateful to Sparks, the children’s medical research charity, for supporting us in making this happen.”
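The economics behind the closed-circuit design can be sketched with a little arithmetic. Only the 200 ml/hour usage and the up-to-£30-per-litre price come from the article; the newborn ventilation volume and xenon concentration below are illustrative assumptions used to show why venting each breath would be prohibitively expensive.

```python
# Rough cost comparison: recirculating (closed) circuit vs. venting
# each breath (open circuit). Assumed, illustrative figures.
XENON_PRICE_PER_L = 30.0    # £/litre, upper figure quoted in the article
CLOSED_USE_L_PER_H = 0.2    # litres/hour used by the recirculating machine
VENTILATION_L_PER_H = 12.0  # assumed newborn breathing volume, litres/hour
XENON_FRACTION = 0.5        # assumed inhaled xenon concentration

closed_cost = CLOSED_USE_L_PER_H * XENON_PRICE_PER_L
open_cost = VENTILATION_L_PER_H * XENON_FRACTION * XENON_PRICE_PER_L

print(f"closed circuit: £{closed_cost:.0f}/hour")  # £6/hour
print(f"vented circuit: £{open_cost:.0f}/hour")    # £180/hour
```

Under these assumptions the recirculating machine cuts the running cost by a factor of thirty, which is the point Dr Dingley makes about an economical delivery method.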

Following rigorous approvals from the Medicines and Healthcare products Regulatory Agency and other regulatory hurdles, the device is now authorised for clinical trials and will be used on a minimum of 12 babies over the coming months. Successful completion of this feasibility trial is the first step required before larger trials can be run in baby units.

(Photo: Bristol U.)



Using a sensor that weighs cells with unprecedented precision, MIT and Harvard researchers have for the first time measured the rate at which single cells accumulate mass — a feat that could shed light on how cells control their growth and why those controls fail in cancer cells.

The research team, led by Scott Manalis, MIT associate professor of biological engineering, revealed that individual cells vary greatly in their growth rates, and also found evidence that cells grow exponentially (meaning they grow faster as they become larger).

The new measurement system, reported in the April 11 edition of the journal Nature Methods, is the first technique that can measure cells’ mass as they grow over a period of time (in this case, ranging from five to 30 minutes). Previous methods for measuring cell growth rates have focused on volume or length measurements, and have not yet exhibited the necessary precision for revealing single-cell growth models.

The new method should give researchers a way to unravel the relationship between cell growth and cell division — a relationship that has long been murky, says Marc Kirschner, professor of systems biology at Harvard Medical School. While biologists have a good idea of how the cell division cycle is controlled, “the problem of cell growth — how a cell regulates the amount of material it makes — is not well known at all,” says Kirschner, an author of the Nature Methods paper.

A longstanding question in studies of cell growth is whether growth is linear or exponential. Previous studies have yielded conflicting data.

“Over the twofold size range experienced by most proliferating cells, linear and exponential growth curves differ by less than 10 percent, and so the measurement precision must be much less than this,” says Manalis, a member of MIT’s David H. Koch Institute for Integrative Cancer Research.
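Manalis's figure is easy to verify numerically. The sketch below compares a linear and an exponential growth curve that agree at the start (mass m0) and the end (mass 2·m0) of one doubling interval, and finds the largest gap between them.

```python
# Numerical check of the quoted <10 percent figure: over one mass
# doubling, how far apart can linear and exponential growth curves be
# if they agree at both endpoints? (m0 = 1, time normalized to [0, 1].)
def linear(t):
    return 1.0 + t          # m(t) = m0 * (1 + t)

def exponential(t):
    return 2.0 ** t         # m(t) = m0 * 2**t

max_rel_diff = max(
    (linear(i / 1000) - exponential(i / 1000)) / exponential(i / 1000)
    for i in range(1001)
)
print(f"largest relative difference: {max_rel_diff:.1%}")  # about 6.1%
```

The two curves never differ by more than about 6 percent, which is why distinguishing the growth models demands measurement precision well below that level.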

The researchers studied four types of cells: two strains of bacteria (E. coli and B. subtilis), a strain of yeast and mammalian lymphoblasts (precursors to white blood cells). They showed that B. subtilis cells appear to grow exponentially, but they did not obtain conclusive evidence for E. coli. That’s because there is so much variation between individual cell growth rates in E. coli, even for cells of similar mass, says Francisco Delgado, a graduate student in Manalis’ lab and co-lead author of the paper.

If cells do grow exponentially, it means there must be some kind of mechanism to control that growth, says Kirschner. Otherwise, when cells divide into two slightly different-sized daughter cells, as they often do, the larger cell in each generation would always grow faster than the smaller cell, leading to inconsistent cell sizes. Instead, cells generally even out in size, through a mechanism that biologists don’t yet understand.

The cell-mass sensor, which Manalis first demonstrated in 2007, consists of a fluid-filled microchannel etched in a tiny silicon slab that vibrates inside a vacuum. As cells flow through the channel, one at a time, their mass slightly alters the slab's vibration frequency. The mass of the cell can be calculated from that change in frequency, with a resolution as low as a femtogram (10⁻¹⁵ grams), which is less than 0.01 percent of the weight of a lymphoblast cell in solution.
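The readout principle can be sketched with the standard small-shift relation for a resonator, Δf/f0 ≈ -Δm/(2·m_eff). All of the numbers below are illustrative assumptions, not specifications of the actual MIT device.

```python
# Sketch of converting a resonance-frequency shift into an added mass,
# using the small-shift approximation delta_m ≈ -2 * m_eff * (delta_f / f0).
# All values are assumed for illustration, not the real device's specs.
def mass_from_shift(f0_hz, delta_f_hz, m_eff_kg):
    """Added point mass inferred from a small resonance-frequency shift."""
    return -2.0 * m_eff_kg * (delta_f_hz / f0_hz)

f0 = 200e3       # assumed resonance frequency, Hz
m_eff = 1e-10    # assumed effective resonator mass, kg (100 ng)
delta_f = -1e-3  # assumed downward shift caused by one passing cell, Hz

dm = mass_from_shift(f0, delta_f, m_eff)
print(f"buoyant mass of the cell: {dm * 1e18:.1f} femtograms")
```

A heavier-than-water cell loads the resonator and lowers its frequency, so a negative Δf maps to a positive added mass; repeating the measurement as the cell passes back and forth is what turns this into a growth rate.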

Michel Godin, a former postdoctoral associate in Manalis’ lab and co-lead author of the paper, developed a way to trap a cell within the microchannel by precisely coordinating the flow direction. That enables the researchers to repeatedly pass a single cell through the channel every second or so, measuring it each time it moves through.

The new system represents a significant advance over any existing cell measurement technique, says Fred Cross, a Rockefeller University professor who studies the yeast cell cycle. “Since it directly measures biomass (at least net biomass with density greater than water) by the truly remarkable expedient of effectively directly placing a single cell on a scale, it is not troubled by ambiguities and inaccuracies inevitably associated with previous, more indirect measurements,” Cross says.

In their current studies, Manalis and his students are tagging proteins inside the cell with fluorescent molecules that reveal what stage of the cell cycle the cell is in, allowing them to correlate cell size with cell-cycle position and ultimately obtain a growth model for yeast and mammalian cells. They are also working on a way to add chemicals such as nutrients, antibiotics and cancer drugs to the fluid inside the microchannel so their effect on growth rates can be studied.

(Photo: Donna Coveney)




Researchers have discovered the world's only known living population of Sibree's Dwarf Lemur, a rare species found only in eastern Madagascar. The discovery of approximately a thousand of these lemurs was made by Mitchell Irwin, a Research Associate at McGill University, and colleagues from the German Primate Center in Göttingen, Germany; the University of Antananarivo in Madagascar; and the University of Massachusetts.

The species was first discovered in Madagascar in 1896, but this tiny, nocturnal dwarf lemur was never studied throughout the 20th century. Following the destruction of its only known rainforest habitat, scientists had no idea whether the species still existed in the wild - or even whether it was a distinct species. The study will be published in the current issue of the journal Molecular Phylogenetics and Evolution.

Irwin first observed dwarf lemurs at Tsinjoarivo, Madagascar, in 2001, shortly after setting up a long-term rainforest research site. "Even then we knew something was unusual about them," Irwin said. "Instead of the rainforest species we expected to see, our lemur resembled the species known from dry western forests, only it was much larger."

In 2006, Irwin began collaborating with Marina Blanco of the University of Massachusetts Amherst, who trapped dwarf lemurs at several sites throughout Tsinjoarivo. This work led to a further surprise: two morphologically distinct dwarf lemur species were present, living side by side. Further work by geneticist Linn Groeneveld of the German Primate Center confirmed the existence of both the more common Crossley's dwarf lemur and the elusive Sibree's dwarf lemur.

The new study showed the mystery lemurs to be very similar to the only known specimen of Sibree's dwarf lemur, now in The Natural History Museum in London, England. Genetic analysis shows the mystery lemurs to be highly distinct from all other known species. In fact, the genetic analyses confirmed that of the four known dwarf lemur species, this is the most genetically unique and probably closely resembles the ancestor that gave rise to the other species.

Irwin is hopeful that this new discovery will lead to increased conservation efforts. "On one hand, you want to get the taxonomy right, just to determine how many dwarf lemur species are out there," said Irwin. But protecting this newly rediscovered species from extinction in a country ravaged by habitat destruction is the next challenge. "Without the recognition provided by this study, this species probably would have gone extinct in the near future. Protecting its only known population and determining how many individuals are left are now top priorities, especially since much of this region's forests have already disappeared."

(Photo: McGill U.)

McGill University



Working in a rare, "natural seafloor laboratory" of hydrothermal vents that had just been rocked by a volcanic eruption, scientists from the Woods Hole Oceanographic Institution (WHOI) and other institutions have discovered what they believe is an undersea superhighway.

This superhighway carries tiny life forms unprecedented distances to inhabit the post-eruption site.

One such "pioneer species," Ctenopelta porifera, appears to have traveled more than 300 kilometers to settle at the site on the underwater mountain range known as the East Pacific Rise.

"Ctenopelta had never been observed before at the study site, and the nearest known population is 350 km to the north," said Lauren Mullineaux, a senior scientist in WHOI's biology department.

The discovery--in collaboration with scientists at the Lamont-Doherty Earth Observatory (LDEO) and the NOAA Pacific Marine Environmental Laboratory (PMEL)--clashes with the widely accepted assumption that when local adult life is wiped out in a hydrothermal eruption, it is replaced by a pool of tiny creatures from nearby vents.

In this case, however, the larvae that re-settled the post-eruption vent area are noticeably different from the species that were destroyed, according to David Garrison, director of the National Science Foundation (NSF)'s Biological Oceanography Program. In addition, the larvae appear to have traveled great distances to reach their destination.

"That raises the question of how they can possibly disperse so far," said Mullineaux. She added that the findings have implications for the wider distribution of undersea life.

A report on the research by Mullineaux and her colleagues is published in the April 12 issue of the journal Proceedings of the National Academy of Sciences (PNAS).

The discovery of hydrothermal vents on the bottom of the Pacific Ocean in 1977 revolutionized ideas about where and how life could exist.

The seafloor vents gushing warm, mineral-rich fluids and teeming with life raised new questions that researchers have been studying ever since, including: How can so much life thrive at the sunless seafloor? What is the nature of organisms at hydrothermal vents? How do animals migrate to other vent sites?

It was this last question that motivated Mullineaux and her team as they began their study of a vent area on the East Pacific Rise "to gather observations of currents, larvae and juvenile colonists in order to understand what physical processes might facilitate dispersal," Mullineaux said.

One of the group's primary challenges was to determine where the organisms around the vent came from.

As the scientists set out on their mission in 2006, "we got a surprise," said Mullineaux. "A seafloor eruption was detected at our study site, resulting in changes in topography and enormous disturbance to ecological communities. The eruption was, in essence, a natural experiment."

When the researchers arrived at the site, they found a scene quite unlike the one usually observed at a hydrothermal vent.

Normally, such fissures are teeming with life, supported by the hot chemicals that spew from the vents and provide food through microbial chemosynthesis, a deep-sea version of photosynthesis.

But at this spot on the East Pacific Rise, near 9 degrees North, there was no life.

The eruption had wiped it out.

"Although the vents survived, the animals did not, and virtually all the detectable invertebrate communities were paved over," said Mullineaux. "For us, this was an exciting event. In essence it was a natural clearance experiment that allowed us to explore how the elimination of local source populations affected the supply of larvae and re-colonization."

What the scientists found went against the accepted assumption that most of the organisms needed to re-populate an area come from relatively nearby. Instead, the new larval inhabitants came from a considerable distance away.

"These results show clearly that the species arriving after the eruption are different than those before," says Mullineaux, "with two new pioneer species, Ctenopelta porifera and Lepetodrilus tevnianus, prominent."

The most important finding is that "the processes of the larval stage--as opposed to those of adult organisms--seem to control colonization," Mullineaux said. "We found that a pioneer colonization event by one species, Ctenopelta porifera, radically changed the community structure."

But a question remained: How were these weak-swimming larvae propelled such vast distances to the decimated vent area?

The answer may lie in a recently developed model by Mullineaux's colleagues Dennis McGillicuddy and Jim Ledwell of WHOI, Bill Lavelle of PMEL and Andreas Thurnherr of LDEO, all part of the team for an NSF-funded project called LADDER--LArval Dispersal on the Deep East Pacific Rise.

Seemingly the only way the emigrating larvae could get to their new home from so far away, Mullineaux says, would be to ride ocean-bottom "jets" traveling up to 10 centimeters a second, such as those identified in the work of McGillicuddy and Thurnherr.

Theoretically, however, even these ridge-crest jets might not quite be able to transport the larvae 350 kilometers within their 30-day lifespan, she said. "Either the larvae are using some other transport or they are living longer than we thought," said Mullineaux.
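The numbers in the article make the shortfall easy to check with back-of-the-envelope arithmetic:

```python
# Can a larva riding a 10 cm/s ridge-crest jet cover 350 km
# within its 30-day lifespan?
speed_m_per_s = 0.10            # jet speed: 10 centimeters per second
lifespan_s = 30 * 24 * 3600     # 30-day larval lifespan, in seconds
max_range_km = speed_m_per_s * lifespan_s / 1000
print(f"max range: {max_range_km:.0f} km")  # 259 km -- short of 350 km
```

At jet speed alone the larvae fall roughly 90 kilometers short, which is why faster transport (such as large eddies) or a longer larval life is needed to explain the observation.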

She speculates that large eddies, or whirlpools of water, several hundred kilometers in diameter, may be propelling the migrating larvae even faster--delivering them to their new home while they are still alive. Or perhaps the larvae are able to somehow reduce their metabolism and extend their life.

The findings present an array of fascinating scientific scenarios that warrant further exploration, according to Mullineaux.

They also may open up new ways of looking at the impacts of human activities on the seafloor, such as seafloor mineral mining, which could alter a vent site in a similar way to an eruption.

Such activity could conceivably foster a greater diversity of species at a vent that has just been mined, or it could cause extinction, Mullineaux said. But such scenarios are still highly speculative, she emphasized.

Mullineaux's WHOI co-authors on the paper are Diane Adams, currently at the National Institutes of Health, Susan Mills and Stace Beaulieu.

(Photo: LADDER Project/WHOI Alvin Group)

National Science Foundation



The world's largest antimony mine has become the world's largest laboratory for studying the environmental consequences of escaped antimony -- an element whose environmental and biological properties are still largely a mystery.

Scientists from Indiana University Bloomington, the University of Alberta, and the Chinese Academy of Sciences have found the waters around Xikuangshan mine in southwest China contain antimony at levels two to four orders of magnitude higher than normal (0.33 - 11.4 parts per million). The scientists' report will appear in an upcoming edition of Environmental Geochemistry and Health (now online).

"Antimony is an emergent contaminant," said IU Bloomington Ph.D. student Faye Liu, the paper's lead author. "People have not paid enough attention to it."

Used in small quantities, antimony has a wide variety of applications -- from hardening the lead in bullets and improving battery performance to combating malaria.

Little is known about antimony's toxicity, in part because the metalloid element is usually found at low, parts-per-billion concentrations in natural environments. At Xikuangshan, Liu and her colleagues found that aqueous antimony concentrations could be as high as 11.4 parts per million, roughly 1,000 times the antimony levels found in uncontaminated water.

The alarming circumstances at Xikuangshan present an opportunity to understand what happens to antimony, geologically and chemically, when large quantities of it are introduced to the environment. That knowledge will be useful to investigations of antimony contamination near factories and military bases around the world.

The U.S. Environmental Protection Agency and similar regulatory agencies in Europe operate under the assumption that antimony's properties are similar to those of arsenic, another element in antimony's chemical group.

"That will need to change," said IU Bloomington geologist Chen Zhu, Liu's advisor and the project's principal investigator. "We saw that antimony behaves very differently from arsenic -- antimony oxidizes much more quickly than arsenic when exposed."

The vast majority of antimony the scientists isolated at Xikuangshan was of the "V" type, an oxidation state in which the metal has given up five electrons. It is believed V is the least toxic of the three oxidation states of which antimony is capable (I, III and V). It is not known whether antimony-V's relatively diminished toxicity is upended at Xikuangshan by its overwhelming presence.

Land within and around the mining area is used for farming. The drinking water plant for local residents was built in the mining area. Zhu says health problems are common at Xikuangshan, possibly the result of antimony intoxication.

Zhu says he is discussing a possible collaboration with IU School of Medicine toxicologist Jim Klaunig. Researchers would return to Xikuangshan to determine whether the elevated antimony can be tied to acute and chronic health problems among those who live in the vicinity. Another possible study group might be those Chinese who live downstream of Xikuangshan along the Qing River.

As part of their Environmental Geochemistry and Health study, Zhu and scientists from the Chinese Academy of Sciences conducted field work at Xikuangshan in 2007, drawing multiple water samples from 18 different sample sites. Samples were shipped back to Bloomington for atomic fluorescence spectroscopic analysis and to Alberta for inductively coupled plasma mass spectroscopy analysis. The scientists learned antimony-III was rare, beyond detection or present at trace levels. The near totality of antimony in each water sample was antimony-V.

Since antimony mining began at Xikuangshan more than 200 years ago, production has increased steadily. Today, the mine produces 60 percent of the world's antimony.

While Zhu was on sabbatical leave in 2008, Faye Liu was advised by IU Bloomington biogeochemist and inaugural Provost's Professor Lisa Pratt. Zhu and Pratt recently began a joint project to learn more about the biogeochemistry of antimony. The scientists' antimony research complements their concurrent NSF-funded research on arsenic.

IU Bloomington geologists Claudia Johnson and Erika Elswick, both participants in the Environmental Geochemistry and Health study, have also taken seawater samples from the Caribbean. Liu is investigating the samples' antimony content.

(Photo: Chen Zhu)

Indiana University Bloomington


People who pursue happiness through material possessions are liked less by their peers than people who pursue happiness through life experiences, according to a new study led by University of Colorado at Boulder psychology Professor Leaf Van Boven.

Van Boven has spent a decade studying the social costs and benefits of pursuing happiness through the acquisition of life experiences such as traveling and going to concerts versus the purchase of material possessions like fancy cars and jewelry.

"We have found that material possessions don't provide as much enduring happiness as the pursuit of life experiences," Van Boven said.

The "take home" message of his most recent study, which appears in this month's edition of the Personality and Social Psychology Bulletin, is that investing in material possessions not only makes us less happy than investing in life experiences, but also often makes us less popular among our peers.

"The mistake we can sometimes make is believing that pursuing material possessions will gain us status and admiration while also improving our social relationships," Van Boven said. "In fact, it seems to have exactly the opposite effect. This is really problematic because we know that having quality social relationships is one of the best predictors of happiness, health and well-being.

"So for many of us we should rethink these decisions that we might make in terms of pursuing material possessions versus life experiences," he said. "Trying to have a happier life by the acquisition of material possessions is probably not a very wise decision."

CU-Boulder marketing Professor Margaret Campbell and Cornell University Professor Thomas Gilovich were co-authors on the study.

Past studies have found that people who are materialistic tend to have lower quality social relationships. They also have fewer and less satisfying friendships.

In the recent study, Van Boven and his colleagues conducted five experiments with undergraduate students and a national survey. They sought to find out whether people hold unfavorable stereotypes of materialistic people, and whether those stereotypes led them to like materialistic people less than those who pursued life experiences.

In one experiment undergraduates who didn't know each other were randomly paired up and assigned to discuss either a material possession or a life experience they had purchased and were happy with. After talking for 15 or 20 minutes they were then asked about their conversation partners by the researchers.

"What we found was that people who had discussed their material possessions liked their conversation partner less than those who had discussed an experience they had purchased," Van Boven said. "They also were less interested in forming a friendship with them, so there's a real social cost to being associated with material possessions rather than life experiences."

In another experiment using a national survey, the researchers told people about someone who had purchased a material item such as a new shirt or a life experience like a concert ticket. They then asked them a number of questions about that person. They found that simply learning that someone made a material purchase caused them to like him or her less than learning that someone made an experiential purchase.

"We have pretty negative stereotypes of people who are materialistic," Van Boven said. "When we asked people to think of someone who is materialistic and describe their personality traits, selfish and self-centered come up pretty frequently. However, when we asked people to describe someone who is more experiential in nature, things like altruistic, friendly and outgoing come up much more frequently."

So what do you do if you're somebody who really likes to buy lots of material possessions?

"The short answer is you should try to change," Van Boven said. "Not just our research, but a lot of other research has found that people who are materialistic incur many mental health costs and social costs -- they're less happy and more prone to depression."

Van Boven says one thing you can do is choose to be around people who are less interested in material goods.

"It's not a quick fix, but it can be done," he said. "I think what makes it particularly challenging is that it requires some extra effort and mindfulness about the way we make decisions about how to be happy in life."

University of Colorado



Shifting work schedules can wreak havoc on a person's ability to get enough sleep, resulting in poor performance on the job.

Researchers funded by the National Space Biomedical Research Institute (NSBRI) have developed software that uses mathematical models to help astronauts and ground support personnel better adjust to shifting work and sleep schedules. Outside the space program, the software could help people who do shift or night work or who experience jet lag due to travel across time zones.

"The best methods that we know to help people operate at peak performance are first to ensure that they get adequate sleep, and second that their work schedules are designed to be aligned with the natural body clock," said project leader Dr. Elizabeth Klerman, associate team leader for NSBRI's Human Factors and Performance Team.

According to Klerman, a physician in the Division of Sleep Medicine at Brigham and Women’s Hospital in Boston and associate professor at Harvard Medical School, the software has two components. The Circadian Performance Simulation Software (CPSS) uses complex mathematical formulas to predict how an individual will react to specific conditions. CPSS also allows users to interactively design a schedule, such as shifting sleep/wake to a different time, and predicts when they would be expected to perform well or poorly.

The second component, known as Shifter, then "prescribes" the optimal times in the schedule to use light to shift a person’s circadian rhythm in order to improve performance at critical times during the schedule.

"If there is a mission event, such as a spacewalk, scheduled at one or two o'clock in the morning, what can we do to help the astronaut to be alert and functioning well at that time?" Klerman said. "Do we suggest a nap or caffeine? Do we shift their sleep/wake schedule? There are a variety of options that we would like to be able to provide."

Scientists know that an individual's performance and alertness are tightly regulated by several factors related to circadian rhythms and the sleep/wake cycle – length of time awake; the timing, intensity and wavelength of light; the amount of sleep the night before; and the body clock's perception of time. As a result, most people are not able to operate at peak job performance in the late night or early morning hours.

The situation for International Space Station astronauts is complicated by the fact that they often face schedules that are not uniform. A shift in scheduled sleep/wake time, due to an event such as docking, could be as much as eight or nine hours, with the transition taking place over a short period of time. "These dramatic shifts in schedule not only affect the body's ability to know what time it is, but also hinder the body’s ability to give the appropriate signals to a person trying to wake up or go to sleep," Klerman said.

With the basic software program complete, the researchers are now working to individualize the model. They want to determine what personal data are needed in order to provide recommendations for individuals. Klerman said the information needed could be as simple as age, or it could require more complicated data.

The software can easily be adapted for use in many occupations. "This program may be helpful for anyone who has to work the night shift, rotating shifts or extended shifts," Klerman said. "It could also help international travelers effectively deal with jet lag."

Workers outside the space industry who could benefit directly include medical personnel, security and police officers, firefighters, transportation workers such as long-haul truckers, and power plant operators. Klerman suggested that everyone could benefit indirectly from the modeling. "Our lives, including our safety, are impacted by those people who have jobs requiring shift work or extremely long hours and who may be at increased risk of accidents and errors affecting themselves or others," she said.

Klerman added that lack of sleep can affect more than a person’s alertness and performance. It can impact overall health. Lack of sleep is associated with an increased risk of obesity, pre-diabetic conditions, reduced response to vaccines and changes in cardiovascular functions.

The mathematical modeling effort is one of several projects being conducted by NSBRI’s Human Factors and Performance Team to improve sleep and scheduling of work shifts, as well as determining which specific types of lighting can improve alertness and performance during spaceflight.

(Photo: Elizabeth Klerman, M.D., Ph.D./Brigham and Women’s Hospital)

National Space Biomedical Research Institute

Wednesday, April 28, 2010



"Supervolcanoes" have been blamed for multiple mass extinctions in Earth's history, but the cause of their massive eruptions is unknown.

Despite their global impact, the eruptions' origin and triggering mechanisms have remained unexplained. New data obtained during a recent Integrated Ocean Drilling Program (IODP) expedition in the Pacific Ocean may provide clues to unlocking this mystery.

To explore the origins of these seafloor giants, scientists drilled into a large, 145 million-year-old underwater volcanic mountain chain off the coast of Japan.

IODP Expedition 324: Shatsky Rise Formation took place onboard the scientific ocean drilling vessel JOIDES Resolution from September 4 to November 4, 2009. Preliminary results of the voyage are emerging.

"'Supervolcanoes' emitted large amounts of gases and particles into the atmosphere, and re-paved the ocean floor," says Rodey Batiza, marine geosciences section head in the National Science Foundation (NSF)'s Division of Ocean Sciences, which co-funded the research.

The result?

"Loss of species, increased greenhouse gases in the atmosphere, and changes in ocean circulation," says Batiza.

In fall 2009, an international team of scientists participating in IODP Expedition 324 drilled five sites in the ocean floor. They studied the origin of the 145 million-year-old Shatsky Rise volcanic mountain chain.

Located 1,500 kilometers (930 miles) east of Japan, Shatsky Rise is roughly the size of California.

This underwater mountain chain is one of the largest supervolcanoes in the world: the top of Shatsky Rise lies three and a half kilometers (about two miles) below the sea's surface, while its base plunges to nearly six kilometers (four miles) beneath the surface.

Shatsky Rise is composed of layers of hardened lava, with individual lava flows that are up to 23 meters (75 feet) thick.

"Seafloor supervolcanoes are characterized by the eruption of enormous volumes of lava," says William Sager of Texas A&M University, who led the expedition with co-chief scientist Takashi Sano of Japan's National Museum of Nature and Science in Tokyo. "Studying their formation is critical to understanding the processes of volcanism, and the movement of material from Earth's interior to its surface."

About a dozen supervolcanoes exist on Earth; some are on land, while others lie at the bottom of the ocean. Those found on the seafloor are often referred to as large oceanic plateaus.

Current scientific thinking suggests that these supervolcanoes were caused by eruptions over a period of a few million years or less--a rapid pace in geologic time.

Each of these supervolcanoes produced several million cubic kilometers of lava--about three hundred times the volume of all the Great Lakes combined--dwarfing the volume of lava produced by the largest present-day volcanoes in places like Hawaii.
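For scale, that comparison can be reproduced using a commonly cited figure of about 22,700 cubic kilometers for the Great Lakes' combined volume (an outside number, not from the article):

```python
# Rough scale check on "about three hundred times the Great Lakes."
# The combined Great Lakes volume (~22,700 km^3) is a commonly cited
# outside figure, assumed here for illustration.
great_lakes_km3 = 22_700
lava_km3 = 300 * great_lakes_km3
print(f"~{lava_km3 / 1e6:.1f} million cubic kilometers")  # ~6.8 million
```

That works out to several million cubic kilometers of lava per supervolcano, consistent with the article's description.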

Since the 1960s, geologists have debated the formation and origin of these large oceanic plateaus. The mystery lies in the origin of the magma, molten rock that forms within the Earth.

A magma source rising from deep within the Earth has a different chemical composition than magma that forms just below Earth's crust. Some large oceanic plateaus show signs of a deep-mantle origin. Others exhibit chemical signatures indicative of magma from a much shallower depth.

The IODP Shatsky Rise expedition focused on deciphering the relationship between supervolcano formation and the boundaries of tectonic plates, crucial to understanding what triggers supervolcano formation.

A widely-accepted explanation for oceanic plateaus is that they form when magma in the form of a "plume head" rises from deep within the Earth to the surface.

An alternative theory suggests that large oceanic plateaus can originate at the intersection of three tectonic plates, known as a "triple junction."

Shatsky Rise could play a key role in this debate, because it formed at a triple junction. However, it also displays characteristics that could be explained by the plume head model.

"Shatsky Rise is one of the best places in the world to study the origin of supervolcanoes," says Sager. "What makes Shatsky Rise unique is that it's the only supervolcano to have formed during a time when Earth's magnetic field reversed frequently."

This process creates "magnetic stripe" patterns in the seafloor. "We can use these magnetic stripes to decipher the timing of the eruption," says Sager, "and the spatial relationship of Shatsky Rise to the surrounding tectonic plates and triple junctions."

Sediments and microfossils collected during the expedition indicate that parts of the Shatsky Rise plateau were at one time at or above sea level, and formed an archipelago during the early Cretaceous period (about 145 million years ago).

Shipboard lab studies show that much of the lava erupted rapidly, and that Shatsky Rise formed at or near the equator.

As analyses continue, data collected during this expedition will help scientists resolve the 50-year-old debate about the origin and nature of large oceanic plateaus.

(Photo: John Beck, IODP/TAMU)

National Science Foundation


Like microscopic inchworms, cancer cells slink away from tumors to travel and settle elsewhere in the body. Now, researchers at Weill Cornell Medical College report in today’s online edition of the journal Nature that new anti-cancer agents break down the looping gait these cells use to migrate, stopping them in their tracks.

Mice implanted with cancer cells and treated with the small molecule macroketone lived a full life without any cancer spread, while the control animals all died of metastasis. Even when macroketone was given a week after the cancer cells were introduced, it still blocked more than 80 percent of cancer metastases in mice.

These findings provide a very encouraging direction for development of a new class of anti-cancer agents, the first to specifically stop cancer metastasis, says the study’s lead investigator, Dr. Xin-Yun Huang, a professor in the Department of Physiology and Biophysics at Weill Cornell Medical College.

“More than 90 percent of cancer patients die because their cancer has spread, so we desperately need a way to stop this metastasis,” Dr. Huang says. “This study offers a paradigm shift in thinking and, potentially, a new direction in treatment.”

Dr. Huang and his research team have been working on macroketone since 2003. Their work started after researchers in Japan isolated a natural substance, dubbed migrastatin, secreted by Streptomyces, a genus of bacteria that is the source of many antibiotic drugs. The Japanese researchers noted that migrastatin had a weak inhibitory effect on tumor cell migration.

Dr. Huang and collaborators at the Memorial Sloan-Kettering Cancer Center then proceeded to build analogues of migrastatin — synthetic and molecularly simpler versions.

“After a lot of modifications, we made several versions that were a thousand-fold more potent than the original,” Dr. Huang says.

In 2005, they published a study showing that several of the new versions, including macroketone, stopped cancer cell metastasis in laboratory animals, but they didn’t know how the agent worked.

In the current study, the researchers revealed the mechanism. They found that macroketone targets an actin cytoskeletal protein known as fascin that is critical to cell movement. In order for a cancer cell to leave a primary tumor, fascin bundles actin filaments together like a thick finger. The front edge of this finger creeps forward and pulls along the rear of the cell. Cells crawl away in the same way that an inchworm moves.

Macroketone latches on to individual fascin molecules, preventing the actin fibers from adhering to each other and forming the pushing leading edge, Dr. Huang says. Because individual actin fibers are too soft when they are not bundled together, the cell cannot move. The new animal experiments detailed in the study confirmed the power of macroketone. The agent did not stop the cancer cells implanted into the animals from forming tumors or from growing, but it completely prevented tumor cells from spreading, compared with control animals, he says. Even when macroketone was given after tumors formed, most cancer spread was blocked.

“This suggests to us that an agent like macroketone could be used to both prevent cancer spread and to treat it as well,” Dr. Huang says. “Of course, because it has no effect on the growth of a primary tumor, such a drug would have to be combined with other anti-cancer therapies acting on tumor cell growth.”

Also pleasing was the finding that the mice suffered few side effects from the treatment, according to Dr. Huang. “The beauty of this approach is that fascin is over-expressed in metastatic tumor cells but is only expressed at a very low level in normal epithelial cells, so a treatment that attacks fascin will have comparatively little effect on normal cells — unlike traditional chemotherapy which attacks all dividing cells,” he says.

Dr. Huang and his colleagues reported another key finding in the same Nature paper: X-ray crystal structures of fascin alone and of the fascin–macroketone complex, which demonstrate how macroketone blocks the activity of fascin. The images show precisely how macroketone nestles snugly into a pocket of fascin, interfering with the way the protein regulates actin filament bundling.

“The molecular snapshots provide an approach for rational drug design of other molecules inhibiting the function of fascin, the therapeutic target,” says Dr. Huang.

Brookhaven National Laboratory


Scientists at Newcastle University have developed a pioneering technique which enables them for the first time to successfully transfer DNA between two human eggs. The technique has the potential to help prevent the transmission of serious inherited disorders known as mitochondrial diseases.

The study, led by Dr Mary Herbert and Professor Doug Turnbull, and funded primarily by the Muscular Dystrophy Campaign, the Medical Research Council and the Wellcome Trust, is published in the journal Nature.

Every cell in our body needs energy to function. This energy is provided by mitochondria, often referred to as the cells' 'batteries'. Mitochondria are found in every cell, along with the cell nucleus, which contains the genes that determine our individual characteristics. The information required to create these 'batteries' – the mitochondrial DNA – is passed down the maternal line, from mother to child.

A mother's egg contains a copy of her own DNA – twenty-three chromosomes – as well as DNA for her mitochondria. The amount of genetic material contained in mitochondrial DNA is very small – 13 protein-producing genes, compared to an estimated 23,000 genes that we inherit from our parents – and this information is used solely to generate the energy produced by the 'batteries'.

Like all DNA, the DNA in mitochondria can mutate, and mothers can pass these mutations on to their children. Around one in 200 children is born with such mutations, which in most cases cause only mild or asymptomatic forms of mitochondrial disease. However, around one in 6,500 children is born with a severe mitochondrial disease; symptoms include muscular weakness, blindness, fatal heart failure, liver failure, learning disability and diabetes, and the diseases can lead to death in early infancy.

There are no treatments available to cure these conditions and mothers face the agonising choice of whether to risk having a child who may be affected by such a disease or not to have children at all.

Now, researchers at Newcastle University have developed a technique which allows them to replace these 'batteries'. This is the first time such a technique has been used in fertilised human eggs.

A fertilised egg usually contains two pronuclei – genetic material from the egg and sperm – as well as mitochondria. The technique developed by the Newcastle team involves extracting the pronuclei but leaving behind the mitochondria. The researchers then take a fertilised egg from a donor, remove its pronuclei and replace them with the extracted pronuclei. This new fertilised egg contains the DNA of the father and mother, and the mitochondria from the donor.

"What we've done is like changing the battery on a laptop. The energy supply now works properly, but none of the information on the hard drive has been changed," explains Professor Turnbull. "A child born using this method would have correctly functioning mitochondria, but in every other respect would get all their genetic information from their father and mother."

The Newcastle team used their technique to create a total of eighty zygotes (fertilised eggs). These were cultured for six to eight days in the laboratory to monitor development as far as the blastocyst stage (the stage at which the egg has divided into a group of around one hundred cells), in line with the terms of the licence granted by the Human Fertilisation and Embryology Authority (HFEA) in 2005.

In some cases, a very small amount of the mother's mitochondrial DNA was carried over to the new egg. Since severe disease occurs only when a large proportion of the mitochondrial DNA carries mutations, this would be very unlikely to affect a child's health.

The research is a proof of principle that researchers should be able to prevent transmission of mitochondrial diseases, thereby allowing the mother to give birth to a healthy child.

"This is a very exciting development with immense potential to help families at risk from mitochondrial diseases," says Professor Turnbull. "We have no way of curing these diseases at the moment, but this technique could allow us to prevent the diseases occurring in the first place. It is important that we do all we can to help these families and give them the chance to have healthy children, something most of us take for granted."

The Newcastle team used eggs which were unsuitable for IVF; for example, eggs with one or three pronuclei, rather than the normal two. This is common in the IVF process and affects around one in ten fertilised eggs. The eggs were donated by couples attending the Newcastle Fertility Centre at Life. The egg donation programme and the ethical and regulatory aspects of the project are led by Professor Alison Murdoch.

The team is now planning further studies to provide more evidence of the safety of this procedure. The Human Fertilisation and Embryology (HFE) Act, as amended in 2008, currently prevents fertility treatment using these techniques. However, the Act includes a provision for the Secretary of State to permit such treatment in the future.

Newcastle University



Scientists here are taking the trial and error out of drug design by using powerful computers to identify molecular structures that have the highest potential to serve as the basis for new medications.

Most drugs are designed to act on proteins that somehow malfunction in ways that lead to damage and disease in the body. The active ingredient in these medicines is typically a single molecule that can interact with a protein to stop its misbehavior.

Finding such a molecule, however, is not easy. Ideally, it will be shaped and configured in a way that allows it to bind to the protein at sites known as “hot spots” on the protein surface – and the more hot spots it binds to, the greater its therapeutic potential.

To accomplish this, many drug molecules are composed of units called fragments that are linked through chemical bonds. An ideal drug molecule for a specific protein disease target should be a combination of fragments that fit into each hot spot in the best possible way.

Previous methods to identify these molecules have emphasized searching for fragments that can attach to one hot spot at a time. Finding structures that attach to all of the required hot spots is tedious, time-consuming and error-prone.

Ohio State University researchers, however, have used computer simulations to identify molecular fragments that attach simultaneously to multiple hot spots on proteins. The technique is a new way to tackle the fragment-based design strategy.

“We use the massive computing power available to us to find only the good fragments and link them together,” said Chenglong Li, assistant professor of medicinal chemistry and pharmacognosy at Ohio State and senior author of a study detailing this work.

Li likens the molecular fragments to birds flying around in space, looking for food on a landscape: the protein surface. With this technique, he creates computer programs that allow these birds – the molecular fragments – to find the prime locations for food: the protein hot spots. The algorithm originates from a computational technique called particle swarm optimization.

“Each bird can see the landscape individually, and it can sense other birds that inform each other about where the foods are,” Li said. “That’s how this method works. Each fragment is like a bird finding food on the landscape. And that’s how we place the fragments and obtain the best fragment combination for specific protein binding sites.”
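The swarm behaviour Li describes can be sketched with a generic particle swarm optimizer. This is a minimal illustration of the technique named in the article, not the group's actual docking software: the two-well `energy` function standing in for protein hot spots, and all parameter values, are hypothetical.

```python
import random

def particle_swarm(score, dim, n_particles=30, iters=200,
                   w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `score` over a dim-dimensional box with PSO.

    Each particle ("bird") remembers the best spot it has seen and is
    also pulled toward the best spot found by the whole swarm - the
    two sources of information Li's analogy describes.
    """
    random.seed(42)  # reproducible demo run
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [score(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = score(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "landscape": two energy wells standing in for protein hot spots.
def energy(p):
    x, y = p
    return min((x - 2) ** 2 + (y - 1) ** 2,
               (x + 3) ** 2 + (y + 2) ** 2)

best, val = particle_swarm(energy, dim=2)
print(best, val)  # the swarm settles into one of the two wells
```

In a fragment-docking setting the position vector would encode the pose of each fragment and the score would be a physics-based binding energy; the search logic is unchanged.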

Li verified that the technique works by comparing a molecular structure he designed to the molecular base of an existing cancer medication that targets a widely understood protein.

“My method reconstructed what pharmaceutical companies have already done,” he said. “In the future, we’ll apply this technique to protein targets for diseases that remain challenging to treat with currently available therapies.”

The research appears online and is scheduled for later print publication in the Journal of Computational Chemistry.

Li said this new computer modeling method of drug design has the potential to complement and increase efficiency of more time-consuming methods like nuclear magnetic resonance and X-ray crystallography. For example, he said, X-ray fragment crystallography can be hard to interpret because of “noise” created by fragments that don’t bind well to proteins.

With this new computer simulation technique, called multiple ligand simultaneous docking, Li instructs molecular fragments to interact with each other before the actual experimental trials, removing weak and “noisy” fragments so only the promising ones are left.

“They sense each other’s presence through molecular force. They suppress the noise and go exactly where they are supposed to go,” he said. “You find the right fragment in the right place, and it’s like fitting the right piece into a jigsaw puzzle.”

Before he can begin designing a molecule, Li must obtain information about a specific protein target, especially the protein structures. These details come from collaborators who have already mapped a target protein’s surface to pinpoint where the hot spots are, for example, through directed mutations or from databases.

Li starts the design process with molecular fragments that come from thousands of existing drugs already on the market. He creates a computer image of those molecules, and then chops them up into tiny pieces and creates a library of substructures to work with – typically more than a thousand possibilities.

That is where computational power comes into play.

“To search all of the possibilities of these molecular combinations and narrow them down, we need a massive computer,” he said. Li uses two clusters of multiple computers, one in Ohio State’s College of Pharmacy and the other in the Ohio Supercomputer Center, to complete the simulations.

The results of this computation create an initial molecular template that can serve as a blueprint for later stages of the drug discovery process. Medicinal chemists can assemble synthetic molecules based on these computer models, which can then be tested for their effectiveness against a given disease condition in a variety of research environments.

Li already has used this technique to identify molecules that bind to known cancer-causing proteins. He said the method can be applied to any protein that is a suspected cause of diseases of any kind, not just cancer.

(Photo: OSU)

Ohio State University



Observations of how the youngest-known neutron star has cooled over the past decade are giving astronomers new insights into the interior of these super-dense dead stars.

Dr Wynn Ho presented the findings on Thursday April 15th at the RAS National Astronomy Meeting in Glasgow.

Dr Ho, of the University of Southampton, and Dr Craig Heinke, of the University of Alberta in Canada, measured the temperature of the neutron star in the Cassiopeia A supernova remnant using data obtained by NASA’s Chandra X-ray Observatory between 2000 and 2009.

“This is the first time that astronomers have been able to watch a young neutron star cool steadily over time. Chandra has given us a snapshot of the temperature roughly every two years for the past decade and we have seen the temperature drop during that time by about 3%,” said Dr Ho.
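The quoted figures allow a back-of-envelope check. Assuming a steady decline of about 3% across the ten years of Chandra snapshots, and the roughly two-million-degree surface temperature the article cites, the implied annual cooling is:

```python
# Illustrative arithmetic only, using the figures quoted in the article.
T0 = 2.0e6          # approximate surface temperature, kelvin
total_drop = 0.03   # ~3% decline over the decade of observations
years = 10

annual_factor = (1 - total_drop) ** (1 / years)   # steady-decline model
annual_drop_K = T0 * (1 - annual_factor)

print(f"equivalent annual decline: {(1 - annual_factor) * 100:.2f}% "
      f"(~{annual_drop_K:,.0f} K per year)")
```

That works out to roughly 0.3% – several thousand kelvin – per year, which gives a sense of why a decade of repeated measurements was needed to see the trend at all.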

Neutron stars are composed mostly of neutrons crushed together by gravity, compressed to over a million million times the density of lead. They are the dense cores of massive stars that have run out of nuclear fuel and collapsed in supernova explosions. The Cassiopeia A supernova explosion, likely to have taken place around 1680, would have heated the neutron star to temperatures of billions of degrees, from which it has cooled down to a temperature of about two million degrees Celsius.

“Young neutron stars cool through the emission of high-energy neutrinos – particles similar to photons but which do not interact much with normal matter and therefore are very difficult to detect. Since most of the neutrinos are produced deep inside the star, we can use the observed temperature changes to probe what’s going on in the neutron star’s core. The structure of neutron stars determines how they cool, so this discovery will allow us to understand better what neutron stars are made of. Our observations of temperature variations already rule out some models for this cooling and have given us insights into the properties of matter that cannot be studied in laboratories on Earth,” said Dr Ho.

Initially, the core of the neutron star cools much more rapidly than the outer layers. After a few hundred years, equilibrium is reached and the whole interior cools at a uniform rate. At approximately 330 years old, the Cassiopeia A neutron star is near this cross-over age. If the cooling were due only to neutrino emission, there should be a steady decline in temperature. However, although Dr Ho and Dr Heinke observed an overall steady trend over the 10-year period, there was a larger change around 2006 that suggests other processes may be active.

“The neutron star may not yet have relaxed into the steady cooling phase, or we could be seeing other processes going on. We don’t know whether the interior of a neutron star contains more exotic particles, such as quarks, or other states of matter, such as superfluids and superconductors. We hope that with more observations, we will be able to explain what is happening in the interior in much more detail,” said Dr Ho.

Dr Ho and Dr Heinke have submitted a paper on their discovery to the Astrophysical Journal.

(Photo: NASA/CXC/MIT/UMass Amherst/M.D.Stage et al.)

The Royal Astronomical Society



A lightning researcher at the University of Bath has discovered that during thunderstorms, giant natural particle accelerators can form 40 km above the surface of the Earth.

Dr Martin Füllekrug from the University’s Department of Electronic & Electrical Engineering presented his new work on Wednesday 14 April at the Royal Astronomical Society National Astronomy Meeting (RAS NAM 2010) in Glasgow.

His findings show that when particularly intense lightning discharges in thunderstorms coincide with high-energy particles coming in from space (cosmic rays), nature provides the right conditions to form a giant particle accelerator above the thunderclouds.

The cosmic rays strip off electrons from air molecules and these electrons are accelerated upwards by the electric field of the lightning discharge. The free electrons and the lightning electric field then make up a natural particle accelerator.

The accelerated electrons then develop into a narrow particle beam which can propagate from the lowest level of the atmosphere (the troposphere), through the middle atmosphere and into near-Earth space, where the energetic electrons are trapped in the Earth’s radiation belt and can eventually cause problems for orbiting satellites.

These are energetic events: for the blink of an eye, the power of the electron beam can be as large as that of a small nuclear power plant.

Dr Füllekrug explained: “The trick to determining the height of one of the natural particle accelerators is to use the radio waves emitted by the particle beam.”

These radio waves were predicted by his co-worker Dr Robert Roussel-Dupré using computer simulations at the Los Alamos National Laboratory supercomputer facility.

A team of European scientists, from Denmark, France, Spain and the UK helped to detect the intense lightning discharges in southern France which set up the particle accelerator.

They monitored the area above thunderstorms with video cameras and reported lightning discharges which were strong enough to produce transient airglows above thunderstorms known as sprites. A small fraction of these sprites were found to coincide with the particle beams.

The zone above thunderstorms has been a suspected natural particle accelerator since the Scottish physicist and Nobel Prize winner Charles Thomson Rees Wilson speculated about lightning discharges above these storms in 1925.

In the next few years five different planned space missions (the TARANIS, ASIM, CHIBIS, IBUKI and FIREFLY satellites) will be able to measure the energetic particle beams directly.

Dr Füllekrug commented: “It’s intriguing to see that nature creates particle accelerators just a few miles above our heads. Once these new missions study them in more detail from space we should get a far better idea of how they actually work.

“They provide a fascinating example of the interaction between the Earth and the wider Universe.”

(Photo: Oscar van der Velde, Universitat de Catalunya, Spain and Serge Soula, Laboratoire d'Aerologie, France)

University of Bath



A natural product found in both coconut oil and human breast milk – lauric acid – shines as a possible new acne treatment, thanks to a bioengineering graduate student at the UC San Diego Jacobs School of Engineering. The student developed a “smart delivery system” – published in the journal ACS Nano in March – capable of delivering lauric-acid-filled nano-scale bombs directly to the skin-dwelling bacteria (Propionibacterium acnes) that cause common acne.

On Thursday April 15, bioengineering graduate student Dissaya “Nu” Pornpattananangkul presented her most recent work on this experimental acne-drug-delivery system at Research Expo, the annual research conference of the UC San Diego Jacobs School of Engineering.

Common acne, also known as “acne vulgaris,” afflicts more than 85 percent of teenagers and over 40 million people in the United States; and current treatments have undesirable side effects including redness and burning. Lauric-acid-based treatments could avoid these side effects, the UC San Diego researchers say.

“It’s a good feeling to know that I have a chance to develop a drug that could help people with acne,” said Pornpattananangkul, who performs this research in the Nanomaterials and Nanomedicine Laboratory of UC San Diego NanoEngineering professor Liangfang Zhang from the Jacobs School of Engineering.

The new smart delivery system includes gold nanoparticles attached to surfaces of lauric-acid-filled nano-bombs. The gold nanoparticles keep the nano-bombs (liposomes) from fusing together. The gold nanoparticles also help the liposomes locate acne-causing bacteria based on the skin microenvironment, including pH.

Once the nano-bombs reach the bacterial membranes, the acidic microenvironment causes the gold nanoparticles to drop off. This frees the liposomes carrying lauric acid payloads to fuse with bacterial membranes and kill the Propionibacterium acnes bacteria.

“Precisely controlled nano-scale delivery of drugs that are applied topically to the skin could significantly improve the treatment of skin bacterial infections. By delivering drugs directly to the bacteria of interest, we hope to boost antimicrobial efficacy and minimize off-target adverse effects,” said Zhang. “All building blocks of the nano-bombs are either natural products or have been approved for clinical use, which means these nano-bombs are likely to be tested on humans in the near future.”

Zhang noted that nano-scale topical drug delivery systems face a different set of challenges than systems that use nanotechnology to deliver drugs systemically.

Pornpattananangkul and UC San Diego chemical engineering undergraduate Darren Yang confirmed, in 2009 in the journal Biomaterials, the antimicrobial activity of nano-scale packets of lauric acid against Propionibacterium acnes.

Pornpattananangkul, who is originally from Thailand, said that it’s just a coincidence that her research involves a natural product produced by coconuts – a staple of Thai cuisine.

(Photo: UCSD)

University of California, San Diego


Some of us need regular amounts of coffee or other chemical enhancers to make us cognitively sharper. A newly published study suggests perhaps a brief bit of meditation would prepare us just as well.

While past research using neuroimaging technology has shown that meditation techniques can promote significant changes in brain areas associated with concentration, it has always been assumed that extensive training was required to achieve this effect. Though many people would like to boost their cognitive abilities, the monk-like discipline apparently required makes the commitment of time and money seem daunting.

Surprisingly, the benefits may be achievable without all that work. Though it sounds almost like an advertisement for a "miracle" weight-loss product, new research now suggests that the mind may be easier to cognitively train than we previously believed. Psychologists studying the effects of a meditation technique known as "mindfulness" found that meditation-trained participants showed a significant improvement in critical cognitive skills – and scored significantly higher on cognitive tests than a control group – after only four days of training for 20 minutes each day.

"In the behavioral test results, what we are seeing is something that is somewhat comparable to results that have been documented after far more extensive training," said Fadel Zeidan, a post-doctoral researcher at Wake Forest University School of Medicine, and a former doctoral student at the University of North Carolina at Charlotte, where the research was conducted.

"Simply stated, the profound improvements that we found after just 4 days of meditation training– are really surprising," Zeidan noted. "It goes to show that the mind is, in fact, easily changeable and highly influenced, especially by meditation."

The study appeared in the April 2 issue of Consciousness and Cognition. Zeidan's co-authors are Susan K. Johnson, Zhanna David and Paula Goolkasian from the Department of Psychology at UNC Charlotte, and Bruce J. Diamond from William Paterson University. The research was also part of Zeidan's doctoral dissertation.

The experiment involved 63 student volunteers, 49 of whom completed it. Participants were randomly assigned in approximately equal numbers to one of two groups: one received the meditation training while the other listened for equivalent periods of time to a book (J.R.R. Tolkien's The Hobbit) being read aloud.

Prior to and following the meditation and reading sessions, the participants were subjected to a broad battery of behavioral tests assessing mood, memory, visual attention, attention processing, and vigilance.

Both groups performed equally on all measures at the beginning of the experiment. Both groups also improved in measures of mood following the meditation and reading experiences, but only the group that received the meditation training improved significantly on the cognitive measures. The meditation group consistently scored higher averages than the reading/listening group on all the cognitive tests, and as much as ten times better on one challenging test that involved sustaining focus while holding other information in mind.

"The meditation group did especially better on all the cognitive tests that were timed," Zeidan noted. "In tasks where participants had to process information under time constraints causing stress, the group briefly trained in mindfulness performed significantly better."

Particularly notable were the differing results on a “computer adaptive n-back task,” in which participants had to correctly remember whether a stimulus had been shown two steps earlier in a sequence. If the participant got the answer right, the computer reacted by presenting the subsequent stimulus more quickly, increasing the difficulty of the task. The meditation-trained group averaged approximately 10 consecutive correct answers, while the listening group averaged approximately one.
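The core logic of a 2-back task can be sketched in a few lines. This is a hypothetical reconstruction for illustration only – it is not the software used in the study, the letter set and match rate are invented, and the adaptive speed-up described above is omitted:

```python
import random

LETTERS = "ABCD"

def make_sequence(n, match_rate=0.3):
    """Generate an n-item letter stream in which roughly match_rate
    of the positions repeat the item shown two steps earlier."""
    seq = [random.choice(LETTERS), random.choice(LETTERS)]
    while len(seq) < n:
        if random.random() < match_rate:
            seq.append(seq[-2])  # planted 2-back match
        else:
            seq.append(random.choice([c for c in LETTERS if c != seq[-2]]))
    return seq

def longest_correct_streak(seq, answers):
    """answers[i - 2] is the participant's yes/no for position i >= 2:
    'does item i match item i - 2?'. Return the longest run of
    consecutive correct answers, the measure quoted in the study."""
    best = run = 0
    for i in range(2, len(seq)):
        if answers[i - 2] == (seq[i] == seq[i - 2]):
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

seq = make_sequence(20)
perfect = [seq[i] == seq[i - 2] for i in range(2, len(seq))]
print(longest_correct_streak(seq, perfect))  # a flawless run scores 18
```

The reported group difference – roughly ten consecutive correct answers versus one – is a difference in exactly this streak measure.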

"Findings like these suggest that meditation's benefits may not require extensive training to be realized, and that meditation's first benefits may be associated with increasing the ability to sustain attention," Zeidan said.

"Further study is warranted," he stressed, noting that brain imaging studies would be helpful in confirming the brain changes that the behavioral tests seem to indicate, "but this seems to be strong evidence for the idea that we may be able to modify our own minds to improve our cognitive processing – most importantly in the ability to sustain attention and vigilance – within a week's time."

The meditation training involved in the study was an abbreviated "mindfulness" training regime modeled on basic "Shamatha skills" from a Buddhist meditation tradition, conducted by a trained facilitator. As described in the paper, "participants were instructed to relax, with their eyes closed, and to simply focus on the flow of their breath occurring at the tip of their nose. If a random thought arose, they were told to passively notice and acknowledge the thought and to simply let 'it' go, by bringing the attention back to the sensations of the breath." Subsequent training built on this basic model, teaching physical awareness, focus, and mindfulness with regard to distraction.

Zeidan likens the brief training the participants received to a kind of mental calisthenics that prepared their minds for cognitive activity.

"The simple process of focusing on the breath in a relaxed manner, in a way that teaches you to regulate your emotions by raising one's awareness of mental processes as they're happening is like working out a bicep, but you are doing it to your brain. Mindfulness meditation teaches you to release sensory events that would easily distract, whether it is your own thoughts or an external noise, in an emotion-regulating fashion. This can lead to better, more efficient performance on the intended task."

"This kind of training seems to prepare the mind for activity, but it's not necessarily permanent," Zeidan cautions. "This doesn't mean that you meditate for four days and you're done – you need to keep practicing."

University of North Carolina Charlotte




Selected Science News. Copyright 2008 All Rights Reserved.