Thursday, November 11, 2010

100-MILLION-YEAR-OLD MISTAKE PROVIDES SNAPSHOT OF EVOLUTION

Research by University of Leeds plant scientists has uncovered a snapshot of evolution in progress, by tracing how a gene mutation over 100 million years ago led flowers to make male and female parts in different ways.

The findings – published in the Proceedings of the National Academy of Sciences (PNAS) Online Early Edition – provide a perfect example of how diversity stems from such genetic 'mistakes'. The research also opens the door to further investigation into how plants make flowers – the origins of the seeds and fruits that we eat.

In a number of plants, the gene involved in making male and female organs has duplicated to create two very similar copies. In rockcress (Arabidopsis), one copy still makes male and female parts, but the other has taken on a completely new role: it makes seed pods shatter open. In snapdragons (Antirrhinum), both copies are still linked to sex organs, but one makes mainly female parts, while retaining a small role in male organs, and the other can make only male parts.

"Snapdragons are on the cusp of splitting the job of making male and female organs between these two genes, a key moment in the evolutionary process," says lead researcher Professor of Plant Development, Brendan Davies, from Leeds' Faculty of Biological Sciences. "More genes with different roles gives an organism added complexity and opens the door to diversification and the creation of new species."

By tracing back through the evolutionary 'tree' for flowering plants, the researchers calculate that the gene duplication took place around 120 million years ago. But the mutation which separates how snapdragons and rockcress use this extra gene happened around 20 million years later.

The researchers have discovered that the different behaviour of the gene in each plant is linked to one amino acid. Although the genes look very similar, the proteins they encode don't always have this amino acid. When it is present, the activity of the protein is limited to making only male parts. When the amino acid isn't there, the protein is able to interact with a range of other proteins involved in flower production, enabling it to make both male and female parts.

"A small mutation in the gene fools the plant's machinery to insert an extra amino acid and this tiny change has created a dramatic difference in how these plants control making their reproductive organs," says Professor Davies. "This is evolution in action, although we don't know yet whether this mutation will turn out to be a dead end and go no further or whether it might lead to further complexities.

"Our research is an excellent example of how a chance imperfection sparks evolutionary change. If we lived in a perfect world, it would be a much less interesting one, with no diversity and no chance for new species to develop."

The researchers now plan to study the protein interactions that enable the production of both male and female parts, as part of a wider investigation into the genetic basis of flower production in plants.

University of Leeds

RESEARCHERS TAP NEW SOURCE OF CANCER MARKERS IN BLOOD

The future of cancer diagnosis may lie in just a few milliliters of blood, according to a research team led by Professor Arie Admon of the Technion-Israel Institute of Technology.

In a study released in the Proceedings of the National Academy of Sciences, the scientists report on a new source of blood-derived biomarkers that could soon help doctors determine whether a recovering cancer patient has relapsed, and may someday aid in the early detection of a variety of cancers. The technique may also “provide a large enough source of information to enable personalized treatment for the disease,” Admon said.

The biomarkers consist of immune molecules called HLA and their cargo of peptides, which are degraded bits of protein that they haul to the surface of tumor cells. Since cancer cells release larger amounts of the HLA molecules, "we may be able to diagnose different diseases, including cancer, by analyzing the repertoires of peptides carried by these soluble HLA," said Admon.

Most of the time, the HLA ferry these peptides to the cell surface for inspection by immune T cells, and small amounts of these HLA molecules are also released by the cells into the blood. Admon and his colleagues now show that the HLA molecules released into the blood continue to carry their peptide cargo.

So far, the method has been tested in blood from patients with multiple myeloma and leukemia, as well as healthy people and cancer cells grown in the lab. If their process holds up under further intensive testing, the researchers say, it could form “a foundation for development of a simple and universal blood-based cancer diagnosis.”

“We aim at early detection, leading to a better prognosis, relapse detection, and better information for personalized treatment,” said Admon. “All of these are long-term goals. We think that relapse detection may be the first achievable goal.”

Some researchers have suggested that the flood of HLA-peptide complexes released by tumor cells helps the cancer evade immune detection, by “blocking and confusing the anti-cancer T cells,” Admon said.

There are only a handful of peptides known to be associated with particular types of cancer, so the new technique could not yet be used to determine whether a person has a certain type of cancer, Admon explained. But researchers could study the soluble HLA-peptide repertoires to learn more about the proteins that each kind of tumor produces.

HLA molecules themselves come in a wide variety of subtypes, which differ between individuals. These subtypes differ from each other in the repertoires of peptides they carry and present. By analyzing these differences in “many people of diverse ethnic origin,” Admon said, “we will be able to come up with better diagnoses for larger parts of the human population.”

Someday, a person’s “healthy” HLA profile may join blood pressure and cholesterol readings as part of the person’s medical record, the researchers suggest in their PNAS report. Any changes in the HLA profile, they note, could be used for “detecting the telltale changes associated with the onset of diseases.”

(Photo: ATS)

American Technion Society

LIGHT ON SILICON BETTER THAN COPPER?

Duke engineers have designed and demonstrated microscopically small lasers that could replace the copper in a host of electronic products.

Step aside copper and make way for a better carrier of information -- light.

As good as the metal has been at zipping information from one circuit to another on silicon inside computers and other electronic devices, optical signals can carry much more, according to Duke University electrical engineers. So the engineers have designed and demonstrated microscopically small lasers integrated with thin-film light guides on silicon that could replace the copper in a host of electronic products.

The structures on silicon not only contain tiny light-emitting lasers, but connect these lasers to channels that accurately guide the light to its target, typically another nearby chip or component. This new approach could help engineers who, in their drive to create tinier and faster computers and devices, are studying light as the basis for the next-generation information carrier.

The engineers believe they have solved some of the unanswered riddles facing scientists trying to create and control light at such a minuscule scale.

"Getting light onto silicon and controlling it is the first step toward chip scale optical systems," said Sabarni Palit, who this summer received her Ph.D. while working in the laboratory of Nan Marie Jokerst, J.A. Jones Distinguished Professor of Electrical and Computer Engineering at Duke's Pratt School of Engineering.

The results of the team's experiments, which were supported by the Army Research Office, were published online in the journal Optics Letters.

"The challenge has been creating light on such a small scale on silicon, and ensuring that it is received by the next component without losing most of the light," Palit said.

"We came up with a way of creating a thin film integrated structure on silicon that not only contains a light source that can be kept cool, but can also accurately guide the wave onto its next connection," she said. "This integration of components is essential for any such chip-scale, light-based system."

The Duke team developed a method of taking the thick substrate off of a laser, and bonding this thin-film laser to silicon. The lasers are about one one-hundredth of the thickness of a human hair. These lasers are connected to other structures by laying down a microscopic layer of polymer that covers one end of the laser and goes off in a channel to other components. Each layer of the laser and light channel is given its specific characteristics, or functions, through nano- and micro-fabrication processes and by selectively removing portions of the substrate with chemicals.

"In the process of producing light, lasers produce heat, which can cause the laser to degrade," Sabarni said. "We found that including a very thin band of metals between the laser and the silicon substrate dissipated the heat, keeping the laser functional."

For Jokerst, reliably enabling individual chips or components to "talk" to each other using light is the next big challenge in the continuing drive to pack more processing power into smaller and smaller chip-scale packages.

"To use light in chip-scale systems is exciting," she said. "But the amount of power needed to run these systems has to be very small to make them portable, and they should be inexpensive to produce. There are applications for this in consumer electronics, medical diagnostics and environmental sensing."

The work on this project was conducted in Duke's Shared Materials Instrumentation Facility, which, like similar facilities in the semiconductor industry, allows the fabrication of intricate materials in a totally "clean" setting. Jokerst is the facility's executive director.

Duke University

THE NEXT CARBON CAPTURE TOOL COULD BE NEW, IMPROVED GRASS

A blade of grass destined to be converted into biofuel may join energy efficiency and other big-ticket strategies in the effort to reduce atmospheric carbon — but not in the way that you might think.

Miscanthus, a potential feedstock for biofuel, could pull double duty in the fight against climate change by sequestering carbon in the soil for thousands of years.

Sounds promising. But should scientists genetically engineer bioenergy crops to be better at ridding the atmosphere of the greenhouse gas? And can this strategy take place on the scale needed to mitigate climate change?

These questions are framed in a new analysis by Lawrence Berkeley National Laboratory scientist Christer Jansson and researchers from Oak Ridge National Laboratory. Their research, published in the October issue of BioScience, explores ways in which bioenergy crops can become a big player in the drive to rein in rising levels of atmospheric carbon dioxide.

The authors hope to get others thinking about engineering plants to not only produce biofuel, but to also sequester carbon.

“We want to encourage discussion and research on this topic,” says Jansson, a senior staff scientist in Berkeley Lab’s Earth Sciences Division and lead author of the analysis. “We need to explore the extent to which plants, and specifically genetically engineered plants, can reduce levels of atmospheric carbon.”

The conversation has already started. Scientific American and other news outlets and blogs have published articles on the team’s analysis since it was published a few weeks ago.

At the heart of the scientists' analysis is the idea that bioenergy crops can fight climate change in two ways. There's the obvious way, in which a plant's cellulosic biomass is converted into a carbon-neutral transportation fuel that displaces fossil fuels. And there's the not-so-obvious way: bioenergy crops also take in atmospheric carbon dioxide during photosynthesis and send a significant amount of the carbon to the soil via their roots. Carbon from plant biomass can also be incorporated into soil as a type of charcoal called biochar. Either way, the captured carbon could be out of circulation for millennia.

At stake is the urgent need to make a dent in the nine gigatons of carbon that human activities emit into the atmosphere each year (one gigaton is one billion tons). Natural processes such as plant photosynthesis annually capture about three gigatons of carbon from the atmosphere.

“We could double that in the next several decades,” says Jansson. “By 2050, we could get to five or six gigatons of carbon removed from the atmosphere by plants, and I think a major part of that could come from bioenergy crops like grasses and trees. They could make a big contribution in sequestering carbon, but other strategies will have to be used.”

As Jansson explains, to increase the capacity of plants to act as carbon sinks, scientists need to continue to develop bioenergy crops that are efficient at harvesting light energy and using that energy to convert carbon dioxide to biomass. Bioenergy crops should also have a high capacity to send the carbon they capture to their roots, where it has the best chance of being stored in soil for thousands of years.

Fortunately, top bioenergy crop candidates, such as Miscanthus, are already better-than-average carbon sinks. The large root systems of perennials such as grasses make them better at sequestering carbon in biomass and soil than annual plants.

But can bioenergy crops become even better? Jansson and colleagues outline several possibilities in their analysis. A plant’s canopy can be altered to enhance its efficiency at intercepting sunlight. Another approach accelerates a plant’s photoprotection mechanisms, which would improve its ability to use light. And a plant’s tolerances to various stresses could be improved without compromising yield.

A game-changing success, Jansson explains, could be the design of a bioenergy crop that can withstand drought and which utilizes brine, saline wastewater, or seawater for irrigation to avoid having to tap into freshwater supplies. Jansson suggests that genetic engineering can play a key role in introducing these traits into a plant.

“Bioenergy crops are likely to be engineered anyway,” he says. “It makes sense to also consider enhancing their ability to withstand stress and sequester carbon. This analysis will hopefully guide research and prompt people to think in new ways about bioenergy crops.”

(Photo: LBNL)

Lawrence Berkeley National Laboratory

YOUNGER BRAINS ARE EASIER TO REWIRE

About a decade ago, scientists studying the brains of blind people made a surprising discovery: A brain region normally devoted to processing images had been rewired to interpret tactile information, such as input from the fingertips as they trace Braille letters. Subsequent experiments revealed a similar phenomenon in other brain regions. However, these studies didn’t answer the question of whether the brain can rewire itself at any time, or only very early in life.

A new paper from MIT neuroscientists, in collaboration with Alvaro Pascual-Leone at Beth Israel Deaconess Medical Center, offers evidence that it is easier to rewire the brain early in life. The researchers found that a small part of the brain’s visual cortex that processes motion became reorganized only in the brains of subjects who had been born blind, not those who became blind later in life.

The new findings, described in the Oct. 14 issue of the journal Current Biology, shed light on how the brain wires itself during the first few years of life, and could help scientists understand how to optimize the brain’s ability to be rewired later in life. That could become increasingly important as medical advances make it possible for congenitally blind people to have their sight restored, said MIT postdoctoral associate Marina Bedny, lead author of the paper.

In the 1950s and ’60s, scientists began to think that certain brain functions develop normally only if an individual is exposed to relevant information, such as language or visual information, within a specific time period early in life. After that, they theorized, the brain loses the ability to change in response to new input.

Animal studies supported this theory. For example, cats blindfolded during the first months of life are unable to see normally after the blindfolds are removed. Similar periods of blindfolding in adulthood have no effect on vision.

However, there have been indications in recent years that there is more wiggle room than previously thought, said Bedny, who works in the laboratory of MIT assistant professor Rebecca Saxe, also an author of the Current Biology paper. Many neuroscientists now support the idea of a period early in life after which it is difficult, but not impossible, to rewire the brain.

Bedny, Saxe and their colleagues wanted to determine if a part of the brain known as the middle temporal complex (MT/MST) can be rewired at any time or only early in life. They chose to study MT/MST in part because it is one of the most studied visual areas. In sighted people, the MT region is specialized for motion vision.

In the few rare cases where patients have lost MT function in both hemispheres of the brain, they have been unable to sense motion in a visual scene. For example, if someone poured water into a glass, they would see only a standing, frozen stream of water.

Previous studies have shown that in blind people, MT is taken over by sound processing, but those studies didn’t distinguish between people who became blind early and late in life.

In the new MIT study, the researchers studied three groups of subjects — sighted, congenitally blind, and those who became blind later in life (age nine or older). Using functional magnetic resonance imaging (fMRI), they tested whether MT in these subjects responded to moving sounds — for example, approaching footsteps.

The results were clear, said Bedny. MT reacted to moving sounds in congenitally blind people, but not in sighted people or people who became blind at a later age.

This suggests that in late-blind individuals, the visual input they received in early years allowed the MT complex to develop its typical visual function, and it couldn’t be remade to process sound after the person lost sight. Congenitally blind people never received any visual input, so the region was taken over by auditory input after birth.

“We need to think of early life as a window of opportunity to shape how the brain works,” said Bedny. “That’s not to say that later experience can’t alter things, but it’s easier to get organized early on.”

Another important aspect of the work is the finding that in the congenitally blind, there is enhanced communication between the MT complex and the brain’s prefrontal cortex, said Ione Fine, associate professor of psychology at the University of Washington. That enhanced connection could help explain how the brain remodels the MT region to process auditory information. Previous studies have looked for enlarged nerve bundles, with no success. “People have been looking for bigger roads, but what she’s seeing is more traffic on the same-size road,” said Fine, who was not involved in the study.

Although this work supports the idea that brain regions can switch functions early in a person’s development, Bedny believes that by better understanding how the brain is wired during this period, scientists may be able to learn how to rewire it later in life. There are now very few cases of sight restoration, but if it becomes more common, scientists will need to figure out how to retrain the patient’s brain so it can process the new visual input.

“The unresolved question is whether the brain can relearn, and how that learning differs in an adult brain versus a child’s brain,” said Bedny.

Bedny hopes to study the behavioral consequences of the MT switch in future studies, including whether blind people have an advantage over sighted people in auditory motion processing, and whether they would be at a disadvantage if their sight were restored.

(Photo: MIT)

MIT

PORTABLE BREAST SCANNER ALLOWS CANCER DETECTION IN THE BLINK OF AN EYE

Women could have a fast test for breast cancer and instantly identify the presence of a tumour in the comfort of their own home thanks to ground-breaking new research from The University of Manchester.

Professor Zhipeng Wu has invented a portable scanner based on radio frequency technology which can show, on a computer and within a second, the presence of tumours – malignant and benign – in the breast.

The use of radio frequency or microwave technology for breast cancer detection has been proven by researchers in the US, Canada and the UK.

However, until now it could take a few minutes for an image to be produced, and this had to be done in a hospital or specialist care centre.

Now Professor Wu, from the University’s School of Electrical and Electronic Engineering, says concerned patients can receive real-time video images using the radio frequency scanner, which would clearly and simply show the presence of a tumour.

Not only is this a quicker and less-intrusive means of testing, it also means women can be tested at GP surgeries, which could help dramatically reduce waiting times and in some cases avoid unnecessary X-ray mammography. The scanner could also be used at home for continuous monitoring of breast health.

The patented real-time radio frequency scanner uses computed tomography and works with the same technology as a mobile phone, but at only a tiny fraction of its power.

This makes it both safe and low-cost and the electronics can be housed in a case the size of a lunch box for compactness and portability. Other existing systems are much larger.

Breast cancer is the second biggest killer of women, accounting for 8.2% of all cancer deaths. October is National Breast Cancer Awareness Month.

Until now, the usual way of detecting breast cancer has been mammography, which works well for women over the age of 50 and can give results of up to 95% accuracy.

But it is far less effective for younger women. The detection rate can be as low as 60% for women under the age of 50, who account for 20% of all breast cancer cases.

At that stage it is even more important to get an accurate diagnosis. Early diagnosis and treatment could save thousands of lives.

The main difference between the two methods is that mammography works on density, while the radio frequency technique works on dielectric contrasts between normal and diseased breast tissues.

In Professor Wu’s design, as soon as the breast enters the cup an image appears on screen.

The presence of a tumour or other abnormality will show up in red as the sensor detects differences in tissue contrasts at radio frequencies. Malignant tissues have higher permittivity and conductivity and therefore appear different from normal tissue on screen.

Up to 30 images are generated every second, meaning a breast scan could be over in a far shorter time than is currently possible.

Professor Wu said: “The system we have is portable and as soon as you lie down you can get a scan – it’s real-time.

“The real-time imaging minimises the chance of missing a breast tumour during scanning.

“Other systems also need to use a liquid or gel as a matching substance, such as in an ultrasound, to work but with our system you don’t need that – it can be done simply in oil, milk, water or even with a bra on.

“Although there is still research to be done, the system has great potential to bring a new way for breast cancer diagnosis.

“This will benefit millions of women in both developed and developing countries bearing in mind that one in nine women may develop breast cancer in their lifetime.”

Professor Wu submitted his innovative sensor system to the IET Innovation Awards. The technology has been shortlisted in both the Electronics and the Measurement in Action categories. The winners will be announced in November.

(Photo: U. Manchester)

University of Manchester

ONE-FIFTH OF WORLD'S VERTEBRATE SPECIES THREATENED

The most comprehensive assessment of the world’s vertebrates (mammals, birds, amphibians, reptiles and fishes) confirms an extinction crisis, with one-fifth of species threatened. However, the situation would be worse were it not for current global conservation efforts, according to a study launched by the International Union for Conservation of Nature (IUCN).

The study, coordinated by Dr Simon Stuart, Chair of IUCN’s Species Survival Commission and visiting professor at the University of Bath, used data for 25,000 species from The IUCN Red List of Threatened Species™, to investigate the status of the world’s vertebrates and how this status has changed over time.

The results show that, on average, 50 species of mammal, bird and amphibian move closer to extinction each year due to the impacts of agricultural expansion, logging, over-exploitation and invasive alien species.

The research, to be published in the international journal Science, involved 174 scientists from 115 institutions and 38 countries.

However, whilst the study confirms previous reports of continued losses in biodiversity, it also presents clear evidence of the positive impact of conservation efforts around the globe.

Results show that the status of biodiversity would have declined by nearly 20 per cent if conservation action had not been taken.

Dr Stuart said: “History has shown us that conservation can achieve the impossible, as anyone who knows the story of the White Rhinoceros in southern Africa is aware.”

“But this is the first time we can demonstrate the aggregated positive impact of these successes on the state of the environment.”

The study highlights 64 mammal, bird and amphibian species that have improved in status due to successful conservation action. This includes three species that were extinct in the wild and have since been re-introduced back to nature: the California Condor and the Black-footed Ferret in the United States, and Przewalski’s Horse in Mongolia.

Conservation efforts have been particularly successful at combating invasive alien species on islands. The global population of the Seychelles Magpie-robin increased from fewer than 15 birds in 1965 to 180 in 2006 through control of introduced predators, such as the Brown Rat, and through captive-breeding and re-introduction programmes.

Another conservation success story is the ban on commercial whaling, which has seen the Humpback Whale move from Vulnerable to Least Concern.

The authors caution that their study represents only a minimum estimate of the true impact of conservation, highlighting that some nine per cent of threatened species have increasing populations. Their results show that conservation works, given resources and commitment.

They also show that global responses will need to be substantially scaled up, because the current level of conservation action is outweighed by the magnitude of threat.

(Photo: NOAA's National Ocean Service)

University of Bath

GETTING THE BIG PICTURE QUICKLY

University of Utah computer scientists developed software that quickly edits "extreme resolution imagery" - huge photographs containing billions to hundreds of billions of pixels or dot-like picture elements. Until now, it took hours to process these "gigapixel" images. The new software needs only seconds to produce preview images useful to doctors, intelligence analysts, photographers, artists, engineers and others.

By sampling only a fraction of the pixels in a massive image - for example, a satellite photo or a panorama made of hundreds of individual photos - the software can produce good approximations or previews of what the fully processed image will look like.

That allows someone to interactively edit and analyze massive images - pictures larger than a gigapixel (billion pixels) - in seconds rather than hours, says Valerio Pascucci, an associate professor of computer science at the University of Utah and its Scientific Computing and Imaging (SCI) Institute.

"You can go anywhere you want in the image," he says. "You can zoom in, go left, right. From your perspective, it is as if the full 'solved' image has been computed."

He compares the photo-editing software with public opinion polling: "You ask a few people and get the answer as if you asked everyone. It's exactly the same thing."
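
The polling analogy maps directly onto code. As a rough illustration (a hypothetical Python sketch, not the actual ViSUS implementation, which streams its samples from a special on-disk layout), a preview can be built by keeping only every n-th pixel in each direction, with n chosen to fit a pixel budget:

    import numpy as np

    def preview(image: np.ndarray, max_pixels: int = 1_000_000) -> np.ndarray:
        """Approximate a huge image by sampling a regular grid of its pixels."""
        h, w = image.shape[:2]
        # Pick a stride so the preview contains at most max_pixels samples.
        stride = max(1, int(np.ceil(np.sqrt(h * w / max_pixels))))
        return image[::stride, ::stride]

    # Toy stand-in for a gigapixel image (a real one would not fit in memory,
    # which is why ViSUS reads its samples directly from disk instead).
    img = np.random.randint(0, 256, size=(2000, 3000, 3), dtype=np.uint8)
    print(preview(img, max_pixels=10_000).shape)  # -> (80, 120, 3)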

The new software - Visualization Streams for Ultimate Scalability, or ViSUS - allows gigapixel images stored on an external server or drive to be edited from a large computer, a desktop or laptop computer, or even a smart phone, Pascucci says.

"The same software runs very well on an iPhone or a large computer," he adds.

The paper calls ViSUS "a simple framework for progressive processing of high-resolution images with minimal resources … [that] for the first time, is capable of handling gigapixel imagery in real time."

Pascucci conducted the research with University of Utah SCI Institute colleagues Brian Summa, a doctoral student in computing; Giorgio Scorzelli, a senior software developer; and Peer-Timo Bremer, a computer scientist at Lawrence Livermore National Laboratory in California, where co-author Ming Jiang also works.

The research was funded by the U.S. Department of Energy and the National Science Foundation. The University of Utah Research Foundation and Lawrence Livermore share a patent on the software, and the researchers plan to start a company to commercialize ViSUS.

Pascucci defines massive imagery as images containing more than one gigapixel - which is equal to 100 photos from a 10-megapixel (10 million pixel) digital camera.

In the study, the computer scientists used a number of images ranging in size from megapixels (millions of picture elements) to hundreds of gigapixels to test how well the ViSUS software let them interactively edit large images, and to show how well the software can handle images of various sizes, from small to extremely large.

In one example, they used the software to perform "seamless cloning," which means taking one image and merging it with another image. They combined a 3.7-gigapixel image of the entire Earth with a 116-gigapixel satellite photo of the city of Atlanta, zooming in on the Gulf of Mexico and putting Atlanta underwater there.

"An artist can interactively place a copy of Atlanta under shallow water and recreate the lost city of Atlantis," says the new study, which is titled, "Interactive Editing of Massive Imagery Made Simple: Turning Atlanta into Atlantis."

"It's just a way to demonstrate how an artist can manipulate a huge amount of data in an image without being encumbered by the file size," says Pascucci.

Pascucci, Summa and colleagues also used a camera mounted on a robotic panning device and placed atop a University of Utah building to take 611 photographs during a six-hour period. Together, the photos covered the entire Salt Lake Valley.

At full resolution, it took them four hours to do "panorama stitching," which is stitching the mosaic of photos together into a 3.27-gigapixel panorama of the valley that eliminated the seams between the images and differences in their exposures, says Summa, first author of the study.

But using the ViSUS software, it took only two seconds to create a "global preview" of the entire Salt Lake panorama that looked almost as good - and had a relatively low resolution of only 0.9 megapixels, or one-3,600th as much data as the full-resolution panorama.

And that preview image is interactive, so a photo editor can make different adjustments - such as tint, color intensity and contrast - and see the effects in seconds.

Pascucci says ViSUS' significance is not in creating the preview, but in allowing an editor to zoom in on any part of the low-resolution panorama and quickly see and edit a selected portion of it at full resolution. Older software required the full resolution image to be processed before it could be edited.

Pascucci says the method can be used to edit medical images such as MRI and CT scans - and can do so in three dimensions, even though their study examined only two-dimensional images. "We can handle 2-D and 3-D in the same way," he says.

The software also might lead to more sophisticated computer games. "We are studying the possibility of involving the player in building their own [gaming] environment on the fly," says Pascucci.

The software also will be useful to intelligence analysts examining satellite photos, and researchers using high-resolution microscopes, for example, to study how the eye's light-sensing retina is "wired" by nerves, based on detailed microscopic images.

An intelligence analyst may need to compare two 100-gigabyte satellite photos of the same location but taken at different times - perhaps to learn if aircraft or other military equipment arrived or left that location between the times the photos were taken.

Conventional software to compare the photos must go through all the data in each photo and compare differences - a process that "would take hours. It might be a whole day," Pascucci says. But with ViSUS, "we quickly build an approximation of the difference between the images, and allow the analyst to explore interactively smaller regions of the total image at higher resolution without having to wait."

Pascucci says two key parts of the software must work together delicately:

* "One is the way we store the images - the order in which we store the pixels on the disk. That is part of the technology being patented" because the storage format "allows you to retrieve the sample of pixels you want really fast."

* How the data are processed is the software's second crucial feature. The algorithm - a set of formulas and rules - for processing image data allows the researchers to use only a subset of pixels, which they can move efficiently.

The image processing method can produce previews at various resolutions by taking progressively more and more pixels from the data that make up the entire full-resolution image.

Normally, the amount of memory used in a computer to edit and preview a massive image would have to be large enough to handle the entire data set for that image.

"In our method, the preview has constant size, so it can always fit in memory, even if the fine-resolution data keep growing," Pascucci says.

Data for the full-resolution image is stored on a disk or drive, and ViSUS repeatedly swaps data with the disk as needed for creating new preview images as editing progresses. The software does that very efficiently by pulling more and more data subsets from the full image data in the form of progressively smaller Z-shaped sets of pixels.
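
The article does not spell out the patented storage order, but the "progressively smaller Z-shaped sets of pixels" it describes match the well-known Z-order (Morton) curve, in which the bits of a pixel's x and y coordinates are interleaved to determine its position in the file. A minimal, hypothetical Python sketch of that indexing idea (an illustration of the concept, not the ViSUS format itself):

    def morton_index(x: int, y: int) -> int:
        """Interleave the bits of x and y to get a Z-order (Morton) index."""
        z = 0
        for bit in range(32):
            z |= ((x >> bit) & 1) << (2 * bit)      # x bits fill even positions
            z |= ((y >> bit) & 1) << (2 * bit + 1)  # y bits fill odd positions
        return z

    # The four pixels of a 2x2 block land on consecutive indices 0..3,
    # so spatially close pixels stay close together on disk:
    print([morton_index(x, y) for y in (0, 1) for x in (0, 1)])  # [0, 1, 2, 3]

Storing pixels in an order like this means that a coarse, evenly spaced sample of the whole image corresponds to a compact portion of the file, which is what makes fetching each successive preview so fast.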

Pascucci says ViSUS' major contribution is that "we don't need to read all the data to give you an approximation" of the full image.

If an image contained a terabyte of data - a trillion bytes - the software could produce a good approximation of the image using only one-millionth of the total image data, or about a megabyte, Pascucci says.

The computer scientists now have gone beyond the 116-gigapixel Atlanta image and, in unpublished work, have edited satellite images of multiple cities exceeding 500 gigapixels. The next target: a terapixel image - 1,000 gigapixels or 1 trillion pixels.

(Photo: Scientific Computing and Imaging Institute, University of Utah)

University of Utah

SCENTED CONSUMER PRODUCTS SHOWN TO EMIT MANY UNLISTED CHEMICALS

The sweet smell of fresh laundry may contain a sour note. Widely used fragranced products -- including those that claim to be "green" -- give off many chemicals that are not listed on the label, including some that are classified as toxic.

A study led by the University of Washington discovered that 25 commonly used scented products emit an average of 17 chemicals each. Of the 133 different chemicals detected, nearly a quarter are classified as toxic or hazardous under at least one federal law. Only one emitted compound was listed on a product label, and only two were publicly disclosed anywhere. The article is published online today in the journal Environmental Impact Assessment Review.

"We analyzed best-selling products, and about half of them made some claim about being green, organic or natural," said lead author Anne Steinemann, a UW professor of civil and environmental engineering and of public affairs. "Surprisingly, the green products' emissions of hazardous chemicals were not significantly different from the other products."

More than a third of the products emitted at least one chemical classified as a probable carcinogen by the U.S. Environmental Protection Agency, and for which the EPA sets no safe exposure level.

Manufacturers are not required to disclose any ingredients in cleaning supplies, air fresheners or laundry products, all of which are regulated by the Consumer Product Safety Commission. Neither these nor personal care products, which are regulated by the Food and Drug Administration, are required to list ingredients used in fragrances, even though a single "fragrance" in a product can be a mixture of up to several hundred ingredients, Steinemann said.

So Steinemann and colleagues have used chemical sleuthing to discover what is emitted by the scented products commonly used in homes, public spaces and workplaces.

The study analyzed air fresheners including sprays, solids and oils; laundry products including detergents, fabric softeners and dryer sheets; personal care products such as soaps, hand sanitizers, lotions, deodorant and shampoos; and cleaning products including disinfectants, all-purpose sprays and dish detergent. All were widely used brands, with more than half being the top-selling product in its category.

Researchers placed a sample of each product in a closed glass container at room temperature and then analyzed the surrounding air for volatile organic compounds, small molecules that evaporate off a product's surface. They detected chemical concentrations ranging from 100 micrograms per cubic meter (the minimum value reported) to more than 1.6 million micrograms per cubic meter.

The most common emissions included limonene, a compound with a citrus scent; alpha-pinene and beta-pinene, compounds with a pine scent; ethanol; and acetone, a solvent found in nail polish remover.

All products emitted at least one chemical classified as toxic or hazardous. Eleven products emitted at least one probable carcinogen according to the EPA. These included acetaldehyde, 1,4-dioxane, formaldehyde and methylene chloride.

The only chemical listed on any product label was ethanol, and the only additional substance listed on a chemical safety report, known as a material safety data sheet, was 2-butoxyethanol.

"The products emitted more than 420 chemicals, collectively, but virtually none of them were disclosed to consumers, anywhere," Steinemann said.

Because product formulations are confidential, it was impossible to determine whether a chemical came from the product base, the fragrance added to the product, or both.

Tables included with the article list all chemicals emitted by each product and the associated concentrations, although they do not disclose the products' brand names.

"We don't want to give people the impression that if we reported on product 'A' and they buy product 'B,' that they're safe," Steinemann said. "We found potentially hazardous chemicals in all of the fragranced products we tested."

The study establishes the presence of various chemicals but makes no claims about the possible health effects. Two national surveys published by Steinemann and a colleague in 2009 found that about 20 percent of the population reported adverse health effects from air fresheners, and about 10 percent complained of adverse effects from laundry products vented to the outdoors. Among asthmatics, such complaints were roughly twice as common.

The Household Product Labeling Act, currently being reviewed by the U.S. Senate, would require manufacturers to list ingredients in air fresheners, soaps, laundry supplies and other consumer products. Steinemann says she is interested in fragrance mixtures, which are included in the proposed labeling act, because of the potential for unwanted exposure, or what she calls "secondhand scents."

As for what consumers who want to avoid such chemicals should do in the meantime, Steinemann suggests using simpler options such as cleaning with vinegar and baking soda, opening windows for ventilation and using products without any fragrance.

"In the past two years, I've received more than 1,000 e-mails, messages, and telephone calls from people saying: 'Thank you for doing this research, these products are making me sick, and now I can start to understand why,'" Steinemann said.

University of Washington
