Friday, February 18, 2011

Words help people form mathematical concepts

Language may play an important role in learning the meanings of numbers, scholars at the University of Chicago report.

A study based on research on deaf people in Nicaragua who never learned formal sign language showed that people who communicate using self-developed gestures, called homesigns, were unable to comprehend the value of numbers greater than three because they had not learned a language containing symbols used for counting.

By contrast, deaf people who acquire conventional sign language as children can learn the meaning of large numbers. Researchers believe this is because conventional sign language, like spoken languages, imparts a counting routine early in childhood.

The study illustrates the complexity of learning the symbolic relationships embedded in language, including seemingly simple numerical concepts. The work may help researchers learn more about how language shapes the way children learn early mathematical concepts, and how that crucial process can go awry in the preschool years.

"It's not just the vocabulary words that matter, but understanding the relationships that underlie the words––the fact that 'eight' is one more than 'seven' and one less than 'nine.' Without having a set of number words to guide them, deaf homesigners in the study failed to understand that numbers build on each other in value," said Susan Goldin-Meadow, the Bearsdley Ruml Distinguished Service Professor in Psychology at the University.

The findings are reported in the paper, "Number Without a Language Model," published in the current issue of the Proceedings of the National Academy of Sciences. The lead author is University researcher Elizabet Spaepen, a 2008 Ph.D. graduate in psychology who did field work in Nicaragua as part of the study.

Scholars have previously found that people in isolated cultures do not learn the value of large numbers when they are not part of the local language. Two groups studied in the Amazon, for instance, do not have words for numbers greater than five and are unable to match two rows of checkers containing more than five items. Their local culture does not require the use of exact large numbers, which could explain the Amazonians' difficulty with them.

However, most Nicaraguans do use exact numbers in everyday monetary transactions. Although the deaf homesigners in the UChicago study understood the relative worth of different currency items, they apparently had an incomplete understanding of their numerical values because they had never been taught number words, Spaepen said.

For the study, scholars gave homesigners a series of tasks to determine how well they could recognize money. They were shown 10-unit and 20-unit bills and asked which had more value. When asked if nine 10-unit coins had more or less value than a 100-unit bill, each of the homesigners was able to determine the money's relative value.

"The coins and bills used in Nicaraguan currency vary in size and color according to value, which give clues to their value, even if the user has no knowledge of numbers," Spaepen said. The deaf homesigners could learn about currency based on its color and shape without fully understanding its numerical value.

To see if the homesigners could express numerical value outside of the context of money, the scholars showed them animated videos in which numbers were an important part of the plot. They then asked the deaf individuals to retell the video to a friend or relative using homesigns. As the numbers grew, the homesigners became increasingly less able to produce an accurate gesture for the number with their fingers.

They were then shown cards with different numbers of items on them, and asked to give a gesture that represented the number of items. The homesigners were accurate only up to the number 3. In addition, they had difficulty making a second row of checkers match a target row when there were more than three checkers in the target, despite the fact that this task did not require any comprehension or production of number gestures. Their difficulty in understanding large numbers therefore did not stem from an inability to communicate about large numbers, but rather from an inability to think about them, the researchers concluded.

Researchers performed the tests on hearing, unschooled Nicaraguans, as well as deaf individuals trained in American Sign Language. Both groups outperformed the Nicaraguan homesigners in the study.

(Photo: iStockphoto.com)

University of Chicago

Bad Things Seem Even Worse If People Have to Live Through Them Again

When people think unpleasant events are over, they remember them as being less painful or annoying than when they expect them to happen again, pointing to the power of expectation to help people brace for the worst, according to studies published by the American Psychological Association.

In a series of eight studies exposing people to annoying noise, subjecting them to tedious computer tasks, or asking them about menstrual pain, participants recalled such events as being significantly more negative if they expected them to happen again soon.

This reaction might be adaptive: People may keep their equilibrium by using memory to steel themselves against future harm, said co-authors Jeff Galak, PhD, of Carnegie Mellon University, and Tom Meyvis, PhD, of New York University. Their findings appear in the February issue of the Journal of Experimental Psychology: General.

The laboratory studies (of 30, 44, 112, 154, 174, 160 and 51 subjects) exposed people to five seconds of vacuum cleaner noise. People who were told they would have to listen to more vacuum cleaner noise said it was significantly more irritating than people who were told the noise was over.

Subsequent studies replicated this finding using larger samples and boring, repetitive tasks -- such as dragging circles from the left to the right side of a computer screen 50 times. Again, people who were told they would have to do it again said the task was significantly more irritating, boring and annoying than people told when they were done.

Other studies varied the method to allow researchers to understand what subjects were experiencing emotionally. For example, the researchers found evidence that people used more intensely negative memories to steel themselves against the future. Also, not having time to reflect on the first experience, or having their resources drained by a demanding “filler” task, reduced the power of expectation.

Also, people recalled fun activities, such as playing video games, as equally enjoyable whether they thought they would play again or not. The authors concluded that dread of repeating an unpleasant experience intensifies its remembered negativity, whereas anticipating the repetition of a pleasant experience does not correspondingly enhance its remembered enjoyment.

In the culminating field study of 180 women (average age 29), those whose menstrual periods had ended fewer than three days earlier or who expected their periods within three days remembered their last period as significantly more painful than women in the middle of their cycle (none were currently menstruating).

“The prospect of repeating an experience can, in fact, change how people remember it,” the authors concluded. Bracing for the worst may actually help people to reduce their discomfort if a bad experience should happen, and allow them to be pleasantly surprised if it does not, they added.

American Psychological Association

Play was important -- even 4,000 years ago

Play was a central element of people's lives as far back as 4,000 years ago. This has been revealed by an archaeology thesis from the University of Gothenburg, Sweden, which investigates the social significance of the phenomenon of play and games in the Bronze Age Indus Valley in present-day Pakistan.

It is not uncommon for archaeologists excavating old settlements to come across play and game-related finds, but within established archaeology these types of finds have often been disregarded.

"They have been regarded, for example, as signs of harmless pastimes and thus considered less important for research, or have been reinterpreted based on ritual aspects or as symbols of social status," explains author of the thesis Elke Rogersdotter.

She has studied play-related artefacts found at excavations in the ruins of the ancient city of Mohenjo-daro in present-day Pakistan. The remains constitute the largest urban settlement from the Bronze Age in the Indus Valley, a cultural complex of the same era as ancient Egypt and Mesopotamia. The settlement is difficult to interpret; for example, archaeologists have not found any remains of temples or palaces. It has therefore been tough to offer an opinion on how the settlement was managed or how any elite class marked itself out.

Elke Rogersdotter's study shows some surprising results. Almost every tenth find from the ruined city is play-related. They include, for instance, different forms of dice and gaming pieces. In addition, the finds were not scattered at random: repetitive patterns in their spatial distribution may indicate specific locations where games were played.

"The marked quantity of play-related finds and the structured distribution shows that playing was already an important part of people's everyday lives more than 4,000 years ago," says Elke.

"The reason that play and game-related artefacts often end up ignored or being reinterpreted at archaeological excavations is probably down to scientific thinking's incongruity with the irrational phenomenon of games and play," believes Elke.

"The objective of determining the social significance of the actual games therefore, in turn, challenges established ways of thinking. It is an instrument we can use to come up with interpretations that are closer to the individual person. We may gain other, more socially-embedded, approaches for a difficult-to-interpret settlement."

The thesis has been successfully defended.

(Photo: Elke Rogersdotter, University of Gothenburg)

University of Gothenburg

Caltech Geobiologists Uncover Links between Ancient Climate Change and Mass Extinction

About 450 million years ago, Earth suffered the second-largest mass extinction in its history—the Late Ordovician mass extinction, during which more than 75 percent of marine species died. Exactly what caused this tremendous loss in biodiversity remains a mystery, but now a team led by researchers at the California Institute of Technology (Caltech) has discovered new details supporting the idea that the mass extinction was linked to a cooling climate.

"While it’s been known for a long time that the mass extinction is intimately tied to climate change, the precise mechanism is unclear," says Seth Finnegan, a postdoctoral researcher at Caltech and the first author of the paper published online in Science on January 27. The mass extinction coincided with a glacial period, during which global temperatures cooled and the planet saw a marked increase in glaciers. At this time, North America was on the equator, while most of the other continents formed a supercontinent known as Gondwana that stretched from the equator to the South Pole.

By using a new method to measure ancient temperatures, the researchers have uncovered clues about the timing and magnitude of the glaciation and how it affected ocean temperatures near the equator. "Our observations imply a climate system distinct from anything we know about over the last 100 million years," says Woodward Fischer, assistant professor of geobiology at Caltech and a coauthor.

The fact that the extinction struck during a glacial period, when huge ice sheets covered much of what's now Africa and South America, makes it especially difficult to evaluate the role of climate. "One of the biggest sources of uncertainty in studying the paleoclimate record is that it’s very hard to differentiate between changes in temperature and changes in the size of continental ice sheets," Finnegan says. Both factors could have played a role in causing the mass extinction: with more water frozen in ice sheets, the world’s sea levels would have been lower, reducing the availability of shallow water as a marine habitat. But differentiating between the two effects is a challenge because until now, the best method for measuring ancient temperatures has also been affected by the size of ice sheets.

The conventional method for determining ancient temperature requires measuring the ratios of oxygen isotopes in minerals precipitated from seawater. The ratios depend on both temperature and the concentration of isotopes in the ocean, so the ratios reveal the temperature only if the isotopic concentration of seawater is known. But ice sheets preferentially lock up one isotope, which reduces its concentration in the ocean. Since no one knows how big the ice sheets were, and these ancient oceans are no longer available for scientists to analyze, it's hard to determine this isotopic concentration. As a result of this "ice-volume effect," there hasn’t been a reliable way to know exactly how warm or cold it was during these glacial periods.

(Image caption: The researchers analyzed the chemistry of fossils to determine temperatures during the Late Ordovician period. Shown are fossilized shells (brachiopods, trilobites, and gastropods), with a hammer for scale.)
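To see why the conventional approach is ambiguous, consider a toy calculation. The sketch below uses a generic quadratic carbonate paleotemperature equation of the kind described above; the coefficients and sample values are illustrative assumptions, not figures from the study. The same measured shell value implies very different temperatures depending on the assumed seawater composition, which is exactly what changing ice volume alters.

```python
# Illustrative sketch (not the study's code): why a carbonate oxygen-isotope
# measurement alone cannot pin down temperature. The quadratic form below is a
# commonly used paleotemperature equation; the coefficients and sample values
# are illustrative assumptions.

def carbonate_temperature_celsius(delta_carbonate, delta_seawater):
    """Temperature implied by a carbonate d18O value for an assumed seawater d18O (both in per mil)."""
    d = delta_carbonate - delta_seawater
    return 16.0 - 4.14 * d + 0.13 * d ** 2

measured_shell = -4.0  # per mil; a hypothetical Ordovician shell value

# Ice sheets preferentially lock up one isotope, shifting the isotopic
# composition of seawater. Without independent knowledge of ice volume, the
# same shell is consistent with a wide range of temperatures.
for seawater_d18o in (-1.0, 0.0, 1.0):
    t = carbonate_temperature_celsius(measured_shell, seawater_d18o)
    print(f"assumed seawater d18O = {seawater_d18o:+.1f} per mil -> T = {t:.1f} C")
```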

But by using a new type of paleothermometer developed in the laboratory of John Eiler, Sharp Professor of Geology and professor of geochemistry at Caltech, the researchers have determined the average temperatures during the Late Ordovician—marking the first time scientists have been able to overcome the ice-volume effect for a glacial episode that happened hundreds of millions of years ago. To make their measurements, the researchers analyzed the chemistry of fossilized marine animal shells collected from Quebec, Canada, and from the midwestern United States.

The Eiler lab’s method, which does not rely on the isotopic concentration of the oceans, measures temperature by looking at the "clumpiness" of heavy isotopes found in fossils. Higher temperatures cause the isotopes to bond in a more random fashion, while low temperatures lead to more clumping.
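As a rough illustration of how such a "clumped isotope" thermometer works, the sketch below inverts a calibration of the standard 1/T² form to turn a clumping measurement into a temperature. The functional form is the one generally used for this kind of thermometry, but the coefficients and example values are illustrative assumptions, not the Eiler lab's calibration.

```python
# Illustrative sketch (not the Eiler lab's calibration): converting a measure
# of isotopic "clumpiness" into temperature. More clumping means a colder
# formation temperature. Coefficients are illustrative assumptions.
import math

A = 0.0392e6  # illustrative slope, per mil * K^2
B = 0.153     # illustrative intercept, per mil

def temperature_from_clumping(delta_47):
    """Invert clumping = A / T^2 + B to get temperature in degrees Celsius."""
    t_kelvin = math.sqrt(A / (delta_47 - B))
    return t_kelvin - 273.15

# Crucially, this estimate does not depend on the isotopic composition of the
# seawater the shell grew in, so it sidesteps the ice-volume effect.
for d47 in (0.58, 0.62, 0.66):  # per mil; larger = more clumped = colder
    print(f"clumping = {d47:.2f} per mil -> T ~ {temperature_from_clumping(d47):.0f} C")
```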

"By providing independent information on ocean temperature, this new method allows us to know the isotopic composition of 450-million-year-old seawater," Finnegan says. "Using that information, we can estimate the size of continental ice sheets through this glaciation." And with a clearer idea of how much ice there was, the researchers can learn more about what Ordovician climate was like—and how it might have stressed marine ecosystems and led to the extinction.

"We have found that elevated rates of climate change coincided with the mass extinction," says Aradhna Tripati, a coauthor from UCLA and visiting researcher in geochemistry at Caltech.

The team discovered that even though tropical ocean temperatures were higher than they are now, moderately sized glaciers still existed near the poles before and after the mass extinction. But during the extinction intervals, glaciation peaked. Tropical surface waters cooled by five degrees, and the ice sheets on Gondwana grew to be as large as 150 million cubic kilometers—bigger than the glaciers that covered Antarctica and most of the Northern Hemisphere during the modern era’s last ice age 20,000 years ago.

"Our study strengthens the case for a direct link between climate change and extinction," Finnegan says. "Although polar glaciers existed for several million years, they only caused cooling of the tropical oceans during the short interval that coincides with the main pulse of mass extinction."

(Photo: Caltech)

California Institute of Technology (Caltech)

UMD advance lights possible new path to creating next gen computer chips

University of Maryland researchers have made a breakthrough in the use of visible light for making tiny integrated circuits. Though their advance is probably at least a decade from commercial use, they say it could one day make it possible for companies like Intel to continue their decades-long trend of making ever smaller, faster, and cheaper computer chips.

For some 50 years, the integrated circuits, or chips, that are at the heart of computers, smart phones, and other high-tech devices have been created through a technique known as photolithography, in which each computer chip is built up in layers.

In photolithography, each layer of a conductive material (metal, treated silicon, etc.) is deposited on a chip and coated with a light-sensitive chemical, called a photoresist, that hardens when exposed to light. Light shining through a kind of stencil known as a mask projects a detailed pattern onto the photoresist, which hardens where it's exposed. Then, the unhardened areas of photoresist and the underlying metal are etched away with a chemical. Finally, the remaining photoresist is removed using a different chemical treatment, leaving an underlying layer of metal with the same shape as the mask.

However, fitting more and more circuits on each chip has meant making smaller and smaller circuits. In fact, features of circuits in today's computer chips are significantly smaller than the wavelength of visible light. As a result, manufacturers have gone to using shorter and shorter wavelengths of light (radiation), or even charged particles, to enable them to make these circuits.

University of Maryland chemistry Professor John Fourkas and his research group recently introduced a technique called RAPID lithography that makes it possible to use visible light to attain lithographic resolution comparable to (and potentially even better than) that obtained with shorter-wavelength radiation.

"Our RAPID technique could offer substantial savings in cost and ease of production," Fourkas said. "Visible light is far less expensive to generate, propagate and manipulate than shorter wavelength forms of electromagnetic radiation, such as vacuum ultraviolet or X-rays. And using visible light would not require the use of the high vacuum conditions needed for current short wavelength technologies."

The key to RAPID is the use of a special "photoinitiator" that can be excited, or turned on, by one laser beam and deactivated by another. In new work just published online by Nature Chemistry, Fourkas and his group report three broad classes of common dye molecules that can be used for RAPID lithography.

In earlier work, Fourkas and his team used a beam of ultrafast pulses for the excitation step and a continuous laser for deactivation. However, they say that in some of their newly reported materials deactivation is so efficient that the ultrafast pulses of the excitation beam also deactivate molecules. This produces the surprising result that higher exposures can yield smaller features, a behavior the researchers call a proportional velocity (PROVE) dependence.

"PROVE behavior is a simple way to identify photoinitiators that can be deactivated efficiently," says Fourkas, "which is an important step towards being able to use RAPID in an industrial setting."

By combining a PROVE photoinitiator with a photoinitiator that has a conventional exposure dependence, Fourkas and co-workers were also able to demonstrate a photoresist for which the resolution was independent of the exposure over a broad range of exposure times.

"Imagine a photographic film that always gives the right exposure no matter what shutter speed is used," says Fourkas. "You could take perfect pictures every time. By the same token, these new photoresists are extremely fault-tolerant, allowing us to create the exact lithographic pattern we want time after time."

According to Fourkas, he and his team have more research to do before thinking about trying to commercialize their new RAPID technology. "Right now we're using the technique for point-by-point lithography. We need to get it to the stage where we can operate on an entire silicon wafer, which will require more advances in chemistry, materials and optics. If we can make these advances -- and we're working hard on it -- then we will think about commercialization."

Another factor in time to application, he explained, is that his team's approach is not an R&D direction that chip manufacturers had been looking at before now. As a result, commercial use of the RAPID approach is probably at least ten years down the road, he said.

University of Maryland

Cancer drug aids the regeneration of spinal cord injuries

After a spinal cord injury a number of factors impede the regeneration of nerve cells. Two of the most important of these factors are the destabilization of the cytoskeleton and the development of scar tissue. While the former prevents regrowth of cells, the latter creates a barrier for severed nerve cells. Scientists of the Max Planck Institute of Neurobiology in Martinsried and their colleagues from the Kennedy Krieger Institute and University of Miami in the United States, and the University of Utrecht in the Netherlands, have now shown that the cancer drug Taxol reduces both regeneration obstacles.

Paraplegia is often the long-lasting result when nerve fibers have been crushed or cut in the spinal cord. In contrast, for example, to the nerves in a cut finger, the injured nerve cells in the central nervous system (CNS) won't regrow. Scientists have been working for decades to discover the reasons for this discrepancy in the regeneration abilities of nerve cells. They have found a variety of factors that prevent the regeneration of CNS nerve cells. One by one, a number of substances that act like stop signs and halt the resumption of growth have been discovered. Other obstacles lie within the cells: the microtubules, small protein tubes that compose the cells' cytoskeleton, are completely jumbled in an injured CNS nerve cell, making structured growth impossible. In addition, the lost tissue is progressively replaced by scar tissue, creating a barrier for growing nerve cells.

Frank Bradke and his team at the Max Planck Institute of Neurobiology in Martinsried study the mechanisms inside CNS nerve cells responsible for stopping their growth: "We try to provoke the cells to ignore the stop signs so that they regrow." For this task, the neurobiologists have focused on studying the role of microtubules. These protein tubes have a parallel arrangement in the tip of growing nerve cells, stabilizing cells and actively pushing the cell end forward. This arrangement is lost in injured CNS cells. So how can the order of the microtubule be kept or regained in these cells? And once the cells start growing, how can they overcome the barrier of the scar tissue? Together with their colleagues from the United States and the Netherlands, the Max Planck scientists have now found a common solution for both problems.

Taxol, the trade name of a drug currently used for cancer treatment, has now been shown to promote regeneration of injured CNS-nerve cells. The scientists report in the online issue of the journal Science that Taxol promotes regeneration of injured CNS-nerve cells in two ways: Taxol stabilizes the microtubules so that their order is maintained and the injured nerve cells regain their ability to grow. In addition, Taxol prevents the production of an inhibitory substance in the scar tissue. The scar tissue, though reduced by Taxol, will still develop at the site of injury and can thus carry out its protective function. Yet growing nerve cells are now better able to cross this barrier. "This is literally a small breakthrough", says Bradke.

Experiments in rats performed by this group verified the effects of Taxol. After a partial spinal cord lesion, the researchers supplied Taxol to the injury site via a miniature pump. After just a few weeks, the animals showed a significant improvement in their movements. "So far we tested the effects of Taxol immediately after a lesion", explains Farida Hellal, the first author of the study. "The next step is to investigate whether Taxol is as effective when applied onto an existing scar several months after the injury."

The fact that a clinically approved drug shows these effects has a number of advantages. Much is already known about its interactions with the human body. In addition, Taxol can be applied directly at the site of injury for the treatment of spinal cord injuries, and the amount needed is far less than what is used in cancer therapy. This should reduce side effects. "We are still at the stage of basic research, a variety of obstacles remain, and eventually pre-clinical trials will need to be done", cautions Bradke. "However, I believe that we are on a very promising path."

(Photo: © Max Planck Institute of Neurobiology / Bradke & Hellal)

Max Planck Institute

NORMAL AIR COULD HALVE FUEL CONSUMPTION

Every time a car brakes, energy is generated. At present this energy is not used, but new research shows that it is perfectly possible to save it for later use in the form of compressed air. It can then provide extra power to the engine when the car is started and save fuel by avoiding idle operation when the car is at a standstill.

Air hybrids, or pneumatic hybrids as they are also known, are not yet in production. Electric cars and electric hybrid cars, by contrast, already make use of brake energy to power a generator that charges the batteries. However, according to Per Tunestål, a researcher in Combustion Engines at Lund University in Sweden, air hybrids would be much cheaper to manufacture. The step to commercialisation does not have to be a large one.

“The technology is fully realistic. I was recently contacted by a vehicle manufacturer in India which wanted to start making air hybrids”, he says.

The technology is particularly attractive for jerky and slow driving, for example for buses in urban traffic.

“My simulations show that buses in cities could reduce their fuel consumption by 60 per cent”, says Sasa Trajkovic, a doctoral student in Combustion Engines at Lund University who recently defended a thesis on the subject.

Sasa Trajkovic also calculated that 48 per cent of the brake energy could be saved as compressed air in a small tank connected to the engine and reused later. This means that the degree of reuse for air hybrids could match that of today’s electric hybrids. The engine does not require any expensive materials and is therefore cheap to manufacture. What is more, it takes up much less space than an electric hybrid engine. The method works with petrol, natural gas and diesel.
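To put the reuse figure in perspective, the back-of-the-envelope sketch below estimates how much energy one braking event could return to the engine at the reported 48 per cent degree of reuse. The bus mass and speed are assumed values for illustration, not figures from the thesis.

```python
# Back-of-the-envelope sketch (assumed numbers, not from the thesis): energy a
# city bus could recover from one stop if 48% of the braking energy is stored
# as compressed air and reused.

bus_mass_kg = 15000.0   # assumed mass of a loaded city bus
speed_kmh = 40.0        # assumed speed just before braking to a halt
reuse_fraction = 0.48   # degree of reuse reported in the thesis

speed_ms = speed_kmh / 3.6
kinetic_energy_kj = 0.5 * bus_mass_kg * speed_ms ** 2 / 1000.0
recovered_kj = reuse_fraction * kinetic_energy_kj

print(f"Kinetic energy at {speed_kmh:.0f} km/h:  {kinetic_energy_kj:.0f} kJ")
print(f"Recovered per stop at 48% reuse: {recovered_kj:.0f} kJ")
```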

For this research the Lund researchers have worked with the Swedish company Cargine, which supplies valve control systems.

The idea of air hybrids was initially hit upon by Ford in the 1990s, but the American car company quickly shelved the plans because it lacked the necessary technology to move forward with the project. Today, research on air hybrids is conducted at ETH in Switzerland, Orléans in France and Lund University in Sweden. One company that intends to invest in engines with air hybrid technology is the American firm Scuderi. However, their only results so far have been from simulations, not from experiments.

“This is the first time anyone has done experiments in an actual engine. The research so far has only been theoretical. In addition, we have used data that means we get credible driving cycle results, for example data from the driving patterns of buses in New York”, says Sasa Trajkovic.

The researchers in Lund hope that the next step will be to convert their research results from a single cylinder to a complete, multi-cylinder engine. They would thus be able to move the concept one step closer to a real vehicle.

(Photo: Lund U.)

Lund University

FIRST STARS IN UNIVERSE WERE NOT ALONE

The first stars in the universe were not as solitary as previously thought. In fact, they could have formed alongside numerous companions when the gas disks that surrounded them broke up during formation, giving birth to sibling stars in the fragments.

These are the findings of studies performed with the aid of computer simulations by researchers at Heidelberg University’s Centre for Astronomy together with colleagues at the Max Planck Institute for Astrophysics in Garching and the University of Texas at Austin (USA). The group’s findings, being published in “Science” magazine, cast an entirely new light on the formation of the first stars after the Big Bang.

Stars evolve from cosmic gas clouds in a fierce and complex battle between gravity and internal gas pressure. The density of the gas increases due to its own gravitational pull. This causes the gas to heat up; as a consequence, the pressure rises and the compression process comes to a halt. If the gas manages to get rid of the thermal energy, compression can continue and a new star is born. This cooling process works especially well if the gas contains chemical elements like carbon or oxygen. Stars forming in this way are normally low in mass, like our Sun. But in the early universe these elements had yet to emerge, so the primordial cosmic gas could not cool down very well. Accordingly, most theoretical models predict the masses of primordial stars to be about a hundred times greater than that of the Sun.
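The link between cooling and stellar mass can be made concrete with the standard Jeans-mass argument: the smallest clump of gas that can collapse under its own gravity grows with temperature as T^(3/2). The sketch below is a textbook-style estimate, not the group's simulation, and all densities and temperatures are illustrative assumptions.

```python
# Minimal sketch (textbook Jeans-mass scaling, not the group's simulation):
# warmer gas fragments into heavier clumps. Primordial gas that can only cool
# to a few hundred kelvin therefore forms far more massive objects than
# present-day clouds that cool to ~10 K. All numbers are illustrative.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23    # Boltzmann constant, J/K
M_H = 1.67e-27     # mass of a hydrogen atom, kg
M_SUN = 1.989e30   # solar mass, kg

def jeans_mass_solar(temperature_k, number_density_cm3, mu=1.2):
    """Approximate Jeans mass (in solar masses) for gas with mean molecular weight mu."""
    rho = mu * M_H * number_density_cm3 * 1e6   # mass density, kg/m^3
    cs2 = K_B * temperature_k / (mu * M_H)      # isothermal sound speed squared
    m_j = (5 * cs2 / G) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5
    return m_j / M_SUN

# Absolute values depend on the adopted density and prefactor convention;
# the temperature ratio is the point.
print(f"Cloud cooled to 10 K:        ~{jeans_mass_solar(10, 1e4):.0f} solar masses per clump")
print(f"Primordial gas at ~200 K:    ~{jeans_mass_solar(200, 1e4):.0f} solar masses per clump")
print(f"Ratio (density cancels out): ~{(200 / 10) ** 1.5:.0f}x")
```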

Heidelberg astrophysicist Dr. Paul Clark and his colleagues investigated these processes with the help of very high resolution computer simulations. Their findings indicate that this simple picture needs to be revised and that the early universe was not only populated by huge, solitary stars. The reason lies in the underlying physics of the so-called accretion disks that accompany the birth of the very first stars. The gas from which a new star forms rotates, so it cannot fall directly onto the star; instead it first builds up a disk-like structure. Only as a result of internal friction can the gas continue to flow onto the star. If more mass falls onto this disk than it can transport inwards, the disk becomes unstable and breaks into several fragments. So instead of forming just one star at the centre, a group of several stars is formed. The distances between some of the stars can be as small as that between the Earth and the Sun.

According to Dr. Clark, this realisation opens up exciting new avenues for detecting the first stars in the universe. In the final stages of their lives, binaries or multiple stellar systems can produce intense bursts of X-rays or gamma rays. Future space missions are being planned specifically to investigate such bursts from the early universe. It is also conceivable that some of the first stars may have been catapulted out of their birth group through collisions with their neighbours before they were able to accumulate a great deal of mass. Unlike short-lived high-mass stars, low-mass stars may survive for billions of years. “Intriguingly,” says Dr. Clark, “some low-mass primordial stars may even have survived to the present day, allowing us to probe the earliest stages of star and galaxy formation right in our own cosmic backyard.”

(Photo: Star-formation Research Group)

Heidelberg University

FUTURE SURGEONS MAY USE ROBOTIC NURSE, 'GESTURE RECOGNITION'

Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or tell a computer to display medical images of the patient during an operation.

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The "vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper, featured on the journal's cover, was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said. "You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended and unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said. "You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said. "In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply "anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.
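As a rough illustration of the anthropometric idea, the toy sketch below predicts a region of the image where the hands are likely to appear once the head has been located; a gesture classifier then only needs to search inside that region. This is a simplified illustration, not the Purdue system, and all of the proportions are assumptions.

```python
# Toy sketch (an illustration, not the Purdue system): use rough anthropometric
# proportions to predict where to look for the hands once the head is found.
# All proportions and example values are assumptions.

def predict_hand_search_region(head_x, head_y, head_height_px):
    """Return (x_min, y_min, x_max, y_max) of an image region likely to contain the hands.

    head_x, head_y: pixel coordinates of the head centre (y grows downward).
    head_height_px: apparent head height in pixels, used as the body-scale unit.
    """
    # Assumed proportions: a surgeon working at a table holds the hands roughly
    # 3 to 4.5 head-heights below the head and within about 2 head-heights to
    # either side of it.
    half_span = 2.0 * head_height_px
    top = head_y + 3.0 * head_height_px
    bottom = head_y + 4.5 * head_height_px
    return (head_x - half_span, top, head_x + half_span, bottom)

# Example: head detected at pixel (320, 80) with an apparent height of 60 px.
region = predict_hand_search_region(320, 80, 60)
print("Search for hand gestures inside:", region)
# Restricting recognition to this region cuts computation and reduces false
# detections from other people moving around the operating room.
```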

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic image medical record, and therefore saving browsing time between the images," Wachs said. "This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is found in new consumer electronics games that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said. "Otherwise, the system will work very slowly."

The research paper defines a set of requirements, including recommendations that the system should:

* Use a small vocabulary of simple, easily recognizable gestures.

* Not require the user to wear special virtual reality gloves or certain types of clothing.

* Be as low-cost as possible.

* Be responsive and able to keep up with the speed of a surgeon's hand gestures.

* Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."

* Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.

* Be highly accurate in recognizing hand gestures.

* Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.

* Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.

* Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said. "Much is already known about voice recognition."

(Photo: Purdue University/Mark Simons)

Purdue University
