Wednesday, July 14, 2010



A science historian at The University of Manchester has cracked “The Plato Code” – the long disputed secret messages hidden in the great philosopher’s writings.

Plato was the Einstein of Greece’s Golden Age and his work founded Western culture and science. Dr Jay Kennedy’s findings are set to revolutionise the history of the origins of Western thought.

Dr Kennedy, whose findings are published in the leading US journal Apeiron, reveals that Plato used a regular pattern of symbols, inherited from the ancient followers of Pythagoras, to give his books a musical structure. A century earlier, Pythagoras had declared that the planets and stars made an inaudible music, a ‘harmony of the spheres’. Plato imitated this hidden music in his books.

The hidden codes show that Plato anticipated the Scientific Revolution 2,000 years before Isaac Newton, discovering its most important idea – the book of nature is written in the language of mathematics. The decoded messages also open up a surprising way to unite science and religion. The awe and beauty we feel in nature, Plato says, shows that it is divine; discovering the scientific order of nature is getting closer to God. This could transform today’s culture wars between science and religion.

“Plato’s books played a major role in founding Western culture but they are mysterious and end in riddles,” explains Dr Kennedy, of Manchester’s Faculty of Life Sciences.

“In antiquity, many of his followers said the books contained hidden layers of meaning and secret codes, but this was rejected by modern scholars.

“It is a long and exciting story, but basically I cracked the code. I have shown rigorously that the books do contain codes and symbols and that unraveling them reveals the hidden philosophy of Plato.

“This is a true discovery, not simply reinterpretation.”

This will transform the early history of Western thought, and especially the histories of ancient science, mathematics, music, and philosophy.

Dr Kennedy spent five years studying Plato’s writing and found that in his best-known work the Republic he placed clusters of words related to music after each twelfth of the text – at one-twelfth, two-twelfths, etc. This regular pattern represented the twelve notes of a Greek musical scale. Some notes were harmonic, others dissonant. At the locations of the harmonic notes he described sounds associated with love or laughter, while the locations of dissonant notes were marked with screeching sounds or war or death. This musical code was key to cracking Plato’s entire symbolic system.
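
The pattern described here – clusters of musical vocabulary at each twelfth of the text – lends itself to a simple computational check. As a minimal sketch (the keyword list, window size, and sample text are illustrative assumptions, not Dr Kennedy's actual method):

```python
# Sketch: count music-related words in a small window around each
# twelfth of a text. Keywords and window fraction are hypothetical.
MUSIC_WORDS = {"harmony", "lyre", "song", "scale", "note"}

def clusters_at_twelfths(words, window_frac=0.01):
    """Count keyword hits near each k/12 position of a word list."""
    n = len(words)
    window = max(1, int(n * window_frac))
    counts = {}
    for k in range(1, 13):
        center = n * k // 12
        counts[k] = sum(
            1
            for w in words[max(0, center - window):min(n, center + window)]
            if w.lower() in MUSIC_WORDS
        )
    return counts

sample = ("the lyre and song bring harmony " * 120).split()
print(clusters_at_twelfths(sample))
```

A real analysis would also have to show that such clusters are denser at the twelfth-marks than elsewhere in the text, so that the pattern is statistically significant rather than coincidental.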

Dr Kennedy, a researcher in the Centre for the History of Science, Technology and Medicine, says: “As we read his books, our emotions follow the ups and downs of a musical scale. Plato plays his readers like musical instruments.”

However, Plato did not design his secret patterns purely for pleasure – it was for his own safety. Plato's ideas were a dangerous threat to Greek religion. He said that mathematical laws and not the gods controlled the universe. Plato's own teacher had been executed for heresy. Secrecy was normal in ancient times, especially for esoteric and religious knowledge, but for Plato it was a matter of life and death. Encoding his ideas in secret patterns was the only way to be safe.

Plato led a dramatic and fascinating life. Born four centuries before Christ, when Sparta defeated plague-ravaged Athens, he wrote 30 books and founded the world’s first university, called the Academy. He was a feminist, allowing women to study at the Academy, the first great defender of romantic love (as opposed to marriages arranged for political or financial reasons) and defended homosexuality in his books. In addition, he was captured by pirates and sold into slavery before being ransomed by friends.

Dr Kennedy explains: “Plato’s importance cannot be overstated. He shifted humanity from a warrior society to a wisdom society. Today our heroes are Einstein and Shakespeare – and not knights in shining armour – because of him.”

Over the years Dr Kennedy carefully peeled back layer after symbolic layer, sharing each step in lectures in Manchester and with experts in the UK and US.

He recalls: “There was no Rosetta Stone. To announce a result like this I needed rigorous, independent proofs based on crystal-clear evidence.

“The result was amazing – it was like opening a tomb and finding a new set of gospels written by Jesus Christ himself.

“Plato is smiling. He sent us a time capsule.”

Dr Kennedy’s findings are not only surprising and important; they overthrow conventional wisdom on Plato. Modern historians have always denied that there were codes; now Dr Kennedy has proved otherwise.

He adds: “This is the beginning of something big. It will take a generation to work out the implications. All 2,000 pages contain undetected symbols.”

(Photo: U. Manchester)

University of Manchester



In his 2002 book Lost Languages, Andrew Robinson, then the literary editor of the London Times’ higher-education supplement, declared that “successful archaeological decipherment has turned out to require a synthesis of logic and intuition … that computers do not (and presumably cannot) possess.”

Regina Barzilay, an associate professor in MIT’s Computer Science and Artificial Intelligence Lab, Ben Snyder, a grad student in her lab, and the University of Southern California’s Kevin Knight took that claim personally. At the Annual Meeting of the Association for Computational Linguistics in Sweden next month, they will present a paper on a new computer system that, in a matter of hours, deciphered much of the ancient Semitic language Ugaritic. In addition to helping archaeologists decipher the eight or so ancient languages that have so far resisted their efforts, the work could also help expand the number of languages that automated translation systems like Google Translate can handle.

To duplicate the “intuition” that Robinson believed would elude computers, the researchers’ software makes several assumptions. The first is that the language being deciphered is closely related to some other language: In the case of Ugaritic, the researchers chose Hebrew. The next is that there’s a systematic way to map the alphabet of one language on to the alphabet of the other, and that correlated symbols will occur with similar frequencies in the two languages.

The system makes a similar assumption at the level of the word: The languages should have at least some cognates, or words with shared roots, like main and mano in French and Spanish, or homme and hombre. And finally, the system assumes a similar mapping for parts of words. A word like “overloading,” for instance, has both a prefix — “over” — and a suffix — “ing.” The system would anticipate that other words in the language will feature the prefix “over” or the suffix “ing” or both, and that a cognate of “overloading” in another language — say, “surchargeant” in French — would have a similar three-part structure.

The system plays these different levels of correspondence off of each other. It might begin, for instance, with a few competing hypotheses for alphabetical mappings, based entirely on symbol frequency — mapping symbols that occur frequently in one language onto those that occur frequently in the other. Using a type of probabilistic modeling common in artificial-intelligence research, it would then determine which of those mappings seems to have identified a set of consistent suffixes and prefixes. On that basis, it could look for correspondences at the level of the word, and those, in turn, could help it refine its alphabetical mapping. “We iterate through the data hundreds of times, thousands of times,” says Snyder, “and each time, our guesses have higher probability, because we’re actually coming closer to a solution where we get more consistency.” Finally, the system arrives at a point where altering its mappings no longer improves consistency.
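
The frequency-matching starting point described above can be illustrated in a few lines. This toy sketch stops at the very first step – ranking symbols by frequency and pairing them across languages – and omits the probabilistic models of affixes and cognates that do the real work in the researchers' system:

```python
from collections import Counter

def frequency_mapping(unknown_text, known_text):
    """Pair each symbol of an unknown script with the letter of the
    related language that has the same frequency rank."""
    unk = [s for s, _ in Counter(unknown_text.replace(" ", "")).most_common()]
    kno = [s for s, _ in Counter(known_text.replace(" ", "")).most_common()]
    return dict(zip(unk, kno))

# Contrived example: '1' is the most frequent unknown symbol and 'a'
# the most frequent known letter, so they are paired, and so on.
print(frequency_mapping("121 212 112", "aba bab aab"))  # → {'1': 'a', '2': 'b'}
```

In the actual system, many competing mappings like this one are scored and refined over thousands of iterations until the prefixes, suffixes, and cognates they imply stop becoming more consistent.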

Ugaritic has already been deciphered: Otherwise, the researchers would have had no way to gauge their system’s performance. The Ugaritic alphabet has 30 letters, and the system correctly mapped 29 of them to their Hebrew counterparts. Roughly one-third of the words in Ugaritic have Hebrew cognates, and of those, the system correctly identified 60 percent. “Of those that are incorrect, often they’re incorrect only by a single letter, so they’re often very good guesses,” Snyder says.

Furthermore, he points out, the system doesn’t currently use any contextual information to resolve ambiguities. For instance, the Ugaritic words for “house” and “daughter” are spelled the same way, but their Hebrew counterparts are not. While the system might occasionally get them mixed up, a human decipherer could easily tell from context which was intended.

Nonetheless, Andrew Robinson remains skeptical. “If the authors believe that their approach will eventually lead to the computerised ‘automatic’ decipherment of currently undeciphered scripts,” he writes in an e-mail, “then I am afraid I am not at all persuaded by their paper.” The researchers’ approach, he says, presupposes that the language to be deciphered has an alphabet that can be mapped onto the alphabet of a known language — “which is almost certainly not the case with any of the important remaining undeciphered scripts,” Robinson writes. It also assumes, he argues, that it’s clear where one character or word ends and another begins, which is not the case with many deciphered and undeciphered scripts.

“Each language has its own challenges,” Barzilay agrees. “Most likely, a successful decipherment would require one to adjust the method for the peculiarities of a language.” But, she points out, the decipherment of Ugaritic took years and relied on some happy coincidences — such as the discovery of an axe that had the word “axe” written on it in Ugaritic. “The output of our system would have made the process orders of magnitude shorter,” she says.

Indeed, Snyder and Barzilay don’t suppose that a system like the one they designed with Knight would ever replace human decipherers. “But it is a powerful tool that can aid the human decipherment process,” Barzilay says. Moreover, a variation of it could also help expand the versatility of translation software. Many online translators rely on the analysis of parallel texts to determine word correspondences: They might, for instance, go through the collected works of Voltaire, Balzac, Proust and a host of other writers, in both English and French, looking for consistent mappings between words. “That’s the way statistical translation systems have worked for the last 25 years,” Knight says.

But not all languages have such exhaustively translated literatures: At present, Snyder points out, Google Translate works for only 57 languages. The techniques used in the decipherment system could be adapted to help build lexicons for thousands of other languages. “The technology is very similar,” says Knight, who works on machine translation. “They feed off each other.”

(Photo: MIT)




Since the first exoplanet — a planet outside our solar system — was discovered in 1995, more than 460 others have been found. While astronomers have been able to measure the size, orbital characteristics, and even some of the molecules that make up the atmospheres of some exoplanets, many mysteries about their formation and evolution remain.

A team of astronomers, including a researcher from MIT’s Kavli Institute for Astrophysics and Space Research, has become the first to measure wind in the atmosphere of an exoplanet. By detecting heavy winds on HD209458b, a huge exoplanet located 150 light years away that is slightly more than half the mass of Jupiter, the researchers could then measure the movement of the planet as it orbited its host star — another first for exoplanetary research.

The work, which is detailed in a paper published June 24 in Nature, will guide future research on exoplanets, since understanding the properties of a planet’s atmosphere is a critical first step for characterizing how that planet formed and evolved.

Measuring the planet’s orbital movement is also important because the velocity of that movement can be used with Newton’s law of universal gravitation to get a more precise estimate of the mass of both the planet and its parent star. Before now, astronomers had to rely on complex mathematical models, as well as the changes in light that occurred when an exoplanet’s host star wobbled in response to the exoplanet’s gravitational pull, to determine the exoplanet’s mass. Thanks to a new technique that the researchers used to study HD209458b, astronomers should now be able to refine their estimates of the mass of some exoplanets and their stars.
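
The underlying arithmetic is that of a two-body orbit: star and planet circle their common centre of mass, so their masses are in inverse proportion to their orbital speeds, and Kepler's third law (a consequence of Newton's law of gravitation) fixes the total mass from the period and the speeds. A rough sketch, assuming circular edge-on orbits and illustrative input values (not the paper's measured numbers):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def masses_from_velocities(v_star, v_planet, period):
    """Return (star, planet) masses in kg from the two orbital speeds
    (m/s) and the orbital period (s), for circular edge-on orbits."""
    # Kepler III: G * M_total = (2*pi/P)^2 * a^3, with a = (v_s + v_p) * P / (2*pi)
    total = (v_star + v_planet) ** 3 * period / (2 * math.pi * G)
    # Momentum balance about the centre of mass: M_s * v_s = M_p * v_p
    m_planet = total * v_star / (v_star + v_planet)
    return total - m_planet, m_planet

# Illustrative hot-Jupiter numbers: ~85 m/s stellar wobble,
# ~140 km/s planetary orbital speed, 3.5-day period.
m_star, m_planet = masses_from_velocities(85.0, 1.4e5, 3.5 * 86400)
print(m_star / 1.989e30, "solar masses;", m_planet / 1.898e27, "Jupiter masses")
```

With these inputs the sketch recovers roughly one solar mass for the star and a bit over half a Jupiter mass for the planet, which is why measuring both speeds directly sharpens the mass estimates.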

One way that astronomers can learn a lot about an exoplanet is by observing it as it passes in front of its host star as seen from Earth. By measuring the light obscured by an exoplanet during this event, which is known as a transit, astronomers can learn details about the planet, such as its size and what kinds of molecules exist in its atmosphere. Of the 463 exoplanets discovered to date, more than 80 are known to be transiting planets. (HD209458b, identified in 1999, was the first transiting exoplanet discovered.)

Researchers detected the heavy winds in HD209458b’s atmosphere by studying carbon monoxide. According to co-author and Kavli postdoc Simon Albrecht, who collaborated with researchers from Leiden University and the Netherlands Institute for Space Research (SRON), the results are “among the many small steps the astronomy community is taking toward being able to, at some point, measure atmospheric conditions on exoplanets that are twins to our Earth.”

What makes the work, partly funded by the Netherlands Organisation for Scientific Research, “potentially groundbreaking” is the ground-based technique that was used to detect the winds and orbital movement of HD209458b, according to Adam Showman, a planetary scientist at the University of Arizona. “Just the fact this is even doable from the ground is spectacular,” he said.

Instead of using a space-based instrument like NASA’s Spitzer Space Telescope to study the faraway planet, the researchers used a ground-based, high-resolution spectrograph at the European Southern Observatory in Chile that can detect subtle changes in the wavelength of light when a planet transits its star. As HD209458b transited last August, its parent star left what lead author Ignas Snellen from the Leiden Observatory in the Netherlands described as “a fingerprint” of light that filtered through the planet’s atmosphere. The researchers then used the spectrograph to analyze that imprint of light to detect carbon monoxide molecules in the atmosphere. “It seems that HD209458b is actually as carbon-rich as Jupiter and Saturn, and this could indicate that it was formed in the same way,” Snellen said.

The researchers then spent several months analyzing spectrographic measurements of the movement of the carbon monoxide thanks to the Doppler shift, a phenomenon that creates subtle color changes in wavelengths of light when something moves. When an object moves toward us, it looks slightly bluer, and when it moves away, it looks slightly redder. The spectrograph revealed color shifts in the light absorbed by the exoplanet, which indicated that something was moving the gas. That something, the researchers believe, is heavy wind that is blowing carbon monoxide in the planet’s atmosphere up to 10,000 kilometers per hour (the fastest winds ever detected on another planet in our solar system were blowing at up to 2,000 kilometers per hour on Neptune, according to previous research). By tracking the movement of the carbon monoxide, the astronomers could then measure the movement of the planet as it orbited its host star.
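
The Doppler relation behind these measurements is simple at speeds far below that of light: the fractional wavelength shift equals the line-of-sight speed divided by the speed of light. A short sketch (the 2300 nm carbon-monoxide wavelength is an illustrative round number):

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(lambda_rest, lambda_obs):
    """Non-relativistic Doppler: positive means receding (redshifted)."""
    return C * (lambda_obs - lambda_rest) / lambda_rest

# A 10,000 km/h wind (about 2.8 km/s) shifts a line near 2300 nm by
# only about 0.02 nm, hence the need for a high-resolution spectrograph.
v_wind = 10_000 / 3.6          # km/h -> m/s
delta = 2300e-9 * v_wind / C   # wavelength shift, metres
print(f"shift = {delta * 1e9:.4f} nm")
print(f"recovered speed = {radial_velocity(2300e-9, 2300e-9 + delta):.0f} m/s")
```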

While the results are notable, future research must address what might be causing the heavy winds, said Showman. Right now, the spectrograph simply does not have enough spectral resolution to distinguish that level of detail.

As the team continues to refine the ground-based technique used in this research, Albrecht said that he and his colleagues must do “a better job” of analyzing exoplanetary atmospheres for molecules that have fainter spectral signals than carbon monoxide, such as water. Their next step is to measure the atmospheres of exoplanets that are located slightly farther away from their host stars to see how this distance affects detectable concentrations of carbon monoxide and other molecules.

(Photo: ESO/L. Calçada)




Science fiction has nothing over quantum physics when it comes to presenting us with a labyrinthine world that can twist your mind into knots when you try to make sense of it.

A team of Arizona State University researchers, however, believe they’ve opened a door to a clearer view of how the common, everyday world we experience through our senses emerges from the ethereal quantum world.

Physicists call our familiar everyday environment the classical world. That’s the world in which we and the things around us appear to have measurable characteristics such as mass, height, color, weight, texture and shape.

The quantum world is the world of the elemental building blocks of matter – atoms. An atom is a combination of protons and neutrons forming a nucleus, with electrons bound to it by electrical attraction.

But most of an atom – more than 99 percent of it – is empty space filled with invisible energy.

So from a quantum-world view, we and the things around us are mostly empty space. The way we experience ourselves and other things in the classical world is really just “a figment of our imaginations shaped by our senses,” said ASU Regents’ Professor David Ferry.

For more than a century, scientists and engineers have struggled to come to a satisfactory conclusion about the missing link that bridges the classical and quantum worlds and enables a transition from that world of mostly empty space to the familiar environment we experience through our senses.

One proposed scenario based on these questions was investigated in a dissertation written by Adam Burke to earn his doctorate in electrical engineering in 2009 from ASU’s Ira A. Fulton Schools of Engineering.

To try working out an answer to some of the questions, Burke teamed with Ferry, a professor in the School of Electrical, Computer and Energy Engineering; Tim Day, who recently earned his doctorate in electrical engineering from the school; physicist Richard Akis, an associate research professor in the school; Gil Speyer, an assistant research scientist for the engineering schools’ High Performance Computing Initiative; and Brian Bennett, a materials scientist with the Naval Research Laboratory.

The result is an article published recently in the research journal Physical Review Letters and featured on a science, technology and research news website. It describes the transition from quantum to classical world as a “decoherence” process that involves a kind of evolutionary progression somewhat analogous to Charles Darwin’s concept of natural selection.

The authors built on two theories called decoherence and quantum Darwinism, both proposed by Los Alamos National Laboratory researcher Wojciech Zurek.

The decoherence concept holds that many quantum states “collapse” into a “broad diaspora,” or dispersion, while interacting with the environment. Through a selection process, other quantum states arrive at a final stable state, called a pointer state, which is “fit enough” (think “survival of the fittest” in Darwinian terms) to be transmitted through the environment without collapsing.

These single states with the lowest energy can then make high-energy copies of themselves that can be described by the Darwinian process and observed on the macroscopic scale in the classical world.

The experiments arose from using advanced scanning gate microscopy to obtain images of what are called quantum dots.

Burke, now doing research in a post-doctoral program at the University of New South Wales in Sydney, Australia, explains it like this:

Imagine the quantum dot as a billiard table in which the quantum point contacts are the two openings through which a ball could enter or leave the dot, and the interior walls of the dot act as bumpers.

If there were no friction on the table, a billiard ball with an initial trajectory would bounce off of these walls until eventually finding an exit and leaving the dot (this is the decoherence part).

Or it might find a trajectory that does not couple to the openings and would therefore be a surviving pointer state, what is called a diamond state.

One difference between the classical physics of billiard balls and the quantum physics of electrons is that an electron can tunnel through “forbidden phase space” to enter this diamond state, whereas a billiard ball entering from outside the dot would not find itself able to reach this diamond trajectory.

It is this isolated classical trajectory, and the buildup of an electron wave function’s amplitude along that trajectory, that is referred to as a scarred wave function.

To experimentally measure these scars, imagine that we can’t see inside the walls of our billiard table, but we can count the billiard balls exiting the table. This is what is normally measured with the conductance of the quantum dot and its environment.

“We measure the current through the dot, the numbers of ‘billiard balls’ passing through it per second, to try to see how this changes when we move our probe around the ‘billiard table,’ ” Ferry said.

Furthermore, there is the probe of the scanning gate microscope, which applies a small electric field. This can be pictured as a small circular bumper on the billiard table that can be moved around within the dot.

This small "bumper" is rastered left to right, top to bottom over the area of interest. If a ball is traveling along this diamond pattern it is perturbed by the bumper when it rasters into the trajectory.

Think of rastering like the way a television image works, with a pattern of scanning lines that cover the area on which the image is projected, or a set of horizontal lines composed of individual pixels that are used to form an image on a computer screen.

When this happens, the ball bounces off the perturbation and takes a new course within the dot until finally coupling out through one of the openings to be measured. The change in the ball’s motion appears as a change in the conductance – the number of balls going through the openings in a given time.

Ferry explained: “With scanning gate microscopy, we monitor where these changes occur within the scans, and hopefully this gives us a map of the scarred wave functions corresponding to the pointer states.”

Quantum mechanically, he said, a new electron will tunnel right into the diamond state, so the measurement can continue until the whole area is mapped.

The data that came from the team’s experiment supports Zurek's theories of decoherence and quantum Darwinism, Burke said.

Ferry said these findings are just one step in a process that is open to conjecture, but they point toward a “smoking gun” for the existence of this quantum Darwinism and a new view in the search for evidence of how the quantum-to-classical world transition actually occurs.

If you can wrap your mind around all this, he said, “You open the door to a deeper understanding of what is really going on” at the core of physical reality.

(Photo: ASU)

Arizona State University


The concept of pregnancy makes no sense—at least not from an immunological point of view. After all, a fetus, carrying half of its father's genome, is biologically distinct from its mother. The fetus is thus made of cells and tissues that are very much not "self"—and not-self is precisely what the immune system is meant to search out and destroy.

Women's bodies manage to ignore this contradiction in the vast majority of cases, making pregnancy possible. Similarly, scientists have generally paid little attention to this phenomenon—called "pregnancy tolerance"—and its biological details.

Now, a pair of scientists from the California Institute of Technology (Caltech) have shown that females actively produce a particular type of immune cell in response to specific fetal antigens—immune-stimulating proteins—and that this response allows pregnancy to continue without the fetus being rejected by the mother's body.

Their findings were detailed in a recent issue of the Proceedings of the National Academy of Sciences (PNAS).

“Our finding that specific T regulatory cells protect the mother is a step to learning how the mother avoids rejection of her fetus. This central biological mechanism is important for the health of both the fetus and the mother,” says David Baltimore, Caltech's Robert Andrews Millikan Professor of Biology, recipient of the 1975 Nobel Prize in Physiology or Medicine, and the principal investigator on the research.

Scientists had long been "hinting around at the idea that the mother's immune system makes tolerance possible," notes paper coauthor Daniel Kahn, a visiting associate in biology at Caltech, and an assistant professor of maternal–fetal medicine at the University of California, Los Angeles (UCLA). What they didn't have were the details of this tolerance—or proof that it was immune-related.

Now they do. To pin down those details, the two scientists began looking at the immune system's T regulatory cells (Tregs) in a strain of inbred mice that are all genetically identical—except for one seemingly tiny detail. Male mice—including male fetuses—carry on their cells' surfaces a protein known as a "minor transplantation antigen." Female mice lack this antigen.

Under normal circumstances, this antigen's existence isn’t a problem for the male fetuses because the pregnancy tolerance phenomenon kicks in and protects them from any maternal immune repercussions.

To demonstrate the role of Tregs, Baltimore and Kahn used a drug to selectively target and destroy the cells. If the Tregs were indeed the source of pregnancy tolerance, they reasoned, their destruction would give the immune system free rein to go after the antigen-laden fetuses.

"In this case," says Kahn, "we knew the only possible immune response would be against the males—that the males would be at risk."

Indeed they were. When Baltimore and Kahn looked at the offspring of mice who'd been treated with the drug, they found that fewer of the male fetuses survived to birth; those males that did survive were of significantly lower birthweight, presumably because of the inflammation caused by the mother's immune response to that single antigen.

"These T cells are functioning in an antigen-specific manner," Kahn notes. "In other words, their function requires the presence of the specific fetal antigens."

In their studies of these animals, the scientists also found that pregnancy tolerance "develops actively as a consequence of pregnancy," says Kahn. "The mice are not born with it." Indeed, virgin mice showed no signs of these pregnancy-specific Treg cells. Conversely, the cells were found in larger numbers in those individual mice that had given birth to more male babies, with the level of Treg cells increasing with the number of male births.

The next step, Kahn adds, is to look at Tregs and their role in pregnancy tolerance in humans—a line of research that may lead to new insights into such pregnancy-related conditions as preeclampsia, in which high blood pressure and other symptoms develop in the second half of pregnancy. Preeclampsia is a major cause of maternal mortality around the world.

"There's a lot to be learned," he says. "Pregnancy is often ignored in research because it's usually successful, and because—from an immunologic standpoint—it has such complexity. Until now, it's been difficult to grab a handle on how the immunology of pregnancy really works."

The work described in the PNAS article, "Pregnancy induces a fetal antigen-specific maternal T regulatory cell response that contributes to tolerance," was supported in part by a research grant from the Skirball Foundation. Kahn is supported by the National Institutes of Health's Building Interdisciplinary Research Careers in Women's Health Center at UCLA.

California Institute of Technology (Caltech)



Psychologists report in the journal Science that interpersonal interactions can be shaped, profoundly yet unconsciously, by the physical attributes of incidental objects: Resumes reviewed on a heavy clipboard are judged to be more substantive, while a negotiator seated in a soft chair is less likely to drive a hard bargain.

The research was conducted by psychologists at Harvard University, the Massachusetts Institute of Technology, and Yale University. The authors say the work suggests that physical touch — the first of our senses to develop — may continue to operate throughout life like a scaffold upon which people build their social judgments and decisions.

“Touch remains perhaps the most underappreciated sense in behavioral research,” said co-author Christopher C. Nocera, a graduate student in Harvard’s Department of Psychology. “Our work suggests that greetings involving touch, such as handshakes and cheek kisses, may in fact have critical influences on our social interactions, in an unconscious fashion.”

Nocera conducted the research with Joshua M. Ackerman, assistant professor of marketing at MIT’s Sloan School of Management, and John A. Bargh, professor of psychology at Yale.

“First impressions are liable to be influenced by the tactile environment, and control over this environment may be especially important for negotiators, pollsters, job seekers, and others interested in interpersonal communication,” the authors wrote in the latest issue of Science. “The use of ‘tactile tactics’ may represent a new frontier in social influence and communication.”

The researchers conducted a series of experiments probing how objects’ weight, texture, and hardness can unconsciously influence judgments about unrelated events and situations:

— To test the effects of weight, metaphorically associated with seriousness and importance, subjects used either light or heavy clipboards while evaluating resumes. They judged candidates whose resumes were seen on a heavy clipboard as better qualified and more serious about the position, and rated their own accuracy at the task as more important.

— An experiment testing texture’s effects had participants arrange rough or smooth puzzle pieces before hearing a story about a social interaction. Those who worked with the rough puzzle were likelier to describe the interaction in the story as uncoordinated and harsh.

— In a test of hardness, subjects handled either a soft blanket or a hard wooden block before being told an ambiguous story about a workplace interaction between a supervisor and an employee. Those who touched the block judged the employee as more rigid and strict.

— A second hardness experiment showed that even passive touch can shape interactions. Subjects seated in hard or soft chairs engaged in mock haggling over the price of a new car. Subjects in hard chairs were less flexible, showing less movement between successive offers. They also judged their adversaries in the negotiations as more stable and less emotional.

Nocera and his colleagues say these experiments suggest that information acquired through touch exerts broad, if generally imperceptible, influence over cognition. They propose that encounters with objects can elicit a “haptic mindset,” triggering application of associated concepts even to unrelated people and situations.

“People often assume that exploration of new things occurs primarily through the eyes,” Nocera said. “While the informative power of vision is irrefutable, this is not the whole story. For example, the typical reaction to an unknown object is usually as follows: With an outstretched arm and an open hand, we ask, ‘Can I see that?’ This response suggests the investigation is not limited to vision, but rather the integrative sum of seeing, feeling, touching, and manipulating the unfamiliar object.”

Nocera said that because touch appears to be the first sense people use to experience the world - for example, an infant equating a mother's warm, gentle touch with comfort and safety - it may provide part of the basis from which metaphorical abstraction builds a more complex understanding of those concepts. This physical-to-mental abstraction is reflected in metaphors and shared linguistic descriptors, such as the multiple meanings of words like “hard,” “rough,” and “heavy.”

(Photo: Kris Snibbe/Harvard Staff Photographer)

Harvard University



High-intensity ultrasound waves traveling through liquid leave bubbles in their wake. Under the right conditions, these bubbles implode spectacularly, emitting light and reaching very high temperatures, a phenomenon called sonoluminescence. Researchers have observed imploding bubble conditions so hot that the gas inside the bubbles ionizes into plasma, but quantifying the temperature and pressure properties has been elusive.

In a paper published in the June 27 issue of Nature Physics, University of Illinois chemistry professor Kenneth S. Suslick and former student David Flannigan, now at the California Institute of Technology, experimentally determine the plasma electron density, temperature and extent of ionization.

Suslick and Flannigan first observed super-bright sonoluminescence in 2005 by sending ultrasound waves through sulfuric acid solutions to create bubbles.

“The energies of the populated atomic levels suggested a plasma, but at that time there was no estimate of the density of the plasma, a crucial parameter to understanding the conditions created at the core of the collapsing bubble,” said Suslick, the Marvin T. Schmidt Professor of Chemistry and a professor of materials science and engineering.

The new report uses the same setup, but now with a detailed analysis of the shape of the observed spectrum, which provides information on the conditions of the region around the atoms inside the bubble as it collapses.

“The temperature can be several times that of the surface of the sun and the pressure greater than that at the bottom of the deepest ocean trench,” Suslick said.

“What’s more, we were able to determine how these properties are affected by the ferocity with which the bubble collapses, and we found that the plasma conditions generated may indeed be extreme.”

The duo observed temperatures greater than 16,000 kelvins – three times the temperature on the surface of the sun. They also measured electron densities during bubble collapse similar to those generated by laser fusion experiments. However, Suslick emphasized that his group has not observed evidence that fusion takes place during sonoluminescence, as some have theorized possible.
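The "three times the sun" comparison is easy to check against the figures given above. A quick sketch, taking roughly 5,800 K as the Sun's photospheric temperature (an approximate value, not from the paper):

```python
# Sanity check on the reported comparison: a 16,000 K plasma
# versus the Sun's ~5,800 K photosphere.
SUN_SURFACE_K = 5800   # approximate photospheric temperature
bubble_K = 16000       # temperature reported by Suslick and Flannigan

ratio = bubble_K / SUN_SURFACE_K
print(f"{ratio:.2f}x the Sun's surface temperature")  # ~2.76x, i.e. close to three times
```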

In addition, the researchers found that plasma properties show a strong dependence on the violence of bubble implosion, and that the degree of ionization, or how much of the gas is converted to plasma, increases as the acoustic pressure increases.

“It is evident from these results that the upper bounds of the conditions generated during bubble implosion have yet to be established,” Suslick said. “The observable physical conditions suggest the limits of energy focusing during the bubble-forming and imploding process may approach conditions achievable only by much more expensive means.”

(Photo: Hangxun Xu and Ken Suslick)

University of Illinois



Fast-food restaurants can supersize French fries and drinks, but Mother Nature has found a way to supersize a type of apple.

Peter Hirst, a Purdue University associate professor of horticulture, found that an anomaly in some Gala apple trees causes some apples to grow much larger than others because cells aren't splitting. The findings, reported in the current issue of the Journal of Experimental Botany, showed that the new variety, called Grand Gala, is about 38 percent heavier and has a diameter 15 percent larger than regular Galas.

"It's never been found in apples before," Hirst said. "This is an oddball phenomenon in the apple world."

Hirst is trying to understand what causes the difference in the size of apples - for instance, why Gala apples are so much larger than crabapples.

"There is real incentive for fruit growers to increase the size of their apples," Hirst said. "At 125 apples per bushel, a grower gets 8 cents per apple. But if they have larger apples - 88 per bushel - the price more than doubles."
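The economics in the quote are simple to work through. A minimal sketch, using the figures Hirst gives and reading "the price" as the per-apple price; the $0.17 figure for large apples is an assumption (the quote says only that it "more than doubles" from $0.08):

```python
def bushel_revenue(apples_per_bushel: int, price_per_apple: float) -> float:
    """Gross revenue for one bushel at a given per-apple price."""
    return apples_per_bushel * price_per_apple

# Figures from the quote: 125 small apples at $0.08 each...
small = bushel_revenue(125, 0.08)   # $10.00 per bushel
# ...versus 88 large apples at a per-apple price that has
# "more than doubled" (assumed here to be $0.17).
large = bushel_revenue(88, 0.17)    # about $14.96 per bushel

print(f"small apples: ${small:.2f}/bushel, large apples: ${large:.2f}/bushel")
```

Even with 30 percent fewer apples in the bushel, the grower comes out well ahead, which is the incentive Hirst describes.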

Since different apple varieties don't always have the same genes controlling the same functions, comparing Galas to crabapples isn't an easy way to understand the mechanisms that control their eventual sizes. But the Grand Gala seemed like it might provide an opportunity to unravel the mystery.

"The way the Grand Gala was found was that someone in an orchard full of Gala trees noticed that one branch had different-sized apples than the rest of the tree. They grafted from that branch to start new trees," Hirst said. "These are just chance events."

Larger apples tend to have more cells than their smaller counterparts, so Hirst theorized that there was a gene or genes that kept cell division turned on in Grand Gala. Instead, he found that Grand Gala had about the same number of cells as a regular Gala, but those cells were larger.

Normally, cells make a copy of their DNA, grow and then split. Each of those cells continues the process. Through a phenomenon called endoreduplication, the cells in Grand Gala make copies of their DNA, but don't divide. Instead, the cells grow, add more copies of the DNA and continue that growth.
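The contrast between the two growth modes can be sketched as a toy bookkeeping model (illustrative only, not from the study): normal division multiplies cells while keeping DNA content per cell constant, while endoreduplication keeps one cell and multiplies its genome copies.

```python
def normal_division(cycles: int) -> tuple[int, int]:
    """Each cycle: DNA is copied, then the cell splits.
    Returns (cell_count, genome_copies_per_cell)."""
    return 2 ** cycles, 1

def endoreduplication(cycles: int) -> tuple[int, int]:
    """Each cycle: DNA is copied but the cell does NOT split,
    so one cell accumulates genome copies and grows larger.
    Returns (cell_count, genome_copies_per_cell)."""
    return 1, 2 ** cycles

print(normal_division(4))     # (16, 1)  -> many cells, normal DNA content
print(endoreduplication(4))   # (1, 16)  -> one large, DNA-rich cell
```

This mirrors Hirst's observation: the Grand Gala has about the same number of cells as a regular Gala, but each cell is larger.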

The Grand Gala fruit has the same core size, so the added size and weight are in the meat, or cortex, of the fruit. Hirst said they're also crunchier and tend to taste better.

Hirst's study found that one or more of a handful of genes is likely responsible for the endoreduplication. And while it may be possible to isolate those genes and find ways to increase the size of other apples, Hirst said it's unlikely.

"You won't see Grand Galas in the grocery store," Hirst said. "Consumers like shiny, perfect-looking apples. Grand Galas are slightly lopsided. They're good eating apples, but the end product isn't something that consumers are used to seeing at the store."

He said the apples are likely to gain more of a following at apple orchards where they're grown.

Hirst will continue studying what causes different apples to be different sizes, but he won't pursue endoreduplication as an answer, he said. Purdue University funded his research.

(Photo: Purdue University/Peter Hirst)

Purdue University


In the initial stages of sleep, energy levels increase dramatically in brain regions found to be active during waking hours, according to new research in the June 30 issue of The Journal of Neuroscience. These results suggest that a surge of cellular energy may replenish brain processes needed to function normally while awake.

A good night’s rest has clear restorative benefits, but evidence of the actual biological processes that occur during sleep has been elusive. Radhika Basheer, PhD, and Robert McCarley, MD, of Boston V.A. Healthcare System and Harvard Medical School, proposed that brain energy levels are key to nightly restoration.

“Our finding bears on one of the perennial conundrums in biology: the function of sleep,” Basheer said. “Somewhat surprisingly, there have been no modern-era studies of brain energy using the most sensitive measurements.”

The authors measured levels of adenosine triphosphate (ATP), the energy currency of cells, in rats. They found that ATP levels in four key brain regions normally active during wakefulness surged when the rats were in non-REM sleep, even as overall brain activity decreased. When the animals were awake, ATP levels were steady. When the rats were gently nudged to stay awake three or six hours past their normal sleep times, there was no increase in ATP.

The authors conclude that sleep is necessary for this ATP energy surge, as keeping the rats awake prevented the surge. The energy increase may then power restorative processes absent during wakefulness, because brain cells consume large amounts of energy just performing daily waking functions.

“This research provides intriguing evidence that a sleep-dependent energy surge is needed to facilitate the restorative biosynthetic processes,” said Robert Greene, MD, PhD, of the University of Texas Southwestern, a sleep expert who was unaffiliated with the study. He observed that questions arise from the findings, such as the specific cause of the ATP surge. “The authors propose that the surge is related to decreases in brain cell activity during sleep, but it may be due to many other factors as well, including cellular signaling in the brain,” he said.

Society for Neuroscience




Selected Science News. Copyright 2008 All Rights Reserved Revolution Two Church theme by Brian Gardner Converted into Blogger Template by Bloganol dot com