Thursday, January 7, 2010
Led by Michael Kozicki, an ASU electrical engineering professor and director of the Center for Applied Nanoionics, the researchers have shown that they can build stackable memory based on “ionic memory technology,” making it an ideal candidate for storage cells in high-density memory. Best of all, the new method uses well-known electronics materials.
“This opens the door to inexpensive, high-density data storage by ‘stacking’ memory layers on top of one another inside a single chip,” Kozicki said. “This could lead to hard drive data storage capacity on a chip, which enables portable systems that are smaller, more rugged and able to go longer between battery charges.”
“This is a significant improvement on the technology we developed two years ago where we made a new type of memory that could replace Flash, using materials common to the semiconductor industry (copper-doped silicon dioxide). What we have done now is add some critical functionality to the memory cell merely by involving another common material – silicon.”
Kozicki outlined the new memory device in a technical presentation he made in November at the 2009 International Electron Devices and Materials Symposia in Taiwan. He worked with Sarath C. Puthen Thermadam, an ASU electrical engineering graduate student.
Kozicki said that, given current technology, electronics researchers are fast reaching the physical limits of device memory. This fact has spurred research into new types of memory that can store more information in less and less physical space. One way of doing this is to stack memory cells.
The concept of stackable memory is akin to one’s ability to store boxes in a small room. You can store more boxes (each representing a memory cell) if you stack them and take advantage of three dimensions of the room, rather than only putting each box on the floor.
Kozicki said stacking memory cells has not been achieved before because the cells could not be isolated. Each memory cell has a storage element and an access device; the latter allows you to read, write or erase each storage cell individually.
“Before, if you joined several memory cells together you wouldn’t be able to access one without accessing all of the others because they were all wired together,” Kozicki said. “What we did was put in an access, or isolation device, that electrically splits all of them into individual cells.”
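The access problem Kozicki describes can be illustrated with a toy calculation (the circuit and all resistance values below are hypothetical, not from the article): in a resistive memory array without per-cell access devices, a read current can sneak through neighbouring cells, so a cell stored in the high-resistance state is misread.

```python
# Toy model of "sneak paths" in a 2x2 resistive crossbar (illustrative
# values only, not from the article). Reading one cell without any
# per-cell access device also senses a parasitic series path through
# the three neighbouring cells; an ideal diode in each cell blocks it.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_HIGH = 1e6   # erased cell, high resistance (logic 0)
R_LOW = 1e3    # written cell, low resistance (logic 1)

# Target cell is erased; its three neighbours are all written.
target = R_HIGH
sneak_path = R_LOW + R_LOW + R_LOW  # series path through 3 neighbours

# Without isolation: the sense circuit sees target || sneak path.
seen_without_diodes = parallel(target, sneak_path)

# With an ideal diode per cell, the sneak path is reverse-biased and
# carries no current, so only the target cell is sensed.
seen_with_diodes = target

print(f"without diodes: {seen_without_diodes:.0f} ohms (misread as written)")
print(f"with diodes:    {seen_with_diodes:.0f} ohms (read correctly)")
```

In this sketch the erased megaohm cell reads as roughly 3 kilohms when unisolated, indistinguishable from a written cell, which is why each cell needs its own access device before layers can be stacked.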
Until now, these access elements have been built into the silicon substrate.
“But if you do that for one layer of memory and then you build another layer, where will you put the access device?” Kozicki asked. “You already used up the silicon on the first layer, and because it’s a single crystal, it is very difficult to have multiple layers of single-crystal material.”
The new approach does use silicon, but not single-crystal silicon; it instead uses silicon that can be deposited in layers as part of the three-dimensional memory fabrication process. Kozicki said his team was wrestling with how to build an electrical element, called a diode, into the memory cell. The diode would isolate the cells.
Kozicki said this idea usually involves several additional layers and processing steps when making the circuit, but his team found an elegant way of achieving diode capability by substituting one known material for another, in this case replacing a layer of metal with doped silicon.
“We can actually use a number of different types of silicon that can be layered,” he said. “We get away from using the substrate altogether for controlling the memory cells and put these access devices in the layers of memory above the silicon substrate.”
“Rather than having one transistor in the substrate controlling each memory cell, we have a memory cell with a built-in diode (access device) and since it is built into the cell, it will allow us to put in as many layers as we can squeeze in there,” Kozicki said. “We’ve shown that by replacing the bottom electrode with silicon it is feasible to go any number of layers above it.”
With each layer applied, memory capacity significantly expands.
“Stackable memory is thought to be the only way of reaching the densities necessary for the type of solid state memory that can compete with hard drives on cost as well as information storage capacity,” Kozicki said. “If you had eight layers of memory in a single chip, this would give you almost eight times the density without increasing the area.”
Kozicki said the advance mimics an idea employed in early radios.
“We created a modern analog to the ‘cat’s whisker,’ where we are growing a nanowire, a copper nanowire, right onto the silicon to create a diode,” he said.
Cat’s whisker radios, popular in the early decades of the 20th century, were simple devices that employed a small wire to scratch the surface of a semiconductor material. The contact between the semiconductor and the wire created a diode that could be used as part of a radio.
“It turns out to be a ridiculously simple idea, but it works,” Kozicki said of his stackable memory advance. “It works better than the complicated ideas work.”
“The key was the diodes, and making a diode that was simple and essentially integrated in with the memory cell. Once you do that, the rest is pretty straightforward.”
Arizona State University
The presence of fog on Saturn’s moon Titan provides the first direct evidence for the exchange of material between the surface and the atmosphere, and thus of an active hydrological cycle, which previously had only been known to exist on Earth.
In a talk delivered December 18 at the American Geophysical Union's 2009 Fall Meeting in San Francisco, Mike Brown, the Richard and Barbara Rosenberg Professor and professor of planetary astronomy at Caltech, details evidence that Titan's south pole is spotted "more or less everywhere" with puddles of methane that give rise to sporadic layers of fog. (Technically, fog is just a cloud or bank of clouds that touches the ground.)
Brown and his colleagues also describe their findings in a recent paper published in The Astrophysical Journal Letters.
The researchers made their discovery using data from the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft, which has been observing Saturn's system for the past five years.
The VIMS instrument provides "hyperspectral" imaging, covering a large swath of the visible and infrared spectrum. Brown and his colleagues—including Caltech undergraduate students Alex Smith and Clare Chen, who were working with Brown as part of a Summer Undergraduate Research Fellowship (SURF) project—searched public online archives to find all Cassini data collected over the moon's south pole from October 2006 through March 2007. They filtered the data to separate out features occurring at different depths in the atmosphere, ranging from 20 kilometers (12.4 miles) to 0.25 kilometers (820 feet) above the surface. Using other filters, they homed in on "bright" features caused by the scattering of light off small particles—such as the methane droplets present in clouds.
In this way, they isolated clouds located about 750 meters (less than a half-mile) above the ground. These clouds did not extend into the higher altitudes—into the moon's troposphere, where regular clouds form. In other words, says Brown, they had found fog.
"Fog—or clouds, or dew, or condensation in general—can form whenever air reaches about 100 percent humidity," Brown says. "There are two ways to get there. The first is obvious: add water (on Earth) or methane (on Titan) to the surrounding air. The second is much more common: make the air colder so it can hold less water (or liquid methane), and all of that excess needs to condense."
This, he explains, is the same process that causes water droplets to form on the outside of a cool glass.
On Earth, this is the most common method of making fog, Brown says. "That fog you often see at sunrise hugging the ground is caused by ground-level air cooling overnight, to the point where it cannot hang onto its water. As the sun rises and the air heats, the fog goes away."
Similarly, fog can form when wet air passes over cold ground; as the air cools, the water condenses. And mountain fog occurs when air gets pushed up the side of a mountain and cools, causing the water to condense.
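Brown's "make the air colder" route can be sketched numerically for the terrestrial water case. The sketch below uses the standard Magnus approximation for water vapor; the constants and the starting conditions are illustrative choices of mine, not figures from the article.

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Approximate saturation vapor pressure of water in Pa
    (Magnus formula, reasonable from about 0 to 60 degrees C)."""
    return 611.2 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def dew_point(t_celsius, rel_humidity):
    """Temperature to which the air must cool before condensation
    (fog or dew) begins, i.e. where relative humidity hits 100%."""
    e = rel_humidity * saturation_vapor_pressure(t_celsius)  # actual vapor pressure
    x = math.log(e / 611.2)
    return 243.5 * x / (17.67 - x)

# Air at 20 C and 60% relative humidity: no fog yet. Cooling it by
# about 8 degrees brings it to saturation and condensation begins.
t, rh = 20.0, 0.60
td = dew_point(t, rh)
print(f"dew point: {td:.1f} C")
```

On Titan the condensing species is methane rather than water, and, as Brown notes, Titan's air cannot cool fast enough for this route, which leaves added humidity from evaporation as the only way to reach saturation.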
However, none of these mechanisms work on Titan.
The reason is that Titan's muggy atmosphere takes a notoriously long time to cool (or warm). "If you were to turn the sun totally off, Titan's atmosphere would still take something like 100 years to cool down," Brown says. "Even the coldest parts of the surface are much too warm to ever cause fog to condense."
Mountain fog is also out of the question, he adds. "A Titanian mountain would have to be about 15,000 feet high before the air would get cold enough to condense," he says. And yet the tallest mountains the moon could possibly support (because of its fragile, icy crust) would be no more than 3,000 feet high.
The only possible way to make Titanian fog, then, is to add humidity to the air. And the only way to do that, Brown says, is by evaporating liquid—in this case, methane, the most common hydrocarbon on the moon, which exists in solid, liquid, and gaseous forms.
Brown notes that evaporating methane on Titan "means it must have rained, and rain means streams and pools and erosion and geology. The presence of fog on Titan proves, for the first time, that the moon has a currently active methane hydrological cycle."
The presence of fog also proves that the moon must be dotted with methane pools, Brown says. That's because any ground-level air, after becoming 100 percent humid and turning into fog, would instantly rise up into the atmosphere like a giant cumulus cloud. "The only way to make the fog stick around on the ground is to both add humidity and cool the air just a little," he explains. "The way to cool the air just a little is to have it in contact with something cold, like a pool of evaporating liquid methane."
(Photo: Mike Brown/Caltech)
California Institute of Technology (Caltech)
Using first observations with the PACS instrument on board ESA’s Herschel Space Observatory, scientists from the Max Planck Institute for Extraterrestrial Physics and other institutions have for the first time resolved more than half of the cosmic far-infrared background radiation into its constituent sources. Observations with Herschel open the way towards understanding the properties of these galaxies, and trace the dusty side of galaxy evolution.
In the mid-1990s, scientists analyzing data from NASA’s COBE spacecraft discovered faint radiation in the far-infrared part of the electromagnetic spectrum that reaches Earth with the same intensity from all directions in space. Immediately, they suspected it to be the aggregate emission of many distant galaxies in the early universe, releasing the same amount of energy in the far-infrared as reaches us in visible light from similarly distant galaxies. Whereas visible light tells us about the stars in galaxies, the far-infrared is emitted by cold dust that is hiding the newly formed stars. Identifying these surprisingly numerous dusty galaxies has proven difficult, though. Space telescopes are needed to detect far-infrared emission, because it is absorbed by the Earth’s atmosphere. Previous infrared space telescopes have detected far-infrared light from only the brightest of the galaxies forming this cosmic background. To glean any information about the fainter objects, astronomers had to rely on indirect evidence based on shorter wavelength radiation.
ESA’s Herschel Space Observatory, launched in May 2009, is the largest space telescope ever built with a mirror diameter of 3.5m. Its PACS instrument is designed to take high-resolution images of the sky at far-infrared wavelengths, exactly where most of the cosmic infrared background emission is received. "After the check-out of our instrument, we were yearning to obtain the first deep far-infrared observations of the sky," says Albrecht Poglitsch, principal investigator of PACS.
For a total of 30 hours in October, PACS has observed a small patch of sky in the constellation of Ursa Major, about a quarter of the size of the full moon. "Already in these first observations, we have resolved about 60% of the cosmic infrared background from this region of sky into individual well-detected sources," says Dieter Lutz from the consortium of scientists from five European institutes that have obtained the data. "And this is just the beginning. Yet more sensitive observations will follow soon, and we will be able to understand in detail the epoch of activity and the properties of the galaxies that produce the cosmic infrared background, now that we have pinned them down."
In research to be published in the Journal of Biological Chemistry, Dr Cornelia de Moor of The University of Nottingham and her team have investigated a drug called cordycepin, which was originally extracted from a rare kind of wild mushroom called cordyceps and is now prepared from a cultivated form.
Dr de Moor said: “Our discovery will open up the possibility of investigating the range of different cancers that could be treated with cordycepin. We have also developed a very effective method that can be used to test new, more efficient or more stable versions of the drug in the Petri dish. This is a great advantage as it will allow us to rule out any non-runners before anyone considers testing them in animals.”
Cordyceps is a strange parasitic mushroom that grows on caterpillars. Properties attributed to the cordyceps mushroom in Chinese medicine made it interesting to investigate, and it has been studied for some time; in fact, the first scientific publication on cordycepin was in 1950. The problem was that although cordycepin was a promising drug, it was quickly degraded in the body. It can now be given with another drug to help combat this, but the side effects of the second drug limit its potential use.
Dr de Moor continued: “Because of technical obstacles and people moving on to other subjects, it’s taken a long time to figure out exactly how cordycepin works on cells. With this knowledge, it will be possible to predict what types of cancers might be sensitive and what other cancer drugs it may effectively combine with. It could also lay the groundwork for the design of new cancer drugs that work on the same principle.”
The team has observed two effects on the cells: at a low dose cordycepin inhibits the uncontrolled growth and division of the cells and at high doses it stops cells from sticking together, which also inhibits growth. Both of these effects probably have the same underlying mechanism, which is that cordycepin interferes with how cells make proteins. At low doses cordycepin interferes with the production of mRNA, the molecule that gives instructions on how to assemble a protein. And at higher doses it has a direct impact on the making of proteins.
Professor Janet Allen, BBSRC Director of Research said: “Research to understand the underlying bioscience of a problem is always important. This project shows that we can always return to asking questions about the fundamental biology of something in order to refine the solution or resolve unanswered questions. The knowledge generated by this research demonstrates the mechanisms of drug action and could have an impact on one of the most important challenges to health.”
(Photo: © Malcolm Storey, 2001, http://www.bioimages.org.uk/)
Biotechnology and Biological Sciences Research Council
Researchers funded by BBSRC and the Medical Research Council have discovered that the motor system is activated automatically when we hear speech. These findings could, in the future, play a central role in helping to unravel various language difficulties seen in adults and children.
The study, ‘Activation of Articulatory Information in Speech Perception’, was conducted by researchers at Royal Holloway, University of London, the MRC Cognition and Brain Sciences Unit, and Ghent University, and is featured in the Proceedings of the National Academy of Sciences of the United States of America. It suggests that motor systems are recruited whenever we hear speech, irrespective of whether we are trying to ignore the speech that we are hearing.
Professor Kathleen Rastle, from the Department of Psychology at Royal Holloway, University of London, explained that the study was carried out by fitting each participant with a custom-made acrylic palate that measured contact between the tongue and the roof of the mouth 100 times per second whilst they were speaking, forming a detailed picture of the tongue’s position during speech. The participants were asked to read aloud printed targets such as ‘koob’ whilst also listening to spoken distractors such as ‘toob’.
“Our key question was whether there would be evidence that participants had constructed programs for the movements involved in speaking from the spoken distractor syllables. We hypothesized that if motor systems are recruited when we listen to speech, then the way that target syllables were produced would be influenced by the characteristics of the spoken distractors”, she said.
Results showed that the articulation of target syllables was distorted toward the articulatory requirements of the spoken distractors. For example, in the production of the ‘K’ sound in ‘koob’, participants’ tongues were slightly further forward when they were trying to ignore the spoken distractor ‘toob’ than otherwise.
Professor Rastle says the results suggest that participants had activated articulatory programs of the spoken distractors even though doing so caused them to produce distorted speech.
“These findings provide the first evidence that when we hear speech, we activate the movements involved in speaking in an automatic and involuntary manner. Research must now focus on precisely how the motor system impacts on speech perception and on why the motor system is recruited when we listen to speech,” she said.
The researchers say that this study helps to understand the relationship between hearing and producing speech and could help explain things such as why our accents can unintentionally change when we visit or relocate to a foreign country and start sounding like those around us when we make no attempt to alter our mode of speaking.
“This phenomenon is likely to occur precisely because of the exquisite relationship between perceptual and motor systems revealed in our research”, Professor Rastle says.
“Nonetheless, there are limits to this tight coupling as evidenced by the fact that most of us cannot faithfully reproduce the foreign accents that we are able to hear. Future research will help to reveal the reasons behind these abilities and limitations.”
“Each facial expression is made up of many different components – a twitch of the mouth here, a widening of the eyes there – some lasting only a fraction of a second,” Dr Lucey said.
“Our computer program looks at these components, matches them against a list drawn up by expert psychologists and decides what expression just flitted across a face.”
It turns out that most human expressions are the same regardless of background.
And, as anyone who has been naughty knows, it’s also very hard to fake an expression.
While a guilty person might fool a human with their look of pure innocence, it’s very hard to fool Dr Lucey’s computer.
“There are always some micro expressions that we are unable to control and some of these are associated with deception. It is these tiny facial expression components that the computer can spot.”
The system uses a technique called machine learning. After being programmed with what to look for and what it means, each observation taken or analysis performed is used to refine the computer’s technique so it gets better at its assigned task over time.
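The refine-with-every-observation loop described above can be sketched with a minimal online learner. This is a generic perceptron on made-up two-dimensional "expression features", not the actual CSIRO system; the data and labels are entirely hypothetical.

```python
# Minimal online-learning sketch (not the CSIRO system): a perceptron
# that refines its weights with every labelled observation, so its
# classifications improve over time.

def predict(w, b, x):
    """Classify a 2-D feature vector with the current weights."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

def update(w, b, x, label, lr=0.1):
    """Refine the weights only when the current guess is wrong."""
    if predict(w, b, x) != label:
        w[0] += lr * label * x[0]
        w[1] += lr * label * x[1]
        b += lr * label
    return w, b

# Made-up observations: feature pairs labelled +1 ("expression A")
# or -1 ("expression B").
observations = [((0.1, 0.2), -1), ((0.9, 0.8), 1), ((0.2, 0.1), -1),
                ((0.8, 0.9), 1), ((0.1, 0.7), -1), ((0.9, 0.4), 1)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):            # each pass refines the model further
    for x, label in observations:
        w, b = update(w, b, x, label)

correct = sum(predict(w, b, x) == label for x, label in observations)
print(f"{correct} of {len(observations)} classified correctly")  # 6 of 6
```

Real systems use far richer features (the facial-expression components described above) and more capable models, but the principle is the same: each observation nudges the model, so the assigned task is performed better over time.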
Dr Lucey’s system uses a webcam and will work against any background, providing there’s enough light.
“Our expression work is still mostly from front-on, but we’re teaching the system to do what we do and recognise expressions when only one side of the face is visible.
“When it comes to finding out who’s been naughty or nice, we show the computer what expressions are associated with good behaviour and it watches for a departure from that,” Dr Lucey said.
One of the applications for Dr Lucey’s research is in detecting whether someone who cannot communicate is in pain and how intense that pain is.
Another is in making ‘telecollaboration’ between people using video and computers in different locations more natural by recognising gestures such as someone pointing at an object.
Commonwealth Scientific and Industrial Research Organisation (CSIRO)