Friday, October 30, 2009

MU RESEARCH TEAM ESTABLISHES FAMILY TREE FOR CATTLE, OTHER RUMINANTS


Pairing a new approach to prepare ancient DNA with a new scientific technique developed specifically to genotype a cow, an MU animal scientist, along with a team of international researchers, created a highly accurate and comprehensive "family tree" for cows and other ruminants, going back as far as 29 million years.

This genetic information could allow scientists to understand the evolution of cattle, ruminants and other animals. This same technique also could be used to verify ancient relatives to humans, help farmers develop healthier and more efficient cattle, and assist scientists who are studying human diseases, according to the research, which is being published in the Proceedings of the National Academy of Sciences (PNAS).

"We studied 678 different animals, representing 61 different species, and using the new Illumina cow 'SNP chip,' or 'snip chip,' we were able to generate some very precise genetic data for which the chip was not designed," said Jerry Taylor, a professor of animal science in the MU College of Agriculture, Food and Natural Resource and lead author of the study. "Our SNP chips allow scientists to examine hundreds of thousands of points on an animal's genome simultaneously. When we applied this technique to 48 recognized breeds of cattle, we were able to construct a family tree and infer the history of cattle domestication and breed formation across the globe."

The research revealed the history of European cattle, with domesticated cattle moving sequentially through Turkey, the Balkans and Italy, then spreading through Central Europe and France, and ending in Britain. The scientists also found evidence supporting a second route of ancient cattle into Europe by way of the Iberian Peninsula.

The applications for this technology and information discovered in the research could help solve a number of problems and answer questions about evolution, including how humans are related to extinct hominids and how different plant species are related to each other, Taylor said.

Based on the findings, animal scientists can begin to study evolution of certain breeds. For example, if breeds of cattle with high amounts of intramuscular fat, which is known as marbling, are closely related to each other, then they likely share the same gene variations to create the marbling, which is a trait some beef consumers prefer. On the other hand, if those same cattle are not closely related, different genetic variants might be at work. Understanding how different genetic variations allow high levels of marbling, feed efficiency and disease resistance in cattle could have a large economic impact for farmers who raise cattle throughout the world.

"This also provides us an opportunity to identify animal models for human disease since, for example, an excess amount of intramuscular fat in humans is associated with insulin resistance and type 2 diabetes," Taylor said. We're all interested in reconstructing our ancestry. This is essentially the same thing, except that we're able to zoom out by millions of years and include relatives who are long gone. The amazing thing about this technique is that it is very fast and extremely cheap. For relatively small amounts of money, we can generate the data that will allow us to recreate millions of years of evolutionary history."

(Photo: U. Missouri)

University of Missouri

BEDROCK OF A HOLY CITY: THE HISTORICAL IMPORTANCE OF JERUSALEM'S GEOLOGY

Jerusalem's geology has been crucial in molding it into one of the most religiously important cities on the planet, according to a new study.

It started in the year 1000 BCE, when the Jebusite city's water system proved to be its undoing. The Spring of Gihon sat just outside the city walls, a vital resource in the otherwise parched region. But King David, intent on taking the city, sent an elite group of his soldiers into a karst limestone tunnel that fed the spring. His men climbed up through a cave system hollowed out by flowing water, infiltrated beneath the city walls, and attacked from the inside. David made the city the capital of his new kingdom, and Israel was born.

In a new analysis of historical documents and detailed geological maps, Michael Bramnik of Northern Illinois University added new geological accents to this pivotal moment in human history in a presentation Tuesday, October 20 at the annual meeting of the Geological Society of America in Portland.

"The karst geology played a major role in the city's selection by David for his capital," Bramnik said.

It proved to be a wise decision. One of David's successors, King Hezekiah, watched as the warlike Assyrian horde, a group of vastly superior warriors, toppled city after city in the region. Fearing that they'd soon come for Jerusalem, he too took advantage of the limestone bedrock and dug a 550-meter-long (1,804-foot) tunnel that rerouted the spring's water inside the city's fortified walls.

The Assyrians laid siege to the city in 701 BCE, but failed to conquer it. It was the only city in history to successfully fend them off.

"Surviving the Assyrian siege put it into the people's minds that it was because of their faith that they survived," Bramnik said. "So when they were captured by the Babylonians in 587, they felt it was because their faith had faltered."

Until then, the Jewish religion had been loosely organized. But that conviction united the Jews through the Babylonian Captivity, "and so began modern congregational religion," Bramnik said.

In an arid region rife with conflict, water security is as important today as it was during biblical times. While the groundwater for Jerusalem is recharged by surface waters in central Israel, other settlements' water sources are not publicly available for research. Bramnik's efforts to find detailed hydrological maps were often rebuffed, or the maps were said to be non-existent.

"I think Jerusalem's geology and the geology of Israel is still significant to life in the region, perhaps even reaching into the political arena," he said.

The Geological Society of America

ARE HUMANS STILL EVOLVING? ABSOLUTELY, SAYS A NEW ANALYSIS OF A LONG-TERM SURVEY OF HUMAN HEALTH

Although advances in medical care have improved standards of living over time, humans aren't entirely sheltered from the forces of natural selection, a new study shows.

"There is this idea that because medicine has been so good at reducing mortality rates, that means that natural selection is no longer operating in humans," said Stephen Stearns of Yale University. A recent analysis by Stearns and colleagues turns this idea on its head. As part of a working group sponsored by the National Evolutionary Synthesis Center in Durham, NC, the team of researchers decided to find out if natural selection — a major driving force of evolution — is still at work in humans today. The result? Human evolution hasn't ground to a halt. In fact, we're likely to evolve at roughly the same rates as other living things, findings suggest.

Taking advantage of data collected as part of a 60-year study of more than 2,000 North American women in the Framingham Heart Study, the researchers analyzed a handful of traits important to human health. By measuring the effects of these traits on the number of children the women had over their lifetimes, the researchers were able to estimate the strength of selection and make short-term predictions about how each trait might evolve in the future. After adjusting for factors such as education and smoking, their models predict that the descendants of these women will be slightly shorter and heavier, will have lower blood pressure and cholesterol, will have their first child at a younger age, and will reach menopause later in life.
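The statistical machinery behind such predictions is standard quantitative genetics: regress relative fitness (lifetime number of children divided by the mean) on each standardized trait, and the slope gives the strength and direction of selection on that trait. The sketch below illustrates the idea with simulated numbers; it is not the study's code, and the Framingham analysis additionally adjusted for covariates such as education and smoking.

import numpy as np

# Toy stand-ins for the Framingham measurements: one trait (say, total
# cholesterol in mg/dL) and lifetime number of children for 2,000 women.
rng = np.random.default_rng(0)
cholesterol = rng.normal(220, 40, 2000)
children = rng.poisson(2.2 - 0.003 * (cholesterol - 220))  # weak negative selection built in

# Standardize the trait, express fitness relative to the mean, and take
# the regression slope as the selection gradient.
z = (cholesterol - cholesterol.mean()) / cholesterol.std()
w = children / children.mean()
gradient = np.polyfit(z, w, 1)[0]

print(f"estimated selection gradient: {gradient:.3f}")
# Negative here by construction: lower cholesterol is favored, the same
# direction of change the Framingham analysis predicts.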

"The take-home message is that humans are currently evolving," said Stearns. "Natural selection is still operating."

The changes may be slow and gradual, but the predicted rates of change are no different from those observed elsewhere in nature, the researchers say. "The evolution that's going on in the Framingham women is like average rates of evolution measured in other plants and animals," said Stearns. "These results place humans in the medium-to-slow end of the range of rates observed for other living things," he added. "But what that means is that humans aren't special with respect to how fast they're evolving. They're kind of average."

National Evolutionary Synthesis Center

SHIFTING THE WORLD TO 100 PERCENT CLEAN, RENEWABLE ENERGY AS EARLY AS 2030 -- HERE ARE THE NUMBERS


Most of the technology needed to shift the world from fossil fuel to clean, renewable energy already exists. Implementing that technology requires overcoming obstacles in planning and politics, but doing so could result in a 30 percent decrease in global power demand, say Stanford civil and environmental engineering Professor Mark Z. Jacobson and University of California-Davis researcher Mark Delucchi.

To make clear the extent of those hurdles – and how they could be overcome – they have written an article that is the cover story in the November issue of Scientific American. In it, they present new research mapping out and evaluating a quantitative plan for powering the entire world on wind, water and solar energy, including an assessment of the materials needed and costs. And it will ultimately be cheaper than sticking with fossil fuel or going nuclear, they say.

The key is turning to wind, water and solar energy to generate electrical power – making a massive commitment to them – and eliminating combustion as a way to generate power for vehicles as well as for normal electricity use.

The problem lies in the use of fossil fuels and biomass combustion, which are notoriously inefficient at producing usable energy. For example, when gasoline is used to power a vehicle, at least 80 percent of the energy produced is wasted as heat.

With vehicles that run on electricity, it's the opposite. Roughly 80 percent of the energy supplied to the vehicle is converted into motion, with only 20 percent lost as heat. Other combustion devices can similarly be replaced with electricity or with hydrogen produced by electricity.

Jacobson and Delucchi used data from the U.S. Energy Information Administration to project that if the world's current mix of energy sources is maintained, global energy demand at any given moment in 2030 would be 16.9 terawatts, or 16.9 million megawatts.

They then calculated that if no combustion of fossil fuel or biomass were used to generate energy, and virtually everything was powered by electricity – either for direct use or hydrogen production – the demand would be only 11.5 terawatts. That's only two-thirds of the energy that would be needed if fossil fuels were still in the mix.
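The arithmetic behind that comparison is simple to check (a back-of-the-envelope verification of the quoted figures, not the authors' model):

# Projected 2030 power demand from the article, in terawatts.
business_as_usual = 16.9   # current mix of energy sources
all_electric = 11.5        # wind/water/solar supplying electricity and hydrogen

print(f"remaining fraction of demand: {all_electric / business_as_usual:.2f}")
# about 0.68, i.e. roughly two-thirds, which is the ~30 percent drop in
# global power demand cited at the top of the article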

In order to convert to wind, water and solar, the world would have to build wind turbines; solar photovoltaic and concentrated solar arrays; and geothermal, tidal, wave and hydroelectric power sources to generate the electricity, as well as transmission lines to carry it to the users, but the long-run net savings would more than equal the costs, according to Jacobson and Delucchi's analysis.

"If you make this transition to renewables and electricity, then you eliminate the need for 13,000 new or existing coal plants," Jacobson said. "Just by changing our infrastructure we have less power demand."

Jacobson and Delucchi chose to use wind, water and solar energy options based on a quantitative evaluation Jacobson did last year of about a dozen of the different alternative energy options that were getting the most attention in public and political discussions and in the media. He compared their potential for producing energy, how secure an energy source each was, and their impacts on human health and the environment.

He determined that the best overall energy sources were wind, water and solar options. His results were published in Energy and Environmental Science.

The Scientific American article provides a quantification of global solar and wind resources based on new research by Jacobson and Delucchi.

Analyzing only on-land locations with a high potential for producing power, they found that even if wind were the only method used to generate power, the potential for wind energy production is 5 to 15 times greater than what is needed to power the entire world. For solar energy, the comparable calculation found that solar could produce about 30 times the amount needed.

If the world built just enough wind and solar installations to meet the projected demand for the scenario outlined in the article, an area smaller than the borough of Manhattan would be sufficient for the wind turbines themselves. Allowing for the required amount of space between the turbines boosts the needed acreage up to 1 percent of Earth's land area, but the spaces between could be used for crops or grazing. The various non-rooftop solar power installations would need about a third of 1 percent of the world's land, so altogether about 1.3 percent of the land surface would suffice.
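The land-area figures quoted above likewise add up directly (a rough check of the article's numbers, not the underlying footprint analysis):

wind_incl_spacing = 1.0      # percent of Earth's land area, turbines plus spacing
solar_non_rooftop = 1.0 / 3  # about a third of one percent for non-rooftop solar

print(f"combined footprint: about {wind_incl_spacing + solar_non_rooftop:.1f} percent of land area")
# ~1.3 percent, with the spacing between turbines still usable for crops or grazing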

The study further provides examples of how a combination of renewable energy sources could be used to meet hour-by-hour power demand, addressing a commonly asked question: given the inherent variability of wind speed and sunshine, can these sources consistently produce enough power? The answer, they say, is yes.

Expanding the transmission grid would be critical for the shift to the sustainable energy sources that Jacobson and Delucchi propose. New transmission lines would have to be laid to carry power from new wind farms and solar power plants to users, and more transmission lines will be needed to handle the overall increase in the quantity of electric power being generated.

The researchers also determined that the availability of certain materials that are needed for some of the current technologies, such as lithium for lithium-ion batteries or platinum for fuel cells, is not currently a barrier to building a large-scale renewable infrastructure. But efforts will be needed to ensure that such materials are recycled and that potential alternative materials are explored.

Finally, they conclude that perhaps the most significant barrier to the implementation of their plan is the competing energy industries that currently dominate political lobbying for available financial resources. But the technologies being promoted by the dominant energy industries are not renewable and even the cleanest of them emit significantly more carbon and air pollution than wind, water and sun resources, say Jacobson and Delucchi.

If the world allows carbon- and air pollution-emitting energy sources to play a substantial role in the future energy mix, Jacobson said, global temperatures and health problems will only continue to increase.

(Photo: Stanford U.)

Stanford University

FIRST-TIME INTERNET USERS FIND BOOST IN BRAIN FUNCTION AFTER JUST 1 WEEK


You can teach an old dog new tricks, say UCLA scientists who found that middle-aged and older adults with little Internet experience were able to trigger key centers in the brain that control decision-making and complex reasoning after just one week of surfing the Web.

The findings, presented Oct. 19 at the 2009 meeting of the Society for Neuroscience, suggest that Internet training can stimulate neural activation patterns and could potentially enhance brain function and cognition in older adults.

As the brain ages, a number of structural and functional changes occur, including atrophy, reductions in cell activity and increases in deposits of amyloid plaques and tau tangles, which can impact cognitive function.

Research has shown that mental stimulation similar to that which occurs in individuals who frequently use the Internet may affect the efficiency of cognitive processing and alter the way the brain encodes new information.

"We found that for older people with minimal experience, performing Internet searches for even a relatively short period of time can change brain activity patterns and enhance function," said study author Dr. Gary Small, a professor of psychiatry at the Semel Institute for Neuroscience and Human Behavior at UCLA and the author of "iBrain," a book that describes the impact of new technology on the brain and behavior.

The UCLA team worked with 24 neurologically normal volunteers between the ages of 55 and 78. Prior to the study, half the participants used the Internet daily, while the other half had very little experience. Age, educational level and gender were similar between the two groups.

Study participants performed Web searches while undergoing functional magnetic resonance imaging (fMRI) scans, which recorded the subtle brain-circuitry changes experienced during this activity. This type of scan tracks brain activity by measuring the level of cerebral blood flow during cognitive tasks.

After the initial brain scan, participants went home and conducted Internet searches for one hour a day for a total of seven days over a two-week period. These practice searches involved using the Internet to answer questions about various topics by exploring different websites and reading information. Participants then received a second brain scan using the same Internet simulation task but with different topics.

The first scan of participants with little Internet experience demonstrated brain activity in regions controlling language, reading, memory and visual abilities, which are located in the frontal, temporal, parietal, visual and posterior cingulate regions, researchers said. The second brain scan of these participants, conducted after the practice Internet searches at home, demonstrated activation of these same regions, as well as triggering of the middle frontal gyrus and inferior frontal gyrus – areas of the brain known to be important in working memory and decision-making.

Thus, after Internet training at home, participants with minimal online experience displayed brain activation patterns very similar to those seen in the group of savvy Internet users – after just a brief period of time.

"The results suggest that searching online may be a simple form of brain exercise that might be employed to enhance cognition in older adults," said Teena D. Moody, the study's first author and a senior research associate at the Semel Institute at UCLA.

When performing an Internet search, the ability to hold important information in working memory and to extract the important points from competing graphics and words is essential, Moody noted.

Previous research by the UCLA team found that searching online resulted in a more than twofold increase in brain activation in older adults with prior experience, compared with those with little Internet experience. According to Small, the new findings suggest that it may take only days for those with minimal experience to match the activity levels of those with years of experience.

Additional studies may address the impact of the Internet on younger individuals and help identify aspects of online searching that generate the greatest levels of brain activation.

(Photo: UCLA)

UCLA

WORLD’S OLDEST SUBMERGED TOWN DATES BACK 5,000 YEARS


Archaeologists surveying the world’s oldest submerged town have found ceramics dating back to the Final Neolithic. Their discovery suggests that Pavlopetri, off the southern Laconia coast of Greece, was occupied some 5,000 years ago — at least 1,200 years earlier than originally thought.

These remarkable findings have been made public by the Greek government after the start of a five-year collaborative project involving the Ephorate of Underwater Antiquities of the Hellenic Ministry of Culture and The University of Nottingham.

As a Mycenaean town the site offers potential new insights into the workings of Mycenaean society. Pavlopetri has added importance as it was a maritime settlement from which the inhabitants coordinated local and long distance trade.

The Pavlopetri Underwater Archaeology Project aims to establish exactly when the site was occupied, what it was used for and through a systematic study of the geomorphology of the area, how the town became submerged.

This summer the team carried out a detailed digital underwater survey and study of the structural remains, which until this year were thought to belong to the Mycenaean period — around 1600 to 1000 BC. The survey surpassed all their expectations. Their investigations revealed another 9,000 square metres of new buildings as well as ceramics that suggest the site was occupied throughout the Bronze Age — from at least 2800 BC to 1100 BC.

The work is being carried out by a multidisciplinary team led by Mr Elias Spondylis, Ephorate of Underwater Antiquities of the Hellenic Ministry of Culture in Greece and Dr Jon Henderson, an underwater archaeologist from the Department of Archaeology at The University of Nottingham.

Dr Jon Henderson said: “This site is unique in that we have almost the complete town plan, the main streets and domestic buildings, courtyards, rock-cut tombs and what appear to be religious buildings, clearly visible on the seabed. Equally as a harbour settlement, the study of the archaeological material we have recovered will be extremely important in terms of revealing how maritime trade was conducted and managed in the Bronze Age.”

Possibly one of the most important discoveries has been the identification of what could be a megaron — a large rectangular great hall — from the Early Bronze Age period. They have also found over 9,000 square metres of new buildings including what could be the first example of a pillar crypt ever discovered on the Greek mainland. Two new stone built cist graves were also discovered alongside what appears to be a Middle Bronze Age pithos burial.

Mr Spondylis said: “It is a rare find and it is significant because as a submerged site it was never re-occupied and therefore represents a frozen moment of the past.”

The Archaeological Co-ordinator, Dr Chrysanthi Gallou, a postdoctoral research fellow at The University of Nottingham, is an expert in Aegean Prehistory and the archaeology of Laconia.

Dr Gallou said: “The new ceramic finds form a complete and exceptional corpus of pottery covering all sub-phases from the Final Neolithic period (mid 4th millennium BC) to the end of the Late Bronze Age (1100 BC). In addition, the interest from the local community in Laconia has been fantastic. The investigation at Pavlopetri offers a great opportunity for them to be actively involved in the preservation and management of the site, and subsequently for the cultural and touristic development of the wider region.”

The team was joined by Dr Nicholas Flemming, a marine geo-archaeologist from the Institute of Oceanography at the University of Southampton, who discovered the site in 1967 and returned the following year with a team from Cambridge University to carry out the first ever survey of the submerged town. Using just snorkels and tape measures, they produced a detailed plan of the prehistoric town, which consisted of at least 15 separate buildings, courtyards, streets, two chamber tombs and at least 37 cist graves. Despite the potential international importance of Pavlopetri, no further work was carried out at the site until this year.

This year, through a British School of Archaeology in Athens permit, The Pavlopetri Underwater Archaeology Project began its five year study of the site with the aim of defining the history and development of Pavlopetri.

Four more fieldwork seasons are planned before their research is published in full in 2014.

(Photo: U. Nottingham)

The University of Nottingham

Thursday, October 29, 2009

TRACKING ICY OBJECTS, ACROSS THE GLOBE


On any given night, numerous icy bodies orbiting the sun far beyond the orbit of Pluto may happen to pass in front of a star (as seen from Earth). These events are called occultations, but because the icy moon-sized globes called Kuiper Belt Objects are so small, and their orbits not very accurately known, the vast majority of these events will go unobserved.

That's too bad, because there's a lot to be learned by watching occultations: It's a way to learn the exact size of the object, to discover whether it's actually a pair of objects or is accompanied by one or more moons, and to determine whether or not it has an atmosphere. These questions bear directly on our understanding of the origins of our solar system, because Kuiper Belt Objects — which lie in a belt of tens of thousands of icy worlds that includes the former planet Pluto — are thought to be the nearly unchanged remnants of the small bodies called planetesimals that slammed together more than 4 billion years ago to form the planets themselves.

Now, an MIT-led group is aiming to pin down predictions of occultations so that they can be observed systematically by teams of observers scattered across the globe. Multiple observations are essential: they allow for the collection of as much data as possible from these events.
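One concrete reason multiple observers matter: each station that sees the star wink out measures a single chord across the object's shadow, and only several chords at different offsets from the shadow's centerline can pin down the body's size. A minimal geometric sketch, with invented station offsets and timings rather than anything from the MIT group's prediction system:

import numpy as np

# Hypothetical example: three stations, each a known perpendicular distance
# (km) from the predicted shadow track, time how long the star disappears;
# the shadow is assumed to sweep past at 25 km/s.
offsets = np.array([400.0, 50.0, 300.0])    # cross-track offsets, km
durations = np.array([24.0, 39.8, 32.0])    # occultation durations, s
v_shadow = 25.0                             # km/s

# For a spherical body each chord satisfies (L/2)^2 + d^2 = R^2, so
# several chords together determine the radius R.
half_chords = durations * v_shadow / 2.0
radius = np.sqrt(np.mean(half_chords**2 + offsets**2))
print(f"fitted radius: about {radius:.0f} km")   # ~500 km for these made-up numbers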

The group's first full-scale test of its system took place Thursday night, with at least 25 observing teams spread all the way from Australia and New Zealand, through Hawaii, and into the continental United States. The teams include both professional and experienced amateur astronomers, as well as at least one MIT graduate student.

James Elliot, a professor in the Department of Earth, Atmospheric and Planetary Sciences, is leading the project, and was due to discuss it and first-look results from Thursday night's observations on Friday at the annual meeting of the American Astronomical Society's Division for Planetary Sciences, being held in Puerto Rico.

Ultimately, the team hopes to be able to produce occultation predictions accurate enough to guide observations by NASA's new airplane-mounted telescope, called SOFIA, which is expected to begin scientific work early next year. Thursday's observations were a kind of test case, Elliot says, because "with SOFIA, it's going to be such a production, it would be very costly to get it wrong. This will test our ability to get it right."

(Photo: Carlos Zuluaga; courtesy of the Planetary Astronomy Lab)

Massachusetts Institute of Technology

WHY GIANT SEA SCORPIONS GOT SO BIG


Research on giant sea scorpions (eurypterids) – the largest bugs that ever lived – has shed new light on why eurypterids became so large and eventually died out.

Originally proposed in the 1930s, ‘Romer’s theory’ argues that eurypterids evolved in an ‘arms race’ alongside early vertebrates – giant armour-plated fish called placoderms – which is why they became so large. Subsequently, scientists have thought that eurypterids evolved to their huge size due to higher levels of oxygen in the atmosphere in the past, and other environmental factors.

The new research shows that both views are correct as the two main eurypterid lineages faced different pressures.

The first group – the giant predatory eurypterids that were up to 2.5 metres long and had swimming paddles – became large due to competition with placoderms, in line with Romer’s theory. The second group – initially smaller forms that walked and scavenged on the sea floor – grew to a huge size due to environmental factors.

Previous research had not differentiated between the two lineages, nor tested either theory statistically and thus overlooked the fact that different pressures affected the two groups independently.

The new study, by James Lamsdell and Dr Simon Braddy from the University of Bristol, which is published today in Biology Letters, compared patterns in the size and number of different types of eurypterids and early fishes.

James Lamsdell, lead author on the paper, explains: “We found that the evolution of the two main eurypterid lineages was quite different. The giant predatory eurypterids increased in size but decreased in diversity as placoderms became more common, while the other form of eurypterids, which were initially small scavengers, only reached their massive size later on when many other invertebrates also increased in size.”

The demise of the giant predatory eurypterids coincided with the appearance of large placoderms around 400 million years ago. Lamsdell and Braddy show there is a peak in their diversity before placoderms appear in the fossil record; they then rapidly decrease in numbers, increasing in size as they do so, before eventually dying out 370 million years ago. This suggests that they attained their huge size by competing with early vertebrates such as placoderms, a battle which they eventually lost.

The scavenging eurypterids on the other hand avoided competing with vertebrates and outlasted the placoderms. They also attained massive sizes, reaching almost 2 metres long, but not until 300 million years ago when they had to cope with life in less salty water, where being bigger is better for helping to regulate such things as the chemistry of blood fluids. They died out due to massive changes in the environment, along with 95% of life, during the Permian extinction 260 million years ago.

Dr Simon Braddy added: “This research indicates that ecology and competition with other animals is as important as environmental change in explaining why some bugs were so big in the past. The moral of the story is: if predators don’t get you, the environment will!”

(Photo: Simon Powell)

University of Bristol

NEW VEHICLE CONCEPT WOULD PROTECT CREWS FROM ROADSIDE BOMBS


A new crew survivability concept that would build military vehicles around a protected personnel compartment and use a sacrificial “blast wedge” to absorb energy from improvised explosive devices could improve safety for the occupants of future light armored patrol vehicles.

Researchers from the Georgia Tech Research Institute (GTRI) have designed and tested the concept, dubbed ULTRA II, for the U.S. Office of Naval Research (ONR). The crew-protection concept builds on an earlier GTRI development for the ONR that evaluated new concepts for light armored vehicles. A blast test conducted with the ULTRA II full-sized crew compartment test article at the Aberdeen Test Center showed that the new concept could protect the vehicle crew from improvised explosions.

“Instead of up-armoring a standard vehicle or modifying an existing drive train, we built a bubble of force protection first and then addressed vehicle mobility,” explained Vince Camp, a GTRI senior research engineer and the project’s principal investigator. “The idea was to emphasize warfighter protection first by starting with design of an improved crew compartment, as opposed to starting with an existing vehicle and trying to add armor.”

The ULTRA II crew compartment was designed to house six persons: a driver and commander facing forward, and two pairs of crew members behind them, each pair facing opposite sides of the vehicle. By putting their backs toward the center of the crew compartment, the concept moves the crew away from the outside walls to reduce the likelihood of injury from side blasts, provides better visibility for the crew to monitor their surroundings, allows blast-resistant seats to be frame-mounted—and facilitates faster egress from the vehicle.

The crew compartment envisioned by GTRI uses a “space frame” constructed of tubular steel—similar to civilian off-road racing vehicles. An armored steel “skin” provides added structure and moderate ballistic and blast protection. Additional armor is bolted onto the frame in a modular way, allowing varying levels of protection that could be easily modified in the field and changed as new high-performance armor concepts are developed.

An integral part of the protection is provided by a sacrificial “blast wedge” bolted onto the bottom of the vehicle. Constructed of welded steel armor, the wedge both deflects energy away from the vehicle and absorbs energy from a blast, performing a function similar to “crumple zones” in modern civilian vehicles.

The design and fabrication of the test article was conducted by personnel in the Aerospace, Transportation and Advanced Systems Laboratory of GTRI. Tests using a heavily-instrumented test article with instrumented dummies simulating the crew showed that the wedge deflected or absorbed nearly 70 percent of the energy from an explosion beneath it. Damage from the blast was confined primarily to the sacrificial blast wedge; the crew compartment suffered no structural damage and no blast penetration.

“Energy used up in crushing and tearing the metal in the blast wedge is energy that wouldn’t go into injuring the crew,” said Kevin Massey, a GTRI senior research engineer who was part of the project team. “Data from the instrumented dummies shows that had this test been conducted with real warfighters in a real vehicle, we wouldn’t have seen any spinal injuries, head trauma, neck trauma or leg injuries.”

Because the wedge is removable, it could be replaced if damaged. Making the blast wedge removable also allows for an overall reduction of the vehicle’s height for shipping, an important issue for rapid deployment.

The research team, which also included Burt Jennings, Cal Jameson, Jake Leverett and Mark Entrekin, combined non-linear dynamic blast simulations and neural networks to study how blast forces would affect the vehicle. Conventional finite element analysis also provided valuable design feedback in development of the ULTRA II test article.

There were many tradeoffs to consider in designing the new concept, including vehicle height and resistance to blast forces that may come from many different angles.

“To survive the blast, you want to get as high off the ground as possible,” Massey noted. “But the higher you are off the ground, the more likely you are to roll over. This is an example of the tradeoffs that have to be balanced.”

In addition to crew protection, the researchers also designed a translating door that would provide a large side opening similar to that of existing civilian minivans. Such a door system would provide improved ingress/egress for the crew and could remain open when the vehicle is moving.

GTRI has presented data from the test to the Office of Naval Research, and hopes to pursue additional refinements to the blast wedge and overall vehicle concept. Among the goals would be to improve energy absorption from the blast wedge, and to evaluate whether the crew compartment should separate from the drive train in certain types of blasts.

“We think that the concept of a space-frame design is a very viable one, and we want to take the lessons we’ve learned so far to improve on it,” Massey added. “We’d also like to see if the concept of the energy-absorbing wedge can be applied to existing vehicles that are already out there. The bottom line is saving people’s lives and protecting them from injury.”

(Photo: GTRI)

Georgia Tech Research Institute (GTRI)

BY SIMULATING GULLIES, GEOGRAPHERS DISCOVER WAYS TO TAME SOIL EROSION


Dead zones in critical waterways, accelerated loss of arable land and massive famines. They're all caused by the 24 billion tons of soil that are lost every year to erosion, a phenomenon that costs the world as much as $40 billion annually.

But predicting where erosion occurs, and thus how to prevent it, is a serious challenge.

That's why University at Buffalo geographer Sean Bennett has constructed various systems to model it, with assistance from UB's machine shop. His methods range from the deceptively low-tech, like simulating rainstorms over sandboxes, to the high-tech, such as the use of particle image velocimetry (PIV) in large, re-circulating flumes to study how water and grains of sand interact.

The purpose of his work is both exceedingly practical -- geared toward helping farmers learn how to best prevent erosion -- and fundamental, to better understand how planetary surfaces evolve over time.

"We have feet in two domains," he explains, "we're studying processes similar to those that created Niagara Falls; at the same time, we're studying how these processes degrade soil resources worldwide."

The UB research is helping scientists better understand some of the key triggers of erosion, the complex formation of channels on the landscape, called rills and gullies.

"Rills and gullies are the dominant erosion processes on agricultural landscapes today and the main contributor to soil loss," says Bennett, PhD, UB professor of geography in the College of Arts and Sciences and an active researcher in the UB 2020 Strategic Strength in Extreme Events.

Rills and gullies also are a primary cause of excess sediment and nutrients in waterways, transporting soil and chemicals farther downstream.

Bennett says that these high nutrient loadings of nitrogen and phosphorus from eroding agricultural areas destroy aquatic resources, causing unmitigated growth of aquatic algae, depletion of dissolved oxygen and the creation of "dead zones" in places like the Gulf of Mexico.

Ironically, past research by Bennett demonstrated that when farmers till fields to remove rills and gullies, they actually end up accelerating erosion.

"Our numerical model showed that you could reduce soil losses by 400 percent if you adopt a no-till farming practice," says Bennett. "This is because the gullies grow to some maximum size on the landscape during a growing season. If farmers repair them by tilling the soil each spring, the practice actually causes much greater soil loss over the long term."

Bennett's physical model showed similar phenomena.

"Our laboratory landscape showed the same thing," he says, "rills grow and evolve in time and space, erosional processes get arrested and reach an endpoint. After that, they don't produce much sediment."

To model how rills and gullies form, Bennett and his students built a rainfall soil erosion facility, erecting a 30-foot by 8-foot flume containing eight tons of soil, which allowed them to monitor their simulated landscape, looking for disturbances in the soil and the creation of rills and gullies.

Using digital cameras positioned directly above the flume, they developed digital elevation models of the topography across the flume, at millimeter-scale accuracy.

"Each set of images represents how the topography evolved at a discrete space and time during the simulated storm," says Bennett.

The images reveal at what point during the rainfall and runoff, phenomena called headcuts -- small intense areas of localized erosion -- begin to carve deep channels into the soil.
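A hedged sketch of the kind of analysis repeat elevation models make possible (illustrative only, not the UB group's processing; the array sizes and threshold are arbitrary): differencing two successive digital elevation models flags the cells where the bed is lowering fastest, which is where incipient headcuts are carving channels.

import numpy as np

# Two successive digital elevation models of the flume bed, in metres,
# on a regular grid; random toy surfaces stand in for real scans here.
rng = np.random.default_rng(1)
dem_before = rng.normal(0.0, 0.001, (200, 800))
dem_after = dem_before - np.abs(rng.normal(0.0, 0.002, (200, 800)))

# Elevation change between scans: strongly negative cells mark rapid
# local lowering, the signature of headcut erosion.
change = dem_after - dem_before
headcuts = change < -0.005          # more than 5 mm of lowering (arbitrary cutoff)

print(f"{headcuts.mean():.1%} of the bed flagged as rapidly eroding")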

"If we can predict where and when these headcuts occur, and develop technology that allows us to control them, then we can greatly improve soil resource management," says Bennett.

Such technologies include runoff diversions, grass barriers and vegetated waterways.

The images also revealed with startling clarity the fractal patterns that the simulated storm created in the landscape.

"Fractal organization is one of the most compelling ideas in science," says Bennett."While I always knew that landscapes had fractal characteristics, I never saw it demonstrated so clearly as when I saw these treelike patterns in the images we took of our rill networks.

To study sediment transport processes in rivers and how particles interact with the turbulent flow, Bennett designed a 30-foot by 2-foot flume channel, which was constructed by the UB machine shop.

In one experiment, the researchers fill the channel with sand and water, flatten the bed, and then turn on the centrifugal pump to initiate sediment movement.

"Once the flow reaches a certain velocity, the entire bed erupts into ripples, created by the instability between the fast-moving fluid overlying the slow-moving sediment," Bennett explains.

"The PIV system can provide us with high-quality images and data right at the bed surface while these bedforms are being created," he continues. "By examining the physics of sediment transport in this way, we can develop improved models for flow and transport in rivers, allowing us to better manage our river systems and aquatic ecology."

Bennett hopes to use these flumes and equipment to expand his research on the interactions between vegetation and river function and form. Such interactions are critical to the process of restoring and stabilizing degraded streams, a primary thrust of the National Science Foundation-funded "Ecosystem Restoration Through Interdisciplinary Exchange" graduate training program at UB, in which Bennett participates through research and training.

(Photo: U. Buffalo)

University at Buffalo

LIKE HUMANS, MONKEYS FALL INTO THE 'UNCANNY VALLEY'


Princeton University researchers have come up with a new twist on the mysterious visual phenomenon experienced by humans known as the "uncanny valley." The scientists have found that monkeys sense it too.

The uncanny valley, a phrase coined by a Japanese researcher nearly four decades ago, describes that disquieting feeling that occurs when viewers look at representations designed to be as human-like as possible -- whether computer animations or androids -- but somehow fall short.

Movie-goers may not be familiar with the term, but they understand that it is far easier to love the out-of-proportion cartoon figures in "The Incredibles," for example, than it is to embrace the more realistic-looking characters in "The Polar Express." Viewers, to many a Hollywood director's consternation, are emotionally unsettled by images of artificial humans that look both realistic and unrealistic at the same time.

In an attempt to add to the emerging scientific literature on the subject and answer deeper questions about the evolutionary basis of communication, Princeton University researchers have found that macaque monkeys also fall into the uncanny valley, exhibiting this reaction when looking at computer-generated images of monkeys that are close but less than perfect representations.

"Increased realism does not necessarily lead to increased acceptance," said Asif Ghazanfar, an assistant professor of psychology and the Princeton Neuroscience Institute, who led the research. It is the first such finding in any animal other than human. The paper, co-written by Shawn Steckenfinger, a research specialist in the Princeton's Department of Psychology, appeared in the October Oct. 12 edition of the Proceedings of the National Academy of Sciences.

The work, according to its authors, is significant because it indicates that there is a biological basis for the uncanny valley and supports theories that propose that the brain mechanisms underlying the uncanny valley are evolutionary adaptations. "These data demonstrate that the uncanny valley effect is not unique to humans and that evolutionary hypotheses regarding its origins are tenable," said Ghazanfar.

The uncanny valley hypothesis was introduced by the Japanese roboticist Masahiro Mori in 1970. The "valley" refers to a dip in a graph that charts a human's positive reaction in response to an image on one axis and a robot's human-likeness on another. People like to study other human faces, and they also can enjoy scrutinizing countenances that clearly are not human, such as a doll's or a cartoon figure's. But when an image falls in between -- close to human but clearly not -- it causes a feeling of revulsion.

Experts praised the Princeton report.

"This study makes a significant contribution to existing knowledge of the uncanny valley," said Karl MacDorman, an associate professor in the School of Informatics at Indiana University, who has led important experiments in the fields of android science and computational neuroscience. "The research design is novel, the experiment is carried out with a high degree of rigor, and the results are compelling, important, newsworthy, and support the [hypothesis]."

He believes the results will be of broad interest to scientists and non-scientists, including "ethologists, animal behaviorists, cognitive psychologists of human perception, evolutionary psychologists, primate social cognitive neuroscientists, humanoid roboticists and human character animators."

In the experiments, the monkeys, which normally coo and smack their lips to engage each other, quickly averted their glances and appeared frightened when confronted with the close-to-real images. When shown the less realistic faces and the real faces, however, they viewed them more often and for longer periods.

Despite the widespread acknowledgement of the uncanny valley as a valid phenomenon, there are no clear explanations for it, Ghazanfar said. One theory suggests that it is the outcome of a "disgust response" mechanism that allows humans to avoid disease. Another idea holds that the phenomenon is an indicator of humanity's highly evolved face processing abilities. Some have suggested the corpse-like appearance of some images elicits an innate fear of death. Still others have posited that the response illustrates what is perceived as a threat to human identity.

Ghazanfar said the research is likely to point him in useful directions to further explore these theories.

(Photo: Shawn Steckenfinger)

Princeton University

SELF-SACRIFICE AMONG STRANGERS HAS MORE TO DO WITH NURTURE THAN NATURE


Socially learned behavior and belief are much better candidates than genetics to explain the self-sacrificing behavior we see among strangers in societies, from soldiers to blood donors to those who contribute to food banks. This is the conclusion of a study by Adrian V. Bell and colleagues from the University of California Davis in the Oct. 12 edition of Proceedings of the National Academy of Sciences.

Altruism has long been a subject of interest to evolutionary social scientists. Altruism presents them with a difficult line to argue: behaviors that help unrelated people while being costly to the individual and creating a risk for genetic descendants could not likely be favored by evolution, at least by common evolutionary arguments.

The researchers used a mathematical equation, called the Price equation, that describes the conditions for altruism to evolve. This equation motivated the researchers to compare the genetic and the cultural differentiation between neighboring social groups. Using previously calculated estimates of genetic differences, they turned to the World Values Survey, whose answers reflect culturally shaped beliefs across a large number of countries, as a source of data to compute the cultural differentiation between the same neighboring groups. When the two were compared, they found that culture had a much greater scope for explaining our pro-social behavior than genetics.
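For reference, the Price equation they build on can be written in its standard form as

w̄ Δz̄ = Cov(w, z) + E(w Δz),

where z is the frequency of a trait (here, an altruistic behavior), w is fitness, the covariance term captures selection among individuals or groups, and the expectation term captures transmission within them. For group-structured altruism, the trait can spread only if the between-group component outweighs the within-group cost, which is why the degree of differentiation between neighboring groups, whether genetic or cultural, is exactly the quantity the study compares.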

In applying their results to ancestral populations, the World Values Survey was less useful. But ancient cultural practices, such as exclusion from the marriage market, denial of the fruits of cooperative activities, banishment and execution happen now as they did then. These activities would have exerted strong selection against genes tending toward antisocial behavior, and presumably in favor of genes that predisposed individuals toward being pro-social rather than anti-social. This would result in the gene-culture coevolution of human prosocial propensities.

Bell is currently continuing his research in Tonga, where he is planning through ethnography to estimate statistically what social learning behaviors people have in general that may explain the distribution of cultural beliefs across the Tongan Islands. He is developing a survey instrument that will help capture people's cultural beliefs and measure the effect of migration on the similarities and differences between populations.

(Photo: Zina Deretsky, National Science Foundation)

National Science Foundation

HERBIVORY DISCOVERED IN A SPIDER


A jumping spider from Central America eats mostly plants, according to new research.

Spiders were thought to be strictly predators on animals. The spider, Bagheera kiplingi, was described scientifically in the late 1800s, but its vegetarian tendencies were not observed until the 21st century.

"This is the first spider in the world known to deliberately hunt plant parts. It is also the first found to go after plants as a primary food source," said lead author Christopher Meehan.

Meehan, now a doctoral student in the University of Arizona's department of ecology and evolutionary biology, discovered Bagheera kiplingi's herbivory while he was a student at Villanova University in Pennsylvania.

Of the approximately 40,000 species of spiders known, Bagheera kiplingi is the only species known to be primarily herbivorous. Ironically, the vegetarian spider is named after the panther in Rudyard Kipling's "The Jungle Book."

The spider inhabits several species of acacia shrubs involved in a well-known mutualism between the acacias and several species of ants.

The ants live in hollow spines on the plant and drink nectar from glands at the base of each leaf and eat the specialized leaf tips known as Beltian bodies. In return, the ants fiercely guard the plants against most would-be herbivores.

The leaf-tip structures are named after naturalist Thomas Belt, who published a paper about them in the 1800s.

The B. kiplingi spiders are "cheaters" in the ant-acacia system, stealing and eating both nectar and Beltian bodies without helping to defend the plant, according to the researchers. Although the ants actively patrol the plant for intruders, the spiders' excellent eyesight and agility allow them to avoid the plant's ant bodyguards.

The story of the first known vegetarian spider is also a story of cooperation, rather than competition.

Co-author Eric Olson of Brandeis University had discovered herbivory in B. kiplingi in Costa Rica in 2001. In 2007, Meehan independently observed the same behaviors in spiders in Quintana Roo, Mexico, during a course taught by Villanova professor and study co-author Robert Curry. The two research groups subsequently combined efforts to publish the discovery.

Previously, very few spiders had been seen consuming plants at all. Some spiders had been observed occasionally eating nectar and pollen, although the bulk of their diet was insects and other small animals.

To verify the initial observations of the spiders' herbivory, the researchers documented B. kiplingi feeding behavior using high-definition video recordings. The team identified 140 food items of the Mexican spiders and found that more than 90 percent were Beltian bodies.

For the Costa Rican population of B. kiplingi, only 60 percent of the diet was Beltian bodies. Those spiders were seen with animal prey items more often than were the Mexican B. kiplingi.

Curry said, "What surprised us most about discovering this spider's extraordinary ecology was to find it on the ant-acacias. This well-known mutualism has been studied by tropical ecologists for nearly 50 years, yet the spider's role was not noticed until Olson's discovery in 2001.

"We were lucky to find in Mexico an area where the spider is both exceptionally abundant and even more herbivorous than in Costa Rica," he said.

The team also conducted laboratory analyses of the carbon and nitrogen in B. kiplingi spiders, other local spiders, Beltian bodies, and the acacia-dwelling ants.

Analyzing the different forms of nitrogen and carbon in an animal can indicate its trophic level and its food source.

B. kiplingi spiders were more similar in the nitrogen analyses to the herbivorous acacia-ants than to any of the other spiders sampled, suggesting the ants and the B. kiplingi were on the same level in the food chain. In the carbon analysis, which matches animals to their food source, B. kiplingi spiders and Beltian bodies were almost identical.
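A rough sketch of the kind of inference such isotope data support (the numbers below are illustrative assumptions, not the study's measurements): nitrogen isotope ratios typically rise by roughly 3 to 4 per mil with each step up a food chain, so a consumer's d15N relative to a dietary baseline estimates its trophic position, while d13C changes little between diet and consumer and therefore traces the carbon source.

# Illustrative values only, in per mil; not data from the study.
d15N_beltian_bodies = 2.0        # assumed baseline: the plant-derived food
d15N_spider = 5.5                # hypothetical value for B. kiplingi
ENRICHMENT_PER_LEVEL = 3.4       # commonly used per-trophic-level step for d15N

steps_above_plant = (d15N_spider - d15N_beltian_bodies) / ENRICHMENT_PER_LEVEL
print(f"estimated trophic steps above the plant: {steps_above_plant:.1f}")
# A value near 1 would place the spider alongside the herbivorous
# acacia-ants rather than among predatory spiders, which is the pattern
# the nitrogen analysis reported.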

Collectively, the data show that B. kiplingi spiders, particularly those from Mexico, obtain most of their diet directly or indirectly from the ant-acacia plants.

Meehan and Curry suggest their finding shows that coevolution between an ant and a plant can result in the development of plant structures that may be especially vulnerable to exploitation by third parties that normally focus on completely different kinds of prey.

B. kiplingi also exhibits signs of sociality. The researchers suspect that something about the spider's transition to herbivory has influenced the species' social evolution, a possibility the researchers are continuing to study.

(Photo: Copyright Robert L. Curry)

University of Arizona

GENTLE TOUCH MAY AID MULTIPLE SCLEROSIS PATIENTS

A team of University of Illinois at Chicago physical therapists report this month in the journal Neurorehabilitation and Neural Repair that persons with multiple sclerosis use excessive force when they are lifting objects. In an earlier finding reported in the journal Clinical Neurophysiology, they reported that regaining control and coordination may be as easy as applying a gentle touch to the affected hand from a finger of the opposite hand.

"We studied how this light touch application changes the way people apply force to an object they want to grip," said Alexander Aruin, professor of physical therapy. The study compared eight adults with multiple sclerosis to eight without the disease, gender-matched and of comparable age. "In each case, the grip force required to lift an object decreased," said Aruin.

He found similar results in an earlier study he did of people with arm weakness caused by a stroke.

Why the simple light finger touch application works so well is not fully understood, but Aruin offers a hypothesis.

"It could be due to auxiliary sensory information from the contra-lateral arm," he said. "When we use our second hand and touch the wrist of the target hand, available information to the central nervous system about the hand-object interaction may increase. Without the touch, the information needed to manipulate an object comes only through vision and sensory input from just the target arm and hand."

Aruin and his colleagues tested subjects gripping and lifting a variety of objects that they moved in several different ways, directions and velocities. The gentle finger touch always helped to reduce grip force, making the task easier.

The UIC researcher said he and his colleagues plan to test the approach on those with other neurological and muscular diseases to examine the effects.

"We look forward to developing training and rehabilitation procedures on how to use this," said Aruin. "We know that MS patients are prone to fatigue and muscle weakness. This finding may enable them to perform daily activities more independently to improve their quality of life."

The papers' lead author was Veena Iyengar, a former master's student of Aruin's now at Advocate Lutheran General Hospital in Park Ridge, IL. Other authors were Marcio Santos, a former UIC postdoctoral fellow now at Santa Catarina State University in Brazil, and Michael Ko, a neurologist with Loyola University Chicago's medical center.

University of Illinois

CHEMISTRY TEAM SEEKS TO USE ARTIFICIAL PHOTOSYNTHESIS AND NANOTUBES TO GENERATE HYDROGEN FUEL WITH SUNLIGHT

A team of four chemists at the University of Rochester has begun work on a new kind of system to derive usable hydrogen fuel from water using only sunlight.

The project has caught the attention of the U.S. Department of Energy, which has just given the team nearly $1.7 million to pursue the design.

"Everybody talks about using hydrogen as a super-green fuel, but actually generating that fuel without using some other non-green energy in the process is not easy," says Kara Bren, professor in the Department of Chemistry. "People have used sunlight to derive hydrogen from water before, but the trick is making the whole process efficient enough to be useful."

Bren and the rest of the Rochester team—Professor of Chemistry Richard Eisenberg and Associate Professors of Chemistry Todd Krauss and Patrick Holland—will be investigating artificial photosynthesis, which uses sunlight to carry out chemical processes much as plants do. What makes the Rochester approach different from past attempts to use sunlight to produce hydrogen from water, however, is that the device they are preparing is divided into three "modules" that allow each stage of the process to be manipulated and optimized far more easily than in other methods.

The first module uses visible light to create free electrons. A complex natural molecule called a chromophore that plants use to absorb sunlight will be re-engineered to efficiently generate reducing electrons.

The second module will be a membrane suffused with carbon nanotubes to act as molecular wires so small that they are only one-millionth the thickness of a human hair. To prevent the chromophores from re-absorbing the electrons, the nanotube membrane channels the electrons away from the chromophores and toward the third module.

In the third module, catalysts put the electrons to work forming hydrogen from water. The hydrogen can then be used in fuel cells in cars, homes, or power plants of the future.

By separating the first and third modules with the nanotube membrane, the chemists hope to isolate the process of gathering sunlight from the process of generating hydrogen. This isolation will allow the team to maximize the system's light-harvesting abilities without altering its hydrogen-generation abilities, and vice versa. Bren says this is a distinct advantage over other systems that have integrated designs because in those designs a change that enhances one trait may degrade another unpredictably and unacceptably.

Bren says it may be years before the team has a system that clearly works better than other designs, and even then the system would have to work efficiently enough to be commercially viable. "But if we succeed, we may be able to not only help create a fuel that burns cleanly, but the creation of the fuel itself may be clean."

University of Rochester

Wednesday, October 28, 2009

PHYSICISTS MEASURE ELUSIVE PERSISTENT CURRENT THAT FLOWS FOREVER

0 comentarios

Physicists at Yale University have made the first definitive measurements of “persistent current,” a small but perpetual electric current that flows naturally through tiny rings of metal wire even without an external power source.

The team used nanoscale cantilevers, an entirely novel approach, to indirectly measure the current through changes in the magnetic force it produces as it flows through the ring. “They’re essentially little floppy diving boards with the rings sitting on top,” said team leader Jack Harris, associate professor of physics and applied physics at Yale. The findings appear in the October 9 issue of Science.

The counterintuitive current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. “These are ordinary, non-superconducting metal rings, which we typically think of as resistors,” Harris said. “Yet these currents will flow forever, even in the absence of an applied voltage.”

Although persistent current was first theorized decades ago, it is so faint and sensitive to its environment that physicists were unable to accurately measure it until now. It is not possible to measure the current with a traditional ammeter because it only flows within the tiny metal rings, which are about the same size as the wires used on computer chips.

Past experiments tried to indirectly measure persistent current via the magnetic field it produces (any current passing through a metal wire produces a magnetic field). They used extremely sensitive magnetometers known as superconducting quantum interference devices, or SQUIDs, but the results were inconsistent and even contradictory.

“SQUIDs had long been established as the tool used to measure extremely weak magnetic fields. It was extremely optimistic for us to think that a mechanical device could be more sensitive than a SQUID,” Harris said.

The team used the cantilevers to detect changes in the magnetic field produced by the current as it changed direction in the aluminum rings. This new experimental setup allowed the team to make measurements a full order of magnitude more precise than any previous attempts. They also measured the persistent current over a wider range of temperature, ring size and magnetic field than ever before.

“These measurements could tell us something about how electrons behave in metals,” Harris said, adding that the findings could lead to a better understanding of how qubits, used in quantum computing, are affected by their environment, as well as which metals could potentially be used as superconductors.

(Photo: Jack Harris/Yale University)

Yale University

MOMENTUM INFLUENCES BABY NAME CHOICES, COGNITIVE SCIENTISTS FIND

0 comentarios

Like momentum traders in the stock market, parents today appear to favor names that recently have risen in popularity relative to names that are on the decline, say cognitive science researchers from Indiana University and New York University.

Researchers have long noted that the overall popularity of a baby name exerts a strong influence on parents' preferences -- popular names, such as Robert or Susan, by their sheer ubiquity drive more parents to adopt similar choices. However, a new study by Todd Gureckis, assistant professor of psychology at New York University, and Robert Goldstone, director of the Cognitive Science Program at IU, suggests that the change in a name's popularity over time increasingly influences naming decisions in the United States.

"Parents in the United States are increasingly sensitive to the change in frequency of a name in recent time, such that names that are gaining in popularity are seen as more desirable than those that have fallen in popularity in the recent past," the authors noted. "This bias then becomes a self-fulfilling prophecy: names that are falling continue to fall while names on the rise reach new heights of popularity, in turn influencing a new generation of parents."

The research, which was supported by the National Institute of Mental Health and the National Science Foundation, is relevant to understanding how people's everyday decisions are influenced by aggregate cultural processes.

"Our results give support to the idea that individual naming choices are in a large part determined by the social environment that expecting parents experience," the authors wrote in Topics of Cognitive Science. "Like the stock market, cycles of boom and bust appear to arise out of the interactions of a large set of agents who are continually influencing one another."

The researchers note this pattern is a relatively new phenomenon. In the late 19th and early 20th centuries, an increase in a name's popularity from one year to the next was correlated with a decrease in its future popularity. The changing pattern, the authors suggest, arises from biases in how people estimate the overall desirability of cultural tokens like names: tokens that have recently been outpacing their long-term popularity are seen as better choices than those that appear to be falling out of favor.

The findings were based on a historical record, provided by the U.S. Social Security Administration, of how frequently particular names were given to babies in the United States over the last 127 years.

Existing accounts of cultural evolution suggest that it is primarily the frequency of a token (i.e., a name) in a parent's social environment that should drive aggregate patterns of name choice. However, by sorting through names and watching the way they rise and fall in popularity, the authors noted that many names take surprisingly smooth trajectories through time, such that increasing popularity one year is often associated with increasing popularity the next. Moreover, this trend has become more pronounced over the years.

In order to better quantify this effect, the authors analyzed the probability that a name goes up or down from one year to the next, given that it went up or down in the time period before. They found that around the turn of the last century (1880 to 1905) names tended to fluctuate in overall frequency from one year to the next. A name that increased its relative frequency one year was more likely to decrease rather than increase in frequency the following year. Similarly, decreases in frequency were more likely to be followed by increases than further decreases.

More recently (1981 to 2006), names moved in consistent ways such that a change in popularity in one year was predictive of the same direction of change the following year. Thus, names appear to carry with them a "momentum" that tends to push changes in popularity in the same direction year after year.
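To make the momentum idea concrete, here is a minimal Python sketch of the kind of statistic described above: the fraction of year-to-year changes in a name's relative frequency that continue in the same direction as the previous year's change. The function name and toy data are illustrative assumptions, not the authors' code or the actual SSA dataset.

```python
# A sketch of the "momentum" statistic: how often does a name's frequency
# move in the same direction two years in a row? Values above 0.5 suggest
# momentum; values below 0.5 suggest year-to-year reversals.

def same_direction_probability(freqs_by_name):
    same = total = 0
    for freqs in freqs_by_name.values():
        # year-over-year changes in relative frequency
        changes = [b - a for a, b in zip(freqs, freqs[1:])]
        for prev, curr in zip(changes, changes[1:]):
            if prev == 0 or curr == 0:
                continue  # skip years with no change
            total += 1
            if (prev > 0) == (curr > 0):
                same += 1
    return same / total if total else float("nan")

# Toy example: one name rising steadily, one oscillating year to year.
toy = {
    "Ava":  [0.001, 0.002, 0.004, 0.007, 0.011],
    "Edna": [0.009, 0.007, 0.008, 0.006, 0.007],
}
print(same_direction_probability(toy))  # fraction of consecutive moves in the same direction
```

Applied separately to early-20th-century and recent slices of the name record, a statistic like this is one way to see the shift from reversal-like to momentum-like behavior the authors describe.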

In the paper, the authors develop and test a number of formal models of cultural evolution in order to quantify the sources of bias that influence people's naming decisions. In particular, the authors incorporated well-known aspects of cognitive processing, including the way that novelty and familiarity bias our preferences. The authors found that a model that assumes that names which are outpacing their long-term popularity are preferentially selected better explains the distribution of names over time than do models which leave out this assumption.

The researchers argue that baby names provide a unique opportunity for studying the intersection of individual and group decision making for the following reasons:

-it's an important decision to which parents devote significant time and energy;
-there are extensive historical records, making possible the detailed measurement of these choices and the social context in which those decisions were made;
-certain names (e.g., "Joshua", a popular name in 2007) do not appear to carry more intrinsic economic value than other names (e.g., "Damarion," an uncommon boys' name in the same year); and
-baby names are not subject to the forces of marketing or advertising -- factors that may complicate the analysis of other types of culturally relevant decisions such as fashion or music preferences.

(Photo: Chris Meyer)

Indiana University

SKILLS TESTS LIKE 'CONNECT THE DOTS' MAY BE EARLY ALZHEIMER'S INDICATOR

0 comentarios
A study of mental decline in the years prior to diagnosis of Alzheimer's disease suggests that changing the focus of testing may allow physicians to detect signs of the disease three years earlier.

Current cognitive testing typically focuses on episodic memory, or the ability to remember things like word lists or information from a reading. But scientists at Washington University School of Medicine in St. Louis found that another class of mental abilities known as visuospatial skills begins to deteriorate up to three years prior to diagnosis. These skills are tested with tasks such as connecting the dots or using a guide to build a structure with blocks.

"We may need to rethink what we look for as the earliest signs of mental change associated with Alzheimer's disease," says senior author James Galvin, M.D., a Washington University neurologist who is also on staff at Barnes-Jewish Hospital. "If we can better recognize the first signs of disease, we can start treating patients earlier and hopefully with new treatments we can slow or perhaps even stop their progress into dementia."

The results are published in the October issue of Archives of Neurology.

Galvin and his coauthors analyzed long-term data from volunteers at the Memory and Aging Project at Washington University's Alzheimer's Disease Research Center (ADRC). For three decades, researchers have been regularly conducting extensive testing of volunteers to uncover the factors associated with the normal, healthy retention of mental function in seniors. The new study analyzes data, gathered between 1979 and 2006, on 444 volunteers aged 60 to 101.

Scientists categorized cognitive testing results into a global measure of cognitive abilities as well as three specific types of mental skills: episodic memory, visuospatial skills and working memory, which assesses the ability to manipulate facts from memory, such as repeating a list of numbers backwards.

Declines in episodic memory and working memory became discernible a year before volunteers were diagnosed with Alzheimer's disease. Losses in the composite assessment of cognitive abilities were detectable two years prior to diagnosis, and visuospatial skills began to decline three years before diagnosis. According to Galvin, the losses in visuospatial skills were particularly noticeable if testing tasks were timed.

Researchers also analyzed the data using a new model that not only tracks the speed of decline in a mental ability but also the acceleration of the decline. Episodic memory's decline accelerated more slowly than that of both visuospatial skills and working memory, which declined fastest.
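As a rough illustration of what it means to track both the speed and the acceleration of decline, the sketch below fits a quadratic to made-up longitudinal test scores, so that the linear coefficient captures the rate of change at diagnosis and the quadratic coefficient its acceleration. This is a hypothetical example with invented numbers, not the model or data used in the study.

```python
# Hypothetical illustration: quantify speed and acceleration of decline by
# fitting score = a*t^2 + b*t + c, where t is years relative to diagnosis.
import numpy as np

years = np.array([-5, -4, -3, -2, -1, 0])          # years before diagnosis
visuospatial = np.array([50, 49, 47, 43, 37, 29])   # made-up test scores

a, b, c = np.polyfit(years, visuospatial, 2)         # highest-order term first
print(f"rate of change at diagnosis (points/year): {b:.2f}")   # negative = decline
print(f"acceleration of decline (points/year^2):   {2 * a:.2f}")
```

A steeper (more negative) acceleration term is the quadratic analogue of the faster-accelerating declines the researchers report for visuospatial skills and working memory.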

The new perspective may allow doctors to detect signs of Alzheimer's earlier, but more information will be needed to make a firm diagnosis. To make that possible, researchers at the ADRC are trying to take what they've learned in the new study and correlate it with biomarkers, which are physical changes associated with preclinical Alzheimer's disease. These include such tests as scanning the brain for amyloid plaques or analyzing the levels of proteins in the cerebrospinal fluid.

Amyloid brain plaques, a primary characteristic of Alzheimer's disease, can begin building in patients 10 years or more before clinical symptoms become apparent, Galvin notes.

"The new findings raise the question of what changes are occurring in the brain during the one- to three-year period prior to diagnosis," Galvin says. "Patients have had plaques in their brain for years, and suddenly their cognitive abilities begin to deteriorate. Is a threshold being crossed where brain cell death begins to occur or really starts to pick up speed?"

Galvin and his coauthors also plan to apply their new approach for assessing mental decline to other dementias including Lewy body dementia and the form of dementia associated with Parkinson's disease.

Washington University in St. Louis

RADIO WAVES 'SEE' THROUGH WALLS

0 comentarios

University of Utah engineers showed that a wireless network of radio transmitters can track people moving behind solid walls. The system could help police, firefighters and others nab intruders, and rescue hostages, fire victims and elderly people who fall in their homes. It also might help retail marketing and border control.

"By showing the locations of people within a building during hostage situations, fires or other emergencies, radio tomography can help law enforcement and emergency responders to know where they should focus their attention," Joey Wilson and Neal Patwari wrote in one of two new studies of the method.

Both researchers are in the university's Department of Electrical and Computer Engineering - Patwari as an assistant professor and Wilson as a doctoral student.

Their method uses radio tomographic imaging (RTI), which can "see," locate and track moving people or objects in an area surrounded by inexpensive radio transceivers that send and receive signals. People don't need to wear radio-transmitting ID tags.

One of the studies - which outlines the method and tests it in an indoor atrium and a grassy area with trees - is awaiting publication soon in IEEE Transactions on Mobile Computing, a journal of the Institute of Electrical and Electronics Engineers.

The study involved placing a wireless network of 28 inexpensive radio transceivers - called nodes - around a square-shaped portion of the atrium and a similar part of the lawn. In the atrium, each side of the square was almost 14 feet long and had eight nodes spaced 2 feet apart. On the lawn, the square was about 21 feet on each side and nodes were 3 feet apart. The transceivers were placed on 4-foot-tall stands made of plastic pipe so they would make measurements at human torso level.

Radio signal strengths between all nodes were measured as a person walked in each area. Processed radio signal strength data were displayed on a computer screen, producing a bird's-eye-view, blob-like image of the person.

A second study detailed a test of an improved method that allows "tracking through walls." That study has been placed on arXiv.org, an online archive for preprints of scientific papers. The study details how variations in radio signal strength within a wireless network of 34 nodes allowed tracking of moving people behind a brick wall.

The method was tested around an addition to Patwari's Salt Lake City home. Variations in radio waves were measured as Wilson walked around inside. The system successfully tracked Wilson's location to within 3 feet.

The wireless system used in the experiments was not a Wi-Fi network like those that link home computers, printers and other devices. Patwari says the system is known as a Zigbee network - the kind of network often used by wireless home thermostats and other home or factory automation.

Wilson demonstrated radio tomographic imaging during a mobile communication conference last year, and won the MobiCom 2008 Student Research Demo Competition. The researchers now have a patent pending on the method.

"I have aspirations to commercialize this," says Wilson, who has founded a spinoff company named Xandem Technology LLC in Salt Lake City.

Radio tomographic imaging is different from, and much less expensive than, radar, in which radio signals are bounced off targets and the returning echoes or reflections reveal the target's location and speed. RTI instead measures "shadows" in radio waves created when they pass through a moving person or object.

RTI measures radio signal strengths on numerous paths as the radio waves pass through a person or other target. In that sense, it is quite similar to medical CT (computerized tomographic) scanning, which uses X-rays to make pictures of the human body, and seismic imaging, in which waves from earthquakes or explosions are used to look for oil, minerals and rock structures underground. In each method, measurements of the radio waves, X-rays or seismic waves are made along many different paths through the target, and those measurements are used to construct a computer image.

In their indoor, outdoor and through-the-wall experiments, Wilson and Patwari obtained radio signal strength measurements from all the transceivers - first when the monitored area was empty and then when a person walked through it. They developed math formulas and used them in a computer program to convert weaker or "attenuated" signals - which occur when someone creates "shadows" by walking through the radio signals - into a blob-like, bird's-eye-view image of that person walking.
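The general reconstruction idea, turning per-link attenuation into a pixel image, can be sketched as a regularized least-squares problem: each link's drop in signal strength is modeled as the sum of attenuation in the pixels its line-of-sight path crosses. The grid size, path weighting and regularization below are simplifying assumptions for illustration; they are not Wilson and Patwari's actual formulas or code.

```python
# A minimal radio-tomographic-imaging sketch: solve for a per-pixel
# attenuation "shadow" image from changes in received signal strength.
import numpy as np

def path_weights(tx, rx, grid=10, extent=14.0):
    """Mark the pixels that the straight tx-rx path passes through."""
    w = np.zeros((grid, grid))
    for t in np.linspace(0.0, 1.0, 200):
        x = tx[0] + t * (rx[0] - tx[0])
        y = tx[1] + t * (rx[1] - tx[1])
        i = int(min(grid - 1, x / extent * grid))
        j = int(min(grid - 1, y / extent * grid))
        w[i, j] = 1.0
    return w.ravel()

def reconstruct(links, delta_rss, grid=10, lam=1.0):
    """links: list of (tx_xy, rx_xy) node pairs; delta_rss: RSS change per link."""
    A = np.vstack([path_weights(tx, rx, grid) for tx, rx in links])
    y = np.asarray(delta_rss, dtype=float)
    # Tikhonov-regularized least squares: (A^T A + lam*I) x = A^T y
    x = np.linalg.solve(A.T @ A + lam * np.eye(grid * grid), A.T @ y)
    return x.reshape(grid, grid)   # bird's-eye attenuation image
```

The peak of the reconstructed image corresponds to the blob-like estimate of where the person is standing; more links and finer grids sharpen it.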

RTI has advantages. "RF [radio frequency] signals can travel through obstructions such as walls, trees and smoke, while optical and infrared imaging systems cannot," the engineers wrote. "RF imaging will also work in the dark, where video cameras will fail."

Even "where video cameras could work, privacy concerns may prevent their deployment," Wilson and Patwari wrote. "An RTI system provides current images of the location of people and their movements, but cannot be used to identify a person."

Would bombardment by radio waves pose a hazard? Wilson says the devices "transmit radio waves at powers 500 times less than a typical cell phone."

"And you don't hold it against your head," Patwari adds.

Patwari says the system still needs improvements, "but the plan is that when there is a hostage situation, for example, or some kind of event that makes it dangerous for police or firefighters to enter a building, then instead of entering the building first, they would throw dozens of these radios around the building and immediately they would be able to see a computer image showing where people are moving inside the building."

"They are reusable and you can pick them up afterwards," he says.

The technique cannot distinguish good guys from bad guys, but at least will tell emergency personnel where people are located, he adds.

Patwari says radio tomography probably can be improved to detect people in a burning building, but also would "see" moving flames. "You may be able to look at the image and say this is a spreading fire and these are people," says Patwari.

Wilson believes radio imaging also could be used in "a smarter alarm system. … What if you put radios in your home [built into walls or plugged into outlets] and used tomography to locate people in your home. Not only would your security system be triggered by an intrusion, but you could track the intruder online or over your phone."

Radio tomography even might be used to study where people spend time in stores.

"Does a certain marketing display get people to stop or does it not?" Wilson asks. "I'm thinking of retail stores or grocery stores. They spend a lot of money to determine, 'Where should we put the cereal, where should we put the milk, where should we put the bread?' If I can offer that information using radio tomographic imaging, it's a big deal."

Radio image tracking might help some elderly people live at home. "The elderly want to stay in their homes but don't want a camera in their face all day," Wilson says. "With radio tomographic imaging, you could track where they are in their home, did they get up at the right time, did they go to the medicine cabinet, have they not moved today?"

Wilson says a computer monitoring the radio images might detect an elderly person falling down the stairs based on the unusually fast movement.

He says radio tracking also might be a relatively inexpensive method of border security, and would work in dark and fog unlike cameras.

Another possible use: automatic control of lighting, heating and air conditioning in buildings, says Wilson. Radio tracking might even control sound systems so that the best sound is aimed where people are located, as well as noise cancellation systems which could be aimed automatically at noise sources, Patwari says.

(Photo: Sarang Joshi and Joey Wilson, University of Utah)

University of Utah

PLANT FOSSILS GIVE FIRST REAL PICTURE OF EARLIEST NEOTROPICAL RAINFORESTS

0 comentarios
A team of researchers including a University of Florida paleontologist has used a rich cache of plant fossils discovered in Colombia to provide the first reliable evidence of how Neotropical rainforests looked 58 million years ago.

Researchers from the Smithsonian Institution and UF, among others, found that many of the dominant plant families existing in today’s Neotropical rainforests — including legumes, palms, avocado and banana — have maintained their ecological dominance despite major changes in South America’s climate and geological structure.

The study, which appears in the online edition of the Proceedings of the National Academy of Sciences, examined more than 2,000 megafossil specimens, some nearly 10 feet long, from the Cerrejón Formation in northern Colombia. The fossils are from the Paleocene epoch, which occurred in the 5- to 7-million-year period following the massive extinction event responsible for the demise of the dinosaurs.

“Neotropical rainforests have an almost nonexistent fossil record,” said study co-author Fabiany Herrera, a graduate student at the Florida Museum of Natural History on the UF campus. “These specimens allow us to actually test hypotheses about their origins for the first time ever.”

Herrera said the new specimens, discovered in 2003, also provide information for future studies that promise to provide an even stronger understanding of the plants that formed the earliest Neotropical communities.

Many previous assumptions and hypotheses about the earliest rainforests were based on studies of pollen fossils, which do not provide information about climate, forest structure, leaf morphology or insect herbivory.

The new study provides evidence Neotropical rainforests were warmer and wetter in the late Paleocene than today but were composed of the same plant families that now thrive in rainforests. “We have the fossils to prove this,” Herrera said. “It is also intriguing that while the Cerrejón rainforest shows many of the characteristics of modern equivalents, plant diversity is lower.”

The site, one of the world’s largest open-pit coal mines, also yielded the fossil for the giant snake known as Titanoboa, described by UF scientists earlier this year.

“These new plant fossils show us that the forest during the time of Titanoboa, 58 million years ago, was similar in many ways to that of today,” said Florida Museum vertebrate paleontologist Jonathan Bloch, who described Titanoboa but was not part of the rainforest study. “Like Titanoboa, which is clearly related to living boas and anacondas, the ancient forest of northern Colombia had similar families of plants as we see today in that ecosystem. The foundations of the Neotropical rainforests were there 58 million years ago.”

Megafossils found at the Cerrejón site made it possible to use leaf structure to identify specimens down to the genus level. This resolution allowed the identification of plant genera that still exist in Neotropical rainforests. With pollen fossils, specimens can be categorized only to the family level.

Researchers were surprised by the relative lack of diversity found in the Paleocene rainforest, Herrera said. Statistical analyses showed that the plant communities found in the Cerrejón Formation were 60 percent to 80 percent less diverse than those of modern Neotropical rainforests. Evidence of herbivory also showed a low diversity level among insects.

The study’s authors say the relative lack of diversity indicates either the beginning of rainforest species diversification or the recovery of existing species from the Cretaceous extinction event.

The researchers estimate the Paleocene rainforest received about 126 inches of rainfall annually and had an average annual temperature greater than 86 degrees Fahrenheit. The Titanoboa study, which used different methods, estimated an average temperature between 89 and 91 degrees. Today the region’s temperatures average about 81 degrees.

Herrera is now comparing fossils from the Cerrejón site to specimens from other Paleocene sites in Colombia to see how far the early rainforest extended geographically. He is also examining fossils from a Cretaceous site to determine differences in composition before and after the extinction event.

University of Florida

IBEX EXPLORES GALACTIC FRONTIER, RELEASES FIRST-EVER ALL-SKY MAP

0 comentarios

NASA's Interstellar Boundary Explorer, or IBEX, spacecraft has made it possible for scientists to construct the first comprehensive sky map of our solar system and its location in the Milky Way galaxy. The new view will change the way researchers view and study the interaction between our galaxy and sun.

The sky map was produced with data that two detectors on the spacecraft collected during six months of observations. The detectors measured and counted particles scientists refer to as energetic neutral atoms.

The energetic neutral atoms are created in an area of our solar system known as the interstellar boundary region. This region is where charged particles from the sun, called the solar wind, flow outward far beyond the orbits of the planets and collide with material between stars. The energetic neutral atoms travel inward toward the sun from interstellar space at velocities ranging from 100,000 mph to more than 2.4 million mph. This interstellar boundary emits no light that can be collected by conventional telescopes.

The new map reveals the region that separates the nearest reaches of our galaxy, called the local interstellar medium, from our heliosphere -- a protective bubble that shields and protects our solar system from most of the dangerous cosmic radiation traveling through space.

"For the first time, we're sticking our heads out of the sun's atmosphere and beginning to really understand our place in the galaxy," said David J. McComas, IBEX principal investigator and assistant vice president of the Space Science and Engineering Division at Southwest Research Institute in San Antonio. "The IBEX results are truly remarkable, with a narrow ribbon of bright details or emissions not resembling any of the current theoretical models of this region."

NASA released the sky map image Oct. 15 in conjunction with publication of the findings in the journal Science. The IBEX data were complemented and extended by information collected using an imaging instrument sensor on NASA's Cassini spacecraft. Cassini has been observing Saturn, its moons and rings since the spacecraft entered the planet's orbit in 2004.

The IBEX sky maps also put observations from NASA's Voyager spacecraft into context. The twin Voyager spacecraft, launched in 1977, traveled to the outer solar system to explore Jupiter, Saturn, Uranus and Neptune. In 2007, Voyager 2 followed Voyager 1 into the interstellar boundary. Both spacecraft are now in the midst of this region where the energetic neutral atoms originate. However, the IBEX results show a ribbon of bright emissions undetected by the two Voyagers.

"The Voyagers are providing ground truth, but they're missing the most exciting region," said Eric Christian, the IBEX deputy mission scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "It's like having two weather stations that miss the big storm that runs between them."

(Photo: NASA/Goddard Space Flight Center)

NASA

Tuesday, October 27, 2009

QUANTUM COMPUTER CHIPS NOW ONE STEP CLOSER TO REALITY

0 comentarios

In the quest for smaller, faster computer chips, researchers are increasingly turning to quantum mechanics -- the exotic physics of the small.

The problem: the manufacturing techniques required to make quantum devices have been equally exotic.

That is, until now.

Researchers at Ohio State University have discovered a way to make quantum devices using technology common to the chip-making industry today.

This work might one day enable faster, low-power computer chips. It could also lead to high-resolution cameras for security and public safety, and cameras that provide clear vision through bad weather.

Paul Berger, professor of electrical and computer engineering and professor of physics at Ohio State University, and his colleagues report their findings in an upcoming issue of IEEE Electron Device Letters.

The team fabricated a device called a tunneling diode using the most common chip-making technique, called chemical vapor deposition.

“We wanted to do this using only the tools found in the typical chip-maker’s toolbox,” Berger said. “Here we have a technique that manufacturers could potentially use to fabricate quantum devices directly on a silicon chip, side-by-side with their regular circuits and switches.”

The quantum device in question is a resonant interband tunneling diode (RITD) -- a device that enables large amounts of current to be regulated through a circuit, but at very low voltages. That means that such devices run on very little power.

RITDs have been difficult to manufacture because they contain dopants -- chemical elements -- that don’t easily fit within a silicon crystal.

Atoms of the RITD dopants antimony or phosphorus, for example, are large compared to atoms of silicon. Because they don’t fit into the natural openings inside a silicon crystal, the dopants tend to collect on the surface of a chip.

“It’s like when you’re playing Tetris and you have a big block raining down, and only a small square to fit it in. The block has to sit on top,” Berger said. “When you’re building up layers of silicon, these dopants don’t readily fit in. Eventually, they clump together on top of the chip.”

In the past, researchers have tried adding the dopants while growing the silicon wafer one crystal layer at a time -- using a slow and expensive process called molecular beam epitaxy, a method which is challenging for high-volume manufacturing. That process also creates too many defects within the silicon.

Berger discovered that RITD dopants could be added during chemical vapor deposition, in which a gas carries the chemical elements to the surface of a wafer many layers at a time. The key was determining the right reactor conditions to deliver the dopants to the silicon, he found.

“One key is hydrogen,” he said. “It binds to the silicon surface and keeps the dopants from clumping. So you don’t have to grow chips at 320 degrees Celsius [approximately 600 degrees Fahrenheit] like you do when using molecular beam epitaxy. You can actually grow them at a higher temperature like 600 degrees Celsius [more than 1100 degrees Fahrenheit] at a lower cost, and with fewer crystal defects.”

Tunneling diodes are so named because they exploit a quantum mechanical effect known as tunneling, which lets electrons pass through thin barriers unhindered.

In theory, interband tunneling diodes could form very dense, very efficient micro-circuits in computer chips. A large amount of data could be stored in a small area on a chip with very little energy required.

Researchers judge the usefulness of tunneling diodes by the abrupt change in the current densities they carry, a characteristic known as “peak-to-valley ratio.” Different ratios are appropriate for different kinds of devices. Logic circuits such as those on a computer chip are best suited by a ratio of about 2.

The RITDs that Berger’s team fabricated had a ratio of 1.85. “We’re close, and I’m sure we can do better,” he said.
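For readers unfamiliar with this figure of merit, the peak-to-valley ratio is simply the current at the diode's first local maximum divided by the current at the valley that follows it in a measured current-voltage sweep. The sketch below shows that calculation on made-up numbers chosen to give a ratio near the 1.85 reported above; it is not the team's measurement data.

```python
# Reading a peak-to-valley current ratio off a (made-up) tunneling-diode I-V sweep.
import numpy as np

voltage = np.linspace(0.0, 0.6, 13)   # volts
current = np.array([0.0, 0.8, 1.6, 2.2, 2.6, 2.4, 1.9,
                    1.5, 1.4, 1.6, 2.0, 2.6, 3.3])      # mA

# peak: first local maximum; valley: the minimum current that follows it
peak_idx = next(i for i in range(1, len(current) - 1)
                if current[i] > current[i - 1] and current[i] > current[i + 1])
valley_idx = peak_idx + int(np.argmin(current[peak_idx:]))

pvr = current[peak_idx] / current[valley_idx]
print(f"peak-to-valley ratio = {pvr:.2f}")   # about 1.86 for these made-up values
```

The region between the peak and the valley is the negative-differential-resistance region that makes tunneling diodes useful for dense, low-power logic and memory circuits.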

He envisions his RITDs being used for ultra-low-power computer chips operating with small voltages and producing less wasted heat.

“Chip makers today are having great difficulty boosting performance in each generation, so they pack chips with more and more circuitry, and end up generating a lot of heat,” Berger said. “That’s why a laptop computer is often too hot to actually sit atop your lap. Soon, their heat output will rival that of a nuclear reactor per unit volume.”

“That’s why moving to quantum devices will be a game-changer.”

RITDs could form high-resolution detectors for imaging devices called focal plane arrays. These arrays operate at wavelengths beyond the human eye and can permit detection of concealed weapons and improvised explosive devices. They can also provide vision through rain, snow, fog, and even mild dust storms, for improved airplane and automobile safety, Berger said. Medical imaging of cancerous tumors is another potential application.

(Photo: OSU)

Ohio State University
