Friday, December 31, 2010

Why Some Plants Flower in Spring or Autumn and Some in Summer

A team of researchers from Warwick have isolated a gene responsible for regulating the expression of CONSTANS, an important inducer of flowering, in Arabidopsis.

'Being able to understand and ultimately control seasonal flowering will enable more predictable flowering, better scheduling and reduced wastage of crops', explained Dr Jackson.

Whilst the relationship between CONSTANS and flowering time in response to day length is well established, the mechanism controlling the expression of CONSTANS is still not fully understood.

The scientists presented their work at the Society for Experimental Biology Annual Meeting in Prague.

Many plants control when they flower to coincide with particular seasons by responding to the length of the day, a process known as photoperiodism. A flowering mutant of Arabidopsis, which had an altered response to photoperiod, was used in the study led by Dr Stephen Jackson.

In the study funded by the BBSRC, the team identified the defective gene in the mutant plant that caused its abnormal flowering time.

They then cloned a working version of the gene, known as DAY NEUTRAL FLOWERING (DNF), from a normal Arabidopsis plant and introduced it into the mutant plant to restore its normal flowering response to day length.
The role of DNF in normal plant flowering is to regulate the CONSTANS gene. CONSTANS is activated only in the light and the plant is triggered to flower when CONSTANS levels rise above a certain threshold level during the daytime.

In normal plants, DNF represses the levels of CONSTANS until the day length is long enough and conditions are favourable for the survival of their seedlings. In mutant plants without an active DNF gene, CONSTANS is not repressed and they are able to flower earlier in the year, when days are still short.
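
The gating logic described in these two paragraphs can be made concrete with a toy simulation. This is only a minimal sketch of the idea -- the accumulation rate, repression strength, decay rate and threshold below are invented for illustration and are not the researchers' model:

```python
# Toy sketch of the day-length gating described above (NOT the authors' model).
# All numbers -- accumulation rate, repression strength, decay rate, threshold --
# are invented purely to illustrate the logic: DNF represses CONSTANS, and
# flowering is triggered only if daytime CONSTANS exceeds a threshold.

def flowers(day_length_hours, dnf_active=True, threshold=3.0):
    """Return True if the daytime CONSTANS level ever exceeds the threshold."""
    constans = 0.0
    for hour in range(24):
        if hour < day_length_hours:                  # CONSTANS is active only in the light
            repression = 0.8 if dnf_active else 0.0  # DNF represses CONSTANS in normal plants
            constans += 1.0 - repression
            if constans > threshold:
                return True
        else:
            constans = max(0.0, constans - 0.5)      # decay in darkness
    return False

for hours in (8, 12, 16):
    print(f"{hours} h of light: wild type flowers={flowers(hours, True)}, "
          f"dnf mutant flowers={flowers(hours, False)}")
```

In this toy run the dnf mutant crosses the threshold even on an 8-hour day, while the wild type does so only on a 16-hour day, which is the qualitative behaviour described above.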

The DNF gene has not yet been identified in species other than Arabidopsis, but the scientists believe their ongoing work may prove to have wider significance for other species.
Scientists can override complex pathways that control flowering by artificially inducing or inhibiting key flowering genes such as DNF and CONSTANS. This can already be done in the laboratory by spraying an 'inducing agent' onto plants, stimulating them to flower early.

This could be used to extend the length of the harvesting season or to co-ordinate flowering or fruit production to a specific time. Growers already regulate the flowering of a few plants such as Chrysanthemum and Poinsettia, the latter specifically for Christmas and Easter.

Unravelling the complex pathways that control plant flowering will help scientists to understand and influence flowering patterns more effectively and in many different species.


In winter or early spring, Arabidopsis plants without an active DNF gene are already flowering (right). Those with the DNF gene will delay flowering until later in the year when days are longer and conditions are more favorable for survival of their seedlings (left). (Credit: Dr Steve Jackson)

Thursday, December 30, 2010

Bacteria Provide Example of One of Nature's First Immune Systems, Research Shows

The findings of Thomas Wood, professor in the Artie McFerrin Department of Chemical Engineering at Texas A&M University, appear in Nature Communications, a multidisciplinary publication dedicated to research in all areas of the biological, physical and chemical sciences. They shed light on how bacteria have, over millions of years, developed resistance to antibiotics by co-opting the DNA of their natural enemies -- viruses.

The battle between bacteria and bacteria-eating viruses, Wood explains, has been going on for millions of years, with viruses attempting to replicate themselves by -- in one approach -- invading bacterial cells and integrating themselves into the chromosomes of the bacteria. When this happens, the bacterium makes a copy of its chromosome that now includes the viral DNA. The virus then can choose at a later time to replicate itself, killing the bacterium -- similar to a ticking time bomb, Wood says.

However, things can go radically wrong for the virus because of random but abundant mutations that occur within the chromosome of the bacterium. Having already integrated itself into the bacterium's chromosome, the virus is subject to mutation as well, and some of these mutations, Wood explains, render the virus unable to replicate and kill the bacterium.

With this new diverse blend of genetic material, Wood says, a bacterium not only overcomes the virus' lethal intentions but also flourishes at a greater rate than similar bacteria that have not incorporated viral DNA.
"Over millions of years, this virus becomes a normal part of the bacterium," Wood says. "It brings in new tricks, new genes, new proteins, new enzymes, new things that it can do. The bacterium learns how to do things from this.
"What we have found is that with this new viral DNA that has been trapped over millions of years in the chromosome, the cell has created a new immune system," Wood notes. "It has developed new proteins that have enabled it to resists antibiotics and other harmful things that attempt to oxidize cells, such as hydrogen peroxide. These cells that have the new viral set of tricks don't die or don't die as rapidly."

Understanding the significance of viral DNA to bacteria required Wood's research team to delete all of the viral DNA on the chromosome of a bacterium, in this case bacteria from a strain of E. coli. Wood's team, led by postdoctoral researcher Xiaoxue Wang, used what in a sense could be described as "enzymatic scissors" to "cut out" the nine viral patches, which amounted to precisely removing 166,000 nucleotides. Once the viral patches were successfully removed, the team examined how the bacterium cell changed. What they found was a dramatically increased sensitivity to antibiotics by the bacterium.

While Wood studied this effect in E. coli bacteria, he says similar processes have taken place on a massive, widespread scale, noting that viral DNA can be found in nearly all bacteria, with some strains possessing as much as 20 percent viral DNA within their chromosome.

"To put this into perspective, for some bacteria, one-fifth of their chromosome came from their enemy, and until our study, people had largely neglected to study that 20 percent of the chromosome," Wood says. "This viral DNA had been believed to be silent and unimportant, not having much impact on the cell.
"Our study is the first to show that we need to look at all bacteria and look at their old viral particles to see how they are affecting the bacteria's current ability to withstand things like antibiotics. If we can figure out how the cells are more resistant to antibiotics because of this additional DNA, we can perhaps make new, effective antibiotics."


Single gram-negative Escherichia coli bacterium. Studying how bacteria incorporate foreign DNA from invading viruses into their own regulatory processes, Thomas Wood, professor in the Artie McFerrin Department of Chemical Engineering at Texas A&M University, is uncovering the secrets of one of nature's most primitive immune systems. (Credit: Janice Haney Carr)

Wednesday, December 29, 2010

When the Black Hole Was Born: Astronomers Identify the Epoch of the First Fast Growth of Black Holes

Now a team of astronomers from Tel Aviv University, including Prof. Hagai Netzer and his research student Benny Trakhtenbrot, has determined that the era of first fast growth of the most massive black holes occurred when the universe was only about 1.2 billion years old -- not two to four billion years old, as was previously believed -- and they're growing at a very fast rate.
The results will be reported in a new paper soon to appear in The Astrophysical Journal.

The oldest are growing the fastest
The new research is based on observations with some of the largest ground-based telescopes in the world: "Gemini North" on top of Mauna Kea in Hawaii, and the "Very Large Telescope Array" on Cerro Paranal in Chile. The data obtained with the advanced instrumentation on these telescopes show that the black holes that were active when the universe was 1.2 billion years old are about ten times smaller than the most massive black holes that are seen at later times. However, they are growing much faster.

The measured rate of growth allowed the researchers to estimate what happened to these objects at much earlier as well as much later times. The team found that the very first black holes, those that started the entire growth process when the universe was only several hundred million years old, had masses of only 100-1000 times the mass of the sun. Such black holes may be related to the very first stars in the universe. They also found that the subsequent growth period of the observed sources, after the first 1.2 billion years, lasted only 100-200 million years.

The new study is the culmination of a seven-year project at Tel Aviv University designed to follow the evolution of the most massive black holes and compare them with the evolution of the galaxies in which such objects reside.

Other researchers on the project include Prof. Ohad Shemmer of the University of North Texas, who took part in the earlier stage of the project as a Ph.D. student at Tel Aviv University, and Prof. Paulina Lira of the University of Chile.



Illustration of a black hole and its surrounding disk. (Credit: NASA)

Tuesday, December 28, 2010

The finding, by National Center for Atmospheric Research (NCAR) scientist Jasper Kok, has implications for understanding future climate change because dust plays a significant role in controlling the amount of solar energy in the atmosphere. Depending on their size and other characteristics, some dust particles reflect solar energy and cool the planet, while others trap energy as heat.

"As small as they are, conglomerates of dust particles in soils behave the same way on impact as a glass dropped on a kitchen floor," Kok says. "Knowing this pattern can help us put together a clearer picture of what our future climate will look like."

The study may also improve the accuracy of weather forecasting, especially in dust-prone regions. Dust particles affect clouds and precipitation, as well as temperatures.

The research was supported by the National Science Foundation, which sponsors NCAR.

Shattered soil
Kok's research focused on a type of airborne particle known as mineral dust. These particles are usually emitted when grains of sand are blown into soil, shattering dirt and sending fragments into the air. The fragments can be as large as about 50 microns in diameter, or about the thickness of a fine strand of human hair.

The smallest particles, which are classified as clay and are as tiny as 2 microns in diameter, remain in the atmosphere for about a week, circling much of the globe and exerting a cooling influence by reflecting heat from the Sun back into space. Larger particles, classified as silt, fall out of the atmosphere after a few days. The larger the particle, the more it will tend to have a heating effect on the atmosphere.

Kok's research indicates that the ratio of silt particles to clay particles is two to eight times greater than represented in climate models.

Since climate scientists carefully calibrate the models to simulate the actual number of clay particles in the atmosphere, the paper suggests that models most likely err when it comes to the number of silt particles. Most of these larger particles swirl in the atmosphere within about 1,000 miles of desert regions, so adjusting their quantity in computer models should generate better projections of future climate in desert regions, such as the southwestern United States and northern Africa.

Additional research will be needed to determine whether future temperatures in those regions will increase more or less than currently indicated by computer models.

The study results also suggest that marine ecosystems, which draw down carbon dioxide from the atmosphere, may receive substantially more iron from airborne particles than previously estimated. The iron enhances biological activity, benefiting ocean food chains, including plants that take up carbon during photosynthesis.

In addition to influencing the amount of solar heat in the atmosphere, dust particles also get deposited on mountain snowpacks, where they absorb heat and accelerate melt.

Glass and dust: Common fracture patterns
Physicists have long known that certain brittle objects, such as glass or rocks, and even atomic nuclei, fracture in predictable patterns. The resulting fragments follow a certain range of sizes, with a predictable distribution of small, medium, and large pieces. Scientists refer to this type of pattern as scale invariance or self-similarity.

Physicists have devised mathematical formulas for the process by which cracks propagate in predictable ways as a brittle object breaks. Kok theorized that it would be possible to use these formulas to estimate the range of dust particle sizes. He turned to a 1983 study by Guillaume d'Almeida and Lothar Schütz of the Institute for Meteorology at the University of Mainz in Germany that measured the particle size distribution of arid soil.

By applying the formulas for fracture patterns of brittle objects to the soil measurements, Kok determined the size distribution of emitted dust particles. To his surprise, the formulas described measurements of dust particle sizes almost exactly.
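
As a loose illustration of what scale invariance (self-similarity) means for a fragment-size distribution -- and emphatically not a reproduction of Kok's published dust-emission expression -- consider a simple power law, where rescaling all sizes leaves the shape of the distribution unchanged:

```python
# Generic illustration of a scale-invariant (self-similar) fragment-size
# distribution; the exponent is arbitrary and this is NOT Kok's formula.
import numpy as np

sizes_um = np.logspace(-1, np.log10(50), 200)   # 0.1 to 50 microns
exponent = 2.0                                   # illustrative power-law exponent
n_larger = sizes_um ** (-exponent)               # N(>D) proportional to D**-exponent

# Self-similarity: rescaling every size by the same factor (here 2x) changes the
# counts by a constant factor only -- the shape of the distribution is unchanged.
rescaled = (2.0 * sizes_um) ** (-exponent)
print(np.allclose(rescaled / n_larger, rescaled[0] / n_larger[0]))   # True
```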

"The idea that all these objects shatter in the same way is a beautiful thing, actually," Kok says. "It's nature's way of creating order in chaos."




Dust particles in the atmosphere range from about 0.1 microns to 50 microns in diameter (microns are also known as micrometers, abbreviated as µm). The size of dust particles determines how they affect climate and weather, influencing the amount of solar energy in the global atmosphere as well as the formation of clouds and precipitation in more dust-prone regions. The NASA satellite image in this illustration shows a 1992 dust storm over the Red Sea and Saudi Arabia. (Credit: Copyright UCAR)

Sunday, December 26, 2010

Ever-Sharp Urchin Teeth May Yield Tools That Never Need Honing

The rock-boring behavior is astonishing, scientists agree, but what is truly remarkable is that, despite constant grinding and scraping on stone, urchin teeth never, ever get dull. The secret of their ever-sharp qualities has puzzled scientists for decades, but now a new report by scientists from the University of Wisconsin-Madison and their colleagues has peeled back the toothy mystery.
Writing in the journal Advanced Functional Materials, a team led by UW-Madison professor of physics Pupa Gilbert describes the self-sharpening mechanism used by the California purple sea urchin to keep a razor-sharp edge on its choppers.

The urchin's self-sharpening trick, notes Gilbert, is something that could be mimicked by humans to make tools that never need honing.

"The sea urchin tooth is complicated in its design. It is one of the very few structures in nature that self-sharpen," says Gilbert, explaining that the sea urchin tooth, which is always growing, is a biomineral mosaic composed of calcite crystals with two forms -- plates and fibers -- arranged crosswise and cemented together with super-hard calcite nanocement. Between the crystals are layers of organic materials that are not as sturdy as the calcite crystals.
"The organic layers are the weak links in the chain," Gilbert explains. "There are breaking points at predetermined locations built into the teeth. It is a concept similar to perforated paper in the sense that the material breaks at these predetermined weak spots."

On the surface, the crystalline structure of sea urchin teeth looks different from other crystals found in nature. It lacks the obvious facets characteristic of familiar crystals, but at the very deepest levels the properties of crystals are evident in the orderly arrangement of the atoms that make up the biomineral mosaic teeth of the sea urchin.

To delve into the fundamental nature of the crystals that form sea urchin teeth, Gilbert and her colleagues used a variety of techniques from the materials scientist's toolbox. These include microscopy methods that depend on X-rays to illuminate how nanocrystals are arranged in teeth to make the sea urchins capable of grinding rock. Gilbert and her colleagues used these techniques to deduce how the crystals are organized and melded into a tough and durable biomineral.

Knowing the secret of the ever-sharp sea urchin tooth, says Gilbert, could one day have practical applications for human toolmakers. "Now that we know how it works, the knowledge could be used to develop methods to fabricate tools that could actually sharpen themselves with use," notes Gilbert. "The mechanism used by the urchin is the key. By shaping the object appropriately and using the same strategy the urchin employs, a tool with a self-sharpening edge could, in theory, be created."

The new research was supported by grants from the U.S. Department of Energy and the National Science Foundation. In addition to Gilbert, researchers from the University of California, Berkeley; Argonne National Laboratory; the Weizmann Institute of Science; and the Lawrence Berkeley National Laboratory contributed to the report.


Sea urchin teeth are pictured in situ. New research by Pupa Gilbert, a physics professor at the University of Wisconsin-Madison, and her colleagues reveals how the sea urchin's teeth are always sharp, despite constant grinding and scraping to create the nooks that protect the marine animal from predators and crashing waves. (Credit: Photo courtesy of Pupa Gilbert)

Big Quakes Trigger Small Quakes.

An earthquake in Alaska could trigger one near you, even if you're not in an earthquake-prone area, new research shows. Seismologists are now finding earthquakes in some unexpected places.

City Hall in Park City, Utah, is undergoing a $10 million seismic update. Park City is near the Wasatch Fault, an area overdue for an earthquake, so leaders have been concerned about earthquake-proofing the building for years. "It's something we absolutely expect," said Ron Ivie, a building official in Park City. "The question is, 'What day?'"

While earthquakes along fault lines are expected, seismologist Kris Pankow and her research team recently found that slow-moving seismic surface waves, or L waves, from large earthquakes travel along the ground and trigger smaller earthquakes as they go.

"It's sort of like if a tree falls in the forest, does anyone hear it?" said Pankow, assistant director of the University of Utah seismic stations in Salt Lake City. "The same question was here: If the seismic waves go by everywhere, do they generate earthquakes everywhere?"

Unlike the earthquake risk in Park City, Pankow says the risk of these smaller earthquakes is minimal. The team tracked 15 large earthquakes and found 12 of them actually triggered smaller jolts. These are different than aftershocks because they happen miles away and sometimes hours or days later.

The team also found earthquakes in unlikely places like Canada, Australia and western Africa.

Pankow doesn't want to alarm anyone. She says the triggered earthquakes that have been observed have been small. Without a seismograph, you may not even notice them.

Seismologists still don't completely understand why earthquakes happen. Pankow and her team hope their work with these dynamically triggered earthquakes will help lead to an answer.

WHAT CAUSES EARTHQUAKES? An earthquake is the result of a sudden release of stored energy in the Earth's crust triggered by shifting tectonic plates. The Earth's lithosphere is an elaborate network of interconnected plates that move constantly -- far too slowly for us to be aware of them, but moving nonetheless. Occasionally they lock up at the boundaries, and this creates frictional stress. When that strain becomes too large, the rocks give way, breaking and sliding along fault lines. This can give rise to a violent displacement of the Earth's crust, which we feel as vibrations or tremors as the pent-up energy is released. Only 10% or so of the total energy, however, is released in the seismic waves; the rest is converted into heat, used to crush and deform rock, or released as friction.

HOW DO SCIENTISTS RATE EARTHQUAKES? An earthquake's magnitude describes how much the ground moves. The scale is logarithmic, which means that when the magnitude increases by one (say from 3 to 4, or from 4 to 5) the amount of ground motion increases by ten times. That is, a magnitude 3 quake leads to ten times as much ground motion as a magnitude 2 quake, and a magnitude 2 leads to ten times as much motion as a magnitude 1. This means that a magnitude 3 is a hundred times as violent as a magnitude 1, and a hundred times less violent than a magnitude 5.

The magnitude scale also tells us just how much energy an earthquake released. For example, a magnitude 1 earthquake releases the same amount of energy as 30 pounds of TNT exploding. Although a magnitude 2 earthquake makes the ground move ten times as much as a magnitude 1, it releases 32 times as much energy -- or roughly as much as a ton of TNT. A magnitude 5 earthquake packs the punch of a moderate nuclear weapon, and a magnitude 12 quake would be enough to put a crack all the way through the center of the Earth.
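
A quick back-of-the-envelope check of these relations, using the standard rules of thumb that ground motion scales by a factor of 10 and radiated energy by roughly a factor of 32 (10^1.5) per unit of magnitude:

```python
# Back-of-envelope check of the magnitude relations quoted above.
def ground_motion_ratio(m1, m2):
    return 10 ** (m2 - m1)              # ground motion: x10 per unit of magnitude

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m2 - m1))      # radiated energy: ~x32 per unit of magnitude

print(ground_motion_ratio(1, 3))        # 100.0 -- a magnitude 3 shakes 100x more than a 1
print(round(energy_ratio(1, 2), 1))     # 31.6 -- the "32 times as much energy" above
print(round(30 * energy_ratio(1, 2)))   # ~949 pounds of TNT-equivalent for a magnitude 2,
                                        # in line with the rough "ton of TNT" figure cited
                                        # (these TNT equivalences are themselves rough)
```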

Friday, December 24, 2010

System for Detecting Noise Pollution in the Sea and Its Impact on Cetaceans

In 2007, the Applied Bioacoustics Laboratory started work on a project called Listening to the Deep Ocean Environment (LIDO). It set out to record sounds on the seafloor and subsequently assess the extent to which artificial noises (maritime traffic, fishing, offshore facilities, military maneuvers, etc.) affect the quality of life of cetaceans in terms of any disorders they may suffer, or even their deaths.

Under the supervision of Michel André, the Applied Bioacoustics Laboratory (LAB) has now developed algorithms that automatically interpret these sounds, classify them in real time by their biological or anthropogenic origin and, within the biological category, identify the species of cetaceans present in the area analyzed. Using the data obtained, it is possible to measure the extent to which noise pollution has an impact on the conservation of ecosystems.
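
The article does not spell out the LAB's algorithms, but the general shape of such a real-time pipeline -- detect acoustic energy, then classify a clip by crude spectral features -- can be sketched as follows. Everything here (the 1 kHz band split, the thresholds, the very idea of a two-band classifier) is a hypothetical simplification for illustration only:

```python
# Hypothetical, highly simplified sketch of a detect-then-classify acoustic
# pipeline. The LAB's actual algorithms are not described in this article;
# the features and thresholds below are invented for illustration.
import numpy as np

def classify_clip(samples, rate_hz=48_000):
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)

    low_band = spectrum[freqs < 1_000].sum()     # ship traffic energy is mostly low frequency
    high_band = spectrum[freqs >= 1_000].sum()   # many cetacean clicks/whistles sit higher

    if low_band + high_band < 1e-6:
        return "no detection"
    return "biological" if high_band > low_band else "anthropogenic"

# Usage with synthetic test signals (a 300 Hz 'engine' tone vs. an 8 kHz 'whistle'):
t = np.arange(0, 1, 1 / 48_000)
print(classify_clip(np.sin(2 * np.pi * 300 * t)))    # anthropogenic
print(classify_clip(np.sin(2 * np.pi * 8_000 * t)))  # biological
```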

This is the first system of its kind in the world and saves considerable analysis time and human resources in the detection and classification of noise, as these processes are completely automated. Thus, the technology prevents a continuous flow of unanalyzed acoustic data from overloading hard drives at research centers. Before now, this was one of the problems in processing uninterrupted data streams.

Finally, the acoustic signals and the results of the analysis can be listened to and viewed live on a website that is available to the international scientific community and to laypersons.

The importance of noise in the sea
There has always been natural and biological noise in the sea. However, the recent, uncontrolled introduction of artificial noise in the sea on an unprecedented scale poses an even greater threat to its equilibrium than any other source of pollution in the marine environment.

The sense of hearing is vital to cetaceans, as they use it to find prey, navigate in the sea, migrate and distinguish members of the same species. Therefore, their survival depends on their sense of hearing working properly.

Using a set of 13 hydrophones installed on over 10 underwater platforms located all over the world, the UPC's system detects the presence of cetaceans and enables scientists to study the relationship these animals have with other mammals in their habitat. This innovative system therefore opens unexplored avenues in the biological study of these species. The importance of the LIDO project, however, lies in the possibility of better understanding the sensitivity of cetaceans to sources of noise pollution and of detecting the interaction of these animals with human activity; more importantly, it will make it possible to take decisions to mitigate noise when the lives of these mammals are threatened.

To date, the increase in beachings of whales, sperm whales and other cetaceans around the world has been put down to the greater noise levels caused by fishing, sea trade, military maneuvers, and the construction of oil rigs and offshore wind farms. Thanks to the technology developed by the UPC's research team, based on the Vilanova i la Geltrú Campus, it will now be possible to ascertain accurately whether there is a direct cause-and-effect relationship between the two.

Based on this information, governments, institutions and businesses that operate in the sea will be able to establish response protocols to prevent these species from falling victim to exposure to noise of anthropogenic origin that may cause damage to their hearing and, therefore, an imbalance in marine ecosystems.

First step for regulating noise pollution in the sea
The LAB has in fact written a manual of good practices for managing noise pollution in the sea at the request of the Ministry of the Environment and Rural and Marine Affairs, within the framework of the eCREM (Effects and Control of Anthropogenic Noise in Marine Ecosystems) project. The manual is the first step towards drawing up a draft bill and good practices to regulate noise pollution in the sea in Spain, one of the first countries in the EU that intends to introduce regulations in this regard.

It should be taken into account that forecasts show that maritime traffic in the Mediterranean basin will increase significantly over the next few years as goods are shifted from road to sea transport to mitigate atmospheric pollution. The new EU directive on the sea stipulates that all member states must comply with a set of indicators for measuring marine noise pollution before 2012. A group of 11 experts from around Europe, one of whom is Michel André, the director of the LAB, is currently working to establish exactly which indicators are to be used.

The LAB plans to develop alarm technologies in the near future. They are to be installed on various devices, such as autonomous buoys and underwater robots, which would send out warnings when cetaceans approach areas with high noise levels and trigger response protocols.
The UPC team has devoted 15 years to the study of noise pollution in the sea and to the creation of technological solutions that make it possible to combine human activities and the interests of industry with the conservation of cetaceans and the marine environment.

The LAB is placing particular emphasis on the study of the effects of noise pollution on cetaceans because these marine mammals are at the top of the food chain, and their activities depend on the exchange of acoustic information. Therefore, their reaction to sources of noise pollution helps to determine the general state of marine environments. Cetaceans are considered to be bioindicators of the acoustic balance in oceans.

International network of underwater observatories
The LIDO platform, which records underwater noise in different parts of Europe and North America, is open to the international scientific community.
Noise sources are detected by hydrophones installed on over 10 underwater observatories. Some of the LIDO sensors have been deployed on the European Seafloor Observatory Network (ESONET), one of whose members is the UPC's Expandable Seafloor Observatory (OBSEA) located on the coast of Vilanova i la Geltrú. The LAB has another set of sensors installed in the deep sea infrastructures of the ANTARES project, an international collaboration that focuses on detecting subatomic particles called neutrinos, which move through space without being stopped by matter. Finally, there are another three hydrophones in North America on the seafloor platforms of the NEPTUNE network in Canada.

The LAB is in the final stages of reaching an agreement with Japan to install the technology on 17 platforms designed to detect the risk of earthquakes in the Asian archipelago.


Listening to the Deep Ocean Environment (LIDO) website. (Credit: Image courtesy of Universitat Politècnica de Catalunya)

Comprehensive Wind Info Collected to Improve Renewable Energy

"We know that the wind will blow, but the real challenge is to know when and how much," said atmospheric scientist Larry Berg. "This project takes an interesting approach -adapting an established technology for a new use -- to find a reliable way to measure winds and improve wind power forecasts."
Berg and Rob Newsom, both researchers at the Department of Energy's Pacific Northwest National Laboratory, are using a variety of meteorological equipment to measure winds high up into the air -- about 350 feet, the average height of turbine hubs -- and get a better reading on how winds behave up there.

Wind measurements are typically made much lower -- at about 30 feet high -- for weather monitoring purposes. Wind power companies do measure winds higher up, but that information is usually kept proprietary. PNNL's findings will be available to all online.

The study's findings could also provide more accurate wind predictions because of its field location -- a working wind farm. The equipment is being erected on and near a radio tower near the 300-megawatt Stateline Wind Energy Center, a wind power project that runs along the eastern Washington-Oregon border. Any wind power company could use the study's findings to improve how sites are chosen for wind farms and how those farms are operated.

The equipment started collecting measurements in November. Berg and Newsom will continue gathering measurements for about nine months, through this summer, a span that covers the area's windiest months and will allow the researchers to draw a more complete and accurate picture of how wind behaves at turbine height.

"The goal here is to help everyone -- not just one group -- better understand wind's behavior and ultimately improve our use of it as a renewable power source," Newsom said.

Cool tools

But first researchers need to document wind behavior. To do that, they're employing a handful of sophisticated meteorological tools.
One key instrument is the National Weather Service's NEXRAD Doppler radar weather station in Pendleton, Ore., about 19 miles south of Stateline. The station emits short pulses of radio waves that bounce back when they strike water droplets and other particles in the air. A national network of these stations is routinely used by television meteorologists to show clouds and precipitation in familiar, colorful digital maps. For this study, computers will analyze the returned signals to determine how the wind varies in the area around the radar, including the wind farm.

The team is also installing equipment specifically designed to measure wind speed and direction: a radar wind profiler. Like NEXRAD, the profiler sends out radio waves that are bounced back when they hit variations in moisture or temperature. But while NEXRAD scans the entire sky with its one rotating radar beam, the profiler sends three radar beams up into the sky. The profiler being used is part of the DOE's Atmospheric Radiation Measurement Climate Research Facility.
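
Both NEXRAD and the profiler rely on the same textbook Doppler relation: the frequency shift of the returned signal gives the wind's velocity along the beam. A minimal sketch (the transmit frequency and shift below are illustrative numbers, not instrument specifications):

```python
# Textbook Doppler relation behind radar wind measurements (illustrative numbers).
C = 3.0e8  # speed of light, m/s

def radial_velocity(freq_shift_hz, transmit_freq_hz):
    """v = c * df / (2 * f0); the factor of 2 is because the wave travels out and back."""
    return C * freq_shift_hz / (2.0 * transmit_freq_hz)

# e.g. a 200 Hz shift at a 3 GHz transmit frequency:
print(radial_velocity(200.0, 3.0e9), "m/s toward the radar")   # 10.0
```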

Another tool they're using is Doppler sodar, which uses sound instead of radio waves. A regular sequence of high-pitched beeps is sent into the sky and, like the radar signals, is reflected by variations in moisture and temperature. That information will help researchers measure winds at lower heights than the profiler can reach.

Finally, the researchers will install ultrasonic anemometers on the radio tower. The anemometer holds six tiny microphones, and measures the time it takes for sound pulses to travel from one microphone to another. Beyond measuring speed, the anemometer also helps determine wind direction. Combined, all this equipment will help researchers gain a more comprehensive understanding of how wind behaves at the turbine level of a working wind farm.
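
The time-of-flight idea behind the ultrasonic anemometer can be written down directly: sound crosses the same path faster with the wind than against it, and the along-path wind speed follows from the two transit times. A minimal sketch with a made-up path length and wind speed:

```python
# Sketch of sonic (time-of-flight) anemometry. The 0.15 m path length and the
# example wind speed are made up for illustration.

def wind_along_path(path_m, t_with_s, t_against_s):
    """Along-path wind component: v = (L/2) * (1/t_with - 1/t_against)."""
    return 0.5 * path_m * (1.0 / t_with_s - 1.0 / t_against_s)

# Example: 0.15 m between microphones, ~343 m/s speed of sound, 5 m/s wind.
t_with = 0.15 / (343.0 + 5.0)      # downwind transit time
t_against = 0.15 / (343.0 - 5.0)   # upwind transit time
print(round(wind_along_path(0.15, t_with, t_against), 2), "m/s")  # ~5.0
```

Combining the same measurement along several differently oriented paths (which is what the multiple microphone pairs provide) yields the wind direction as well as its speed.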

Improving renewable energy
Data collected during this study will be used to evaluate the performance of computer models of the atmosphere near the operating wind farm. These computer models are routinely used to provide weather forecasts of wind conditions hours and even days into the future. This information can help wind farms operate more efficiently and lets them better integrate the power they produce into the electric grid. These models are known to have relatively large errors in forecasting the severity and times of strong winds, including gusts during thunderstorms as fronts pass through an area. Even relatively small errors in wind speed predictions can lead to large errors in the predicted power outputs of wind farms.

When that happens, grid operators have to accommodate the influx of power, often by diverting or turning off other power sources. In the Pacific Northwest, that can mean spilling river water over hydroelectric dams instead of sending the water through the dams' power-producing turbines. Sometimes those diversions are needed on a moment's notice, when the grid becomes overwhelmed by unexpected windy weather. If such gusts could be reliably predicted ahead of time, power operators could make adequate plans beforehand. And when the wind stops blowing unexpectedly, the grid can experience a quick need for power.

Wind power companies could also use improved predictions to more wisely choose their wind farm sites. These companies invest heavily in understanding the wind characteristics of their sites before breaking ground, but forecasting turbine-level winds is still an evolving field.

As a result, two industrial partners are collaborating with Newsom and Berg on their research. 3TIER of Seattle, Wash., and WindLogics of St. Paul, Minn., both help wind power developers identify and evaluate potential locations for wind farms. They're serving as consultants and have provided input on what kind of data would be most helpful when examining wind sites.

If the NEXRAD wind data is verified by the data collected through the other meteorological equipment, the next step in this research would be to plug the NEXRAD data into a working weather model. The model could then be used to better predict future wind behavior. Using the data in a weather model is outside the scope of Berg and Newsom's current research, but they hope to be able to do so in the future.

Field work for the study began this month and will continue for about nine months. This study is funded by the DOE's Office of Energy Efficiency and Renewable Energy's Wind and Water Power Program and the Office of Science Atmospheric Radiation Measurement Facility.


Pacific Northwest National Laboratory scientists are researching how radar weather instruments can help improve predictions on when and how strongly winds will blow. They’re testing the instruments from a working wind farm in southeastern Washington State. (Credit: PNNL)

Construction of the World's Largest Neutrino Observatory Completed: Antarctica's IceCube

The last of 86 holes had been drilled and a total of 5,160 optical sensors are now installed to form the main detector--a cubic kilometer of instrumented ice--of the IceCube Neutrino Observatory, located at the National Science Foundation's Amundsen-Scott South Pole Station.

From its vantage point at the end of the world, IceCube provides an innovative means to investigate the properties of fundamental particles that originate in some of the most spectacular phenomena in the universe.

In the deep, dark, stillness of the Antarctic ice, IceCube records the rare collisions of neutrinos--elusive sub-atomic particles--with the atomic nuclei of the water molecules of the ice. Some neutrinos come from the sun, while others come from cosmic rays interacting with the Earth's atmosphere and dramatic astronomical sources such as exploding stars in the Milky Way and other distant galaxies. Trillions of neutrinos stream through the human body at any given moment, but they rarely interact with regular matter, and researchers want to know more about them and where they come from.

The size of the observatory is important because it increases the number of potential collisions that can be observed, making neutrino astrophysics a reality.

The completion of construction brings to a culmination one of the most ambitious and complex multinational scientific projects ever attempted. The National Science Foundation (NSF) contributed $242 million toward the total project cost of $279 million. NSF is the manager of the United States Antarctic Program, which coordinates all U.S. research on the southernmost continent.

The University of Wisconsin-Madison, as the lead U.S. institution for the project, was funded by NSF to manage and coordinate the work needed to design and build the complex and often unique components and software for the project.

The university designed and built the Enhanced Hot Water Drill, which was assembled at the physical sciences lab in Stoughton, Wisconsin. The 4.8-megawatt hot-water drill is a unique machine that can penetrate more than two kilometers into the ice in less than two days.

After the hot water drill bores cleanly through the ice sheet, deployment specialists attach optical sensors to cable strings and lower them to depths between 1,450 and 2,450 meters. The ice itself at these depths is very dark and optically ultratransparent.

Each string has 60 sensors at depth and the 86 strings make up the main IceCube Detector. In addition, four more sensors sit on the top of the ice above each string, forming the IceTop array. The IceTop array combined with the IceCube detector form the IceCube Observatory, whose sensors record the neutrino interactions.
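
A trivial consistency check of the sensor counts as described in the article (86 strings of 60 in-ice sensors, plus four IceTop sensors above each string):

```python
# Sensor counts as described in the article (not official detector specifications).
strings = 86
in_ice_per_string = 60
icetop_per_string = 4

print(strings * in_ice_per_string)                        # 5160 -- the figure quoted above
print(strings * (in_ice_per_string + icetop_per_string))  # 5504 sensors including IceTop
```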

The successful completion of the observatory is also a milestone for international scientific cooperation on the southernmost continent. In addition to researchers at universities and research labs in the U.S., Belgium, Germany and Sweden--the countries that funded the observatory--IceCube data are analyzed by the larger IceCube Collaboration, which also includes researchers from Barbados, Canada, Japan, New Zealand, Switzerland and the United Kingdom.

"IceCube is not only a magnificent observatory for fundamental astrophysical research, it is the kind of ambitious science that can only be attempted through the cooperation--the science diplomacy, if you will--of many nations working together in the finest traditions of Antarctic science toward a single goal," said Karl A. Erb, director of NSF's Office of Polar Programs.

"To complete such an ambitious project, both on schedule and within budget, is a tribute to the fine work of the University of Wisconsin-Madison and its partner institutions, but it's also a reflection of the excellence of the personnel and infrastructure of the U.S. Antarctic Program," he added. "Science like IceCube is done in Antarctica because it is a unique global laboratory. I am very gratified that the U.S. Antarctic Program is equal to the challenge of supporting such a project."

IceCube is among the most ambitious and complex scientific construction projects ever attempted.

To build the observatory, all project personnel, equipment, food, and detector components had to be transported to Antarctica from various places around the globe. Everything then had to be flown in ski-equipped C-130 cargo aircraft from McMurdo Station near the Antarctic coast to the South Pole, more than 800 air miles away.

Working only during the relatively warm and short Antarctic summer--from November through February, when the sun shines 24 hours a day--drill and deployment teams worked in shifts to maximize their short time on the ice each year.

An international team of scientists, engineers and computer specialists has been working on development and construction of the detector since November 1999, when the first proposal was submitted to NSF and partners in Belgium, Germany and Sweden.

In the 1950s, physics Nobelist Frederick Reines and other particle physicists realized that neutrinos could be used as astronomical messengers. Unlike light, neutrinos pass through most matter, making them a unique probe into the most violent processes in the universe involving neutron stars and black holes. The neutrinos IceCube studies have energies far exceeding those produced by manmade accelerators.

Unlike many large-scale science projects, IceCube began recording data before construction was complete. Each year since 2005, following each deployment season, the newly expanded configuration of sensor strings began taking data. Each year as the detector grew, more and better data made its way from the South Pole to the data warehouses at the University of Wisconsin and around the world.

"Even in this challenging phase of the project, we published results on the search for dark matter and found intriguing patterns in the arrival directions of cosmic rays. Already, IceCube has extended the measurements of the atmospheric neutrino beam to energies in excess of 100 TeV," said Francis Halzen, principal investigator for the project. "With the completion of IceCube, we are on our way to reaching a level of sensitivity that may allow us to see neutrinos from sources beyond the sun."

Funding agencies outside of the U.S. that contributed to the construction of the IceCube Observatory are:
  • in Belgium: Fonds de la Recherche Scientifique (FRS-FNRS) and Fonds voor Wetenschappelijk Onderzoek (FWO);
  • in Germany: Federal Ministry of Education and Research (BMBF) and Deutsches Elektronen-Synchrotron Project (PD-DESY); and
  • in Sweden: Swedish Research Council (VR) and the Knut and Alice Wallenberg Foundation.

Sensor descends down a hole in the ice as part of the final season of IceCube. IceCube is among the most ambitious scientific construction projects ever attempted. (Credit: NSF/B. Gudbjartsson)

Thursday, December 23, 2010

Universe's Most Massive Stars Can Form in Near Isolation, New Study Finds

This is the most detailed observational study to date of massive stars that appear (from the ground) to be alone. The scientists used the Hubble Space Telescope to zoom in on eight of these giants, which range from 20 to 150 times as massive as the Sun. The stars they looked at are in the Small Magellanic Cloud, a dwarf galaxy that's one of the Milky Way's nearest neighbors.

Their results, published in the Dec. 20 edition of The Astrophysical Journal, show that five of the stars had no neighbors large enough for Hubble to discern. The remaining three appeared to be in tiny clusters of ten or fewer stars.

Doctoral student Joel Lamb and associate professor Sally Oey, both in the Department of Astronomy, explained the significance of their findings.
"My dad used to fish in a tiny pond on his grandma's farm," Lamb said. "One day he pulled out a giant largemouth bass. This was the biggest fish he's caught, and he's fished in a lot of big lakes. What we're looking at is analogous to this. We're asking: 'Can a small pond produce a giant fish? Does the size of the lake determine how big the fish is?' The lake in this case would be the cluster of stars.

"Our results show that you can, in fact, form big stars in small ponds."
The most massive stars direct the evolution of their galaxies. Their winds and radiation shape interstellar gas and promote the birth of new stars. Their violent supernova explosions create all the heavy elements that are essential for life and the Earth. That's why astronomers want to understand how and where these giant stars form. There is currently a big debate about their origins, Oey said.

One theory is that the mass of a star depends on the size of the cluster in which it is born, and only a large star cluster could provide a dense enough source of gas and dust to bring about one of these massive stars. The opposing theory, and the one that this research supports, is that these monstrous stars can and do form more randomly across the universe -- including in isolation and in very small clusters.

"Our findings don't support the scenario that the maximum mass of a star in a cluster has to correlate with the size of the cluster," Oey said.

The researchers acknowledge the possibility that all of the stars they studied might not still be located in the neighborhood they were born in. Two of the stars they examined are known to be runaways that have been kicked out of their birth clusters. But in several cases, the astronomers found wisps of leftover gas nearby, strengthening the possibility that the stars are still in the isolated places where they formed.

The research is funded by NASA and the National Science Foundation.


Left: Star 302, as viewed from the ground. Right: Star 302 as viewed through the Hubble Space Telescope, which can zoom in roughly 40 times closer. From the ground, everything within the circle appears to be one star. (Credit: Courtesy of Joel Lamb)

New Fossil Site in China Shows Long Recovery of Life from the Largest Extinction in Earth's History.

Some 250 million years ago, at the end of the time known as the Permian, life was all but wiped out during a sustained period of massive volcanic eruption and devastating global warming. Only one in ten species survived, and these formed the basis for the recovery of life in the subsequent time period, called the Triassic. The new fossil site -- at Luoping in Yunnan Province -- provides a new window on that recovery, and indicates that it took about 10 million years for a fully-functioning ecosystem to develop.

"The Luoping site dates from the Middle Triassic and contains one of the most diverse marine fossil records in the world," said Professor Benton. "It has yielded 20,000 fossils of fishes, reptiles, shellfish, shrimps and other seabed creatures. We can tell that we're looking at a fully recovered ecosystem because of the diversity of predators, most notably fish and reptiles. It's a much greater diversity than what we see in the Early Triassic -- and it's close to pre-extinction levels."

Reinforcing this conclusion is the complexity of the food web, with the bottom of the food chains dominated by species typical of later Triassic marine faunas -- such as crustaceans, fishes and bivalves -- and different from preceding ones.
Just as important is the 'debut' of top predators -- such as the long-snouted bony fish Saurichthys, the ichthyosaur Mixosaurus, the sauropterygian Nothosaurus and the prolacertiform Dinocephalosaurus -- that fed on fishes and small predatory reptiles.

Professor Shixue Hu of the Chengdu Group said: "It has taken us three years to excavate the site, and we moved tonnes of rock. Now, with thousands of amazing fossils, we have plenty of work for the next ten years!"

"The fossils at Luoping have told us a lot about the recovery and development of marine ecosystems after the end-Permian mass extinction," said Professor Benton. "There's still more to be discovered there, and we hope to get an even better picture of how life reasserted itself after the most catastrophic global event in the history of our planet."


An ichthyosaur, a one-meter long fish-eating reptile -- from the new fossil site in China. (Credit: Image courtesy of University of Bristol)

Wednesday, December 22, 2010

Trace Amounts of Water Created Oceans on Earth and Other Terrestrial Planets, Study Suggests.

One question that has baffled planetary scientists is how oceans formed on the surface of terrestrial planets like Earth -- rocky planets made of silicate and metals. It's believed that in addition to Earth, the terrestrial planets Mars and Venus may have had oceans soon after their formation. There is ample evidence to suggest that these planets formed from rocky clumps called planetesimals that later combined in high-energy collisions and left their surfaces covered in molten rock, or magma. It didn't take long for these magma oceans to cool, and many researchers contend that oceans of water were created later on, when icy objects like comets and asteroids deposited water on the rocky planets.

But a recent study by an MIT planetary scientist suggests that the planetesimals themselves provided the water that created oceans. As Lindy Elkins-Tanton, the Mitsui Career Development Assistant Professor of Geology in MIT's Department of Earth, Atmospheric and Planetary Sciences, reports in a recent paper in Astrophysics and Space Science, these planetesimals contained trace amounts of water -- at least 0.01 to 0.001 percent of their total mass (scientists don't know the precise size of planetesimals, but they estimate that those that created Earth were between hundreds and thousands of kilometers in diameter). In the paper, Elkins-Tanton says it is likely that even tiny amounts of water in the planetesimals could create steam atmospheres that later cooled and condensed into liquid oceans on terrestrial planets.

"These little bits of water get processed into planets in ways we can predict," says Elkins-Tanton, who created new models to detail the chemistry and physics of planet solidification. By suggesting that the majority of rocky planets formed water oceans early in their history, her analysis could help determine which planets outside the solar system, or exoplanets, might have or have had water and would therefore be possible candidates for hosting -- or having hosted -- life. This only applies to rocky exoplanets because most of the more than 500 exoplanets discovered to date are thought to be too hot and gaseous to host life.

Cooling planets
Samples of meteorites that originated from planetesimals indicate that the rocky bodies contained tiny amounts of water. To determine what happened to the water inside the planetesimals, Elkins-Tanton examined every step of the solidification process for rocky planets in the solar system (she didn't consider gas giants like Jupiter because the physics of how these planets form is entirely different). While this process had been modeled previously, no one had investigated whether water in planetesimals could produce oceans.

Elkins-Tanton first modeled how magma crystallizes into minerals on a theoretical rocky planet. This allowed her to calculate how much water from the planetesimals would be captured inside those minerals, and how much would remain in the magma as it cooled. She then incorporated details about the saturation level of magma into the models and observed that any water that doesn't dissolve in the magma would form bubbles. The models revealed that as the planet cools and forms a solid mantle, the bubbles in its magma oceans would rise to form a thick, steam atmosphere covering the planet. That steam would eventually collapse to create liquid oceans.

The idea that trace amounts of water in planetesimals could give rise to vast oceans may seem far-fetched until one considers how small an ocean can be relative to the size and mass of a planet. Earth's current oceans, for instance, make up just .02 percent of the planet's mass, excluding its metal core. Thus, if the majority of the small amounts of water in a planetesimal reaches a planetary surface as its magma solidifies, this would be enough to form oceans that are similar to Earth's.
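
A rough order-of-magnitude check of this comparison, using commonly cited round values for Earth's mass and the mass of the present-day oceans (approximations for illustration, not figures from the paper):

```python
# Order-of-magnitude check with commonly cited round values (illustrative only).
earth_mass_kg = 5.97e24
ocean_mass_kg = 1.4e21   # approximate mass of today's oceans

print(f"{ocean_mass_kg / earth_mass_kg:.3%} of Earth's total mass")  # a few hundredths of a percent

# Water amounting to 0.01-0.001 percent of the planet's total mass:
for water_fraction in (1e-4, 1e-5):
    water_kg = water_fraction * earth_mass_kg
    print(f"{water_fraction:.3%} of Earth's mass = {water_kg:.2e} kg of water")
    # compare with ocean_mass_kg above: the upper end of the range is within
    # a factor of a few of the present ocean mass
```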

For Earth, Elkins-Tanton estimates that this process occurred within tens of millions of years after the planetesimals crashed together, meaning that the planet could have been habitable pretty soon after it formed. She predicts that the same process could take up to hundreds of millions of years for super-Earths, or exoplanets that are at least twice as big as Earth and are just now being discovered. Because the research suggests that rocky super-Earths should have grown oceans soon after they formed, and because water is required for life as we know it, it's possible that these planets may have hosted -- or even still host -- life.

The life of oceans
"The study gives us a very important starting point for understanding the evolution and history of planets," says Pin Chen, a research scientist at NASA's Jet Propulsion Laboratory, who studies planetary atmospheres. He is confident that the research can be used to make predictions about oceans on exoplanets because "it is so well-grounded in fundamental principles of physics, chemistry and thermal dynamics."

Although the analysis suggests that oceans are expected to be prevalent in the early history of a rocky planet, it doesn't provide details about how long these oceans would last, which Chen says is critical for figuring out what happened to the oceans that may have covered Mars and Venus. Because atmospheres are responsible for releasing water from oceans into space, he suggests additional modeling of the interactions between the atmosphere and mantle of a young rocky planet.

In future work, Elkins-Tanton plans to model the chemistry of these atmospheres to figure out what kinds of atmospheres could be created by the solidification process, such as an oxidizing atmosphere (contains oxygen) or a reducing atmosphere (contains hydrogen). She's also interested in determining what conditions other than a liquid ocean might help initiate life on a terrestrial planet.

Tuesday, December 21, 2010

Raindrops Reveal How a Wave of Mountains Moved South Across the Country.

About 50 million years ago, mountains began popping up in southern British Columbia. Over the next 22 million years, a wave of mountain building swept (geologically speaking) down western North America as far south as Mexico and as far east as Nebraska, according to Stanford geochemists. Their findings help put to rest the idea that the mountains mostly developed from a vast, Tibet-like plateau that rose up across most of the western U.S. roughly simultaneously and then subsequently collapsed and eroded into what we see today.

The data providing the insight into the mountains -- so popularly renowned for durability -- came from one of the most ephemeral of sources: raindrops. Or more specifically, the isotopic residue -- fingerprints, effectively -- of ancient precipitation that rained down upon the American west between 65 and 28 million years ago.

Atoms of the same element but with different numbers of neutrons in their nucleus are called isotopes. More neutrons make for a heavier atom and as a cloud rises, the water molecules that contain the heavier isotopes of hydrogen and oxygen tend to fall first. By measuring the ratio of heavy to light isotopes in the long-ago rainwater, researchers can infer the elevation of the land when the raindrops fell.
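
The standard way to quantify this "heavy isotopes rain out first" effect is Rayleigh distillation; the sketch below uses a generic textbook form of that relation with an illustrative fractionation factor, not the calibration used in this study:

```python
# Generic Rayleigh-distillation sketch (illustrative alpha; not the study's calibration).
def vapor_ratio(fraction_remaining, alpha=1.01):
    """R/R0 = f**(alpha - 1): as a rising air mass rains out (f drops), the vapor --
    and the later precipitation formed from it -- is depleted in the heavy isotope."""
    return fraction_remaining ** (alpha - 1)

for f in (1.0, 0.8, 0.5, 0.2):
    shift_permil = (vapor_ratio(f) - 1.0) * 1000.0
    print(f"vapor fraction {f:.1f}: isotope ratio shift ~ {shift_permil:.1f} per mil")
```

Because more of the vapor has rained out by the time an air mass reaches high ground, more negative isotope shifts in preserved rainwater point to higher elevations at the time the rain fell.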

The water becomes incorporated into clays and carbonate minerals on the surface, or in volcanic glass, which are then preserved for the ages in the sediments.

Hari Mix, a PhD candidate in Environmental Earth System Science at Stanford, worked with the analyses of about 2,800 samples -- several hundred that he and his colleagues collected, the rest from published studies -- and used the isotopic ratios to calculate the composition of the ancient rain. Most of the samples were from carbonate deposits in ancient soils and lake sediments, taken from dozens of basins around the western U.S.

Using the elevation trends revealed in the data, Mix was able to decipher the history of the mountains. "Where we got a huge jump in isotopic ratios, we interpret that as a big uplift," he said.

"We saw a major isotopic shift at around 49 million years ago, in southwest Montana," he said. "And another one at 39 mya, in northern Nevada" as the uplift moved southward. Previous work by Chamberlain's group had found evidence for these shifts in data from two basins, but Mix's work with the larger data set demonstrated that the pattern of uplift held across the entire western U.S.

The uplift is generally agreed to have begun when the Farallon plate, a tectonic plate that was being shoved under the North American plate, slowly began peeling away from the underside of the continent.

"The peeling plate looked sort of like a tongue curling down," said Page Chamberlain, a professor in environmental Earth system science who is Mix's advisor.

As hot material from the underlying mantle flowed into the gap between the peeling plates, the heat and buoyancy of the material caused the overlying land to rise in elevation. The peeling tongue continued to fall off, and hot mantle continued to flow in behind it, sending a slow-motion wave of mountain-building coursing southward.

"We knew that the Farallon plate fell away, but the geometry of how that happened and the topographic response to it is what has been debated," Mix said.

Mix and Chamberlain estimate that the topographic wave would have been at least one to two kilometers higher than the landscape it rolled across and would have produced mountains with elevations up to a little over 4 kilometers (about 14,000 feet), comparable to the elevations existing today.

Mix said their isotopic data corresponds well with other types of evidence that have been documented.

"There was a big north to south sweep of volcanism through the western U.S. at the exact same time," he said.
There was also a simultaneous extension of the Earth's crust, which results when the crust is heated from below, as it would have been by the flow of hot magma under the North American plate.

"The pattern of topographic uplift we found matches what has been documented by other people in terms of the volcanology and extension," Mix said.

"Those three things together, those patterns, all point to something going on with the Farallon plate as being responsible for the construction of the western mountain ranges, the Cordillera."

Chamberlain said that while there was certainly elevated ground, it was not like Tibet. "It was not an average elevation of 15,000 feet. It was something much more subdued," he said.

"The main implication of this work is that it was not a plateau that collapsed, but rather something that happened in the mantle that was causing this mountain growth," Chamberlain said.

Mix presented results of the study at the American Geophysical Union annual meeting in San Francisco on Dec. 17.

Sunday, December 19, 2010

One of France's Largest Dinosaur Fossil Deposits Found in the Charente Region

 The first excavations at the Audoin quarries in the town of Angeac, in the Charente region of south-western France, have confirmed that the site is one of the richest dinosaur fossil deposits in the country. Coordinated by the Musée d'Angoulême and the Géosciences Rennes laboratory (CNRS / Université de Rennes 1), the project involved researchers from CNRS and the Muséum National d'Histoire Naturelle (French Natural History Museum). With more than 400 bones brought to light, this site is remarkable both for the quantity of discoveries and their state of preservation.

The quarries have yielded a wide variety of fossils from the Lower Cretaceous Period, dating back 130 million years. The most impressive is a femur exceeding 2.2 meters in length, which could have belonged to the largest sauropod known in Europe. Unusually, the paleontologists at the site also discovered fossilized wood, leaves and seeds that will enable them to reconstitute the flora in which the animals lived. Based on these exceptional finds, the scientists hope to gain a clearer picture of the terrestrial ecosystems of the Lower Cretaceous, a little-known and insufficiently documented period in this region of Europe.

Although its existence had been suspected for years, the dinosaur fossil deposit in Angeac-Charente, near Angoulême, was only discovered in the Audoin quarries in January 2010, and turned out to be one of the largest paleontological sites in France. Covering several hundred square meters, the site consists of argillaceous strata from the Lower Cretaceous Period buried under the ancient quaternary alluvial deposits of the Charente River. The first excavation campaign, which took place over 20 days from late August to early September this year, was conducted by a team led by the Musée d'Angoulême and the Géosciences Rennes laboratory (CNRS / Université de Rennes 1), in collaboration with scientists and technicians from the Centre de Recherche sur la Paléobiodiversité et les Paléoenvironnements (Paleobiodiversity and Paleoenvironmental Research Center, CNRS / MNHN), the Université de Lyon and the Musée des Dinosaures in Esperaza (Aude region, south-western France).

Remains of herbivorous and carnivorous dinosaurs mixed with aquatic species
These initial excavations have already unearthed more than 400 bones, over 200 of which are of great scientific interest. The latter come from at least three dinosaur species, found alongside the remains of two types of turtle and three species of crocodile. The find is all the more exceptional as the bones are not only present in large numbers, but are also remarkably well preserved, having been buried rapidly in the argillaceous sediments of a marsh that covered the Angeac-Charente region during the Lower Cretaceous.

The most impressive finds are indisputably the remains of the largest known sauropod in Europe. Its femur, which has for the moment been left in situ, exceeds 2.2 meters in length, suggesting a weight of some 40 tons and a body length of about 35 meters. The biological links between this giant herbivore and other species have yet to be determined, but its anatomy is not dissimilar to that of specimens found in Spain dating from the same period. The presence of small herbivorous dinosaurs has also been evidenced by the discovery of a tooth and a few bones. The most abundant fossil material gathered this summer (nearly 80% of the bones exhumed) belongs to a large carnivorous dinosaur with a body length of about 9 meters. The number of femurs found points to no fewer than five individuals, young and adult.

Dinosaurs from the Lower Cretaceous are rarely found in France, and are usually identified on the basis of fragmentary remains. So far only three dinosaur genera have been identified: the ornithopod Iguanodon and the two theropods Genusaurus and Erectopus. Richer faunas, most likely contemporary with that of the Angeac site, have been described in Britain (in particular on the Isle of Wight) and Spain (Cuenca Province). The most remarkable animal remains from the period, including feathered carnivorous dinosaurs, were found in the Liaoning Province of China. The newly-found Angeac dinosaurs will be compared to these other specimens to determine their shared and distinctive characteristics.

For the paleontologists involved in the project, the next step will be to study and analyze their discoveries, whether it be the animal bones or the fossilized plants. In parallel with this scientific research, a project will be undertaken to enhance the site, enabling the public to view each phase of the operation, from excavation to museum display, over the next few years.


On the right, part of the giant femur (more than 2.2 meters long) of a sauropod, and on the left a section of a humerus from the same animal. (Credit: Copyright Didier Néraudeau (CNRS/Université de Rennes 1))

Friday, December 17, 2010

Total Lunar Eclipse and Winter Solstice Coincide on Dec. 21

Early in the morning on December 21, a total lunar eclipse will be visible to sky watchers across North America (for observers in western states the eclipse actually begins late in the evening of December 20), Greenland and Iceland. Viewers in Western Europe will be able to see the beginning stages of the eclipse before moonset, and in western Asia the later stages of the eclipse will be visible after moonrise.

From beginning to end, the eclipse will last about three hours and twenty-eight minutes. For observers on the east coast of the U.S., the eclipse lasts from 1:33 a.m. EST through 5:01 a.m. EST. Viewers on the west coast will be able to tune in a bit earlier. For them the eclipse begins at 10:33 p.m. PST on December 20 and lasts until 2:01 a.m. PST on December 21. Totality, the time when Earth's shadow completely covers the moon, will last a lengthy 72 minutes.
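For anyone who wants to double-check the arithmetic, the quoted duration follows directly from the start and end times above; here is a quick sketch in Python using those same east coast times.

# Quick check of the eclipse duration using the east coast times quoted above.
from datetime import datetime

start = datetime(2010, 12, 21, 1, 33)  # 1:33 a.m. EST, eclipse begins
end = datetime(2010, 12, 21, 5, 1)     # 5:01 a.m. EST, eclipse ends
print(end - start)  # 3:28:00, i.e. about three hours and twenty-eight minutes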

While it is merely a coincidence that the eclipse falls on the same date as this year's winter solstice, for eclipse watchers this means that the moon will appear very high in the night sky, as the December solstice marks the time when the Earth's northern hemisphere is tilted farthest from the sun.

A lunar eclipse occurs when the Earth lines up directly between the sun and the moon, blocking the sun's rays and casting a shadow on the moon. As the moon moves deeper and deeper into the Earth's shadow, the moon changes color before your very eyes, turning from gray to an orange or deep shade of red.

The moon takes on this new color because indirect sunlight is still able to pass through Earth's atmosphere and cast a glow on the moon. Our atmosphere filters out most of the blue colored light, leaving the red and orange hues that we see during a lunar eclipse. Extra particles in the atmosphere, from say a recent volcanic eruption, will cause the moon to appear a darker shade of red.
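The filtering itself is ordinary Rayleigh scattering, whose strength scales with the inverse fourth power of wavelength. That relation is textbook physics rather than a result from this article, but a two-line calculation shows how much more strongly blue light is removed than red.

# Rayleigh scattering scales as 1/wavelength^4 (standard physics, not a figure
# from the article). Representative wavelengths: blue ~450 nm, red ~650 nm.
blue_nm, red_nm = 450.0, 650.0
print(round((red_nm / blue_nm) ** 4, 1))  # ~4.3: blue is scattered roughly four times more than red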

Unlike solar eclipses, lunar eclipses are perfectly safe to view without any special glasses or equipment. All you need is your own two eyes. So take this opportunity to stay up late and watch this stunning celestial phenomenon high in the night sky. It will be the last chance for sky watchers in the continental U.S. to see a total lunar eclipse until April 15, 2014.


Path of the Moon through Earth's umbral and penumbral shadows during the Total Lunar Eclipse of Dec. 21, 2010. (Credit: Fred Espenak/NASA's Goddard Space Flight Center)

Thursday, December 16, 2010

'Greener' Climate Prediction Shows Plants Slow Warming

The cooling effect would be -0.3 degrees Celsius (C) (-0.5 Fahrenheit (F)) globally and -0.6 degrees C (-1.1 F) over land, compared to simulations where the feedback was not included, said Lahouari Bounoua, of Goddard Space Flight Center, Greenbelt, Md. Bounoua is lead author on a paper detailing the results published Dec. 7 in the journal Geophysical Research Letters.
Without the negative feedback included, the model found a warming of 1.94 degrees C globally when carbon dioxide was doubled.
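Putting those published figures together gives a back-of-the-envelope sense of the result; the sketch below simply combines the numbers quoted in this article rather than rerunning the model.

# Simple arithmetic on the quoted figures: the vegetation feedback trims the
# simulated doubled-CO2 warming but does not cancel it.
warming_no_feedback = 1.94  # deg C, global warming with doubled CO2, feedback excluded
vegetation_feedback = -0.3  # deg C, global cooling attributed to the plant response
print(warming_no_feedback + vegetation_feedback)  # ~1.64 deg C of warming remains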

Bounoua stressed that while the model's results showed a negative feedback, it is not a strong enough response to alter the global warming trend that is expected. In fact, the present work is an example of how, over time, scientists will create more sophisticated models that will chip away at the uncertainty range of climate change and allow more accurate projections of future climate.
"This feedback slows but does not alleviate the projected warming," Bounoua said.

To date, only some models that predict how the planet would respond to a doubling of carbon dioxide have allowed for vegetation to grow as a response to higher carbon dioxide levels and associated increases in temperatures and precipitation.
Of those that have attempted to model this feedback, this new effort differs in that it incorporates a specific response in plants to higher atmospheric carbon dioxide levels. When there is more carbon dioxide available, plants are able to use less water yet maintain previous levels of photosynthesis. The process is called "down-regulation." This more efficient use of water and nutrients has been observed in experimental studies and can ultimately lead to increased leaf growth. The ability to increase leaf growth due to changes in photosynthetic activity was also included in the model. The authors postulate that the greater leaf growth would increase evapotranspiration on a global scale and create an additional cooling effect.

"This is what is completely new," said Bounoua, referring to the incorporation of down-regulation and changed leaf growth into the model. "What we did is improve plants' physiological response in the model by including down-regulation. The end result is a stronger feedback than previously thought."
The modeling approach also investigated how stimulation of plant growth in a world with doubled carbon dioxide levels would be fueled by warmer temperatures, increased precipitation in some regions and plants' more efficient use of water due to carbon dioxide being more readily available in the atmosphere. Previous climate models have included these aspects but not down-regulation. The models without down-regulation projected little to no cooling from vegetative growth.

Scientists agree that in a world where carbon dioxide has doubled -- a standard basis for many global warming modeling simulations -- temperature would increase from 2 to 4.5 degrees C (3.5 to 8.0 F). (The model used in this study found warming -- without incorporating the plant feedback -- on the low end of this range.) The uncertainty in that range is mostly due to uncertainty about "feedbacks" -- how different aspects of the Earth system will react to a warming world, and then how those changes will either amplify (positive feedback) or dampen (negative feedback) the overall warming.
An example of a positive feedback would be if warming temperatures caused forests to grow in the place of Arctic tundra. The darker surface of a forest canopy would absorb more solar radiation than the snowy tundra, which reflects more solar radiation. The greater absorption would amplify warming. The vegetative feedback modeled in this research, in which increased plant growth would exert a cooling effect, is an example of a negative feedback. The feedback quantified in this study is a result of an interaction between all these aspects: carbon dioxide enrichment, a warming and moistening climate, plants' more efficient use of water, down-regulation and the ability for leaf growth.
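For readers who want the general formalism, climate scientists often summarize this with a simple zero-dimensional relation: a no-feedback warming dT0 becomes dT = dT0 / (1 - f) once a net feedback factor f is included, with positive f amplifying and negative f damping the response. The sketch below uses that textbook relation with illustrative numbers; it is not the model used in this study.

# Textbook feedback relation (illustrative values, not this study's model):
# dT = dT0 / (1 - f), where dT0 is the no-feedback warming and f the feedback factor.
def equilibrium_warming(dT0, f):
    return dT0 / (1.0 - f)

dT0 = 1.2  # deg C, a commonly cited no-feedback response to doubled CO2
print(equilibrium_warming(dT0, 0.4))    # positive feedbacks amplify: ~2.0 deg C
print(equilibrium_warming(dT0, -0.25))  # negative feedbacks damp: ~0.96 deg C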
This new paper is one of many steps toward gradually improving overall future climate projections, a process that involves better modeling of both warming and cooling feedbacks.

"As we learn more about how these systems react, we can learn more about how the climate will change," said co-author Forrest Hall, of the University of Maryland-Baltimore County and Goddard Space Flight Center. "Each year we get better and better. It's important to get these things right just as it's important to get the track of a hurricane right. We've got to get these models right, and improve our projections, so we'll know where to most effectively concentrate mitigation efforts."

The results presented here indicate that changes in the state of vegetation may already be playing a role in the continental water, energy and carbon budgets as atmospheric carbon dioxide increases, said Piers Sellers, a co-author from NASA's Johnson Space Center, Houston, Texas.
"We're learning more and more about how our planet really works," Sellers said. "We have suspected for some time that the connection between vegetation photosynthesis and the surface energy balance could be a significant player in future climate. This study gives us an indication of the strength and sign of one of these biosphere-atmosphere feedbacks."


A new NASA modeling effort found that in a doubled-carbon dioxide world plant growth could lessen global warming by about 0.3 degrees C globally. The same model found that the world would warm by 1.94 degrees C without this cooling feedback factored in. Image: Great Smoky Mountains National Park. (Credit: National Park Service)

Tigers and Polar Bears Are Highly Vulnerable to Environmental Change

Large predators are much more vulnerable than smaller species to environmental changes, such as over-hunting and habitat change, because they have to work so hard to find their next meal, according to a new study.

Scientists matched studies of predator populations to the abundance of their prey and found that the largest species, such as lions, tigers or polar bears, had much greater declines in population due to diminishing food supplies than smaller species, such as weasels or badgers.

The review of studies of eleven species of carnivores, by researchers from Durham University and the Zoological Society of London, was published in the Royal Society journal Biology Letters. It suggests that the vulnerability of larger species may be linked with the high energetic costs of being "big."
The robustness and large size of these species, which are well suited for hunting large prey, might become a hindrance when times are tough, prey are rare, and individuals have to work harder to find their next meal.
Dr Philip Stephens, from the School of Biological and Biomedical Sciences, Durham University, said: "We found that the largest species exhibited a five to six fold greater decrease in relative abundance in response to a decrease in their prey.

"It's hard work being a large predator roaming and hunting across extensive areas to find food. The apparent vulnerability of tigers and polar bears to reductions in the availability of prey may be linked to the energetic costs of being a large carnivore."

The research has important implications for the conservation of our largest carnivore species, which seem to be especially vulnerable to environmental threats and changes in the abundance of prey.
Dr Chris Carbone, Senior Research Fellow, Institute of Zoology, the Zoological Society of London, said: "This study helps us to understand why large carnivores are particularly sensitive to environmental disturbance and why the protection and conservation of their habitat and, in particular, of their prey, are so important to global initiatives to save large carnivores in the wild."
Dr Phil Stephens added: "The study highlights the need for more detailed study to aid carnivore conservation and shows how much more remains to be understood about the relationship between predators and their prey."
Notes: Animals included in the study:
  • Least weasel
  • Arctic fox
  • Canadian lynx
  • European badger
  • Coyote
  • Wolf
  • Leopard
  • Spotted hyena
  • Lion
  • Tiger
  • Polar bear

Purple bacteria were among the first life forms on Earth. They are single-celled microscopic organisms that play a vital role in sustaining the tree of life. These tiny organisms live in aquatic environments such as the bottoms of lakes and the colorful corals under the sea, using sunlight as their source of energy. Their natural design seems to be the best structural solution for harvesting solar energy. Neil Johnson, a physicist and head of the inter-disciplinary research group in complexity in the College of Arts and Sciences at the University of Miami, thinks their cellular arrangement could be adapted for use in solar panels and other energy conversion devices to offer a more efficient way to garner energy from the sun.


"These bacteria have been around for billions of years, you would think they are really simple organisms and that everything is understood about them. However, purple bacteria were recently found to adopt different cell designs depending on light intensity," says Johnson. "Our study develops a mathematical model to describe the designs it adopts and why, which could help direct design of future photoelectric devices."


Johnson and his collaborators from the Universidad de los Andes in Colombia share their findings in a study entitled "Light-harvesting in bacteria exploits a critical interplay between transport and trapping dynamics," published in the current edition of Physical Review Letters.
Solar energy arrives at the cell in "drops" of light called photons, which are captured by the light-gathering mechanism of bacteria present within a special structure called the photosynthetic membrane. Inside this membrane, light energy is converted into chemical energy to power all the functions of the cell. The photosynthetic apparatus has two light harvesting complexes. The first captures the photons and funnels them to the second, called the reaction center (RC), where the solar energy is converted to chemical energy. When the light reaches the RCs, they close for the time it takes the energy to be converted.
According to the study, purple bacteria adapt to different light intensities by changing the arrangement of their light-harvesting mechanism, but not in the way one might intuitively expect.


"One might assume that the more light the cell receives, the more open reaction centers it has," says Johnson. "However, that is not always the case, because with each new generation, purple bacteria create a design that balances the need to maximize the number of photons trapped and converted to chemical energy, and the need to protect the cell from an oversupply of energy that could damage it."


To explain this phenomenon, Johnson uses an analogy comparing it to what happens in a typical supermarket, where the shoppers represent the photons, and the cashiers represent the reaction centers.


"Imagine a really busy day at the supermarket, if the reaction center is busy it's like the cashier is busy, somebody is doing the bagging," Johnson says. "The shopper wonders around to find an open checkout and some of the shoppers may get fed up and leave…The bacteria are like a very responsible supermarket," he says. "They would rather lose some shoppers than have congestion on the way out, but it is still getting enough profit for it to survive."


The study develops the first analytical model that explains this observation and predicts the "critical light intensity," below which the cell enhances the creation of RCs. That is the point of highest efficiency for the cell, because it contains the greatest number and best location of opened RCs, and the least amount of energy loss.
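The supermarket picture can be made concrete with a toy simulation: photons arrive at random, each open reaction center that absorbs one stays closed while the energy is processed, and photons that find every center closed are lost. The Python sketch below only illustrates that congestion effect, with made-up parameters; it is not the analytical model published in the paper.

import random

def fraction_trapped(arrival_rate, n_rc, processing_time, duration=50_000, seed=1):
    """Toy model: share of arriving photons absorbed by n_rc reaction centers,
    each of which stays closed for processing_time after absorbing a photon."""
    random.seed(seed)
    free_at = [0.0] * n_rc  # time at which each reaction center reopens
    t, trapped, total = 0.0, 0, 0
    while t < duration:
        t += random.expovariate(arrival_rate)  # next photon arrival
        total += 1
        for i, reopen_time in enumerate(free_at):
            if reopen_time <= t:  # an open reaction center traps the photon
                free_at[i] = t + processing_time
                trapped += 1
                break
        # if every reaction center was still closed, the photon is simply lost
    return trapped / total

# With a fixed number of reaction centers, brighter light means a smaller share
# of photons is trapped -- the congestion the bacteria balance against damage.
for rate in (0.5, 1.0, 2.0, 4.0):
    print(rate, round(fraction_trapped(rate, n_rc=10, processing_time=5.0), 2))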
Because these bacteria grow and repair themselves, the researchers hope this discovery can contribute to the work of scientists attempting to coat electronic devices with especially adapted photosynthetic bacteria, whose energy output could become part of the conventional electrical circuit, and guide the development of solar panels that can adapt to different light intensities.
Currently, the researchers are using their mathematical model, with the help of supercomputers, to try to find a photosynthetic design even better than the one they found in purple bacteria, although outsmarting nature is proving to be a difficult task.

Unique Orangutan Reintroduction Project Under Imminent Threat

An investigation found that since 2004, companies affiliated with Asia Pulp & Paper/Sinar Mas Group have sought out selective logging concessions with dense natural forests in the Bukit Tigapuluh landscape. The companies obtained government licenses to switch the forest status to industrial timber plantation concessions, sometimes under legally questionable circumstances. This allows for clearcutting and planting of commercial plantations, displacing the indigenous forest-dwelling tribes and endangered species that live there. This is in breach of the company's claims that it does not clear high-quality forest.

"Our investigation found that in the last six years, the company in this landscape alone contributed to loss of about 60,000 hectares of forest without appropriate professional assessments or stakeholder consultation," said Susanto Kurniawan of Eyes on the Forest. "This is one of very few remaining rainforests in central Sumatra; therefore we urge the Government not to give it away to APP/SMG, who will mercilessly eliminate it and devastate local communities and biodiversity."

Bukit Tigapuluh harbors close to 320,000 hectares of natural forest, with around 30 tigers, 150 elephants and 130 rescued orangutans that were released here. "These great apes are the survivors of the illegal pet trade who were confiscated and are finally getting a chance to live and breed again in the wild," said Julius Paolo Siregar of the Frankfurt Zoological Society. "Forest conversion plans mean certain death for many of them."

It is also home to two forest-dwelling tribes -- the Orang Rimba and Talang Mamak -- who are "being driven off their ancestral land by APP and other companies," said Diki Kurniawan from WARSI. "Many must now beg for rice handouts to survive."

Bukit Tigapuluh has been deemed one of 20 landscapes critical to the long-term survival of tigers by international scientists. In November, Indonesia pledged at a global tiger summit to make it a focal area for tiger conservation.
"The Bukit Tigapuluh landscape is a major test of Indonesia's $1 billion climate agreement with the Kingdom of Norway," said Aditya Bayunanda of WWF-Indonesia. "We stand ready to help the Government find ways to protect the forest and Indonesia's natural heritage."

Wednesday, December 15, 2010

Computer Contributing to Carbon Footprint

How much is that new computer server contributing to your company's carbon footprint? What about the laptop you bought your child for Christmas? As it turns out, answering those questions may be more difficult than you might think.

The results of a recent study by Carnegie Mellon's Christopher Weber found that the calculation of carbon footprints for products is often riddled with large uncertainties, particularly related to the use of electronic goods.

Weber, an adjunct professor in the university's Department of Civil and Environmental Engineering and a research staff member at the Science & Technology Policy Institute in Washington, D.C., found that a host of variables, from production and shipping to the technology used in creating a product, can alter the accuracy of carbon footprint labeling.

In particular, Weber and his team studied an IBM computer server. "We found that the use phase of the server accounted for an estimated 94 percent of the total greenhouse gas emissions associated with the product," said Weber. "This finding confirmed the importance of IBM's ongoing efforts to increase energy efficiency of its server products and the data centers where servers are used."

However, while the study confirmed the importance of server energy efficiency on the product's overall carbon footprint, it also highlighted the large uncertainties in quantifying the server's carbon footprint. "Variability in the electricity mixes of different markets led to vastly different impacts of product use and greenhouse gas emissions in different geographic locations," said Weber. "Further, complex systems requiring integrated circuits and several generations of technology increase the uncertainty of carbon footprint estimation for electronic goods," Weber said.
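A minimal sketch helps show why the electricity mix dominates: use-phase emissions are roughly power draw times hours of operation times the grid's carbon intensity, and that last factor differs several-fold between markets. All of the numbers below are illustrative assumptions, not figures from Weber's study.

# Illustrative lifecycle sketch for a server (assumed numbers, not the study's data).
EMBODIED_KG_CO2E = 1000.0  # assumed manufacturing + shipping emissions, kg CO2e

def lifetime_footprint(power_kw, years, grid_kg_per_kwh):
    """Total footprint (kg CO2e) and the share contributed by the use phase."""
    use_phase = power_kw * years * 365 * 24 * grid_kg_per_kwh
    total = EMBODIED_KG_CO2E + use_phase
    return total, use_phase / total

# The same server on two hypothetical grids (kg CO2e per kWh):
for label, intensity in [("coal-heavy grid", 0.9), ("low-carbon grid", 0.1)]:
    total, use_share = lifetime_footprint(power_kw=0.4, years=4, grid_kg_per_kwh=intensity)
    print(label, round(total), "kg CO2e,", f"{use_share:.0%} from the use phase")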

Still, more and more companies globally are seeking to estimate the carbon footprints of their products, and some are going further, assessing their products' environmental impacts on water use and pollution, which can be even more difficult to estimate.

"Given the increased interest in product carbon footprints, we need to continue to question the accuracy of carbon footprint techniques, especially for complex information technology products. At this point, carbon footprint estimation methodologies are not accurate enough to warrant putting footprint labels on most products,'' said Weber.

To calculate your own carbon footprint, visit http://www.carbonfootprint.com/