Coding the atmosphere: The quest for software that’s flexible, versatile, affordable, and accurate
A few simple numbers say a great deal about the power and usefulness of the community weather and climate models developed with strong NCAR involvement. The Weather Research and Forecasting model is being used by scientists in more than 90 nations, and daily weather forecasts reaching hundreds of millions of people are based in part on WRF guidance. Meanwhile, the Community Climate System Model has carried out exhaustive simulations of global climate for the upcoming century. Hundreds of university scientists are running CCSM on their home institution’s computers or through NCAR’s community-dedicated Climate Simulation Laboratory.
With so many circuits humming, there’s strong incentive to make these and other modeling packages as nimble and efficient as possible. NCAR is reaching in new directions to improve the quality of its community-oriented software. The center is also experimenting with novel ways to approach decades-old problems, such as simulating the turbulent eddies that course through our atmosphere.
Statistics that illuminate
“We get to dabble in everything. It’s like a big banquet of problems,” says Douglas Nychka, director of NCAR’s Institute for Mathematics Applied to Geosciences (IMAGe), formed in 2004. Building on prior work at NCAR, the institute’s influence stretches across much of the center and along many paths in the geoscience community.
One of IMAGe’s main goals is to advance weather and climate modeling through the application of flexible mathematical models and methods. Nychka cites geophysical turbulence as an example that lends itself to multifaceted analysis. Whether on the Sun or in Earth’s atmosphere or oceans, turbulence has been an important research topic at NCAR for decades, but the computational power to grapple with some turbulence problems has arrived only recently.
IMAGe’s turbulence experts have made significant progress in modeling the behavior of the currents that prevail in Earth’s magnetosphere and the solar corona. New NCAR software can track magnetohydrodynamic turbulence as it unfolds and decays, mapped on a cubic grid with more than 3.6 billion points. In a 2006 study, NCAR’s Pablo Mininni and Annick Pouquet, with David Montgomery (Dartmouth College), found that sheets of current roll into swirls driven by the same instabilities that produce wave-like Kelvin-Helmholtz clouds. “These are structures that you wouldn’t discern by just looking at the physical equations,” says Nychka.
Data assimilation is another long-time NCAR activity gaining a higher profile from its new home in IMAGe. The institute serves scientists working to infuse models with fresh data from satellites and a variety of other tools. Although many weather and climate models are chronically undernourished by a lack of data, a rich but unbalanced diet can cause its own problems. For example, the models can be overwhelmed by pockets of high data density, such as observations collected by aircraft along common flight paths amid otherwise data-sparse regions. IMAGe is exploring a technique called adaptive inflation to address this statistical challenge.
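The core operation behind inflation can be sketched in a few lines. The snippet below is illustrative only, not DART’s adaptive algorithm (which estimates the inflation factor from the observations themselves): it simply widens an ensemble’s spread about its mean by a chosen factor, the basic move that keeps dense observations from making the model overconfident. The function name and the toy ensemble are hypothetical.

```python
import numpy as np

def inflate(ensemble, factor):
    """Inflate ensemble spread about its mean by a given factor.

    Hypothetical helper illustrating covariance inflation: member
    deviations from the ensemble mean are scaled by sqrt(factor), so the
    ensemble variance grows by `factor` while the mean is unchanged.
    """
    mean = ensemble.mean(axis=0)
    return mean + np.sqrt(factor) * (ensemble - mean)

# Toy ensemble: 5 members, 3 state variables
rng = np.random.default_rng(0)
ens = rng.normal(size=(5, 3))
inflated = inflate(ens, 1.5)
```

In an adaptive scheme, the factor would vary in space and time, growing where incoming data would otherwise collapse the ensemble spread.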
In addition, says project leader Jeffrey Anderson, “We’ve developed a data assimilation system that’s model- and observation-independent as much as possible.” The Data Assimilation Research Testbed (DART) gives developers an off-the-shelf system that can be adapted in a few weeks or months, as opposed to the years-long assimilation efforts common at major prediction centers. “We’re looking for partners in the university community who haven’t been able to do data assimilation before,” says Anderson.
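The kind of update a system like DART performs can be sketched for a single observed variable. This is not DART code; it is a generic ensemble-adjustment-style update under textbook Gaussian assumptions: the ensemble mean shifts toward the observation and the spread contracts, each weighted by the prior and observation error variances.

```python
import numpy as np

def assimilate_scalar(prior, obs, obs_var):
    """One ensemble-adjustment-style update for a scalar observed variable.

    Illustrative sketch (not DART's implementation): compute the Gaussian
    posterior mean and variance, then linearly adjust each ensemble member
    by recentering and rescaling its deviation from the mean.
    """
    prior_mean = prior.mean()
    prior_var = prior.var(ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean + np.sqrt(post_var / prior_var) * (prior - prior_mean)

# Five-member toy prior ensemble; observe the variable as 2.0 +/- 1.0
prior = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
posterior = assimilate_scalar(prior, obs=2.0, obs_var=1.0)
```

Because each update is a generic statistical operation on ensemble members, the same machinery can wrap around any model that can supply an ensemble, which is what makes a model-independent design possible.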
At the University of Wisconsin–Madison, Eric DeWeaver is using the NCAR-based Community Atmosphere Model with DART to study Arctic climate. “The Arctic is a complicated place, with all sorts of delicate climate feedbacks,” DeWeaver points out. Errors in midlatitude storm tracks, for instance, can jeopardize how well models portray sea ice. DeWeaver is assimilating data from outside the Arctic to measure improvements in polar climate. He’s also studying the problem in reverse: “What is the effect of ice-cap circulation biases over the rest of the world?”
DeWeaver urges colleagues to try out DART. “Data assimilation is typically presented as an infinitely complicated and inaccessible process that can only happen at a forecast center, through the efforts of a cast of thousands. But DART makes these tools accessible to the community. Now folks on the academic side, with just a few resources, can see what they have to offer.”
The international reach of WRF
Like no other model, WRF has stormed the weather world over the last few years, entering the toolboxes of researchers and forecasters across the globe. Operational versions of the Weather Research and Forecasting model were adopted in 2006 by NOAA’s National Weather Service and the U.S. Air Force Weather Agency (AFWA). More recently, national weather agencies in Taiwan, South Korea, China, and India adopted the model, helping make it perhaps the most widely used operational weather model in history. At the same time, scientists are using WRF for a wide range of basic research and forecasting applications. Hundreds of scientists have taken advantage of in-person and online training (much of it coordinated through NCAR), and many dozens of papers are based on WRF runs.
Until WRF came along, forecasters and university scientists rarely used the same models. The intent of the WRF partnership, forged in the late 1990s, was to create a single system that could serve both groups. The partners in that effort have included NCAR, NOAA, AFWA, the Naval Research Laboratory and Army Research Laboratory, the Federal Aviation Administration, the University of Oklahoma, and more than 150 other universities, laboratories, and agencies from across the nation and beyond.
Scientists at partner institutions play a key role in creating and refining WRF’s smorgasbord of options. NCAR coordinates the inclusion of new model components, most of which are released for community use. Some of the model configurations are evaluated at the NCAR-based Joint Numerical Testbed. “WRF is a true community effort,” notes JNT head Robert Gall. Many recent model components have been developed by leading scientists from across the community.
“The direct community involvement helps ensure that the WRF system expands and develops according to genuine community priorities and needs,” says Gall. He adds that the blend of community development with centralized support and testing is similar to the concept that lies behind open-source software such as Linux.
With its third version, to be released early in 2008, WRF will truly become a global model. Through an effort led by Mark Richardson and Claire Newman (California Institute of Technology), NCAR’s William Skamarock, and Anthony Toigo (Cornell University), the newest WRF will be able to simulate fine-scale conditions across the entire planet, supporting a wide range of weather, climate, and air-quality research. “This new model provides an exciting prospect for studying both terrestrial and planetary atmospheres,” says Richardson. He envisions the model being used to analyze interactions between large and small scales for phenomena ranging from severe thunderstorms in the Midwest to dust storms on Mars.
Of clouds and climate
A new NSF science and technology center based at Colorado State University, headed by CSU’s David Randall, aims to dramatically improve the way clouds are portrayed in global models. The Center for Multiscale Modeling of Atmospheric Processes (CMMAP) opened in 2006, with UCAR and NCAR among its primary partners.
“People have been struggling with this problem for 40 years,” says Randall. As anyone who has ever gazed skyward knows, clouds come in a stupendous variety of sizes, shapes, and types. Most climate models are forced to summarize cloud behavior across grid boxes that typically extend at least 50 by 50 km (30 by 30 mi) horizontally and at least 100 meters (330 feet) vertically. There isn’t much room for detail within the grid boxes, given the limits of computing resources. Thus, important but complex phenomena such as overlapping cloud layers and the effects of turbulence are depicted only crudely.
For example, those who take flights over cool subtropical oceans often see the vivid contrast of small, bright white cloud cells against dark blue ocean. It’s the signature look of marine stratocumulus clouds—huge yet thin sheets that produce a mackerel-sky effect. NCAR’s Chin-Hoh Moeng, deputy director of CMMAP, has spent years modeling this type of cloud. Marine stratocumulus carry a lot of moisture from the ocean, Moeng points out. They also reflect a great deal of sunlight, she adds, making them a key part of the global radiation budget. Yet marine stratocumulus are still not directly depicted in global climate models.
To address these deficiencies, CMMAP is using cloud-system–resolving models (CSRMs). Each CSRM covers a small enough area at sharp enough resolution (typically about 2 km, or 1.2 mi) to depict many cloud types in a fairly realistic way. Although each CSRM covers only a fraction of a grid cell, the technique gives modelers a far clearer picture of local cloud behavior. That picture is being extended, in various ways, across each grid cell and thus across the globe.
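The coupling idea behind this multiscale approach can be sketched abstractly. The snippet below is a hypothetical skeleton, not CMMAP’s model: each coarse grid column carries an embedded fine-scale state that a stand-in CSRM integrator advances, and the subgrid mean is fed back as the column’s grid-scale value. All names here are invented for illustration.

```python
import numpy as np

def superparameterized_step(columns, csrm_step):
    """One hypothetical multiscale step: embedded fine-scale models update
    each coarse column, then feed their mean state back to the grid scale.

    `csrm_step` stands in for a real cloud-system-resolving integrator.
    """
    for col in columns:
        col["subgrid"] = csrm_step(col["subgrid"])      # fine-scale update
        col["value"] = float(np.mean(col["subgrid"]))   # coarse feedback
    return columns

# Toy embedded "CSRM": relax subgrid points toward their own mean
def toy_csrm_step(subgrid):
    return subgrid + 0.1 * (subgrid.mean() - subgrid)

cols = [{"value": 0.0, "subgrid": np.array([0.0, 1.0, 2.0])}]
superparameterized_step(cols, toy_csrm_step)
```

The design choice is the point: the global model never resolves the clouds itself, but each grid cell inherits statistics from an explicit fine-scale simulation instead of a one-number parameterization.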