UCAR > Communications > UCAR Quarterly > Spring 2001

Spring 2001

The edge of predictability: NCAR team pushes it forward

by Bob Henson

Weather predictions in the 72-hour window have improved markedly over the past 20 years. Scientists are now predicting seasonal trends months in advance, based on the state of the El Niño/Southern Oscillation (ENSO). In between these time frames, a chasm of poor forecast skill still yawns. Scientists in NCAR's Global Dynamics Section (GDS) have worked for decades to bridge that gap. Now, with some fresh approaches and an infusion of energy from several postdoctoral and graduate-student researchers, the group may have the tools it needs. Collaborations with university scientists and operational forecast centers are yielding new hope of advancing the frontier of useful forecasts and better assessing their reliability.

NCAR's Global Dynamics Section includes (left to right) Grant Branstator, Ronald Errico, Alessandra Giannini, Judith Berner, Isla Gilmour, Joseph Tribbia, Kevin Raeder, and R. Saravanan. (Photo by Carlye Calvin.)

In a 1959 paper, Philip Thompson, part of the early scientific leadership at NCAR, was the first to pose the question of how to determine the sensitivity of numerical weather forecasts to errors in the initial state. Edward Lorenz—Thompson's former classmate and a frequent NCAR visitor from the Massachusetts Institute of Technology (MIT)—started the race to quantify the uncertainty with a landmark 1963 paper. Lorenz introduced the notion that became known as the butterfly effect: the idea that tiny disturbances in the atmosphere, such as the flapping of a butterfly's wings, can grow in a nonlinear, unpredictable way to sabotage long-range weather forecasts.

A year later, Lorenz and a stellar group of international specialists assembled in Boulder to study the outer limits of forecast potential. Jule Charney of MIT organized a set of model experiments at several labs. They showed that the typical doubling time for small model errors was five days and that the limit for useful forecasts was about two weeks.
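
The error growth those experiments measured can be illustrated with Lorenz's own 1963 three-variable system. The sketch below is a toy illustration, not one of the models from the Charney-era experiments: it integrates two copies of the system whose initial states differ by a tiny amount (1e-8), then estimates a doubling time from the growth of their separation. The parameters are the standard ones from Lorenz's paper, and the doubling time it prints is in dimensionless model time, not days.

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One 4th-order Runge-Kutta step of the Lorenz (1963) system.
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Twin experiment: identical model, initial states differing by 1e-8.
truth = (1.0, 1.0, 1.0)
for _ in range(1000):          # spin up onto the attractor
    truth = lorenz_step(truth)
perturbed = (truth[0] + 1e-8, truth[1], truth[2])

d0 = separation(truth, perturbed)
errors = []
for _ in range(800):           # 8 units of model time
    truth = lorenz_step(truth)
    perturbed = lorenz_step(perturbed)
    errors.append(separation(truth, perturbed))

# A log-linear growth rate over the exponential phase gives a doubling
# time in model time units (not the ~5 days found for the atmosphere).
growth = (math.log(errors[-1]) - math.log(d0)) / (800 * 0.01)
print("doubling time (model time units):", math.log(2) / growth)
```

Because the perturbation is still far from saturation after eight time units, the growth is close to exponential and the fitted doubling time is well defined.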

Like an index of scientific confidence, the length of that outer limit stretched and shrank over the succeeding years. By 1980, researchers had settled on a limit of about ten days. This ten-day window became standard for the operational models of what are now the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF).

NCAR postdoc Isla Gilmour and scientist David Baumhefner are examining regime transitions at the 500-millibar level using 50 years of data from the NCEP/NCAR reanalysis project. Shown here is a regime transition unfolding from 25 December (top) to 25 January (bottom) during the winter of 1964–65. Heights at 500 mb are averaged between 30° and 50°N, with longitudes along the bottom of the graph. Contour intervals appear at right, in meters. Anomalies of at least 40 m that last for at least eight days are shown by the jagged black lines. A major regime transition centered on 10 January shows the reversal of low- and high-pressure anomalies over the western United States and the central Pacific. Gilmour and Baumhefner are studying the relative roles of model physics and initialization in the ability of models to predict such transitions.
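
The anomaly criterion described in the caption (excursions of at least 40 m that persist for at least eight days) amounts to a run-length threshold on a daily series. A minimal sketch of that bookkeeping, applied to invented numbers rather than the reanalysis data:

```python
def persistent_anomalies(series, threshold=40.0, min_days=8):
    """Return (start, end, sign) for runs where |anomaly| >= threshold
    with a single sign for at least min_days consecutive samples."""
    runs, start, sign = [], None, 0
    for i, v in enumerate(series):
        s = 1 if v >= threshold else -1 if v <= -threshold else 0
        if s != sign:
            if sign != 0 and i - start >= min_days:
                runs.append((start, i - 1, sign))
            start, sign = i, s
    if sign != 0 and len(series) - start >= min_days:
        runs.append((start, len(series) - 1, sign))
    return runs

# Synthetic daily 500-mb height anomalies (metres): a 10-day positive
# regime, a brief weak spell, then a 9-day negative regime.
series = [5] * 3 + [60] * 10 + [-10] * 4 + [-55] * 9 + [0] * 4
print(persistent_anomalies(series))   # -> [(3, 12, 1), (17, 25, -1)]
```

A regime transition like the one in the figure would show up as a positive run handing off to a negative run (or vice versa) at nearby longitudes.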

Once these models were hitting the assumed limit of predictive skill each day, scientists figured out how to exploit the errors that kept the models in check. The breakthrough was ensemble modeling, first proposed in 1974 by Cecil (Chuck) Leith (then at NCAR, later at Lawrence Livermore National Laboratory). "This is the point where predictability moved from a research topic to an applied topic," says GDS head Joe Tribbia.

Ensembles are created using ten or more simultaneous runs of the same model. Researchers randomly tweak the initial conditions for each run, spanning the range of error known to be present at the starting line. There is no way to tell in advance which of the ten forecasts in an ensemble will wind up being closest to correct. Still, the actual weather usually ends up within the ensemble range—and through retrospective studies, one can get ten or more times the insight on how model errors grow.
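
The recipe can be sketched with a toy stand-in for a forecast model; here the chaotic logistic map plays that role (a hypothetical substitute — operational ensembles perturb the full model state). Members start from randomly perturbed copies of the same analysis, and the spread among them grows with lead time:

```python
import random

def model(x, r=3.9):
    # Chaotic logistic map standing in for one time step of a forecast model.
    return r * x * (1.0 - x)

def run(x, steps):
    for _ in range(steps):
        x = model(x)
    return x

random.seed(42)
analysis = 0.3                 # best guess of the initial state
n_members, obs_error = 20, 1e-3

# Each member starts from the analysis plus a random perturbation drawn
# from the assumed range of observational error.
members = [analysis + random.uniform(-obs_error, obs_error)
           for _ in range(n_members)]

spreads = {}
for lead in (1, 10, 40):
    states = [run(m, lead) for m in members]
    spreads[lead] = max(states) - min(states)
    print(f"lead {lead:2d} steps: ensemble spread = {spreads[lead]:.4f}")
```

The spread at each lead time is the forecast's own estimate of its uncertainty, which is exactly the quantity the operational centers attach to their ensemble products.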

GDS members collaborated with the ECMWF on some of the first ensemble modeling. In the 1990s, the section's links to the ECMWF and the Naval Research Laboratory's model development group strengthened, as it became clear how ensembles could be used to investigate the roots of forecast error.

GDS's hot topics

  • Regime shifts. Forecasting regime shifts—the transitions into and out of weather patterns that persist for a week or more—could be especially useful to society. Postdoc Isla Gilmour is studying regimes using 50 years' worth of data from the NCEP/NCAR reanalysis project (available from NCAR's Scientific Computing Division). "There's evidence to show that predicting the persistence [of the regime] is easier than forecasting the shift," says Gilmour. She is collaborating with David Baumhefner, a 20-year veteran of predictability research.

    One of Gilmour's goals is to see whether poor forecast skill is due to the physics of the model or to the initialization techniques—the ways in which observed data are brought into the starting point of a model cycle. In one test case, Baumhefner and Stephen Colucci (Cornell University) found that a single forecast initiated from the best guess of the initial state failed to capture the formation of a northeast Atlantic block. When the researchers used an ensemble of forecasts initiated from a variety of states to represent the uncertainty in measurements, they captured the possibility of a block formation. Gilmour is finding that this result holds in the more general regime scenario.

  • Predicting beyond the edge. Grant Branstator wants to know what information can be gleaned at the outer edge of current weather forecasts, in the one- to three-week period. "Rather than focus on the unpredictable parts of the flow, we try to find things that are predictable . . . patterns and structures that aren't as susceptible to error growth." One approach that he and graduate student Judith Berner are taking is to hunt for equilibrium points: two or more states between which a weather regime might oscillate. "Synoptic meteorologists think they've seen this kind of behavior for a long time, but it's been difficult to prove," says Berner. "It turns out that this kind of behavior is very subtle." Using an early low-resolution version of NCAR's Community Climate Model, Berner is looking for nonlinear behavior that produces multiple equilibrium points.

    Branstator is also hunting for more linear behavior that might persist to the edge of the forecast limit and beyond. If you consider long enough timescales, he says, "the nonlinearity looks like noise," and a useful signal could be hidden within. "The behavior's richer than [the early work of] Lorenz would suggest."

  • Beyond El Niño. Although ENSO is accepted as the leading influence on multiseasonal climate, there are other major ones. R. Saravanan has been working with Ping Chang (Texas A&M University) to study an often-overlooked region: the tropical Atlantic. Even that far from the Pacific, ENSO is an important index for seasonal prediction, according to Saravanan, but "I'm focusing on the next-order signal in magnitude." Model experiments point to tropical Atlantic sea-surface temperatures as that signal. For example, it's established that if the sea-surface temperatures east of the South American coast are known, one can predict the rainfall in northeast Brazil with notable accuracy. A new postdoc, Alessandra Giannini, will be using model ensembles to see how this factor could strengthen seasonal prediction.
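
A statistical link of the kind Saravanan describes (known sea-surface temperatures predicting Brazilian rainfall) is often summarized by a simple least-squares regression of one seasonal index on another. The numbers below are invented for illustration; they are not observed SST or rainfall values, and the sign and size of the slope are arbitrary:

```python
# Least-squares fit of a rainfall anomaly on an SST-anomaly index,
# using made-up illustrative numbers (NOT observed data).
sst = [-1.2, -0.8, -0.3, 0.1, 0.4, 0.9, 1.3]            # SST anomaly index
rain = [55.0, 40.0, 18.0, -2.0, -15.0, -48.0, -60.0]    # rainfall anomaly, mm

n = len(sst)
mx = sum(sst) / n
my = sum(rain) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(sst, rain))
         / sum((x - mx) ** 2 for x in sst))
intercept = my - slope * mx

def predict(x):
    # Predicted rainfall anomaly for a given SST anomaly.
    return intercept + slope * x

print(f"slope = {slope:.1f} mm per unit SST anomaly")
print(f"predicted anomaly at SST index 0.5: {predict(0.5):.1f} mm")
```

An ensemble approach like Giannini's would replace the single fitted line with a distribution of outcomes for each SST state.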
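
The multiple equilibrium points that Berner is hunting for can be illustrated with a one-variable toy system, dx/dt = x - x^3, which has three equilibria (x = -1, 0, +1), two of them stable. The root-finding below is a hypothetical stand-in for the analysis done on a full model's tendency equations:

```python
def f(x):
    # Toy nonlinear tendency with three equilibria: x = -1, 0, +1.
    return x - x ** 3

def integrate(x, dt=0.01, steps=2000):
    # Forward-Euler integration; trajectories settle into a stable state.
    for _ in range(steps):
        x += dt * f(x)
    return x

def bisect(a, b, tol=1e-10):
    # Find a zero of the tendency (an equilibrium) between a and b,
    # assuming f changes sign on [a, b].
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) < 0) == (fa < 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

equilibria = [bisect(-2.0, -0.5), bisect(-0.5, 0.5), bisect(0.5, 2.0)]
print("equilibria:", [round(e, 6) for e in equilibria])
print("x0 = +0.1 settles at", round(integrate(0.1), 6))
print("x0 = -0.1 settles at", round(integrate(-0.1), 6))
```

Nearby initial states end up in different stable equilibria, which is the oscillation-between-regimes behavior that synoptic meteorologists suspect and that is so subtle to prove in a full model.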

Taking it to the community

GDS's research is filtering into practice through its ties with forecasting centers. Baumhefner recently visited the Fleet Numerical Meteorology and Oceanography Center, the U.S. military's main weather-modeling center, to help weave a new technique for analyzing perturbations into the center's ensemble modeling. "It's designed to reflect what we think we know about analysis of uncertainty," says Ronald Errico, one of the scientists behind the technique. The Navy had considered importing a more complex algorithm from ECMWF, says Errico, but the NCAR software allows them to accomplish the same goal, with almost no increase in modeling time needed.

In recent years, GDS has supplemented its core work under NSF sponsorship with a roughly equal amount of support from NOAA, NASA, the Navy, and the U.S. Weather Research Program. The motivation for GDS science remains the same, says Tribbia: paving the way for better weather forecasts. "We're trying to do research on some of the more basic issues regarding predictability, the outstanding issues that aren't being tackled by the operational centers or within most other laboratories."

Other prediction research at NCAR

NCAR has more than one center of action in predictability. The Geophysical Statistics Project, now in its eighth year, is a unique place for atmospheric scientists and statisticians to work together. Led by Douglas Nychka, the project is applying statistical theory and other mathematical tools to determine where models and forecasts can be improved and how best to do it. A number of published papers and others in progress can be reviewed on the project's Web site.

NCAR's Mesoscale and Microscale Meteorology Division (MMM) has made the predictability of precipitating weather systems one of its two main research themes for the next five years, an effort coordinated by Joseph Klemp. With support from the U.S. Weather Research Program, which is keenly interested in rainfall and snowfall forecasts, the division is examining the particulars of convection, tropical cyclones, and mountain effects and working to improve data assimilation in mesoscale and larger-scale models. For instance, MMM scientists Richard Rotunno and Chris Snyder and postdoc Fuqing Zhang have been studying the intense East Coast snowstorm of 24–25 January 2000, which was poorly handled by models at the time. They found that, although a higher-resolution model would have produced better forecasts of the rain and snow patterns, even very small changes in the initial data still would have led to significant changes in the precipitation forecast. "This suggests that the limits of predictability [for precipitation] might not be too far off," says Snyder. A summary of MMM's five-year research plan can be found on the Web.

MMM's involvement in fieldwork on adaptive observing (weather sensors deployed at targeted locations to collect specific data that are needed to enhance model performance) has led to a new focus. "It turns out that the information required to do a rigorous job of adaptive observation is also the key to improving data assimilation," says Snyder. While he was a postdoc in NCAR's Advanced Study Program, Thomas Hamill (now with the NOAA-CIRES Climate Diagnostics Center, or CDC) worked with Snyder on a promising new way to create operational ensembles, called the ensemble Kalman filter. This technique not only provides a short-term estimate in probabilistic terms of how well the model is performing, it also helps researchers choose better sites for adaptive observations and carry out data assimilation more effectively. Hamill and CDC colleague Jeffrey Whitaker plan to test the ensemble Kalman filter using the Medium-Range Forecast (MRF) model. Whether the technique will prove practical for such full-scale models remains to be seen, says Snyder, but results from simpler models are encouraging.
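
The ensemble Kalman filter's analysis step can be sketched for a single scalar variable. This is a drastic simplification of the research described above, using the "perturbed observation" variant of the filter with invented numbers: each member is nudged toward a noisy copy of the observation, with a gain computed from the ensemble's own variance.

```python
import random

random.seed(0)

def enkf_update(members, obs, obs_var):
    """Stochastic ensemble Kalman filter analysis step for a scalar state:
    nudge each member toward a perturbed copy of the observation, weighted
    by a Kalman gain estimated from the ensemble variance."""
    n = len(members)
    mean = sum(members) / n
    var = sum((m - mean) ** 2 for m in members) / (n - 1)
    gain = var / (var + obs_var)            # Kalman gain
    return [m + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - m)
            for m in members]

# Prior ensemble scattered around 10; the observation says 12, variance 1.
prior = [10.0 + random.gauss(0.0, 2.0) for _ in range(50)]
posterior = enkf_update(prior, obs=12.0, obs_var=1.0)

prior_mean = sum(prior) / len(prior)
post_mean = sum(posterior) / len(posterior)
prior_var = sum((m - prior_mean) ** 2 for m in prior) / (len(prior) - 1)
post_var = sum((m - post_mean) ** 2 for m in posterior) / (len(posterior) - 1)
print(f"mean {prior_mean:.2f} -> {post_mean:.2f}, "
      f"variance {prior_var:.2f} -> {post_var:.2f}")
```

The posterior ensemble is pulled toward the observation and tightened, and its remaining spread is the probabilistic estimate of model performance mentioned above; in a real system the same covariances indicate where an extra adaptive observation would do the most good.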



Edited by Carol Rasmussen, carolr@ucar.edu
Prepared for the Web by Jacque Marshall
Last revised: Thu Jun 21 18:56:13 MDT 2001