The National Center for Atmospheric Research | UCAR | UOP

UCAR Quarterly

A closer look at today’s forecast

NCAR researchers study how people use weather predictions

by Bob Henson

It’s been more than 40 years since Americans were first introduced to probabilities of precipitation. Instead of simply conveying a threat of rain or snow without using numbers, forecasters began to include percentages, as in “a 60% chance of showers.” In a March 1966 article, Reader’s Digest called the new technique “a better way of reporting on tomorrow’s weather.”

Decades later, most U.S. residents still aren’t precisely sure what these probability-of-precipitation forecasts mean. However, they do get the general idea—and many of them want more such uncertainty information. These are among the insights arising from a major survey conducted by NCAR’s Societal Impacts Program. SIP involves scientists from several NCAR laboratories, with funding from the U.S. Weather Research Program.

Researchers in NCAR’s Societal Impacts Program studying public interpretation of forecasts include (left to right) Rebecca Morss, Julie Demuth, and Jeffrey Lazo. (Photo by Carlye Calvin.)

“Weather forecasts are unavoidably uncertain,” notes Rebecca Morss, “but most public forecasts come with, at best, limited information about their uncertainty.” Morss and SIP colleagues Julie Demuth and Jeffrey Lazo are sifting through the extensive data gleaned from their 2006 survey. They presented results at the American Meteorological Society’s 2008 annual meeting in New Orleans, and one paper is in press, with more to come.

One of the group’s goals is to provide grounding and perspective for the National Weather Service (NWS) and other providers who are now contemplating how best to improve public weather forecasts. The idea for the survey arose after a 2006 National Research Council report on estimating and communicating uncertainty in weather and climate prediction. Morss and NCAR’s Barbara Brown served on the panel that produced Completing the Forecast (see “On the Web”).

“Meteorologists often find it challenging to communicate uncertainty effectively,” Morss says. She and her colleagues found that relatively little research on communicating the uncertainty in weather forecasts had been published since the 1980s. “Being on the NRC panel helped clarify for us that this was an important priority area that hadn’t been adequately addressed,” she says.

Late in 2006, SIP conducted a nationwide, controlled-access, Internet-based survey with more than 1,500 U.S. respondents. The sample was checked to ensure geographic, demographic, and ethnic diversity; respondents came from every U.S. state, with gender and race characteristics similar to those of the public at large. The survey questions drew on previous research, asking people how important weather is to them as well as how they regard, interpret, and use forecasts.

Would you like probabilities with that?

Thanks to the progress of weather research and the advent of new technologies, there’s now a basis for creating far more detailed forecasts, incorporating shades of gray that don’t fit into the usual template. For example, say that forecasters predict a 20% chance that a cold front will arrive soon enough to hold tomorrow’s high down to 70°F (21.1°C), but otherwise they expect temperatures to soar to 85°F (29.4°C).

How best to present these options in a single outlook? More than 90% of the respondents liked having something other than a flat forecast of 85°F, and many also wanted to know why the forecast was uncertain. Among seven alternatives offered in the survey, this was the favorite option: “The high temperature tomorrow will most likely be 85°F, but it may be 70°F, because a cold front may move through during the day.”
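The survey’s cold-front scenario can be worked through in a few lines. The sketch below (plain Python, with the 20%/80% split taken from the scenario above) contrasts the single most likely high with the probability-weighted average—a number that may never actually occur on the day, which helps explain why the favored wording reports the most likely high plus its alternative:

```python
# Two-outcome forecast from the survey scenario: a 20% chance the
# cold front holds the high to 70°F, otherwise an 80% chance of 85°F.
outcomes = {70: 0.20, 85: 0.80}

# The single most likely high -- what a deterministic forecast reports.
most_likely = max(outcomes, key=outcomes.get)

# The probability-weighted average: 0.2*70 + 0.8*85 = 82°F, a value
# not expected under either outcome.
expected = round(sum(temp * prob for temp, prob in outcomes.items()), 1)

print(most_likely)  # 85
print(expected)     # 82.0
```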

People tend to read between the lines of deterministic forecasts, those in which the uncertainty isn’t made clear. When asked what a deterministic prediction of 75°F (23.9°C) actually means, the largest share of respondents in the SIP survey (more than 40%) picked the range 73–77°F (22.8–25.0°C). Only about one in 20 people took the 75°F forecast literally.

“These and other results show that most people have some understanding of relative uncertainty in forecasts,” Demuth says. She adds that most respondents liked having some measure of uncertainty made explicit in a forecast (such as a temperature range), and many preferred it.

Chance of prediction progress at UW: 100%

Clifford Mass. (Photo courtesy University of Washington.)

“It’s very challenging to get probabilistic information into people’s hands in a way they can use it,” says Clifford Mass. The University of Washington meteorologist is among the discipline’s most ardent proponents of bringing uncertainty information into public weather forecasts. Mass was a member of the National Research Council’s 2006 panel on probabilistic forecasting, and he’s also part of an interdisciplinary team at UW that is building an end-to-end system they hope will serve as a national prototype.

The UW project got its start with support from the Multidisciplinary Research Program of the U.S. Department of Defense University Research Initiative. UW researchers from atmospheric sciences, statistics, psychology, and the Applied Physics Laboratory built a probabilistic “prediction engine” based on statistically corrected model ensembles. They also constructed innovative websites for translating and delivering the results. The main site offers perhaps the most detailed and accessible probabilistic local forecasts found anywhere in the nation.

For a 48-hour forecast window, the site provides the most probable high and low temperatures as well as a range denoting the forecast’s 90% confidence interval (see example). The final range can be slightly asymmetric, as in a forecast high of 58°F that could be “as high as 60°F” or “as low as 55°F.” The scheme is similar for precipitation, with the most probable amounts bracketed by a 90% confidence interval; the chance of getting any rain or snow at all is also included.
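An interval like this can come straight from percentiles of an ensemble. The sketch below is a minimal illustration with made-up ensemble values, not the UW system’s actual method: the ensemble median stands in for the most probable high, and the 5th and 95th percentiles bracket a 90% interval that, as in the wording above, need not be symmetric:

```python
import statistics

# Hypothetical ensemble of high-temperature forecasts (°F); the
# values are illustrative, not output from the UW prediction engine.
ensemble = [55, 56, 56, 57, 57, 57, 58, 58, 58, 58,
            58, 58, 59, 59, 59, 59, 60, 60, 61, 62]

# Most probable high: the ensemble median.
best_guess = statistics.median(ensemble)

# 90% confidence interval: with quantiles(n=20), the first and last
# cut points are the 5th and 95th percentiles, bracketing the
# middle 90% of the ensemble.
cuts = statistics.quantiles(ensemble, n=20)
low, high = cuts[0], cuts[-1]

# Note the interval can sit asymmetrically about the best guess.
print(f"most likely {best_guess:.0f}°F, "
      f"as low as {low:.0f}°F, as high as {high:.0f}°F")
```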

UW is lucky to have the resources needed for such a system, says Mass. “We’ve been doing real-time weather prediction for over thirteen years and ensemble modeling for eight. We’re also fortunate to have a very good statistics department,” he says. The project collaborators at UW include statistician Adrian Raftery and psychologist Susan Joslyn, who has tested the forecasts with UW students and local volunteers.

Although the NWS now offers probabilistic guidance for both severe weather and hurricanes, most Americans can’t yet get the kind of local forecasts pioneered by UW. The NWS offers precipitation probabilities out to four days and deterministic (single-number) temperature forecasts out to eight days. Many private vendors and TV stations furnish high and low temperature forecasts up to 11 days out, but again with no confidence intervals—only a single number for each high or low.


The Weather Channel has begun contemplating ways to express forecast confidence. According to executive vice president Ray Ban, the channel’s initial ideas focus on qualitative rather than quantitative solutions, such as traffic-signal colors (green, yellow, red) to convey relative uncertainty. However, says Ban, “it will be at least 2009 before there is any chance of us introducing any changes in our products.”
As noted in the NRC report, many icons now used in public and private weather forecasts fail to distinguish between probabilities. Thus, a 40% or 80% chance of rain might be depicted by the same image of pelting drops. Research by the UW psychology team resulted in a new set of precipitation icons, unveiled last year, in which rain or snow is portrayed within a pie slice that corresponds to the likelihood of precipitation (see left).

“Most icons have never been evaluated or tested properly,” says Mass. “Even if we get to the point where we create good probabilistic information, that’s only half the battle. The other part is providing it in an accessible and usable way.”



Several other lines of research are emerging from the survey. For example, Lazo is leading an analysis of how much people use and value the forecasts they now get. He says that the average household gets forecasts 115 times a month. After accounting for the few survey respondents who don’t use forecasts, and extrapolating to the public at large, this means that Americans access more than 300 billion forecasts a year.

Follow-up work found that the median annual value placed on forecasts was $285 per household, or about $32 billion nationwide. Another line of research based on the survey responses, with psychologist colleague Alan Stewart at the University of Georgia, examines people’s weather salience—essentially, how important weather is to them and their lives—and how that relates to their use of forecasts.
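The nationwide dollar figure follows from a simple extrapolation. In the sketch below, only the $285 median comes from the survey; the household count (roughly 112 million circa 2006) is our assumption for illustration:

```python
# Median annual value a household places on forecasts (from the survey).
median_value_per_household = 285  # dollars per year

# Approximate number of U.S. households circa 2006 -- an assumed
# figure for illustration, not one reported by the survey.
us_households = 112_000_000

# Extrapolating the median to every household nationwide.
nationwide = median_value_per_household * us_households
print(f"${nationwide / 1e9:.0f} billion")  # ≈ $32 billion
```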

Analysis of the survey data is ongoing, and a variety of interdisciplinary research questions remain to be addressed. It will take several years for new types of forecasts to be developed, perhaps with icons to express uncertainty (see sidebar). However, the results to date give SIP scientists the sense that the public is ready for the next level of weather predictions. Just as a motorist needn’t understand every detail of a car’s engine in order to drive safely, weather consumers should be able to draw added value from new types of forecasts even if they never look under the meteorological hood. ♦

On the Web

NCAR Societal Impacts Program

Completing the Forecast (2006 NRC report)


