Highlights 2005

University Corporation for Atmospheric Research | National Center for Atmospheric Research | UCAR Office of Programs

Minute by Minute

Close Up With Weather's Worst

Whether stationed on a lonely Midwest road or a deserted Atlantic beach, NCAR scientists and engineers train high-tech sensors on tornadoes, hurricanes, and severe thunderstorms in some of meteorology’s most intrepid research. Back at the lab, their data reveal new threads of similarity among many types of wild weather. Software experts at NCAR and elsewhere use these leads, plus innovative statistical techniques, to refine new models that help give forecasters and the public better guidance on weather at its worst.


AT FIRST
A pioneer predictor of twisters

Long before “tornado watch” became a household phrase, John Finley was issuing daily tornado outlooks.

John Finley
(Courtesy American Meteorological Society.)

While serving in the U.S. Signal Service on the East Coast, Finley indulged his interest in tornadoes by compiling an 88-year climatology. He also recruited hundreds of tornado observers across the growing nation. Each spring during the mid-1880s, Finley issued daily outlooks for tornado likelihood in 18 districts from the Great Plains eastward. Despite some evidence of skill, Finley’s technique was soon abandoned as the nation’s weather service shifted to civilian hands. Almost 70 years later, the U.S. Weather Bureau began issuing tornado watches once more, this time from Kansas City—the centralized point Finley himself had recommended.

AT THE UNIVERSITIES
A better starting point in the forecasting race

As a postdoctoral researcher at NCAR during 1997–99, Gregory Hakim (University of Washington) forged collaborations that continue today.

Gregory Hakim
(Photo by Carlye Calvin.)

Hakim’s keenest goal now is to improve data assimilation, the techniques by which diverse observations are fed into computer models. “This is a problem that affects both weather and climate—the whole shebang. Being able to synthesize models and observations is crucial for everybody.” His tool of choice is the ensemble Kalman filter (EnKF), a mathematical technique that allows errors in the starting-point analyses of models to be quantified in detail. In standard analyses, “there’s no measure of error or uncertainty,” says Hakim. With the EnKF, as many as 100 or more analyses are produced for the same time and place, and the group average and variance can be calculated, providing a new window on why a given forecast might go right or wrong. “Having this probabilistic information actually gives new ways of looking at the dynamics of weather systems. That, to me, has been one of the huge, unexpected rewards.” With NSF support, Hakim is working with Christopher Snyder (NCAR), Thomas Hamill (NOAA), and Fuqing Zhang (Texas A&M University) on the EnKF studies as part of NCAR’s Data Assimilation Initiative.
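To make the idea concrete, here is a minimal sketch of a single ensemble Kalman filter update for one observed quantity, written in Python. It is purely illustrative: the ensemble size echoes the “100 or more” analyses mentioned above, but every value and variable name is invented, and it is not the Data Assimilation Initiative’s actual code.

```python
# Toy scalar ensemble Kalman filter update. Illustrative only; the numbers
# are invented and this is not the Data Assimilation Initiative's code.
import numpy as np

rng = np.random.default_rng(42)

n_members = 100          # ensemble size, echoing the "100 or more" analyses
truth = 15.0             # the "true" temperature we pretend to observe (deg C)
obs_error_std = 1.0      # assumed observation error

# Prior (background) ensemble: forecasts that disagree with one another.
background = rng.normal(loc=13.0, scale=2.0, size=n_members)

# One observation, perturbed separately for each member (stochastic EnKF).
obs = truth + rng.normal(0.0, obs_error_std)
perturbed_obs = obs + rng.normal(0.0, obs_error_std, size=n_members)

# Kalman gain from ensemble statistics: K = P_b / (P_b + R).
p_b = np.var(background, ddof=1)   # background error variance from the ensemble
r = obs_error_std ** 2             # observation error variance
gain = p_b / (p_b + r)

# Nudge each member toward its perturbed observation.
analysis = background + gain * (perturbed_obs - background)

# The spread (standard deviation) is the uncertainty measure the text describes.
print(f"background mean/spread: {background.mean():.2f} / {background.std(ddof=1):.2f}")
print(f"analysis   mean/spread: {analysis.mean():.2f} / {analysis.std(ddof=1):.2f}")
```

The analysis spread comes out smaller than the background spread, which is the probabilistic information Hakim describes: every forecast now carries its own estimate of uncertainty.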

AT UOP
A new eye on water vapor's invisible flow

Although thunderstorms came under the scrutiny of weather radar as early as the 1950s, water vapor in the atmosphere eluded frequent monitoring. Those days are numbered, though, thanks in part to a UOP-led team that includes John Braun.

(Photo by Carlye Calvin.)

In the mid-1990s, Braun and colleagues began extracting data on water vapor from the delays in Global Positioning System (GPS) signals induced by moisture in the atmosphere. A related technique will be used on a space-based monitoring system called COSMIC. The team has made big strides on the ground as well. More than 100 GPS receivers now track water vapor every half hour, many of them at university-operated sites through an NSF-funded initiative called SuomiNet. With these data, scientists can estimate the total amount of precipitable water above each station. An improvement of that technique measures water vapor along individual signal paths, so that signals arriving from satellites low on the horizon can be used to sample water vapor along low-lying horizontal swaths stretching tens of miles. Braun and colleagues are developing methods to synthesize the SuomiNet data with other sources and feed them into computer forecasting models. “Long-term studies are starting to show a significant positive value,” says Braun. “We can really start looking at how the water vapor changes over time. That allows us to validate models, to really understand a part of the hydrologic cycle, and make improvements in water-cycle science.”
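As a rough illustration of how a “wet delay” in the GPS signal translates into precipitable water, the sketch below applies the widely used Bevis et al. (1992) textbook approximation. The constants and the example values are generic assumptions for illustration; this is not the SuomiNet processing software.

```python
# Minimal sketch: convert GPS zenith wet delay (ZWD) to precipitable water (PW)
# using the standard Bevis et al. approximation. Illustrative values only.

RHO_W = 1000.0      # density of liquid water, kg m^-3
R_V = 461.5         # gas constant for water vapor, J kg^-1 K^-1
K2_PRIME = 0.221    # refractivity constant k2', K Pa^-1
K3 = 3739.0         # refractivity constant k3, K^2 Pa^-1


def mean_vapor_temperature(surface_temp_k: float) -> float:
    """Weighted mean temperature of the vapor column (Bevis regression)."""
    return 70.2 + 0.72 * surface_temp_k


def precipitable_water_mm(zwd_mm: float, surface_temp_k: float) -> float:
    """Convert zenith wet delay (mm) to precipitable water (mm)."""
    t_m = mean_vapor_temperature(surface_temp_k)
    pi_factor = 1.0e6 / (RHO_W * R_V * (K3 / t_m + K2_PRIME))  # roughly 0.15
    return pi_factor * zwd_mm


# Example: a 150 mm wet delay on a warm afternoon (about 25 C at the surface)
# corresponds to roughly 24 mm of precipitable water overhead.
print(round(precipitable_water_mm(150.0, 298.15), 1))
```

In this approximation, each millimeter of wet delay corresponds to roughly 0.15 millimeters of precipitable water, with the exact factor depending on the temperature of the vapor column.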

AT UOP
Clearing the way to top-notch radar data

When the National Weather Service began upgrading its radar network nearly two decades ago, university scientists watched with some envy. Their access to the new data was often limited to a subset of current reports from a single nearby radar, plus archived data that could take weeks to receive. Now, university scientists and private firms can obtain up-to-the-minute, high-quality data from more than 120 radars, the result of a partnership championed by UOP’s Linda Miller.

(Photo by Carlye Calvin.)

A former administrator at NOAA and Vaisala, Miller joined UOP’s Unidata Program Center in 1990. “I really enjoy working with people in the academic community, government, and the private sector on various data issues and community needs,” says Miller. She and Unidata colleagues teamed with Kelvin Droegemeier (University of Oklahoma) and other key leaders to address institutional and technical hurdles that blocked speedy access to the new radar data. With the team’s help, NOAA secured bandwidth on the Internet2 network and chose four sites to funnel the vast amounts of data to universities (at no cost) and private industry (on a cost-recovery basis). The enhanced data include the higher-elevation scans needed to fully profile intense thunderstorms, especially those close to a radar site. “It’s really exciting to see our technology reaching beyond our traditional user community,” says Miller. “We worked through the issues in a collaborative environment with the stakeholders, and now we’ve got something that works.”

 

Even for storm-seasoned residents of the U.S. heartland and Atlantic coast, 2004 yielded a bumper crop of atmospheric angst. A total of nine named tropical systems pummeled the coastline from Texas to North Carolina, including an unprecedented string of billion-dollar hurricanes—four in all—that raked Florida. Nationwide, more than 1,500 tornadoes were reported, a preliminary total that far exceeds the U.S. record for any prior year.

As NOAA’s National Weather Service leads the way toward U.S. weather safety, NCAR plays a key role behind the scenes. With NOAA and university colleagues, NCAR develops and deploys some of the newest and most innovative tools for spying on bad weather from the ground and the sky. Those data feed into conceptual models that, in turn, help forecasters better understand growing threats, including the multiday unfolding of heavy-rain episodes. Vital new software developed through interagency teamwork entered the toolbox of forecasters in 2004 (see Test-driving tomorrow's forecast model). And one of NCAR’s newest weather sensors may help the nation protect itself from a different kind of atmospheric threat, one seldom pondered before September 11, 2001.

A super-sized storm chase

BAMEX

Top: Precipitation and winds derived from aircraft data for a June 2004 bow echo. (Courtesy David Jorgensen, NOAA.) Bottom: Hundreds of radiosondes launched across the Midwest for BAMEX helped feed in-depth storm analyses. (Photo by Carlye Calvin.)

Christopher Davis calls it “storm chasing gone mesoscale.” In the spring and early summer of 2003, this NCAR scientist co-managed the center’s biggest-ever foray into the springtime severe weather that regularly plagues the Midwest.

The massive study area for the Bow Echo and MCV Experiment (BAMEX) extended north to Minnesota, south to Louisiana, west to Kansas, and east to Ohio. This enabled participants to track not only a single day’s storms—including the bow-shaped echoes associated with destructive straight-line winds—but also the mesoscale convective vortices (MCVs) that sometimes emerge from the remains of the day’s tempests. Ranging in width from 80 to 300 kilometers (50–200 miles), these low-pressure centers can focus thunderstorms the following day. If they move over water, MCVs can even serve as seed for a tropical cyclone.

Bow echoes and MCVs are poorly sampled by routine, twice-daily weather-balloon launches separated by hundreds of kilometers. “They’re just not adequate,” says Davis. For six stormy weeks, BAMEX filled in the gaps, using mobile observing systems from NCAR, Doppler radars aboard P-3 aircraft from NOAA and the Naval Research Laboratory, parachute-borne sensors dropped from a Learjet, and other tools.

Early results from BAMEX point to a spectrum of vortex behavior whose vigor and extent scientists hadn’t expected. For instance, analysis led by Nolan Atkins (Lyndon State College) and Robert “Jeff” Trapp (Purdue University) found that the greatest damage was typically observed not in the most extensive bow echoes, but in smaller ones spanning 100 km (60 mi) or less. Even smaller spin-ups within these small bows appear to focus most of the destructive winds and pose the highest risk for tornadoes. BAMEX also uncovered some red flags that may prove useful in forecasting high wind, such as bands seen on radar that converge and feed into a storm at right angles.

At least two of the MCVs in BAMEX extended to the lowest kilometer of the atmosphere, a depth that surprised researchers. The apparent kinship between these systems and incipient tropical cyclones intrigues Davis. “There’s remarkable continuity in the spectrum of cyclones as you go from midlatitudes to the tropics, especially in the early stages of development,” he says.

Picking the tornado producers

Hailstone

Residents of Aurora, Nebraska, found some of the largest U.S. hailstones on record. (Photo by Carlye Calvin.)

The most spectacular duo of storms in BAMEX occurred on June 22 in southern Nebraska. One cell, near Aurora, dropped the largest U.S. hailstone ever measured, at 7 inches (nearly 18 centimeters) wide. Less publicized was the next cell south, near Superior, which bore the largest and strongest storm-scale cyclone ever measured. It was “the mother of all mesocyclones,” according to Roger Wakimoto, who will move from the University of California, Los Angeles, to head NCAR’s Earth Observing Laboratory in mid-2005.

Wakimoto captured the record-setting vortex using the airborne Electra Doppler Radar, built by NCAR and CRPE, France’s center for Earth and planetary physics research, to measure high winds in and near storms. During the Superior storm, the radar detected wind shear aloft that exceeded 400 km/hr (about 250 mph) across just a few miles. This shear was so strong that ground-based radar couldn’t accurately interpret it.
Despite its gargantuan size and strength, the Superior mesocyclone didn’t produce a tornado for most of its long life. Why it didn’t, and why violent twisters drop from other, seemingly weaker mesocyclones, is a question that’s nagged at tornado scientists for over a decade. “The factors that control the size and intensity of a tornado in relation to a mesocyclone are not well understood,” says Wakimoto. Yet forecasters often must rely on the strength of a mesocyclone’s radar signature, along with public reports, in order to decide whether to issue a tornado warning.

Could fractal geometry help solve this dilemma? NCAR postdoctoral researcher Huaqing Cai is exploring the notion. Fractal theory emphasizes how scales of measurement—the length of a ruler, as it were—can obscure some aspects of natural systems. While a mile-long ruler would give one measurement of a jagged coastline, for example, it would miss smaller curves.

Cai applied this technique to radar data on five mesocyclones (including the one near Superior), three of which spawned tornadoes. He analyzed the data with a variety of horizontal grid scales, and for each storm, he examined how the maximum vorticity (circulation around an axis) changed with different scales.

Cai found that vorticity and scale are related by a power law: a finer grid shows a bigger ramp-up in vorticity as one approaches the mesocyclone. In those cases when tightening the grid produced the biggest ramp-ups, a tornado was most likely to be present. If this relationship is confirmed by more data, it could eventually lead to on-the-fly radar analysis to distinguish the most tornado-prone storms. “This isn’t a tornadogenesis theory,” Cai adds. “It doesn’t tell you why a mesocyclone produces a tornado. It just tells you which one might.”
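The flavor of that analysis can be shown in a short sketch: measure the maximum vorticity at several grid spacings, then fit a power law on log-log axes and read off the exponent. The vorticity values below are invented purely for illustration; they are not Cai’s mesocyclone measurements.

```python
# Sketch of a scale analysis: how fast does maximum vorticity ramp up as the
# analysis grid is tightened? The data here are hypothetical.
import numpy as np

# Grid spacings (km) and invented maximum vorticity (per second) analyzed
# from the same mesocyclone at each resolution.
grid_km = np.array([4.0, 2.0, 1.0, 0.5])
max_vorticity = np.array([0.004, 0.009, 0.021, 0.048])

# Power law: vorticity ~ grid^(-b), so log(vorticity) is linear in log(grid).
slope, intercept = np.polyfit(np.log(grid_km), np.log(max_vorticity), 1)
exponent = -slope

print(f"power-law exponent: {exponent:.2f}")
# In the study described above, storms whose vorticity ramped up fastest as
# the grid was tightened (a larger exponent) were the most likely to be tornadic.
```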

In the eye of the vortex

A single Doppler radar can only detect winds blowing toward or away from itself. That gives a useful but incomplete picture of the complex wind fields that swirl around intense vortices. An NCAR-designed technique for extracting riches from single-Doppler data is yielding insight into the processes at the heart of both tornadoes and hurricanes.

Doppler on Wheels

Doppler on Wheels caught dozens of tornadoes in 2004, including this one on May 12 near Attica, Kansas. (Center for Severe Weather Research / Herb Stein.)

NCAR’s Wen-Chau Lee had hurricanes in mind when he developed a single-Doppler wind retrieval technique known as the ground-based velocity track display (GBVTD), as the technique assumes the presence of a good-sized vortex. But GBVTD soon found its way into tornado research, spurred by the dramatic data being collected across the Great Plains by mobile Doppler radars.
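A toy calculation conveys the core geometric idea: for a roughly axisymmetric vortex far from the radar, the Doppler velocity sampled on a ring around the vortex center varies almost sinusoidally with azimuth, and the amplitude of that sinusoid gives back the tangential wind. The sketch below illustrates only that simplified geometry; it is not the actual GBVTD algorithm, and every number in it is hypothetical.

```python
# Toy single-Doppler vortex retrieval: recover the tangential wind of an
# idealized axisymmetric vortex from radial velocities sampled on a ring.
# Simplified illustration only; not the GBVTD algorithm itself.
import numpy as np

radar_distance_km = 30.0     # radar to vortex center (hypothetical)
ring_radius_km = 1.0         # analysis ring around the vortex center
true_tangential_ms = 60.0    # tangential wind on the ring

theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)

# Points on the ring (radar at the origin, vortex center on the x-axis).
x = radar_distance_km + ring_radius_km * np.cos(theta)
y = ring_radius_km * np.sin(theta)

# Purely tangential (counterclockwise) wind at each ring point.
u = -true_tangential_ms * np.sin(theta)
v = true_tangential_ms * np.cos(theta)

# Doppler velocity: projection of the wind onto the radar viewing direction.
dist = np.hypot(x, y)
doppler = (u * x + v * y) / dist

# Fit sine/cosine harmonics around the ring; when the ring is far from the
# radar, the sine amplitude carries the axisymmetric tangential wind.
basis = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
coeffs, *_ = np.linalg.lstsq(basis, doppler, rcond=None)
retrieved_tangential = -coeffs[0]

print(f"retrieved tangential wind: {retrieved_tangential:.1f} m/s")
```

Real tornadoes and hurricanes are far from axisymmetric, which is why the full technique expands the ring data into several harmonics rather than a single sinusoid.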

One of the pioneers of storm-chasing radar is Howard Bluestein (University of Oklahoma). He and colleagues recently applied GBVTD to high-resolution data gathered from Kansas and Nebraska twisters by a mobile W-band radar built at the University of Massachusetts Amherst. The radar’s fine detail shows an array of intriguing features, including concentric rings in the Kansas tornado akin to the double eyewalls found in some hurricanes.

Through GBVTD, Bluestein discovered even more: a two-lobe structure, with pockets of especially high wind at opposite ends of the two tornadoes. This pattern, long noted in some lab-generated tornadoes, could shed light on internal dynamics of the vortex. What’s more, the lobes stayed in position around the twister, unlike the multiple vortices that spin around some tornadoes—and in contrast to theory and computer-based simulations. “This was surprising,” notes Bluestein, who believes the lobes could be a product of the tornado being deformed by the boundary along which it developed.

Hurricane Jeanne

Hurricane Jeanne was one of four that raked Florida in 2004. (NOAA)

Another veteran of radar road trips, Joshua Wurman (Center for Severe Weather Research), is teaming with Lee to derive new insight from a decade of data. Wurman is working closely with NCAR to refine his Doppler on Wheels units, which had a busy 2004. The DOWs profiled 36 tornadoes and tracked the landfalls of Hurricanes Frances and Ivan, gathering data every 10 seconds on Frances. “It was an amazing year,” says Wurman.

Lee and Wurman have been working on the first three-dimensional tornado analysis produced using GBVTD. Drawing on DOW data from a large and deadly 1999 tornado that struck Mulhall, Oklahoma, they’ve found updrafts blowing at near-hurricane force, surface air pressure comparable to that of a Category 4 hurricane, and a central downdraft not unlike the subsidence in the eye of a hurricane. Lee notes “striking similarity” between characteristics derived from the Mulhall data and those observed for years in laboratory- and computer-generated twisters. “We hope to be able to look at more cases and produce a climatology for a spectrum of tornadoes,” he says.

Lee’s technique is being considered by the National Hurricane Center as a way to extract quick estimates of a hurricane’s peak winds and central pressure as it approaches land. For instance, Lee found that GBVTD could have drawn on coastal radar data in August 2004 to provide valuable early notice of Hurricane Charley’s rapid intensification just before landfall.

Corridors of concern

When a summer storm hits, it usually passes within an hour or two. But sometimes a region will be struck day after day by torrential, flood-triggering rain. A group of researchers, building on groundbreaking work by NCAR’s Richard Carbone, is looking into what drives these episodes.

On a typical summer day, thunderstorms build in the early afternoon over the Rocky Mountains, driven by intense solar heating of the high terrain. They spread eastward, often gaining strength as they move through the Midwest overnight. An east-west corridor sets up during some summers, with storms favoring this atmospheric highway and bypassing areas to the north or south. The image at left shows the broad Midwest corridor for 2003.

“The episodes move more quickly than you would expect a single thunderstorm to move,” says NCAR’s Christopher Davis. Their extreme regularity seems to be unrelated to external factors such as ocean processes, El Niño and La Niña, or hurricane activity.

Similar summertime episodes appear to occur in the Himalayas, Europe, Australia, South America, and Africa. Studying these patterns with NCAR is an international team that includes Phillip Arkin (University of Maryland), George Tai-Jen Chen (National Taiwan University), Tom Keenan (Australia’s Bureau of Meteorology Research Centre), Vincenzo Levizzani and Laura Zamboni (Italian Research Council/University of Bologna), Augusto Pereira (University of São Paulo), and Chung-Chieh Wang (Jin-Wen Institute of Technology).

(Image courtesy David Ahijevych.)

A new eye on national security

While tornadoes and hurricanes still terrorize much of the United States, the nation as a whole began pondering other types of risk in earnest after September 11. There’s a meteorological element to one potential form of attack: the airborne dispersion of biological hazards, such as anthrax or smallpox.

A new type of lidar (laser-based radar) developed at NCAR has proven to be an uncommonly keen observer of trouble in the air. It detects not only the microscale air motions that might carry biological agents, but many of the tiny particles that could represent the hazards themselves.

Shane Mayor

NCAR's Shane Mayor and the breakthrough eye-safe lidar. (Photo by Carlye Calvin.)

The Raman-shifted Eye-safe Aerosol Lidar (REAL) emerged from several years of collaboration among NCAR scientist Shane Mayor, optical and systems engineer Scott Spuler, and associate scientist Bruce Morley. They combined traditional and newer techniques to build a lidar that would operate in an eye-safe range of wavelengths (making it easy to deploy in urban environments) yet would pack plenty of observing power.

The high-end optics allow REAL to track aerosols every tenth of a second at range intervals as small as 3 meters (10 feet). The resulting image—which looks much like a radar display, only in miniature—captures phenomena rarely before observed by researchers. “A radar sees reflections from big targets like raindrops and bugs,” says Spuler. “We’re seeing reflections from particles about 10,000 times smaller.”

The lidar’s first big test came at the Pentagon in the spring of 2004, where a breakthrough blend of high-tech instruments and weather forecasting models took shape. Coordinated by scientists at NCAR and sponsored by the Defense Advanced Research Projects Agency, the tests scanned for potential airborne hazards near the Pentagon and predicted their motion and impact on the building.

REAL, along with other observing tools and a set of nested computer models, charted complex air circulations around the Pentagon at resolutions as fine as 2 meters (7 feet). There, REAL’s display lit up with signals from the myriad of aerosol plumes common in the center of a busy, densely populated urban area. At home next to NCAR’s Foothills Laboratory, REAL watches the daily ebb and flow of pollutants and routinely captures trails of exhaust from freight trains passing near the site.

Late in 2004, a test in Utah showed REAL’s ability to detect clouds of particles standing in for more dangerous aerosols, such as anthrax spores. While the lidar can’t distinguish an innocuous aerosol from a lethal one, says Mayor, “REAL could be used as a first layer of remote sensing that would trigger additional sensors when needed.”

Test-driving tomorrow’s forecast model

A team of researchers is putting the nation’s next major computer forecast model, the outgrowth of a vast community effort, through a set of challenging paces at NCAR. Thus far, the Weather Research and Forecasting model has held its own against thunderstorms and hurricanes—no easy feat.

Jordan Powers, Wei Wang, and Joseph Klemp

Three of NCAR’s leaders on the WRF effort: Jordan Powers, Wei Wang, and Joseph Klemp. (Photo by Carlye Calvin.)

Like almost any new model, WRF is more complex and finer-scale than its predecessors. What makes WRF unique is its dual nature as a tool for working forecasters and a springboard for research. The model was designed for both tasks by allowing various depictions of atmospheric physics and other components to be added or subtracted as the user sees fit. “You can run idealized cases or produce forecasts with real data,” says NCAR’s Jordan Powers. WRF partners include NCAR, NOAA, the U.S. Navy and Air Force, the Federal Aviation Administration, and the University of Oklahoma.

During 2003 and 2004, NCAR used WRF to generate daily forecasts of convection (showers and thunderstorms) across much of the central and eastern United States. It was the first time software had modeled individual showers and storms in real time over such a large area, in contrast to the usual approach of estimating storm coverage with much less detail. The output proved useful in the BAMEX field study (see A super-sized storm chase) and received high marks from NOAA forecasters who sampled it.

A newer version of WRF tracked several recent Atlantic hurricanes. The 2004 test featured a high-resolution window with a 4-km (2.5-mi) grid, designed to capture a hurricane’s core, embedded within a larger, lower-resolution domain. Although many models struggle with hurricane intensity, the nested WRF correctly forecast surface winds of more than 210 km/hr (130 mph) in Hurricane Ivan. Meanwhile, lower-resolution runs of WRF designed for long-range forecasts correctly projected Ivan’s landfall near the Alabama-Florida border more than five days in advance. For the 2004 season as a whole, WRF’s hurricane tracks were as good as or better than those from current operational models, and the nested-grid technique provided big gains in efficiency.

In 2004–05, WRF began carrying out high-resolution daily predictions of U.S. winter weather. Much of this work takes place at the Developmental Testbed Center, a joint NCAR-NOAA facility housed at NCAR. The DTC allows a wide range of new methods and model components to be thoroughly checked before they are moved closer to operational use. “It’s a place where you can try out new ideas in numerical weather prediction without interfering with forecast operations,” says DTC head Robert Gall. As forecasters in the National Weather Service begin using the first operational version of WRF, which went online in October 2004, the DTC will continue exploring new versions of this versatile model.

 
