
UCAR at 25


Teleconnections and Two-Week Forecasts: Observing and Modeling the Global Atmosphere

I think the causes of the General Trade-Winds have not been fully explained by any of those who have wrote on that subject.

George Hadley, 1735

The atmosphere is the prototypical chaotic nonlinear system. This was shown by the simplest atmospheric model, devised by [Edward] Lorenz over 20 years ago, the starting point for modern mathematical studies of such systems. Because the atmosphere is chaotic, atmospheric models are sensitive to small variations in initial conditions and possess an inherent growth of error. These properties impose a theoretical limit on the range of deterministic predictions of large-scale flow patterns of about two weeks.

National Academy of Sciences, Panel on Weather
Prediction Technologies, Research Briefings 1985

George Hadley, an eighteenth-century English lawyer and spare-time scientist, recognized that the trade winds and other large-scale atmospheric currents are part of a global circulation system. He proposed that this general circulation, as it later came to be known, is produced when warm, light air, heated by the intense input of solar energy in the tropics, rises and cool, heavy air moves in from north and south to replace it. This would establish a pattern of warm, high-level air moving toward the poles and cool, low-level air moving back toward the equator, with the easterly and westerly trade winds created by the effect of the earth's rotation. Such patterns do exist near the equator and are known today as Hadley cells. But the global circulation is much more complicated, and the causes of the trade winds and other large-scale features of the atmospheric general circulation still have not been completely explained by "those who have wrote on that subject."

In the early years of this century, the Bergen School, a group of Norwegian scientists headed by Vilhelm Bjerknes, developed the polar-front theory, one of the foundation stones of modern meteorology. In a paper published in 1904, Bjerknes suggested that weather forecasting could be considered "a problem in mechanics and physics." He proposed that, since the atmosphere is a fluid whose behavior is governed by certain physical laws that can be expressed mathematically, it should be possible to describe the present state of the atmosphere numerically, then predict its future state by performing a series of mathematical computations.

Bjerknes did not develop any specific numerical techniques for weather prediction. But in 1922, an Englishman named Lewis F. Richardson wrote a book proposing methods that are the basis of present-day numerical prediction techniques. However, Richardson's approach would have required data from 2,000 weather stations distributed over the face of the earth, and 64,000 forecasters operating 64,000 mechanical calculating machines would have been needed just to advance Richardson's numerical prediction at the same speed at which the real weather develops. The advent of electronic computers in the 1940s made it possible to test Richardson's theories and prove their validity. In 1946, a group headed by John von Neumann at the Institute for Advanced Study in Princeton began developing working techniques based on Richardson's ideas; in 1950 the group produced the first successful numerical forecast on the ENIAC (Electronic Numerical Integrator and Computer).

Computer models, like the NCAR general circulation model that produced this early simulation of global-scale weather patterns, are invaluable tools in efforts to improve the time scale and accuracy of long-term weather and climate forecasts.

A number of scientists followed up this pioneering work by developing numerical models of the atmosphere—sets of partial differential equations that represent the physical principles that govern atmospheric structure and motions. By using today's high-speed computers to solve the equations over and over, modelers can simulate days, months, and years of atmospheric behavior in minutes or hours, depending on the complexity of the model and the speed of the computer. The U.S. National Weather Service and most of the world's other weather services use numerical models to produce large-scale weather forecasts by making the weather "happen" in the computer faster than it happens in the real atmosphere. These forecasts are quite accurate for two to three days, but beyond that, their accuracy decreases.
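The time-stepping idea behind these models can be sketched in a few lines. The example below is purely illustrative, with an invented grid spacing, wind speed, and initial "weather feature": it advances a one-dimensional advection equation, a drastically simplified stand-in for the coupled equations a real general circulation model solves.

```python
# Illustrative sketch only: numerical prediction advances a model state in
# small time steps. A 1-D advection equation (a stand-in for the far more
# complex primitive equations) is stepped forward with an upwind scheme.
import math

NX, DX = 100, 100e3        # 100 grid points, 100-km spacing (hypothetical)
U, DT = 10.0, 3600.0       # 10 m/s wind, 1-hour time step (satisfies CFL)

# Initial "weather feature": a smooth temperature bump near grid point 20
state = [math.exp(-((i - 20) ** 2) / 50.0) for i in range(NX)]

def step(s):
    """Advance one time step of ds/dt = -U * ds/dx (periodic upwind difference)."""
    c = U * DT / DX
    return [s[i] - c * (s[i] - s[i - 1]) for i in range(len(s))]

for _ in range(48):        # simulate 48 hours in a fraction of a second
    state = step(state)

peak = max(range(NX), key=lambda i: state[i])
print(f"feature moved from grid point 20 to about grid point {peak}")
```

Forty-eight hours of "weather" pass in well under a second of computing, which is the whole point: the forecast outruns the atmosphere it describes.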

Scientists in NCAR's Atmospheric Analysis and Prediction (AAP) Division have developed a series of models of the general circulation and other scales of atmospheric motion that are available to the university community for studying many kinds of atmospheric problems. The most widely used one is the community climate model (CCM), developed through a joint effort by NCAR and university scientists that began in 1980. Many of the features of this model grew out of work at Australian and European research centers. The CCM is continually being refined as it is used in research.

One important use of the CCM has been in studying apparent teleconnections—long-distance cause-and-effect relationships—between unusually mild or unusually severe winters in the United States and other parts of the Northern Hemisphere and fluctuations in pressure and temperature over the tropical Pacific Ocean. John Geisler, chair of the Department of Meteorology at the University of Utah, has been investigating these apparent relationships, working in collaboration with Eric Pitcher of the University of Miami and Maurice Blackmon of NCAR's Global Climate Modeling Group.

"We have been using the CCM to simulate what happens in middle latitudes when sea-surface temperatures in the equatorial Pacific are warmer than normal—the well-known El Niño phenomenon," Geisler explains. "When El Niño is very intense, what happens in the middle latitudes? Does it rain more in the west or the east? Is it cold in the east and warm in the west? We put warm sea-surface temperatures into the model, run it on the computer, and compare the response with what we get when we run it without the warm sea-surface temperatures."

El Niño is associated with a warm ocean current that moves southward along the west coast of South America just after Christmas. There have been at least six intense El Niño events since 1951, and William Quinn of Oregon State University has evidence of their periodic recurrence as far back as 1541.

These phenomena are currently being examined for their relationships to a great variety of environmental and societal anomalies, as well as to possible climatic teleconnections. During El Niño, warm water, which is poor in nutrients, replaces cold, nutrient-rich water that normally upwells from the deep ocean along the Peruvian coast. The Peruvian anchovy fishery supported one of the world's largest fishing industries until it collapsed after the 1972–73 El Niño. However, the chain of causation that many took for granted in this sequence of events has recently been challenged by Michael Glantz of NCAR's Environmental and Societal Impacts Group (ESIG). Glantz, a political scientist, correlated political, economic, and technological changes in Peru with El Niño occurrences over the past 40 years. He found strong evidence that the causes of the collapse of the Peruvian fishing industry included technological advances, political changes in the national government, and a lack of government-agency supervision of the fishing industry as well as the impact of El Niño on the anchovies' food supply.

El Niño has been linked with a variety of atmospheric anomalies. Some are local—heavy rains in usually arid regions along the Pacific Coast of South America frequently accompany intense El Niños. But El Niño is also one part of an interrelated set of changes in atmospheric and ocean conditions over much of the Southern Hemisphere often referred to as ENSO, for El Niño–Southern Oscillation. The Southern Oscillation involves a periodic weakening or disappearance of the trade winds, which triggers a complex chain of atmosphere-ocean interactions.

"Most of us feel that it's a coupled phenomenon," says Michael Wallace, chair of the Department of Atmospheric Sciences at the University of Washington. "The atmosphere itself doesn't have enough of an attention span to know what happened a couple of months ago. The ocean can serve as a memory. It can remember what happened a season or a year ago. But the atmosphere, unlike the ocean, has the large-scale systems that make this phenomenon global."

The apparent coupling between El Niño and global weather anomalies seemed very convincing in 1982–83, when El Niño was exceptionally intense. At the same time, unusually severe Pacific storms struck the California, Oregon, and Washington coasts. These storms dumped heavy snow on western U.S. mountains, and spring floods followed. Extreme droughts hit many parts of the world—including the western Pacific and Mexico—and torrential rains and flooding drenched parts of South America and the southern United States. A number of scientists attributed these extreme events to the extraordinary El Niño.

In an ordinary year, many storm systems form or intensify near the east coast of Asia and move across the Pacific Ocean. Eventually, the storms cross the western United States and continue eastward. However, in some El Niño years the Pacific storm track veers northward toward Alaska, altering the usual paths of these winter storms. This happened in the winter of 1976–77, when the western United States had an unusually warm, dry winter, while severe cold and snow swept down over the eastern part of the country as far as Florida.

In 1983, Geisler and his colleagues used the CCM to simulate El Niño events of three different intensities. Although the model produced the most significant features of northern winter anomalies that accompany intense El Niños, it is not clear why one El Niño winter can differ so much from another. There are indications that the difference may be related to the geographical location of the warm surface water.

Although this experiment may sound simple, it required considerable resources and effort. NCAR's high-speed CRAY-1 computer made it possible—the experiment was nearly five times larger than any previous ones and required more than 100 hours of computer time. "Collaboration with NCAR is absolutely essential for this kind of experiment," Geisler says. "This is work that Eric Pitcher and I can't do on our own. In addition to the CCM and the computer, we depend on the strong support of Maurice Blackmon and his colleagues at NCAR. They run the model, then we get together and analyze the results. Very often I'll write the papers for the journals, and I provide feedback on where we go next. The arrangements are informal—we don't apply to NCAR for anything but computer time. It's a sort of case study of the kind of collaboration between university and NCAR people that goes on all the time and achieves a lot of valuable results."

For society, the practical payoff of numerical modeling research is better weather forecasts. John Firor, director of the NCAR Advanced Study Program, says: "Whether we have always admitted it or not, forecasting has been at the core of NCAR research since the very beginning. The original emphasis on a large computer and general circulation models was the direct and tightly coupled outgrowth of problems in forecasting. If you look at the research that is being done today in our largest group, AAP, you'll see that the major fraction of that work is related to improving forecasting."

Philip Thompson, who became NCAR's first associate director in 1960 and is now a senior scientist in the Advanced Study Program, concurs. "Forecasting has always been an important motivation," he says. "It's kind of an unglamorous pursuit that takes a lot of hard work with no dramatic breakthroughs, but better forecasting is one of the ultimate goals of atmospheric research."

According to Richard Anthes, director of AAP (and subsequently director of NCAR), large-scale numerical weather prediction is now accurate enough to be useful about a week in advance, halfway to the theoretical limit of two weeks. "A decade ago," Anthes points out, "the range of useful prediction was only about one-third of the theoretical limit. The improvement has come from a better understanding of atmospheric dynamics, models with higher accuracy and resolution, and better weather data, from satellites and other new sources." Anthes chaired the National Academy of Sciences panel that produced the briefing on weather prediction technologies quoted at the beginning of this section.

From the earliest days of general circulation modeling, two factors have limited the rate of progress. The first is computer speed and capacity. General circulation models require vast numbers of calculations involving tremendous quantities of data to be run at speeds that, at least for operational prediction, must be considerably faster than the speed at which the real atmosphere changes. The computing power available through NCAR's Scientific Computing Division has made the center a national resource for university scientists who want to model global-scale atmospheric processes.

The second limiting factor is a lack of sufficiently detailed and comprehensive data on the global state of the real atmosphere. Nearly two decades ago, in the introduction to his book The Nature and Theory of the General Circulation of the Atmosphere, Edward Lorenz of the Massachusetts Institute of Technology wrote that the atmospheric circulation "is governed by a set of laws which are known to a fair degree of precision, and in principle it should be possible to use these laws to deduce the circulation. Nevertheless, the problem of deducing the behavior of the atmosphere presents many problems which have not been overcome, and the greater portion of our knowledge of the atmosphere has been the result of direct observation. As a consequence, many of the major advances in our understanding of the atmosphere have followed major improvements in the process of observing it."

The year that those words were published, 1967, was an important one for improving the process of observing the atmosphere. In Geneva, the World Meteorological Organization, made up of representatives from most of the world's national weather services, was planning a World Weather Program to exploit and augment the global observational capabilities of a growing system of meteorological satellites that the United States and the Soviet Union were putting into orbit.

The first U.S. weather satellite, TIROS (Television Infrared Observation Satellite), went into orbit on 1 April 1960, about a month before Walter Orr Roberts was named NCAR's first director. A little more than a year later, in a special message to a joint session of Congress, President Kennedy requested funds to establish a National Operational Meteorological Satellite System. According to Roberts, many university atmospheric scientists viewed meteorological satellites as an extravagant boondoggle of the weather-forecasting establishment rather than a potentially valuable research tool. "Some of them scornfully described the satellite as a solution in search of a problem," Roberts recalls.

Scientists and equipment were flown to Christmas, Fanning, and Palmyra atolls in this World-War-II–vintage amphibious PBY aircraft during the Line Islands Experiment.

One reason for this attitude was the fact that, when the first weather satellites were launched, it appeared that all they could provide would be pictures of cloud cover, not the numbers that were needed to feed the models that many researchers regarded as the wave of the future in atmospheric science. A second wedge between many university atmospheric scientists and the government meteorologists who were promoting satellites had its roots in the meteorologists' struggle to establish their discipline as a science and not just a service. The scientists who founded UCAR and NCAR had worked long and hard to demolish the image that "people went into meteorology if they weren't good enough for physics," as Roberts puts it. They were hypersensitive to explicit or implied suggestions that their most important role was to serve as handmaidens to the forecasters. On the other hand, the federal meteorological establishment had a reputation for giving a much higher priority to operations than to science—"When the Weather Bureau got a budget cut, the research programs were always the first to go," recalls one university scientist.

With one foot in the university scientific community that had created it and the other in the federal establishment that was supporting its work through the National Science Foundation, UCAR was in an excellent position to bring research scientists and operational meteorologists together to work toward their common goal of advancing human understanding of the global behavior of the atmosphere. A key element in that behavior is the tropics, the region that supplies the driving force for the general circulation through its heavy input of energy from the sun. Because the tropical regions are largely covered by oceans, they are one of the least adequately observed regions of the global atmosphere.

In August 1966, NCAR was host to a meeting of more than 40 participants from both sides of the meteorological community. They represented federal agencies such as the Environmental Science Services Administration (ESSA) and the National Aeronautics and Space Administration (NASA), as well as the National Academy of Sciences (NAS), eight universities, and NCAR. They presented ideas for observational programs designed to fit into an integrated program of tropical meteorological experiments to be conducted over the next five to ten years. These ideas were evaluated by three working groups chaired by scientists from three UCAR member universities—Colin Ramage of the University of Hawaii, Noel La Seur of Florida State University, and Jule Charney of the Massachusetts Institute of Technology.

By early 1967, the first of these field programs had become a reality. NCAR's Edward Zipser was scientific coordinator of the Line Islands Experiment, a cooperative field program that included scientists from Colorado State University, the University of Hawaii, Saint Louis University, Texas A&M University, the University of Wisconsin, and the Woods Hole Oceanographic Institution. Based on Christmas, Fanning, and Palmyra Islands, three atolls in the central Pacific just north of the equator, the experiment was designed to take advantage of satellite observations of that data-poor region by the ATS-1 geosynchronous satellite, which NASA had launched the previous December. NASA, ESSA, and the Department of Defense all contributed heavily to operational support of the experiment. The Line Islands Experiment proved that university scientists and government agencies could operate in close harmony in a cooperative field research program, at least at the level of the working scientist.

Instrumented towers on the Line Islands provided surface-level atmospheric measurements to complement and validate satellite data.

At the same time, harmony was being sought at the level of international policy. Robert White, head of ESSA and U.S. representative to the World Meteorological Organization, asked UCAR President Roberts to serve on a committee that was helping the WMO shape the World Weather Program. The first part of this program was the World Weather Watch, aimed at improving operational forecasting. The second part, the Global Atmospheric Research Program (GARP), would use the satellites, data centers, and other operational elements of WWW as a foundation for scientific programs to investigate the global atmosphere.

To build a bridge between researchers and forecasters at the international level, WMO and the International Council of Scientific Unions (ICSU) established a GARP Joint Organizing Committee representing both organizations. This pattern was mirrored in the United States, where the National Academy of Sciences set up the U.S. GARP Committee, chaired by Jule Charney. Richard Reed of the University of Washington was named executive scientist of the U.S. GARP Committee—a new kind of title that formally acknowledged the role of university researchers in planning and organizing U.S. participation in this massive international research effort. Looking back, Walter Orr Roberts views his contributions to GARP with pride and satisfaction. "The role that UCAR and NCAR played in bringing GARP about was the most important single accomplishment of my term as president of UCAR," he says.

Reed says that GARP marked a critical turning point in the involvement of university scientists in large-scale field research projects in atmospheric science. "In the past, government scientists would plan and organize a field experiment," he says, "and at some stage after most of the planning had been done, usually pretty far along toward the field phase, they might invite participation by university scientists. University people weren't excluded, but they certainly weren't included as full partners."

The trend toward deeper university involvement in large field experiments began with the Barbados Oceanographic and Meteorological Experiment (BOMEX) in 1969. BOMEX used ships, aircraft, satellites, and other observing tools to explore the transfer of energy between the tropical Atlantic Ocean and the atmosphere over a 90,000-square-mile area east of Barbados. The first director of BOMEX was a university scientist, Ben Davidson of New York University, and scientists from the universities and NCAR were deeply involved in BOMEX planning as well as the field research.

NCAR's Electra was among a dozen research aircraft that flew scientific missions from Dakar, Senegal, during the 1974 GARP Atlantic Tropical Experiment (GATE).

By the time the GARP Atlantic Tropical Experiment (GATE), the first big GARP field project, came along in the early 1970s, the idea was firmly implanted that such programs should be jointly planned by government and university scientists, Reed recalls. "That certainly was how GATE was done," he says. "It was a good partnership—everybody worked together harmoniously and effectively.

"One very important factor in preventing friction and rivalry was the data management plan," Reed declares. "We agreed that the great bulk of the GATE data, collected in truly cooperative field work, would go to a data management group that would process it and make it available to the scientific community. Everybody would have access to the data at the same time. That was very important in getting enthusiastic cooperation."

GATE generated tremendous quantities of data. The largest and most complex international scientific field program undertaken up to that time, the experiment involved some 4,000 people from more than 60 nations. The director of GATE was a U.S. scientist, Joachim Kuettner, and the deputy director was Yuri Tarbeev of the Soviet Union. At the international level, the data from the experiment went to five centers, in England, France, West Germany, the Soviet Union, and the United States.

The GATE scientists used six kinds of satellites, 40 ships, a dozen airplanes, and an assortment of balloons, buoys, and other observing tools to study a 20-million-square-mile expanse of tropical land, ocean, and atmosphere during a 100-day period from mid-June to mid-September of 1974. Concentrated observations were made over a smaller area of the tropical Atlantic Ocean centered on an array of research ships stationed about 500 miles off the west coast of Africa. The goal was to gain a better understanding of tropical weather systems and their role in global weather.

The experiment was directed from an international operations control center in Dakar, Senegal. About 50 NCAR scientists, engineers, flight crews, technicians, and support people were stationed in Dakar. NCAR's Edward Zipser was one of a team of scientists who took turns as operations director. Other NCAR scientists flew on research missions and played key roles in other phases of the field research.

Satellite observations were a key element of the GATE field program. A total of six U.S. and Soviet satellites provided data.

Four nations—France, the Soviet Union, the United Kingdom, and the United States—sent research aircraft to fly in the experiment. Of six U.S. research aircraft, three—a turboprop Electra, a twinjet Sabreliner, and a propeller-driven Queen Air—came from NCAR. At the request of the U.S. GARP Committee, NCAR established a special data- management group to process, validate, and archive all the data collected by the U.S. aircraft.

The next major GARP field program was the First GARP Global Experiment (FGGE), also known as the Global Weather Experiment. All 147 member nations of the WMO participated in this program, which used the international network of the World Weather Watch to observe the global atmosphere as closely as possible for one year starting in January 1979. The goal was to determine how great an improvement in large-scale forecasts could be achieved with better observational data.

Because operational weather observations are scarce over large areas of the globe, notably the oceans that cover vast expanses of the tropics and Southern Hemisphere, the World Weather Watch network was augmented with two special observing systems. The aircraft dropwindsonde system, developed at NCAR, used weather instrument packages along with electronic tracking devices. The dropwindsondes were parachuted from high-flying aircraft to obtain wind, temperature, pressure, and humidity data that were transmitted to a ground station. During two 30-day special observing periods, aircraft flew daily dropwindsonde missions from bases in Brazil, Hawaii, Mexico, and the Indian Ocean.

Special observations were also made with the Tropical Constant-Level Balloon System, using 13-foot spherical superpressure plastic balloons equipped with solar-powered electronics packages. The balloons were tracked by satellites to trace the tropical winds that carried them on globe-circling flights of as long as six months. This system was the culmination of research and development work and field experiments begun more than a decade earlier in NCAR's Global Horizontal Sounding Technique (GHOST) program and a French project known as EOLE.

Did this vastly augmented observing network produce better forecasting skill? "The added quality of the FGGE observational data produced a significant improvement in the forecasts," Philip Thompson says. "It wasn't spectacular, but it was much more than just perceptible. Better observations combined with improvements in the models and bigger and faster computers have resulted in tremendous increases in the accuracy of extended-range forecasting. We can make useful forecasts up to about a week, but we're still limited by the low density of the observations. Our best hope is that satellite observations and other remote-sensing and scanning techniques will eventually be accurate enough to replace in-situ observations over the vast regions where we don't have enough weather stations."

A recent publication of the ICSU, which developed and managed GARP in collaboration with the WMO, gave this assessment of the achievements of the global experiment:

Reliable forecasts for four to five days ahead with useful guidance up to six or seven days are now provided by the world's leading forecasting centers for the middle latitudes of the Northern Hemisphere. This represents an advance of three days in predictive skill over the last decade. There has also been a reduction in the number of seriously misleading forecasts. This, in turn, has led to the provision of many new and improved forecasting services for aviation, agriculture, energy production and retailing industries, and of wave and swell forecasts for ship routing and off-shore oil and gas operations.

Droughts, like the one that brought these parched conditions to the Colorado plains, are one of many kinds of climate extremes that affect human life and activities in major ways.

Estimates of the dollar value of such improvements in forecasting are impressive, and future increases in the range and accuracy of weather prediction can bring even greater benefits. In the United States alone, it has been estimated that better weather forecasts could save the agricultural industry more than $3.5 billion annually. The construction industry could save about $1 billion a year from moderately improved 24-hour forecasts of temperature, precipitation, and severe storms. Better 12-hour temperature forecasts could save U.S. power companies $1 billion annually by preparing them for high electrical demand for air conditioning on very hot days. All these estimated savings would result from comparatively modest improvements in forecasting skill.

What are the prospects for a more dramatic improvement—the elusive two-week forecast? Even though GARP did not stretch our forecasting ability out that far, is it still a realistic goal? Are we approaching it, or is it an unattainable ideal? Here is ICSU's assessment:

The data sets arising from the Global Weather Experiment provide an especially rich source for studying the physics and dynamics of the global atmospheric circulation and for improving our understanding of the mechanisms governing changes of weather and climate. This, in turn, will lead to more realistic and detailed numerical models with scope for increasing predictive skill by a further two days. Further improvements are likely to depend on better observations with the possibility of extending the range of useful forecasts up to about 14 days, which may prove to be the limit of deterministic predictability set by the random nature of atmospheric fluctuations. However, some relatively stable atmospheric states, such as those that produce long, dry summers, may possess considerably greater predictability.
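The deterministic limit the assessment refers to traces back to the Lorenz model quoted at the head of this section. A minimal sketch, using simple forward-Euler integration and arbitrary initial values, shows the mechanism: an observational error too small to measure grows until the two forecasts are no more alike than two randomly chosen states.

```python
# Sketch of the predictability argument: integrate the Lorenz (1963) system
# from two nearly identical initial states and watch their separation grow.
# Forward Euler and these particular initial values are chosen for brevity,
# not accuracy.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-6, 1.0, 20.0)    # "observation error" of one part in a million

max_sep = 0.0
for n in range(5000):           # 25 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, d)

print(f"initial error 1e-06, largest subsequent separation {max_sep:.2f}")
```

No refinement of the model removes this behavior; only better initial observations postpone it, which is why the two-week figure is framed as a limit on deterministic prediction rather than on modeling skill.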

More than six decades ago, L.F. Richardson, the Englishman whose scheme for modeling the behavior of the atmosphere was defeated by the need for 64,000 human "computers" operating 64,000 mechanical calculators, wrote: "Perhaps someday in the dim future it will be possible to advance the computations faster than weather advances and at a cost less than the saving to mankind due to the information gained. But this is a dream."

Now Richardson's dream has come true—the computations can be advanced faster than the weather changes, and the resulting information provides a highly favorable ratio of benefits to costs. Will the modelers' dream of a two-week weather forecast also become a reality, given sufficient time, hard work, and detailed knowledge of the state of the real global atmosphere at a given point in time? Optimists say "probably," skeptics say "maybe," but few voices are heard declaring "never."

State-of-the-Art Supercomputing

Scientific computing has been an integral part of research at NCAR since its creation. In its earliest days, NCAR used computers at the University of Colorado and the Boulder laboratories of the National Bureau of Standards. But a state-of-the-art computing facility was one of the resources that the center planned to make available to the university atmospheric science community as well as its own scientists.

NCAR acquired its first computer in 1964. It was a Control Data Corporation 3600 with 32,768 48-bit words of memory, less than most personal computers have today. The center soon moved up to its first "supercomputer," a Control Data 6600. Since the 1960s, scientists at NCAR and the universities have continued to demand increasingly greater speed and larger memory to run more realistic models and analyze more complex data sets. NCAR's Scientific Computing Division (SCD) has responded by working to maintain a state-of-the-art facility with the fastest computers that are available. This has meant acquiring a new computer every few years.
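For comparison with later machines, the 3600's capacity works out as follows (expressed in conventional 8-bit bytes, a unit the 48-bit-word 3600 did not itself use):

```python
# Back-of-the-envelope conversion of the CDC 3600's memory, 32,768 words of
# 48 bits each, into bytes and kilobytes (assuming the usual 8-bit byte).
words, bits_per_word = 32_768, 48
total_bytes = words * bits_per_word // 8
print(total_bytes, "bytes =", total_bytes // 1024, "KB")
```

That is, roughly 192 KB, modest even by the standards of the personal computers that followed two decades later.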

In 1971, NCAR replaced its Control Data 6600 with the next generation, a Control Data 7600. By 1974, a search was under way for a successor to the 7600. Seymour Cray, who designed the 7600 for Control Data Corporation, had started his own company, and NCAR acquired its first CRAY-1 in 1977. A second CRAY-1 was added in 1983, and the 7600 was retired. The two CRAY-1s increased NCAR's computing power to ten times what it had been with the 7600.

NCAR has ordered a new supercomputer that will bring another five-fold increase, to 50 times the computing power that was available just ten years ago. In 1986, the Scientific Computing Division will acquire a next-generation supercomputer, the CRAY X-MP/48. This will allow NCAR and university scientists to attack problems that the CRAY-1s cannot handle—more realistic climate simulations, thunderstorm and tornado modeling, three-dimensional chemical-dynamical models for studying problems such as acid precipitation, and models of the solar cycle and the general circulation of the ocean.




Executive editor Lucy Warner, lwarner@ucar.edu
Prepared for the Web by Jacque Marshall
Last revised: Tue Oct 17 10:56:25 MDT 2000