

September 1999

The IBM SP kicks off a new era for SCD

Editor's note: The fall issue of the UCAR Quarterly will introduce NCAR's new IBM SP supercomputer to readers throughout UCAR's member institutions. In this issue of Staff Notes Monthly, UQ editor Carol Rasmussen gives us a sneak preview of her feature story on the IBM SP. We're also including answers to a few questions that UCAR/NCAR/UOP staff may have. •Bob Henson

Aaron Andersen (right) guides movers and the new IBM SP into the SCD machine room. (Photo by Carlye Calvin.)

The Scientific Computing Division welcomed the newest member of its computing family with the arrival of an IBM RS/6000 SP at the Mesa Lab on 11 August. The $6.2 million machine arrived on schedule and without a hitch. Not only is the IBM machine NCAR's fastest computer to date, it heralds a new kind of architecture for SCD: clustered, distributed-memory supercomputing.

With an initial delivery of 160 nodes, each containing two processors, the supercomputer offers a peak speed of 204 gigaflops, more than doubling SCD's peak capacity. The SP also includes 512 megabytes of memory per processor and 2.5 terabytes of disk space. Use of the new computer will be equally divided between NCAR's community users and those in the Climate Simulation Laboratory.

For NCAR, the arrival of the SP "begins an aggressive transition to clustered shared-memory processors, following the computer industry," says SCD director Al Kellie.

This chart depicts major SCD computers from the 1960s onward, along with the sustained gigaflops (billions of floating-point calculations per second) attained by the SCD machines from 1986 to the end of fiscal year 1999. Arrows at right denote the machines that will be operating at the start of FY00. Possible upgrades to the IBM SP that arrived in August should further fuel SCD in the coming year. The division is aiming to bring its collective computing power to 100 gigaflops by the end of FY00, 200 gigaflops in FY01, and 1 teraflop by FY03. (Illustration by Mike Shibao.)

The architecture of the IBM computer differs from that of vector computers such as NCAR's CRAY C90 and also from that of a massively parallel machine like our 128-processor SGI Origin 2000. In a vector computer, all processors have access to the same shared memory. The Origin is a distributed-shared-memory computer: memory is physically distributed among the many processors, but it can be programmed as if it were shared. The SP and other distributed-memory clustered machines consist of many nodes; within each node, memory is shared by that node's processors. Between nodes, information is exchanged by message passing, using MPI (the Message Passing Interface, an industry standard) or other means. Programming a single node can be done with the same shared-memory techniques used on the CRAY C90 and the SGI Origin. However, programming across the multiple nodes of the IBM SP requires distributed-memory techniques.
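To make the between-node, message-passing style concrete, here is a minimal sketch in C against the standard MPI library. It is an illustration only, not code from any NCAR model; the 1,000-term sum and the numbers involved are made up for the example.

/* Minimal MPI sketch: each process computes a partial sum over the
   slice of indices it owns, then the pieces are combined explicitly.
   On a distributed-memory cluster no process can simply read another
   process's memory; values move only through calls such as MPI_Reduce. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs, i;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total process count */

    /* Each process works only on the indices it owns. */
    for (i = rank; i < 1000; i += nprocs)
        local += (double)i;

    /* Explicit communication replaces shared memory: the partial
       sums are combined onto process 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f (computed by %d processes)\n", global, nprocs);

    MPI_Finalize();
    return 0;
}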

Why change architecture?

This IBM computer is only the first step in a transition toward clustered computing expected to send our computing power skyrocketing. NCAR director Bob Serafin says, "I think we'll have, within a year, a very substantial increase in computing capacity at NCAR--a factor of ten above what we have now."

Al Kellie. (Photo by Carlye Calvin.)

Al points out that all U.S. vendors are currently offering clusters of shared-memory systems to do the biggest computing jobs. According to an SCD memo to users, "This trend is independent of whether such systems are composed of vector processors or microprocessors, and it is driven by both market forces and limits on scalability for building large systems." Computation on the new architecture is also considerably cheaper than on a vector computer: about $100,000 to $150,000 per sustained gigaflop, roughly an order of magnitude less than for vector computing. "At that price we can't afford not to use these machines," Al says.

In choosing which computer to buy, Al adds, "We wanted to capitalize on the vendor that had the most mature support for coding, porting, optimization, and conversion. IBM has the richest support structure out there. They wanted NCAR as a client."

Several other research centers have acquired similar IBM systems in the past year, including NOAA's National Centers for Environmental Prediction (NCEP) and the U.S. Department of Energy's Oak Ridge National Laboratory. NCAR has also signed a memorandum of understanding with NSF's two Partnerships for Advanced Computational Infrastructure centers (the San Diego Supercomputer Center and the National Center for Supercomputing Applications). The collaboration includes evaluation of computers offered by various vendors, training on the new architecture, and outreach to users.

The bottom line is that clusters of shared-memory computers represent the highest level of supercomputing currently available, according to Rodney James, a software engineer in SCD's Computational Science Section. Sally Haerer, head of SCD's Technical Support and Development Section, is leading NCAR's training efforts. According to Sally, "There are users who ask, 'Why don't you give us the same old thing but with more power?' If that existed we probably would, but it doesn't."

Supporting the porting

Lou Bifano, IBM vice president for the SP series, grants an interview at a Mesa Lab press conference on 11 August. (Photo by Carlye Calvin.)

NCAR has been preparing for months to support those inside and outside the institution who must transfer their computer programs to the new machine. "We're converting 1,500 users to this system," says Sally, "everything from high-visibility, high-impact codes [such as NCAR's climate system model] to single users writing a code to do their research at their institution. We'll do everything from hand-holding to broad lectures."

SCD has crafted a three-tiered training approach. First, users will get an overview of the system, architecture, computing environment, and programming strategies and paradigms. Then, says Sally, "They're going to have to know what they need to do to prepare to port their codes [transfer them to the IBM]. Some codes are more standardized than others, some codes have more 'Crayisms' than others." Finally comes the actual porting. "The support [that] users will require at that stage is going to vary greatly depending on the code, its complexities, and how parallelized it already is."

Rodney James and Steve Hammond. (Photo by Carlye Calvin.)

According to Steve Hammond, head of SCD's Computational Science Section, about 90% of the jobs that currently run on SCD's CRAY J90s are single-processor jobs. For these users, conversion may be relatively painless. "If they're happy with their performance now," he says, "they'll be happy on the IBM." On the other hand, Rodney says, "The people who are doing the grand-challenge problems will have to use a cluster of shared-memory machines. Otherwise, it may be sufficient to use one shared-memory node. You [will] have to decide if you want to do the work of converting."
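For jobs that fit on a single node, the shared-memory style Rodney describes is enough, with no message passing at all. The sketch below shows what that can look like in C using OpenMP directives; OpenMP is offered here only as one example of a shared-memory directive set, since the article itself does not name a particular one.

/* Shared-memory sketch for a single node: every processor sees the same
   arrays, so the loop is parallelized with a compiler directive rather
   than explicit message passing. OpenMP is used as an illustrative
   directive set; compile with the compiler's OpenMP option. */
#include <stdio.h>

#define N 1000000

static double a[N], b[N];

int main(void)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = (double)i;
    }

    /* The iterations are divided among the node's processors, and the
       reduction clause combines each processor's partial sum safely. */
#pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %.0f\n", sum);
    return 0;
}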

Like the complexity of their codes, the users' technical know-how also varies greatly. "Some users just need the exact syntax for the IBM and they're ready to go," Sally says. Others need an introduction to the basic philosophy of this kind of programming, training in message passing, and other groundwork. Sally has offered two introductory classes for users who ran more than one general accounting unit (an allotment of supercomputer time) on NCAR computers last year. Since not all users will go through the transition at the same time, classes will be offered as long as needed.

All the support staff emphasize that their goal is to transition users to the new architecture in general, not only to the IBM. SCD plans to upgrade the IBM to four processors per node next spring, and some additional new machines are also in the long-term plan.

"It's not quite SCD as usual," says Sally. "We're still on a steep learning scale. The programming paradigm is so different. We may not be able to answer even the simple problems immediately, as we did before. We're going to lean a lot on the programmer-scientists. Right now Steve Hammond's group has the numerical analysis and practice on the new paradigm, but the consulting office is growing that expertise." SCD has also enlisted IBM's assistance in training staff and helping with such internal issues as tuning the system to be most effective for the greatest number of people.

How do you move $6.2 million of hardware? Very carefully. (Photo by Carlye Calvin.)

"Fraught with angst"

No one is expecting the changeover to be a snap. "Change is fraught with angst," says Al. "There's a lot of promise in these systems, but realizing that promise represents some work. The most significant issues are that the science community understands the need to make the transition, and [that the change] doesn't retard the progress of the science."

Some scientists are actively looking forward to using the new architecture. For example, Mark Rast (HAO) will now be able to run his solar magnetohydrodynamics code here; he formerly used the now-defunct Pittsburgh Supercomputing Center. "This is scalably parallel code, so you can run it across as many processors as you can get your hands on," he says. "Recently we discovered that sunspots are surrounded by bright rings. The problem I want to do next is to model sunspots to see if there's a convective origin for that radiation surrounding sunspots. We're pretty much set to go."

Sally sums up, "We're trying to move all of our community forward in the best way we can, with everybody pitching in for this critical period. It's going to be a jolt, but once done, our user community will be extremely well poised to run on whatever our country comes up with in the way of supercomputers." •Carol Rasmussen

You might be wondering . . .

The IBM SP, code-named blackforest, in its new home. (Photo by Lynda Lester.)

Why is the new IBM called "blackforest"? One reason is its appearance. The computer's mainframe consists of eight tall, dark towers. The name also refers to the forested area along the Palmer Divide between Colorado Springs and Denver, which includes the town of Black Forest. The name came from SCD's Aaron Andersen, who was chatting with an IBM engineer one day before the system's arrival. "I don't remember if I said it or he said it, but someone said, 'Yep, and this row will be the black forest,' referring to the system's appearance. Then I put the two together and suggested the name."

The name blackforest continues an SCD tradition of naming big machines after Colorado fourteeners and other state landmarks. However, SCD is also starting to name some of its new machines after the functions they serve. For instance, an SGI computer recently acquired to perform data processing is called dataproc.

Are the climate system model (CSM) and the NCAR/Penn State mesoscale model (MM5) able to run on the IBM? The CSM will migrate as soon as possible, according to CGD's Jim Hack. The conversion will be made easier by the fact that the CSM is made up of several parallel components. The atmospheric portion, known as the community climate model, or CCM3, includes message-passing capabilities and has already been run on the new platform. Many of the CSM's 300 users run various components of the model (atmosphere, land, ocean, and sea ice) on machines outside of NCAR. The full CSM is primarily run on SCD's machines because of its size and complexity.

A parallel version of the MM5 is ready to go. It was created years ago at Argonne National Laboratory through a collaboration with MMM and is currently running in production on an IBM SP at the Air Force Weather Agency (at Offutt Air Force Base in Omaha, Nebraska). The MM5's successor-to-be, the weather research and forecasting (WRF) model now under development, is being designed to run on a variety of platforms.

John Michalakes. (Photo by Carlye Calvin.)

Aside from the code conversion, what other issues are emerging with the change in architecture? Getting data out of the machine efficiently is a growing concern, says John Michalakes (MMM and Argonne). Programs may run faster when more processors are used, but it still takes a relatively fixed amount of computing time for input and output tasks (I/O), "unless you do something about it," says John. The result is that I/O takes an increasing fraction of the total time for a given job as more processors are involved. For example, a program run on one processor might require only a small percentage of its time for I/O, "but on 64 processors your I/O time might be approaching 50%--not because the I/O time has gotten longer but because the computation time has gotten so much shorter."
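A rough model makes the point. The numbers below are hypothetical, chosen only to reproduce the qualitative behavior John describes, not measurements from any NCAR job: if computation time shrinks in proportion to the number of processors while I/O time stays fixed, I/O's share of the total runtime keeps climbing.

/* Illustrative scaling model only: computation time is assumed to drop
   as t_compute / p while I/O time stays fixed, so I/O's share of the
   total runtime grows with the processor count p. */
#include <stdio.h>

int main(void)
{
    double t_compute = 6300.0; /* hypothetical serial compute time (s) */
    double t_io      = 100.0;  /* hypothetical fixed I/O time (s)      */
    int p;

    for (p = 1; p <= 64; p *= 2) {
        double total = t_compute / p + t_io;
        printf("processors = %2d   I/O fraction = %4.1f%%\n",
               p, 100.0 * t_io / total);
    }
    return 0;
}

With these hypothetical numbers, I/O is under 2% of the runtime on one processor but roughly half of it on 64 processors, which is the pattern John describes.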

What else is in the works for SCD? The current agreement with IBM allows SCD to upgrade blackforest this fall, potentially doubling the number of nodes in the current configuration. An option for next spring allows for bringing the machine from two to four processors per node, roughly doubling its peak speed.

What about the other SCD supercomputers? SCD will be decommissioning some of its older machines over the next year, gradually winding down their use as science applications are ported to the new IBMs. The IBM SP is taking on much of the computing formerly done by the CRAY C90 (antero). That machine is being decommissioned as soon as feasible, probably within the next few months. Two J90 "classics" (aztec and paiute) are scheduled for decommissioning in roughly six and nine months, respectively; two others (ouray and chipeta) will be retained, along with the SGI Origin 2000 (ute).

Where can I learn more about blackforest? See SCD's official Web page for the new IBM SP. •BH



Edited by Bob Henson, bhenson@ucar.edu
Prepared for the Web by Jacque Marshall