UCAR Staff Notes

April 2005

Small but quick,
Blue Gene arrives at the mesa

If you walk by the new Blue Gene/L supercomputer in the Mesa Lab, the first thing you’ll notice is what’s not there.

Other supercomputers consist of rows of cabinets extending through much of SCD’s computer room. Blue Gene, however, resides mostly in just one average-sized cabinet.

But don’t let the small size fool you. That single cabinet, easily recognizable by its tilted profile, contains 2,048 processors with a top speed of 5.73 teraflops (trillion floating-point operations per second). It’s unofficially rated the world’s 33rd-fastest system according to Linpack, a benchmark set of equations used to judge the speed of computers.

“This kind of machine represents a breakthrough in many ways, and it is likely the forerunner of other machines to follow,” says Rich Loft, manager of SCD’s Computational Science Section.

Blue Gene promises to usher in a new era of supercomputing that will feature far denser and more energy-efficient machines. The manufacturer, IBM, used the innovative approach of ganging together thousands of slower processors that demand relatively little electricity (the ML system consumes about 25 kilowatts compared to approximately 400 kilowatts for NCAR’s flagship supercomputer, Blue Sky) and can be packed more tightly because they produce less heat. Hence the nickname for the new machine: Frost.

It may be a good fit for ML, where electrical power is limited. “This machine is a highly parallel system with a very low power footprint for the amount of computational power it presents,” Rich explains.
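To put that power footprint in perspective, here’s a back-of-envelope comparison in Python, dividing each machine’s peak speed by its approximate power draw. It uses only the figures quoted above; peak speeds overstate real workloads, so treat the results as ballpark numbers.

```python
# Rough computation-per-watt comparison from the figures quoted in the
# article: Blue Gene's 5.73-teraflop peak on roughly 25 kilowatts versus
# Blue Sky's 8.3-teraflop peak on roughly 400 kilowatts.
blue_gene = 5.73e12 / 25e3   # flops per watt
blue_sky = 8.3e12 / 400e3

print(f"Blue Gene: {blue_gene / 1e6:.0f} Mflops per watt")  # ~229
print(f"Blue Sky:  {blue_sky / 1e6:.0f} Mflops per watt")   # ~21
print(f"Ratio:     {blue_gene / blue_sky:.0f}x")            # ~11x
```

By this crude measure, Blue Gene delivers roughly eleven times as much computation per watt.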

Blue Gene is far more compact than Blue Sky, which has 50 cabinets that each contain 32 processors. The new system, by one key measure, is also faster. Although Blue Sky has a top speed of 8.3 teraflops, it achieves just 4.2 teraflops on the Linpack benchmark. Blue Gene, in contrast, runs Linpack equations at 4.6 teraflops; a much larger Blue Gene system at Lawrence Livermore National Laboratory recently set a new world record by running Linpack at more than 135 teraflops.
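The gap between peak and sustained speed is the crux of that comparison. A short Python sketch, again using only the figures quoted above, shows the fraction of its peak that each machine sustains on Linpack:

```python
# Sustained (Linpack) speed as a fraction of peak, both in teraflops,
# using the figures quoted in this article.
systems = {
    "Blue Sky":  {"peak": 8.3,  "linpack": 4.2},
    "Blue Gene": {"peak": 5.73, "linpack": 4.6},
}

for name, s in systems.items():
    efficiency = s["linpack"] / s["peak"]
    print(f"{name}: sustains {s['linpack']} of {s['peak']} TF peak "
          f"({efficiency:.0%})")
```

Blue Sky sustains about half of its peak speed on the benchmark, while Blue Gene sustains roughly 80 percent.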

But Blue Gene, at least for the moment, remains an experimental architecture with limited capabilities. Unlike Blue Sky, which can work on a number of experiments simultaneously, NCAR’s new machine processes just two tasks at a time because it can’t be easily partitioned. It also currently lacks a queueing system that would allow different projects to be handled in turn.

“Blue Sky is a production workhorse while Blue Gene is kind of an experimental whiz kid,” Rich says. “It’s an immature but important technology.”

SCD collaborated with researchers from CU-Denver and CU-Boulder to get NSF funding to bring Blue Gene to the mesa. The machine was delivered on March 15; just eight days later, it was already operational and undergoing tests. IBM helped defray the costs of the system, in part because the company wants to get feedback about the supercomputer’s performance.

Henry Tufo of SCD and CU (left) and Rich Loft of SCD are principal investigators for Blue Gene. (Photo by Carlye Calvin.)

Targeted research

Researchers will use Blue Gene to run several applications and to test and debug new pieces of system software as they become available.

One of the most intriguing applications, from an NCAR perspective, is the superparameterization of convective processes in clouds. This emerging and computationally intensive technique incorporates two-dimensional cloud models within a three-dimensional regional or global model. The process can capture cloud properties on scales as fine as one to five kilometers, thereby helping scientists simulate, with far greater accuracy, the impacts of clouds on climate.
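In code, the idea looks roughly like the sketch below. This is purely illustrative, not NCAR’s model: every column of a coarse global grid carries its own small two-dimensional cloud model, and each cloud model’s domain-averaged state is fed back to its parent column. The grid sizes and the toy relaxation “physics” are hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch of superparameterization (not NCAR's code): each
# column of a coarse global model embeds a small 2D cloud model, and the
# cloud model's domain-averaged state feeds back to the parent column.
NLON, NLAT, NLEV = 16, 8, 10   # coarse global grid (hypothetical sizes)
CRM_X = 32                     # horizontal points per embedded 2D cloud model

temp = 250.0 + 50.0 * np.random.rand(NLON, NLAT, NLEV)      # global temps (K)
crm = np.tile(temp[:, :, np.newaxis, :], (1, 1, CRM_X, 1))  # embedded models

def step_cloud_model(crm_state, column_temp, dt):
    """Advance one embedded 2D cloud model. A toy relaxation toward the
    parent column, plus noise, stands in for cloud-resolving dynamics."""
    noise = np.random.randn(*crm_state.shape)  # crude stand-in for convection
    return crm_state + dt * (column_temp - crm_state) + 0.1 * noise

def global_step(dt=0.1):
    """One time step of the coarse model with its embedded cloud models."""
    for i in range(NLON):
        for j in range(NLAT):
            crm[i, j] = step_cloud_model(crm[i, j], temp[i, j], dt)
            # Feed each cloud model's mean state back to its parent column.
            temp[i, j] += dt * (crm[i, j].mean(axis=0) - temp[i, j])

global_step()
```

The expense is easy to see: every coarse column runs its own cloud model at every time step. That is also abundant, independent work, which is exactly what suits a machine with thousands of processors.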

Another application, being tested by CU researchers, involves a set of numerical methods known as multigrid solvers. Such methods can accelerate solutions of the partial differential equations that govern atmospheric dynamics. If they can be made to run effectively on large numbers of processors, modelers could simulate additional detail about the atmosphere for the same computational cost. That’s where a highly “scalable” machine like Blue Gene comes in. (On a scalable machine, the time needed to solve a problem stays roughly constant as the problem size and the number of processors increase together, a property known as weak scaling.)
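For a feel of how a multigrid solver works, here is a minimal, self-contained sketch; it solves a toy one-dimensional Poisson problem and is not the CU researchers’ code. The idea: smooth the error on the fine grid, transfer the remaining residual to a coarser grid where it is cheap to reduce, then interpolate the correction back up.

```python
import numpy as np

# Minimal multigrid V-cycle for -u'' = f on (0, 1), u(0) = u(1) = 0.
# A toy stand-in for the multigrid solvers mentioned above.

def smooth(u, f, h, sweeps=3):
    """Damped Jacobi relaxation on the interior points."""
    for _ in range(sweeps):
        u[1:-1] += 0.5 * ((u[:-2] + u[2:] + h * h * f[1:-1]) / 2.0 - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)                    # pre-smooth on the fine grid
    if len(u) > 3:                         # recurse until the coarsest grid
        r = residual(u, f, h)
        rc = np.zeros((len(u) + 1) // 2)   # restrict residual (full weighting)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
        e = np.zeros_like(u)               # prolong correction (linear interp.)
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        u += e
    return smooth(u, f, h)                 # post-smooth

n = 129                                    # 2**7 + 1 points coarsen cleanly
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)           # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(10):                        # a few V-cycles converge quickly
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

Because each cycle does a fixed amount of work per grid point, the cost grows only linearly with the number of unknowns, which is what makes the approach attractive on a machine with thousands of processors.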
Blue Gene will also be used for other types of applications, such as wildfire and flight-test simulations.

Rich, a 10-year veteran of SCD, says he has rarely been this excited by a new technology. When IBM created Blue Gene, it was for specific research applications at a single Department of Energy lab. But the architecture demonstrated such great potential that IBM began retooling it for broader types of research.

As Rich puts it, “What we have here is a computer that’s escaped from the lab.”

He adds that Blue Gene may soon evolve into a more general-purpose architecture. “I suspect this machine is the harbinger of other machines that will be increasingly flexible and usable for supercomputing,” he explains. “We’re helping IBM learn to evolve this technology by providing them with customer feedback based on experience with actual applications.”

• by David Hosansky

 
