The Fastest Computers in the World

Sat, 07/31/2004 - 8:00pm

The prospects for another Earth-Simulator-like event in 2005 are very good
Horst D. Simon

Since 1993, the TOP500 list of the world's fastest supercomputers has been released twice a year. The publication of the 23rd list a few weeks ago during the International Supercomputer Conference in Heidelberg, Germany, was a much-anticipated and closely watched event.

Artist's rendition of Blue Gene, a full-scale BG/L with a 360 Tflop/s peak scheduled to become fully operational in early 2005. The computer's name is derived from its principal intended purpose: to model the folding of human proteins.

Over the years, the TOP500 has become one of the most important tools of the high-end computing community for assessing the state of the field, in particular for detecting trends in the use of processors and architectures, the market share of vendors, and the geographical distribution of supercomputers. With its success, however, the list has also created its own dynamics. The simple fact that there is a ranked list has led to a sometimes unhealthy competition for the top spots, and many vendors and sites go to great lengths to land a position among the Top 10.

Japanese Earth Simulator

One particular event that attracted worldwide attention was the introduction of the Japanese Earth Simulator system, which took the number one spot for the first time on the 19th TOP500 list in June 2002. Several factors, widely discussed in the high-end computing community since that time, contributed to the impact that the Earth Simulator had:
• its vector architecture, which ran against the trend toward commodity processors;
• its high sustained performance, which was quickly benchmarked on several key applications;
• its high system price, generally listed at about $400 million; and
• finally, the fact that it was so far ahead of the competition. The Earth Simulator had a higher performance than the next 19 systems on the list combined, when measured by the Rmax value, the performance of the largest feasible run of the LINPACK benchmark in Tflop/s.

This seemed truly remarkable and stunned the worldwide supercomputing community at the time. Jack Dongarra, Director of the Innovative Computing Laboratory at the University of Tennessee, was quoted in The New York Times: "In some sense we have a Computenik on our hands." But was it really such a unique step forward in the continuing race for supercomputer performance? Intriguingly, the TOP500 list itself can tell us.

Impact of top systems

Erich Strohmaier, founding co-editor of the TOP500 List, and I have produced a number of rankings of historically important supercomputers on the list. One way to measure the impact of a particular new entry to the list is by computing the fraction of its Rmax performance compared to the total performance of the list at the time. By computing this ratio, the actual performance level is factored out, and we can compare systems that were introduced at different times in the last dozen years.
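The metric described above can be sketched in a few lines of code. The function below is a straightforward reading of the definition (one system's Rmax divided by the sum of Rmax over the whole list); the Earth Simulator's 35.86 Tflop/s Rmax is from the TOP500 record, but the other entries here are illustrative stand-ins, not the actual June 2002 list.

```python
def rmax_share(system_rmax, all_rmax):
    """Fraction of the list's total LINPACK Rmax held by one system."""
    return system_rmax / sum(all_rmax)

# Hypothetical shortened list in Tflop/s, with a dominant top entry.
# Only the 35.86 (Earth Simulator) figure is real; the rest are made up.
rmaxes = [35.86, 7.73, 7.23, 5.69, 4.46]
share = rmax_share(35.86, rmaxes)
print(f"{share:.1%}")  # → 58.8% on this toy list
```

Because the ratio factors out the absolute performance level, it allows a fair comparison between a 1994 machine and a 2002 machine, even though their raw Tflop/s figures differ by orders of magnitude.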

ASCI Red interconnections. The full ASCI Red achieved 1.3 Tflop/s in June 1997.

It turns out that, even by this measure, the Earth Simulator was unique. In June 2002, it accounted for a whopping 16.2 percent of the performance of the entire list — one sixth of the list's performance concentrated in a single machine. For comparison, Table 1 lists the other machines that have occupied the number one spot of the TOP500, along with the fraction of the total list performance each was able to capture.

While this list confirms from yet another perspective the unique position of the Earth Simulator in the history of supercomputing, there are several other interesting observations that one can make. First of all, there are three Japanese machines among the seven systems that occupied the number one spot. This is not at all reflective of the list in its entirety, where only about 15 to 20 percent of the systems are based in Japan. Clearly, Japan has a tradition of building a sequence of very large, architecturally unique systems for the computational science community, as the Numerical Wind Tunnel (NWT) and the CP-PACS system show. Interestingly, all three machines were custom-built for a specific set of applications: computational fluid dynamics on the NWT, lattice QCD on the CP-PACS, and climate and geoscience applications on the Earth Simulator. Also, one cannot fail to notice that the three major Japanese vendors each had their turn in producing the leading machine of the country. In historical perspective, while still imposing because of its great performance, the Earth Simulator can be viewed quite simply as the next logical step in a progression of Japanese super systems.

With a peak performance of 12.3 Tflop/s, ASCI White helps maintain the safety and reliability of the U.S. nuclear stockpile by simulating in three dimensions the aging and operation of nuclear weapons.

All the U.S. machines on this list of super systems are based at labs of the National Nuclear Security Administration (NNSA) of the Department of Energy (DOE). The machines introduced after 1995 are all associated with the Advanced Simulation and Computing (ASC) Program in NNSA. Clearly, the ASC program has been the leader in the U.S. high-end computing community in the last decade. However, the table also reveals an imbalance: the only U.S. machine that was used for open basic science was the CM-5 at LANL, whereas all three Japanese systems focused on basic science applications.

What will happen next?

Over the years, the TOP500 list has documented the incredible performance gains of the installed base of high-end systems. Total performance of the TOP500 systems has grown exponentially since the inception of the list. Therefore, it is fairly easy to predict that the sum of the performance of all the systems will exceed 1 Pflop/s by November 2004 and possibly reach 1.5 Pflop/s by June 2005. Even against this rapidly rising baseline, the prospects for another Earth-Simulator-like event in 2005 are very good.
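The projection above follows directly from exponential extrapolation. As a sketch, the snippet below assumes a mid-2004 total of roughly 0.8 Pflop/s and performance roughly doubling per year; both numbers are assumptions chosen to match the article's predictions, not figures from the list itself.

```python
def project_total(current_tflops, annual_growth, years):
    """Project total list performance forward under exponential growth."""
    return current_tflops * annual_growth ** years

# Assumed starting point: ~800 Tflop/s in mid-2004, doubling annually.
nov_2004 = project_total(800.0, 2.0, 0.5)  # half a year out
jun_2005 = project_total(800.0, 2.0, 1.0)  # one year out
print(f"Nov 2004: ~{nov_2004:.0f} Tflop/s, Jun 2005: ~{jun_2005:.0f} Tflop/s")
```

Under these assumptions the half-year projection already clears 1 Pflop/s (about 1,130 Tflop/s), and the one-year projection lands near the 1.5 Pflop/s figure cited in the text.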

Table 1: Rankings of Historically Important Supercomputers by % Rmax of Total List Performance

Lawrence Livermore National Laboratory and IBM are gearing up to deploy a full-scale BG/L (Blue Gene) with a 360 Tflop/s peak in early 2005. Almost certainly this will be the next number one machine on the TOP500, but will it also be able to compete on a historical basis with the Earth Simulator for domination of the list? To do so, BG/L would have to produce a whopping 240 Tflop/s Rmax on LINPACK (about 16 percent of the projected 1.5 Pflop/s total performance for the whole list in June 2005). This is within the range of the projected performance of BG/L but, as usual in the ever-changing world of supercomputing, quite a challenge for my colleagues at LLNL. I wish them the best of luck in making BG/L the outstanding new high-end system that it promises to be.
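The 240 Tflop/s target follows from the share metric introduced earlier: matching the Earth Simulator's roughly 16 percent share of a projected 1.5 Pflop/s list. A quick arithmetic check (using the article's rounded figures):

```python
# Required Rmax for BG/L to match the Earth Simulator's share of the list.
target_share = 0.16        # ES captured roughly 16% of the June 2002 list
projected_total = 1500.0   # projected June 2005 list total, in Tflop/s
needed_rmax = target_share * projected_total
print(f"~{needed_rmax:.0f} Tflop/s")  # → ~240 Tflop/s
```

That would correspond to an Rmax/Rpeak efficiency of about two thirds on a 360 Tflop/s-peak machine, which is why the article calls it a challenge rather than a foregone conclusion.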

Horst Simon is Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory.

