
31st TOP500 List Topped by First-ever Petaflops Supercomputer

Tue, 09/02/2008 - 12:17pm
Hans Werner Meuer

15 years of International Rankings

Figure 1: Manufacturers’ shares

One of the interesting things about us humans is how we like to quantify things. For example, we make lists such as the Top 10 songs, the Fortune 500 or the 100 best restaurants in San Francisco. At the same time, we also like to establish numerical barriers almost magical in their psychological significance. Examples include flying faster than the speed of sound, running a mile in less than four minutes, and going faster than 150 miles per hour on a motorcycle.

In the world of high performance computing, building a system that performs at the petaflop/s level, carrying out 1,000 trillion floating point operations per second, has been such a barrier for more than 10 years. With the June 9, 2008, announcement that the IBM Roadrunner supercomputer at the U.S. Department of Energy’s Los Alamos National Laboratory achieved a sustained 1.026 petaflop/s while running the Linpack benchmark, the first such system has entered the record books.

This performance milestone will, of course, land atop a list — in this case the TOP500 list of the world’s most powerful supercomputers, which I helped launch 15 years ago. When the thirty-first edition of the twice-yearly TOP500 list was released on June 18, it was headed by Roadrunner, which is twice as fast as IBM’s BlueGene/L system at Lawrence Livermore National Laboratory, the holder of the top spot on the list since November 2004.

While the news about Roadrunner has caused quite a stir in the community, the implications of this new reality have not yet fully sunk in. At the International Supercomputing Conference (ISC) held June 17-20 in Dresden, a special panel discussion on "Roadrunner — the First Petaflop/s System in the World and its Impact on Supercomputing" was on the conference program. The panel included HPC experts from around the world.

Table 1: First and 30th TOP500 lists by country

For me personally, breaking the petaflop/s barrier is the equivalent of a runner finally running the 100-meter race in 9.5 seconds — a level of performance everyone hopes for but that proves elusive to actually achieve. The significance of this milestone is that, once this performance level has been reached with a real system, there is no longer any (psychological) doubt that you can improve even further. The situation is quite similar to 11 years ago, when Intel ASCI Red became number one on our ninth TOP500 list in June 1997 as the first teraflop/s system on earth. And remember, it was only 22 years ago, in 1986, that the legendary Cray-2 passed the 1 gigaflop/s level.

Roadrunner is a hybrid supercomputer built by IBM, using 6,912 dual-core Opteron processors from Advanced Micro Devices (AMD) and 12,960 of IBM’s Cell eDP accelerators. At the end of May, the system posted a sustained performance of 1.026 petaflop/s running the Linpack benchmark. This test consisted of solving a dense system of linear equations with more than 2 million equations and an equal number of unknowns.
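To get a feel for what such a run involves, here is a back-of-the-envelope sketch (my own illustration, not a figure from the official run), using the standard Linpack operation count of roughly 2/3 n^3 + 2 n^2 for an n-by-n system together with the article’s figures of about 2,000,000 unknowns and 1.026 petaflop/s:

\[
\text{operations} \approx \tfrac{2}{3}\,(2\times 10^{6})^{3} \approx 5.3\times 10^{18},
\qquad
\text{time} \approx \frac{5.3\times 10^{18}}{1.026\times 10^{15}\ \text{flop/s}} \approx 5{,}200\ \text{s} \approx 1.4\ \text{hours}
\]

The actual benchmark run used a somewhat larger matrix, so this indicates only the order of magnitude of a record-setting Linpack run.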

A little background on the TOP500 list

The TOP500 project was launched in 1993 to provide a reliable basis for tracking and detecting trends in high performance computing. Twice a year, a list of the sites operating the world’s 500 most powerful computer systems is compiled and released. The best performance on the Linpack benchmark is used as the measurement for ranking the computer systems. The list contains a variety of information, including the systems’ specifications and major application areas. Information on all TOP500 lists issued to date is available at www.top500.org.

Table 2: First and 30th TOP500 lists - Manufacturers

The TOP500 list actually grew out of the meeting that has become the ISC. From 1986 through 1992, the Mannheim supercomputer statistics were presented to participants of the Supercomputer Seminars at Mannheim University, and we noticed an increased interest in this kind of data from year to year. The statistics simply counted the vector computer systems installed in the U.S., Japan and Europe, since in the mid-1980s a supercomputer was synonymous with a vector computer. Counting the vector computers installed worldwide depended primarily on input provided by the manufacturers of the systems, which made the statistics less reliable. Whereas we knew very well which vector systems existed in the U.S. and Europe, information regarding systems in Japan was much more difficult to collect. We therefore contacted the three Japanese vector computer manufacturers — Fujitsu, NEC and Hitachi — for information on all systems installed in Japan and used this not entirely reliable data as the basis for our yearly estimations.

In 1992, we released the last Mannheim statistics, counting 530 supercomputers installed worldwide. Figure 1 shows the result of our seven-year activity regarding the share of the different manufacturers in the supercomputer market. Cray clearly led with a constant share of about 60 percent; the second U.S. manufacturer, CDC (Control Data Corporation), had been doing rather well with just under 10 percent until the end of the eighties, when their share started to drop, and they were completely out of the supercomputer business by 1991. The Japanese vector computer manufacturers Fujitsu, NEC and Hitachi entered our statistics in 1986 with a combined share of 20 percent and were able to expand their share to about 40 percent in 1992, with Fujitsu clearly in the lead at 30 percent of all vector computers installed worldwide.

Figure 2: Supercomputer installations worldwide

Though useful, the Mannheim supercomputer statistics lacked a reliable database. Additionally, from the early nineties on, vector computers were no longer the only supercomputer architecture; massively parallel systems such as the CM-2 of Thinking Machines (TMC) had entered the market. What we needed was a method that defined what constituted a "supercomputer" and that could be updated on a yearly basis.

This is why I, along with Erich Strohmaier, started the TOP500 project at the University of Mannheim, Germany, in spring 1993. Here are our guiding principles:

• List the 500 most powerful computers in the world

• Use Rmax, the best Linpack performance, as the benchmark

• Update and publish the TOP500 list twice a year, in June at the ISC in Germany and in November at SC in the U.S.

• Make all TOP500 data publicly available at www.top500.org

Many people who are familiar with the list have probably discussed its relevance, as well as wondered about certain aspects of the project. Here are answers to the questions we hear most often:

Why is it "the 500 most powerful computers? One reason is that the last time we counted the supercomputers worldwide in 1992, we ended up with 530. And another reason surely is the (emotional) influence of the Forbes 500 lists, e.g. of the 500 richest men or the 500 biggest corporations in the world.

Figure 3: Supercomputer installations Asia

"Most powerful" is defined by a common benchmark, for which we had chosen Linpack. But why Linpack? Linpack data, above all Rmax, are well known and easily available for all systems in question. Strictly speaking, TOP500 lists computers only by their ability to solve a set of linear equations, A x = b, using a dense random matrix A.

An alternative to updating the TOP500 list twice a year would be to continuously update the list. Why don’t we do this? First, updating the TOP500 list is a time-consuming and complex process. Second, we thought that a biannual publication would be a much better way to show significant changes, which the HPC community is primarily interested in, and this has proven to be true over the years.

In addition to Erich Strohmaier, now with Lawrence Berkeley National Laboratory (LBNL), USA, and me, the other TOP500 editors are Jack Dongarra, the "Father of Linpack," of the University of Tennessee, USA, and Horst Simon, also of LBNL, who has supported the TOP500 project from the very beginning and officially joined the project in 2000.

The reasons behind the list’s staying power

After more than 15 years and 31 lists, we have managed to establish the TOP500 list among HPC users, manufacturers and the media as the instrument for analyzing the HPC market. One of the most important reasons for TOP500’s success is that we foster competition between countries, manufacturers and computing sites.

Competition between countries

When Japan’s Earth Simulator grabbed the number one spot in 2002, it triggered a large-scale response in the U.S., including the creation of the Leadership Computing Program by the U.S. Department of Energy, proof that such competition is taken very seriously.

Figure 4: Manufacturers/systems

Based on our 1992 Mannheim supercomputer statistics, we expected a neck-and-neck race between the U.S. and Japan for our first TOP500 list (see Table 1). However, the first TOP500 list showed the U.S. clearly leading with 45 percent of all TOP500 installations, and Japan was far behind with only 22 percent.

If we look at the thirtieth TOP500 list, published in November 2007, we see that U.S. dominance is even greater today: the U.S. now has a share of 56.6 percent of all systems installed, while Japan’s share is only 4 percent. Even the U.K., with a 9.6 percent share, and Germany, with a 6.2 percent share, are ahead of Japan, which is followed closely by France with 3.4 percent.

The overall development of the various countries’ share through the 30 previous lists is also very interesting (see Figure 2). In 1993, the U.S. started with a huge share of 45 percent, which they have even managed to expand slightly. Japan, however, started with a 22 percent share but has fallen back significantly. In Europe, Germany, which had always clearly been in front of the U.K., is now far behind the U.K.

Figure 3 illustrates the development of the supercomputer installations in Asia since 1993. It shows the rapid drop in Japan’s share and indicates that China and India will enter the HPC market as new players in the medium term. Future lists will bear out whether this trend continues.

Competition between manufacturers

Vendors take a great deal of pride in both the ranking and number of their systems on the list. And, as might be expected in a fast-moving industry, there is a lot of change. As Table 2 shows, Cray Research was the clear leader on our first TOP500 list with a 41 percent share, ahead of Fujitsu with 14 percent. Third place was already held by TMC, a non-vector supercomputer manufacturer, with 10.8 percent, ahead of Intel with 8.8 percent. At that time, Intel still had its Supercomputer Division, which also produced non-vector supercomputers. Interestingly, today’s leading HPC manufacturers, IBM and Hewlett-Packard, were not represented on the first TOP500 list at all.

Table 3: Top 20 sites through 30 TOP500 lists

In the thirtieth TOP500 list of November 2007, IBM has the clear lead with a 46.4 percent share. The second position is held by Hewlett-Packard with 33.2 percent, and the leader of 1993, Cray Research (now Cray Inc.), is now down to 2.8 percent.

If we look at the development of the manufacturers since 1993 (see Figure 4), we notice that the HPC market has been very dynamic: in only 15 years, the market has seen a complete transformation. Cray has turned from the market leader into a minor player, and IBM — of virtually no importance at the very beginning — has become the dominant market leader. Hewlett-Packard — once a small HPC manufacturer represented in the first TOP500 list only by Convex, which they later took over — has established itself as number two, right after IBM. The importance of Sun Microsystems, which used to be number two among the HPC manufacturers, has dropped dramatically. But Sun is now trying to catch up with the other HPC players.

Competition between sites

Table 3 lists the 20 most powerful sites through the first 30 TOP500 lists. The percentage in the right-hand column is a site’s relative contribution to the total Rmax, averaged over the 30 lists. In this list, the U.S. leads with two-thirds of the sites (14), ahead of Japan with four centers (20 percent). The fact that the U.S. has the four most powerful sites in the world also shows its dominance as a consumer and producer of HPC systems. Europe is represented by Germany (Forschungszentrum Jülich) at position 18 and by the U.K. (ECMWF) at position 15. (Note that ECMWF is a European and not purely a U.K. site.)

Some final thoughts

Our simple TOP500 approach does not define "the supercomputer" as such; rather, a benchmark decides whether or not a system qualifies for the TOP500 list. The benchmark we decided upon was Linpack, which means that systems are ranked only by their ability to solve a set of linear equations, Ax = b, using a dense random matrix A. Therefore, any supercomputer architecture can make it into the TOP500 list, as long as it is able to solve a set of linear equations using floating point arithmetic. We have been criticized for this choice from the very beginning, but now, after 15 years, we can say that it was exactly this choice that has made TOP500 so successful. And there was, and still is, no alternative to Linpack. Any other benchmark would have been similarly specific, but would not have been so easily available for all systems, which is a very important factor, since compiling the TOP500 lists twice a year is a very complex process.

One of Linpack’s advantages is its scalability: it has allowed us to benchmark systems covering a performance range of nine to 10 orders of magnitude. It is true that Linpack delivers performance figures at the upper end of what real applications achieve; in fact, no realistic application delivers a better efficiency (Rmax/Rpeak) on a given system. But using peak performance instead of Linpack, which "experts" have often recommended to us, does not make any sense. We have seen many new systems that were not able to run the Linpack test because they were not stable enough; in fact, the Linpack test serves as a kind of first selection procedure for new HPC systems.
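As a concrete illustration of the efficiency measure (the Rmax value is taken from this article; the peak figure of roughly 1.38 petaflop/s for Roadrunner is an assumption on my part, as it is not quoted here):

\[
\text{efficiency} = \frac{R_{\max}}{R_{\text{peak}}} \approx \frac{1.026\ \text{Pflop/s}}{1.38\ \text{Pflop/s}} \approx 0.74
\]

In other words, Linpack sustains roughly three-quarters of the theoretical peak on such a system, a level that few real applications come close to.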

The misinterpretation of TOP500 results has led to a negative attitude toward Linpack. Politicians, for example, tend to see a system’s TOP500 rank as a general rank that is valid for all applications, which, of course, is not true. The TOP500 rank only reflects a system’s ability to solve a set of linear equations and says nothing about its performance on other applications. Therefore, the TOP500 list is not a tool for selecting a supercomputer system for an organization; for that, you would have to run your own benchmark tests that are relevant to your own applications. In this context, an approach such as the HPC Challenge Benchmark, consisting of seven different benchmarks that test different parts of a supercomputer, is important.

The TOP500 lists’ success lies in compiling and analyzing data over so many years. Despite relying solely on Linpack, we have been able to correctly identify and track all developments and trends over the past 15 years, covering manufacturers and users of HPC systems, architectures, interconnects, processors, operating systems, etcetera. And above all, TOP500’s strength is that it has proved an exceptionally reliable tool for forecasting developments in system performance.

It is very unlikely that another benchmark will replace Linpack as the basis for the TOP500 lists in the near future. And in any case, we would stick to the concept of a single benchmark, because this is the easiest way to trigger competition between manufacturers, countries and sites, which is extremely important for the overall acceptance of the TOP500 lists. Of course, we appreciate it when alternative benchmarks are introduced to complement Linpack. In fact, we are working on this already and encourage other HPC experts to come up with constructive suggestions, too.

Hans Werner Meuer is a professor at the University of Mannheim and Prometeus GmbH, Germany. He may be reached at editor@ScientificComputing.com.

Acronyms
AMD Advanced Micro Devices | ASCI Accelerated Strategic Computing Initiative | CDC Control Data Corporation | ECMWF European Centre for Medium-Range Weather Forecasts | ERDC U.S. Army Engineer Research and Development Center | FZJ Forschungszentrum Jülich | HPC High Performance Computing | ISC International Supercomputing Conference | LBNL Lawrence Berkeley National Laboratory | MSRC Major Shared Resource Center | NCSA National Center for Supercomputing Applications | NERSC National Energy Research Scientific Computing Center | TMC Thinking Machines Corporation

JAGUAR (#5 ranking, June 2008)
JAGUAR uses more than 31,000 processing cores to deliver up to 263 trillion calculations a second.
EKA (#8 ranking, June 2008)
CALLED EKA (Sanskrit for #1), the supercomputer at the CRL facility in Pune, India, follows a near-circular data center layout, unlike traditional hot-aisle/cold-aisle rows.
RANGER (#4 ranking, June 2008)
RANGER is the largest computing resource on the NSF TeraGrid, a nationwide network of academic advanced computing centers.
BlueGene/L (#2 ranking, June 2008)
THE UNUSUAL slant of BlueGene/L’s cabinets is a necessary design element to keep cooled air flowing properly around each cabinet’s processors.
ROADRUNNER (#1 ranking, June 2008)
CURRENTLY the world’s fastest computer, Roadrunner is the first supercomputer to use a hybrid processor architecture. It is housed at Los Alamos National Laboratory, which worked collaboratively with IBM for six years to deliver a novel architecture to meet the nation’s evolving national security needs.

 

The SC08 Conference
SC08 attendees see first-hand how high performance computing, networking, storage and analysis touch all disciplines, enhancing people’s ability to understand information, leading to new understanding, promoting interdisciplinary projects, and affecting the educational process through the use of computers for modeling and simulation in the classroom. Following the traditions set by the first SC Conference in 1988, SC08 is celebrating 20 years of unleashing the power of HPC, featuring the latest scientific and technical innovations from around the world. Bringing together scientists, engineers, researchers, educators, programmers, system administrators and managers, the conference provides a forum for demonstrating how these developments are driving new ideas, new discoveries and new industries, solving heretofore unsolvable problems in nanoscience, biotechnology, climate research, astrophysics, chemistry, fusion research, drug research, homeland defense, nuclear technologies and many other fields.

New for 2008 are two Technology Thrusts: Energy and Biomedical Informatics. The Energy Thrust will focus both on the use of high performance computing in renewable energy and energy efficiency research and on best practices and technology trends aimed at energy-efficient data centers. The Biomedical Informatics Thrust will focus on the use of grid and high performance computing technologies to support translational biomedical research, computational biology, large-scale image analysis, personalized medicine and systems biology.

Also a new conference component, the Music Initiative complements Austin’s title as the “Live Music Capital of the World.” While music and musicians may not readily come to mind when you think of supercomputing, it’s no secret that a large number of attendees are also composers, musicians and music lovers, as well as scientists, mathematicians and engineers. Scientists and programmers who create visualizations of their models and simulations were invited to compose or integrate music or sound to add to the viewer’s experience and understanding, and to share this work at SC08. The ViSCiTunes (pronounced "Vizzytunes") library is the first international collection of scientific compositions and visualizations with algorithmically-connected sound and music, aiming to enrich the viewer’s understanding of the science both visually and aurally.

Scientific Computing is proud to be an SC08 Media Sponsor. Visit us at Booth 1316. For more information: sc08.supercomputing.org