Driving the Need for Computing Power and Speed

Advancements in technical computing from 1980 to the future
Anne Fitzpatrick, Ph.D.

The Connection Machine, a massively parallel supercomputer designed by Thinking Machines Corporation in the late 1980s to work on simulating intelligence and life [5]

Broadly speaking, computing devices have played roles in laboratories since antiquity, from the time astronomers and navigators employed simple calculating technologies to assist with their work. But computationally based, large-scale research involving massive brute-force number crunching did not arise until the mid-twentieth century, with the advent of digital electronic computers. Such machines were initially built from vacuum tubes, which were later replaced by transistors and then by integrated circuits on silicon chips.

Architecturally, the first vacuum-tube computers, such as the University of Pennsylvania's Electronic Numerical Integrator and Computer (ENIAC) and other early electronic digital machines, were serial machines of the "von Neumann" type, in which every operation had to pass through a single central processor, or CPU. Since then, other architectures have been developed to increase computing speed and broaden methods of operation, including the vector designs first seen in the 1970s.
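To make the contrast concrete, the short C fragment below is an illustrative sketch only, not code for any particular machine: it computes the classic "a times x plus y" loop that became a standard kernel in scientific codes. A serial processor executes the loop one element at a time, while a vector machine such as the Cray-1 could stream whole arrays through pipelined arithmetic units.

#include <stdio.h>

#define N 8

/* Illustrative kernel: y[i] = a*x[i] + y[i].
   A serial (von Neumann) machine fetches and executes each iteration
   in turn; a vector machine issues the same arithmetic as pipelined
   vector instructions over the whole arrays. */
int main(void) {
    double a = 2.0, x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    for (int i = 0; i < N; i++)
        printf("%g ", y[i]);
    printf("\n");
    return 0;
}

The point is architectural: the same source code runs on either kind of machine, but the hardware determines whether the loop is processed element by element or as a single vector operation.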

Technology historian Donald MacKenzie has suggested that, given the enormous computational demands of nuclear weapons design and development, the needs of Los Alamos and Livermore in particular influenced Seymour Cray's earliest supercomputer architectures, especially in the 1960s and 1970s. [1] Later, other institutions and types of scientific problems shaped the direction of high-performance hardware and software: weather modeling, for example, bore on computer development in the vector-machine era.

The 1980s
The 1980s saw large increases in computing speed and power, along with some novel directions; in recent decades the United States and Japan have led the way in high-performance scientific computing. On August 12, 1981, IBM launched the open-architecture IBM PC, which employed a 4.77 MHz Intel 8088 microprocessor. The PC came equipped with 16 kilobytes of memory, expandable to 256 kilobytes, and had one or two 160-kilobyte floppy disk drives and an optional color monitor. Notably, the PC began moving scientific computing toward the desktop and away from a central machine room to which everyone went to run problems, raising the prospects for distributed and parallel computing. Throughout the 1980s, the Defense Advanced Research Projects Agency (DARPA) and academic researchers helped push forward the state of parallel computation.

Yet vector machines dominated the 1980s. And, although scores of specialized computer systems and companies have appeared and disappeared over the last couple of decades, a few stand out as the most influential in scientific settings.

During this period, Cray Research dominated in the United States. Founded in 1972 by Seymour Cray after he left Control Data Corporation, Cray Research was the world's leader in providing multi-million-dollar supercomputers for defense and other specialized laboratories. Los Alamos Scientific Laboratory had received the first Cray-1, then the world's fastest vector computer, in 1976 in order to simulate the behavior of nuclear weapons.

In 1982, Cray Research released the X-MP, which featured a multiprocessor approach; it was first available in a two-processor version and later in a four-processor variant. In 1985, the Cray-2 became available and could run at up to 1 billion operations per second, ten times faster than the Cray-1.

Cray Research introduced the Cray Y-MP in 1988, essentially a larger and faster version of the X-MP. The Y-MP allowed up to eight processors and was the world's first supercomputer to sustain over 1 Gigaflop on many applications. Many of these machines were delivered to the national laboratories and to commercial enterprises such as Boeing.

Cray Research had a few competitors. In 1983, Danny Hillis co-founded Thinking Machines Corporation to promote unique massively parallel systems, and the company developed the Connection Machine. The largest of these in the late 1980s was the CM-2 scalable parallel computer, which performed at 28 Gigaflops. Architecturally, the CM-2 was an array of simple, proprietary bit-serial processors directly connected to local memory; it could have up to 65,536 processors.

Thinking Machines' subsequent CM-5 system scaled from a base configuration of 32 processing nodes plus a control processor, and it could perform 64-bit floating-point arithmetic at a rate of 128 Megaflops. Yet competition in the commercial high-performance computing field was fierce, especially for smaller companies; Thinking Machines Corporation declared bankruptcy in 1994.

Los Alamos' HP ASCI Q, part of a DOE collaboration begun in 1995 to create leading-edge modeling and simulation capabilities to maintain the U.S. nuclear stockpile [5]

In 1989, Cray Research split into two companies when Seymour Cray left to head a spin-off, Cray Computer Corporation. There he developed the Cray-3, based on gallium-arsenide chips, but none were ever sold; Cray continued pursuing new designs until his untimely death in 1996.

Japan also entered the supercomputer market in the 1980s, building machines fairly similar to the Cray machines in architecture and speed. In 1982, Fujitsu released the vector VP100 and VP200 series, the latter sporting a peak rate of 500 Megaflops. The VP400 followed in 1985, achieving over 1 Gigaflop. In 1984, Hitachi designed the S810, with peak rates of up to 800 Megaflops; one of the first production models went to the National Institute for Molecular Science in Okazaki. In 1985, NEC delivered the SX-2 vector supercomputer, which achieved 46 Megaflops using a single processor. All of these Japanese machines had expandable vector capabilities. Although many were used for nuclear power research in Japan and in the Japanese national universities, others were shipped to Europe, Australia, and New Zealand for use in universities and in various scientific and industrial settings.

Fearing a severe technological lag behind the West, the Soviet Union likewise attempted to build supercomputers well into the 1980s. In 1970, Vsevolod Burtsev proposed the El'brus computer family, based on a symmetric multiprocessor, stack-based CPU architecture rather than vector pipelining, a choice that limited the machines' performance. [2] The El'brus-1 was completed in 1980 and ran at up to 15 million operations per second; in 1985, the El'brus-2 clocked a processing rate of 125 million operations per second. Both El'brus systems went into serial production but were used almost exclusively in defense institutions, such as the Space Monitoring Center in the Soviet Far East, and were driven by military rather than purely scientific research needs. Sadly, the Soviet Union's centralized economy, inefficient factory production system (El'brus was conceived in 1970 but not completed for a decade), and stifling government bureaucracy never allowed its scientists to catch up with the West or to create a flourishing domestic high-performance computer industry.

The 1990s
As the Soviet Union came apart, the 1990s witnessed a massive move toward parallel computation in science, in which hundreds or even thousands of standard desktop-type CPUs could be linked together to process one or more applications simultaneously. To keep in step, Cray Research offered its first massively parallel processing system, the Cray T3D, in 1993, followed in 1995 by the T3E, which later became the first supercomputer to sustain 1 trillion operations per second on a scientific application. But Cray machines were expensive compared with the new off-the-shelf clusters of CPUs that many scientists were starting to favor. This shift, combined with the United States' 1992 moratorium on nuclear testing, gave rise to the newest directions in computing.

Since 1995, the United States Department of Energy's (DOE) national laboratories and the American commercial computing industry have been developing a series of powerful scientific computers under the Accelerated Strategic Computing Initiative (ASCI). The program is a key part of the DOE's Science-Based Stockpile Stewardship program, which aims to maintain the American nuclear weapons stockpile without conducting actual nuclear tests.

ASCI Red, a 3-teraOPS (three trillion operations per second) machine built by Intel, was the first of these, installed at Sandia National Laboratories in 1997. A year later, Silicon Graphics, Inc. constructed a similar 3-teraOPS computer, Blue Mountain, at Los Alamos National Laboratory. Next came ASCI White at Lawrence Livermore National Laboratory, a 12-teraOPS IBM machine employing over 8,000 processors.

The present day
Today, computing in scientific laboratories is taking several different directions based on the types of problems people are trying to solve. Scientists currently favor two major architectures: clusters of Cray-inspired vector machines and clusters of scalar uni- and multiprocessors. [3] In addition, scientific computing is becoming increasingly concerned with software as well as hardware.

Because PCs are now so widely available, a popular type of distributed computing known as the Beowulf cluster, another form of parallel computing, is found in university science laboratories all over the world. Such clusters divide a problem among almost any number of commodity machines running standard open-source software such as Linux, with the pieces typically coordinated through message-passing libraries. Although these clusters are cheap and easy to build, they are not well suited to problems that require large amounts of shared memory or whose codes cannot be partitioned.
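As a minimal sketch of how such partitioning works, the short C program below uses MPI, the message-passing interface commonly deployed on Beowulf-style clusters. The toy problem, summing the integers 1 through N, and every name in it are illustrative assumptions rather than details of any machine described in this article.

#include <mpi.h>
#include <stdio.h>

/* Illustrative sketch: each node sums its own slice of 1..N, then
   MPI_Reduce combines the partial sums on rank 0. */
int main(int argc, char **argv) {
    const long N = 1000000;   /* size of the toy problem */
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Partition the range 1..N among the processes. */
    double local = 0.0, total = 0.0;
    for (long i = rank + 1; i <= N; i += nprocs)
        local += (double)i;

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d processes: %.0f\n", nprocs, total);

    MPI_Finalize();
    return 0;
}

Launched with a command such as mpirun -np 16 ./sum, the same source runs unchanged on 2 nodes or 200, which is precisely the economy that makes commodity clusters attractive.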

For now, the Japanese have pulled ahead with a contemporary type of vector supercomputer. The TOP500 list ranks NEC's Earth Simulator in Yokohama, Japan, as the fastest machine in the world. The Earth Simulator occupies a space the size of four tennis courts and contains 640 processor nodes with 5,120 CPUs. It has achieved a peak performance of 40 teraOPS.

The TOP500 list also cites Los Alamos' Hewlett-Packard Q machine as the second fastest. [4] Q is being assembled and brought online in three stages; when completed, it will be able to operate at 30 trillion operations per second, with 12,288 CPUs, 22 terabytes of memory, and 664 terabytes of global storage. It will be used for nuclear weapons simulations, as well as for climate modeling, cosmological problems, and many other applications.

Also under construction, the specialized IBM-built Blue Gene at Lawrence Livermore will contain some 65,000 processors and is expected to operate at 360 trillion operations per second when complete. Other large computers are found at National Science Foundation-sponsored computing centers such as the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, which is currently installing an Intel Xeon-based Linux cluster from Dell with an expected peak performance of 17.7 Teraflops.

Laboratories from academic institutions to private industry to military and defense-related facilities have all, over time, driven the need for speed and power. Even the National Security Agency, with its own intelligence-gathering laboratories, has to some degree pushed the state of the art over the last few decades. Today, supercomputers are integral parts of the laboratory environment and, given the staggering leaps in available speed and memory, are allowing scientists to pursue problems they were previously unable to tackle.

The future of scientific computing
What the future holds is not crystal clear, but there is little doubt that laboratories will never have enough computing power for scientific applications ranging from nuclear weapons simulations to protein folding to quantum chemistry to countless other research areas.

With bigger computers and more complex software come obstacles. Supercomputers are not infinitely scalable, and the leaps in speed and power seen over the last 20-plus years are unlikely to continue on such a staggering upward path. The difficulty of linking ever more CPUs together raises issues of cost and delivery time versus performance. Brand-new architectures may provide some solutions to these problems.
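One well-known way to express the scaling limit, offered here as a general illustration rather than as an analysis of any machine above, is Amdahl's law: if a fraction s of a program must run serially, no number of processors can deliver a speedup greater than 1/s. The short C sketch below, with an assumed 5-percent serial fraction, tabulates that bound.

#include <stdio.h>

/* Illustrative only. Amdahl's law: speedup(N) = 1 / (s + (1 - s)/N),
   where s is the serial fraction of the work and N is the number of
   processors. */
int main(void) {
    const double s = 0.05;                 /* assumed 5% serial fraction */
    const int procs[] = {1, 8, 64, 512, 4096};

    for (int i = 0; i < 5; i++) {
        int n = procs[i];
        double speedup = 1.0 / (s + (1.0 - s) / n);
        printf("%5d processors -> speedup %6.1f (limit %.0f)\n",
               n, speedup, 1.0 / s);
    }
    return 0;
}

With 5 percent of the work serial, even 4,096 processors yield a speedup of less than 20, one reason laboratories look to new architectures and algorithms rather than processor counts alone.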

Regardless of the forms scientific computing takes in the future, simulations are now inextricable from the research process, because some experiments simply cannot be done hands-on, such as studying how black holes form or collapse. Although simulations cannot always give the definitive answers that real experiments can, they provide an entirely new way of approaching science. Some scientists have even likened computing to a "third" component of the scientific method, alongside theory and experimentation. That third methodology may become more concrete when we enter the petaflop era in computing power.

Most likely, a wide variety of computer architectures lie on the far horizon, where we may witness exciting new directions such as quantum computing or designs inspired by biological systems.

References
1. Donald MacKenzie, "The Influence of Los Alamos and Livermore National Laboratories on the Development of Supercomputing," IEEE Annals of the History of Computing, Vol. 13, No. 2, 1991, 179-201.
2. Peter Wolcott and Mikhail N. Dorojevets, "The Institute of Precision Mechanics and Computer Technology and the El'brus Family of Computers," IEEE Annals of the History of Computing, Vol. 20, No. 1, 1998, 4-14.
3. Gordon Bell and Jim Gray, "What's Next in High-Performance Computing?" Communications of the ACM, Vol. 45, No. 2, February 2002, 91-95.
4. www.Top500.org
5. Images courtesy of Los Alamos National Lab

Anne Fitzpatrick, Ph.D., is an information technology policy specialist and works in the Computer and Computational Sciences Division at Los Alamos National Laboratory. She may be contacted at editor@scimag.com.
