
This year’s finalists have been selected for the ACM Gordon Bell Prize in High Performance Computing, supercomputing’s most prestigious competition. The Gordon Bell Prize recognizes “the extraordinary progress made each year in the innovative application of parallel computing to challenges in science, engineering and large-scale data analytics. Prizes may be awarded for peak performance or special achievements in scalability and time-to-solution on important science and engineering problems.”

Gordon Bell Prize finalists are selected by a committee that includes past Gordon Bell winners, as well as leaders in the field of high performance computing. As the SC15 site explains, “solving an important scientific or engineering problem in HPC is important, but scientific outcomes alone are not sufficient for this prize — finalists are selected from submissions that describe the innovations of the project, detail the performance levels achieved on one or more real-world applications, and outline what the implications of the approach are for the broader HPC community.”

A $10,000 prize will be presented to a single winner during SC15 in Austin, TX. Financial support of the prize is made possible by Gordon Bell, a pioneer in high-performance and parallel computing and past winner of the IEEE Seymour Cray Award for exceptional contributions in the design of several computer systems that changed the world of high performance computing.

The 2014 ACM Gordon Bell Prize for best performance of a high performance application went to “Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer,” from author David E. Shaw and collaborators at D.E. Shaw Research. It is part of the proceedings of SC14 and is available in the ACM Digital Library.

“The task of selecting this year’s finalists was difficult, but rewarding,” notes co-chair of ACM’s Award committee, Cherri M. Pancake of Oregon State University. “Each year, the Bell submissions reflect the very best of what is happening in the high performance computing technical community and the progress that has been made in applying these remarkable computing resources to society’s most challenging problems.”

2015 ACM Gordon Bell Prize in High Performance Computing Finalists

This year’s finalists represent the broad impact the field of high performance computing has across many disciplines of science and engineering. The 2015 Gordon Bell Prize winner will be announced during SC15 on Thursday, November 19.

  1. “Massively Parallel Models of the Human Circulatory System,” with research led by Amanda Randles of Duke University and a team of collaborators from Lawrence Livermore National Laboratory and IBM

Authors

  • Amanda Randles — Duke University

  • Erik W. Draeger — Lawrence Livermore National Laboratory

  • Tomas Oppelstrup — Lawrence Livermore National Laboratory

  • Liam Krauss — Lawrence Livermore National Laboratory

  • John Gunnels — IBM Corporation

Abstract: The potential impact of blood flow simulations on the diagnosis and treatment of patients suffering from vascular disease is tremendous. Empowering models of the full arterial tree can provide insight into diseases such as arterial hypertension and enable the study of the influence of local factors on global hemodynamics. We present a new, highly scalable implementation of the Lattice Boltzmann method, which addresses key challenges such as multiscale coupling, limited memory capacity and bandwidth, and robust load balancing in complex geometries. We demonstrate the strong scaling of a three-dimensional, high-resolution simulation of hemodynamics in the systemic arterial tree on 1,572,864 cores of Blue Gene/Q. Faster calculation of flow in full arterial networks enables unprecedented risk stratification on a per-patient basis. In pursuit of this goal, we have introduced computational advances that significantly reduce time-to-solution for biofluidic simulations.
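The team’s production solver is far more sophisticated (multiscale coupling, patient-specific geometries, load balancing), but the core Lattice Boltzmann update is strikingly simple and local, which is what makes the method so amenable to strong scaling. The sketch below is a textbook D2Q9 collide-and-stream step in NumPy, purely illustrative and not taken from the paper:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights
V = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = 3.0 * (V[:, 0, None, None] * ux + V[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * W[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream update on a fully periodic grid."""
    rho = f.sum(axis=0)                                  # local density
    ux = (f * V[:, 0, None, None]).sum(axis=0) / rho     # local velocity
    uy = (f * V[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau         # collision: purely local
    for i, (cx, cy) in enumerate(V):                     # streaming: nearest neighbor
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# demo: small density perturbation on a 16x16 grid; mass is conserved exactly
rng = np.random.default_rng(0)
rho0 = 1.0 + 0.01 * rng.random((16, 16))
f = equilibrium(rho0, np.zeros((16, 16)), np.zeros((16, 16)))
m0 = f.sum()
for _ in range(10):
    f = lbm_step(f)
assert abs(f.sum() - m0) < 1e-9
```

Because the collision is purely local and streaming touches only nearest neighbors, the method needs only thin halo exchanges between subdomains, the property the abstract’s strong-scaling result exploits.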

SC15 session details: Wednesday, November 18, 2015, 10:30 – 11:00 a.m., Room 17AB

  2. “The In-Silico Lab-On-A-Chip: Petascale And High-Throughput Simulations Of Microfluidics At Cell Resolution,” led by Diego Rossinelli of ETH Zurich and an international team of researchers from Brown University, the University of Italian Switzerland, the National Research Council of Italy, NVIDIA Corporation, and Oak Ridge National Laboratory

Authors

  • Diego Rossinelli — ETH Zurich

  • Yu-Hang Tang — Brown University

  • Kirill Lykov — University of Italian Switzerland

  • Dmitry Alexeev — ETH Zurich

  • Massimo Bernaschi — National Research Council of Italy

  • Panagiotis Hajidoukas — ETH Zurich

  • Mauro Bisson — NVIDIA Corporation

  • Wayne Joubert — Oak Ridge National Laboratory

  • Christian Conti — ETH Zurich

  • George Karniadakis — Brown University

  • Massimiliano Fatica — NVIDIA Corporation

  • Igor Pivkin — University of Italian Switzerland

  • Petros Koumoutsakos — ETH Zurich

Abstract: We present simulations of blood and cancer cell separation in complex microfluidic channels with subcellular resolution, demonstrating unprecedented time-to-solution and performing at 42 percent of the nominal 39.4 Peta-instructions/s on the 18,688 nodes of the Titan supercomputer.

These simulations outperform by one to three orders of magnitude the current state-of-the-art in terms of numbers of cells and computational elements. We demonstrate an improvement of up to 30X over competing state-of-the-art solvers, thus setting the frontier of particle-based simulations.

The present in silico lab-on-a-chip provides submicron resolution while accessing time scales relevant to engineering designs. The simulation setup follows the realism of the conditions and the geometric complexity of microfluidic experiments, and our results confirm the experimental findings. These simulations redefine the role of computational science for the development of microfluidic devices — a technology that is becoming as important to medicine as integrated circuits have been to computers.
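Particle-based blood-flow codes of this kind are commonly built on dissipative particle dynamics (DPD), whose conservative force is a soft, short-ranged pairwise repulsion. As a hedged illustration only (the paper’s actual GPU kernels are not shown here), this is an O(N²) reference implementation of that force with periodic minimum-image boundaries; production codes replace the inner loop with cell lists and GPU kernels:

```python
import numpy as np

def dpd_conservative_forces(pos, box, a=25.0, rc=1.0):
    """Pairwise conservative DPD force F_ij = a (1 - r/rc) r_hat for r < rc,
    with minimum-image periodic boundaries. O(N^2) reference version."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos                  # displacement from every particle to i
        d -= box * np.round(d / box)      # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        mask = (r < rc) & (r > 1e-12)     # inside cutoff, excluding self
        w = a * (1.0 - r[mask] / rc) / r[mask]
        forces[i] = (w[:, None] * d[mask]).sum(axis=0)
    return forces

# two particles 0.5 apart repel along x with |F| = a * (1 - r/rc) = 12.5
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
F = dpd_conservative_forces(pos, box=10.0)
assert np.allclose(F[0], [-12.5, 0.0, 0.0])
assert np.allclose(F.sum(axis=0), 0.0)    # Newton's third law
```

The force is pairwise and antisymmetric, so momentum is conserved exactly; domain decomposition then requires communicating only particles near subdomain boundaries.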

SC15 session details: Wednesday, November 18, 2015, 11:00 – 11:30 a.m., Room 17AB

  3. “Pushing Back the Limit of Ab-initio Quantum Transport Simulations on Hybrid Supercomputers,” led by Mauro Calderara with a team from ETH Zurich

Authors

  • Mauro Calderara — ETH Zurich

  • Sascha Brueck — ETH Zurich

  • Andreas Pedersen — ETH Zurich

  • Mohammad Hossein Bani-Hashemian — ETH Zurich

  • Joost VandeVondele — ETH Zurich

  • Mathieu Luisier — ETH Zurich

Abstract: The capabilities of CP2K, a density-functional theory package, and OMEN, a nano-device simulator, are combined to study transport phenomena from first principles in unprecedentedly large nanostructures. Based on the Hamiltonian and overlap matrices generated by CP2K for a given system, OMEN solves the Schrödinger equation with open boundary conditions (OBCs) for all possible electron momenta and energies. To accelerate this core operation, a robust algorithm called SplitSolve has been developed. It treats the OBCs on CPUs and the Schrödinger equation on GPUs simultaneously, taking advantage of hybrid nodes. Our key achievements on the Cray XK7 Titan are (i) a reduction in time-to-solution by more than one order of magnitude compared to standard methods, enabling the simulation of structures with more than 50,000 atoms, (ii) a parallel efficiency of 97 percent when scaling from 756 up to 18,564 nodes, and (iii) a sustained performance of 14.1 DP-PFlop/s.
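Solving the Schrödinger equation with open boundary conditions is, in Green’s-function form, a linear solve in which each semi-infinite lead enters through a self-energy added to the device Hamiltonian. A textbook one-dimensional tight-binding illustration of that structure follows; it is not OMEN’s algorithm, and all names and parameters are illustrative:

```python
import numpy as np

def surface_gf(E, t=1.0, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding
    lead (on-site energy 0, hopping t): closed form of g = 1 / (E - t^2 g)."""
    z = E + 1j * eta
    sq = np.sqrt(z * z - 4 * t * t + 0j)
    if sq.imag < 0:                    # pick the retarded branch
        sq = -sq
    return (z - sq) / (2 * t * t)

def transmission(E, H, t=1.0, eta=1e-9):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a
    device Hamiltonian H coupled to two semi-infinite 1D leads."""
    n = H.shape[0]
    g = surface_gf(E, t, eta)
    sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = t * t * g
    sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = t * t * g
    # the open boundaries appear only as self-energies in this linear solve
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

# perfect 4-site chain between matched leads: transmission is 1 in the band
t = 1.0
H = np.diag([-t] * 3, 1) + np.diag([-t] * 3, -1)
assert abs(transmission(0.0, H, t) - 1.0) < 1e-6
```

The split visible here, a small dense boundary computation (the self-energies) feeding a large linear solve, mirrors the division of labor the abstract describes: boundary conditions on CPUs, the bulk Schrödinger solve on GPUs.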

SC15 session details: Wednesday, November 18, 2015, 11:30 a.m. – 12:00 p.m., Room 17AB

  4. “Implicit Nonlinear Wave Simulation with 1.08T DOF and 0.270T Unstructured Finite Elements to Enhance Comprehensive Earthquake Simulation,” led by a team that includes the University of Tokyo, RIKEN, Niigata University, the University of Tsukuba, and the Research Organization for Information Science and Technology

Chair and authors

  • Subhash Saini (Chair) — NASA Ames Research Center

  • Tsuyoshi Ichimura — University of Tokyo

  • Kohei Fujita — RIKEN

  • Pher Errol Balde Quinay — Niigata University

  • Lalith Maddegedara — University of Tokyo

  • Muneo Hori — University of Tokyo

  • Seizo Tanaka — University of Tsukuba

  • Yoshihisa Shizawa — Research Organization for Information Science and Technology

  • Hiroshi Kobayashi — Research Organization for Information Science and Technology

  • Kazuo Minami — RIKEN

Abstract: This paper presents a new heroic computing method for unstructured, low-order, finite-element, implicit nonlinear wave simulation: 1.97 PFLOPS (18.6 percent of peak) was attained on the full K computer when solving a 1.08T-degrees-of-freedom (DOF) and 0.270T-element problem. This is 40.1 times more DOF and elements, a 2.68-fold improvement in peak performance, and 3.67 times faster in time-to-solution compared to the SC14 Gordon Bell finalist's state-of-the-art simulation. The method scales up to the full K computer with 663,552 CPU cores with 96.6 percent sizeup efficiency, enabling solving of a 1.08T-DOF problem in 29.7 s per time step. Using such heroic computing, we solved a practical problem involving an area 23.7 times larger than the state-of-the-art, and conducted a comprehensive earthquake simulation by combining earthquake wave propagation analysis and evacuation analysis. Application at such scale is a groundbreaking accomplishment and is expected to change the quality of earthquake disaster estimation and contribute to society.
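An implicit time step means solving an enormous sparse linear system at every step; the team’s solver on the K computer is a heavily tuned iterative method, but the skeleton of such Krylov solvers fits in a few lines. Below is a plain, unpreconditioned conjugate-gradient sketch (illustrative only, not the paper’s solver), applied to the kind of symmetric positive-definite system an implicit wave step produces:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A. Only matrix-vector
    products are needed, which is what makes Krylov solvers amenable to
    distributed, matrix-free finite-element codes."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new A-conjugate search direction
        rs = rs_new
    return x

# 1D Laplacian: a toy stand-in for the stiffness matrix of an implicit step
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```

At scale, the two reductions per iteration (`p @ Ap` and `r @ r`) become global all-reduces, and the matrix-vector product becomes halo exchange plus local element kernels; both costs are what scalability work like this paper’s must tame.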

SC15 session details: Thursday, November 19, 2015, 10:30 – 11:00 a.m., Room 17AB

  5. “An Extreme-Scale Implicit Solver for Complex PDEs: Highly Heterogeneous Flow in Earth’s Mantle,” led by Johann Rudi of the University of Texas at Austin and a team that includes IBM, the Courant Institute of Mathematical Sciences, the University of Texas at Austin, and the California Institute of Technology

Chair and authors

  • Subhash Saini (Chair) — NASA Ames Research Center

  • Johann Rudi — The University of Texas at Austin

  • Cristiano I. Malossi — IBM Corporation

  • Tobin Isaac — The University of Texas at Austin

  • Georg Stadler — Courant Institute of Mathematical Sciences

  • Michael Gurnis — California Institute of Technology

  • Peter W. J. Staar — IBM Corporation

  • Yves Ineichen — IBM Corporation

  • Costas Bekas — IBM Corporation

  • Alessandro Curioni — IBM Corporation

  • Omar Ghattas — The University of Texas at Austin

Abstract: Mantle convection is the fundamental physical process within Earth’s interior responsible for the thermal and geological evolution of the planet, including plate tectonics. The mantle is modeled as a viscous, incompressible, non-Newtonian fluid. The wide range of spatial scales, extreme variability and anisotropy in material properties, and severely nonlinear rheology have made global mantle convection modeling with realistic parameters prohibitive. Here we present a new implicit solver that exhibits optimal algorithmic performance and is capable of extreme scaling for hard PDE problems, such as mantle convection. To maximize accuracy and minimize runtime, the solver incorporates a number of advances, including aggressive multi-octree adaptivity, mixed continuous-discontinuous discretization, arbitrarily-high-order accuracy, hybrid spectral/geometric/algebraic multigrid and novel Schur-complement preconditioning. These features present enormous challenges for extreme scalability. We demonstrate that — contrary to conventional wisdom — algorithmically optimal implicit solvers can be designed that scale out to 0.5 million cores for severely nonlinear, ill-conditioned, heterogeneous and localized PDEs.
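The paper’s hybrid spectral/geometric/algebraic multigrid is far more elaborate, but the core idea of any multigrid, a cheap smoother plus a coarse-grid correction of the remaining smooth error, can be sketched on a 1D Poisson model problem. The V-cycle below is illustrative only (weighted Jacobi smoothing, full-weighting restriction, linear interpolation), not the authors’ solver:

```python
import numpy as np

def vcycle(u, f, nu=3):
    """One geometric multigrid V-cycle for -u'' = f on [0, 1] with
    homogeneous Dirichlet boundaries; grids have n = 2^k intervals.
    Arrays include the two boundary points."""
    n = len(u) - 1
    h2 = (1.0 / n) ** 2
    if n == 2:                       # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * h2 * f[1]
        return u
    for _ in range(nu):              # pre-smoothing: weighted Jacobi (omega = 1/2)
        u[1:-1] += 0.5 * (0.5 * (u[:-2] + u[2:]) + 0.5 * h2 * f[1:-1] - u[1:-1])
    r = np.zeros_like(u)             # residual r = f - A u
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2
    rc = np.zeros(n // 2 + 1)        # restriction: full weighting
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros(n // 2 + 1), rc, nu)   # coarse-grid error correction
    e = np.zeros_like(u)             # prolongation: linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    for _ in range(nu):              # post-smoothing
        u[1:-1] += 0.5 * (0.5 * (u[:-2] + u[2:]) + 0.5 * h2 * f[1:-1] - u[1:-1])
    return u

# model problem: exact solution u = sin(pi x), so f = pi^2 sin(pi x)
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(25):
    u = vcycle(u, f)
assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3
```

Multigrid’s appeal for work like this is its optimal complexity: each V-cycle costs O(n) and reduces the error by a grid-independent factor, which is what makes implicit solves feasible at half a million cores.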

SC15 session details: Thursday, November 19, 2015, 11:00 – 11:30 a.m., Room 17AB

About SC15

The SC15 Web site states: “HPC is transforming our everyday lives, as well as our not-so-ordinary ones. From nanomaterials to jet aircrafts, from medical treatments to disaster preparedness, and even the way we wash our clothes; the HPC community has transformed the world in multifaceted ways.

“For its 27th anniversary, the annual SC Conference will return to Austin, TX, a city that continues to develop new ways of engaging our senses and incubating technology of all types, including supercomputing.  SC15 will yet again provide a unique venue for spotlighting HPC and scientific applications, and innovations from around the world.

“SC15 will bring together the international supercomputing community — an unparalleled ensemble of scientists, engineers, researchers, educators, programmers, system administrators and developers — for an exceptional program of technical papers, informative tutorials, research posters and Birds-of-a-Feather (BOF) sessions. The SC15 Exhibition Hall will feature exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies making their debut in Austin. No conference is better poised to demonstrate how HPC can transform both the everyday and the incredible.”
