Partnering with organizations around the world on modernization of public domain HPC code
Progress toward solving “Grand Challenge” scientific and engineering problems — climate modeling, disease research, energy discovery, material stress analysis — will have a profound impact on the future of human life. Increasingly, this research is carried out on parallel supercomputers, whose power races ahead, unabated, year after year.
In fact, the high-performance computing (HPC) industry has set its sights on achieving exascale levels of computational power by the end of the decade. That’s a quintillion (a 1 followed by 18 zeros, or 10^18) computations per second.
Parallel systems are composed of thousands (soon to be millions) of powerful processors that break a big problem into smaller parts, work on the parts in parallel, and then combine the results into a single answer. Developing more powerful parallel supercomputers means that more formulas and simulations can be run in less time, that complex models can be viewed and analyzed at higher resolutions, and that scientific and engineering problems can be attacked more effectively.
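The break-apart, compute-in-parallel, combine pattern described above can be sketched in a few lines of standard-library Python. All names here are illustrative, and worker threads stand in for the processors of a real parallel machine:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """Work on one small part of the problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, nworkers=4):
    # 1. Break the big problem into smaller parts.
    size = max(1, len(data) // nworkers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # 2. Work on the parts in parallel.
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        partials = list(pool.map(partial_sum_of_squares, chunks))
    # 3. Combine the partial results into a single answer.
    return sum(partials)
```

A real supercomputer distributes the chunks across thousands of processors (for example, MPI ranks) rather than threads in one process, but the scatter, compute, gather shape of the program is the same.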
But like a Formula One race car stuck in a traffic jam, HPC hardware performance is frequently hampered by HPC software. This is because some of the most widely used application codes have not been updated for years, if ever, leaving them unable to leverage advances in parallel systems. As hardware power moves toward exascale, the imbalance between hardware and software will only get worse.
The problem of updating essential scientific applications goes by many names — code modernization, refactoring, vectorization, parallelization. Whatever the label, the need is the same: algorithms and software that can efficiently use massive numbers of processors simultaneously, and the reprogramming of existing codes to increase their parallelism and scalability. This is complicated, time-consuming work that commonly slips through the budgetary cracks of academic institutions and government organizations.
To address this problem, Intel has stepped outside of its role as a developer of HPC hardware to partner with supercomputing organizations around the world on modernization of public domain HPC code. To date, 31 Intel Parallel Computing Centers (IPCC) have begun operation at academic, government and private institutions in the United States, Germany, U.K., Finland, Italy, France, India, Korea, Russia, Brazil, Ireland, Japan and other countries.
As public domain software is updated by the IPCCs, the enhanced code will be available to all other scientists and engineers using that code.
“Reducing simulation times from weeks to days, or days to hours, dramatically improves the opportunity to find solutions to society’s biggest challenges,” said Bob Burroughs, Director, Technical Computing Ecosystem Enabling at Intel. “Optimizing these essential software programs to take advantage of the parallelism available in current processors and coprocessors will accelerate scientific discovery and enable better engineering — that’s the mission of this program.”
The IPCC program helps fund the work of software engineers dedicated to modernizing public domain code used throughout the HPC community. Grants may include funding for personnel — typically post-doctoral and graduate students — hardware, software tools and training.
“We’ve reached a limit in research computing with high-performance computers, and we’re not going to be able to advance our research in the way we need to if we don’t start using the state-of-the-art parallel systems represented by the Intel Xeon Phi coprocessors,” said Steve Tally, Senior Strategist-Science, Technology, Engineering & Math, at Purdue University. “Updating code will let us push forward on frontiers of science that wouldn’t be reachable using more traditional computing architectures.”
Purdue’s Conte supercomputer, the fastest campus supercomputer in the nation, makes use of Intel Xeon Phi coprocessors. The IPCC at Purdue is supported by five graduate students and will optimize the performance of NEMO (NanoElectronics Modeling) simulation software used to analyze how electrons flow through nanoscale devices, such as the next-generation transistors used in smartphones.
IPCCs were selected after Intel issued an RFP late last year and awarded two-year contracts to the centers. Key criteria were the applicants’ domain knowledge and proposed code optimization projects with the broadest possible impact on the HPC community.
“The Parallel Computing Center program enables institutions to work with Intel on the software, algorithms and best practices to enable researchers to use parallel systems more effectively,” said Edmond Chow, Associate Professor, Computational Science and Engineering, Georgia Institute of Technology. “Parallel code modernization liberates the computing power previously unavailable to researchers to do big science. The Intel program is critically important to bringing HPC hardware and software into balance.”
The IPCC at Georgia Tech will develop new parallel algorithms and software for quantum chemistry and biomolecular simulation, including the study of proteins implicated in diseases, such as HIV. Research will target large-scale computer systems using Intel Xeon Processors and Intel Xeon Phi coprocessors. The center also will develop new curricular materials to educate computer scientists on the use of parallel computing for scientific applications.
The IPCC program supports open computing standards with the goal of creating robust, collaborative HPC ecosystems in which advances in software performance are widely shared, so that the benefits of software modernization reach the broadest possible number of users.
“Some code modernization work is somewhat straightforward, yet can yield tremendous gains,” said James Reinders, Director, Chief Evangelist, Intel Software. “For example, programming a large number of mathematical calculations is a matter of looping over data elements, assigning a matrix of calculations across the processors within the parallel system. More complicated is an algorithmic problem: the programmer has to reimagine the problem so that it can be done in parallel. The programmer also needs to document the optimization work so that similar improvements can be made in the future by programmers and engineers using the software.”
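The two cases Reinders describes can be illustrated with a small, hypothetical Python sketch. The first is a loop whose iterations are independent, which maps directly onto parallel hardware. The second is a loop with a carried dependency — a running reduction — which the programmer must reimagine (here as a pairwise tree, one common restructuring) before its steps become independent:

```python
def independent_loop(xs):
    # Each iteration touches only its own element, so the iterations
    # can be assigned across processors with no restructuring.
    return [x * x for x in xs]

def tree_reduce(xs):
    # Serial form: acc = 0; for x in xs: acc += x
    # Each step depends on the previous one, so the loop cannot be
    # split as-is. The tree form combines pairs instead: every pair
    # at a level is independent, so each level can run in parallel.
    vals = list(xs)
    while len(vals) > 1:
        combined = [a + b for a, b in zip(vals[0::2], vals[1::2])]
        if len(vals) % 2:          # carry an unpaired element forward
            combined.append(vals[-1])
        vals = combined
    return vals[0] if vals else 0
```

Both functions are illustrative only; a production code would express the same structure with vector instructions and a parallel runtime rather than Python lists, but the restructuring step is the heart of the modernization work the article describes.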
In Boulder, CO, the National Center for Atmospheric Research (NCAR) and the University of Colorado Boulder formed an IPCC to modernize climate and weather software, whose large-scale models have an insatiable appetite for computing power.
“The computational algorithms in our models must keep pace with new architectures in order to run efficiently and provide scientists with an enhanced understanding of climate and weather phenomena,” said Rich Loft, Director of Technology Development for the Computational and Information Systems Laboratory at NCAR. “The IPCCs’ work is critically important to the effective use of Intel's next generation of highly parallel processors and supercomputers in atmospheric science.”
The IPCCs are already achieving substantial software performance improvements, according to Burroughs, by focusing on the basics of preparing code for parallel platforms. “We’re encouraged by the modernization work that’s already been accomplished, and the program has only begun. Even if the centers can grab only the low-hanging fruit of code modernization — never mind the more complex work — they will deliver tremendous benefit for the HPC community.”
On May 28, 2014, Intel released a second RFP to expand this already successful program and expects to add several additional Parallel Computing Centers each quarter as it continues its effort to lead the drive for code modernization.
Additional information on Intel’s Parallel Computing Centers can be found at software.intel.com/Ipcc
Doug Black is a communications professional who has been involved in the HPC industry for close to 20 years. He may be reached at editor@ScientificComputing.com.