Internet offers unprecedented R&D efficiencies (web exclusive) Kevin McLaughlin Research departments in various fields, especially those in the pharmaceutical manufacturing and chemical industries, are involved in the synthesis of compounds and their subsequent confirmation and purification. Often, the major concern for researchers handling large numbers of samples is how to obtain data as quickly and easily as possible.
Exascale systems will handle both ravenous and daintier appetites Steve Conway “Supercomputer jobs have an insatiable appetite for computing power,” the decades-old saying goes, but even the biggest supercomputers have typically run a mix of large, ravenous “capability” problems and smaller, less-voracious “capacity” jobs. Both types of jobs have contributed to the return on investment for these mega-computers.
Achieving the right balance between graphics and statistical analysis Mark Anawis Andrew Lang once said disparagingly of someone: “He uses statistics as a drunken man uses lampposts — for support rather than for illumination.” This whimsical quote illustrates the goal of statistics, which is to provide insight for further action.
A dramatic architectural shift is upon us Pavan Balaji Today’s leadership-class systems have already crossed the petaflop barrier. As we move farther into the petaflop era and look forward to multi-petaflop and exaflop systems, we notice that modern high-end computing systems are undergoing a dramatic change in their fundamental architectural model.
A complete new skill set may be required to get the best out of HPC systems Ian Bush, Ph.D. Can’t find somebody nice enough to write your code for you, and the performance you are getting from MATLAB, Scilab or Octave is not acceptable? Then it’s the longer route for you! Where does this road take you, and what might you see?
While HPC technology improvements have been spectacular, conventional power-generating capabilities have lagged behind Rob Farber High-performance computing developers are the race car drivers of the supercomputing world. Basically, if a piece of hardware in the supercomputer is not performing useful work, the developer is not doing their job.
Comparing network fabrics, performance and the cost of power Gilad Shainer, Brian Sparks, Tong Liu, and Pak Lui Until recently, when the cost of power was negligible compared to the cost of the system, high-performance computing was all about speed. Today, with the increased cost of power and cooling, power costs rival the system cost. As a result, we have witnessed a paradigm shift in which frequency scaling has been replaced by multi-core technology.
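A rough sketch of why the shift from frequency to core count makes sense, using the standard CMOS dynamic-power relation (a general illustration, not a derivation taken from the article itself):

```latex
% Dynamic power of CMOS logic scales as
P_{\text{dyn}} \approx \alpha\, C\, V^2 f,
% and since supply voltage V must scale roughly with frequency f,
% effectively
P_{\text{dyn}} \propto f^3.
% Illustrative comparison: two cores at 0.8f deliver up to
% 2 x 0.8 = 1.6x the throughput of one core at f, while drawing
% about 2 x 0.8^3 = 1.02x the power: a large performance gain at
% near-constant power, provided the workload parallelizes.
```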
High-performance computing facilities turn to environmental sensors to improve energy efficiency Mike May Cost and carbon footprint are critical concerns, but they aren’t the only issues at play, says Kathy Yelick, associate laboratory director for the computing sciences directorate at Lawrence Berkeley National Laboratory. Performance is suffering, too.
In less than a decade, GPGPUs may provide a pathway to exascale computing Gerhard Wellein To estimate the current and future potential of GPGPUs for real-world code, their pros and cons have to be evaluated carefully and in an unbiased way. SC 2010 featured a session on GPGPU performance, with selected papers addressing these issues in application areas of broad interest.
New GPU-to-GPU communications model increases cluster efficiency Gilad Shainer, Ali Ayoub, Pak Lui, Tong Liu As GPU-based computing becomes popular, there is a need to create direct communications between GPUs using the fastest available interconnect solutions, such as InfiniBand, and to streamline the application modifications needed to utilize GPUs and parallel GPU computations more effectively.
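To make the idea concrete, here is a minimal sketch of what direct GPU-to-GPU communication can look like from the application side, assuming an MPI library built with CUDA-aware support; the API usage below is a generic illustration, not necessarily the model the article describes:

```cuda
/* Minimal sketch, assuming a CUDA-aware MPI build: device pointers are
 * handed straight to MPI, so rank 0 sends a GPU-resident buffer to
 * rank 1 without an explicit staging copy through host memory. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                        /* 1M doubles per message */
    double *d_buf;                                /* device (GPU) buffer    */
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0)        /* send directly from GPU memory ...  */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)   /* ... and receive into GPU memory    */
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With a conventional MPI build, the same transfer would require a cudaMemcpy to a host buffer on the sender and another on the receiver; eliminating those staging copies is what makes direct GPU-to-GPU communication attractive.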
Steve Conway General purpose graphics processing units (GPGPUs) are no longer novelties in high-performance computing (HPC). Systems vendors are lining up to offer GPGPUs as complements to x86 or other base processors.
Predictions for global HPC Andrew Jones As users, we seek the HPC solution that enables us to get the most science done for our budget. “Most” might mean the greatest throughput or the largest simulation. But because in the real world we start with a budget, the speed of the solution is not the most important metric — it is the cost of a given performance level that matters most.
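A quick worked comparison, with purely hypothetical numbers, shows why cost per performance rather than raw speed decides the purchase:

```latex
% Cost-effectiveness = sustained performance / total cost of ownership.
% Hypothetical systems:
%   System A: 150 TF sustained at \$9M
%   System B: 100 TF sustained at \$5M
\frac{P_A}{C_A} = \frac{150}{9} \approx 16.7 \;\frac{\text{TF}}{\$\text{M}},
\qquad
\frac{P_B}{C_B} = \frac{100}{5} = 20 \;\frac{\text{TF}}{\$\text{M}}.
% B is the slower machine, yet the better buy per dollar.
```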
A fresh look at alternative processor strategies Steve Conway At the recent HPC User Forum meeting in Seattle, steering committee chairman Steve Finn of BAE Systems led a panel of experts from AMD, Army Research Laboratory, Cray, ET International, Intel, NVIDIA, and Pacific Northwest National Laboratory (PNNL) through a discussion of alternative processor strategies.
Creating a platform to explore the range of GPUs in scientific computation Mike May GPUs may have been invented to power video games, but today these massively parallel devices are being pressed into service for high-performance computing. With improving programming toolsets, commercial computer vendors have become more confident in selling GPU-accelerated systems, but in the world of science, GPUs are still almost as experimental as the problems they are expected to solve.
Perfect storm of opportunities delivers fresh approaches Rob Farber General-purpose graphics processing unit technology has arrived during a perfect storm of opportunities. Multi-threaded software is now a necessity, as x86 and other conventional processor designs have been forced to adopt a multi-core approach. Parallelism is now the path to performance.
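For a flavor of the data-parallel programming style these GPGPU articles discuss, here is a minimal CUDA sketch in which each of roughly a million lightweight threads handles one array element (a generic illustration, not code from any of the articles above):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* SAXPY (y = a*x + y): each GPU thread updates exactly one element,
 * so a million-iteration loop becomes a million concurrent threads. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float *d_x, *d_y;                              /* device copies */
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    /* Launch a grid of 256-thread blocks covering all n elements. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", y[0]);                   /* expect 4.0 */
    cudaFree(d_x); cudaFree(d_y); free(x); free(y);
    return 0;
}
```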