A look at real GPU options and some issues that still need to be addressed Jacques du Toit While the cost of high performance computing (HPC) has fallen steadily in recent years, it may still put some people off. The advent of general-purpose graphics processing units (GPGPU) has both accelerated the cost reduction and improved the energy efficiency of some HPC installations. The experiences of those who have written GPGPU algorithms, and who know HPC systems (some with GPGPU capability), may help you decide whether this technology is right for you.
Adding three new steps allows GPU and MapReduce to work together Mike Martin A modified version of MapReduce — Google’s patented program for distributed and cluster computing — harnesses the power of graphics processing units (GPU) for large-scale, high-performance applications, claim University of California, Davis computer science researchers. In benchmark performance tests, GPMapReduce increased both speed and efficiency on a GPU cluster, explained UC Davis graduate student Jeff Stuart, who with electrical and computer engineering professor John Owens developed the new approach.
Multiple GPU and hybrid CPU+GPU performance is heavily dependent upon vendor implementation of the PCIe bus Rob Farber A single GPU can deliver order-of-magnitude speedups over a conventional processor. Plugging two or four GPUs into a workstation or computational node can double or quadruple the performance of computational applications and games. Even more performance can be achieved by using the multicore capability of the host processor in concert with the GPUs in the system.
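A common pattern for that hybrid approach is to dedicate one host thread per GPU, so the CPU cores manage the devices (and can do their own work) concurrently. The sketch below is a minimal, hypothetical illustration of that pattern using OpenMP and the CUDA runtime; the kernel, array size, and launch configuration are placeholders and are not taken from Farber's article.

// Minimal hypothetical sketch: one OpenMP host thread per GPU, each thread
// driving its own device. Kernel, data size, and launch shape are
// illustrative placeholders, not taken from the article.
#include <cstdio>
#include <omp.h>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float a)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;                       // trivial per-element work
}

int main()
{
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);                  // how many GPUs are visible
    if (ngpu == 0) { printf("no GPU found\n"); return 1; }

    const int N = 1 << 20;                      // elements per GPU (placeholder)

    #pragma omp parallel num_threads(ngpu)      // one CPU thread per device
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);                     // bind this thread to its GPU

        float *d = NULL;
        cudaMalloc((void **)&d, N * sizeof(float));
        cudaMemset(d, 0, N * sizeof(float));    // stand-in for a PCIe host-to-device copy
        scale<<<(N + 255) / 256, 256>>>(d, N, 2.0f);
        cudaDeviceSynchronize();                // wait for this device to finish
        cudaFree(d);
    }
    printf("work dispatched to %d GPU(s)\n", ngpu);
    return 0;
}

Compiled with something like nvcc -Xcompiler -fopenmp, each thread's kernel launches and PCIe transfers proceed concurrently with the others, which is exactly where the vendor's PCIe bus implementation that Farber discusses becomes the limiting factor.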
General-purpose computing on graphics processing units — a brief history William L. Weaver, Ph.D. In the mid-1970s, filmmaker George Lucas sought to capture his vision for a space opera called The Star Wars on motion picture film. The difficulty was that the technology did not yet exist to create the vast special effects sequences required to tell an ancient story set in a galaxy far, far away. To solve this problem, Lucas acquired space in a vacant warehouse next to the Van Nuys Airport near Los Angeles, CA, and assembled an interdisciplinary team of special effects artists, model makers and engineers that later became known as Industrial Light and Magic (ILM).
While still generally experimental, GPUs have tripled their worldwide footprint in the past two years Steve Conway What a difference two years can make in the fast-paced world of HPC technology adoption — a market considerably less risk-averse than its mainstream IT counterpart. IDC’s 2008 worldwide study on HPC processors revealed that nine percent of HPC sites were using some form of accelerator technology in their installed systems. GPGPUs (henceforth to be called GPUs) shared the accelerator habitat back then with FPGAs, Cell processors and a few rarer species.
Tom Murphy isn’t one of those teachers who thinks the best way to get students to learn is to pack their heads full of ideas and concepts. Instead, he’d rather have his students figure out the best ways to pack powerful computing systems into briefcases, suitcases and small shipping boxes. Murphy has come up with a number of innovative approaches to teaching his students about high performance computing.
Central Mexico has been a hub of culture and commerce for as long as humans have gathered there. As early as A.D. 750, this region was inhabited by over 100,000 people, and was known as the “Place of the Gods.” Today, Mexico City is the growth center of high performance computing in Mexico and greater Latin America.
As science becomes more data-intensive, whether due to massive amounts of data collected by experimental facilities or increasingly detailed simulations on supercomputers, that research is in turn increasingly reliant on networking. For most of its 25 years, the Department of Energy’s Energy Sciences Network has been critical in supporting DOE’s research missions.
Last June, in the midst of a nation reeling from the most devastating natural disasters in its nearly 3,000-year history, the high performance computing (HPC) industry quaked in its own surprise with the debut of the newest leader on the Top 500 list of the world's fastest supercomputers — the “K computer” at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan.
Kelly Gaither is a major driving force in HPC visualization and in the development of large “superdisplays”: tiled visualization walls for working with large data on parallel systems. As Director of Visualization at the Texas Advanced Computing Center (TACC), she currently hosts one of the world's largest scientific visualization “SciVis” systems.
Research Management Bennett Lass Ph.D., PMP Web Exclusive This is the sixth and final article in a series on best practices in Electronic Lab Notebook (ELN) implementation. This article discusses the fifth and last core area: Research Management.
Tackling green data center development challenges Mike Martin On the ground or in the cloud, energy consumption can pose costly dilemmas to data center operators looking to maximize revenue and minimize expense. To keep power costs down and paying clients happy, a three-person international research team has developed — and tested — a straightforward yet novel algorithm that optimizes server operations by balancing power with performance.
Recent research shows HPC sites plan expansion despite growing concerns Steve Conway A decade ago, power and cooling didn’t make it onto the top 10 list of issues HPC data centers said they were facing. Today, power and cooling consistently ranks among data center managers’ top two or three challenges. What’s changed?
Custom cooling distribution unit built on commodity hardware delivers energy and space savings Brent Draney The U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC) is one of the largest facilities in the world devoted to providing computing resources and expertise for basic science research to nearly 4,000 researchers from around the globe. To facilitate this research, the center houses a range of HPC systems — including a new 1,120-node system that serves as a combined high performance computing cluster and a scientific cloud computing testbed. The system was installed last year to replace two existing clusters and to support an American Recovery and Reinvestment Act project, called Magellan, that explores whether a cloud computing model could meet the needs of scientists.
Happy Sithole is pushing research and technology frontiers on behalf of South Africa and the continent as a whole. Happy has been integral to numerous African “firsts,” beginning with the inauguration of South Africa's Centre for High Performance Computing (CHPC) in 2007, which featured the first Top 500 system listing for Africa.