Judicious selection of a logical sequence of designs leads to a model that pays for itself
Mark A. Anawis
The journalist David Brinkley once said, “A successful man is one who can lay a firm foundation with the bricks others have thrown at him.” In science and engineering, a good experimental design is the key to a successful product, as well as to understanding a process. A good design yields a useful mathematical model that can be used to understand and optimize a process.
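To make the idea concrete, here is a minimal sketch, with made-up response values, of how a two-level factorial design produces such a model: ordinary least squares on coded factor levels gives the intercept, main effects and interaction of a first-order model.

```python
# Hypothetical illustration: fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2
# from a 2^2 full factorial design using ordinary least squares.
# The response values are invented for the example.
import numpy as np

# Coded factor levels (-1/+1) for a two-factor, two-level design.
x1 = np.array([-1.0, 1.0, -1.0, 1.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0])
y = np.array([10.0, 14.0, 12.0, 20.0])  # measured responses (made up)

# Model matrix: intercept, main effects, interaction term.
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # b0, b1, b2, b12 -> 14.0, 3.0, 2.0, 1.0
```

Because the coded design matrix is orthogonal, each coefficient is simply the signed average of the responses, which is what makes factorial designs so efficient at separating effects.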
In the past, I have extolled STATISTICA 10 statistical software for its ability to cover just about any type of test the user may need. Now, the product line consists of bits and pieces of the whole for special applications and/or limited needs. The new Data Miner product contains all of the routine and advanced statistical tests, as well as a number of very sophisticated mining routines.
Randy C. Hice
Web Exclusive
At first, my business trip to Puerto Rico had all of the hallmarks of a roaring disaster. To start, the only airline that could deliver me to the Isla Del Encanto in a reasonable time frame was the redoubtable AirTran Airways, whose precursor company, ValuJet, in 1996 managed to auger a plane into the Everglades with 110 people aboard. So poisoned was the brand name that ValuJet was forced to merge with the smaller AirTran just so passengers would cross the jetway to unknowingly throw the dice one more time.
The future of ELN will require innovative and disruptive thinking
Michael H. Elliott
Technology convergence — it is happening all around us in the consumer world, where smartphones consolidate diaries, calendars, messaging systems and phones into a single platform. In the last two years, mobile devices, such as the iPad, have converged most of the smartphone functionality into a thin, mobile — yet simple to use — computing platform. App stores provide the freedom users need to make individual choices of software solutions. No longer do consumers have to be beholden to monolithic applications that are difficult to install, support and use.
What’s so remarkable about it? Well, for one, it was originally geared to chemical engineers, but is now used by a wide spectrum of chemists. The package is remarkable for its choice of tools, namely design of experiments (DOE) and multivariate statistics. It does a good job of both, and the developers have added many new features (this is NOT your father’s version 9.2!).
A look at real GPU options and some issues that still need to be addressed
Jacques du Toit
While the cost of high performance computing (HPC) has been falling steadily in recent years, it may still put some people off. The advent of general purpose graphics processing units (GPGPU) has both accelerated the cost reduction and improved the energy efficiency of some HPC installations. The experiences of those who have written GPGPU algorithms, and who know HPC systems (some with GPGPU capability), may help you decide whether this technology is right for you.
Adding three new steps allows GPU and MapReduce to work together
Mike Martin
A modified version of MapReduce — Google’s patented program for distributed and cluster computing — harnesses the power of graphics processing units (GPU) for large-scale, high-performance applications, claim University of California, Davis computer science researchers. In benchmark performance tests, GPMapReduce increased both speed and efficiency on a GPU cluster, explained UC Davis graduate student Jeff Stuart, who with electrical and computer engineering professor John Owens developed the new approach.
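For readers unfamiliar with the programming model being adapted, here is a minimal CPU-side sketch of classic MapReduce (word count). This is illustrative only — it is not the UC Davis GPU implementation, just the map, shuffle and reduce phases that any MapReduce variant builds on.

```python
# Minimal word-count sketch of the MapReduce model (illustrative only,
# not the UC Davis GPU implementation).
from collections import defaultdict

def map_phase(docs):
    # Each mapper emits (key, 1) pairs, one per word.
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Group values by key: the step between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Each reducer sums the counts for one key.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["to be or not to be"])))
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

On a GPU cluster, the challenge is mapping these phases onto thousands of lightweight threads and moving intermediate pairs between devices, which is where additional coordination steps become necessary.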
Multiple GPU and hybrid CPU+GPU performance is heavily dependent upon vendor implementation of the PCIe bus
Rob Farber
GPU technology provides orders-of-magnitude speedups with a single GPU over a conventional processor. Plugging two or four GPUs into a workstation or computational node can double or quadruple the performance of computational applications and games. Even more performance can be achieved by utilizing the multicore capability of the host processor in concert with the GPUs in a system.
General-purpose computing on graphics processing units — a brief history
William L. Weaver, Ph.D.
In the mid-1970s, filmmaker George Lucas sought to capture his vision for a space opera called The Star Wars on motion picture film. The difficulty was that the technology did not yet exist to create the vast special-effects sequences that were required to tell an ancient story set in a galaxy far, far away. To solve this problem, Lucas acquired space in a vacant warehouse located next to the Van Nuys Airport near Los Angeles, CA, and assembled an interdisciplinary team of special effects artists, model makers and engineers that later became known as Industrial Light and Magic (ILM).
What a difference two years can make in the fast-paced world of HPC technology adoption — a market considerably less risk-averse than its mainstream IT counterpart. IDC’s 2008 worldwide study on HPC processors revealed that nine percent of HPC sites were using some form of accelerator technology in their installed systems. GPGPUs (henceforth to be called GPUs) shared the accelerator habitat back then with FPGAs, Cell processors and...
Tom Murphy isn’t one of those teachers who think the best way to get students to learn is to pack their heads full of ideas and concepts. Instead, he’d rather have his students figure out the best ways to pack powerful computing systems into briefcases, suitcases and small shipping boxes. Murphy has come up with a number of innovative approaches to teaching his students about high performance computing.
Central Mexico has been a hub of culture and commerce for as long as humans have gathered there. As early as A.D. 750, this region was inhabited by over 100,000 people, and was known as the “Place of the Gods.” Today, Mexico City is the growth center of high performance computing in Mexico and greater Latin America.
As science becomes more data-intensive, whether due to massive amounts of data collected by experimental facilities or increasingly detailed simulations on supercomputers, that same research is in turn increasingly reliant on networking. For most of its 25 years, the Department of Energy’s Energy Sciences Network (ESnet) has been critical in supporting DOE’s research missions.
Last June, in the midst of a nation reeling from the most devastating natural disaster in its nearly 3,000-year history, the high performance computing (HPC) industry quaked in its own surprise with the debut of the newest leader on the Top 500 list of the world's fastest supercomputers — the “K computer” at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan.
Kelly Gaither is a major driving force in HPC visualization, including the development of “superdisplays,” large tiled visualization walls built to handle large data sets on parallel systems. As Director of Visualization at the Texas Advanced Computing Center (TACC), she currently hosts one of the world's largest scientific visualization (“SciVis”) systems.