Parallel Computing
The Lead

Artist’s impression of a proton depicting three interacting valence quarks inside. Courtesy of Jefferson Lab

HPC Community Experts Weigh in on Code Modernization

December 17, 2014 4:33 pm | by Doug Black | Articles | Comments

Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering the software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.

Mapping the "Big Bang" of Bird Evolution

December 12, 2014 6:04 pm | by Kelly Rae Chi, Duke University | News | Comments

The genomes of modern birds tell a story of how they emerged and evolved after the mass...

A Supermassive Black Hole Comes to the Big Screen

December 11, 2014 3:34 pm | by University of Arizona | News | Comments

What does a black hole look like up close? As the sci-fi movie Interstellar wows...

Rattled Atoms Mimic High-temperature Superconductivity

December 5, 2014 2:40 pm | by SLAC National Accelerator Laboratory | News | Comments

An experiment at the Department of Energy’s SLAC National Accelerator Laboratory provided the...

ClusterStor Engineered Solution for Lustre

November 25, 2014 11:28 am | by Seagate Technology | Product Releases | Comments

ClusterStor Engineered Solution for Lustre offers improved metadata performance and scalability by implementing the Distributed Namespace (DNE) features in the Lustre 2.5 parallel file system. In addition to the Base Metadata Management Server capability, ClusterStor users have the option to add up to 16 Lustre Distributed Namespace metadata servers per single file system, providing client metadata performance improvement of up to 700 percent.
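The scaling mechanism can be sketched in miniature. The toy Python below illustrates hash-based directory placement in general, not ClusterStor's actual implementation (all names and the hashing scheme are hypothetical): spreading directories across 16 metadata servers turns a single metadata hot spot into a balanced fan-out.

```python
from hashlib import blake2b
from collections import Counter

N_MDS = 16  # distributed namespace metadata servers in one file system

def mds_for(path: str) -> int:
    """Map a directory path to one metadata server via a stable hash."""
    digest = blake2b(path.encode(), digest_size=4).digest()
    return int.from_bytes(digest, "big") % N_MDS

# Many client directories fan out across servers instead of queuing
# behind one base metadata server.
dirs = [f"/project/run{i:04d}" for i in range(10_000)]
load = Counter(mds_for(d) for d in dirs)
```

Because the hash is deterministic, every client resolves a given directory to the same server without coordination; that independence is what lets metadata throughput scale with server count.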

HPC Innovation Excellence Award: GIS Federal

November 17, 2014 6:35 pm | Award Winners

For the US Army, and DoD and intelligence community as a whole, GIS Federal developed an innovative approach to quickly filter, analyze, and visualize big data from hundreds of data providers with a particular emphasis on geospatial data.

The Renaissance Computing Institute’s high performance computing cluster quickly generates better intelligence about coastal hazards and risk. Courtesy of RENCI

HPC Matters to our Quality of Life and Prosperity

November 11, 2014 2:22 pm | by Don Johnston | Articles | Comments

The complexity of high-end computing technology makes it largely invisible to the public. HPC simply lacks the Sputnik sex appeal of the space race, to which current global competition in supercomputing is often compared. Rather, it is seen as the exclusive realm of academia and national labs. Yet, its impact reaches into almost every aspect of daily life. Organizers of SC14 had this reach in mind when selecting the “HPC Matters” theme.

The IPCC at Lawrence Berkeley National Laboratory is performing code modernization work on NWChem.

A Focus on Code Modernization: Observing Year One of the Intel Parallel Computing Centers

November 10, 2014 11:11 am | by Doug Black | Articles | Comments

One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical HPC computing challenges being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public domain scientific applications, to prepare these critically important codes for multi-core...

The 15 boxes in this image show the simulated intensity of spin excitations in 15 iron-based materials, including iron compounds that are high-temperature superconductors (images d–h). The x axis shows the momentum of the spin excitation in selected locations.

Spin Dynamics: Computational Model Predicts Superconductivity

November 4, 2014 1:52 pm | by Katie Elyce Jones, Oak Ridge National Laboratory | News | Comments

Researchers studying iron-based superconductors are combining novel electronic structure algorithms with the high-performance computing power of the Department of Energy’s Titan supercomputer at Oak Ridge National Laboratory to predict spin dynamics, or the ways electrons orient and correlate their spins in a material.

Single Elimination Tournament Raises Awareness of Parallelization’s Importance

October 30, 2014 1:03 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

Artist’s impression of an electron wave function (blue), confined in a crystal of silicon-28 atoms (black), controlled by a nanofabricated metal gate (silver). Courtesy of Dr. Stephanie Simmons/UNSW

New Records: Qubits Process Quantum Data with More than 99% Accuracy

October 14, 2014 4:04 pm | by UNSW Australia | News | Comments

Two research teams have found distinct solutions to a critical challenge that has held back the realization of super powerful quantum computers. The teams, working in the same laboratories at UNSW Australia, created two types of quantum bits, or "qubits" — the building blocks for quantum computers — that each process quantum data with an accuracy above 99 percent.

2015 Rice Oil & Gas High Performance Computing Workshop

October 13, 2014 2:45 pm | by Rice University | Events

The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

High Performance Parallelism Pearls: A Teaching Juggernaut

October 13, 2014 9:52 am | by Rob Farber | Blogs | Comments

High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, provide tools and techniques to adapt legacy codes, and increase application performance on Intel Xeon processors.

GS7K Parallel File System Appliance

October 2, 2014 3:01 pm | Datadirect Networks | Product Releases | Comments

The GS7K appliance is a scale-out parallel file system solution complete with enterprise-class features, NAS access and cloud tiering capabilities. The system includes fully integrated enterprise data management and protection capabilities in a simple, all-in-one, scale-out appliance.

Mathematica Online

September 17, 2014 1:59 pm | Wolfram Research, Inc. | Product Releases | Comments

Mathematica Online operates completely in the cloud and is accessible through any modern Web browser, with no installation or configuration required, and is completely interoperable with Mathematica on the desktop. Users can simply point a Web browser at Mathematica Online, log in, and immediately start using the Mathematica notebook interface.

The team's solution is to develop new algorithms that divide the data among the processors, allowing each to handle a certain region, and then stitch the image back together at the end.

Multicore Computing helps Fight Lung Cancer, Speeds CT Image Processing

September 12, 2014 3:08 pm | by University of Michigan | News | Comments

A new $1.9 million study at the University of Michigan seeks to make low-dose computed tomography scans a viable screening technique by speeding up the image reconstruction from half an hour or more to just five minutes. The advance could be particularly important for fighting lung cancers, as symptoms often appear too late for effective treatment.
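The divide-and-stitch idea, splitting the image among processors so each handles its own region before the pieces are reassembled, can be sketched with Python's standard library. This is a generic domain-decomposition illustration; the function names and the per-pixel windowing step are hypothetical stand-ins, not the Michigan team's actual reconstruction code.

```python
from concurrent.futures import ThreadPoolExecutor

def window(pixel, lo=0.0, hi=1.0):
    """Clamp one raw intensity into a display window (independent per pixel)."""
    return min(max(pixel, lo), hi)

def process_region(rows):
    """One worker's share: a contiguous block of image rows."""
    return [[window(p) for p in row] for row in rows]

def parallel_reconstruct(image, n_workers=4):
    # Divide the rows of the image among the workers ...
    chunk = max(1, (len(image) + n_workers - 1) // n_workers)
    regions = [image[i:i + chunk] for i in range(0, len(image), chunk)]
    # ... let each worker handle its own region in parallel ...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        processed = pool.map(process_region, regions)  # preserves order
    # ... and stitch the image back together at the end.
    return [row for region in processed for row in region]
```

Real reconstruction regions need boundary (halo) data exchanged at the seams; this sketch deliberately uses a per-pixel operation so the stitched result matches a sequential pass exactly.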

Initial research focused on optimization of the PMEMD classical molecular dynamics code, part of the widely used AMBER Molecular Dynamics software, on multi-core Intel Xeon processors and “manycore” Intel Xeon Phi processors.

SDSC Joins Intel Parallel Computing Centers Program with Focus on Molecular Dynamics, Neuroscience and Life Sciences

September 12, 2014 2:44 pm | by San Diego Supercomputer Center | News | Comments

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, is working with semiconductor chipmaker Intel to further optimize research software to improve the parallelism, efficiency, and scalability of widely used molecular and neurological simulation technologies.

Cray CS-Storm Accelerator-Optimized Cluster Supercomputer

September 8, 2014 10:58 am | Cray Inc. | Product Releases | Comments

As part of the Cray CS cluster supercomputer series, Cray offers the CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, designed for massively parallel computing workloads.

Simulating magnetized plasma devices requires multiple particle interaction models and highly accurate, self-consistent particle trajectory modelling in combined magnetic and space-charge modified electric fields.

3D Space Charge Parallel Processing Module

August 27, 2014 3:03 pm | Cobham Technical Services | Product Releases | Comments

The 3D Space Charge module uses code that is optimized for the shared memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally-intensive analysis tasks can be greatly accelerated by the technique.
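The caveat that the speed benefit "depends on model complexity" is, in general terms, Amdahl's law: the serial fraction of an analysis task bounds what extra cores can deliver. A generic sketch (the numbers are illustrative, not Cobham benchmark data):

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Predicted speedup when only parallel_fraction of the runtime scales."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# A highly iterative task that is 95% parallelizable gains roughly 5.9x
# on an 8-core workstation, yet can never exceed 20x (1 / 0.05) no
# matter how many cores are added.
eight_core = amdahl_speedup(0.95, 8)
```

This is why highly iterative, computation-dominated tasks benefit most: their parallel fraction is close to 1, so the multi-core speedup approaches the core count.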

With their new method, computer scientists from Saarland University are able, for the first time, to compute all illumination effects in a simpler and more efficient way. Courtesy of AG Slusallek/Saar-Uni

Realistic Computer Graphics Technology Vastly Speeds Process

August 18, 2014 2:15 pm | by Saarland University | News | Comments

Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Special computing methods are used for this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculating process.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

With an emphasis on HPC applications in science, engineering and large-scale data analytics, the Gordon Bell Prize tracks the overall progress in parallel computing.

Finalists Compete for Coveted ACM Gordon Bell Prize in High Performance Computing

August 13, 2014 12:01 pm | by SC14 | News | Comments

With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose a winner of the prestigious 2014 Gordon Bell Prize. The winner will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.

On the Trail of Paradigm-Shifting Methods for Solving Mathematical Models

July 15, 2014 10:11 am | by Hengguang Li | Blogs | Comments

How using CPU/GPU parallel computing is the next logical step: My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. These include the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...

RSC Demonstrates PetaStream Computing Power at ISC'14

July 1, 2014 9:16 am | by RSC Group | News | Comments

RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).       

Computational techniques developed by a team from NIST and IU could enable precise computation of atomic properties that are important for nuclear medicine, as well as astrophysics and other fields of atomic research.

New Math Technique Improves Atomic Property Predictions to Historic Accuracy

June 30, 2014 3:56 pm | by NIST | News | Comments

By combining advanced mathematics with high-performance computing, scientists have developed a tool that allowed them to calculate a fundamental property of most atoms on the periodic table to historic accuracy — reducing error by a factor of a thousand in many cases. 

COSMOS becomes an Intel Parallel Computing Center

June 30, 2014 9:30 am | by University of Cambridge | News | Comments

Cambridge’s COSMOS supercomputer, the largest shared-memory computer in Europe, has been named by computer giant Intel as one of its Parallel Computing Centers, building on a long-standing collaboration between Intel and the University of Cambridge. 

Intel Issues RFP for Intel Parallel Computing Centers

Join the Journey to Accelerate Discovery through Increased Parallelism

May 28, 2014 11:20 am | by Intel Parallel Computing Centers | Blogs | Comments

Solving some of the biggest challenges in society, industry and sciences requires dramatic increases in computing efficiency. Many HPC customers are sitting on incredible untapped compute reserves and they don’t even know it. The very people who are focused on solving the world’s biggest problems with high-performance computing are often only using a small fraction of the compute capability their systems provide. Why? Their software ...

The special focus of this workshop will be on interactive parallel computing with IPython.

4th Workshop on Python for High Performance and Scientific Computing (PyHPC 2014)

May 21, 2014 12:03 pm | by PyHPC 2014 | Events

The workshop will bring together researchers and practitioners from industry, academia, and the wider community using Python in all aspects of high performance and scientific computing. The goal is to present Python applications from mathematics, science, and engineering, to discuss general topics regarding the use of Python (such as language design and performance issues), and to share experience using Python in scientific computing education.

Steve Finn, HPC Consultant, Cherokee Information Services

Steve Finn

April 23, 2014 3:49 pm | Biographies

Steve Finn provides technology assessment and telecommunications support services for a community of HPC users. His background includes vectorization and parallelization of application codes, benchmarking, and HPC system acquisitions.
