Parallel Computing

The Lead

Mathematica Online

September 17, 2014 1:59 pm | Wolfram Research, Inc. | Product Releases | Comments

Mathematica Online operates completely in the cloud, is accessible through any modern Web browser with no installation or configuration required, and is fully interoperable with Mathematica on the desktop. Users simply point a Web browser at Mathematica Online, log in, and immediately start using the Mathematica notebook interface.

Multicore Computing Helps Fight Lung Cancer, Speeds CT Image Processing

September 12, 2014 3:08 pm | by University of Michigan | News | Comments

A new $1.9 million study at the University of Michigan seeks to make low-dose computed...

SDSC Joins Intel Parallel Computing Centers Program with Focus on Molecular Dynamics, Neuroscience and Life Sciences

September 12, 2014 2:44 pm | by San Diego Supercomputer Center | News | Comments

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, is working...

Cray CS-Storm Accelerator-Optimized Cluster Supercomputer

September 8, 2014 10:58 am | Product Releases | Comments

As part of the Cray CS cluster supercomputer series, Cray offers the CS-Storm cluster, an...


Simulating magnetized plasma devices requires multiple particle interaction models and highly accurate, self-consistent particle trajectory modelling in combined magnetic and space-charge modified electric fields.

3D Space Charge Parallel Processing Module

August 27, 2014 3:03 pm | Cobham Technical Services | Product Releases | Comments

The 3D Space Charge module uses code that is optimized for the shared-memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally intensive analysis tasks can be greatly accelerated by the technique.
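The idea of spreading independent, compute-heavy iterations across the cores of a shared-memory machine can be sketched as follows. This is a generic illustration using Python's standard library, not Cobham's actual implementation; `trajectory_step` is a hypothetical stand-in for one expensive particle-trajectory solve.

```python
# Hypothetical sketch of shared-memory task parallelism on a multi-core PC
# (illustrative only; not the 3D Space Charge module's actual code).
from multiprocessing import Pool

def trajectory_step(seed):
    # Stand-in for one computationally intensive, independent solve.
    x = float(seed)
    for _ in range(10_000):
        x = (x * x + 1.0) % 97.0
    return x

def run_batch(seeds):
    # One worker process per core; each seed is solved independently,
    # so wall-clock time shrinks roughly with the core count for
    # compute-bound work.
    with Pool() as pool:
        return pool.map(trajectory_step, seeds)

if __name__ == "__main__":
    results = run_batch(range(8))
    print(len(results))  # 8 independent results, computed in parallel
```

As the teaser notes, the payoff depends on the workload: process startup and data transfer are fixed overheads, so only sufficiently iterative, compute-bound tasks see large speedups.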

With their new method, computer scientists from Saarland University are able, for the first time, to compute all illumination effects in a simpler and more efficient way. Courtesy of AG Slusallek/Saar-Uni

Realistic Computer Graphics Technology Vastly Speeds Process

August 18, 2014 2:15 pm | by Saarland University | News | Comments

Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Special computing methods should ensure this, but require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculating process.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

With an emphasis on HPC applications in science, engineering and large-scale data analytics, the Gordon Bell Prize tracks the overall progress in parallel computing.

Finalists Compete for Coveted ACM Gordon Bell Prize in High Performance Computing

August 13, 2014 12:01 pm | by SC14 | News | Comments

With five technical papers contending for one of the most prestigious awards in high performance computing (HPC), the Association for Computing Machinery's (ACM) awards committee has four months left to choose a winner of the 2014 Gordon Bell Prize. The winner will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.

On the Trail of Paradigm-Shifting Methods for Solving Mathematical Models

July 15, 2014 10:11 am | by Hengguang Li | Blogs | Comments

How using CPU/GPU parallel computing is the next logical step: My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...

RSC Demonstrates PetaStream Computing Power at ISC'14

July 1, 2014 9:16 am | by RSC Group | News | Comments

RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).       

Computational techniques developed by a team from NIST and IU could enable precise computation of atomic properties that are important for nuclear medicine, as well as astrophysics and other fields of atomic research.

New Math Technique Improves Atomic Property Predictions to Historic Accuracy

June 30, 2014 3:56 pm | by NIST | News | Comments

By combining advanced mathematics with high-performance computing, scientists have developed a tool that allowed them to calculate a fundamental property of most atoms on the periodic table to historic accuracy — reducing error by a factor of a thousand in many cases. 

COSMOS Becomes an Intel Parallel Computing Center

June 30, 2014 9:30 am | by University of Cambridge | News | Comments

Cambridge’s COSMOS supercomputer, the largest shared-memory computer in Europe, has been named by computer giant Intel as one of its Parallel Computing Centers, building on a long-standing collaboration between Intel and the University of Cambridge. 

Intel Issues RFP for Intel Parallel Computing Centers

Join the Journey to Accelerate Discovery through Increased Parallelism

May 28, 2014 11:20 am | by Intel Parallel Computing Centers | Blogs | Comments

Solving some of the biggest challenges in society, industry and sciences requires dramatic increases in computing efficiency. Many HPC customers are sitting on incredible untapped compute reserves and they don’t even know it. The very people who are focused on solving the world’s biggest problems with high-performance computing are often only using a small fraction of the compute capability their systems provide. Why? Their software ...

The special focus of this workshop will be on interactive parallel computing with IPython.

4th Workshop on Python for High Performance and Scientific Computing (PyHPC 2014)

May 21, 2014 12:03 pm | by PyHPC 2014 | Events

The workshop will bring together researchers and practitioners from industry, academia, and the wider community using Python in all aspects of high performance and scientific computing. The goal is to present Python applications from mathematics, science, and engineering, to discuss general topics regarding the use of Python (such as language design and performance issues), and to share experience using Python in scientific computing education.
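A recurring performance topic in Python-for-HPC settings is vectorization: moving element-wise loops out of the interpreter and into compiled array code. The sketch below is a generic illustration of that idiom, not material from the workshop itself.

```python
# Illustrative vectorization idiom common in Python scientific computing
# (generic example; not from the PyHPC 2014 workshop program).
import numpy as np

def dot_loop(a, b):
    # Pure-Python loop: interpreter overhead is paid on every element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000)

# Same computation, but NumPy dispatches the loop to compiled code,
# typically orders of magnitude faster for arrays this size.
fast = float(a @ b)
print(fast)  # 499999500000.0
```

The small-scale check `dot_loop([1.0, 2.0], [3.0, 4.0]) == 11.0` confirms both paths compute the same dot product; the difference is purely where the loop runs.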

Steve Finn, HPC Consultant, Cherokee Information Services

Steve Finn

April 23, 2014 3:49 pm | Biographies

Steve Finn provides technology assessment and telecommunications support services for a community of HPC users. His background includes vectorization and parallelization of application codes, benchmarking, and HPC system acquisitions.

Paul Buerger, HPC industry expert

Paul Buerger

April 23, 2014 3:26 pm | Biographies

Paul Buerger, HPC industry expert, spent most of his career at Ohio State University and Ohio Supercomputer Center. During those 40 years, he worked with everything from minicomputers to mainframes to vector supercomputers to HPC clusters.

Intel is creating Intel Parallel Computing Centers (IPCCs) at leading institutions in HPC research to promote the modernization of essential application codes to increase their parallelism and scalability.

Intel Selects Georgia Tech as Site for Next Parallel Computing Center

April 22, 2014 12:25 pm | by Intel | News | Comments

As modern computer systems become more powerful, utilizing as many as millions of processor cores in parallel, Intel is looking for new ways to efficiently use these high performance computing (HPC) systems to accelerate scientific discovery. As part of this effort, Intel has selected Georgia Tech as the site of one of its Parallel Computing Centers.

William Gropp, Thomas M. Siebel Chair in Computer Science, University of Illinois, Urbana-Champaign

William Gropp

April 17, 2014 10:16 am | Biographies

William Gropp received his B.S. in Mathematics from Case Western Reserve University in 1977, an MS in Physics from the University of Washington in 1978, and a Ph.D. in Computer Science from Stanford in 1982. He held the positions of assistant (1982-1988) and associate (1988-1990) professor in the Computer Science Department at Yale University.

Ganesh Gopalakrishnan, Director, Center for Parallel Computing, University of Utah

Ganesh L. Gopalakrishnan

April 17, 2014 9:56 am | Biographies

Ganesh L. Gopalakrishnan earned his PhD in Computer Science from Stony Brook University in 1986, joining Utah the same year. He was Visiting Assistant Professor at the University of Calgary (1988), and conducted sabbatical work at Stanford University (1995), Intel, Santa Clara (2002), and Utah (2009, developing a Parallel and Concurrent Programming Curriculum with Microsoft Research).

Arndt Bode, Director, Leibniz Computer Center (LRZ)

Prof. Dr. Arndt Bode

April 16, 2014 3:17 pm | Biographies

Prof. Dr. Arndt Bode holds the chair for computer technology and computer organization at Technische Universität München. He runs a variety of research projects in the field of computer architecture, tools and applications for parallel and distributed systems. He is Director of Leibniz Computer Center (LRZ), the German Supercomputer Center of the Bavarian Academy of Sciences.

Osamu Tatebe, Associate Professor, University of Tsukuba

Osamu Tatebe

April 16, 2014 10:13 am | Biographies

Osamu Tatebe's research interests include Data Intensive Computing, High Performance Computing, Grid Computing, Distributed Operating System, Runtime System for Parallel Computers, Parallel Numerical Algorithm, and Compilers.

Muniyappa Manjunathaiah, Lecturer, School of Systems Engineering, University of Reading

Muniyappa Manjunathaiah

April 16, 2014 10:10 am | Biographies

Muniyappa Manjunathaiah's research in computational science includes novel and emergent systems and architectures, parallel and distributed computing, cloud computing, mathematical modelling, scalable algorithms, middleware to support parallel and distributed applications.

Matthias Müller, Professor of High Performance Computing, RWTH Aachen University

Matthias S. Müller

April 16, 2014 10:07 am | Biographies

Matthias S. Müller has been University Professor of High Performance Computing at the RWTH Aachen Faculty of Mathematics, Computer Science, and Natural Sciences since January 2013. His research focuses are the automatic error analysis of parallel programs, parallel programming models, performance analysis, and energy efficiency.

Huynh Phung Huynh, Scientist and Capability Group Manager, A*STAR Institute of High Performance Computing

Huynh Phung Huynh

April 16, 2014 8:45 am | Biographies

Huynh Phung Huynh's research interests include high performance computing (HPC): compiler optimization for GPU, many cores and other accelerators; Parallel computing: framework for parallel programming or scheduling; and HPC for data mining and machine learning algorithms.

Eva Burrows, Postdoctoral Researcher, University of Bergen, Norway

Eva Burrows

April 16, 2014 8:33 am | Biographies

Eva Burrows investigates the possibility of (arbitrary depth) nested parallel programming concepts based on multi-level Data Dependency Algebras (DDAs). The work includes various levels of parallelism from the on-chip parallelism of microprocessors via GPUs, FPGAs, etc up to parallel machine networks. She is combining DDA concepts with hardware programming with a strong focus on multicore and GPU programming (e.g. NVIDIA's CUDA).

Jack Dongarra, University Distinguished Professor of Computer Science, University of Tennessee

Jack Dongarra

April 15, 2014 4:09 pm | Biographies

Jack Dongarra specializes in numerical algorithms in linear algebra, parallel computing, use of advanced-computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sid Fernbach Award in 2004, and in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing.

ARCHER Supercomputer Targets Research Solutions on Epic Scale

March 27, 2014 3:43 pm | by University of Edinburgh | News | Comments

A new generation supercomputer, capable of more than one million billion calculations a second, is to be launched at the University of Edinburgh. The £43 million ARCHER (Academic Research Computing High End Resource) system will provide high performance computing support for research and industry projects in the UK.

New Model Reduces Data Access Delay, Could Increase Speed by up to 100x

March 11, 2014 11:45 am | by Illinois Institute of Technology | News | Comments

As the amount of data grows ever larger but memory speed continues to greatly lag CPU speed, Xian-He Sun has established a new mathematical model for reducing data access delay. Sun is creator of Sun-Ni’s law — one of three scalable computing laws along with Amdahl’s law and Gustafson’s law — and Distinguished Professor of Computer Science at Illinois Institute of Technology.
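The two classic laws named alongside Sun-Ni's can be stated in a few lines. These are the standard Amdahl and Gustafson formulas only; Sun-Ni's memory-bounded model itself is not reproduced here.

```python
# Standard speedup formulas for the two classic scaling laws mentioned
# above (Sun-Ni's memory-bounded law is not shown here).

def amdahl_speedup(p, n):
    """Fixed-size speedup on n processors, where p is the parallel fraction."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Scaled speedup: the problem grows with n; p is the parallel fraction."""
    return (1.0 - p) + p * n

# With 95% of the work parallelizable on 1024 cores, Amdahl's law caps
# speedup near 1 / 0.05 = 20, while Gustafson's scaled view keeps growing.
print(round(amdahl_speedup(0.95, 1024), 1))  # 19.6
print(round(gustafson_speedup(0.95, 1024)))  # 973
```

The contrast motivates why data access dominates at scale: even a small serial (or memory-stalled) fraction bounds fixed-size speedup, which is the regime Sun's model addresses.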

TotalView 8.13 Parallel Debugger Beta Release

December 3, 2013 1:58 pm | Rogue Wave Software | Product Releases | Comments

The beta release of the TotalView 8.13 parallel debugger for parallel applications written in C, C++ and Fortran features a combination of capabilities for pinpointing and fixing hard-to-find bugs, such as race conditions, memory leaks and memory overruns.
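The race conditions such debuggers target come from unsynchronized read-modify-write sequences. The sketch below is a generic Python illustration of the bug class and its lock-based fix, not an example of TotalView's own interface (TotalView targets C, C++ and Fortran).

```python
# Generic data-race illustration (not TotalView usage). `counter += 1` is a
# read-modify-write: without the lock, two threads can read the same old
# value and one increment is silently lost, which is exactly the kind of
# intermittent bug a parallel debugger helps pinpoint.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:  # removing this lock makes lost updates possible
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; often less without it
```

Because the failure is timing-dependent, it may vanish under a naive rerun, which is why tooling that can freeze and inspect all threads at once is valuable.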
