GPU

The Lead

A new tool lets video gamers play high-end games on their smartphones and tablets using less bandwidth. Duke University researchers tested the tool on Doom 3, a futuristic first-person shooter game.

Playing Graphics-intensive Fast-Action Games in the Cloud without Guzzling Gigabytes

May 21, 2015 9:50 am | by Duke University | News | Comments

Gamers might one day be able to enjoy the same graphics-intensive fast-action video games they play on their gaming consoles or personal computers from mobile devices without guzzling gigabytes, thanks to a new tool developed by researchers at Duke University and Microsoft Research. Named “Kahawai,” the tool delivers graphics and gameplay on par with conventional cloud-gaming setups for a fraction of the bandwidth.
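The blurb does not describe how the tool achieves its savings, so the sketch below is only a generic illustration of one collaborative-rendering idea, not Kahawai's actual code or algorithm: the mobile client renders a cheap low-detail frame locally, and only the per-pixel difference from a high-detail frame needs to be streamed, since a delta image typically compresses far better than a full frame. All kernel and buffer names here are hypothetical.

```cuda
// Hypothetical illustration only -- NOT Kahawai's published algorithm or code.
// "Server" computes a per-pixel delta between a high-detail and a low-detail frame;
// "client" adds the delta back to its own low-detail render.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void frame_delta(const uint8_t* hi, const uint8_t* lo, int16_t* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = (int16_t)hi[i] - (int16_t)lo[i];
}

__global__ void frame_reconstruct(const uint8_t* lo, const int16_t* d, uint8_t* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = (uint8_t)(lo[i] + d[i]);
}

int main()
{
    const int n = 1920 * 1080;  // one grayscale frame's worth of pixels
    uint8_t *hi, *lo, *out;
    int16_t *d;
    cudaMallocManaged(&hi, n);
    cudaMallocManaged(&lo, n);
    cudaMallocManaged(&out, n);
    cudaMallocManaged(&d, n * sizeof(int16_t));
    // Synthetic stand-ins for a high-detail and a low-detail render of the same frame.
    for (int i = 0; i < n; ++i) { hi[i] = (uint8_t)(i % 251); lo[i] = (uint8_t)(i % 241); }

    frame_delta<<<(n + 255) / 256, 256>>>(hi, lo, d, n);         // "server" side
    frame_reconstruct<<<(n + 255) / 256, 256>>>(lo, d, out, n);  // "client" side
    cudaDeviceSynchronize();

    printf("reconstruction %s\n", out[12345] == hi[12345] ? "matches" : "differs");
    cudaFree(hi); cudaFree(lo); cudaFree(out); cudaFree(d);
    return 0;
}
```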

AMD Announces “Zen” x86 Processor Core

May 7, 2015 12:11 pm | by AMD | News | Comments

AMD provided details of the company’s multi-year strategy to drive profitable growth based on...

NYU to Advance Deep Learning Research with Multi-GPU Cluster

May 5, 2015 11:37 am | by Kimberly Powell, NVIDIA | News | Comments

Self-driving cars. Computers that detect tumors. Real-time speech translation. Just a few years...

GPU4EO Challenge 2015: Stimulating Adoption of GPUs in Remote Sensing

April 17, 2015 3:11 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | News | Comments

GPU4EO Challenge 2015 is an international initiative which involves students, researchers and...



The Living Heart Project’s goal is to enable creation of a customized 3-D heart.

Highly Realistic Human Heart Simulations Transforming Medical Care

March 26, 2015 5:03 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Articles | Comments

The World Health Organization reports that cardiovascular diseases are the number one cause of death globally. Working to address this pressing public health problem, researchers worldwide are seeking new ways to accelerate research, raise the accuracy of diagnoses and improve patient outcomes. Several initiatives have utilized ground-breaking new simulations to advance research into aspects such as rhythm disturbances and ...

The OpenPOWER Foundation, a collaboration of technologists encouraging the adoption of an open server architecture for computer data centers, has grown to more than 110 businesses, organizations and individuals across 22 countries.

10 New OpenPOWER Foundation Solutions Unveiled

March 19, 2015 3:19 pm | by OpenPOWER Foundation | News | Comments

The OpenPOWER Foundation has announced more than 10 hardware solutions — spanning systems, boards and cards, and a new microprocessor customized for China. Built collaboratively by OpenPOWER members, the new solutions exploit the POWER architecture to provide more choice, customization and performance to customers, including hyperscale data centers. 

Pascal will offer better performance than Maxwell on key deep-learning tasks.

NVIDIA’s Next-Gen Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps

March 18, 2015 12:24 pm | News | Comments

NVIDIA has announced that its Pascal GPU architecture, set to debut next year, will accelerate deep learning applications 10X beyond the speed of its current-generation Maxwell processors. NVIDIA CEO and co-founder Jen-Hsun Huang revealed details of Pascal and the company’s updated processor roadmap in front of a crowd of 4,000 during his keynote address at the GPU Technology Conference, in Silicon Valley.

Rob Farber is an independent HPC consultant to startups and Fortune 100 companies, as well as government and academic organizations.

Optimizing Application Energy Efficiency Using CPUs, GPUs and FPGAs

March 13, 2015 8:43 am | by Rob Farber | Articles | Comments

The HPC and enterprise communities are experiencing a paradigm shift as FLOPS per watt, rather than raw FLOPS (floating-point operations per second), becomes the guiding metric in procurements, system design and, now, application development. In short, “performance at any cost” is no longer viable, as the operational costs of supercomputer clusters are now on par with the acquisition cost of the hardware itself.
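To make the metric concrete, here is a minimal sketch of the FLOPS-per-watt arithmetic the article refers to. The throughput and power figures are hypothetical placeholders, not measurements of any real system.

```cuda
// Minimal sketch of the FLOPS-per-watt metric (hypothetical numbers throughout).
#include <cstdio>

int main()
{
    // Hypothetical accelerator: sustained double-precision throughput and board power.
    const double sustained_gflops = 1200.0;  // GFLOP/s (hypothetical)
    const double board_power_w    = 235.0;   // watts   (hypothetical)

    // Energy efficiency is throughput divided by power draw.
    const double gflops_per_watt = sustained_gflops / board_power_w;

    // Energy needed to finish a fixed amount of work, e.g. 1 PFLOP of computation.
    const double work_gflop = 1.0e6;                       // 1 PFLOP = 1e6 GFLOP
    const double seconds    = work_gflop / sustained_gflops;
    const double energy_kj  = board_power_w * seconds / 1000.0;

    printf("Efficiency: %.2f GFLOPS/W\n", gflops_per_watt);
    printf("1 PFLOP of work: %.1f s, %.1f kJ\n", seconds, energy_kj);
    return 0;
}
```

The same arithmetic explains the procurement shift: for a fixed amount of work, the energy bill scales with power draw divided by sustained throughput, so the machine with the better FLOPS-per-watt ratio costs less to operate even if its peak FLOPS is lower.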

The conference will focus on High-Performance Computing essentials, new developments and emerging technologies, best practices and hands-on training.

HPC Advisory Council Switzerland Conference 2015

January 20, 2015 10:10 am | by HPC Advisory Council | Events

The HPC Advisory Council and the Swiss Supercomputing Centre will host the HPC Advisory Council Switzerland Conference 2015 at the Lugano Convention Centre, Lugano, Switzerland, March 23-25, 2015. The conference will focus on High-Performance Computing essentials, new developments and emerging technologies, best practices and hands-on training.

The first OpenPOWER Summit will bring together an ecosystem of hardware and software developers, customers, academics, government agencies, industry luminaries, press and analysts to build OpenPOWER momentum.

First OpenPOWER Summit Announced

December 24, 2014 11:33 am | by OpenPOWER Foundation | News | Comments

The OpenPOWER Foundation, an open development community dedicated to accelerating data center innovation for POWER platforms, has announced its first OpenPOWER Summit will be held March 17 to 19, 2015, at the San Jose Convention Center. It will be hosted within the GPU Technology Conference, which has thousands of technology sector attendees, including developers, researchers, government agencies and industry luminaries.


First Annual OpenPOWER Summit

December 23, 2014 8:47 am | by OpenPOWER Foundation | Events

The First Annual OpenPOWER Summit will take place at the San Jose Convention Center. It will be hosted within the GPU Technology Conference (GTC) which has thousands of technology sector attendees and significant industry press and analyst presence including developers, researchers, government agencies, and industry luminaries.

A team of MIT neuroscientists has found that some computer programs can identify objects in images just as well as the primate brain. Courtesy of the researchers

Deep Computer Neural Networks Catch Up to Primate Brain

December 18, 2014 4:53 pm | by Anne Trafton, MIT | News | Comments

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects. Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.


HPC for All

November 21, 2014 4:32 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

In the latest issue of HPC Source, “A New Dawn: Bringing HPC to the Enterprise,” we look at how small- to medium-sized manufacturers can realize major benefits from adoption of high performance computing in areas such as modeling, simulation and analysis.

Rob Farber is an independent HPC consultant to startups and Fortune 100 companies, as well as government and academic organizations.

Today’s Enterprising GPUs

November 20, 2014 2:09 pm | by Rob Farber | Articles | Comments

HPC has always embraced the leading edge of technology and, as such, acts as the trailblazer and scout for enterprise and business customers. HPC has highlighted and matured the abilities of previously risky devices, like GPUs, that enterprise customers now leverage to create competitive advantage. GPUs have moved beyond “devices with potential” to “production devices” that are used for profit generation.


HPC Innovation Excellence Award: GIS Federal

November 17, 2014 6:35 pm | Award Winners

For the US Army, and for the DoD and intelligence community as a whole, GIS Federal developed an innovative approach to quickly filter, analyze, and visualize big data from hundreds of data providers, with a particular emphasis on geospatial data.

A New Dawn: Bringing HPC to Smaller Manufacturers

HPC Source - A New Dawn: Bringing HPC to Smaller Manufacturers

November 13, 2014 3:43 pm | Digital Editions | Comments

Welcome to SCIENTIFIC COMPUTING's "Bringing HPC to Smaller Manufacturers" edition of HPC Source, an interactive publication devoted exclusively to coverage of high performance computing.

Winning NVIDIA’s 2014 Early Stage Challenge helped GPU-powered startup Map-D bring interactivity to big data in vivid ways.

Hot Young Startups Vie for $100,000 GPU Challenge Prize

October 15, 2014 9:24 am | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | News | Comments

NVIDIA is looking for a dozen would-be competitors for next year’s Early Stage Challenge, which takes place as part of its Emerging Companies Summit (ECS). In this seventh annual contest, hot young startups using GPUs vie for a single $100,000 grand prize.


Cray CS-Storm Accelerator-Optimized Cluster Supercomputer

September 8, 2014 10:58 am | Cray Inc. | Product Releases | Comments

As part of the Cray CS cluster supercomputer series, Cray offers the CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, designed for massively parallel computing workloads.


Cray CS-Storm High Density Cluster

August 26, 2014 3:11 pm | Cray Inc. | Product Releases | Comments

Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system is a powerful single-node cluster.


NVIDIA CUDA 6.5 Production Release

August 22, 2014 12:15 pm | Nvidia Corporation | Product Releases | Comments

NVIDIA CUDA 6.5 brings GPU-accelerated computing to 64-bit ARM platforms. The toolkit provides programmers with a platform to develop advanced scientific, engineering, mobile and HPC applications on GPU-accelerated ARM and x86 CPU-based systems. Features include support for Microsoft Visual Studio 2013, cuFFT callbacks capability and improved debugging for CUDA FORTRAN applications.
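As a rough illustration of the toolkit workflow the release describes, here is a generic SAXPY example that builds unchanged with nvcc on x86 or 64-bit ARM hosts. It is a textbook sketch, not code shipped with CUDA 6.5.

```cuda
// Generic CUDA SAXPY sketch: y = a*x + y across a million elements.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the host/device plumbing short.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiled with nvcc, the same source runs on either host architecture, provided a CUDA-capable GPU is present.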

Brookhaven theoretical physicist Swagato Mukherjee explains that 'invisible' hadrons are like salt molecules floating around in the hot gas of hadrons, making other particles freeze out at a lower temperature than they would if the 'salt' wasn't there.

Invisible Particles Provide First Indirect Evidence of Strange Baryons

August 20, 2014 10:17 am | by Brookhaven National Laboratory | News | Comments

New supercomputing calculations provide the first evidence that particles predicted by the theory of quark-gluon interactions, but never before observed, are being produced in heavy-ion collisions at the Relativistic Heavy Ion Collider. These heavy strange baryons, containing at least one strange quark, still cannot be observed directly, but instead make their presence known by lowering the temperature at which other baryons "freeze out."

With their new method, computer scientists from Saarland University are able, for the first time, to compute all illumination effects in a simpler and more efficient way. Courtesy of AG Slusallek/Saar-Uni

Realistic Computer Graphics Technology Vastly Speeds Process

August 18, 2014 2:15 pm | by Saarland University | News | Comments

Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Specialized computing methods can achieve this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculation process.

NVIDIA Quadro K5200

NVIDIA Quadro K5200, K4200, K2200, K620 and K420 GPUs

August 12, 2014 3:59 pm | Nvidia Corporation | Product Releases | Comments

Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications from a Quadro-based workstation from essentially any device.


FirePro S9150 Server GPU for HPC

August 7, 2014 10:56 am | AMD | Product Releases | Comments

The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first to support enhanced double precision and to break the 2.0 TFLOPS double precision barrier.

Jeffrey Potoff is a professor of Chemical Engineering and Materials Science, and Loren Schwiebert is an associate professor of Computer Science at Wayne State University.

Using Powerful GPU-Based Monte Carlo Simulation Engine to Model Larger Systems, Reduce Data Errors, Improve System Prototyping

July 22, 2014 8:33 am | by Jeffrey Potoff and Loren Schwiebert | Blogs | Comments

Recently, our research work got a shot in the arm because Wayne State University was the recipient of a complete high-performance compute cluster donated by Silicon Mechanics as part of its 3rd Annual Research Cluster Grant competition. The new HPC cluster gives us some state-of-the-art hardware, which will enhance the development of what we’ve been working on — a novel GPU-Optimized Monte Carlo simulation engine for molecular systems.
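For readers unfamiliar with the method, the sketch below shows the core of a GPU Metropolis Monte Carlo step on a deliberately trivial system: independent particles in a one-dimensional harmonic well. It is a generic illustration of the technique, not the authors' simulation engine, and every name and parameter is hypothetical.

```cuda
// Generic GPU Metropolis Monte Carlo sketch (not the GOMC engine described above).
// Each thread evolves one independent particle in U(x) = 0.5 * k * x^2.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void init_rng(curandState* states, unsigned long long seed, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) curand_init(seed, i, 0, &states[i]);
}

__global__ void metropolis(float* x, curandState* states,
                           int n, int sweeps, float beta, float k, float step)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    curandState local = states[i];
    float xi = x[i];
    for (int s = 0; s < sweeps; ++s) {
        // Propose a symmetric random displacement.
        float trial = xi + step * (2.0f * curand_uniform(&local) - 1.0f);
        float dE = 0.5f * k * (trial * trial - xi * xi);
        // Accept with probability min(1, exp(-beta * dE)).
        if (dE <= 0.0f || curand_uniform(&local) < expf(-beta * dE))
            xi = trial;
    }
    x[i] = xi;
    states[i] = local;
}

int main()
{
    const int n = 4096, sweeps = 10000;
    float* x;
    curandState* states;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMalloc(&states, n * sizeof(curandState));
    for (int i = 0; i < n; ++i) x[i] = 0.0f;

    init_rng<<<(n + 255) / 256, 256>>>(states, 1234ULL, n);
    metropolis<<<(n + 255) / 256, 256>>>(x, states, n, sweeps, 1.0f, 1.0f, 0.5f);
    cudaDeviceSynchronize();

    // For beta = k = 1, the sampled <x^2> should approach 1.
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += x[i] * x[i];
    printf("<x^2> = %.3f (expected ~1.0)\n", sum / n);
    cudaFree(x);
    cudaFree(states);
    return 0;
}
```

A production molecular engine differs mainly in the energy evaluation, where pairwise interactions between particles dominate the cost and motivate the GPU in the first place.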

On the Trail of Paradigm-Shifting Methods for Solving Mathematical Models

July 15, 2014 10:11 am | by Hengguang Li | Blogs | Comments

How using CPU/GPU parallel computing is the next logical step: My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
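As a rough illustration of the kind of kernel such CPU/GPU numerical methods rely on, here is a textbook Jacobi sweep for the two-dimensional Poisson problem on a uniform grid. It is a generic example under assumed parameters, not code from the work described above.

```cuda
// Generic Jacobi sweep for -Laplacian(u) = f on a unit square, Dirichlet boundary u = 0.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void jacobi_sweep(const double* u, double* u_new, const double* f,
                             int nx, int ny, double h)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    // Update interior points only; boundary values stay fixed.
    if (i > 0 && i < nx - 1 && j > 0 && j < ny - 1) {
        int id = j * nx + i;
        u_new[id] = 0.25 * (u[id - 1] + u[id + 1] +
                            u[id - nx] + u[id + nx] + h * h * f[id]);
    }
}

int main()
{
    const int nx = 256, ny = 256, iters = 2000;  // Jacobi converges slowly; this just exercises the kernel
    const double h = 1.0 / (nx - 1);
    size_t bytes = (size_t)nx * ny * sizeof(double);
    double *u, *u_new, *f;
    cudaMallocManaged(&u, bytes);
    cudaMallocManaged(&u_new, bytes);
    cudaMallocManaged(&f, bytes);
    for (int k = 0; k < nx * ny; ++k) { u[k] = u_new[k] = 0.0; f[k] = 1.0; }

    dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
    for (int it = 0; it < iters; ++it) {
        jacobi_sweep<<<grid, block>>>(u, u_new, f, nx, ny, h);
        double* tmp = u; u = u_new; u_new = tmp;  // ping-pong buffers between sweeps
    }
    cudaDeviceSynchronize();
    printf("u at domain center: %.6f\n", u[(ny / 2) * nx + nx / 2]);
    cudaFree(u); cudaFree(u_new); cudaFree(f);
    return 0;
}
```

Each grid point updates independently from its neighbors' previous values, which is exactly the data parallelism a GPU exploits; the same structure carries over to more sophisticated discretizations of the equations listed above.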

Eurotech Combines Applied Micro 64-bit ARM CPUs and NVIDIA GPU Accelerators for HPC

July 2, 2014 8:07 am | Eurotech, Nvidia Corporation | News | Comments

Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a new, original high performance computing (HPC) system architecture that combines extreme density and best-in-class energy efficiency. The new architecture is based on an innovative highly modular and scalable packaging concept. 


NVIDIA GPUs Open the Door to ARM64 Entry into High Performance Computing

June 26, 2014 9:48 am | Nvidia Corporation | News | Comments

NVIDIA has announced that multiple server vendors are leveraging the performance of NVIDIA GPU accelerators to launch the world’s first 64-bit ARM development systems for high performance computing (HPC).             

Dr. Paul Calleja, Director of High Performance Computing Service at the University of Cambridge, will present a keynote on the SKA Project.

HPC Advisory European Conference Workshop to Focus on Productivity

June 3, 2014 11:39 am | by Brian Sparks, HPC Advisory Council Media Relations and Events Director | Blogs | Comments

In conjunction with ISC’14, we will hold a one-day HPC Advisory European Conference Workshop on June 22, 2014. This workshop will focus on HPC productivity, as well as advanced HPC topics and futures, and will bring together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High-Performance Computing. Our keynote session will feature the SKA Project
