Processors

The Lead

Artist’s impression of a proton depicting three interacting valence quarks inside. Courtesy of Jefferson Lab

HPC Community Experts Weigh in on Code Modernization

December 17, 2014 4:33 pm | by Doug Black | Articles | Comments

Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now a stark realization that the newest parallel supercomputers will not realize their full potential without reengineering software to divide computational problems efficiently among the thousands of processors that make up next-generation many-core computing platforms.

Green500: German Supercomputer a World Champion in Saving Energy

November 26, 2014 10:51 am | by Goethe-Universität Frankfurt am Main | News | Comments

The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the...

PowerEdge C4130 Server

November 24, 2014 2:56 pm | Product Releases | Comments

The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-...

Today’s Enterprising GPUs

November 20, 2014 2:09 pm | by Rob Farber | Articles | Comments

HPC has always embraced the leading edge of technology and, as such, acts as the trailbreaker...


Karol Kowalski, Capability Lead for NWChem Development, works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.

Advancing Computational Chemistry with NWChem

November 18, 2014 3:07 pm | by Mike Bernhardt, HPC Community Evangelist, Intel | Articles | Comments

An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development - NWChem is an open source, high-performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Laboratory in Richland, WA. I recently visited with Kowalski, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.

Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

Single Elimination Tournament Raises Awareness of Parallelization’s Importance

October 30, 2014 1:03 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

The Met Office uses more than 10 million weather observations a day and an advanced atmospheric model to create 3,000 tailored forecasts and briefings each day, delivered to customers ranging from government, businesses and the general public to the armed forces.

UK National Weather Service Awards $128M Supercomputer Contract

October 29, 2014 11:43 am | by Cray | News | Comments

Cray announced it has been awarded a contract to provide the Met Office in the United Kingdom with multiple Cray XC supercomputers and Cray Sonexion storage systems. Consisting of three phases spanning multiple years, the $128 million contract expands Cray’s presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States.

An IBM logo displayed in Berlin, VT. IBM is paying $1.5 billion to Globalfoundries in order to shed its costly chip division. (AP Photo/Toby Talbot)

IBM to Pay $1.5B to Shed Costly Chip Division

October 20, 2014 10:54 am | by Michelle Chapman, AP Business Writer | News | Comments

IBM will pay $1.5 billion to Globalfoundries in order to shed its costly chip division. IBM Director of Research John E. Kelly III said in an interview on October 20, 2014, that handing over control of the semiconductor operations will allow it to grow faster, while IBM continues to invest in and expand its chip research.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

High Performance Parallelism Pearls: A Teaching Juggernaut

October 13, 2014 9:52 am | by Rob Farber | Blogs | Comments

High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, plus provide tools and techniques to adapt legacy codes, as well as increase application performance on Intel Xeon processors. 

IBM has announced new capabilities for its System z mainframe.

IBM Delivers New Analytics Offerings for the Mainframe to Provide Real-Time Customer Insights

October 7, 2014 2:09 pm | by IBM | News | Comments

Building on client demand to integrate real-time analytics with consumer transactions, IBM has announced new capabilities for its System z mainframe. The integration of analytics with transactional data can provide businesses with real-time, actionable insights on commercial transactions as they occur to take advantage of new opportunities to increase sales and help minimize loss through fraud prevention.

A researcher proposes to construct a new quantum computer, able to perform multiple operations in a few seconds, based on the diamond structure to process information similarly to regular computers but with its own units of information, called qubits.

From Diamonds to Supercomputers

September 29, 2014 3:37 pm | by Investigación y Desarrollo | News | Comments

A researcher proposes to construct a new quantum computer, able to perform multiple operations in a few seconds, that is based on the diamond structure. It would process information similarly to regular computers, but with its own units of information, called qubits, that allow much faster data processing, equal to one thousand computers working simultaneously.

“Scalability and performance means taking a careful look at the code modernization opportunities that exist for both message passing and threads as well as opportunities for vectorization and SIMDization.” Rick Stevens, Argonne National Laboratory

Extending the Lifespan of Critical Resources through Code Modernization

September 9, 2014 2:05 pm | by Doug Black | Articles | Comments

As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.

Simulating magnetized plasma devices requires multiple particle interaction models and highly accurate, self-consistent particle trajectory modelling in combined magnetic and space-charge modified electric fields.

3D Space Charge Parallel Processing Module

August 27, 2014 3:03 pm | Cobham Technical Services | Product Releases | Comments

The 3D Space Charge module uses code that is optimized for the shared memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally-intensive analysis tasks can be greatly accelerated by the technique.

Cray CS-Storm High Density Cluster

August 26, 2014 3:11 pm | Cray Inc. | Product Releases | Comments

Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system is a powerful, GPU-dense cluster supercomputer.

Advanced techniques such as "structured placement," shown here and developed by Markov's group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors.

Reviewing Frontier Technologies to Determine Fundamental Limits of Computer Scaling

August 15, 2014 12:31 pm | by NSF | News | Comments

Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

NVIDIA Quadro K5200, K4200, K2200, K620 and K420 GPUs

August 12, 2014 3:59 pm | Nvidia Corporation | Product Releases | Comments

Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications from a Quadro-based workstation from essentially any device.

A brain-inspired chip to transform mobility and Internet of Things through sensory perception. Courtesy of IBM

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 11, 2014 12:13 pm | by IBM | News | Comments

Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet while running at biological real time it consumes a minuscule 70 mW.

Optalysys is currently developing two products, a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both of which are expected to be launched in 2017.

Light-speed Computing: Prototype Optical Processor Set to Revolutionize Supercomputing

August 8, 2014 4:13 pm | by Optalysys | News | Comments

Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.

FirePro S9150 Server GPU for HPC

August 7, 2014 10:56 am | AMD | Product Releases | Comments

The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first to support enhanced double precision and to break the 2.0 TFLOPS double precision barrier.

AMD Opteron 64-Bit ARM-Based Developer Kits

July 30, 2014 12:45 pm | Advanced Micro Devices, Inc. | Product Releases | Comments

The AMD Opteron A1100-Series developer kit features AMD's first 64-bit ARM-based processor, codenamed "Seattle." The processor supports 4 and 8 ARM Cortex-A57 cores; up to 4 MB of shared L2 and 8 MB of shared L3 cache; configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866...

Fundamental Chemistry Findings Could Help Extend Moore’s Law

July 16, 2014 11:49 am | by Lawrence Berkeley National Laboratory | News | Comments

Over the years, computer chips have gotten smaller, thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore’s Law. But there’s one component of the chip-making process in need of an overhaul if Moore’s law is to continue: the chemical mixture called photoresist.

Cray Awarded Contract to Install India's First Cray XC30 Supercomputer

July 16, 2014 3:33 am | by Cray | News | Comments

The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.

On the Trail of Paradigm-Shifting Methods for Solving Mathematical Models

July 15, 2014 10:11 am | by Hengguang Li | Blogs | Comments

How using CPU/GPU parallel computing is the next logical step - My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...

IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems

July 9, 2014 4:58 pm | by IBM | News | Comments

IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

FirePro W8100 Professional Graphics Card

July 8, 2014 3:52 pm | AMD | Product Releases | Comments

The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.

Research Could Lead to Dramatic Data Farm Energy Savings

July 2, 2014 3:52 pm | by Tina Hilding, Washington State University | News | Comments

Washington State University has developed a wireless network on a computer chip that could reduce energy consumption at huge data farms by as much as 20 percent.                         

AppliedMicro Readies 64-bit ARM-based SoC for HPC

July 2, 2014 11:58 am | by AppliedMicro | News | Comments

AppliedMicro has announced the readiness of the X-Gene Server on a Chip, based on the 64-bit ARMv8-A architecture, for High Performance Computing (HPC) workloads.

Eurotech Combines Applied Micro 64-bit ARM CPUs and NVIDIA GPU Accelerators for HPC

July 2, 2014 8:07 am | Eurotech, Nvidia Corporation | News | Comments

Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a new, original high performance computing (HPC) system architecture that combines extreme density and best-in-class energy efficiency. The new architecture is based on an innovative highly modular and scalable packaging concept. 
