Processors

The Lead

Single Elimination Tournament Raises Awareness of Parallelization’s Importance

October 30, 2014 1:03 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

UK National Weather Service Awards $128M Supercomputer Contract

October 29, 2014 11:43 am | by Cray | News | Comments

Cray announced it has been awarded a contract to provide the Met Office in the United Kingdom...

IBM to Pay $1.5B to Shed Costly Chip Division

October 20, 2014 10:54 am | by Michelle Chapman, AP Business Writer | News | Comments

IBM will pay $1.5 billion to Globalfoundries in order to shed its costly chip division. IBM...

High Performance Parallelism Pearls: A Teaching Juggernaut

October 13, 2014 9:52 am | by Rob Farber | Blogs | Comments

High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers...



IBM Delivers New Analytics Offerings for the Mainframe to Provide Real-Time Customer Insights

October 7, 2014 2:09 pm | by IBM | News | Comments

Building on client demand to integrate real-time analytics with consumer transactions, IBM has announced new capabilities for its System z mainframe. Integrating analytics with transactional data can give businesses real-time, actionable insight into commercial transactions as they occur, helping them capture new sales opportunities and minimize losses through fraud prevention.


From Diamonds to Supercomputers

September 29, 2014 3:37 pm | by Investigación y Desarrollo | News | Comments

A researcher proposes to construct a new quantum computer, based on the diamond structure and able to perform multiple operations in a few seconds. It would process information much as regular computers do, but with its own units of information, called qubits, allowing data processing as fast as one thousand computers working simultaneously.

“Scalability and performance means taking a careful look at the code modernization opportunities that exist for both message passing and threads as well as opportunities for vectorization and SIMDization.” – Rick Stevens, Argonne National Laboratory

Extending the Lifespan of Critical Resources through Code Modernization

September 9, 2014 2:05 pm | by Doug Black | Articles | Comments

As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.
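To make the discussion concrete, here is a minimal, hedged sketch (in C++ with OpenMP, not drawn from any of the applications discussed in this article) of the kind of small change code modernization often involves: annotating a hot loop so the compiler can spread its iterations across threads and vectorize each thread's share of the work.

// Hedged illustration of a typical code-modernization step: exposing
// thread- and SIMD-level parallelism in a hot loop via OpenMP.
// The kernel is generic; compile with, e.g., g++ -O2 -fopenmp modernize.cpp
#include <cstddef>
#include <vector>

// y[i] += a * x[i]
void scale_add(double a, const std::vector<double>& x, std::vector<double>& y)
{
    const std::size_t n = x.size();
    // "parallel for" distributes iterations across cores;
    // "simd" asks the compiler to vectorize each thread's chunk.
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

int main()
{
    std::vector<double> x(1 << 20, 1.0), y(1 << 20, 2.0);
    scale_add(3.0, x, y);
    return 0;
}

Left in its original scalar form, the same loop would use a single core and a single vector lane of a modern manycore processor, which is precisely the gap code modernization efforts target.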

Simulating magnetized plasma devices requires multiple particle interaction models and highly accurate, self-consistent particle trajectory modelling in combined magnetic and space-charge modified electric fields.

3D Space Charge Parallel Processing Module

August 27, 2014 3:03 pm | Cobham Technical Services | Product Releases | Comments

The 3D Space Charge module uses code that is optimized for the shared-memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally intensive analysis tasks can be greatly accelerated by the technique.
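As a rough, hypothetical illustration of what shared-memory acceleration of an iterative task of this kind can look like (written in C++ with OpenMP; it is not code from the Cobham module, and the field handling is deliberately simplified), a particle-push loop parallelizes naturally because each trajectory update is independent of the others.

// Hedged sketch: threading an iterative particle-trajectory update across
// the cores of a standard multi-core workstation with OpenMP.
// The uniform field and explicit Euler step are simplified placeholders.
#include <cstddef>
#include <vector>

struct Particle { double x, y, z, vx, vy, vz; };

// Advance every particle one time step in a uniform electric field,
// where qm is the charge-to-mass ratio. Particles are independent,
// so iterations distribute cleanly across threads.
void push(std::vector<Particle>& p, double dt, double qm,
          double Ex, double Ey, double Ez)
{
    #pragma omp parallel for
    for (std::size_t i = 0; i < p.size(); ++i) {
        p[i].vx += qm * Ex * dt;
        p[i].vy += qm * Ey * dt;
        p[i].vz += qm * Ez * dt;
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

int main()
{
    std::vector<Particle> plasma(100000, Particle{0, 0, 0, 0, 0, 0});
    for (int step = 0; step < 100; ++step)
        push(plasma, 1e-9, 1.0, 0.0, 0.0, 1.0e3);
    return 0;
}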

Cray CS-Storm High Density Cluster

August 26, 2014 3:11 pm | Cray Inc. | Product Releases | Comments

Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system delivers exceptional compute density in each cluster node.

Advanced techniques such as "structured placement," shown here and developed by Markov's group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors.

Reviewing Frontier Technologies to Determine Fundamental Limits of Computer Scaling

August 15, 2014 12:31 pm | by NSF | News | Comments

Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

NVIDIA Quadro K5200, K4200, K2200, K620 and K420 GPUs

August 12, 2014 3:59 pm | Nvidia Corporation | Product Releases | Comments

Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications from a Quadro-based workstation from essentially any device.

A brain-inspired chip to transform mobility and Internet of Things through sensory perception. Courtesy of IBM

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 11, 2014 12:13 pm | by IBM | News | Comments

Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW.

Optalysys is currently developing two products, a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both of which are expected to be launched in 2017.

Light-speed Computing: Prototype Optical Processor Set to Revolutionize Supercomputing

August 8, 2014 4:13 pm | by Optalysys | News | Comments

Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.

FirePro S9150 Server GPU for HPC

August 7, 2014 10:56 am | AMD | Product Releases | Comments

The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first to support enhanced double precision and to break the 2.0 TFLOPS double precision barrier.

AMD Opteron 64-Bit ARM-Based Developer Kits

July 30, 2014 12:45 pm | Advanced Micro Devices, Inc. | Product Releases | Comments

The AMD Opteron A1100-Series developer kit features AMD's first 64-bit ARM-based processor, codenamed "Seattle." The processor is available with 4 or 8 ARM Cortex-A57 cores; up to 4 MB of shared L2 and 8 MB of shared L3 cache; and configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866...

Fundamental Chemistry Findings Could Help Extend Moore’s Law

July 16, 2014 11:49 am | by Lawrence Berkeley National Laboratory | News | Comments

Over the years, computer chips have gotten smaller, thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore’s Law. But there’s one component of the chip-making process in need of an overhaul if Moore’s Law is to continue: the chemical mixture called photoresist.


Cray Awarded Contract to Install India's First Cray XC30 Supercomputer

July 16, 2014 3:33 am | by Cray | News | Comments

The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.

On the Trail of Paradigm-Shifting Methods for Solving Mathematical Models

July 15, 2014 10:11 am | by Hengguang Li | Blogs | Comments

How using CPU/GPU parallel computing is the next logical step: My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
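As a hedged illustration of the kind of structured-grid kernel such numerical work targets (a generic Jacobi relaxation sweep for a 2-D Poisson-type model problem, written here in C++ with OpenMP and not taken from the author's own codes), the same loop nest maps naturally onto multi-core CPUs and, with an offload directive, onto GPUs.

// Hedged sketch: one Jacobi relaxation sweep on an n-by-n grid for a
// Poisson-type model problem -laplacian(u) = f with grid spacing h.
// Illustrative only; not code from the research described above.
#include <cstddef>
#include <vector>

void jacobi_sweep(const std::vector<double>& u, std::vector<double>& unew,
                  const std::vector<double>& f, std::size_t n, double h)
{
    // For a GPU build, an equivalent offload form would use
    // "#pragma omp target teams distribute parallel for collapse(2)".
    #pragma omp parallel for collapse(2)
    for (std::size_t i = 1; i < n - 1; ++i)
        for (std::size_t j = 1; j < n - 1; ++j)
            unew[i * n + j] = 0.25 * (u[(i - 1) * n + j] + u[(i + 1) * n + j]
                                    + u[i * n + j - 1]   + u[i * n + j + 1]
                                    + h * h * f[i * n + j]);
}

int main()
{
    const std::size_t n = 512;
    const double h = 1.0 / (n - 1);
    std::vector<double> u(n * n, 0.0), unew(n * n, 0.0), f(n * n, 1.0);
    for (int sweep = 0; sweep < 100; ++sweep) {
        jacobi_sweep(u, unew, f, n, h);
        u.swap(unew);
    }
    return 0;
}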

IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems

July 9, 2014 4:58 pm | by IBM | News | Comments

IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

FirePro W8100 Professional Graphics Card

July 8, 2014 3:52 pm | AMD | Product Releases | Comments

The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.

Research Could Lead to Dramatic Data Farm Energy Savings

July 2, 2014 3:52 pm | by Tina Hilding, Washington State University | News | Comments

Washington State University has developed a wireless network on a computer chip that could reduce energy consumption at huge data farms by as much as 20 percent.                         

AppliedMicro Readies 64-bit ARM-based SoC for HPC

July 2, 2014 11:58 am | by AppliedMicro | News | Comments

AppliedMicro has announced the readiness of the X-Gene Server on a Chip, based on the 64-bit ARMv8-A architecture, for High Performance Computing (HPC) workloads.

Eurotech Combines Applied Micro 64-bit ARM CPUs and NVIDIA GPU Accelerators for HPC

July 2, 2014 8:07 am | Eurotech, Nvidia Corporation | News | Comments

Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a new, original high performance computing (HPC) system architecture that combines extreme density and best-in-class energy efficiency. The new architecture is based on an innovative highly modular and scalable packaging concept. 

Design lets chip manage local memory stores efficiently using an Internet-style communication network.

Chip with 36 Cores Unveiled

June 25, 2014 9:18 am | by Larry Hardesty, MIT | News | Comments

For years, Li-Shiuan Peh has argued that the massively multicore chips of the future will need to resemble little Internets, where each core has an associated router, and data travels between cores in packets of fixed size. This week, at the International Symposium on Computer Architecture, Peh’s group unveiled a 36-core chip that features just such a “network-on-chip.”

A new structure developed by UCLA researchers for more energy-efficient computer chips. The arrows indicate the effective magnetic field due to the structure's asymmetry. Courtesy of UCLA Engineering

Innovative Nanoscale Structure Could Yield Higher-performance Computer Memory

June 12, 2014 3:22 pm | by Matthew Chin, UCLA | News | Comments

Researchers at UCLA have created a nanoscale magnetic component for computer memory chips that could significantly improve their energy efficiency and scalability. The design brings a new and highly sought-after type of magnetic memory one step closer to being used in computers, mobile electronics such as smart phones and tablets, as well as large computing systems for big data.

High-resolution CESM simulation run on Yellowstone. This featured the CAM-5 spectral element at roughly 0.25-degree grid spacing, and POP2 on a nominal 0.1-degree grid.

Building Momentum for Code Modernization: The Intel Parallel Computing Centers

June 9, 2014 12:06 pm | by Doug Black | Articles | Comments

Like a Formula One race car stuck in a traffic jam, HPC hardware performance is frequently hampered by HPC software. This is because some of the most widely used application codes have not been updated for years, if ever, leaving them unable to leverage advances in parallel systems. As hardware power moves toward exascale, the imbalance between hardware and software will only get worse. The problem of updating essential scientific ...

Intel Issues RFP for Intel Parallel Computing Centers

Join the Journey to Accelerate Discovery through Increased Parallelism

May 28, 2014 11:20 am | by Intel Parallel Computing Centers | Blogs | Comments

Solving some of the biggest challenges in society, industry and sciences requires dramatic increases in computing efficiency. Many HPC customers are sitting on incredible untapped compute reserves and they don’t even know it. The very people who are focused on solving the world’s biggest problems with high-performance computing are often only using a small fraction of the compute capability their systems provide. Why? Their software ...

The new system will be named “Cori” in honor of American biochemist Gerty Theresa Cori, who was the first American woman to win a Nobel Prize in science, and the first woman to be awarded the Nobel Prize in Physiology or Medicine.

NERSC, Cray, Intel Partner on Next-gen Extreme-Scale Computing System

April 30, 2014 9:46 am | by NERSC | News | Comments

The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced that they have signed a contract for a next-generation supercomputer to enable scientific discovery at the DOE’s Office of Science (DOE SC).
