The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.
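As a quick illustration of the metric (all numbers below other than the published 5.27 gigaflops per watt are hypothetical), Green500 efficiency is simply sustained Linpack performance divided by average power draw:

```python
def gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    """Green500-style efficiency: sustained Linpack GFLOPS per watt."""
    return rmax_gflops / power_watts

# Illustrative only: a hypothetical system sustaining 316,200 GFLOPS
# at 60 kW would score 5.27 GFLOPS/W, matching L-CSC's published figure.
print(gflops_per_watt(316_200, 60_000))  # 5.27
```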
The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-...
HPC has always embraced the leading edge of technology and, as such, acts as a trailblazer...
As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.
Cray announced it has been awarded a contract to provide the Met Office in the United Kingdom with multiple Cray XC supercomputers and Cray Sonexion storage systems. Consisting of three phases spanning multiple years, the $128 million contract expands Cray’s presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States.
IBM will pay $1.5 billion to GlobalFoundries in order to shed its costly chip division. IBM Director of Research John E. Kelly III said in an interview on October 20, 2014, that handing over control of the semiconductor operations will allow IBM to grow faster, while it continues to invest in and expand its chip research.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, to provide tools and techniques for adapting legacy codes, and to increase application performance on Intel Xeon processors.
Building on client demand to integrate real-time analytics with consumer transactions, IBM has announced new capabilities for its System z mainframe. The integration of analytics with transactional data can provide businesses with real-time, actionable insights on commercial transactions as they occur to take advantage of new opportunities to increase sales and help minimize loss through fraud prevention.
A researcher proposes to construct a new quantum computer, able to perform multiple operations in a few seconds, based on a diamond structure. It would process information much as regular computers do, but with its own units of information, called qubits, which allow far faster data processing, comparable to one thousand conventional computers working simultaneously.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications, both public domain and proprietary commercial, are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from the development of an Exascale-level architecture itself, HPC code modernization is the most important challenge facing the HPC community.
The 3D Space Charge module uses code that is optimized for the shared memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally-intensive analysis tasks can be greatly accelerated by the technique.
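A minimal sketch of the shared-memory pattern described here, not the module's actual code: the analyze function below is a hypothetical stand-in for one independent, computationally intensive task, and the standard library spreads the tasks across the available cores.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def analyze(step: int) -> float:
    """Hypothetical stand-in for one independent, compute-heavy analysis task."""
    return sum(i * i for i in range(step * 10_000)) ** 0.5

if __name__ == "__main__":
    steps = range(1, 65)
    # Each worker runs on its own core; tasks must be independent to scale.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(analyze, steps))
    print(f"{len(results)} tasks completed on {os.cpu_count()} cores")
```

As the blurb notes, the speedup depends on model complexity: only workloads dominated by such independent, iterative computation approach linear scaling with core count.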
Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm offers some of the most powerful individual nodes of any cluster system.
Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications from a Quadro-based workstation from essentially any device.
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built; yet, running at biological real time, it consumes a minuscule 70 mW.
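For context, the two published figures together imply the chip's absolute throughput; the arithmetic is a simple product:

```python
sops_per_watt = 46e9   # published: synaptic operations per second per watt
power_w = 0.070        # published: 70 mW, expressed in watts

# Throughput implied by the efficiency and power figures together.
print(f"{sops_per_watt * power_w:.2e} synaptic ops/s")  # ~3.22e9
```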
Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.
The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first server card to support enhanced double precision and the first to break the 2.0 TFLOPS double-precision barrier.
The AMD Opteron A1100-Series developer kit features AMD's first 64-bit ARM-based processor, codenamed "Seattle." The processor is available in 4- and 8-core ARM Cortex-A57 configurations; up to 4 MB of shared L2 and 8 MB of shared L3 cache; and configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866...
Over the years, computer chips have gotten smaller, thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore's Law. But there's one component of the chip-making process in need of an overhaul if Moore's Law is to continue: the chemical mixture called photoresist.
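Moore's Law can be written compactly as N(t) = N0 * 2^(t/2) for t in years with a two-year doubling period; a minimal sketch with an arbitrary starting count:

```python
def transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Arbitrary example: a 1-billion-transistor chip projected 10 years out.
print(f"{transistors(1e9, 10):.2e}")  # ~3.2e10, i.e. ~32x growth
```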
The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.
How using CPU/GPU parallel computing is the next logical step - My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
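As a generic example of the kind of kernel that benefits from CPU/GPU parallelism (not the author's actual method), an explicit finite-difference update for a diffusion-type equation is data-parallel across grid points. The NumPy version below runs vectorized on the CPU, and the same array expressions can execute on a GPU by swapping in a compatible array library:

```python
import numpy as np

def diffusion_step(u: np.ndarray, alpha: float, dt: float, dx: float) -> np.ndarray:
    """One explicit finite-difference step of u_t = alpha * (u_xx + u_yy).

    Every interior grid point updates independently, which is exactly
    the data parallelism a GPU exploits.
    """
    un = u.copy()
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * dt / dx**2 * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un

u = np.zeros((256, 256))
u[120:136, 120:136] = 1.0  # initial hot square in the middle of the grid
for _ in range(100):
    u = diffusion_step(u, alpha=0.1, dt=0.1, dx=1.0)
print(f"peak temperature after 100 steps: {u.max():.4f}")
```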
IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems (July 9, 2014)
IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.
The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.
Washington State University has developed a wireless network on a computer chip that could reduce energy consumption at huge data farms by as much as 20 percent.
AppliedMicro has announced the readiness of the X-Gene Server on a Chip based on the 64-bit ARMv8-A architecture for High Performance Computing (HPC) workloads.
Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a new, original high performance computing (HPC) system architecture that combines extreme density and best-in-class energy efficiency. The new architecture is based on an innovative highly modular and scalable packaging concept.
For years, Li-Shiuan Peh has argued that the massively multicore chips of the future will need to resemble little Internets, where each core has an associated router, and data travels between cores in packets of fixed size. This week, at the International Symposium on Computer Architecture, Peh’s group unveiled a 36-core chip that features just such a “network-on-chip.”
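Networks-on-chip like the one described commonly use deterministic dimension-ordered ("XY") routing on a 2D mesh. The sketch below is a simplified model, not Peh's actual design: it computes the hop path a fixed-size packet would take between two cores on a 6x6 mesh (36 cores):

```python
def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Dimension-ordered routing on a 2D mesh: move along x first, then y.

    Returns the sequence of routers a fixed-size packet traverses.
    """
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # travel horizontally first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then vertically
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# Route a packet across a 6x6 mesh, e.g. from core (0, 0) to core (5, 3).
hops = xy_route((0, 0), (5, 3))
print(f"{len(hops) - 1} hops: {hops}")
```

Because the route depends only on source and destination coordinates, each router needs no global state, which is one reason per-core routers scale to large core counts.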