A researcher proposes to construct a new quantum computer based on the structure of diamond. It would process information similarly to conventional computers, but with its own units of information, called qubits, which allow much faster data processing: performing in a few seconds work equal to that of one thousand computers operating simultaneously.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem...
The 3D Space Charge module uses code that is optimized for the shared memory architecture of...
Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications running on a Quadro-based workstation from essentially any device.
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70 mW.
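A quick back-of-the-envelope check, using only the figures quoted above, shows what that efficiency means in absolute terms:

```python
# Illustrative arithmetic from the announcement's own figures.
sops_per_watt = 46e9   # synaptic operations per second per watt
power_watts = 0.070    # 70 mW while running at biological real time

total_sops = sops_per_watt * power_watts
print(f"~{total_sops:.2e} synaptic operations per second")  # ~3.22e+09
```

At 70 mW, the quoted efficiency works out to roughly 3.2 billion synaptic operations per second for the chip as a whole.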
Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.
The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first to support enhanced double precision and to break the 2.0 TFLOPS double precision barrier.
The AMD Opteron A1100-Series developer kit features AMD's first 64-bit ARM-based processor, codenamed "Seattle." The processor comes in 4- and 8-core ARM Cortex-A57 configurations; up to 4 MB of shared L2 and 8 MB of shared L3 cache; and configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866...
Over the years, computer chips have gotten smaller, thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore’s Law. But there’s one component of the chip-making process in need of an overhaul if Moore’s law is to continue: the chemical mixture called photoresist.
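As a quick illustration of the doubling rule described above (the figures here are hypothetical, chosen only to show the arithmetic):

```python
# Moore's Law as a doubling rule: transistor count doubles roughly
# every two years. Illustrative only; inputs are hypothetical.
def transistors(initial_count, years, doubling_period=2.0):
    """Project transistor count forward under the doubling rule."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from 1 billion transistors, ten years of doubling every
# two years yields roughly 32 billion (2**5 = 32x growth).
print(f"{transistors(1e9, 10):.2e}")  # 3.20e+10
```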
The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.
How using CPU/GPU parallel computing is the next logical step - My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
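As a minimal sketch of the kind of kernel involved (not the author's actual methods), consider one explicit finite-difference time step for the 1D heat equation; the same vectorized array expression that runs on the CPU with NumPy ports directly to GPU array libraries such as CuPy:

```python
import numpy as np

# One explicit finite-difference step for u_t = alpha * u_xx.
# Interior points are updated in a single vectorized expression;
# boundary values are held fixed (Dirichlet conditions).
def heat_step(u, alpha, dx, dt):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new

# Example: diffuse an initial spike on a 1,000-point grid.
# dt respects the explicit stability limit dt <= dx**2 / (2 * alpha).
u = np.zeros(1000)
u[500] = 1.0
for _ in range(100):
    u = heat_step(u, alpha=1.0, dx=0.01, dt=0.00004)
```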
IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems
IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.
The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.
Washington State University has developed a wireless network on a computer chip that could reduce energy consumption at huge data farms by as much as 20 percent.
AppliedMicro has announced the readiness of the X-Gene Server on a Chip, based on the 64-bit ARMv8-A architecture, for High Performance Computing (HPC) workloads.
Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a new, original high performance computing (HPC) system architecture that combines extreme density and best-in-class energy efficiency. The new architecture is based on an innovative highly modular and scalable packaging concept.
For years, Li-Shiuan Peh has argued that the massively multicore chips of the future will need to resemble little Internets, where each core has an associated router, and data travels between cores in packets of fixed size. This week, at the International Symposium on Computer Architecture, Peh’s group unveiled a 36-core chip that features just such a “network-on-chip.”
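To illustrate the network-on-chip idea in the abstract (this sketch does not reflect the actual chip's router design), a common routing scheme for mesh networks is dimension-ordered "XY" routing: a fixed-size packet first travels along the X axis to the destination column, then along the Y axis to the destination row, one router hop at a time:

```python
# Illustrative sketch of XY (dimension-ordered) routing on a mesh
# network-on-chip. Not the design of the chip described above.
def xy_route(src, dst):
    """Return the sequence of (x, y) router coordinates a packet visits."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:           # route along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:           # then along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# Example on a 6x6 mesh: core (0, 0) to core (5, 3) takes 5 + 3 = 8 hops.
print(xy_route((0, 0), (5, 3)))
```

XY routing is popular in on-chip networks because it is deadlock-free on a mesh and requires only trivial per-router logic.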
Researchers at UCLA have created a nanoscale magnetic component for computer memory chips that could significantly improve their energy efficiency and scalability. The design brings a new and highly sought-after type of magnetic memory one step closer to being used in computers, mobile electronics such as smart phones and tablets, as well as large computing systems for big data.
Like a Formula One race car stuck in a traffic jam, HPC hardware performance is frequently hampered by HPC software. This is because some of the most widely used application codes have not been updated for years, if ever, leaving them unable to leverage advances in parallel systems. As hardware power moves toward exascale, the imbalance between hardware and software will only get worse. The problem of updating essential scientific ...
Solving some of the biggest challenges in society, industry and sciences requires dramatic increases in computing efficiency. Many HPC customers are sitting on incredible untapped compute reserves and they don’t even know it. The very people who are focused on solving the world’s biggest problems with high-performance computing are often only using a small fraction of the compute capability their systems provide. Why? Their software ...
The U.S. Department of Energy's (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced that they have signed a contract for a next-generation supercomputer to enable scientific discovery at the DOE's Office of Science (DOE SC).
IBM has debuted new Power Systems servers that allow data centers to manage staggering data requirements with unprecedented speed, all built on an open server platform. In a move that contrasts sharply with other chip and server manufacturers' proprietary business models, IBM released detailed technical specifications for its POWER8 processor, inviting collaborators and competitors alike to innovate on the processor and server platform.
As modern computer systems become more powerful, utilizing as many as millions of processor cores in parallel, Intel is looking for new ways to efficiently use these high performance computing (HPC) systems to accelerate scientific discovery. As part of this effort, Intel has selected Georgia Tech as the site of one of its Parallel Computing Centers.
Matthew Tolentino is a Research Scientist at Intel and an Affiliate Assistant Professor at the University of Washington.
Thomas Wild's research interests include many-core system-on-chip (SoC) architectures, network processor (NPU) architectures, on-chip communication architectures and networks-on-chip (NoC), system-level design methodologies, and design space exploration.
The AMD FirePro W9100 professional graphics card is designed for next-generation 4K workstations accelerated by OpenCL (Open Computing Language). With up to 2.62 TFLOPS of double-precision GPU compute power and ultra-high resolution multi-display capabilities, video, design and engineering professionals can utilize 16 GB of ultra-fast GDDR5 memory and multi-task across up to six 4K displays.