NVIDIA has announced that its Pascal GPU architecture, set to debut next year, will accelerate deep learning applications 10X beyond the speed of its current-generation Maxwell processors. NVIDIA CEO and co-founder Jen-Hsun Huang revealed details of Pascal and the company’s updated processor roadmap in front of a crowd of 4,000 during his keynote address at the GPU Technology Conference in Silicon Valley.
The Penguin Tundra cluster platform is based on Open Compute Project rack-level infrastructure,...
Ansys has announced that engineers using ANSYS 16.0 in combination with Intel Xeon technology...
The HPC and enterprise communities are experiencing a paradigm shift as FLOPs per watt, rather...
The Intel Xeon Processor D-1500 High Density Server Family is a new class of low-power, high-density server solutions optimized for embedded and hyperscale workloads in data center and cloud environments. The servers are available in a growing line of single-processor (UP) motherboards, 1U and mini-tower servers for embedded and network communication/security applications, and an upcoming high-density 6U, 56-node MicroBlade microserver for hyperscale environments.
A relentless global effort to shrink transistors has made computers continually faster, cheaper and smaller over the last 40 years. This effort has enabled chipmakers to double the number of transistors on a chip roughly every 18 months — a trend referred to as Moore's Law. In the process, the U.S. semiconductor industry has become one of the nation's largest export industries, valued at more than $65 billion a year.
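As a back-of-the-envelope illustration of that cadence (using the 18-month figure from the passage, with round numbers of our own), a doubling every 1.5 years compounds over 40 years to roughly a hundred-million-fold increase in transistor count:

```latex
N(t) = N_0 \cdot 2^{t/1.5}
\qquad\Longrightarrow\qquad
\frac{N(40)}{N_0} = 2^{40/1.5} \approx 2^{26.7} \approx 1.1 \times 10^{8}
```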
The drive toward exascale computing, renewed emphasis on data-centric processing, energy efficiency concerns, and limitations of memory and I/O performance are all working to reshape HPC platforms, according to Intersect360 Research’s Top Six Predictions for HPC in 2015. The report cites many-core accelerators, flash storage, 3-D memory, integrated networking, and optical interconnects as just some of the technologies propelling future...
Children don’t have to be told that “cat” and “cats” are variants of the same word — they pick it up just by listening. To a computer, though, they’re as different as, well, cats and dogs. Yet it’s computers that are assumed to be superior in detecting patterns and rules, not four-year-olds. Researchers are trying, if not to solve that puzzle definitively, at least to provide the tools to do so.
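As a toy illustration of why this is hard for machines (a naive sketch with a made-up word list, not the researchers' method), even the simplest rule a program can induce, pairing a word with its "-s" form, has to be discovered from the data rather than assumed:

```python
# Naive sketch: group plural variants with their stems by checking
# whether word + "s" also occurs in the vocabulary.
# The vocabulary here is a made-up example.
vocabulary = {"cat", "cats", "dog", "dogs", "bus", "run", "runs"}

def find_variant_pairs(vocab):
    """Pair each word with its '-s' variant when both appear in vocab."""
    return sorted((w, w + "s") for w in vocab if w + "s" in vocab)

print(find_variant_pairs(vocabulary))
# [('cat', 'cats'), ('dog', 'dogs'), ('run', 'runs')]
```

Note that "bus" is correctly left unpaired: a real system would still have to learn that its plural takes "-es", which is exactly the kind of rule children absorb without instruction.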
We computational chemists are an impatient lot. Despite the fact that we routinely deal with highly complicated chemical processes running on our laboratory’s equally complex HPC clusters, we want answers in minutes or hours, not days, months or even years. In many instances, that’s just not feasible; in fact, there are times when the magnitude of the problem simply exceeds the capabilities of the HPC resources available to us.
GPU-accelerated applications have become ubiquitous in scientific supercomputing. Now, we are seeing increased adoption of GPU technology in other computationally demanding disciplines, including deep learning, one of the fastest-growing areas in machine learning and data science.
Computer chips’ clocks have stopped getting faster. To keep delivering performance improvements, chipmakers are instead giving chips more processing units, or cores, which can execute computations in parallel. But the ways in which a chip carves up computations can make a big difference to performance.
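A small sketch of that point (illustrative, with synthetic task costs; not from the article): splitting uneven work into contiguous blocks can leave one core doing most of the computation, while dealing tasks out round-robin balances the load.

```python
# Illustrative sketch: tasks with uneven costs are split across 4 cores
# two ways; the slowest core sets the finish time for the parallel phase.

# Synthetic task costs: later tasks are much more expensive.
costs = [i * i for i in range(1, 101)]
CORES = 4

def contiguous(costs, cores):
    """Give each core one contiguous block of tasks."""
    n = len(costs)
    chunk = (n + cores - 1) // cores
    return [costs[i * chunk:(i + 1) * chunk] for i in range(cores)]

def round_robin(costs, cores):
    """Deal tasks out one at a time, like cards."""
    return [costs[i::cores] for i in range(cores)]

for name, split in [("contiguous", contiguous), ("round-robin", round_robin)]:
    loads = [sum(part) for part in split(costs, CORES)]
    # The busiest core determines when the whole computation completes.
    print(f"{name:12s} busiest core does {max(loads)} of {sum(loads)} work units")
```

With these costs, the contiguous split leaves the last core with more than half the total work, while round-robin dealing keeps all four cores within a few percent of each other.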
Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant — including potentially saving megawatts of power or reducing the time to solution.
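On a single node the idea looks like this minimal Python sketch using the standard-library cProfile (HPC-scale profilers aggregate the same kind of data across thousands of nodes); the workload here is made up:

```python
# Minimal single-node profiling sketch with a made-up hot spot.
import cProfile
import pstats

def hot_loop():
    return sum(i * i for i in range(1_000_000))

def cold_setup():
    return list(range(1000))

def run():
    cold_setup()
    for _ in range(5):
        hot_loop()

profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

# Rank functions by cumulative time to find where the run actually goes.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```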
Researchers have created the first transistors made of silicene, the world’s thinnest silicon material. Their research holds the promise of building dramatically faster, smaller and more efficient computer chips.
IBM and SUNY Polytechnic Institute (SUNY Poly) have announced that more than 220 engineers and scientists who lead IBM's advanced chip research and development efforts at SUNY Poly's Albany Nanotech campus will become part of IBM Research, the technology industry's largest and most influential research organization.
Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.
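A quick, hypothetical illustration of those trade-offs in Python: a membership test scans a list element by element but hits a hash-based set in roughly constant time.

```python
# Illustrative timing: list membership is O(n), set membership is O(1)
# on average. Sizes and probe value are arbitrary.
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

probe = n - 1  # worst case for the list: element at the far end
list_t = timeit.timeit(lambda: probe in as_list, number=100)
set_t = timeit.timeit(lambda: probe in as_set, number=100)

print(f"list lookup: {list_t:.4f}s  set lookup: {set_t:.6f}s")
```

The flip side, of course, is that the list preserves order and supports cheap appends, which is exactly the kind of trade-off the course catalogs.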
Researchers have developed a programming language that makes the massive costs associated with designing hardware more manageable. Chip manufacturers have been using the same chip design techniques for 20 years, and the current process calls for extensive testing after each design step. The newly developed functional programming language makes it possible to prove, in advance, that a design transformation is 100-percent error-free.
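The language in the article uses proofs; for a small combinational circuit the same question can be settled by brute force, as in this toy Python sketch (the two circuit functions are hypothetical examples, not from the research):

```python
# Toy equivalence check: for a small circuit, exhausting all inputs
# settles whether a rewrite preserves behavior. The circuits below are
# hypothetical: an AND-OR form and its factored rewrite.
from itertools import product

def original(a, b, c):
    return (a and b) or (a and c)

def transformed(a, b, c):
    return a and (b or c)  # factored form; should be equivalent

# Check every input combination; any mismatch means the rewrite is wrong.
equivalent = all(
    original(a, b, c) == transformed(a, b, c)
    for a, b, c in product([False, True], repeat=3)
)
print("transformation preserves behavior:", equivalent)  # True
```

Exhaustive checking stops scaling long before real chip sizes, which is why a language that can prove transformations correct, rather than test them, matters.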
In Silicon Valley, it's never too early to become an entrepreneur. Just ask 13-year-old Shubham Banerjee. The California eighth-grader has launched a company to develop low-cost machines to print Braille, the tactile writing system for the visually impaired. Tech giant Intel recently invested in his startup, Braigo Labs.
Rackform iServ R4420 and R4422 high-density servers are designed to deliver cost-effective, energy-efficient compute power in a small footprint. The 2U 4-node products provide high throughput and processing capabilities based on Supermicro TwinPro architecture.
Intel has announced a number of technology advancements and initiatives aimed at accelerating computing into the next dimension. The announcements include the Intel Curie module, a button-sized hardware product for wearable solutions; new applications for Intel RealSense cameras spanning robots, flying multi-copter drones and 3-D immersive experiences; and a broad, new Diversity in Technology initiative.
Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now a stark realization that the newest parallel supercomputers will not realize their full potential without reengineering the software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.
The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.
The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600v3 1U server to offer up to four GPUs/accelerators, delivering over 7.2 teraflops in a single 1U chassis with a performance-per-watt ratio of up to 4.17 gigaflops per watt.
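Taken at face value, those two figures imply a power draw of roughly 1.7 kW for a fully loaded chassis (a back-of-the-envelope check, not a vendor specification):

```latex
P = \frac{7.2\ \text{TFLOPS}}{4.17\ \text{GFLOPS/W}}
  = \frac{7200\ \text{GFLOPS}}{4.17\ \text{GFLOPS/W}}
  \approx 1727\ \text{W} \approx 1.7\ \text{kW}
```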
HPC has always embraced the leading edge of technology and, as such, acts as the trailbreaker and scout for enterprise and business customers. HPC has highlighted and matured the abilities of previously risky devices, like GPUs, that enterprise customers now leverage to create competitive advantage. GPUs have moved beyond “devices with potential” to “production devices” that are used for profit generation.
An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development - NWChem is an open source high performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Lab in Richland, WA. I recently visited with Karol Kowalski, Capability Lead for NWChem Development, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.
As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.
Cray announced it has been awarded a contract to provide the Met Office in the United Kingdom with multiple Cray XC supercomputers and Cray Sonexion storage systems. Consisting of three phases spanning multiple years, the $128 million contract expands Cray’s presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States.
IBM will pay $1.5 billion to Globalfoundries in order to shed its costly chip division. IBM Director of Research John E. Kelly III said in an interview on October 20, 2014, that handing over control of the semiconductor operations will allow IBM to grow faster while it continues to invest in and expand its chip research.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, provide tools and techniques for adapting legacy codes, and increase application performance on Intel Xeon processors.
Building on client demand to integrate real-time analytics with consumer transactions, IBM has announced new capabilities for its System z mainframe. Integrating analytics with transactional data can give businesses real-time, actionable insight into commercial transactions as they occur, helping them seize new sales opportunities and minimize losses through fraud prevention.