Processors

The Lead

NVIDIA’s Next-Gen Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps

March 18, 2015 12:24 pm | News | Comments

NVIDIA has announced that its Pascal GPU architecture, set to debut next year, will accelerate deep learning applications 10X beyond the speed of its current-generation Maxwell processors. NVIDIA CEO and co-founder Jen-Hsun Huang revealed details of Pascal and the company’s updated processor roadmap in front of a crowd of 4,000 during his keynote address at the GPU Technology Conference, in Silicon Valley.

Penguin Tundra Cluster Platform

March 13, 2015 9:18 am | Product Releases | Comments

The Penguin Tundra cluster platform is based on Open Compute Project rack-level infrastructure,...

ANSYS, Intel Collaborate to Spur Innovation

March 13, 2015 9:10 am | by ANSYS | News | Comments

ANSYS has announced that engineers using ANSYS 16.0 in combination with Intel Xeon technology...

Optimizing Application Energy Efficiency Using CPUs, GPUs and FPGAs

March 13, 2015 8:43 am | by Rob Farber | Articles | Comments

The HPC and enterprise communities are experiencing a paradigm shift as FLOPs per watt, rather...

Intel Xeon Processor D-1500 High Density Server Family

March 10, 2015 10:02 am | Super Micro Computer, Inc. | Product Releases | Comments

The Intel Xeon Processor D-1500 High Density Server Family is a new class of low-power, high-density server solutions optimized for embedded and hyperscale workloads in data center and cloud environments. The servers are available in a growing line of single-processor (UP) motherboards, 1U and Mini-Tower servers for embedded and network communication/security applications, and a coming high-density 6U 56-node MicroBlade microserver for hyperscale environments.

Visualizations of future nano-transistors, including the organization of the atoms in an Ultra Thin Body (UTB) transistor and the amount of electric potential along the transistor.

Designing the Building Blocks of Future Nano-computing Technologies

March 4, 2015 12:38 pm | by NSF | News | Comments

A relentless global effort to shrink transistors has made computers continually faster, cheaper and smaller over the last 40 years. This effort has enabled chipmakers to double the number of transistors on a chip roughly every 18 months — a trend referred to as Moore's Law. In the process, the U.S. semiconductor industry has become one of the nation's largest export industries, valued at more than $65 billion a year.
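
A rough sketch of the arithmetic behind that doubling claim, in Python (the figures are illustrative only): doubling every 18 months over 40 years compounds to roughly a hundred-million-fold increase in transistor count.

    # Rough illustration of the trend described above: doubling every
    # 18 months for 40 years is about 26-27 doublings.
    years = 40
    doubling_period_years = 1.5
    doublings = years / doubling_period_years      # ~26.7 doublings
    growth_factor = 2 ** doublings                 # ~1e8x more transistors
    print(f"{doublings:.1f} doublings -> roughly {growth_factor:.2e}x growth")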

According to Chief Research Officer Christopher Willard, Ph.D., “2015 will see increased architectural experimentation. Users will test both low-cost nodes and new technology strategies in an effort to find a balance between these options...”

Top 6 Predictions for High Performance Computing in 2015

March 2, 2015 12:41 pm | by Intersect360 Research | Blogs | Comments

The drive toward exascale computing, renewed emphasis on data-centric processing, energy efficiency concerns, and limitations of memory and I/O performance are all working to reshape HPC platforms, according to Intersect360 Research’s Top Six Predictions for HPC in 2015. The report cites many-core accelerators, flash storage, 3-D memory, integrated networking, and optical interconnects as just some of the technologies propelling future...

The University of Chicago’s Research Computing Center is helping linguists visualize the grammar of a given word in bodies of language containing millions or billions of words. Courtesy of Ricardo Aguilera/Research Computing Center

Billions of Words: Visualizing Natural Language

February 27, 2015 3:14 pm | by Benjamin Recchie, University of Chicago | News | Comments

Children don’t have to be told that “cat” and “cats” are variants of the same word — they pick it up just by listening. To a computer, though, they’re as different as, well, cats and dogs. Yet it’s computers that are assumed to be superior in detecting patterns and rules, not four-year-olds. Researchers are trying, if not to solve that puzzle definitively, then at least to provide the tools to do so.
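
To make the teaser’s point concrete, here is a toy Python sketch (not the Chicago team’s actual pipeline; the crude_stem rule is purely hypothetical): two word forms are unequal strings until some normalization step groups them.

    # Toy illustration: to a program, "cat" and "cats" are simply different
    # strings until we normalize them with some (here, very crude) rule.
    words = ["cat", "cats", "dog", "dogs", "walked", "walking"]

    def crude_stem(word):
        # Hypothetical rule-of-thumb stemmer: strip a few common suffixes.
        for suffix in ("ing", "ed", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    print("cat" == "cats")                    # False: distinct strings
    print({w: crude_stem(w) for w in words})  # variants share a stem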

NWChem molecular modeling software takes full advantage of a wide range of parallel computing systems, including Cascade. Courtesy of PNNL

PNNL Shifts Computational Chemistry into Overdrive

February 25, 2015 8:29 am | by Karol Kowalski, Ph.D., and Edoardo Apra, Ph.D. | Articles | Comments

We computational chemists are an impatient lot. Despite the fact that we routinely deal with highly complicated chemical processes running on our laboratory’s equally complex HPC clusters, we want answers in minutes or hours, not days, months or even years. In many instances, that’s just not feasible; in fact, there are times when the magnitude of the problem simply exceeds the capabilities of the HPC resources available to us.

Stephen Jones is Product Manager, Strategic Alliances at NVIDIA.

Powering a New Era of Deep Learning

February 20, 2015 12:42 pm | by Stephen Jones, NVIDIA | Blogs | Comments

GPU-accelerated applications have become ubiquitous in scientific supercomputing. Now, we are seeing increased adoption of GPU technology in other computationally demanding disciplines, including deep learning, one of the fastest growing areas in the machine learning and data science fields.

Daniel Sanchez, Nathan Beckmann and Po-An Tsai have found that the ways in which a chip carves up computations can make a big difference to performance. -- Courtesy of Bryce Vickmark

Making Smarter, Much Faster Multicore Chips

February 19, 2015 2:02 pm | by Larry Hardesty, MIT | News | Comments

Computer chips’ clocks have stopped getting faster. To keep delivering performance improvements, chipmakers are instead giving chips more processing units, or cores, which can execute computations in parallel. But the ways in which a chip carves up computations can make a big difference to performance.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

Using Profile Information for Optimization, Energy Savings and Procurements

February 9, 2015 12:11 pm | by Rob Farber | Articles | Comments

Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant — including potentially saving megawatts of power or reducing the time to solution.
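
As a single-node Python sketch of the idea (real HPC profilers aggregate this kind of data across thousands of nodes and ranks; the workload function below is just a stand-in):

    # Collect and rank timing data for a toy workload with cProfile.
    import cProfile
    import io
    import pstats

    def workload(n=200_000):
        # Placeholder compute kernel standing in for an application hot spot.
        return sum(i * i for i in range(n))

    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()

    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    print(stream.getvalue())   # top functions by cumulative time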

One-Atom-Thin Silicon Transistors become a Reality for Super-Fast Computing

February 3, 2015 3:44 pm | by University of Texas at Austin | News | Comments

Researchers have created the first transistors made of silicene, the world’s thinnest silicon material. Their research holds the promise of building dramatically faster, smaller and more efficient computer chips.           

The IBM-SUNY Poly partnership expands beyond Albany, as SUNY Poly continues its explosive growth across New York.

IBM Research to Lead Advanced Computer Chip R&D at SUNY Poly

February 2, 2015 11:47 am | by IBM | News | Comments

IBM and SUNY Polytechnic Institute (SUNY Poly) have announced that more than 220 engineers and scientists who lead IBM's advanced chip research and development efforts at SUNY Poly's Albany Nanotech campus will become part of IBM Research, the technology industry's largest and most influential research organization.

In simulations, algorithms using the new data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores. Courtesy of Christine Daniloff/MIT

Parallelizing Common Algorithms: Priority Queue Implementation Keeps Pace with New Cores

January 30, 2015 3:49 pm | by Larry Hardesty, MIT News Office | News | Comments

Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.
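
For readers who want the baseline this research builds on, a minimal Python sketch of a single-threaded priority queue using the standard heapq module:

    # Classic priority queue: pops always return the smallest key.
    import heapq

    tasks = []                                  # the heap lives in a plain list
    heapq.heappush(tasks, (2, "render frame"))
    heapq.heappush(tasks, (1, "handle input"))
    heapq.heappush(tasks, (3, "write log"))

    while tasks:
        priority, name = heapq.heappop(tasks)   # strictly ordered removal
        print(priority, name)

A concurrent version typically has to either lock around that strict “always remove the minimum” guarantee or relax it so that many cores can insert and remove items at once; the work described above targets exactly this scaling problem.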

In his doctoral thesis, Baaij describes the world-wide production of microchips through the years.

Massive Chip Design Savings on the Horizon

January 26, 2015 4:35 pm | by University of Twente | News | Comments

Researchers have developed a programming language that makes the massive costs associated with designing hardware more manageable. Chip manufacturers have been using the same chip design techniques for 20 years, and the current process calls for extensive testing after each design step. The newly developed functional programming language makes it possible to prove, in advance, that a design transformation is 100-percent error-free.
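
The thesis concerns formal, compiler-checked proofs; the Python toy below only conveys the underlying idea of a behavior-preserving transformation, by brute-force checking a 4-bit adder against a rewritten form over every possible input (the adder is a hypothetical example, not taken from the Twente work):

    WIDTH = 4
    MASK = (1 << WIDTH) - 1

    def adder_reference(a, b):
        # Original description of the circuit's behavior.
        return (a + b) & MASK

    def adder_transformed(a, b):
        # A rewritten form that is supposed to behave identically.
        return (b + a) & MASK

    # Exhaustive checking is feasible here only because the input space is tiny;
    # the point of the research is to prove this without enumeration.
    assert all(
        adder_reference(a, b) == adder_transformed(a, b)
        for a in range(1 << WIDTH)
        for b in range(1 << WIDTH)
    )
    print("transformation preserves behavior on all 4-bit inputs")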

Shubham Banerjee works on his Lego robotics Braille printer. Banerjee launched a company to develop a low-cost machine to print Braille materials for the blind, based on a prototype he built with his Lego robotics kit. Last month, Intel invested in his startup.

Eighth-grader Builds Braille Printer with Legos, Launches Company

January 21, 2015 1:02 pm | by Terence Chea, Associated Press | News | Comments

In Silicon Valley, it's never too early to become an entrepreneur. Just ask 13-year-old Shubham Banerjee. The California eighth-grader has launched a company to develop low-cost machines to print Braille, the tactile writing system for the visually impaired. Tech giant Intel recently invested in his startup, Braigo Labs.

Rackform iServ R4420 and R4422 High-density Servers

January 16, 2015 9:54 am | Silicon Mechanics | Product Releases | Comments

Rackform iServ R4420 and R4422 high-density servers are designed to deliver cost-effective, energy-efficient compute power in a small footprint. The 2U 4-node products provide high throughput and processing capabilities based on Supermicro TwinPro architecture.

Button-sized prototype of the Intel Curie module, a tiny hardware product based on the company’s first purpose-built system-on-chip (SoC) for wearable devices.

Intel’s CEO Outlines Future of Computing

January 7, 2015 3:54 pm | by Intel | News | Comments

Intel has announced a number of technology advancements and initiatives aimed at accelerating computing into the next dimension. The announcements include the Intel Curie module, a button-sized hardware product for wearable solutions; new applications for Intel RealSense cameras spanning robots, flying multi-copter drones and 3-D immersive experiences; and a broad, new Diversity in Technology initiative.

Artist’s impression of a proton depicting three interacting valence quarks inside. Courtesy of Jefferson Lab

HPC Community Experts Weigh in on Code Modernization

December 17, 2014 4:33 pm | by Doug Black | Articles | Comments

Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering the software code to efficiently divide computational problems among the thousands of processors that comprise next-generation many-core computing platforms.
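
A toy, single-node Python illustration of the “divide the problem among processors” idea (real modernization efforts target MPI, OpenMP and many-core offload rather than this hypothetical example):

    # Split a reduction across worker processes and combine the results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 1_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # one chunk per core
        print(total == sum(i * i for i in range(n)))     # True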

The Saudi Arabian computer SANAM, also developed in Frankfurt and Darmstadt, reached second place on the "Green500" list in 2012. Courtesy of GSI

Green500: German Supercomputer a World Champion in Saving Energy

November 26, 2014 10:51 am | by Goethe-Universität Frankfurt am Main | News | Comments

The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.
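
For scale, a back-of-the-envelope Python calculation using the efficiency figure quoted above (the 1-petaflops target is just an illustrative number):

    # How much power would a 5.27 gigaflops-per-watt machine draw at 1 PFLOPS?
    gflops_per_watt = 5.27
    target_gflops = 1_000_000                       # 1 PFLOPS = 1,000,000 GFLOPS
    watts = target_gflops / gflops_per_watt
    print(f"~{watts / 1000:.0f} kW for 1 PFLOPS")   # roughly 190 kW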

PowerEdge C4130 Server

November 24, 2014 2:56 pm | Dell Computer Corporation | Product Releases | Comments

The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600v3 1U server to offer up to four GPUs/accelerators and can achieve over 7.2 Teraflops on a single 1U server, with a performance/watt ratio of up to 4.17 Gigaflops per watt.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

Today’s Enterprising GPUs

November 20, 2014 2:09 pm | by Rob Farber | Articles | Comments

HPC has always embraced the leading edge of technology and, as such, acts as the trailblazer and scout for enterprise and business customers. HPC has highlighted and matured the abilities of previously risky devices, like GPUs, that enterprise customers now leverage to create competitive advantage. GPUs have moved beyond “devices with potential” to “production devices” that are used for profit generation.

Karol Kowalski, Capability Lead for NWChem Development, works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.

Advancing Computational Chemistry with NWChem

November 18, 2014 3:07 pm | by Mike Bernhardt, HPC Community Evangelist, Intel | Articles | Comments

An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development. NWChem is an open source, high-performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Laboratory in Richland, WA. I recently visited with Kowalski, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.

Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

Single Elimination Tournament Raises Awareness of Parallelization’s Importance

October 30, 2014 1:03 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.

The Met Office uses more than 10 million weather observations a day and an advanced atmospheric model to create 3,000 tailored forecasts and briefings each day that are delivered to customers ranging from government and businesses to the general public and the armed forces.

UK National Weather Service Awards $128M Supercomputer Contract

October 29, 2014 11:43 am | by Cray | News | Comments

Cray announced it has been awarded a contract to provide the Met Office in the United Kingdom with multiple Cray XC supercomputers and Cray Sonexion storage systems. Consisting of three phases spanning multiple years, the $128 million contract expands Cray’s presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States.

An IBM logo displayed in Berlin, VT. IBM is paying $1.5 billion to Globalfoundries in order to shed its costly chip division. (AP Photo/Toby Talbot)

IBM to Pay $1.5B to Shed Costly Chip Division

October 20, 2014 10:54 am | by Michelle Chapman, AP Business Writer | News | Comments

IBM will pay $1.5 billion to Globalfoundries in order to shed its costly chip division. IBM Director of Research John E. Kelly III said in an interview on October 20, 2014, that handing over control of the semiconductor operations will allow that business to grow faster, while IBM continues to invest in and expand its chip research.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

High Performance Parallelism Pearls: A Teaching Juggernaut

October 13, 2014 9:52 am | by Rob Farber | Blogs | Comments

High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, plus provide tools and techniques to adapt legacy codes, as well as increase application performance on Intel Xeon processors. 

IBM has announced new capabilities for its System z mainframe.

IBM Delivers New Analytics Offerings for the Mainframe to Provide Real-Time Customer Insights

October 7, 2014 2:09 pm | by IBM | News | Comments

Building on client demand to integrate real-time analytics with consumer transactions, IBM has announced new capabilities for its System z mainframe. The integration of analytics with transactional data can provide businesses with real-time, actionable insights on commercial transactions as they occur to take advantage of new opportunities to increase sales and help minimize loss through fraud prevention.
