Processors

The Lead

Scientists are now closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain.

Researchers Take Vital Step toward Creating Bionic Brain

May 19, 2015 3:08 pm | by RMIT University | News | Comments

Researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step toward creating a bionic brain and unlocking treatments for Alzheimer’s and Parkinson’s diseases.

Recap: The Week's Top Stories — May 8-14

May 15, 2015 2:34 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | News | Comments

In case you missed it, here's another chance to catch this week's biggest hits. Writing like a...

AMD Announces “Zen” x86 Processor Core

May 7, 2015 12:11 pm | by AMD | News | Comments

AMD provided details of the company’s multi-year strategy to drive profitable growth based on...

NYU to Advance Deep Learning Research with Multi-GPU Cluster

May 5, 2015 11:37 am | by Kimberly Powell, NVIDIA | News | Comments

Self-driving cars. Computers that detect tumors. Real-time speech translation. Just a few years...

Cloud Security Reaches Silicon: Defending against Memory-access Attacks

April 23, 2015 1:53 pm | by Larry Hardesty, MIT | News | Comments

In the last 10 years, computer security researchers have shown that malicious hackers don’t need to see your data in order to steal your data. From the pattern in which your computer accesses its memory banks, adversaries can infer a shocking amount about what’s stored there.
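The attack is easy to demonstrate in miniature. Below is a toy Python sketch (illustrative only, not the MIT hardware scheme): even if every array element is encrypted, the sequence of addresses a binary search touches still reveals where the secret target lives.

# Toy sketch: access patterns leak secrets even when data is hidden.
# A binary search over an encrypted array still reveals, through the
# sequence of indices it probes, roughly where the target is stored.

def binary_search_trace(n, target_index):
    """Return the list of array indices probed while searching."""
    trace, lo, hi = [], 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        trace.append(mid)            # an observer sees this address
        if mid == target_index:
            break
        elif mid < target_index:
            lo = mid + 1
        else:
            hi = mid - 1
    return trace

# Two different secrets produce two distinguishable access patterns,
# so an adversary watching memory traffic can infer the secret.
print(binary_search_trace(1024, 3))    # [511, 255, 127, 63, 31, 15, 7, 3]
print(binary_search_trace(1024, 900))  # [511, 767, 895, 959, 927, 911, 903, 899, 901, 900]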

Argonne’s decision to utilize Intel’s HPC scalable system framework stems from the fact that it is designed to deliver a well-balanced and adaptable system capable of supporting both compute-intensive and data-intensive workloads.

Intel to Deliver Nation’s Most Powerful Supercomputer at Argonne

April 9, 2015 2:07 pm | by Intel | News | Comments

Intel has announced that the U.S. Department of Energy’s (DOE) Argonne Leadership Computing Facility (ALCF) has awarded Intel Federal LLC, a wholly-owned subsidiary of Intel Corporation, a contract to deliver two next-generation supercomputers to Argonne National Laboratory.

The current Pangea supercomputer is a 2.3 petaflop system based on the Intel Xeon E5-2670 v3 processor that consists of 110,592 cores and contains 442 terabytes of memory. It is built on SGI ICE X, one of the world's fastest commercial distributed-memory supercomputers.

Total Partners with SGI to Upgrade its Pangea Supercomputer

April 1, 2015 11:27 am | by SGI | News | Comments

Total has chosen SGI to upgrade its supercomputer Pangea. Total is one of the largest integrated oil and gas companies in the world, with activities in more than 130 countries. Its 100,000 employees put their expertise to work in every part of the industry — the exploration and production of oil and natural gas, refining, chemicals, marketing and new energies. This updated system would place in the top 10 of the latest TOP500 list.

Pascal will offer better performance than Maxwell on key deep-learning tasks.

NVIDIA’s Next-Gen Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps

March 18, 2015 12:24 pm | News | Comments

NVIDIA has announced that its Pascal GPU architecture, set to debut next year, will accelerate deep learning applications 10X beyond the speed of its current-generation Maxwell processors. NVIDIA CEO and co-founder Jen-Hsun Huang revealed details of Pascal and the company’s updated processor roadmap in front of a crowd of 4,000 during his keynote address at the GPU Technology Conference, in Silicon Valley.

Penguin Tundra Cluster Platform

March 13, 2015 9:18 am | Penguin Computing | Product Releases | Comments

The Penguin Tundra cluster platform is based on Open Compute Project rack-level infrastructure and is designed to deliver the highest density and lowest total cost of ownership for high-performance technical computing clusters. The product line includes a compute sled, a storage sled and an Intel Xeon Phi processor-based motherboard.

ANSYS 16.0's structural mechanics suite supports Xeon Phi with shared-memory and distributed-memory parallelism for both the Linux and Windows platforms.

ANSYS, Intel Collaborate to Spur Innovation

March 13, 2015 9:10 am | by ANSYS | News | Comments

ANSYS has announced that engineers using ANSYS 16.0 in combination with Intel Xeon technology can realize a threefold (3X) improvement in solution time. The ANSYS and Intel partnership ensures that simulation engineers performing structural analysis can expect seamless high-performance computing (HPC) operations with multi-core Xeon E5 v3 processors and many-core Xeon Phi coprocessors.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

Optimizing Application Energy Efficiency Using CPUs, GPUs and FPGAs

March 13, 2015 8:43 am | by Rob Farber | Articles | Comments

The HPC and enterprise communities are experiencing a paradigm shift as FLOPs per watt, rather than FLOPs (floating-point operations per second), are becoming the guiding metric in procurements, system design, and now application development. In short, “performance at any cost” is no longer viable, as the operational costs of supercomputer clusters are now on par with the acquisition cost of the hardware itself.
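The shift is straightforward to quantify. This Python sketch (with made-up numbers, not figures from the article) shows why FLOPs per watt rather than raw FLOPs decides a procurement once power bills rival acquisition costs.

# Sketch: rank systems by FLOPs/watt rather than raw FLOPs.
# All numbers are illustrative, not drawn from any real procurement.

systems = {
    "System A": {"gflops": 2_300_000, "watts": 4_500_000},  # faster, hungrier
    "System B": {"gflops": 1_800_000, "watts": 2_400_000},  # slower, leaner
}

for name, s in systems.items():
    efficiency = s["gflops"] / s["watts"]          # gigaflops per watt
    kwh_per_year = s["watts"] / 1000 * 24 * 365    # running flat out
    cost = kwh_per_year * 0.10                     # assume $0.10 per kWh
    print(f"{name}: {efficiency:.2f} GF/W, ~${cost:,.0f}/year in power")

# System B delivers fewer FLOPs but far more FLOPs per watt, saving
# roughly $1.8M per year in this toy comparison.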

Intel Xeon Processor D-1500 High Density Server Family

March 10, 2015 10:02 am | Super Micro Computer, Inc. | Product Releases | Comments

The Intel Xeon Processor D-1500 High Density Server Family is a new class of low-power, high-density server solutions optimized for embedded and hyperscale workloads in data center and cloud environments. The servers are available in a growing line of single-processor (UP) motherboards, 1U and Mini-Tower servers for embedded and network communication/security applications, and a coming high-density 6U 56-node MicroBlade microserver for hyperscale environments.

Visualizations of future nano-transistors, including the organization of the atoms in an Ultra Thin Body (UTB) transistor and the amount of electric potential along the transistor.

Designing the Building Blocks of Future Nano-computing Technologies

March 4, 2015 12:38 pm | by NSF | News | Comments

A relentless global effort to shrink transistors has made computers continually faster, cheaper and smaller over the last 40 years. This effort has enabled chipmakers to double the number of transistors on a chip roughly every 18 months — a trend referred to as Moore's Law. In the process, the U.S. semiconductor industry has become one of the nation's largest export industries, valued at more than $65 billion a year.
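The arithmetic behind that trend is simple compounding, as this short sketch shows (using the 18-month doubling period cited above):

# Moore's Law as compounding: transistor counts double every 18 months.

DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years):
    """How many times more transistors fit after the given span."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 40):
    print(f"{years} years -> ~{growth_factor(years):,.0f}x more transistors")
# e.g. 10 years -> ~102x; 40 years -> roughly a 100-million-fold increase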

According to Chief Research Officer Christopher Willard, Ph.D., “2015 will see increased architectural experimentation. Users will test both low-cost nodes and new technology strategies...”

Top 6 Predictions for High Performance Computing in 2015

March 2, 2015 12:41 pm | by Intersect360 Research | Blogs | Comments

The drive toward exascale computing, renewed emphasis on data-centric processing, energy efficiency concerns, and limitations of memory and I/O performance are all working to reshape HPC platforms, according to Intersect360 Research’s Top Six Predictions for HPC in 2015. The report cites many-core accelerators, flash storage, 3-D memory, integrated networking, and optical interconnects as just some of the technologies propelling future...

The University of Chicago’s Research Computing Center is helping linguists visualize the grammar of a given word in bodies of language containing millions or billions of words. Courtesy of Ricardo Aguilera/Research Computing Center

Billions of Words: Visualizing Natural Language

February 27, 2015 3:14 pm | by Benjamin Recchie, University of Chicago | News | Comments

Children don’t have to be told that “cat” and “cats” are variants of the same word — they pick it up just by listening. To a computer, though, they’re as different as, well, cats and dogs. Yet it’s computers that are assumed to be superior in detecting patterns and rules, not four-year-olds. Researchers are trying, if not to solve that puzzle definitively, then at least to provide the tools to do so.
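A toy version of the task (a deliberately naive sketch, not the Chicago group's actual pipeline) collapses surface variants to a shared stem before counting:

# Toy sketch: collapse variants like "cat"/"cats" to one stem before
# counting. Real systems use far more careful morphological analysis.

from collections import Counter

def crude_stem(word):
    """A deliberately naive suffix-stripping rule, for illustration."""
    w = word.lower()
    if w.endswith("s") and len(w) > 3:
        return w[:-1]
    return w

text = "Cats chase dogs but a cat naps while the dogs bark"
counts = Counter(crude_stem(w) for w in text.split())
print(counts.most_common(2))   # [('cat', 2), ('dog', 2)]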

NWChem molecular modeling software takes full advantage of a wide range of parallel computing systems, including Cascade. Courtesy of PNNL

PNNL Shifts Computational Chemistry into Overdrive

February 25, 2015 8:29 am | by Karol Kowalski, Ph.D., and Edoardo Apra, Ph.D. | Articles | Comments

We computational chemists are an impatient lot. Despite the fact that we routinely deal with highly complicated chemical processes running on our laboratory’s equally complex HPC clusters, we want answers in minutes or hours, not days, months or even years. In many instances, that’s just not feasible; in fact, there are times when the magnitude of the problem simply exceeds the capabilities of the HPC resources available to us.

Stephen Jones is Product Manager, Strategic Alliances at NVIDIA.

Powering a New Era of Deep Learning

February 20, 2015 12:42 pm | by Stephen Jones, NVIDIA | Blogs | Comments

GPU-accelerated applications have become ubiquitous in scientific supercomputing. Now, we are seeing increased adoption of GPU technology in other computationally demanding disciplines, including deep learning, one of the fastest growing areas in the machine learning and data science fields.

Daniel Sanchez, Nathan Beckmann and Po-An Tsai have found that the ways in which a chip carves up computations can make a big difference to performance. Courtesy of Bryce Vickmark

Making Smarter, Much Faster Multicore Chips

February 19, 2015 2:02 pm | by Larry Hardesty, MIT | News | Comments

Computer chips’ clocks have stopped getting faster. To keep delivering performance improvements, chipmakers are instead giving chips more processing units, or cores, which can execute computations in parallel. But the ways in which a chip carves up computations can make a big difference to performance.
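The underlying idea of carving work across cores can be sketched in a few lines (illustrative only; the MIT research concerns on-chip data placement in hardware, not this process-level Python library):

# Sketch: split a computation into chunks and farm them out to cores.
# (Illustrative only; the MIT work is about where data lands in the
# chip's caches, not about this process-level library.)

from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = range(10_000_000)
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # one chunk per core
    print(total)   # matches the serial answer, computed in parallel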

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

Using Profile Information for Optimization, Energy Savings and Procurements

February 9, 2015 12:11 pm | by Rob Farber | Articles | Comments

Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant — including potentially saving megawatts of power or reducing the time to solution.
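At single-node scale, the same profile-then-optimize loop looks like the minimal sketch below, using Python's built-in cProfile (the tooling the article has in mind operates across entire clusters):

# Single-node sketch of profile-guided optimization with cProfile.
# Cluster-scale HPC profilers apply the same idea across many nodes.

import cProfile
import pstats

def slow_square_sum(n):
    return sum([x * x for x in range(n)])   # builds a throwaway list

def fast_square_sum(n):
    return sum(x * x for x in range(n))     # generator: no extra memory

profiler = cProfile.Profile()
profiler.enable()
slow_square_sum(2_000_000)
fast_square_sum(2_000_000)
profiler.disable()

# The report shows where time (and therefore energy) actually goes.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)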

One-Atom-Thin Silicon Transistors become a Reality for Super-Fast Computing

February 3, 2015 3:44 pm | by University of Texas at Austin | News | Comments

Researchers have created the first transistors made of silicene, the world’s thinnest silicon material. Their research holds the promise of building dramatically faster, smaller and more efficient computer chips.

The IBM-SUNY Poly partnership expands beyond Albany, as SUNY Poly continues its explosive growth across New York.

IBM Research to Lead Advanced Computer Chip R&D at SUNY Poly

February 2, 2015 11:47 am | by IBM | News | Comments

IBM and SUNY Polytechnic Institute (SUNY Poly) have announced that more than 220 engineers and scientists who lead IBM's advanced chip research and development efforts at SUNY Poly's Albany Nanotech campus will become part of IBM Research, the technology industry's largest and most influential research organization.

In simulations, algorithms using the new data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores. Courtesy of Christine Daniloff/MIT

Parallelizing Common Algorithms: Priority Queue Implementation Keeps Pace with New Cores

January 30, 2015 3:49 pm | by Larry Hardesty, MIT News Office | News | Comments

Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.
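The structure at issue here is the priority queue. For reference, this is the sequential baseline, via Python's built-in heapq (the MIT contribution is making such a queue keep scaling as cores are added, which this sketch does not attempt):

# Sequential baseline: a binary-heap priority queue via heapq.
# The research question is how to let many cores push and pop
# concurrently without the root of the heap becoming a bottleneck.

import heapq

pq = []
for priority, task in [(3, "flush cache"), (1, "handle interrupt"), (2, "schedule I/O")]:
    heapq.heappush(pq, (priority, task))   # O(log n) insertion

while pq:
    priority, task = heapq.heappop(pq)     # O(log n) removal of the minimum
    print(priority, task)                  # 1, then 2, then 3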

In his doctoral thesis, Baaij describes the worldwide production of microchips through the years.

Massive Chip Design Savings on the Horizon

January 26, 2015 4:35 pm | by University of Twente | News | Comments

Researchers have developed a programming language that makes the massive costs associated with designing hardware more manageable. Chip manufacturers have been using the same chip design techniques for 20 years. The current process calls for extensive testing after each design step. The newly developed, functional programming language makes it possible to prove, in advance, that a design transformation is 100-percent error-free.
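A scaled-down analogue of the idea, using exhaustive checking in Python (far weaker than the formal proofs the Twente language enables), verifies that a rewritten circuit agrees with the original on every possible input:

# Scaled-down analogue: exhaustively check that a design
# transformation preserves behavior. A proof, as in the Twente
# language, establishes this without enumerating inputs at all.

from itertools import product

def original(a, b, c):
    return (a and b) or (a and c)

def transformed(a, b, c):
    return a and (b or c)        # factored form, claimed equivalent

assert all(
    original(a, b, c) == transformed(a, b, c)
    for a, b, c in product([False, True], repeat=3)
)
print("transformation preserves behavior on all 8 inputs")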

Shubham Banerjee works on his Lego robotics braille printer. Banerjee launched a company to develop a low-cost machine to print Braille materials for the blind, based on a prototype he built with his Lego robotics kit. Last month, Intel invested in his startup.

Eighth-grader Builds Braille Printer with Legos, Launches Company

January 21, 2015 1:02 pm | by Terence Chea, Associated Press | News | Comments

In Silicon Valley, it's never too early to become an entrepreneur. Just ask 13-year-old Shubham Banerjee. The California eighth-grader has launched a company to develop low-cost machines to print Braille, the tactile writing system for the visually impaired. Tech giant Intel recently invested in his startup, Braigo Labs.

Rackform iServ R4420 and R4422 High-density Servers

January 16, 2015 9:54 am | Silicon Mechanics | Product Releases | Comments

Rackform iServ R4420 and R4422 high-density servers are designed to deliver cost-effective, energy-efficient compute power in a small footprint. The 2U 4-node products provide high throughput and processing capabilities based on Supermicro TwinPro architecture.

Button-sized prototype of the Intel Curie module, a tiny hardware product based on the company’s first purpose-built system-on-chip (SoC) for wearable devices.

Intel’s CEO Outlines Future of Computing

January 7, 2015 3:54 pm | by Intel | News | Comments

Intel has announced a number of technology advancements and initiatives aimed at accelerating computing into the next dimension. The announcements include the Intel Curie module, a button-sized hardware product for wearable solutions; new applications for Intel RealSense cameras spanning robots, flying multi-copter drones and 3-D immersive experiences; and a broad, new Diversity in Technology initiative.

Artist’s impression of a proton depicting three interacting valence quarks inside. Courtesy of Jefferson Lab

HPC Community Experts Weigh in on Code Modernization

December 17, 2014 4:33 pm | by Doug Black | Articles | Comments

Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering the software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.

The Saudi Arabian computer SANAM, also developed in Frankfurt and Darmstadt, reached second place on the "Green500" list in 2012. Courtesy of GSI

Green500: German Supercomputer a World Champion in Saving Energy

November 26, 2014 10:51 am | by Goethe-Universität Frankfurt am Main | News | Comments

The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.

PowerEdge C4130 Server

November 24, 2014 2:56 pm | Dell Computer Corporation | Product Releases | Comments

The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600 v3 1U server to offer up to four GPUs/accelerators, and it can achieve over 7.2 teraflops on a single 1U server, with a performance/watt ratio of up to 4.17 gigaflops per watt.
