Supercomputers

The Lead

This image shows the total mass density (left) and temperature (right) from a dimethyl ether jet fuel simulation; the snapshot corresponds to a physical time of 0.00006 seconds. Courtesy of Matthew Emmett, Weiqun Zhang

Optimized Algorithms Give Combustion Simulations a Boost

December 18, 2014 4:32 pm | by Lawrence Berkeley National Laboratory | News | Comments

Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost. Researchers developed new algorithmic features that streamline turbulent flame simulations, which play an important role in designing more efficient combustion systems. They tested the enhanced code on the Hopper supercomputer and achieved a dramatic decrease in simulation times.

Global High-resolution Models Fuel New Golden Age of Climate Science

December 18, 2014 4:14 pm | by Lawrence Berkeley National Laboratory | News | Comments

Not long ago, it would have taken several years to run a high-resolution simulation on a global...

Big Data Analysis Reveals Shared Genetic Code between Species

December 18, 2014 11:32 am | by Mike Williams, Rice University | News | Comments

Researchers have detected at least three instances of cross-species mating that likely...

ISC Cloud & Big Data Conferences to Merge in 2015

December 16, 2014 12:11 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

ISC has announced the ISC Cloud & Big Data conference, which has merged into a three-day...


For the family of bee-eaters (pictured: Merops bullocki), the study revealed a close relationship to oscine birds, parrots, and birds of prey. Courtesy of Peter Houde

Bird Tree of Life Reproduced using Gene Analysis, Supercomputing

December 15, 2014 1:57 pm | by Karlsruhe Institute of Technology | News | Comments

About 95 percent of the more than 10,000 known bird species evolved only after the extinction of the dinosaurs about 66 million years ago. According to computer analyses of the genetic data, today's diversity developed from a few species at a virtually explosive rate over the following 15 million years. Scientists designed the algorithms for the comprehensive analysis of bird evolution, a computation that required 300 processor-years.

Duke researchers led by associate professor of neurobiology Erich Jarvis, left, did most of the DNA extraction from bird tissue samples used in the Avian Phylogenomics Consortium. (l-r: Carole Parent, Nisarg Dabhi, Jason Howard). Courtesy of Les Todd, Duke University

Mapping the "Big Bang" of Bird Evolution

December 12, 2014 6:04 pm | by Kelly Rae Chi, Duke University | News | Comments

The genomes of modern birds tell a story of how they emerged and evolved after the mass extinction that wiped out dinosaurs and almost everything else 66 million years ago. That story is now coming to light, thanks to an ambitious international collaboration that has been underway for four years. The first findings of the Avian Phylogenomics Consortium are being reported nearly simultaneously in 28 papers.

ISC High Performance is the only yearly international HPC forum that introduces over 300 hand-picked speakers to its attendees.

ISC High Performance Program to Offer Greater Diversity

December 12, 2014 4:32 pm | by ISC | News | Comments

Celebrating its 30th conference anniversary, ISC High Performance has announced that the 2015 program’s technical content “will be strikingly broad in subject matter, differentiated and timely.” Over 2,600 attendees will gather in Frankfurt, from July 12 to 16, to discuss their organizational needs and the industry’s challenges, as well as learn about the latest research, products and solutions.

Results of large-scale simulations showing that the Alnico alloy separates into FeCo-rich and NiAl-rich phases at low temperatures and forms a homogeneous phase at high temperatures.

Solving the Shaky Future of Super-strong Rare Earth Magnets

December 11, 2014 4:15 pm | by Katie Elyce Jones, Oak Ridge National Laboratory | News | Comments

The US Department of Energy is mining for solutions to the rare earth problem — but with high-performance computing instead of bulldozers. Researchers are using the hybrid CPU-GPU, 27-petaflop Titan supercomputer managed by the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory to discover alternative materials that can substitute for rare earths.

A black hole as depicted in the movie Interstellar -- Courtesy of Paramount Pictures

A Supermassive Black Hole Comes to the Big Screen

December 11, 2014 3:34 pm | by University of Arizona | News | Comments

What does a black hole look like up close? As the sci-fi movie Interstellar wows audiences with its computer-generated views of one of the most enigmatic and fascinating phenomena in the universe, University of Arizona (UA) astrophysicists Chi-kwan Chan, Dimitrios Psaltis and Feryal Ozel are likely nodding appreciatively and saying something like, "Meh, that looks nice, but check out what we've got."

Galactic gas from the Feedback in Realistic Environments (FIRE) simulation. Represented here is a Milky Way mass halo, with colors denoting different densities.

Interstellar Mystery Solved by Supercomputer Simulations

December 10, 2014 4:25 pm | by Jorge Salazar, Texas Advanced Computing Center | News | Comments

An interstellar mystery of why stars form has been solved thanks to the most realistic supercomputer simulations of galaxies yet made. Theoretical astrophysicist Philip Hopkins led research that found that stellar activity — like supernova explosions or even just starlight — plays a big part in the formation of other stars and the growth of galaxies.

The HPCAC-ISC Student Cluster Competition is an opportunity to showcase the world’s brightest computer science students’ expertise in a friendly, yet spirited competition.

University Teams for HPCAC-ISC 2015 Student Cluster Competition Announced

December 10, 2014 3:45 pm | by ISC | News | Comments

In a real-time challenge, the 11 teams of undergraduate students will build a small cluster of their own design on the ISC 2015 exhibit floor and race to demonstrate the greatest performance across a series of benchmarks and applications. It all concludes with a ceremony on the main conference keynote stage to award and recognize all student participants in front of thousands of HPC luminaries.

While most genome centers focus on research, the CPGM develops new clinical tests as a starting point for next‐generation medical treatments to improve outcomes in patients.

Applying HPC to Improve Business ROI

December 9, 2014 3:31 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

At the SC14 conference, which took place recently in New Orleans, IDC’s HPC Innovation Excellence Award Program continued to showcase benefits of investment in high performance computing (HPC). Initiated in 2011 to recognize innovative achievements using HPC, the program is designed to provide a means to evaluate the economic and scientific value HPC systems contribute.

The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), has embarked on a multi-year research effort to develop a superconducting computer.

IARPA to Develop a Superconducting SuperComputer

December 5, 2014 4:24 pm | by IARPA | News | Comments

The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), has embarked on a multi-year research effort to develop a superconducting computer. If successful, technology developed under the Cryogenic Computer Complexity (C3) program will pave the way to a new generation of superconducting supercomputers that are far more energy efficient.

Leaders in science, engineering, government, and industry will address fast-moving opportunities and challenges in the field of “big data” at the Virginia Summit on Science, Engineering, and Medicine.

Big Data Challenges at Virginia Academy Summit

December 4, 2014 5:25 pm | by Virginia Tech | News | Comments

Leaders in science, engineering, government, and industry will address fast-moving opportunities and challenges in the field of “big data” at the Virginia Summit on Science, Engineering, and Medicine.              

A team of researchers from Argonne National Laboratory and DataDirect Networks (DDN) moved 65 terabytes of data in just under 100 minutes at a recent supercomputing conference.

Argonne Researchers Demonstrate Extraordinary Throughput at SC14

December 3, 2014 3:34 pm | by Computation Institute | News | Comments

A team of researchers from Argonne National Laboratory and DataDirect Networks (DDN) moved 65 terabytes of data in just under 100 minutes at a recent supercomputing conference.
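As a back-of-the-envelope check (our arithmetic, not a figure from the original report), the quoted numbers imply a sustained rate in the tens of gigabits per second:

```python
# Sustained throughput implied by the Argonne/DDN demo:
# 65 terabytes moved in (at most) 100 minutes.
terabytes = 65
minutes = 100

bits = terabytes * 1e12 * 8      # decimal terabytes -> bits
seconds = minutes * 60
gbps = bits / seconds / 1e9      # gigabits per second

print(f"{gbps:.1f} Gb/s")        # about 86.7 Gb/s sustained
```

That works out to roughly 87 gigabits per second averaged over the whole run, on the order of what a 100-gigabit network path can carry.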

Most recently released at the SC14 conference, this “much anticipated, much watched and much debated twice-yearly event” reveals the 500 most powerful commercially available computer systems, ranked by their performance on the LINPACK Benchmark.

Ranking the World’s Top Supercomputers

December 3, 2014 12:42 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

Twice each year, a ranking of general purpose systems that are in common use for high-end applications is compiled and published by the TOP500 Project. Most recently released at the SC14 conference, this “much anticipated, much watched and much debated twice-yearly event” reveals the 500 most powerful commercially available computer systems, ranked by their performance on the LINPACK Benchmark.

For their calculations, researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) will, starting in 2015, have access to the world's second-fastest computer. The Dresden scientists are hoping that the computations will yield new insights that may prove useful in proton-based cancer therapy.

Titan Calculates for HZDR Cancer Research

December 2, 2014 3:21 pm | by HZDR | News | Comments

For their calculations, researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) will, starting in 2015, have access to the world's second-fastest computer. The Dresden scientists are hoping that the computations will yield new insights that may prove useful in proton-based cancer therapy.

In 1997, IBM’s Deep Blue computer beat chess wizard Garry Kasparov. This year, a computer system developed at the University of Wisconsin-Madison equaled or bested scientists at the complex task of extracting data from scientific publications and placing it in a database.

Computer Equal To or Better Than Humans at Cataloging Science

December 2, 2014 2:53 pm | by David Tenenbaum, University of Wisconsin-Madison | News | Comments

In 1997, IBM’s Deep Blue computer beat chess wizard Garry Kasparov. This year, a computer system developed at the University of Wisconsin-Madison equaled or bested scientists at the complex task of extracting data from scientific publications and placing it in a database that catalogs the results of tens of thousands of individual studies.

The Saudi Arabian computer SANAM, also developed in Frankfurt and Darmstadt, reached second place on the "Green500" list in 2012. Courtesy of GSI

Green500: German Supercomputer a World Champion in Saving Energy

November 26, 2014 10:51 am | by Goethe-Universität Frankfurt am Main | News | Comments

The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.

This year’s team from The University of Texas at Austin won in the SC14 Student Cluster Competition overall category. Courtesy of TACC

Showcasing Student Expertise

November 25, 2014 2:55 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

After 48 hours of real-time, spirited competition, two triumphant winners emerged in this year’s SC14 Student Cluster Competition. The annual challenge is designed to introduce the next generation of students to the high-performance computing community. Over the last few years, it has drawn teams of undergraduate and/or high school students from around the world, including Australia, Canada, China, Costa Rica, Germany, Russia and Taiwan.

ClusterStor Engineered Solution for Lustre

November 25, 2014 11:28 am | by Seagate Technology | Product Releases | Comments

ClusterStor Engineered Solution for Lustre offers improved metadata performance and scalability by implementing the Distributed Namespace (DNE) features in the Lustre 2.5 parallel file system. In addition to the Base Metadata Management Server capability, ClusterStor users have the option to add up to 16 Lustre Distributed Namespace metadata servers per single file system, providing client metadata performance improvement of up to 700 percent.

As supercomputing — also known as high performance computing or HPC — becomes central to the work and progress of researchers in all fields, from genomics and ecology to medicine and education, new kinds of computing resources are required.

NSF Commits $16M to Build Cloud-based, Data-intensive Computing Systems for Open Science

November 25, 2014 10:01 am | by NSF | News | Comments

Tens of thousands of researchers currently harness the power of supercomputers to solve research problems that cannot be answered in the lab. However, this represents only a fraction of the potential users of such resources. As high performance computing becomes central to the work and progress of researchers in all fields, from genomics and ecology to medicine and education, new kinds of computing resources are required.

PowerEdge C4130 Server

November 24, 2014 2:56 pm | by Dell Computer Corporation | Product Releases | Comments

The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600v3 1U server to offer up to four GPUs/accelerators and can achieve over 7.2 Teraflops on a single 1U server, with a performance/watt ratio of up to 4.17 Gigaflops per watt.
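Taking the quoted numbers at face value (this arithmetic is ours, not Dell's), the peak throughput and efficiency figures imply a chassis power draw around 1.7 kW:

```python
# Power draw implied by the quoted C4130 figures:
# 7.2 teraflops at up to 4.17 gigaflops per watt.
teraflops = 7.2
gflops_per_watt = 4.17

watts = teraflops * 1000 / gflops_per_watt   # teraflops -> gigaflops
print(f"{watts:.0f} W")                      # about 1727 W
```

That is a plausible envelope for a 1U chassis carrying four accelerators plus two host CPUs.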

The L-CSC cluster was the first and only supercomputer on the list to surpass 5 gigaflops/watt.

L-CSC Cluster Awarded Top Spot on Green500 List

November 24, 2014 1:39 pm | by Green500 | News | Comments

A new supercomputer, L-CSC from the GSI Helmholtz Center, emerged as the most energy-efficient supercomputer in the world, according to the 16th edition of the twice-yearly Green500 list of the world’s most energy-efficient supercomputers. The cluster was the first and only supercomputer on the list to surpass 5 gigaflops/watt. It was powered by Intel Ivy Bridge CPUs and a FDR Infiniband network and accelerated by AMD FirePro S9150 GPUs.

New Orleans became the hub for the world’s fastest computer network — SCinet — which featured 1.5 terabits of bandwidth, 84 miles of fiber deployed throughout the convention center, and $18 million in loaned equipment.

Supercomputing 2014 Sets New Records

November 24, 2014 1:24 pm | by SC14 | News | Comments

Supercomputing 2014 (SC14), the 26th anniversary conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers, from those just starting their careers to those whose contributions have made lasting impacts.

Supercomputing 2014 Recognizes Outstanding Achievements in HPC

November 24, 2014 7:13 am | by SC14 | News | Comments

SC14, the international conference for high performance computing, networking, storage and analysis, celebrated the contributions of researchers, from those just starting their careers to those whose contributions have made lasting impacts, in a special awards session. The conference drew 10,160 attendees, who took part in a technical program spanning six days and viewed the offerings of 356 exhibitors.

In the latest issue of HPC Source, “A New Dawn: Bringing HPC to the Enterprise,” we look at how small- to-medium-sized manufacturers can realize major benefits from adoption of high performance computing in areas such as modeling, simulation and analysis.

HPC for All

November 21, 2014 4:32 pm | by Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source | Blogs | Comments

In the latest issue of HPC Source, “A New Dawn: Bringing HPC to the Enterprise,” we look at how small- to-medium-sized manufacturers can realize major benefits from adoption of high performance computing in areas such as modeling, simulation and analysis.

The Huawei FusionServer X6800 is a next-generation data center server optimized to support all business in one solution. This X6800 provides a broad portfolio of server nodes to flexibly meet elastic configuration requirements of differentiated services for computing, storage, and I/O resources.

Huawei FusionServer X6800

November 20, 2014 2:40 pm | by Huawei | Product Releases | Comments

The Huawei FusionServer X6800 is a next-generation data center server optimized to support all business in one solution. This X6800 provides a broad portfolio of server nodes to flexibly meet elastic configuration requirements of differentiated services for computing, storage, and I/O resources. It also supports simplified system management and efficient operation and maintenance (O&M). 

For the fourth consecutive time, Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, has retained its position as the world’s No. 1 system with a performance of 33.86 petaflop/s (quadrillions of calculations per second).

Tianhe-2 Remains World's Top Computer

November 19, 2014 3:34 pm | by Top500 | News | Comments

For the fourth consecutive time, Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, has retained its position as the world’s No. 1 system with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark, according to the 44th edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers.
