Turbulent combustion simulations, which inform the design of more fuel-efficient combustion systems, have gotten an efficiency boost of their own. Researchers developed new algorithmic features that streamline turbulent flame simulations, tested the enhanced code on the Hopper supercomputer, and achieved a dramatic decrease in simulation times.
Not long ago, it would have taken several years to run a high-resolution simulation on a global...
Researchers have detected at least three instances of cross-species mating that likely...
About 95 percent of the more than 10,000 known bird species evolved only after the extinction of the dinosaurs about 66 million years ago. According to computer analyses of the genetic data, today's diversity developed from just a few species at a virtually explosive rate within about 15 million years. Scientists designed the algorithms for this comprehensive analysis of bird evolution, which required a computing capacity of 300 processor-years.
The genomes of modern birds tell a story of how they emerged and evolved after the mass extinction that wiped out dinosaurs and almost everything else 66 million years ago. That story is now coming to light, thanks to an ambitious international collaboration that has been underway for four years. The first findings of the Avian Phylogenomics Consortium are being reported nearly simultaneously in 28 papers.
Celebrating its 30th anniversary, the ISC High Performance conference has announced that the 2015 program’s technical content “will be strikingly broad in subject matter, differentiated and timely.” Over 2,600 attendees will gather in Frankfurt from July 12 to 16 to discuss their organizational needs and the industry’s challenges, as well as learn about the latest research, products and solutions.
The US Department of Energy is mining for solutions to the rare earth problem — but with high-performance computing instead of bulldozers. Researchers are using the hybrid CPU-GPU, 27-petaflop Titan supercomputer managed by the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory to discover alternative materials that can substitute for rare earths.
What does a black hole look like up close? As the sci-fi movie Interstellar wows audiences with its computer-generated views of one of the most enigmatic and fascinating phenomena in the universe, University of Arizona (UA) astrophysicists Chi-kwan Chan, Dimitrios Psaltis and Feryal Ozel are likely nodding appreciatively and saying something like, "Meh, that looks nice, but check out what we've got."
The interstellar mystery of why stars form has been solved, thanks to the most realistic supercomputer simulations of galaxies yet made. Theoretical astrophysicist Philip Hopkins led research that found that stellar activity — like supernova explosions or even just starlight — plays a big part in the formation of other stars and the growth of galaxies.
In a real-time challenge, the 11 teams of undergraduate students will build a small cluster of their own design on the ISC 2015 exhibit floor and race to demonstrate the greatest performance across a series of benchmarks and applications. It all concludes with a ceremony on the main conference keynote stage to award and recognize all student participants in front of thousands of HPC luminaries.
At the SC14 conference, which took place recently in New Orleans, IDC’s HPC Innovation Excellence Award Program continued to showcase the benefits of investment in high performance computing (HPC). Initiated in 2011 to recognize innovative achievements using HPC, the program provides a means of evaluating the economic and scientific value that HPC systems contribute.
The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), has embarked on a multi-year research effort to develop a superconducting computer. If successful, technology developed under the Cryogenic Computer Complexity (C3) program will pave the way to a new generation of superconducting supercomputers that are far more energy efficient.
Leaders in science, engineering, government, and industry will address fast-moving opportunities and challenges in the field of “big data” at the Virginia Summit on Science, Engineering, and Medicine.
A team of researchers from Argonne National Laboratory and DataDirect Networks (DDN) moved 65 terabytes of data in just under 100 minutes at a recent supercomputing conference.
Twice each year, a ranking of general purpose systems that are in common use for high-end applications is compiled and published by the TOP500 Project. Most recently released at the SC14 conference, this “much anticipated, much watched and much debated twice-yearly event” reveals the 500 most powerful commercially available computer systems, ranked by their performance on the LINPACK Benchmark.
Starting in 2015, researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) will have access to the world’s second-fastest computer for their calculations. The Dresden scientists are hoping that the computations will yield new insights that may prove useful in proton-based cancer therapy.
In 1997, IBM’s Deep Blue computer beat chess wizard Garry Kasparov. This year, a computer system developed at the University of Wisconsin-Madison equaled or bested scientists at the complex task of extracting data from scientific publications and placing it in a database that catalogs the results of tens of thousands of individual studies.
The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, which compares the energy efficiency of the fastest supercomputers around the world. At 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.
After 48 hours of real-time, spirited competition, two triumphant winners emerged in this year’s SC14 Student Cluster Competition. The annual challenge is designed to introduce the next generation of students to the high-performance computing community. Over the last few years, it has drawn teams of undergraduate and/or high school students from around the world, including Australia, Canada, China, Costa Rica, Germany, Russia and Taiwan.
ClusterStor Engineered Solution for Lustre offers improved metadata performance and scalability by implementing the Distributed Namespace (DNE) features in the Lustre 2.5 parallel file system. In addition to the Base Metadata Management Server capability, ClusterStor users have the option to add up to 16 Lustre Distributed Namespace metadata servers per single file system, providing a client metadata performance improvement of up to 700 percent.
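To make the DNE idea concrete: in Lustre 2.5 (DNE phase 1), a client can pin a new directory to a specific metadata target with the standard `lfs mkdir -i` command, so that different directory subtrees are served by different metadata servers. The sketch below is a minimal, hypothetical illustration of that mechanism using generic Lustre client tooling; the mount point, project names, and MDT indices are assumptions, not ClusterStor-specific commands.

```python
import subprocess

# Hypothetical Lustre client mount point; adjust for a real file system.
MOUNT = "/mnt/lustre"

def make_remote_dir(path: str, mdt_index: int) -> None:
    """Create a directory pinned to a specific metadata target (MDT).

    With DNE phase 1 (Lustre 2.4/2.5), `lfs mkdir -i N` places a new
    directory on MDT N, so metadata operations under that subtree are
    handled by that server rather than the primary MDS. Creating remote
    directories typically requires administrator privileges.
    """
    subprocess.run(["lfs", "mkdir", "-i", str(mdt_index), path], check=True)

# Spread four hypothetical project subtrees across four of the up to 16
# DNE metadata servers mentioned above, isolating each subtree's load.
for idx, project in enumerate(["proj_a", "proj_b", "proj_c", "proj_d"]):
    make_remote_dir(f"{MOUNT}/{project}", mdt_index=idx)
```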
Tens of thousands of researchers currently harness the power of supercomputers to tackle research problems that cannot be addressed in the lab. However, this represents only a fraction of the potential users of such resources. As high performance computing becomes central to the work and progress of researchers in all fields, from genomics and ecology to medicine and education, new kinds of computing resources are required.
The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600 v3 1U server to offer up to four GPUs/accelerators, and it can achieve over 7.2 teraflops on a single 1U server, with a performance-per-watt ratio of up to 4.17 gigaflops per watt.
A new supercomputer, L-CSC from the GSI Helmholtz Center, emerged as the most energy-efficient supercomputer in the world, according to the 16th edition of the twice-yearly Green500 list of the world’s most energy-efficient supercomputers. The cluster was the first and only supercomputer on the list to surpass 5 gigaflops/watt. It was powered by Intel Ivy Bridge CPUs and a FDR Infiniband network and accelerated by AMD FirePro S9150 GPUs.
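The metric behind that ranking is straightforward: sustained LINPACK performance (Rmax) divided by the power drawn during the run. A minimal sketch of the arithmetic, using illustrative figures rather than L-CSC's published measurements:

```python
def gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    """Green500-style efficiency: sustained LINPACK Rmax per watt of power."""
    return rmax_gflops / power_watts

# Hypothetical system: 1 petaflop/s sustained (1,000,000 gigaflop/s)
# at 190 kW would score about 5.26 gigaflops/watt, near L-CSC's 5.27.
print(gflops_per_watt(1_000_000, 190_000))  # -> 5.263...
```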
Supercomputing 2014 (SC14), marking the 26th anniversary of the conference series on high performance computing, networking, storage and analysis, celebrated the contributions of researchers, from those just starting their careers to those whose work has made lasting impacts.
SC14, the international conference for high performance computing, networking, storage and analysis, honored researchers at every career stage in a special awards session. The conference drew over 10,160 attendees to a technical program spanning six days and an exhibit floor with 356 exhibitors.
In the latest issue of HPC Source, “A New Dawn: Bringing HPC to the Enterprise,” we look at how small- to medium-sized manufacturers can realize major benefits from adopting high performance computing in areas such as modeling, simulation and analysis.
The Huawei FusionServer X6800 is a next-generation data center server optimized to support a range of business workloads in one solution. The X6800 provides a broad portfolio of server nodes that can be flexibly configured to meet the differing computing, storage, and I/O requirements of varied services. It also supports simplified system management and efficient operation and maintenance (O&M).
For the fourth consecutive time, Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, has retained its position as the world’s No. 1 system with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark, according to the 44th edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers.