Cray XC40 will be First Supercomputer in Berkeley Lab’s New Computational Research and Theory Facility
April 23, 2015 | by NERSC and Berkeley Lab
The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray announced they have finalized a new contract for a Cray XC40 supercomputer that will be the first NERSC system installed in the newly built Computational Research and Theory facility at Lawrence Berkeley National Laboratory.
The complexity of high-end computing technology makes it largely invisible to the public. HPC simply lacks the Sputnik sex appeal of the space race, to which current global competition in supercomputing is often compared. Rather, it is seen as the exclusive realm of academia and national labs. Yet, its impact reaches into almost every aspect of daily life. Organizers of SC14 had this reach in mind when selecting the “HPC Matters” theme.
Paul Messina, the highly motivated organizer of the Argonne Training Program on Extreme-Scale Computing (ATPESC), reflects on what makes the program unique and a can’t-miss opportunity for the next generation of HPC scientists. ATPESC is an intense, two-week program that covers most of the topics and skills necessary to conduct computational science and engineering research on today’s and tomorrow’s high-end computers.
The Council on Competitiveness has released a new report, Solve. The Exascale Effect: The Benefits of Supercomputing Investment for U.S. Industry, that explores the value of government leadership in supercomputing for industrial competitiveness. As the federal government pursues exascale computing to achieve national security and science missions, Solve examines how U.S.-based companies also benefit from leading-edge computation.
The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, provide tools and techniques for adapting legacy codes, and increase application performance on Intel Xeon processors.
Computationally intensive research in Sweden will soon get a boost from the fastest academic supercomputer in the Nordic countries: a Cray XC30 with 1,676 nodes and 104.7 terabytes of memory, to be installed in October 2014 at the PDC Center for High Performance Computing at KTH Royal Institute of Technology.
Some two hundred scientists from more than 40 countries are researching what the next generation of ultrascale computing systems will be like. The study is being carried out under the auspices of NESUS, one of the largest European research networks of its type, coordinated by Universidad Carlos III de Madrid (UC3M).
For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"
RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).
Even as CPU power and memory bandwidth march forward, one major bottleneck has hampered overall supercomputing performance for the past decade: I/O interconnectivity. The vision behind Intel’s new Omni Scale Fabric is to deliver a platform for the next generation of HPC systems.
In the late 90s, I was teaching parallel programming in C using MPI. The most important lesson I wanted my students to remember was that communication matters far more than computation. The form of the benchmark couldn't be more common: a set of convolutional filters applied to an image, one filter after another in a pipelined fashion.
In conjunction with ISC’14, we will hold a one-day HPC Advisory Council European Conference Workshop on June 22, 2014. The workshop will focus on HPC productivity, advanced HPC topics, and futures, and will bring together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in high-performance computing. Our keynote session will feature the SKA Project.
The PRACE Scientific and Industrial Conference 2014 – PRACEdays14 – was held from 20 to 22 May 2014 in Barcelona, Spain. Hosted by PRACE and supported by the Barcelona Supercomputing Centre, the conference attracted over 200 participants from academia and industry. Three Awards were presented to...
Partitioned Global Address Space (PGAS) approaches have become a hot topic for the exascale computing domain. For instance, several exascale projects funded by the European Commission or the US Department of Energy extend current PGAS approaches. An example is the EC-funded EPiGRAM project that has identified the gaps to be filled when attempting to master the Exascale challenge with PGAS.
Sometimes the HPC industry gets excited by the hype surrounding progress towards Exascale systems. But we need to remember that building an Exascale system is not an end in itself. The purpose of such a machine is to run applications, and to deliver insights that could not be achieved with less powerful systems.
The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced that they have signed a contract for a next-generation supercomputer to enable scientific discovery at the DOE’s Office of Science (DOE SC).
As modern computer systems become more powerful, utilizing as many as millions of processor cores in parallel, Intel is looking for new ways to efficiently use these high performance computing (HPC) systems to accelerate scientific discovery. As part of this effort, Intel has selected Georgia Tech as the site of one of its Parallel Computing Centers.
Torsten Hoefler is working in the Blue Waters Directorate at the University of Illinois at Urbana-Champaign, where he is responsible for performance modeling and simulation of the Blue Waters petascale computer and the applications running on it. He is co-chair of the collective operations working group in the MPI Forum. Hoefler is interested in collective communications, process topologies, one-sided operations, and hybrid programming in MPI. He is also a member of ACM SIGHPC, ACM, and IEEE.
Marie-Christine Sawley holds a degree in Physics and a PhD in Plasma Physics from EPFL (1985). After a postdoc at the University of Sydney, she joined EPFL in 1988 to lead the support group for HPC applications. She led a number of HPC initiatives to introduce new technology at EPFL, such as the PATP with the Cray T3D, the SwissTX prototype, and the establishment of the Vital IT partnership between HP, EPFL and the SIB.
NVIDIA plans to integrate a high-speed interconnect, called NVIDIA NVLink, into its future GPUs, enabling GPUs and CPUs to share data five to 12 times faster than they can today. This will eliminate a longstanding bottleneck and help pave the way for a new generation of exascale supercomputers.
Advance registration at reduced rates is now open for the 2014 International Supercomputing Conference (ISC’14), which will be held June 22-26 in Leipzig, Germany. By registering now, ISC’14 attendees can save over 25 percent off the onsite registration rates at the Leipzig Congress Center.
Lawrence Livermore has joined forces with two other national labs to deliver next generation supercomputers able to perform up to 200 peak petaflops (quadrillions of floating point operations per second), about 10 times faster than today's most powerful high performance computing (HPC) systems.
The HPC Advisory Council and the Swiss Supercomputing Centre will host the HPC Advisory Council Switzerland Conference 2014. The conference will focus on High-Performance Computing essentials, new developments and emerging technologies, best practices and hands-on training.
The goal of this conference is to bring together all the developers and researchers involved in solving the software challenges of the exascale era. The conference focuses on issues of applications for exascale and the associated tools, software programming models and libraries.
On December 24, 2013, Japan's Ministry of Education, Culture, Sports, Science and Technology selected the RIKEN Institute of Physical and Chemical Research to develop a new exascale supercomputer that is expected to keep Japan at the leading edge of computing science and technology.