Cray XC40 will be First Supercomputer in Berkeley Lab’s New Computational Research and Theory Facility (April 23, 2015, by NERSC and Berkeley Lab)
The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray announced they have finalized a new contract for a Cray XC40 supercomputer that will be the first NERSC system installed in the newly built Computational Research and Theory facility at Lawrence Berkeley National Laboratory.
World-renowned for automotive quality and safety, Daimler’s Mercedes-Benz cars are also highly...
Solving Puzzle-Like Bond for Biofuels: First Look at One of Nature's Strongest Biomolecular Interactions (March 17, 2015, by Texas Advanced Computing Center)
One of life's strongest bonds has been discovered by a science team researching biofuels with...
Children don’t have to be told that “cat” and “cats” are variants of the same word — they pick it up just by listening. To a computer, though, they’re as different as, well, cats and dogs. Yet it’s computers, not four-year-olds, that are assumed to be superior at detecting patterns and rules. Researchers are trying, if not to solve that puzzle definitively, then at least to provide the tools to do so.
Researchers are creating ground-breaking computer software that could help produce some of the world’s fastest supercomputers by increasing their ability to process masses of data at higher speeds than ever before. The new software has the potential to help combat major global issues, including climate change and life-threatening diseases, by simulating detailed models of natural events.
Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant, including potentially saving megawatts of power or reducing the time to solution.
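As a minimal sketch of what per-rank data collection looks like, assuming Python with the mpi4py package (the package choice, the stand-in kernel, and the summary statistics are illustrative, not any particular profiler's design):

```python
# Minimal per-rank timing collection, a sketch assuming mpi4py.
# Real HPC profilers gather far richer data (hardware counters,
# call stacks), but the gather-and-summarize pattern is the same.
import time
from mpi4py import MPI

def compute_step():
    """Stand-in for the application kernel being profiled."""
    return sum(i * i for i in range(1_000_000))

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

t0 = time.perf_counter()
compute_step()
elapsed = time.perf_counter() - t0

# Gathering one float from every rank is cheap here, but across
# thousands of nodes and many timers this becomes a big-data task.
timings = comm.gather(elapsed, root=0)
if rank == 0:
    print(f"ranks={len(timings)} min={min(timings):.4f}s "
          f"max={max(timings):.4f}s "
          f"imbalance={max(timings) / min(timings):.2f}x")
```

Run with, for example, `mpiexec -n 4 python profile_sketch.py`; the max/min ratio gives a first hint of the load imbalance worth optimizing away.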
Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.
The genomes of modern birds tell a story of how they emerged and evolved after the mass extinction that wiped out dinosaurs and almost everything else 66 million years ago. That story is now coming to light, thanks to an ambitious international collaboration that has been underway for four years. The first findings of the Avian Phylogenomics Consortium are being reported nearly simultaneously in 28 papers.
What does a black hole look like up close? As the sci-fi movie Interstellar wows audiences with its computer-generated views of one of the most enigmatic and fascinating phenomena in the universe, University of Arizona (UA) astrophysicists Chi-kwan Chan, Dimitrios Psaltis and Feryal Ozel are likely nodding appreciatively and saying something like, "Meh, that looks nice, but check out what we've got."
An experiment at the Department of Energy’s SLAC National Accelerator Laboratory provided the first fleeting glimpse of the atomic structure of a material as it entered a state resembling room-temperature superconductivity – a long-sought phenomenon in which materials might conduct electricity with 100 percent efficiency under everyday conditions.
ClusterStor Engineered Solution for Lustre offers improved metadata performance and scalability by implementing the Distributed Namespace (DNE) features in the Lustre 2.5 parallel file system. In addition to the Base Metadata Management Server capability, ClusterStor users have the option to add up to 16 Lustre Distributed Namespace metadata servers per single file system, providing client metadata performance improvement of up to 700 percent.
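As a hedged illustration of how distributed metadata might be put to work, the sketch below uses Python to call the standard Lustre `lfs mkdir -i` command, which places a new directory on a chosen metadata target; the mount point, directory names, and MDT count are assumptions for the example:

```python
# Sketch: spread project directories across Lustre metadata targets
# (MDTs) so metadata load is distributed under DNE. Paths and the
# MDT count are illustrative assumptions, not ClusterStor defaults.
import subprocess

LUSTRE_ROOT = "/mnt/lustre/projects"  # hypothetical mount point
NUM_MDTS = 4  # site-specific; ClusterStor supports up to 16 DNE MDTs

for i, name in enumerate(["proj_a", "proj_b", "proj_c", "proj_d"]):
    mdt_index = i % NUM_MDTS  # simple round-robin placement
    subprocess.run(
        ["lfs", "mkdir", "-i", str(mdt_index), f"{LUSTRE_ROOT}/{name}"],
        check=True,
    )
```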
For the US Army, and for the DoD and intelligence community as a whole, GIS Federal developed an innovative approach to quickly filter, analyze, and visualize big data from hundreds of data providers, with a particular emphasis on geospatial data.
The complexity of high-end computing technology makes it largely invisible to the public. HPC simply lacks the Sputnik sex appeal of the space race, to which current global competition in supercomputing is often compared. Rather, it is seen as the exclusive realm of academia and national labs. Yet, its impact reaches into almost every aspect of daily life. Organizers of SC14 had this reach in mind when selecting the “HPC Matters” theme.
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical HPC computing challenges being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public-domain scientific applications, preparing these critically important codes for multi-core and many-core systems.
Researchers studying iron-based superconductors are combining novel electronic structure algorithms with the high-performance computing power of the Department of Energy’s Titan supercomputer at Oak Ridge National Laboratory to predict spin dynamics, or the ways electrons orient and correlate their spins in a material.
As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.
Two research teams have found distinct solutions to a critical challenge that has held back the realization of super powerful quantum computers. The teams, working in the same laboratories at UNSW Australia, created two types of quantum bits, or "qubits" — the building blocks for quantum computers — that each process quantum data with an accuracy above 99 percent.
The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, provide tools and techniques for adapting legacy codes, and increase application performance on Intel Xeon processors.
The GS7K™ appliance is a scale-out parallel file system solution complete with enterprise-class features, NAS access and cloud tiering capabilities. The system includes fully integrated enterprise data management and protection capabilities in a simple, all-in-one, scale-out appliance.
Mathematica Online operates completely in the cloud and is accessible through any modern Web browser, with no installation or configuration required, and is completely interoperable with Mathematica on the desktop. Users can simply point a Web browser at Mathematica Online, log in, and immediately start to use the Mathematica notebook interface.
A new $1.9 million study at the University of Michigan seeks to make low-dose computed tomography scans a viable screening technique by speeding up the image reconstruction from half an hour or more to just five minutes. The advance could be particularly important for fighting lung cancers, as symptoms often appear too late for effective treatment.
SDSC Joins Intel Parallel Computing Centers Program with Focus on Molecular Dynamics, Neuroscience and Life Sciences (September 12, 2014, by San Diego Supercomputer Center)
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, is working with semiconductor chipmaker Intel to further optimize research software to improve the parallelism, efficiency, and scalability of widely used molecular and neurological simulation technologies.
As part of the Cray CS cluster supercomputer series, Cray offers the CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, designed for massively parallel computing workloads.
The 3D Space Charge module uses code that is optimized for the shared memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally intensive analysis tasks can be greatly accelerated by the technique.
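The way speed-up depends on how much of a task parallelizes is captured by Amdahl's law; the short sketch below is a generic illustration of that relationship (the fractions and core counts are made-up values), not code from the module itself:

```python
# Amdahl's law: speedup on n cores when a fraction p of the runtime
# parallelizes. Values below are illustrative, not measurements.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):    # parallelizable fraction of the task
    for n in (2, 4, 8):         # typical multi-core workstation sizes
        print(f"p={p:.2f}, cores={n}: {amdahl_speedup(p, n):.2f}x")
```

Highly iterative, compute-heavy models push p toward 1, which is why they benefit most from multi-core execution.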
Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Specialized computing methods can achieve this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculation process.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose the winner of the prestigious 2014 Gordon Bell Prize. The winner will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.