For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"
In support of the updated Climate Data Initiative announced by the White House July 29, 2014,...
Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple...
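The core idea of ensemble forecasting, running many simulations from slightly perturbed initial conditions and combining the results, can be illustrated on a toy chaotic system. The following is a minimal sketch using the classic Lorenz-63 equations; the model, step size, perturbation scale, and function names are illustrative assumptions, not details of any operational forecast system.

```python
# Hedged sketch: ensemble forecasting on the toy Lorenz-63 system.
# All parameters here are illustrative, not operational values.
import random

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the chaotic Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def forecast(initial, steps=500):
    """Integrate a single deterministic forecast forward in time."""
    state = initial
    for _ in range(steps):
        state = lorenz_step(state)
    return state

def ensemble_forecast(initial, members=20, noise=0.01, seed=0):
    """Run many forecasts from slightly perturbed initial conditions
    and average them -- the essence of ensemble forecasting."""
    rng = random.Random(seed)
    results = []
    for _ in range(members):
        perturbed = tuple(v + rng.gauss(0.0, noise) for v in initial)
        results.append(forecast(perturbed))
    # Ensemble mean: average each coordinate across members.
    return tuple(sum(c) / members for c in zip(*results))

mean_state = ensemble_forecast((1.0, 1.0, 1.0))
```

Because the Lorenz system is chaotic, the individual members diverge from one another; the spread of the ensemble is itself a rough measure of forecast uncertainty.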
IBM is making high performance computing more accessible through the cloud for clients grappling...
A team of Dartmouth scientists and their colleagues have devised a breakthrough laser that uses a single artificial atom to generate and emit particles of light — and may play a crucial role in the development of quantum computers, which are predicted to eventually outperform even today’s most powerful supercomputers.
A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.
The second ISC Big Data conference, themed “From Data To Knowledge,” builds on the success of the inaugural 2013 event. A comprehensive program has been put together by the Steering Committee under the leadership of Sverre Jarp, who officially retired as the CTO of CERN openlab in March of this year.
The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.
The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch. Researchers have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure — previously only a matter of speculation — does indeed exist.
Registration is now open for the 2014 ISC Cloud and ISC Big Data Conferences, which will be held this fall in Heidelberg, Germany. The fifth ISC Cloud Conference will take place in the Marriott Hotel from September 29 to 30, and the second ISC Big Data will be held from October 1 to 2 at the same venue.
Michael M. Resch, the Director of the Stuttgart High Performance Computing Center (HLRS) will be talking about “HPC and Simulation in the Cloud – How Academia and Industry Can Benefit.” His keynote is of special interest to cloud skeptics, given that prior to 2011, Resch himself was a vocal cloud pessimist. Three years later, he feels that this technology provides a practical option for many users.
The National Nuclear Security Administration (NNSA) and Cray have entered into a contract for a next-generation supercomputer, called Trinity, to advance the mission of the Stockpile Stewardship Program. Managed by NNSA, Trinity is a joint effort between Los Alamos and Sandia national laboratories under the New Mexico Alliance for Computing at Extreme Scale, as part of the NNSA Advanced Simulation and Computing Program.
For the past 21 years, TOP500.org has ranked supercomputers by their performance on the LINPACK benchmark. Published twice a year, the list is eagerly anticipated by the industry. As with any ranking, the top of the list garners the most attention; however, focusing only on the top would limit one's understanding of the full range of supercomputers in the TOP500...
In nearly every field of science, experiments, instruments, observations, sensors, simulations, and surveys are generating massive data volumes that grow at exponential rates. Discoverable, shareable data enables collaboration and supports repurposing for new discoveries — and for cross-disciplinary research enabled by exchange across communities that include both scientists and citizens.
The Supercomputing Conference (SC14) awards committee has announced that “A Multi-level Algorithm for Partitioning Graphs,” co-authored by Bruce Hendrickson and Rob Leland of Sandia National Laboratories, has won the prestigious Test of Time Award. The award recognizes the most transformative and inspiring research published at the SC conference and will be presented at the SC14 awards ceremony in New Orleans, LA in November 2014.
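The multilevel approach recognized by the award (coarsen the graph, partition the small coarse graph, then project the partition back up through the levels) can be sketched in miniature. This toy Python sketch is illustrative only and is not the authors' algorithm: it coarsens by greedily matching adjacent vertices, bisects the coarsest graph by a simple split, and omits the refinement step that real multilevel partitioners apply during uncoarsening.

```python
# Hedged toy sketch of multilevel graph bisection:
# coarsen -> partition coarsest graph -> project back up.
def coarsen(adj):
    """Merge matched neighbor pairs into single coarse vertices
    (a simplified stand-in for heavy-edge matching)."""
    matched, mapping, coarse_id = set(), {}, 0
    for v in adj:
        if v in matched:
            continue
        partner = next((u for u in adj[v] if u not in matched), None)
        matched.add(v)
        mapping[v] = coarse_id
        if partner is not None:
            matched.add(partner)
            mapping[partner] = coarse_id
        coarse_id += 1
    coarse_adj = {i: set() for i in range(coarse_id)}
    for v, nbrs in adj.items():
        for u in nbrs:
            if mapping[v] != mapping[u]:
                coarse_adj[mapping[v]].add(mapping[u])
    return coarse_adj, mapping

def bisect(adj):
    """Trivially split a small graph into two halves."""
    verts = sorted(adj)
    half = (len(verts) + 1) // 2
    return {v: (0 if i < half else 1) for i, v in enumerate(verts)}

def multilevel_bisect(adj, threshold=4):
    """Coarsen until small, partition, then project back up."""
    if len(adj) <= threshold:
        return bisect(adj)
    coarse_adj, mapping = coarsen(adj)
    if len(coarse_adj) == len(adj):  # coarsening stalled; stop here
        return bisect(adj)
    coarse_part = multilevel_bisect(coarse_adj, threshold)
    # Uncoarsening: each fine vertex inherits its coarse vertex's side.
    return {v: coarse_part[mapping[v]] for v in adj}

# Example: bisect an 8-cycle into two connected halves.
graph = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
part = multilevel_bisect(graph)
```

The key insight of the multilevel scheme, and the reason it scaled so well in practice, is that partitioning decisions made cheaply on a small coarse graph remain good when projected back to the full graph, where a refinement pass can then improve them locally.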
Tandem protein mass spectrometry is one of the most widely used methods in proteomics, the large-scale study of proteins, particularly their structures and functions. Researchers in the Marcotte group at the University of Texas at Austin are using the Stampede supercomputer to develop and test computer algorithms that let them more accurately and efficiently interpret proteomics mass spectrometry data.
The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.
UCL researchers have developed a powerful new model to detect life on planets outside our solar system more accurately than ever before. The model focuses on methane, the simplest organic molecule, widely acknowledged as a sign of potential life.
Dan C. Stanzione Jr. has been named executive director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the new post July 1, 2014.
The A-Class Supercomputer System is a highly integrated platform designed to simplify the transition to multi-petaflops computations. The system features two independent head nodes, 256 compute nodes and 60 InfiniBand and Ethernet switches, all tightly coupled into a single, powerful computing resource.
AppliedMicro has announced the readiness of the X-Gene Server on a Chip, based on the 64-bit ARMv8-A architecture, for High Performance Computing (HPC) workloads.
Boston Limited and CoolIT Systems, Inc., congratulate the student team from EPCC at The University of Edinburgh on its first-place finish for the highest LINPACK score in the ISC14 Student Cluster Challenge.
"Big data" is playing an increasingly big role in the renewable energy industry and the transformation of the nation's electrical grid, and no single entity provides a better tool for such data than the Energy Department's Energy Systems Integration Facility (ESIF) located on the campus of the National Renewable Energy Laboratory (NREL).
Eurotech has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop an original high performance computing (HPC) system architecture that combines extreme density with best-in-class energy efficiency. The new architecture is based on an innovative, highly modular and scalable packaging concept.
RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).
When the International Supercomputing Conference convenes next year for its 30th annual global HPC meet, it will take place in the international city of Frankfurt. The next conference and exhibition will be held July 12-16, 2015.
Cambridge’s COSMOS supercomputer, the largest shared-memory computer in Europe, has been named by computer giant Intel as one of its Parallel Computing Centers, building on a long-standing collaboration between Intel and the University of Cambridge.
Integration between Moab HPC Suite and Bright Cluster Manager provides enhanced functionality that enables users to dynamically provision HPC clusters based on both resource and workload monitoring. The combined capabilities also provide a better solution for managing technical computing and Big Workflow requirements.
Altair has announced that the National Supercomputing Center for Energy and the Environment (NSCEE) at the University of Nevada, Las Vegas, (UNLV) has chosen PBS Professional to replace its previous high-performance computing (HPC) workload management implementation.