The HPC Advisory Council and ISC High Performance call on undergraduate students from around the world to apply to take part in the 2015 Student Cluster Competition (SCC). The 11 selected teams will have the opportunity to build a small cluster of their own design and run a series of benchmarks and applications in real time over four days on the ISC 2015 exhibition floor.
Red Hat Storage Server 3 is an open software-defined storage solution for scale-out file storage...
The HPC Advisory Council, an organization for HPC research, outreach and education, and the ISC...
Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system is a powerful single-node cluster.
As university students around the world prepare to head back to school this fall, 12 groups are already looking ahead to November, when they will converge at SC14 in New Orleans for the Student Cluster Competition. In this real-time, non-stop, 48-hour challenge, teams of students assemble a small cluster on the SC14 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications.
A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference on November 16 to 21, 2014, in New Orleans.
In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It's how Facebook and Google mine your Web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.
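The idea described above can be sketched in miniature: split a large data set into chunks, let several workers analyze their chunks independently, then combine the partial results. Real cluster frameworks distribute the chunks across many machines; in this illustrative sketch, a local process pool stands in for the cluster, and the word-counting "analysis" is a made-up stand-in for a real workload.

```python
# Minimal map-reduce sketch: per-chunk analysis in parallel workers,
# followed by a merge of the partial results. A process pool on one
# machine stands in for a multi-node cluster here.
from multiprocessing import Pool

def analyze_chunk(chunk):
    # Per-worker "analysis": count how often each item occurs in the chunk.
    counts = {}
    for item in chunk:
        counts[item] = counts.get(item, 0) + 1
    return counts

def merge(partials):
    # Reduce step: combine the per-chunk counts into one overall result.
    total = {}
    for counts in partials:
        for item, n in counts.items():
            total[item] = total.get(item, 0) + n
    return total

if __name__ == "__main__":
    data = ["ads", "movie", "ads", "book", "movie", "ads"]
    chunks = [data[i::3] for i in range(3)]  # split the data into 3 chunks
    with Pool(3) as pool:
        partials = pool.map(analyze_chunk, chunks)
    print(merge(partials))  # prints {'ads': 3, 'book': 1, 'movie': 2}
```

The merge step works because the per-chunk counts are independent, which is exactly the property that lets frameworks like Hadoop scale the same pattern to thousands of machines.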
Using Powerful GPU-Based Monte Carlo Simulation Engine to Model Larger Systems, Reduce Data Errors, Improve System Prototyping (July 22, 2014, by Jeffrey Potoff and Loren Schwiebert)
Recently, our research work got a shot in the arm because Wayne State University was the recipient of a complete high-performance compute cluster donated by Silicon Mechanics as part of its 3rd Annual Research Cluster Grant competition. The new HPC cluster gives us some state-of-the-art hardware, which will enhance the development of what we’ve been working on — a novel GPU-Optimized Monte Carlo simulation engine for molecular systems.
IBM is announcing a new software-defined storage-as-a-service offering on IBM SoftLayer, code-named Elastic Storage on Cloud, that gives organizations access to a fully supported, ready-to-run storage environment. The offering includes SoftLayer bare-metal resources and high-performance data management, and it allows organizations to move data between their on-premises infrastructure and the cloud.
Boston Limited and CoolIT Systems, Inc., congratulate the student team from EPCC at The University of Edinburgh on its first-place ranking for the highest LINPACK score in the ISC’14 Student Cluster Challenge.
RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).
By combining advanced mathematics with high-performance computing, scientists have developed a tool that allowed them to calculate a fundamental property of most atoms on the periodic table to historic accuracy — reducing error by a factor of a thousand in many cases.
Integration between Moab HPC Suite and Bright Cluster Manager provides enhanced functionality that enables users to dynamically provision HPC clusters based on both resource and workload monitoring. Combined capabilities also create a more optimal solution to managing technical computing and Big Workflow requirements.
Altair has announced that the National Supercomputing Center for Energy and the Environment (NSCEE) at the University of Nevada, Las Vegas, (UNLV) has chosen PBS Professional to replace its previous high-performance computing (HPC) workload management implementation.
Even as CPU power and memory bandwidth march forward, a major bottleneck has hampered overall supercomputing performance over the past decade: I/O interconnectivity. The vision behind Intel’s new Omni Scale Fabric is to deliver a platform for the next generation of HPC systems.
ThinkParQ and Q-Leap Networks have announced a close partnership to deliver scalable storage solutions to their customers that are easy to deploy and operate. Based on the parallel filesystem BeeGFS and the cluster OS Qlustar, this will enable customers to build extremely fast storage for a wide range of workloads.
The third annual HPCAC-ISC Student Cluster Competition (SCC) is a joint event hosted by the HPC Advisory Council (HPCAC) and the organizers of the International Supercomputing Conference (ISC). It is an excellent opportunity to showcase students’ HPC expertise in a friendly yet spirited competition. The SCC event will be held during the ISC’14 Conference and Exhibition in Leipzig, Germany.
Silicon Mechanics has announced that Wayne State University (WSU) is the recipient of the company’s 3rd Annual Research Cluster Grant, a program in which the company and its partners are donating a complete high-performance compute cluster. The university, located in midtown Detroit, is one of the nation’s 50 largest public universities.
DecisionHPC 14.1 business analytics software is a platform-neutral SaaS solution which gives insight into the value HPC is providing to the business in a “Single Pane of Glass” and provides complete HPC cluster management. Features include comprehensive cluster Scheduler Reports and an innovative Attribute Heat Map visualization capability.
The evolution of cluster technologies is expected to substantially impact emerging research areas, such as the increasingly important Data Science field. Therefore, we have chosen this year to highlight research topics expected to bring substantial progress in the way clusters can help in addressing Big Data challenges. Specific topics are dedicated to this direction within all conference tracks alongside more traditional topics. In addition, special tutorials and workshops will focus on cluster technologies for Big Data storage and processing.
The HPC Advisory Council (HPCAC), an organization for high-performance computing research, outreach and education, and the International Supercomputing Conference (ISC) have announced the 11 university teams from around the world selected for the HPCAC-ISC 2014 Student Cluster Competition, held during the ISC’14 Conference and Exhibition from June 22 to 26, 2014.
Businesses increasingly report that they are able to boost their productivity and competitiveness in the global market by deploying computer simulations and digital modeling. Such applications require high-end computing power and storage that are provided by HPC products and services. The ISC’14 two-day Industry Innovation through HPC track is designed to help engineers, manufacturers and designers gain the right set of tools and methods.
Mellanox Technologies, a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, announced a collaboration with the University of Cambridge on the Square Kilometer Array (SKA) project. The University of Cambridge selected the company’s Virtual Protocol Interconnect (VPI) solution to provide interconnect performance and protocol flexibility for SKA test-bed clusters. The University of Cambridge and Mellanox will use the compute clusters for various development projects within the SKA project, an international effort to build the world’s largest radio telescope.
Just as Netflix uses an algorithm to recommend movies we ought to see, a Stanford software system offers by-the-moment advice to thousands of server-farm computers on how to efficiently share the workload. We hear a lot about the future of computing in the cloud, but not much about the efficiency of the data centers that make the cloud possible, where clusters work together to host applications ranging from big data analytics
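A toy illustration of the by-the-moment workload advice described above (not the Stanford system itself, whose algorithm is not detailed here): each incoming job is assigned to whichever server currently carries the least load, keeping the cluster's work evenly shared. The job costs and server count below are made up for the sketch.

```python
# Greedy least-loaded scheduling sketch: a min-heap of (load, server)
# pairs lets us pick the currently least-loaded server in O(log n)
# time per incoming job.
import heapq

def balance(jobs, n_servers):
    """Assign each job cost to the currently least-loaded server.

    Returns the list of per-server total loads after all assignments.
    """
    heap = [(0.0, i) for i in range(n_servers)]
    loads = [0.0] * n_servers
    for cost in jobs:
        load, i = heapq.heappop(heap)   # least-loaded server right now
        loads[i] = load + cost          # give it the new job
        heapq.heappush(heap, (loads[i], i))
    return loads

if __name__ == "__main__":
    print(balance([5, 3, 2, 7, 1], 2))  # prints [12.0, 6.0]
```

Production load balancers weigh far more signals (CPU, memory, network, job affinity), but the core loop — measure current load, place new work on the least-loaded resource — is the same.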
Silicon Mechanics has launched its 3rd Annual Research Cluster Grant program, in which the company will donate a complete high-performance computer cluster as part of a highly competitive research grant. The competition is open to all US and Canadian qualified post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliations.
eQUEUE is designed to be an intuitive Web-based front-end job submission tool and management portal that increases cluster utilization by making it easier to run jobs from any Web browser. It has the added value of virtually eliminating errors through pre-defined job submission scripts.
Global supercomputer leader Cray announced that the Center for Computational Sciences (CCS) at the University of Tsukuba in Japan has put a Cray CS300 cluster supercomputer into production. The new Cray CS300 system has been combined with the university’s current Cray cluster supercomputer and is providing researchers and scientists with a 1.1-petaflop system for computational science.
Sponsored by Raytheon and coached by staff of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, Team Texas won the 8th annual Student Cluster Competition this year at the Supercomputing Conference (SC13) in Denver. The University Tower was lit orange to acknowledge this accomplishment.