In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It's how Facebook and Google mine your Web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.
Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple...
Alan Turing led a team of code breakers at Bletchley Park which cracked the German Enigma...
HPC-X Scalable Software Toolkit is a comprehensive software suite for high-performance...
IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput speeds between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.
Maybe you’ve sat on the lawn, even hung out on the flightline. Now, for the first time since 1997, NASA Ames Research Center is opening its gates to the public. An announcement posted on NASA.gov states: “For our 75th anniversary, we're inviting all of the Bay Area and Silicon Valley to come inside the gates and get to know NASA's center in Silicon Valley. Take a two-mile walking tour through the center and visit with Ames engineers and scientists..."
A team of Dartmouth scientists and their colleagues have devised a breakthrough laser that uses a single artificial atom to generate and emit particles of light — and may play a crucial role in the development of quantum computers, which are predicted to eventually outperform even today’s most powerful supercomputers.
Using Powerful GPU-Based Monte Carlo Simulation Engine to Model Larger Systems, Reduce Data Errors, Improve System Prototyping (July 22, 2014, by Jeffrey Potoff and Loren Schwiebert)
Recently, our research work got a shot in the arm: Wayne State University received a complete high-performance compute cluster donated by Silicon Mechanics as part of its 3rd Annual Research Cluster Grant competition. The new HPC cluster gives us state-of-the-art hardware that will accelerate development of what we’ve been working on: a novel GPU-optimized Monte Carlo simulation engine for molecular systems.
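The blurb above does not describe the engine's internals, but the heart of any Metropolis Monte Carlo code for molecular systems is the trial-move and acceptance step. The sketch below is a minimal, illustrative Python/NumPy version with hypothetical names, not the Wayne State engine; a GPU-optimized implementation would batch many such moves and evaluate energy changes in parallel on the device.

    import numpy as np

    def metropolis_step(positions, energy_fn, beta, max_disp, rng):
        # positions: (N, 3) array of particle coordinates
        # energy_fn: callable returning the total potential energy of a configuration
        # beta: 1 / (k_B * T); max_disp: maximum trial displacement
        trial = positions.copy()
        i = rng.integers(len(positions))                      # pick one particle at random
        trial[i] += rng.uniform(-max_disp, max_disp, size=3)  # propose a small displacement
        delta_e = energy_fn(trial) - energy_fn(positions)     # a real engine computes only the local change
        # accept with probability min(1, exp(-beta * delta_e))
        if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
            return trial, True
        return positions, False

    # Purely illustrative usage: particles in a harmonic trap
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, size=(100, 3))
    energy = lambda p: 0.5 * np.sum(p ** 2)
    for _ in range(1000):
        pos, accepted = metropolis_step(pos, energy, beta=1.0, max_disp=0.1, rng=rng)

Because each trial move touches only one particle, thousands of such moves (or energy evaluations) can be scheduled concurrently on a GPU, which is what makes this class of simulation a good fit for the donated hardware.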
A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.
The second ISC Big Data conference, themed “From Data to Knowledge,” builds on the success of the inaugural 2013 event. A comprehensive program has been put together by the Steering Committee under the leadership of Sverre Jarp, who officially retired as CTO of CERN openlab in March of this year.
The Cray XC30 system will be used by a nationwide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.
The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch. Researchers have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure — previously only a matter of speculation — does indeed exist.
IBM is announcing a new software-defined storage-as-a-service on IBM SoftLayer, code-named Elastic Storage on Cloud. The offering gives organizations access to a fully supported, ready-to-run storage environment that includes SoftLayer bare metal resources and high-performance data management, and allows them to move data between their on-premises infrastructure and the cloud.
Registration is now open for the 2014 ISC Cloud and ISC Big Data Conferences, which will be held this fall in Heidelberg, Germany. The fifth ISC Cloud Conference will take place in the Marriott Hotel from September 29 to 30, and the second ISC Big Data will be held from October 1 to 2 at the same venue.
Michael M. Resch, the Director of the Stuttgart High Performance Computing Center (HLRS) will be talking about “HPC and Simulation in the Cloud – How Academia and Industry Can Benefit.” His keynote is of special interest to cloud skeptics, given that prior to 2011, Resch himself was a vocal cloud pessimist. Three years later, he feels that this technology provides a practical option for many users.
How using CPU/GPU parallel computing is the next logical step - My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
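As a generic illustration of why such solvers map well onto CPU/GPU parallel hardware (this is not the author's own method), consider a single explicit finite-difference update for the 1D heat equation. Every interior grid point is updated independently from its neighbors, so the work can be spread across GPU threads; the hypothetical NumPy sketch below runs essentially unchanged on a GPU with a drop-in array library such as CuPy.

    import numpy as np

    def heat_step(u, alpha, dx, dt):
        # One explicit finite-difference step of u_t = alpha * u_xx
        # with fixed (Dirichlet) boundary values.
        # Stability requires dt <= dx**2 / (2 * alpha).
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + alpha * dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        return u_new

    # Illustrative usage: diffusion of a hot spot on a 1D grid
    nx = 101
    u = np.zeros(nx)
    u[nx // 2] = 1.0                   # initial hot spot
    dx, alpha = 1.0 / (nx - 1), 1.0
    dt = 0.4 * dx ** 2 / alpha         # well inside the stability limit
    for _ in range(500):
        u = heat_step(u, alpha, dx, dt)

The same stencil pattern, extended to two or three dimensions and to the equations listed above, is what makes CPU/GPU parallel computing the next logical step for this kind of work.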
The National Nuclear Security Administration (NNSA) and Cray have entered into a contract for a next-generation supercomputer, called Trinity, to advance the mission of the Stockpile Stewardship Program. Managed by NNSA, Trinity is a joint effort of the New Mexico Alliance for Computing at Extreme Scale, a partnership between Los Alamos and Sandia national laboratories, as part of the NNSA Advanced Simulation and Computing Program.
For the past 21 years, TOP500.org has ranked supercomputers by their performance on the LINPACK benchmark. The list is released twice a year, and each release is keenly anticipated by the industry. As with any such ranking, the top of the list garners the most attention. Focusing only on the top entries, however, limits one's understanding of the many different supercomputers in the TOP500...
In nearly every field of science, experiments, instruments, observations, sensors, simulations, and surveys are generating massive data volumes that grow at exponential rates. Discoverable, shareable data enables collaboration and supports repurposing for new discoveries, as well as cross-disciplinary research built on exchange across communities that include both scientists and citizens.
The Supercomputing Conference (SC14) awards committee has announced that “A Multi-level Algorithm for Partitioning Graphs,” co-authored by Bruce Hendrickson and Rob Leland of Sandia National Laboratories, has won the prestigious Test of Time Award. The award recognizes the most transformative and inspiring research published at the SC conference and will be presented at the SC14 awards ceremony in New Orleans, LA in November 2014.
Tandem protein mass spectrometry is one of the most widely used methods in proteomics, the large-scale study of proteins, particularly their structures and functions. Researchers in the Marcotte group at the University of Texas at Austin are using the Stampede supercomputer to develop and test computer algorithms that let them more accurately and efficiently interpret proteomics mass spectrometry data.
The FirePro W8100 professional graphics card is designed to enable new levels of workstation performance delivered by the second-generation AMD Graphics Core Next (GCN) architecture. Powered by OpenCL, it is ideal for the next generation of 4K CAD (computer-aided design) workflows, engineering analysis and supercomputing applications.
UCL researchers have developed a powerful new model to detect life on planets outside our solar system more accurately than ever before. The model focuses on methane, the simplest organic molecule and one widely acknowledged to be a sign of potential life.
Dan C. Stanzione Jr. has been named executive director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the new post July 1, 2014.
The A-Class Supercomputer System is a highly integrated platform designed to simplify the transition to multi-petaflops computations. The system features two independent head nodes, 256 compute nodes and 60 InfiniBand and Ethernet switches, all tightly coupled into a single, powerful computing resource.
Moab HPC Suite-Enterprise Edition 8.0 (Moab 8.0) is designed to enhance Big Workflow by processing intensive simulations and big data analysis to accelerate insights. It delivers dynamic scheduling, provisioning and management of multi-step/multi-application services across HPC, cloud and big data environments. The software suite bolsters Big Workflow’s core services: unifying data center resources, optimizing the analysis process and guaranteeing services to the business.
AppliedMicro has announced the readiness of the X-Gene Server on a Chip, based on the 64-bit ARMv8-A architecture, for High Performance Computing (HPC) workloads.
Boston Limited and CoolIT Systems, Inc., congratulate the student team from EPCC at The University of Edinburgh on its first-place ranking for the Highest LINPACK in the ISC14 Student Cluster Challenge.
"Big data" is playing an increasingly big role in the renewable energy industry and the transformation of the nation's electrical grid, and no single entity provides a better tool for such data than the Energy Department's Energy Systems Integration Facility (ESIF) located on the campus of the National Renewable Energy Laboratory (NREL).