The complexity of high-end computing technology makes it largely invisible to the public. HPC simply lacks the Sputnik sex appeal of the space race, to which current global competition in supercomputing is often compared. Rather, it is seen as the exclusive realm of academia and national labs. Yet, its impact reaches into almost every aspect of daily life. Organizers of SC14 had this reach in mind when selecting the “HPC Matters” theme.
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical computing challenges being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public domain scientific applications, to prepare these critically important codes for multi-core ...
It should come as no surprise to readers of this column that JMP is a personal favorite and, along with SAS, one of my most-used programs. There are a number of reasons for this. Among the many advantages a package can offer, breadth and depth of the statistics, quality of the diagnostics, interconnectivity of graphics with both data and analyses, and ease of use are uppermost in my mind as the most desirable.
Déjà Vu All Over Again: Knowledge management is not an IT problem, but a challenge to the culture of an organization. November 7, 2014 | by Michael H. Elliott
In the late 1990s and the early 2000s, “Knowledge Management” (KM) was all the rage. Companies invested millions in enterprise content management (ECM) systems and teams of KM practitioners. It was believed that codifying all knowledge assets across the enterprise would lead to new insights and higher levels of innovation.
Paul Messina, the highly motivated organizer of the Argonne Training Program on Extreme-Scale Computing (ATPESC), reflects on what makes the program unique and a can’t-miss opportunity for the next generation of HPC scientists. ATPESC is an intense, two-week program that covers most of the topics and skills necessary to conduct computational science and engineering research on today’s and tomorrow’s high-end computers.
The purpose of this series is to discuss the impact of GMP (Good Manufacturing Practice) regulations on cloud computing and to debate some of the regulatory issues facing an organization contemplating this approach. In this part, we look at a process to select a suitable hosting provider that can demonstrate compliance with GMP and possession of qualified IT infrastructure.
The purpose of this series is to discuss the impact of GMP (Good Manufacturing Practice) regulations on cloud computing and to debate some of the regulatory issues facing an organization contemplating this approach. In this part, we look at the options for auditing a cloud service provider.
The purpose of this series is to discuss the impact of GMP (Good Manufacturing Practice) regulations on cloud computing and to debate some of the regulatory issues facing an organization contemplating this approach. In this part, we look at the SaaS (Software as a Service) hosting options available to consider for regulated users and the requirement for qualified IT infrastructure.
The purpose of this series is to discuss the impact of GMP (Good Manufacturing Practice) regulations on cloud computing and to debate some of the regulatory issues facing an organization contemplating this approach. In this part, we look at the applicable regulations.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.
Recently, the Harvard-Smithsonian Center for Astrophysics unveiled an unprecedented simulation of the universe’s development. Called the Illustris project, the simulation depicts more than 13 billion years of cosmic evolution across a cube of the universe that’s 350 million light-years on each side. But why was it important to conduct such a simulation?
The Research Data Alliance (RDA) seeks to build the social and technical bridges that enable open sharing and reuse of data, so as to address the cross-border and cross-disciplinary challenges faced by researchers. This September, the RDA will host its Fourth Plenary Meeting. Ahead of the event, iSGTW spoke to Gary Berg-Cross, general secretary of the Spatial Ontology Community of Practice and a member of the US advisory committee for the RDA.
Albert Einstein's work laid the foundation for modern quantum mechanics. His analysis of the “spookiness” of quantum mechanics opened up a whole range of applications, including quantum teleportation and quantum cryptography, but he was never completely convinced by it, and that story is as fascinating as the theory he attempted to nail down. Quantum mechanics is downright bizarre...
Like a Formula One race car stuck in a traffic jam, HPC hardware performance is frequently hampered by HPC software. This is because some of the most widely used application codes have not been updated in years, if ever, leaving them unable to leverage advances in parallel systems. As hardware moves toward exascale, the imbalance between hardware and software will only get worse. The problem of updating essential scientific ...
A recent United States Pharmacopoeia (USP) stimulus to the revision process paper [1] has taken a life cycle approach to the development, validation and use of analytical procedures. Do chromatography data systems (CDS) have adequate functions to help analytical scientists meet these requirements when the pharmacopoeia is updated?