Qlucore is a software platform for the analysis of genomics, proteomics and related data. As with most statistical and genomics software, it generates an immediate graphic for most analyses. Its specific areas of use include gene expression, protein arrays, DNA methylation, miRNA, proteomics, and pattern and structure identification in multivariate data.
Although industries rarely change working processes unless there is a mandatory need to do so, major milestones are expected in 2015 in the push to adopt data and integration standards across the scientific community. The need to deploy these integration standards, enabling efficient sharing of knowledge with internal and external partners, is reinforced by regulatory bodies.
A sense of urgency and economic impact is now being emphasized: the “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering application software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.
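As a minimal sketch of what that division of work can look like in practice (a hypothetical example, assuming mpi4py and NumPy are available; the problem size and kernel are purely illustrative), each MPI rank below works only on its own slice of a larger problem, and the partial results are combined at the end:

```python
# Hypothetical sketch: dividing one computation across many processors (assumes mpi4py, NumPy).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000  # total (illustrative) problem size
counts = [N // size + (1 if r < N % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

local_sum = np.sum(np.sqrt(local))          # each rank touches only its own slice
total = comm.reduce(local_sum, op=MPI.SUM)  # partial results combined on rank 0

if rank == 0:
    print(f"sum over {size} ranks: {total:.3f}")
```

Restructuring legacy serial codes around this kind of data decomposition, rather than bolting parallelism on afterward, is the essence of the reengineering effort described above.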
HPC has always embraced the leading edge of technology and, as such, acts as the trailblazer and scout for enterprise and business customers. HPC has proven and matured the capabilities of previously risky devices, such as GPUs, that enterprise customers now leverage to create competitive advantage. GPUs have moved beyond “devices with potential” to become production devices used for profit generation.
The introduction of newer sequencing methodologies, DNA microarrays and high-throughput technology has resulted in a deluge of large data sets that require new strategies to clean, normalize and analyze the data. All of these and more are covered in approximately 300 pages with extraordinary clarity and minimal mathematics.
An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development. NWChem is an open-source, high-performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Laboratory in Richland, WA. I recently visited with Kowalski, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.
A decade of close scrutiny has shed much more light on the technical computing needs of small and medium enterprises (SMEs), but they are still shrouded in partial darkness. That’s hardly surprising for a diverse global group with millions of members ranging from automotive suppliers and shotgun genomics labs to corner newsstands and strip mall nail salons.
Folk wisdom can sometimes be right on target. For example, there’s that old bromide about leading a horse to water. In this case, the water is high performance computing, and the reluctant equine is the huge base of small and medium-sized manufacturers (SMMs) in the U.S. According to the National Center for Manufacturing Sciences, there are approximately 300,000 manufacturers in the U.S., and over 95 percent of them can be characterized as SMMs.
Current research in digital image forensics is developing better ways to convert image files into frequency representations, for example using wavelet transforms in addition to the more traditional cosine transforms, along with more sensitive methods for determining whether each area of an image belongs to the whole.
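To make the frequency-domain idea concrete, here is a minimal sketch (assuming NumPy, SciPy, and PyWavelets; the random array stands in for a grayscale image region, and the block size is illustrative) that produces the cosine- and wavelet-domain coefficients such forensic methods typically inspect:

```python
# Sketch: two frequency-domain views of an image region (assumes numpy, scipy, pywt).
import numpy as np
from scipy.fft import dctn
import pywt

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for a grayscale image region

# Traditional cosine transform: 2-D DCT of an 8x8 block, JPEG-style.
block = image[:8, :8]
dct_coeffs = dctn(block, norm="ortho")

# Wavelet transform: one-level 2-D decomposition into approximation and detail bands.
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")

print(dct_coeffs.shape, approx.shape)  # (8, 8) and (32, 32)
```

Forensic techniques then look for inconsistencies in these coefficients (for example, block-to-block statistics that do not match the rest of the image) to flag regions that may not belong to the whole.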
Recent gender diversity reports from Google, Facebook and Apple (to name a few) have spurred a number of positive efforts to bring more women into computer science, including the SC14 Women in High Performance Computing workshop, NVIDIA’s Women who CUDA campaign, and Google’s $50M Women Who Code program.
The complexity of high-end computing technology makes it largely invisible to the public. HPC simply lacks the Sputnik sex appeal of the space race, to which current global competition in supercomputing is often compared. Rather, it is seen as the exclusive realm of academia and national labs. Yet, its impact reaches into almost every aspect of daily life. Organizers of SC14 had this reach in mind when selecting the “HPC Matters” theme.
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical HPC workloads being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public-domain scientific applications and prepare these critically important codes for multi-core and many-core systems.
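As a rough illustration of the kind of modernization involved (a hypothetical example using only the Python standard library and NumPy; the kernel and chunk count are invented for illustration), the same computation is shown below first in legacy single-core style and then spread across several cores:

```python
# Hypothetical sketch: a serial kernel and a multi-core version of the same work.
from multiprocessing import Pool
import numpy as np

def kernel(chunk):
    # Illustrative compute-heavy kernel applied to one chunk of data.
    return np.sum(np.sin(chunk) ** 2)

if __name__ == "__main__":
    data = np.linspace(0.0, 100.0, 4_000_000)

    # Legacy style: a single core walks the entire array.
    serial = kernel(data)

    # Modernized style: split the work so several cores handle it concurrently.
    chunks = np.array_split(data, 8)
    with Pool() as pool:
        parallel = sum(pool.map(kernel, chunks))

    print(serial, parallel)  # the two results agree; only the core usage differs
```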
It should come as no surprise to readers of this column that JMP is a personal favorite and, along with SAS, one of my most-used programs. There are a number of reasons for this. Of the many advantages a package can offer, the breadth and depth of the statistics, the quality of the diagnostics, the interconnectivity of graphics with both data and analyses, and overall ease of use are uppermost in my mind as the most desirable.
Déjà Vu All Over Again: Knowledge management is not an IT problem, but a challenge to the culture of an organization. November 7, 2014 | by Michael H. Elliott
In the late 1990s and early 2000s, “Knowledge Management” (KM) was all the rage. Companies invested millions in enterprise content management (ECM) systems and teams of KM practitioners. It was believed that codifying all knowledge assets across the enterprise would lead to new insights and higher levels of innovation.
Paul Messina, the highly motivated organizer of the Argonne Training Program on Extreme-Scale Computing (ATPESC), reflects on what makes the program unique and a can’t-miss opportunity for the next generation of HPC scientists. ATPESC is an intense, two-week program that covers most of the topics and skills necessary to conduct computational science and engineering research on today’s and tomorrow’s high-end computers.