For those on the front lines of treating cancer, speed and precision are key to patients’ survival. Pediatric cancer researchers have been making incredible strides in accelerating the delivery of new diagnostic and treatment options. Supercomputer-powered genetic diagnosis harnesses high-throughput genomic and proteomic methods and is playing a key role in improving outcomes for children with genetic diseases.
The author of this wonderful text delivers a brief, easy-to-absorb, yet very comprehensive treatment of modeling real-world data with Maple. Maple is software for performing mathematics, with a none-too-steep learning curve. In the introduction, the author is quick to point out that this is neither a detailed textbook of mathematical modeling nor of Maple. It is, however, a very well-written manual of introductory modeling and of the use of Maple.
Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant, including potentially saving megawatts of power or reducing the time to solution.
Quick, think of a four-letter name beginning with “N” for a federal agency involved in space science. Though NASA or NOAA would rightfully pop into mind first, crossword puzzle aficionados should know that NIST would be a correct answer as well — because the National Institute of Standards and Technology has been an integral part of readying technology for blastoff for decades.
This is not a text for the novice. For math/statistics aficionados, however, there is much to be had. The book’s great strength lies in two areas: the first is Peter Congdon’s generation of an excellent bibliography of the most modern techniques available, and the second is his (slightly) more straightforward explanation of the strengths and weaknesses of these techniques, along with suggestions for optimizing the results.
Qlucore is a software platform for the analysis of genomics, proteomics and related data. As with most statistical and genomics software, it generates an immediate graphic for most analyses. Its specific areas of use include gene expression, protein arrays, DNA methylation, miRNA, proteomics, and pattern and structure identification in multivariate data.
Although industries won’t change working processes unless there is a mandatory need to do so, major milestones are expected in 2015 in the push to adopt data and process standards across the scientific community. The need to deploy these integration standards to enable efficient sharing of knowledge across internal and external partners is reinforced by regulatory bodies.
A sense of urgency and economic impact are emphasized: the “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering software to efficiently divide computational problems among the thousands of processors that make up next-generation many-core computing platforms.
HPC has always embraced the leading edge of technology and, as such, acts as the trailblazer and scout for enterprise and business customers. HPC has highlighted and matured the capabilities of previously risky devices, like GPUs, that enterprise customers now leverage to create competitive advantage. GPUs have moved beyond “devices with potential” to “production devices” that are used for profit generation.
The introduction of newer sequencing methodologies, DNA microarrays and high-throughput technology has resulted in a deluge of large data sets that require new strategies to clean, normalize and analyze the data. All of these and more are covered in approximately 300 pages with extraordinary clarity and minimal mathematics.
An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development. NWChem is an open source high performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Laboratory (PNNL) in Richland, WA. I recently visited with Karol Kowalski, Capability Lead for NWChem Development, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.
A decade of close scrutiny has shed much more light on the technical computing needs of small and medium enterprises (SMEs), but they are still shrouded in partial darkness. That’s hardly surprising for a diverse global group with millions of members ranging from automotive suppliers and shotgun genomics labs to corner newsstands and strip mall nail salons.
Folk wisdom can sometimes be right on target. For example, there’s that old bromide about leading a horse to water. In this case, the water is high performance computing, and the reluctant equine is the huge base of small- to medium-sized manufacturers (SMMs) in the U.S. According to the National Center for Manufacturing Sciences, there are approximately 300,000 manufacturers in the U.S. Over 95 percent of them can be characterized as SMMs.
Current research in digital image forensics is developing better ways to convert image files into frequency representations, such as wavelet transforms in addition to the more traditional cosine transforms, as well as more sensitive methods for determining whether each area of an image belongs to the whole.
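To make the wavelet idea concrete, here is a minimal sketch of one level of the Haar wavelet transform, the simplest wavelet used in such frequency analyses. The function name and the toy signal are illustrative, not from the article; real forensic pipelines apply multi-level 2D transforms (for example, via the PyWavelets library) to image pixel data.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform (illustrative sketch).

    Splits a signal of even length into pairwise averages (the
    low-frequency approximation) and pairwise half-differences
    (the high-frequency detail). Tampered image regions often show
    anomalous statistics in these detail coefficients.
    """
    lows = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    highs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return lows, highs

# Hypothetical 4-sample signal for demonstration
lows, highs = haar_step([9, 7, 3, 5])
# lows  -> [8.0, 4.0]  (smooth approximation)
# highs -> [1.0, -1.0] (local detail)
```

Applying the same step recursively to the low-pass output yields the multi-resolution decomposition that wavelet-based forensic methods examine.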
Recent gender diversity reports from Google, Facebook and Apple (to name a few) have spurred a number of positive efforts to bring more women into computer science, including the SC14 Women in High Performance Computing workshop, NVIDIA’s Women who CUDA campaign, and Google’s $50M Women Who Code program.