High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, to provide tools and techniques for adapting legacy codes, and to increase application performance on Intel Xeon processors.
The IEEE Technology Time Machine (TTM) is going further into the future. Now in its third year,...
For centuries, scientific research has been about data, and as data in research continues to...
On Tuesday, September 23, Scientific Computing will host a live panel discussion that examines how researchers and engineers are looking for ways to make product innovation, research and data insight faster and more competitive — including adopting or expanding their use of high performance computing to more users and projects. This educational webinar will explore real successes, research and proven approaches.
Cloud computing is not only the latest revolution in the Information and Communication Technology (ICT) world, but also a key enabler of innovation and economic development. Within the framework of the CLOUDS project, Madrid-based researchers have made crucial scientific advances in the state of the art of cloud computing.
Ah, sad news in the Hice household. The patient is terminal, and I’m keeping it alive on life support. I keep wallowing in self-pity and asking myself, “Why me?” I feel as though I’m somehow responsible for the illness. Well, OK, I’m definitely responsible, why lie? I may as well have been sharing blood-soaked hypos with a drug addict, but what I did was equally careless. In one brief lapse of concentration, I didn’t examine the URL ...
The command center: any place that provides centralized command, a source of leadership and guidance to the rest of the organization. That’s what I see the concept of the electronic laboratory notebook (ELN) developing into in research and development (R&D) across all sectors. Furthermore, it won’t be just a notebook, but a ‘workplace.’
In my 15 or so years leading the charge for Ethernet into higher speeds, “high performance computing” and “research and development” have always been two areas the industry could count on to need higher speeds for their networking applications. For example, during the IEEE 802.3 Higher Speed Ethernet Study Group, which looked beyond 10GbE and ultimately defined 40 Gigabit and 100 Gigabit Ethernet ...
Enabling Innovation and Discovery through Data-Intensive High Performance Cloud and Big Data Infrastructure
July 29, 2014 | by George Vacek, DataDirect Networks
As the size and scale of life sciences datasets increase — think large-cohort longitudinal studies with multiple samples and multiple protocols — so does the challenge of storing, interpreting and analyzing this data. Researchers and data scientists are under increasing pressure to identify the most relevant and critical information within massive and messy datasets, so they can quickly make the next discovery.
The following first appeared as a guest post by Nicholas Taylor, Web Archiving Service Manager for Stanford University Libraries. The Internet Archive Wayback Machine has been mentioned in several news articles within the last week for having archived a since-deleted blog post in which a Ukrainian separatist leader touted shooting down a military transport plane that may actually have been Malaysia Airlines Flight 17.
Using Powerful GPU-Based Monte Carlo Simulation Engine to Model Larger Systems, Reduce Data Errors, Improve System Prototyping
July 22, 2014 | by Jeffrey Potoff and Loren Schwiebert
Recently, our research work got a shot in the arm: Wayne State University received a complete high-performance compute cluster donated by Silicon Mechanics as part of its 3rd Annual Research Cluster Grant competition. The new HPC cluster gives us state-of-the-art hardware, which will accelerate the development of what we’ve been working on — a novel GPU-optimized Monte Carlo simulation engine for molecular systems.
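To make the idea concrete: this is not the authors’ engine, but a minimal, hedged CPU sketch of the Metropolis Monte Carlo move that such an engine accelerates on GPUs. All function names and parameters here are illustrative, and the Lennard-Jones potential is used only as a stand-in pair interaction (no cutoff, no periodic boundaries).

```python
import math
import random

def lj_energy(positions, eps=1.0, sigma=1.0):
    """Total Lennard-Jones pair energy of 3-D points (no cutoff, no periodic box)."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            sr6 = (sigma * sigma / r2) ** 3        # (sigma/r)^6
            e += 4.0 * eps * (sr6 * sr6 - sr6)     # 4*eps*[(s/r)^12 - (s/r)^6]
    return e

def metropolis_step(positions, beta=1.0, max_disp=0.1, rng=random):
    """One Metropolis move: displace a random particle, accept with min(1, exp(-beta*dE))."""
    i = rng.randrange(len(positions))
    old = positions[i]
    e_old = lj_energy(positions)
    positions[i] = tuple(c + rng.uniform(-max_disp, max_disp) for c in old)
    d_e = lj_energy(positions) - e_old
    if d_e > 0 and rng.random() >= math.exp(-beta * d_e):
        positions[i] = old        # reject: restore the previous coordinates
        return False
    return True                   # accept the trial move
```

A GPU engine would parallelize the energy evaluation (the O(n²) inner loops here) across thousands of threads, which is what makes larger molecular systems tractable.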
How using CPU/GPU parallel computing is the next logical step: My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
For the past 21 years, TOP500.org has been ranking supercomputers by their performance on the LINPACK benchmark. Published twice a year, each release of the list is eagerly anticipated by the industry. As with any such ranking, the top of the list garners the most attention. However, focusing only on the top of the list limits one’s understanding of the range of supercomputers in the TOP500...
Even as CPU power and memory bandwidth march forward, one bottleneck has hampered overall supercomputing performance for the past decade: I/O interconnectivity. The vision behind Intel’s new Omni Scale Fabric is to deliver a platform for the next generation of HPC systems.
The recent PRACE Days conference in Barcelona provided powerful reminders that massive data doesn't always become big data — mainly because moving and storing massive data can cost massive money. PRACE is the Partnership for Advanced Computing in Europe, and the 2014 conference was the first to bring together scientific and industrial users of PRACE supercomputers located in major European nations.
An energy-efficient supercomputer cooled with warm water. How cool is that? Enlightenment has long been the ultimate pursuit of artists, philosophers, scientists, theologians and other sentient minds. Whether delivering proof to support their theses or investigating a perplexing problem before them, they have poured vast amounts of energy into the pursuit. Now, energy itself has become the problem.
Internet regulation in the United States is potentially facing a major change. FCC Internet Neutrality rules — also referred to as Net Neutrality rules — currently apply, but thanks to pressure from Internet Service Providers (ISPs), legislators and recent court rulings, that might change. You have undoubtedly heard the term Net Neutrality before, but may be at a loss regarding what it means or what its implications are.
Since June of 1993, the Top500 List has been presenting information on the world’s 500 most powerful computer systems. The statistics about these systems have proven to be of substantial interest to computer manufacturers, users and funding authorities. While interest in the list is focused on the computers, less attention is paid to the countries hosting them. Let’s take a look at the Top500 List countries. Who are they?