Researchers have long believed that supercomputers give universities a competitive edge in scientific research, but now they have some hard data showing it’s true. A Clemson University team found that universities with locally available supercomputers were more efficient in producing research in critical fields than universities that lacked supercomputers.
For the fourth consecutive time, Tianhe-2, a supercomputer developed by China’s National...
Intersect360 Research, an industry analyst firm providing market forecasting, research and...
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers...
For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"
For the past 21 years, TOP500.org has been ranking supercomputers by their performance on the LINPACK benchmark. Published twice a year, the release of the list is eagerly anticipated by the industry. As with any such ranking, the top of the list garners the most attention. However, focusing only on the top of the list limits one’s understanding of the different supercomputers in the TOP500...
For the third consecutive time, Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, has retained its position as the world’s No. 1 system with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark, according to the 43rd edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers.
An energy-efficient supercomputer cooled with warm water. How cool is that? Enlightenment has long been the ultimate pursuit of artists, philosophers, scientists, theologians and other sentient minds. Whether delivering the proof to support their theses or investigating a perplexing problem before them, they have poured vast amounts of energy into the effort. Energy has now become the problem.
Since June of 1993, the Top500 List has been presenting information on the world’s 500 most powerful computer systems. The statistics about these systems have proven to be of substantial interest to computer manufacturers, users and funding authorities. While interest in the list is focused on the computers, less attention is paid to the countries hosting them. Let’s take a look at the Top500 List countries. Who are they?
The University of Maryland has unveiled Deepthought2, one of the nation's fastest university-owned supercomputers, to support advanced research activities ranging from studying the formation of the first galaxies to simulating fire and combustion for fire protection advancements. Developed with high-performance computing solutions from Dell, Deepthought2 has a processing speed of about 300 teraflops.
I’m excited to announce that Scientific Computing has created an International Supercomputing Conference (ISC) resource site, in cooperation with ISC Events. The ISC Conference page is a one-stop destination that offers comprehensive information on all things ISC, collected in one easy-to-navigate place. This valuable resource is specifically designed to help you quickly find everything you need ...
The last time the IEEE 802.3 Working Group addressed the “Next Rate” of Ethernet was when 10 GbE was Ethernet’s fastest rate. That effort resulted in the development of two new rates — 40 GbE and 100 GbE. The justification for two rates was that 40 GbE was intended to provide the upgrade path for servers, while 100 GbE would target network aggregation applications.
Lawrence Berkeley National Laboratory's Erich Strohmaier is an internationally known expert in assessing and improving the performance of high performance computing systems. How a system performs over the long haul, rather than its short-term potential, is what matters. And as one of the co-founders of the twice-yearly TOP500 list of the world's most powerful supercomputers, Strohmaier has found that the most important information comes from looking at the changes in the list over time.
Jack Dongarra specializes in numerical algorithms in linear algebra, parallel computing, use of advanced computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sidney Fernbach Award in 2004, and in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing.
Dr. Horst Simon is the Deputy Director of Lawrence Berkeley National Laboratory (Berkeley Lab). Simon’s research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing.
Satoshi Matsuoka is a Professor at the Global Scientific Information and Computing Center of Tokyo Institute of Technology (GSIC). He is the leader of the TSUBAME series of supercomputers, which became the 4th fastest in the world on the TOP500 and was awarded the title of "Greenest Production Supercomputer in the World" by the Green500 in November 2010 and June 2011.
The TOP500 project was launched in 1993 to improve and renew the Mannheim supercomputer statistics, which had been in use for seven years. Our simple TOP500 approach does not define “supercomputer” as such, but we use a benchmark to rank systems and to decide on whether or not they qualify for the TOP500 list.
The TOP500 project was started in 1993 to provide a reliable basis for tracking and detecting trends in high-performance computing. Twice a year, a list of the sites operating the 500 most powerful computer systems is assembled and released. The best performance on the Linpack benchmark is used as the performance measure for ranking the computer systems.
Advance registration at reduced rates is now open for the 2014 International Supercomputing Conference (ISC’14), which will be held June 22-26 in Leipzig, Germany. By registering now, ISC’14 attendees can save over 25 percent off the onsite registration rates at the Leipzig Congress Center.
The 2014 International Supercomputing Conference is now accepting submissions, including tutorial and birds-of-a-feather (BoF) session proposals, research paper and poster abstracts, and student volunteer program applications. The ISC’14 Call for Papers is supported by the IEEE Germany Section.
The HPC Advisory Council (HPCAC), a leading organization for high-performance computing research, outreach and education, and the International Supercomputing Conference (ISC) have announced the 11 university teams from around the world selected for the HPCAC-ISC 2014 Student Cluster Competition, to be held during the ISC’14 program of events.
It is easy to cast jealous eyes towards the most powerful supercomputers in the world, e.g. Tianhe-2 with its three million cores of Xeon and Phi processors, or Titan with its 18,000 GPUs, and wish you had the budget to deploy such facilities. However, most HPC service managers and users must return from such whims, plummeting back to the much smaller-scale HPC that is their reality.
IBM supercomputers have taken the top three spots on the latest Graph500 list, released during the Supercomputing Conference in Denver, Colo. The biannual list ranks high-performance computing systems on their ability to process massive amounts of big data.
Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, retained its position as the world’s No. 1 system with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark, according to the 42nd edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers. The list was announced Nov. 18 at the SC13 conference in Denver, CO.
ISC is an IEEE-recognized global conference and exhibition for high performance computing, networking and storage. It provides a platform for learning and networking for some 2,500 like-minded HPC researchers, technology leaders, scientists and IT decision-makers.
Jack Dongarra of the University of Tennessee will receive the ACM-IEEE Computer Society Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high performance computing (HPC). His work has led to the development of major software libraries of algorithms and methods that boost performance and portability in HPC environments.
The HPC Advisory Council (HPCAC) and the International Supercomputing Conference call on high school and undergraduate students from around the world to submit their applications for the Student Cluster Competition. Submission deadline: November 1, 2013.
The adage that a supercomputer is a complicated device that turns a compute-bound problem into an I/O-bound problem is becoming ever more apparent in the age of big data. The trick to avoiding the painful truth in this adage is to ensure that the application workload is dominated by streaming I/O operations.
The HPC market is entering a kind of perfect storm. For years, HPC architectures have tilted farther and farther away from an optimal balance between processor speed, memory access and I/O speed. As successive generations of HPC systems have upped peak processor performance without corresponding advances in per-core memory capacity and speed, the systems have become increasingly compute-centric.