For the US Army, and for the DoD and intelligence community as a whole, GIS Federal developed an innovative approach to quickly filter, analyze, and visualize big data from hundreds of data providers, with a particular emphasis on geospatial data.
The complexity of high-end computing technology makes it largely invisible to the public. HPC...
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched...
Researchers studying iron-based superconductors are combining novel electronic structure...
As the SC14 conference approaches, Intel is preparing to host the second annual Intel Parallel Universe Computing Challenge (PUCC) from November 17 to 20, 2014. Each of eight participating teams will play for a charitable organization, which will receive a $26,000 donation from Intel in recognition of the 26th anniversary of the Supercomputing conference.
Two research teams have found distinct solutions to a critical challenge that has held back the realization of super powerful quantum computers. The teams, working in the same laboratories at UNSW Australia, created two types of quantum bits, or "qubits" — the building blocks for quantum computers — that each process quantum data with an accuracy above 99 percent.
The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, and provides tools and techniques for adapting legacy codes and increasing application performance on Intel Xeon processors.
The GS7K™ appliance is a scale-out parallel file system solution complete with enterprise-class features, NAS access and cloud tiering capabilities. The system includes fully integrated enterprise data management and protection capabilities in a simple, all-in-one, scale-out appliance.
Mathematica Online operates completely in the cloud and is accessible through any modern Web browser, with no installation or configuration required, and is completely interoperable with Mathematica on the desktop. Users can simply point a Web browser at Mathematica Online, log in, and immediately start using the Mathematica notebook interface.
A new $1.9 million study at the University of Michigan seeks to make low-dose computed tomography scans a viable screening technique by speeding up the image reconstruction from half an hour or more to just five minutes. The advance could be particularly important for fighting lung cancers, as symptoms often appear too late for effective treatment.
SDSC Joins Intel Parallel Computing Centers Program with Focus on Molecular Dynamics, Neuroscience and Life Sciences
September 12, 2014, by San Diego Supercomputer Center
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, is working with semiconductor chipmaker Intel to further optimize research software to improve the parallelism, efficiency, and scalability of widely used molecular and neurological simulation technologies.
As part of the Cray CS cluster supercomputer series, Cray offers the CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, designed for massively parallel computing workloads.
The 3D Space Charge module uses code that is optimized for the shared memory architecture of standard PCs and workstations with multi-core processors. Although the speed benefit of parallel processing depends on model complexity, highly iterative and computationally intensive analysis tasks can be greatly accelerated by the technique.
Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Specialized computing methods can achieve this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculation process.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery's (ACM) awards committee has four months left to choose the winner of the prestigious 2014 Gordon Bell Prize. The winner of this prize will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.
How using CPU/GPU parallel computing is the next logical step - My work in computational mathematics is focused on developing new, paradigm-shifting ideas in numerical methods for solving mathematical models in various fields. This includes the Schrödinger equation in quantum mechanics, the elasticity model in mechanical engineering, the Navier-Stokes equation in fluid mechanics, Maxwell’s equations in electromagnetism...
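The numerical methods named above share a common computational core: time-stepping a discretized PDE over large arrays, which is exactly the workload that maps well to both CPU and GPU parallelism. As a hedged sketch (not the author's actual method), here is a vectorized explicit finite-difference step for the 1-D heat equation; written against the NumPy array API, the same kernel can run on a GPU by swapping in a NumPy-compatible array library such as CuPy.

```python
# Vectorized explicit Euler step for the 1-D heat equation u_t = u_xx.
# The array-slicing formulation avoids Python-level loops, so the same
# code parallelizes on CPU (NumPy) or GPU (a drop-in array library).
import numpy as np

def heat_step(u, dt, dx):
    """One explicit time step; interior points only, fixed boundaries."""
    un = u.copy()
    un[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

# Usage: diffuse an initial Gaussian spike on a 101-point grid.
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200 * (x - 0.5) ** 2)      # initial condition: narrow spike
dt, dx = 2e-5, x[1] - x[0]             # dt < dx**2 / 2 for stability
for _ in range(500):
    u = heat_step(u, dt, dx)
```

The stability bound dt < dx²/2 is the standard CFL-type condition for the explicit scheme; implicit or higher-order methods relax it at the cost of solving linear systems each step.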
RSC Group, developer and integrator of innovative high performance computing (HPC) and data center solutions, made several technology demonstrations and announcements at the International Supercomputing Conference (ISC’14).
By combining advanced mathematics with high-performance computing, scientists have developed a tool that allowed them to calculate a fundamental property of most atoms on the periodic table to historic accuracy — reducing error by a factor of a thousand in many cases.
Cambridge’s COSMOS supercomputer, the largest shared-memory computer in Europe, has been named by computer giant Intel as one of its Parallel Computing Centers, building on a long-standing collaboration between Intel and the University of Cambridge.
Solving some of the biggest challenges in society, industry and sciences requires dramatic increases in computing efficiency. Many HPC customers are sitting on incredible untapped compute reserves and they don’t even know it. The very people who are focused on solving the world’s biggest problems with high-performance computing are often only using a small fraction of the compute capability their systems provide. Why? Their software ...
The workshop will bring together researchers and practitioners from industry, academia, and the wider community using Python in all aspects of high performance and scientific computing. The goal is to present Python applications from mathematics, science, and engineering, to discuss general topics regarding the use of Python (such as language design and performance issues), and to share experience using Python in scientific computing education.
Steve Finn provides technology assessment and telecommunications support services for a community of HPC users. His background includes vectorization and parallelization of application codes, benchmarking, and HPC system acquisitions.
Paul Buerger, HPC industry expert, spent most of his career at Ohio State University and Ohio Supercomputer Center. During those 40 years, he worked with everything from minicomputers to mainframes to vector supercomputers to HPC clusters.
As modern computer systems become more powerful, utilizing as many as millions of processor cores in parallel, Intel is looking for new ways to efficiently use these high performance computing (HPC) systems to accelerate scientific discovery. As part of this effort, Intel has selected Georgia Tech as the site of one of its Parallel Computing Centers.
William Gropp received his B.S. in Mathematics from Case Western Reserve University in 1977, an M.S. in Physics from the University of Washington in 1978, and a Ph.D. in Computer Science from Stanford in 1982. He held the positions of assistant (1982-1988) and associate (1988-1990) professor in the Computer Science Department at Yale University.
Ganesh L. Gopalakrishnan earned his PhD in Computer Science from Stony Brook University in 1986, joining Utah the same year. He was Visiting Assistant Professor at the University of Calgary (1988), and conducted sabbatical work at Stanford University (1995), Intel, Santa Clara (2002), and Utah (2009), developing a Parallel and Concurrent Programming Curriculum with Microsoft Research.
Prof. Dr. Arndt Bode holds the chair for computer technology and computer organization at Technische Universität München. He runs a variety of research projects in the field of computer architecture, tools, and applications for parallel and distributed systems. He is Director of the Leibniz Supercomputing Centre (LRZ), the German supercomputer center of the Bavarian Academy of Sciences.