Supercomputers

The Lead

This still from a KIPAC visualization shows a jet of energy and particles streaming from a black hole. (Visualization: Ralf Kaehler / Simulation: Jonathan McKinney, Alexander Tchekhovskoy, and Roger Blandford)

Dramatically Intricate 3-D Universes Tell Important Stories about the Cosmos

August 21, 2014 3:16 pm | by Kelen Tuttle, Kavli Foundation | Articles | Comments

Recently, the Harvard-Smithsonian Center for Astrophysics unveiled an unprecedented simulation of the universe’s development. Called the Illustris project, the simulation depicts more than 13 billion years of cosmic evolution across a cube of the universe that’s 350 million light-years on each side. But why was it important to conduct such a simulation?

12 Teams Set to Compete in SC14 Student Cluster Competition

August 20, 2014 10:50 am | by SC14 | News | Comments

As university students around the world prepare to head back to school this fall, 12 groups are...

Invisible Particles Provide First Indirect Evidence of Strange Baryons

August 20, 2014 10:17 am | by Brookhaven National Laboratory | News | Comments

New supercomputing calculations provide the first evidence that particles predicted by the...

Blue Waters Project to Offer Graduate Visualization Course in Spring 2015

August 18, 2014 12:12 pm | by NCSA | News | Comments

NCSA’s Blue Waters project will offer a graduate course on High Performance Visualization for...


On Tuesday, August 12, through Thursday, August 14, the University of Tennessee team performed a "mock run" for the SC14 Student Cluster Competition in which they compiled, optimized and ran test cases for applications on a supercomputing cluster.

SC14 Student Cluster Competition Mock Run Allows for Self-assessment

August 18, 2014 11:46 am | by University of Tennessee | News | Comments

A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference, November 16 to 21, 2014, in New Orleans.

Advanced techniques such as "structured placement," shown here and developed by Markov's group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors.

Reviewing Frontier Technologies to Determine Fundamental Limits of Computer Scaling

August 15, 2014 12:31 pm | by NSF | News | Comments

Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

Advertisement
Prof. Dr. Stefan Wrobel, Director, Fraunhofer Institute for Intelligent Analysis & Information Systems (IAIS) and Professor of Computer Science, University of Bonn

Prof. Dr. Stefan Wrobel

August 14, 2014 12:29 pm | Biographies

Prof. Dr. Stefan Wrobel, M.S., is director of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and Professor of Computer Science at the University of Bonn. He studied Computer Science in Bonn and Atlanta, GA, USA (M.S. degree, Georgia Institute of Technology), receiving his doctorate from the University of Dortmund.

Dirk Slama, Business Development Director, Bosch SI

Dirk Slama

August 14, 2014 12:15 pm | Biographies

Dirk Slama is Director of Business Development at Bosch Software Innovations. Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global engineering group. As Conference Chair of the Bosch ConnectedWorld, Dirk helps shape the IoT strategy of Bosch. Dirk has over 20 years of experience in very large-scale application projects, system integration and Business Process Management. His international work experience includes projects for Lufthansa Systems, Boeing, AT&T, NTT DoCoMo, HBOS and others.

With an emphasis on HPC applications in science, engineering and large-scale data analytics, the Gordon Bell Prize tracks the overall progress in parallel computing.

Finalists Compete for Coveted ACM Gordon Bell Prize in High Performance Computing

August 13, 2014 12:01 pm | by SC14 | News | Comments

With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose a winner of the prestigious 2014 Gordon Bell Prize. The winner will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.

#HPCmatters

SC14 #HPCmatters

August 13, 2014 9:59 am | by SC Conference Series | Videos | Comments

"High performance computing is solving some of the hardest problems in the world. But it's also at your local supermarket, under the hood of your car, and steering your investments.... It's finding signals in the noise."                                                 

A brain-inspired chip to transform mobility and Internet of Things through sensory perception. Courtesy of IBM

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 11, 2014 12:13 pm | by IBM | News | Comments

Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running in biological real time, it consumes a minuscule 70 mW.
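For a rough sense of what those figures imply, the short sketch below simply multiplies the quoted efficiency by the quoted power draw. The variable names and the assumption that the per-watt figure applies at the 70 mW operating point are ours, not IBM's.

    # Back-of-the-envelope check of the quoted figures (assumption: the
    # 46 billion synaptic ops/s/W efficiency applies at the 70 mW point).
    sops_per_watt = 46e9   # synaptic operations per second per watt (quoted)
    power_watts = 70e-3    # 70 mW (quoted)
    synapses = 256e6       # programmable synapses (quoted)

    total_sops = sops_per_watt * power_watts   # ~3.2e9 synaptic ops per second
    per_synapse = total_sops / synapses        # ~13 updates per synapse per second

    print(f"implied synaptic ops/s: {total_sops:.2e}")
    print(f"implied ops/s per synapse: {per_synapse:.1f}")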

Optalysys is currently developing two products, a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both of which are expected to be launched in 2017.

Light-speed Computing: Prototype Optical Processor Set to Revolutionize Supercomputing

August 8, 2014 4:13 pm | by Optalysys | News | Comments

Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.

The simulations, performed on Titan’s Cray XK7 system, produced 3D, high-fidelity power distributions representing conditions expected to occur during the AP1000 core startup and used up to 240,000 computational units in parallel.

Westinghouse-CASL Team Wins Major Computing Award for Reactor Core Simulations on Titan

August 7, 2014 3:35 pm | by Oak Ridge National Laboratory | News | Comments

A team representing Westinghouse Electric Company and the Consortium for Advanced Simulation of Light Water Reactors, a DOE Innovation Hub led by Oak Ridge National Laboratory, has received an HPC Innovation Excellence Award for applied simulation on Titan, the nation’s most powerful supercomputer. The award recognizes achievements made by industry users of high-performance computing technologies.

Sandia National Laboratories researchers Steve Plimpton, left, and Michael Gallis look at a projection of a model of the Russian MIR space station, which fell out of orbit several years ago and disintegrated, with the remains ending up at the bottom of the ocean.

Sophisticated 3-D Codes Yield Unprecedented Physics, Engineering Insights

August 6, 2014 4:43 pm | by Sandia National Laboratories | News | Comments

When the space shuttle Columbia disintegrated on re-entry in 2003, sophisticated computer models were key to determining what happened. A piece of foam flew off at launch and hit a tile, damaging the leading edge of the shuttle wing and exposing the underlying structure. Temperatures soared to thousands of degrees as Columbia plunged toward Earth at 27 times the speed of sound, said Gallis, who used NASA codes and Icarus for simulations...

ESnet's Eli Dart moved 56 TB of climate data from 21 sites to NERSC, a task that took three months. In contrast, it took just two days to transfer the raw dataset using Globus from NERSC to the Mira supercomputer at Argonne National Laboratory.

Weathering the Flood of Big Data in Climate Research

August 6, 2014 4:16 pm | by ESnet | News | Comments

Big Data, it seems, is everywhere, usually characterized as a Big Problem. But researchers at Lawrence Berkeley National Laboratory are adept at accessing, sharing, moving and analyzing massive scientific datasets. At a July 14-16, 2014, workshop focused on climate science, Berkeley Lab experts shared their expertise with other scientists working with big datasets.
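As a rough illustration of why the Globus transfer was such a leap, the sketch below converts the two transfer times quoted in the caption into effective throughput. Treating "three months" as 90 days and a terabyte as 10^12 bytes are our simplifying assumptions.

    # Effective throughput implied by the two transfer times quoted above.
    # Assumptions (ours): "three months" ~= 90 days, 1 TB = 1e12 bytes.
    data_bytes = 56e12          # 56 TB of climate data
    day = 86_400                # seconds per day

    site_by_site = data_bytes / (90 * day)   # collection from 21 sites to NERSC
    globus = data_bytes / (2 * day)          # NERSC -> Mira via Globus

    print(f"21-site collection: {site_by_site / 1e6:.1f} MB/s")
    print(f"Globus transfer:    {globus / 1e6:.0f} MB/s (~{globus * 8 / 1e9:.1f} Gb/s)")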

John D’Ambrosia is chairman of the Ethernet Alliance and chief Ethernet evangelist, CTO office at Dell.

Ethernet and Data Gathering for HPC

August 5, 2014 4:33 pm | by John D’Ambrosia | Blogs | Comments

In my 15 or so years leading the charge for Ethernet into higher speeds, “high performance computing” and “research and development” have always been two areas the industry could count on to need higher speeds for their networking applications. For example, during the incarnation of the IEEE 802.3 Higher Speed Ethernet Study Group that looked beyond 10GbE, and ultimately defined 40 Gigabit and 100 Gigabit Ethernet ...

A section of the W7-X plasma vessel. Courtesy of Max Planck Institute for Plasma Physics

Hot Plasma Partial to Bootstrap Current: New Calculations Could Lower Fusion Reactor Costs

August 5, 2014 2:41 pm | by Kathy Kincade, NERSC | News | Comments

Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy. To achieve high enough reaction rates to make fusion a useful energy source, hydrogen contained inside the reactor core must be heated to extremely high temperatures, which transforms it into hot plasma.

The Ranger supercomputer, 2008-2013. Courtesy of TACC

Ranger Supercomputer Begins New Life, Makes Global Journey to Africa

July 30, 2014 3:57 pm | by Jorge Salazar, TACC | News | Comments

For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"

IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time.

IBM to Make Free Supercomputing Power Available to Sustainability Scientists

July 30, 2014 2:06 pm | by IBM | News | Comments

In support of the updated Climate Data Initiative announced by the White House July 29, 2014, IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time. The work will be performed on IBM's philanthropic World Community Grid platform.

K computer installed in the computer room. Each computer rack is equipped with about 100 CPUs. In the Computer Building, 800 or more computer racks are installed for the K computer. Courtesy of RIKEN

K Computer Runs Largest Ever Ensemble Simulation of Global Weather

July 25, 2014 2:25 pm | by RIKEN | News | Comments

Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple simulations using slightly different initial conditions or assumptions, and then analyze them together to try to improve forecasts. Using Japan’s K computer, researchers have succeeded in running 10,240 parallel simulations of global weather, the largest number ever performed, using data assimilation to reduce the range of uncertainties.
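The sketch below is a toy illustration of the ensemble idea described above: perturb the initial conditions, run every member through the same model, then combine the members into a mean forecast and a spread. The stand-in "model", the member count and all numbers are illustrative assumptions on our part, not details of the RIKEN runs.

    import numpy as np

    # Toy ensemble forecast: perturb an initial analysis, run each member
    # through the same model, then combine members into mean and spread.
    # The "model" and all parameter values below are illustrative only.
    rng = np.random.default_rng(0)

    def toy_model(state, steps=24):
        """Stand-in for a weather model: a simple nonlinear update."""
        for _ in range(steps):
            state = state + 0.1 * np.sin(state)
        return state

    n_members = 64                          # the K computer run used 10,240
    analysis = np.array([1.0, 2.0, 3.0])    # best estimate of the initial state
    ic_spread = 0.05                        # initial-condition uncertainty

    members = np.stack([
        toy_model(analysis + ic_spread * rng.standard_normal(analysis.shape))
        for _ in range(n_members)
    ])

    ensemble_mean = members.mean(axis=0)    # combined forecast
    ensemble_std = members.std(axis=0)      # forecast uncertainty

    print("ensemble mean:  ", ensemble_mean)
    print("ensemble spread:", ensemble_std)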

IBM Expands High Performance Computing Capabilities in the Cloud

July 24, 2014 2:18 pm | by IBM | News | Comments

IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput speeds between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.

Breakthrough Laser May Play Crucial Role in Development of Quantum Computers

July 23, 2014 3:09 pm | by Joseph Blumberg, Dartmouth College | News | Comments

A team of Dartmouth scientists and their colleagues have devised a breakthrough laser that uses a single artificial atom to generate and emit particles of light — and may play a crucial role in the development of quantum computers, which are predicted to eventually outperform even today’s most powerful supercomputers.

Study: Cloud Computing Can Make Business More Green

July 21, 2014 2:21 pm | by Andrew Purcell | News | Comments

A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.

Internet of Things and Hadoop to be Featured at ISC Big Data

July 21, 2014 2:07 pm | by ISC | News | Comments

The second ISC Big Data conference, themed “From Data To Knowledge,” builds on the success of the inaugural 2013 event. A comprehensive program has been put together by the Steering Committee under the leadership of Sverre Jarp, who retired officially as the CTO of CERN openlab in March of this year.

Cray Awarded Contract to Install India's First Cray XC30 Supercomputer

July 16, 2014 3:33 am | by Cray | News | Comments

The Cray XC30 system will be used by a nation-wide consortium of scientists called the Indian Lattice Gauge Theory Initiative (ILGTI). The group will research the properties of a phase of matter called the quark-gluon plasma, which existed when the universe was approximately a microsecond old. ILGTI also carries out research on exotic and heavy-flavor hadrons, which will be produced in hadron collider experiments.

Chemists Discover Boron Buckyball

July 15, 2014 11:55 am | by Brown University | News | Comments

The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch. Researchers have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure — previously only a matter of speculation — does indeed exist.

Registration Opens for ISC Cloud and ISC Big Data Conferences

July 15, 2014 11:28 am | by ISC | News | Comments

Registration is now open for the 2014 ISC Cloud and ISC Big Data Conferences, which will be held this fall in Heidelberg, Germany. The fifth ISC Cloud Conference will take place in the Marriott Hotel from September 29 to 30, and the second ISC Big Data will be held from October 1 to 2 at the same venue.

Michael Resch Keynotes at ISC Cloud

July 15, 2014 10:30 am | by ISC | News | Comments

Michael M. Resch, the Director of the Stuttgart High Performance Computing Center (HLRS), will be talking about “HPC and Simulation in the Cloud – How Academia and Industry Can Benefit.” His keynote is of special interest to cloud skeptics, given that prior to 2011, Resch himself was a vocal cloud pessimist. Three years later, he feels that this technology provides a practical option for many users.
