Supercomputers

The Lead

Cray CS-Storm High Density Cluster

August 26, 2014 3:11 pm | Cray Inc. | Product Releases | Comments

Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system packs exceptional compute density into each node of the cluster.
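A rough sanity check on that per-node figure, sketched below under the assumption that the accelerators are NVIDIA Tesla K40 cards rated at roughly 1.43 double-precision teraflops each (the specific GPU model and rating are assumptions, not stated in the announcement):

```python
# Back-of-envelope check of the "more than 11 teraflops per node" claim.
# Assumes 8 NVIDIA Tesla K40 accelerators per node at ~1.43 double-precision
# teraflops each; the GPU model and per-card rating are assumptions, not
# taken from the announcement above.
gpus_per_node = 8
tflops_per_gpu = 1.43  # assumed double-precision peak per accelerator

peak_per_node = gpus_per_node * tflops_per_gpu
print(f"Estimated GPU peak per node: {peak_per_node:.2f} teraflops")  # ~11.44
```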

Argonne wins HPC Innovation Excellence Award

August 25, 2014 10:47 am | by Argonne National Laboratory | News | Comments

Argonne National Laboratory was one of seven new winners of the HPC Innovation Excellence Award...

Mouth Bacteria Can Change its Diet, Supercomputers Reveal

August 22, 2014 12:28 pm | by Jorge Salazar, TACC | News | Comments

Bacteria inside your mouth drastically change how they act when you're diseased. Scientists say...

NVIDIA CUDA 6.5 Production Release

August 22, 2014 12:15 pm | Product Releases | Comments

NVIDIA CUDA 6.5 brings GPU-accelerated computing to 64-bit ARM platforms. The toolkit provides...


DESY's 1.7-mile-long PETRA III accelerator is a super microscope that speeds up electrically charged particles nearly to the speed of light.

20 GB of Data per Second Shared with 2000+ Scientists Worldwide

August 22, 2014 12:02 pm | by IBM | News | Comments

IBM announced it is collaborating with DESY, a national research center in Germany, to speed up management and storage of massive volumes of x-ray data. The planned Big Data and Analytics architecture can handle more than 20 gigabytes per second of data at peak performance and help scientists worldwide gain faster insights into the atomic structure of novel semiconductors, catalysts, biological cells and other samples.

This still from a KIPAC visualization shows a jet of energy and particles streaming from a black hole. (Visualization: Ralf Kaehler / Simulation: Jonathan McKinney, Alexander Tchekhovskoy, and Roger Blandford)

Dramatically Intricate 3-D Universes Tell Important Stories about the Cosmos

August 21, 2014 3:16 pm | by Kelen Tuttle, Kavli Foundation | Articles | Comments

Recently, the Harvard-Smithsonian Center for Astrophysics unveiled an unprecedented simulation of the universe’s development. Called the Illustris project, the simulation depicts more than 13 billion years of cosmic evolution across a cube of the universe that’s 350 million light-years on each side. But why was it important to conduct such a simulation?

In the competition, teams of six students partner with vendors to design and build a cutting-edge cluster from commercially available components that does not exceed a 3120-watt power limit, and work with application experts to tune and run the competition applications.

12 Teams Set to Compete in SC14 Student Cluster Competition

August 20, 2014 10:50 am | by SC14 | News | Comments

As university students around the world prepare to head back to school this fall, 12 groups are already looking ahead to November when they will converge at SC14 in New Orleans for the Student Cluster Competition. In this real-time, non-stop, 48-hour challenge, teams of students assemble a small cluster on the SC14 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications.

Brookhaven theoretical physicist Swagato Mukherjee explains that 'invisible' hadrons are like salt molecules floating around in the hot gas of hadrons, making other particles freeze out at a lower temperature than they would if the 'salt' wasn't there.

Invisible Particles Provide First Indirect Evidence of Strange Baryons

August 20, 2014 10:17 am | by Brookhaven National Laboratory | News | Comments

New supercomputing calculations provide the first evidence that particles predicted by the theory of quark-gluon interactions, but never before observed, are being produced in heavy-ion collisions at the Relativistic Heavy Ion Collider. These heavy strange baryons, containing at least one strange quark, still cannot be observed directly, but instead make their presence known by lowering the temperature at which other baryons "freeze out."

The semester-long online course will include video lectures, quizzes, and homework assignments and will provide students with free access to the Blue Waters supercomputer.

Blue Waters Project to Offer Graduate Visualization Course in Spring 2015

August 18, 2014 12:12 pm | by NCSA | News | Comments

NCSA’s Blue Waters project will offer a graduate course on High Performance Visualization for Large-Scale Scientific Data Analytics in Spring 2015 and is seeking university partners who are interested in offering the course for credit to their students. This semester-long online course will include video lectures, quizzes and homework assignments and will provide students with free access to the Blue Waters supercomputer.

On Tuesday, August 12, through Thursday, August 14, the University of Tennessee team performed a "mock run" for the SC14 Student Cluster Competition in which they compiled, optimized and ran test cases for applications on a supercomputing cluster.

SC14 Student Cluster Competition Mock Run Allows for Self-assessment

August 18, 2014 11:46 am | by University of Tennessee | News | Comments

A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference on November 16 to 21, 2014, in New Orleans.

Advanced techniques such as "structured placement," shown here and developed by Markov's group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors.

Reviewing Frontier Technologies to Determine Fundamental Limits of Computer Scaling

August 15, 2014 12:31 pm | by NSF | News | Comments

Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

NERSC's next-generation supercomputer, a Cray XC, will be named after Gerty Cori, the first American woman to be honored with a Nobel Prize in science. She shared the 1947 Nobel Prize with her husband Carl (pictured) and Argentine physiologist Bernardo Houssay.

NERSC Launches Next-Generation Code Optimization Effort

August 15, 2014 9:41 am | by NERSC | News | Comments

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?

Prof. Dr. Stefan Wrobel, Director, Fraunhofer Institute for Intelligent Analysis & Information Systems (IAIS) and Professor of Computer Science, University of Bonn

Prof. Dr. Stefan Wrobel

August 14, 2014 12:29 pm | Biographies

Prof. Dr. Stefan Wrobel, M.S., is director of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and Professor of Computer Science at the University of Bonn. He studied Computer Science in Bonn and Atlanta, GA, USA (M.S. degree, Georgia Institute of Technology), receiving his doctorate from the University of Dortmund.

Dirk Slama, Business Development Director, Bosch SI

Dirk Slama

August 14, 2014 12:15 pm | Biographies

Dirk Slama is Director of Business Development at Bosch Software Innovations. Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global engineering group. As Conference Chair of the Bosch ConnectedWorld, Dirk helps shape the IoT strategy of Bosch. Dirk has over 20 years of experience in very large-scale application projects, system integration and Business Process Management. His international work experience includes projects for Lufthansa Systems, Boeing, AT&T, NTT DoCoMo, HBOS and others.

With an emphasis on HPC applications in science, engineering and large-scale data analytics, the Gordon Bell Prize tracks the overall progress in parallel computing.

Finalists Compete for Coveted ACM Gordon Bell Prize in High Performance Computing

August 13, 2014 12:01 pm | by SC14 | News | Comments

With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose a winner for the prestigious 2014 Gordon Bell Prize. The winner of this prize will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.

#HPCmatters

SC14 #HPCmatters

August 13, 2014 9:59 am | by SC Conference Series | Videos | Comments

"High performance computing is solving some of the hardest problems in the world. But it's also at your local supermarket, under the hood of your car, and steering your investments.... It's finding signals in the noise."                                                 

A brain-inspired chip to transform mobility and Internet of Things through sensory perception. Courtesy of IBM

Chip with Brain-inspired Non-Von Neumann Architecture has 1M Neurons, 256M Synapses

August 11, 2014 12:13 pm | by IBM | News | Comments

Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70 mW.

Optalysys is currently developing two products, a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both of which are expected to be launched in 2017.

Light-speed Computing: Prototype Optical Processor Set to Revolutionize Supercomputing

August 8, 2014 4:13 pm | by Optalysys | News | Comments

Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.

The simulations, performed on Titan’s Cray XK7 system, produced 3D, high-fidelity power distributions representing conditions expected to occur during the AP1000 core startup and used up to 240,000 computational units in parallel.

Westinghouse-CASL Team Wins Major Computing Award for Reactor Core Simulations on Titan

August 7, 2014 3:35 pm | by Oak Ridge National Laboratory | News | Comments

A team representing Westinghouse Electric Company and the Consortium for Advanced Simulation of Light Water Reactors, a DOE Innovation Hub led by Oak Ridge National Laboratory, has received an HPC Innovation Excellence Award for applied simulation on Titan, the nation’s most powerful supercomputer. The award recognizes achievements made by industry users of high-performance computing technologies.

Sandia National Laboratories researchers Steve Plimpton, left, and Michael Gallis look at a projection of a model of the Russian Mir space station, which fell out of orbit several years ago and disintegrated, with the remains ending up at the bottom of the Pacific Ocean.

Sophisticated 3-D Codes Yield Unprecedented Physics, Engineering Insights

August 6, 2014 4:43 pm | by Sandia National Laboratories | News | Comments

When the space shuttle Columbia disintegrated on re-entry in 2003, sophisticated computer models were key to determining what happened. A piece of foam flew off at launch and hit a tile, damaging the leading edge of the shuttle wing and exposing the underlying structure. Temperatures soared to thousands of degrees as Columbia plunged toward Earth at 27 times the speed of sound, said Sandia researcher Michael Gallis, who used NASA codes and Icarus for simulations...

ESnet's Eli Dart moved 56 TB of climate data from 21 sites to NERSC, a task that took three months. In contrast, it took just two days to transfer the raw dataset using Globus from NERSC to the Mira supercomputer at Argonne National Laboratory.

Weathering the Flood of Big Data in Climate Research

August 6, 2014 4:16 pm | by ESnet | News | Comments

Big Data, it seems, is everywhere, usually characterized as a Big Problem. But researchers at Lawrence Berkeley National Laboratory are adept at accessing, sharing, moving and analyzing massive scientific datasets. At a July 14-16, 2014, workshop focused on climate science, Berkeley Lab experts shared their expertise with other scientists working with big datasets.
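The transfer figures in the caption above translate into very different effective rates; a minimal calculation, assuming "three months" is roughly 90 days and decimal terabytes (both assumptions are mine):

```python
# Effective average throughput for the two transfers described above:
# 56 TB gathered from 21 sites over ~3 months, versus 56 TB moved with
# Globus from NERSC to Mira in 2 days. "90 days" and decimal terabytes
# are assumptions for illustration.
TB = 1e12                     # decimal terabyte, in bytes
SECONDS_PER_DAY = 86_400

def avg_rate_mb_per_s(total_bytes: float, days: float) -> float:
    """Average transfer rate in megabytes per second."""
    return total_bytes / (days * SECONDS_PER_DAY) / 1e6

print(f"21 sites -> NERSC over ~90 days: {avg_rate_mb_per_s(56 * TB, 90):.1f} MB/s")  # ~7 MB/s
print(f"NERSC -> Mira over 2 days:       {avg_rate_mb_per_s(56 * TB, 2):.0f} MB/s")   # ~324 MB/s
```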

John D’Ambrosia is chairman of the Ethernet Alliance and chief Ethernet evangelist in the CTO office at Dell.

Ethernet and Data Gathering for HPC

August 5, 2014 4:33 pm | by John D’Ambrosia | Blogs | Comments

In my 15 or so years leading the charge for Ethernet into higher speeds, “high performance computing” and “research and development” have always been two areas the industry could count on to need higher speeds for their networking applications. For example, during the incarnation of the IEEE 802.3 Higher Speed Ethernet Study Group that looked beyond 10GbE, and ultimately defined the 40 Gigabit and 100 Gigabit Ethernet ...

A section of the W7-X plasma vessel. Courtesy of Max Planck Institute for Plasma Physics

Hot Plasma Partial to Bootstrap Current: New Calculations Could Lower Fusion Reactor Costs

August 5, 2014 2:41 pm | by Kathy Kincade, NERSC | News | Comments

Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy. To achieve high enough reaction rates to make fusion a useful energy source, hydrogen contained inside the reactor core must be heated to extremely high temperatures, which transforms it into hot plasma.

The Ranger supercomputer, 2008-2013. Courtesy of TACC

Ranger Supercomputer Begins New Life, Makes Global Journey to Africa

July 30, 2014 3:57 pm | by Jorge Salazar, TACC | News | Comments

For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"

IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time.

IBM to Make Free Supercomputing Power Available to Sustainability Scientists

July 30, 2014 2:06 pm | by IBM | News | Comments

In support of the updated Climate Data Initiative announced by the White House July 29, 2014, IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time. The work will be performed on IBM's philanthropic World Community Grid platform.

K computer installed in the computer room. Each computer rack is equipped with about 100 CPUs. In the Computer Building, 800 or more computer racks are installed for the K computer. Courtesy of RIKEN

K Computer Runs Largest Ever Ensemble Simulation of Global Weather

July 25, 2014 2:25 pm | by RIKEN | News | Comments

Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple simulations using slightly different initial conditions or assumptions, and then analyze them together to try to improve forecasts. Using Japan’s K computer, researchers have succeeded in running 10,240 parallel simulations of global weather, the largest number ever performed, using data assimilation to reduce the range of uncertainties.
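The idea described above can be sketched with a toy model: run many copies of a small chaotic system from slightly perturbed initial conditions and analyze them together. The Lorenz-63 system, the member count of 64 and the perturbation size below are stand-ins for illustration only, not the global atmospheric model or data-assimilation scheme actually run on the K computer.

```python
# Toy ensemble forecast: integrate many slightly perturbed copies of the
# Lorenz-63 system (a stand-in "weather" model) and summarize them together.
import numpy as np

def lorenz63_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one forward-Euler time step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

def run_member(initial_state, n_steps=2000):
    """Integrate one ensemble member and return its final state."""
    state = initial_state.copy()
    for _ in range(n_steps):
        state = lorenz63_step(state)
    return state

rng = np.random.default_rng(seed=42)
n_members = 64                          # the K computer run used 10,240 members
base_state = np.array([1.0, 1.0, 1.0])  # shared "analysis" state

# Each member starts from the same state plus a tiny random perturbation,
# mimicking uncertainty in the initial conditions.
finals = np.array([
    run_member(base_state + 1e-3 * rng.standard_normal(3))
    for _ in range(n_members)
])

# Analyzing the members together: the ensemble mean is the forecast and the
# spread across members is a measure of forecast uncertainty.
print("ensemble mean:  ", finals.mean(axis=0))
print("ensemble spread:", finals.std(axis=0))
```

In an operational system the members are combined through data assimilation rather than a simple mean, which is the step the RIKEN researchers used to reduce the range of uncertainties.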

IBM Expands High Performance Computing Capabilities in the Cloud

July 24, 2014 2:18 pm | by IBM | News | Comments

IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput speeds between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.

Breakthrough Laser May Play Crucial Role in Development of Quantum Computers

July 23, 2014 3:09 pm | by Joseph Blumberg, Dartmouth College | News | Comments

A team of Dartmouth scientists and their colleagues have devised a breakthrough laser that uses a single artificial atom to generate and emit particles of light — and may play a crucial role in the development of quantum computers, which are predicted to eventually outperform even today’s most powerful supercomputers.

Study: Cloud Computing Can Make Business More Green

July 21, 2014 2:21 pm | by Andrew Purcell | News | Comments

A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.
