Cray CS-Storm is a high-density accelerator compute system based on the Cray CS300 cluster supercomputer. Featuring up to eight NVIDIA Tesla GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system is a powerful, accelerator-dense cluster.
Argonne National Laboratory was one of seven new winners of the HPC Innovation Excellence Award...
Bacteria inside your mouth drastically change how they act when you're diseased. Scientists say...
IBM announced it is collaborating with DESY, a national research center in Germany, to speed up management and storage of massive volumes of x-ray data. The planned Big Data and Analytics architecture can handle more than 20 gigabytes per second of data at peak performance and help scientists worldwide gain faster insights into the atomic structure of novel semiconductors, catalysts, biological cells and other samples.
Recently, the Harvard-Smithsonian Center for Astrophysics unveiled an unprecedented simulation of the universe’s development. Called the Illustris project, the simulation depicts more than 13 billion years of cosmic evolution across a cube of the universe that’s 350 million light-years on each side. But why was it important to conduct such a simulation?
As university students around the world prepare to head back to school this fall, 12 groups are already looking ahead to November when they will converge at SC14 in New Orleans for the Student Cluster Competition. In this real-time, non-stop, 48-hour challenge, teams of students assemble a small cluster on the SC14 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications.
New supercomputing calculations provide the first evidence that particles predicted by the theory of quark-gluon interactions, but never before observed, are being produced in heavy-ion collisions at the Relativistic Heavy Ion Collider. These heavy strange baryons, containing at least one strange quark, still cannot be observed directly, but instead make their presence known by lowering the temperature at which other baryons "freeze out."
NCSA’s Blue Waters project will offer a graduate course on High Performance Visualization for Large-Scale Scientific Data Analytics in Spring 2015 and is seeking university partners who are interested in offering the course for credit to their students. This semester-long online course will include video lectures, quizzes and homework assignments and will provide students with free access to the Blue Waters supercomputer.
A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference on November 16 to 21, 2014, in New Orleans.
Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
Prof. Dr. Stefan Wrobel, M.S., is director of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and Professor of Computer Science at the University of Bonn. He studied Computer Science in Bonn and Atlanta, GA, USA (M.S. degree, Georgia Institute of Technology), receiving his doctorate from the University of Dortmund.
Dirk Slama is Director of Business Development at Bosch Software Innovations. Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global engineering group. As Conference Chair of the Bosch ConnectedWorld, Dirk helps shape the IoT strategy of Bosch. Dirk has over 20 years of experience in very large-scale application projects, system integration and Business Process Management. His international work experience includes projects for Lufthansa Systems, Boeing, AT&T, NTT DoCoMo, HBOS and others.
With five technical papers contending for one of the highest honors in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose the winner of the prestigious 2014 Gordon Bell Prize. The winner of this prize will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.
"High performance computing is solving some of the hardest problems in the world. But it's also at your local supermarket, under the hood of your car, and steering your investments.... It's finding signals in the noise."
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW.
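As a rough back-of-envelope check (using only the figures quoted above, not any additional data from the announcement), the efficiency and power numbers together imply the chip's total synaptic throughput:

```python
# Rough check using only the figures quoted in the summary above.
sops_per_watt = 46e9   # 46 billion synaptic operations per second per watt
power_watts = 0.070    # 70 mW power consumption

total_sops = sops_per_watt * power_watts
print(f"Implied throughput: {total_sops:.2e} synaptic ops/sec")  # ~3.2e9
```

That is, at 70 mW the quoted efficiency corresponds to on the order of three billion synaptic operations per second while running in biological real time.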
Cambridge, UK-based startup Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.
A team representing Westinghouse Electric Company and the Consortium for Advanced Simulation of Light Water Reactors, a DOE Innovation Hub led by Oak Ridge National Laboratory, has received an HPC Innovation Excellence Award for applied simulation on Titan, the nation’s most powerful supercomputer. The award recognizes achievements made by industry users of high-performance computing technologies.
When the space shuttle Columbia disintegrated on re-entry in 2003, sophisticated computer models were key to determining what happened. A piece of foam flew off at launch and hit a tile, damaging the leading edge of the shuttle wing and exposing the underlying structure. Temperatures soared to thousands of degrees as Columbia plunged toward Earth at 27 times the speed of sound, said Gallis, who used NASA codes and Icarus for simulations...
Big Data, it seems, is everywhere, usually characterized as a Big Problem. But researchers at Lawrence Berkeley National Laboratory are adept at accessing, sharing, moving and analyzing massive scientific datasets. At a July 14-16, 2014, workshop focused on climate science, Berkeley Lab experts shared their expertise with other scientists working with big datasets.
In my 15 or so years leading the charge for Ethernet into higher speeds, “high performance computing” and “research and development” have always been two areas the industry could count on to need higher speeds for their networking applications. For example, during the incarnation of the IEEE 802.3 Higher Speed Ethernet Study Group that looked beyond 10GbE, and ultimately defined 40 Gigabit and 100 Gigabit Ethernet ...
Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy. To achieve high enough reaction rates to make fusion a useful energy source, hydrogen contained inside the reactor core must be heated to extremely high temperatures, which transforms it into hot plasma.
For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"
In support of the updated Climate Data Initiative announced by the White House July 29, 2014, IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time. The work will be performed on IBM's philanthropic World Community Grid platform.
Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple simulations using slightly different initial conditions or assumptions, and then analyze them together to try to improve forecasts. Using Japan’s K computer, researchers have succeeded in running 10,240 parallel simulations of global weather, the largest number ever performed, using data assimilation to reduce the range of uncertainties.
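The general ensemble idea can be sketched in a few lines (a toy illustration only, assuming a stand-in chaotic model; this is not the K computer's actual weather code or data-assimilation scheme). Each member starts from a slightly perturbed initial state, all members are integrated forward, and the spread of outcomes quantifies forecast uncertainty:

```python
import random

def step(state):
    """One toy model update; the chaotic logistic map stands in for a real weather model."""
    return 3.9 * state * (1.0 - state)

def run_member(initial, steps=20):
    """Integrate one ensemble member forward from its initial condition."""
    s = initial
    for _ in range(steps):
        s = step(s)
    return s

random.seed(0)
base_state = 0.5
# Each ensemble member perturbs the initial condition slightly,
# mimicking uncertainty in the observed starting state.
members = [run_member(base_state + random.uniform(-1e-3, 1e-3))
           for _ in range(100)]

mean = sum(members) / len(members)
spread = max(members) - min(members)
print(f"ensemble mean={mean:.3f}, spread={spread:.3f}")
```

Because the model is chaotic, tiny initial perturbations grow into a wide spread of final states; the ensemble mean and spread are what forecasters analyze, and data assimilation (as in the K computer study) feeds observations back in to narrow that spread.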
IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput speeds between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.
A team of Dartmouth scientists and their colleagues has devised a breakthrough laser that uses a single artificial atom to generate and emit particles of light — and may play a crucial role in the development of quantum computers, which are predicted to eventually outperform even today’s most powerful supercomputers.
A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.