As university students around the world prepare to head back to school this fall, 12 groups are already looking ahead to November, when they will converge at SC14 in New Orleans for the Student Cluster Competition. In this real-time, non-stop, 48-hour challenge, teams of students assemble a small cluster on the SC14 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications.
New supercomputing calculations provide the first evidence that particles predicted by the...
Florida Polytechnic University, Flagship Solutions Group and IBM have announced a new...
NCSA’s Blue Waters project will offer a graduate course on High Performance Visualization for...
A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference on November 16 to 21, 2014, in New Orleans.
Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
The Michael J. Fox Foundation for Parkinson's Research (MJFF) and Intel have announced a collaboration aimed at improving research and treatment for Parkinson's disease — a neurodegenerative brain disease second only to Alzheimer's in worldwide prevalence. The collaboration includes a multiphase research study using a new big data analytics platform that detects patterns in participant data collected from wearable technologies.
The Research Data Alliance seeks to build the social and technical bridges that enable open sharing and reuse of data, so as to address cross-border and cross-disciplinary challenges faced by researchers. This September, the RDA will be hosting its Fourth Plenary Meeting. Ahead of the event, iSGTW spoke to Gary Berg-Cross, general secretary of the Spatial Ontology Community of Practice and a member of the US advisory committee for RDA.
Prof. Dr. Stefan Wrobel, M.S., is director of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and Professor of Computer Science at the University of Bonn. He studied Computer Science in Bonn and in Atlanta, GA, USA (M.S. degree, Georgia Institute of Technology), and received his doctorate from the University of Dortmund.
Dirk Slama is Director of Business Development at Bosch Software Innovations. Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global engineering group. As Conference Chair of the Bosch ConnectedWorld, Dirk helps shape the IoT strategy of Bosch. He has more than 20 years of experience in very large-scale application projects, system integration and Business Process Management. His international work experience includes projects for Lufthansa Systems, Boeing, AT&T, NTT DoCoMo, HBOS and others.
With five technical papers contending for one of the most highly regarded awards in high performance computing (HPC), the Association for Computing Machinery’s (ACM) awards committee has four months left to choose a winner of the prestigious 2014 Gordon Bell Prize. The winner will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.
"High performance computing is solving some of the hardest problems in the world. But it's also at your local supermarket, under the hood of your car, and steering your investments.... It's finding signals in the noise."
The Rackform iServ R4420 and R4422 high-density servers are 2U, four-node products based on the TwinPro architecture that provide high throughput and processing capability. Each node supports up to two Intel Xeon E5-2600 v2 series processors, SAS3 hot-swap drives, and up to 512 GB of DDR3 RAM, as well as optional onboard InfiniBand or 10GbE networking.
The Arctica 4806xp open network switch is based on the Broadcom StrataXGS Trident II chipset. It is the first 10/40 Gigabit Ethernet Top-of-Rack (ToR) open switch to use an x86 control processor, which provides a flexible platform for Software Defined Networking and customer-defined applications.
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70 mW.
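As a rough back-of-the-envelope check (not a figure quoted by IBM), the efficiency and power numbers above together imply the chip's absolute synaptic throughput, assuming the per-watt figure applies at the 70 mW operating point:

```python
# Back-of-envelope: absolute throughput implied by the quoted numbers.
# Assumes the ops/s/W figure applies at the 70 mW operating point, which
# may not be exactly how the two benchmarks were measured.
ops_per_second_per_watt = 46e9   # 46 billion synaptic ops/s per watt (quoted)
power_watts = 0.070              # 70 mW (quoted)

ops_per_second = ops_per_second_per_watt * power_watts
print(f"Implied throughput: {ops_per_second:.2e} synaptic ops/s")  # ~3.2e9
```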
Cambridge, UK-based start-up Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.
A team representing Westinghouse Electric Company and the Consortium for Advanced Simulation of Light Water Reactors, a DOE Innovation Hub led by Oak Ridge National Laboratory, has received an HPC Innovation Excellence Award for applied simulation on Titan, the nation’s most powerful supercomputer. The award recognizes achievements made by industry users of high-performance computing technologies.
The AMD FirePro S9150 server card is based on the AMD Graphics Core Next (GCN) architecture, the first AMD architecture designed specifically with compute workloads in mind. It is the first to support enhanced double precision and to break the 2.0 TFLOPS double precision barrier.
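For context, a figure above 2.0 TFLOPS is consistent with the usual peak-FLOPS arithmetic for a GCN part; the stream-processor count, clock and half-rate double precision used below are the commonly published S9150 specifications rather than numbers from this announcement, so treat the sketch as illustrative:

```python
# Illustrative peak-FLOPS estimate using stream_processors * clock * 2 (FMA).
# The 2816 stream processors / 900 MHz / half-rate DP values are assumed
# from public S9150 spec sheets, not taken from the article.
stream_processors = 2816
clock_hz = 900e6
flops_per_cycle_sp = 2        # one fused multiply-add per lane per cycle
dp_to_sp_ratio = 0.5          # double precision at half the single-precision rate

peak_sp = stream_processors * clock_hz * flops_per_cycle_sp
peak_dp = peak_sp * dp_to_sp_ratio
print(f"Peak SP ~ {peak_sp/1e12:.2f} TFLOPS, peak DP ~ {peak_dp/1e12:.2f} TFLOPS")
# ~5.07 TFLOPS SP and ~2.53 TFLOPS DP, comfortably past the 2.0 TFLOPS mark
```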
When the space shuttle Columbia disintegrated on re-entry in 2003, sophisticated computer models were key to determining what happened. A piece of foam flew off at launch and hit a tile, damaging the leading edge of the shuttle wing and exposing the underlying structure. Temperatures soared to thousands of degrees as Columbia plunged toward Earth at 27 times the speed of sound, said Gallis, who used NASA codes and Icarus for simulations...
Big Data, it seems, is everywhere, usually characterized as a Big Problem. But researchers at Lawrence Berkeley National Laboratory are adept at accessing, sharing, moving and analyzing massive scientific datasets. At a July 14-16, 2014, workshop focused on climate science, Berkeley Lab experts shared their expertise with other scientists working with big datasets.
In my 15 or so years leading the charge for Ethernet into higher speeds, “high performance computing” and “research and development” have always been two areas the industry could count on to need higher speeds for their networking applications. For example, during the incarnation of the IEEE 802.3 Higher Speed Ethernet Study Group that looked beyond 10GbE, and ultimately defined the 40 Gigabit and 100 Gigabit Ethernet ...
Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy. To achieve high enough reaction rates to make fusion a useful energy source, hydrogen contained inside the reactor core must be heated to extremely high temperatures, which transforms it into hot plasma.
For all the money and effort poured into supercomputers, their life spans can be brutally short – on average about four years. So, what happens to one of the world's greatest supercomputers when it reaches retirement age? If it's the Texas Advanced Computing Center's (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, "There is life after retirement!"
In support of the updated Climate Data Initiative announced by the White House July 29, 2014, IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research. Each approved project will have access to up to 100,000 years of computing time. The work will be performed on IBM's philanthropic World Community Grid platform.
Enabling Innovation and Discovery through Data-Intensive High Performance Cloud and Big Data Infrastructure, July 29, 2014, by George Vacek, DataDirect Networks
As the size and scale of life sciences datasets increases — think large-cohort longitudinal studies with multiple samples and multiple protocols — so does the challenge of storing, interpreting and analyzing this data. Researchers and data scientists are under increasing pressure to identify the most relevant and critical information within massive and messy data sets, so they can quickly make the next discovery.
In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It's how Facebook and Google mine your Web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.
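A minimal sketch of that split-apply-combine pattern, using Python's multiprocessing module as a stand-in for a real cluster framework such as Hadoop or Spark; the data and function names are purely illustrative:

```python
# Partition a dataset, process each chunk in parallel ("map"), then merge
# the partial results ("reduce"). multiprocessing mimics a cluster here.
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # map step: per-partition partial result
    return Counter(word for line in chunk for word in line.split())

def merge(counters):
    # reduce step: combine the partial results into one answer
    total = Counter()
    for c in counters:
        total.update(c)
    return total

if __name__ == "__main__":
    lines = ["big data is big", "data moves fast", "big clusters count data"]
    chunks = [lines[i::4] for i in range(4)]   # partition across 4 workers
    with Pool(4) as pool:
        partials = pool.map(count_words, chunks)
    print(merge(partials).most_common(3))
```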
Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple simulations using slightly different initial conditions or assumptions, and then analyze them together to try to improve forecasts. Using Japan’s K computer, researchers have succeeded in running 10,240 parallel simulations of global weather, the largest number ever performed, using data assimilation to reduce the range of uncertainties.
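A toy illustration of the ensemble idea, using the Lorenz-63 system as a stand-in for a global weather model and leaving out the data-assimilation step entirely; only the perturb-run-summarize structure is meant to carry over:

```python
# Run many copies of a simple chaotic model from slightly perturbed initial
# conditions, then summarize the spread of the resulting forecasts.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    return state + dt * np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def forecast(initial, steps=1000):
    state = initial.copy()
    for _ in range(steps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
members = np.array([forecast(base + 1e-3 * rng.standard_normal(3))
                    for _ in range(64)])           # 64-member ensemble
print("ensemble mean:  ", members.mean(axis=0))
print("ensemble spread:", members.std(axis=0))     # uncertainty estimate
```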
Alan Turing led a team of code breakers at Bletchley Park that cracked the German Enigma machine cypher during WWII, but that is far from his only legacy. In the year of the 100th anniversary of his birth, researchers published a series of ‘Turing tests’ in the Journal of Experimental & Theoretical Artificial Intelligence; these entailed a series of five-minute conversations between human and machine or human and human.