As university students around the world prepare to head back to school this fall, 12 groups are already looking ahead to November, when they will converge at SC14 in New Orleans for the Student Cluster Competition. In this real-time, non-stop, 48-hour challenge, teams of students assemble a small cluster on the SC14 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications.
In the age of big data, visualization tools are vital. With a single glance at a graphic display...
New supercomputing calculations provide the first evidence that particles predicted by the...
According to a group of journalists, a spy program known as "Hacienda" is being used by five...
Florida Polytechnic University, Flagship Solutions Group and IBM have announced a new supercomputing center at the University composed of IBM high performance systems, software and cloud-based storage, to help educate students in emerging technology fields. Florida Polytechnic University is the newest addition to the State University System and the only one dedicated exclusively to science, technology, engineering and mathematics (STEM).
Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Special computing methods can achieve this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculation process.
NCSA’s Blue Waters project will offer a graduate course on High Performance Visualization for Large-Scale Scientific Data Analytics in Spring 2015 and is seeking university partners who are interested in offering the course for credit to their students. This semester-long online course will include video lectures, quizzes and homework assignments and will provide students with free access to the Blue Waters supercomputer.
The first thousand-robot flash mob has assembled at Harvard University. Instead of one highly-complex robot, a “kilo” of robots collaborate, providing a simple platform for the enactment of complex behaviors. Called Kilobots, these extremely simple robots are each just a few centimeters across and stand on three pin-like legs.
A team of students from the University of Tennessee has been preparing since June 2014 at Oak Ridge National Laboratory for the Student Cluster Competition, which will last for 48 continuous hours during the SC14 supercomputing conference, November 16 to 21, 2014, in New Orleans.
A photo is worth a thousand words, but what if the image could also represent thousands of other images? New software seeks to tame the vast amount of visual data in the world by generating a single photo that can represent massive clusters of images. This tool can give users the photographic gist of a kid on Santa’s lap or housecats. It works by generating an image that literally averages the key features of the other photos.
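The core idea of "averaging" many photos into one representative image can be sketched in a few lines. This is a minimal, naive pixel-wise mean; the research tool described above aligns key features before averaging, which this sketch does not attempt. The function name and the toy solid-color inputs are illustrative, not from the original work.

```python
import numpy as np

def average_image(images):
    """Pixel-wise average of a list of equally sized HxWxC uint8 images.

    A crude stand-in for alignment-aware averaging: stack the images,
    take the mean along the stack axis, and convert back to uint8.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)

# Three solid-color 2x2 "photos"; averaging blends their colors.
reds = np.full((2, 2, 3), (255, 0, 0), dtype=np.uint8)
greens = np.full((2, 2, 3), (0, 255, 0), dtype=np.uint8)
blues = np.full((2, 2, 3), (0, 0, 255), dtype=np.uint8)

avg = average_image([reds, greens, blues])
print(avg[0, 0])  # → [85 85 85]
```

Without feature alignment, averaging real photos tends to produce a blur; the article's approach avoids this by averaging corresponding features rather than raw pixels.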
Igor Markov reviews limiting factors in the development of computing systems to help determine what is achievable, identifying loose limits and viable opportunities for advancements through the use of emerging technologies. He summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
In a society that has to understand increasingly big and complex datasets, EU researchers are turning to the subconscious for help in unraveling the deluge of information. Big Data refers to large amounts of data produced very quickly by a high number of diverse sources. Data can either be created by people or generated by machines, such as sensors gathering climate information, satellite imagery, digital pictures and videos...
The Michael J. Fox Foundation for Parkinson's Research (MJFF) and Intel have announced a collaboration aimed at improving research and treatment for Parkinson's disease — a neurodegenerative brain disease second only to Alzheimer's in worldwide prevalence. The collaboration includes a multiphase research study using a new big data analytics platform that detects patterns in participant data collected from wearable technologies.
The Research Data Alliance seeks to build the social and technical bridges that enable open sharing and reuse of data, so as to address cross-border and cross-disciplinary challenges faced by researchers. This September, the RDA will be hosting its Fourth Plenary Meeting. Ahead of the event, iSGTW spoke to Gary Berg-Cross, general secretary of the Spatial Ontology Community of Practice and a member of the US advisory committee for RDA.
Prof. Dr. Stefan Wrobel, M.S., is director of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and Professor of Computer Science at the University of Bonn. He studied Computer Science in Bonn and Atlanta, GA, USA (M.S. degree, Georgia Institute of Technology), receiving his doctorate from the University of Dortmund.
Dirk Slama is Director of Business Development at Bosch Software Innovations. Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global engineering group. As Conference Chair of the Bosch ConnectedWorld, Dirk helps shape the IoT strategy of Bosch. Dirk has over 20 years of experience in very large-scale application projects, system integration and Business Process Management. His international work experience includes projects for Lufthansa Systems, Boeing, AT&T, NTT DoCoMo, HBOS and others.
With five technical papers contending for one of the most prestigious awards in high performance computing (HPC), the Association for Computing Machinery's (ACM) awards committee has four months left to choose a winner of the 2014 Gordon Bell Prize. The winner of this prize will have demonstrated an outstanding achievement in HPC that helps solve critical science and engineering problems.
"High performance computing is solving some of the hardest problems in the world. But it's also at your local supermarket, under the hood of your car, and steering your investments.... It's finding signals in the noise."
Rackform iServ R4420 and R4422 high-density servers are 2U 4-node products based on TwinPro architecture that provide high throughput and processing capabilities. Each node supports up to two Intel Xeon E5-2600 v2 series processors, SAS3 hot-swap drives, and up to 512 GB of DDR3 RAM, as well as optional onboard InfiniBand or 10GbE networking.
The Arctica 4806xp open network switch is based on the Broadcom StrataXGS Trident II chipset. It is the first 10/40 Gigabit Ethernet Top-of-Rack (ToR) open switch using an x86 control processor, which provides a flexible platform for Software Defined Networking and customer defined applications.
Quadro K5200, K4200, K2200, K620 and K420 GPUs deliver an enterprise-grade visual computing platform with up to twice the application performance and data-handling capability of the previous generation. They enable users to interact with graphics applications from a Quadro-based workstation from essentially any device.
Just about everything you ever wanted to know about quantum simulators is summed up in a new review. As part of a Thematic Series on Quantum Simulations, the open access journal European Physical Journal Quantum Technology has published an overview of just what a quantum simulator is, namely a device that actively uses quantum effects to answer questions on model systems.
Ah, sad news in the Hice household. The patient is terminal, and I’m keeping it alive on life support. I keep wallowing in self-pity and ask myself, “Why me?” I feel as though I’m somehow responsible for the illness. Well, OK, I’m definitely responsible, why lie? I may as well have been sharing blood-soaked hypos with a drug addict, but what I did was equally careless. In one brief lapse of concentration, I didn’t examine the URL ...
LabVIEW 2014 system design software standardizes the way users interact with hardware through reuse of the same code and engineering processes across systems, which scales applications for the future. This saves time and money as technology advances, requirements evolve and time-to-market pressure increases.
A computer algorithm being developed by Brown University researchers enables users to instantly change the weather, time of day, season, or other features in outdoor photos with simple text commands. Machine learning and a clever database make it possible.
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW.
Cambridge, UK-based startup Optalysys has stated that it is only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer. The company will demonstrate its prototype, which meets NASA Technology Readiness Level 4, in January of next year.
Proficy HMI/SCADA - iFIX 5.8 is designed to enable users to drive better analytics and leverage more reliability, flexibility and scalability across the enterprise. The real-time information management and SCADA solution includes latest-generation visualization tools and a control engine.