For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects. Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of so-called “deep neural networks” finally matches the primate brain’s performance.
Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing...
The Switch Abstraction Interface (SAI) for Open Ethernet switch systems is designed for open...
NASA researchers began flight tests of computer software that shows promise in improving flight...
Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.
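The brittleness described here can be sketched in a few lines of Python (a generic illustration, not the protocol of any particular device): two fields share one byte and are identified purely by bit position, so a one-bit misalignment between sender and receiver garbles both fields.

```python
# A generic illustration: sender and receiver agree that one byte holds a
# 4-bit device id followed by a 4-bit command, identified only by position.
def pack(device_id: int, command: int) -> int:
    """Pack a 4-bit device id and a 4-bit command into a single byte."""
    return (device_id << 4) | command

def unpack(byte: int) -> tuple:
    """Recover (device_id, command) from their agreed bit positions."""
    return ((byte >> 4) & 0xF, byte & 0xF)

msg = pack(device_id=7, command=3)
print(unpack(msg))        # (7, 3): both sides agree on the layout

# Off by a single bit relative to each other, the very same data decodes
# into entirely different fields, and communication breaks down.
shifted = (msg << 1) & 0xFF
print(unpack(shifted))    # (14, 6): nothing resembling the original message
```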
Springer and Simula have launched a new book series, which aims to provide introductions to select research in computing. The series presents both a state-of-the-art disciplinary overview and raises essential critical questions in the field. All Simula SpringerBriefs on Computing are open access, allowing for faster sharing and wider dissemination of knowledge.
Intel demonstrated for the first time with Professor Stephen Hawking a new Intel-created communications platform to replace his decades-old system, dramatically improving his ability to communicate with the world. The customizable platform will be available to research and technology communities by January of next year. It has the potential to become the backbone of a modern, customizable system other researchers and technologists can use.
Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples. In a paper appearing at the Neural Information Processing Systems conference next week, MIT researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions.
Engineers have designed and built a prism-like device that can split a beam of light into different colors and bend the light at right angles, a development that could eventually lead to computers that use optics, rather than electricity, to carry data. The optical link is a tiny slice of silicon etched with a pattern that resembles a bar code. When a beam of light is shined at the link, two different wavelengths of light split off
NAG Compiler 6.0 accurately follows the Fortran and OpenMP programming language standards, supporting OpenMP 3.1 and Fortran 2008, 2003 and 95. Because the code is correct, applications developed with and checked by the NAG Compiler are ready to run on a wide range of current and future computer processors.
Of course, I remember the Berlin Wall being pummeled to gravel 25 years ago. I always hated what it symbolized, and I was excited. I was in Fayetteville, AR, at the finest hotel in town (a multi-story Holiday Inn at the time) when I saw the Germans storming the wall and whack-a-mole-ing the wall with ballpeen hammers. How I came to be in Arkansas is a rather remarkable and foreboding story.
An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development - NWChem is an open source high performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Lab in Richland, WA. I recently visited with Karol Kowalski, Capability Lead for NWChem Development, who works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.
Computer scientists have developed software that not only detects and eradicates never-before-seen viruses and other malware, but also automatically repairs damage caused by them. The software then prevents the invader from ever infecting the computer again.
ACM has announced that the funding level for the ACM A.M. Turing Award is now $1,000,000, to be provided by Google. The new amount is four times its previous level. The cash award, which goes into effect for the 2014 ACM Turing Award to be announced early next year, reflects the escalating impact of computing on daily life through the innovations and technologies it enables.
To help moderate the energy needs of increasingly power-hungry supercomputers, researchers at Sandia National Laboratories have released an application programming interface (API) with the goal of standardizing measurement and control of power- and energy-relevant features for HPC systems. The High Performance Computing — Power API specification is vendor-neutral and remains open to collaborators for future development.
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical HPC computing challenges being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public domain scientific applications, to prepare these critically important codes for multi-core
Highly motivated to organize the Argonne Training Program on Extreme-Scale Computing, Paul Messina reflects on what makes the program unique and a can’t-miss opportunity for the next generation of HPC scientists. ATPESC is an intense, two-week program that covers most of the topics and skills necessary to conduct computational science and engineering research on today’s and tomorrow’s high-end computers.
Researchers will begin testing a new computer game that they hope could hold the key to helping visually impaired children lead independent lives. Developed by a team of neuroscientists and video game designers, the Eyelander game features exploding volcanoes, a travelling avatar and animated landscapes.
As transistors get smaller, they also grow less reliable. Increasing their operating voltage can help, but that means a corresponding increase in power consumption. With information technology consuming a steadily growing fraction of the world’s energy supplies, some researchers and hardware manufacturers are exploring the possibility of simply letting chips botch the occasional computation.
In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.
The control of modern infrastructure, such as intelligent power grids, demands a great deal of computing capacity. Scientists have developed an algorithm that could revolutionize these processes: the new software lets researchers forgo considerable amounts of computing capacity, enabling what they call micro mining.
Bio-engineers are working on the development of biological computers with the aim of designing small circuits made from biological material that can be integrated into cells to change their functions. In the future, such developments could enable cancer cells to be reprogrammed, thereby preventing them from dividing at an uncontrollable rate. Stem cells could likewise be reprogrammed into differentiated organ cells.
New software algorithms have been shown to significantly reduce the time and material needed to produce objects with 3-D printers. Researchers from Purdue University have demonstrated one approach that reduces printing time by up to 30 percent and the quantity of support material by as much as 65 percent.
From performing surgery to driving cars, today’s robots can do it all. With chatbots recently hailed as passing the Turing test, it appears robots are becoming increasingly adept at posing as humans. As machines become ever more integrated into human lives, the need to imbue them with a sense of morality grows increasingly urgent. But can we really teach robots how to be good? An innovative piece of research looks into the matter.
The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, and provides tools and techniques for adapting legacy codes and increasing application performance on Intel Xeon processors.
Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”
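The idea can be illustrated with the classic Hamming(7,4) code, a minimal sketch unrelated to any specific system above: three parity bits protect four data bits, and the parity checks pinpoint and repair any single bit flipped by noise.

```python
# A minimal Hamming(7,4) sketch: 3 parity bits protect 4 data bits, and any
# single flipped bit can be located by the syndrome and repaired.
from typing import List

def hamming74_encode(data: List[int]) -> List[int]:
    """Encode 4 data bits into a 7-bit codeword (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code: List[int]) -> List[int]:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based error position; 0 means clean
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
noisy = list(codeword)
noisy[4] ^= 1                        # "noise" flips one bit in transit
print(hamming74_decode(noisy))       # [1, 0, 1, 1]: the damage is undone
```

The same decoder recovers the original data no matter which of the seven bits is flipped, which is what makes such codes robust against isolated noise.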
MIT researchers have developed an algorithm for bounding that they’ve successfully implemented in a robotic cheetah — a sleek, four-legged assemblage of gears, batteries and electric motors that weighs about as much as its feline counterpart. The team recently took the robot for a test run, where it bounded across the grass at a steady clip. The researchers estimate the robot may eventually reach speeds of up to 30 mph.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.