Programming

The Lead

A team of MIT neuroscientists has found that some computer programs can identify objects in images just as well as the primate brain. Courtesy of the researchers

Deep Computer Neural Networks Catch Up to Primate Brain

December 18, 2014 4:53 pm | by Anne Trafton, MIT | News | Comments

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects. Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

HPC Community Experts Weigh in on Code Modernization

December 17, 2014 4:33 pm | by Doug Black | Articles | Comments

Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing...

Open Ethernet Switch Abstraction Interface

December 17, 2014 3:51 pm | Product Releases | Comments

The Switch Abstraction Interface (SAI) to Open Ethernet switch systems is designed for open...

NASA Software May Help Increase Flight Efficiency, Decrease Aircraft Noise

December 16, 2014 11:03 am | by NASA | News | Comments

NASA researchers began flight tests of computer software that shows promise in improving flight...


Madhu Sudan and his colleagues have begun to describe theoretical limits on the degree of imprecision that communicating computers can tolerate, with very real implications for the design of communication protocols. Courtesy of Jose-Luis Olivares/MIT

New Theory could Yield More Reliable Communication Protocols

December 12, 2014 5:23 pm | by Larry Hardesty, MIT | News | Comments

Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.
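
To see how little misalignment it takes, consider a toy sketch (ours, not from the study) of a fixed-layout protocol in Python: if the receiver's framing is off by a single bit, every field parses to garbage.

```python
# Toy illustration (not from the article): a message with a fixed bit layout.
def pack(msg_type: int, length: int, payload: int) -> str:
    """Pack three fields into a bit string: 4-bit type, 8-bit length, 8-bit payload."""
    return f"{msg_type:04b}{length:08b}{payload:08b}"

def unpack(bits: str) -> tuple:
    """Parse the same fixed layout back out of a bit string."""
    return int(bits[0:4], 2), int(bits[4:12], 2), int(bits[12:20], 2)

frame = pack(msg_type=2, length=16, payload=170)
print(unpack(frame))             # (2, 16, 170) -- sender and receiver aligned
print(unpack("0" + frame[:-1]))  # (1, 8, 85)   -- same frame, off by one bit
```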

The Simula SpringerBriefs on Computing series will provide introductory volumes on the main topics within Simula’s expertise, including communications technology, software engineering and scientific computing.

New Open Access Book Series Introduces Essentials of Computing Science

December 11, 2014 3:58 pm | by Springer | News | Comments

Springer and Simula have launched a new book series, which aims to provide introductions to select research in computing. The series presents both a state-of-the-art disciplinary overview and raises essential critical questions in the field. All Simula SpringerBriefs on Computing are open access, allowing for faster sharing and wider dissemination of knowledge.

Professor Stephen Hawking using his Intel-powered communication system in his library at home.

Intel Provides Open Access to Hawking’s Advanced Communications Platform

December 10, 2014 4:09 pm | by Intel | News | Comments

Intel demonstrated for the first time with Professor Stephen Hawking a new Intel-created communications platform to replace his decades-old system, dramatically improving his ability to communicate with the world. The customizable platform will be available to research and technology communities by January of next year. It has the potential to become the backbone of a modern, customizable system other researchers and technologists can use.

A new machine-learning algorithm clusters data according to both a small number of shared features (circled in blue) and similarity to a representative example (far right). Courtesy of Christine Daniloff

Teaching by Example: Pattern-recognition Systems Convey What they Learn to Humans

December 9, 2014 2:00 pm | by Larry Hardesty, MIT | News | Comments

Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples. In a paper appearing at the Neural Information Processing Society’s conference next week, MIT researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions.
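
The paper itself isn't reproduced here, so the following Python sketch is only a rough stand-in for the idea: cluster the data with plain k-means, then explain each cluster to a human through a representative member (a prototype) and its most characteristic features.

```python
# A rough sketch (not the authors' actual model): cluster points, then explain
# each cluster with a prototype member and its strongest features.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic groups in a 5-dimensional feature space.
data = np.vstack([rng.normal(0, 1, (20, 5)) + [5, 5, 0, 0, 0],
                  rng.normal(0, 1, (20, 5)) + [0, 0, 0, 5, 5]])

centers = data[[0, -1]].copy()   # one seed point from each group
for _ in range(10):              # plain 2-means iterations
    labels = np.argmin([np.linalg.norm(data - c, axis=1) for c in centers], axis=0)
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

for k in range(2):
    members = data[labels == k]
    # Prototype: the real example nearest the cluster center.
    proto = members[np.argmin(np.linalg.norm(members - centers[k], axis=1))]
    # Characteristic features: the two dimensions with the largest mean values.
    print(f"cluster {k}: prototype={np.round(proto, 1)}, "
          f"key features={np.argsort(-centers[k])[:2]}")
```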

This tiny slice of silicon, etched in Jelena Vuckovic's lab at Stanford with a pattern that resembles a bar code, is one step on the way toward linking computer components with light instead of wires. Courtesy Vuckovic Lab

New Algorithm a Big Step toward Using Light to Transmit Data

December 9, 2014 1:38 pm | by Stanford University, Chris Cesare | News | Comments

Engineers have designed and built a prism-like device that can split a beam of light into different colors and bend the light at right angles, a development that could eventually lead to computers that use optics, rather than electricity, to carry data. The optical link is a tiny slice of silicon etched with a pattern that resembles a bar code. When a beam of light is shined at the link, two different wavelengths of light split off at right angles.

NAG Compiler 6.0

November 26, 2014 9:06 am | Nag Ltd | Product Releases | Comments

NAG Compiler 6.0 accurately follows the Fortran and OpenMP programming language standards, supporting OpenMP 3.1 and Fortran 2008, 2003 and 95. Because the compiler verifies that code conforms to these standards, applications developed with and checked by the NAG Compiler are ready to run on a wide range of current and future computer processors.

Brandenburg Gate on December 1, 1989. The structure is already freely accessible from the East, however, the crossing to the Western side will not be officially open until December 22nd.

Another Brick in the Wall: The Legendary Rescue of a Doomed Project

November 20, 2014 2:08 pm | by Randy C. Hice | Blogs | Comments

Of course, I remember the Berlin Wall being pummeled to gravel 25 years ago. I always hated what it symbolized, and I was excited.  I was in Fayetteville, AR, at the finest hotel in town (a multi-story Holiday Inn at the time) when I saw the Germans storming the wall and whack-a-mole-ing the wall with ballpeen hammers. How I came to be in Arkansas is a rather remarkable and foreboding story.

Karol Kowalski, Capability Lead for NWChem Development, works in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL.

Advancing Computational Chemistry with NWChem

November 18, 2014 3:07 pm | by Mike Bernhardt, HPC Community Evangelist, Intel | Articles | Comments

An interview with PNNL’s Karol Kowalski, Capability Lead for NWChem Development. NWChem is an open source, high-performance computational chemistry tool developed for the Department of Energy at Pacific Northwest National Laboratory in Richland, WA. I recently visited with Kowalski at the Environmental Molecular Sciences Laboratory (EMSL), where he works.

Eric Eide, University of Utah research assistant professor of computer science, stands in the computer science department's "Machine Room," where racks of Web servers sit. It is on these computers that Eide and computer science associate professor John Regehr tested their new software.

Self-Repairing Software Tackles Malware

November 14, 2014 3:54 pm | by University of Utah | News | Comments

Computer scientists have developed software that not only detects and eradicates never-before-seen viruses and other malware, but also automatically repairs damage caused by them. The software then prevents the invader from ever infecting the computer again.

Since its inception in 1966, ACM’s Turing Award has honored the computer scientists and engineers who created the systems and underlying theoretical foundations that have propelled the information technology industry.

Turing Award Prize Raised to $1 Million Cash

November 14, 2014 2:25 pm | by ACM | News | Comments

ACM has announced that the funding level for the ACM A.M. Turing Award is now $1,000,000, to be provided by Google. The new amount is four times its previous level. The cash award, which goes into effect for the 2014 ACM Turing Award to be announced early next year, reflects the escalating impact of computing on daily life through the innovations and technologies it enables.

The High Performance Computing — Power Application Program Interface is intended to standardize and control power and energy features of high-performance computing systems.

Interface Helps Standardize Supercomputer Power and Energy Systems

November 12, 2014 3:38 pm | by Sandia National Laboratories | News | Comments

To help moderate the energy needs of increasingly power-hungry supercomputers, researchers at Sandia National Laboratories have released an application programming interface (API) with the goal of standardizing measurement and control of power- and energy-relevant features for HPC systems. The High Performance Computing — Power API specification is vendor-neutral and remains open to collaborators for future development.
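
The article doesn't reproduce the specification itself, so the Python below is a purely hypothetical sketch of the idea it describes: one vendor-neutral interface for reading and capping power, with each vendor supplying its own backend. All names here are invented, not the Power API's.

```python
# Hypothetical sketch only -- not the actual Power API. It mimics the idea the
# article describes: a vendor-neutral interface for power measurement/control.
from abc import ABC, abstractmethod

class PowerObject(ABC):
    """A measurable, controllable component (node, socket, memory, ...)."""
    @abstractmethod
    def read_power_watts(self) -> float: ...
    @abstractmethod
    def set_power_cap_watts(self, cap: float) -> None: ...

class VendorXSocket(PowerObject):
    """Each vendor supplies its own backend behind the common interface."""
    def read_power_watts(self) -> float:
        return 95.0  # would query vendor-specific counters here
    def set_power_cap_watts(self, cap: float) -> None:
        print(f"capping socket at {cap} W")  # would program vendor hardware here

def throttle(obj: PowerObject, budget: float) -> None:
    """Tool code written once against the interface, reusable across vendors."""
    if obj.read_power_watts() > budget:
        obj.set_power_cap_watts(budget)

throttle(VendorXSocket(), budget=90.0)
```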

The IPCC at Lawrence Berkeley National Laboratory is performing code modernization work on NWChem.

A Focus on Code Modernization: Observing Year One of the Intel Parallel Computing Centers

November 10, 2014 11:11 am | by Doug Black | Articles | Comments

One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers program. With the great majority of the world’s technical HPC computing challenges being handled by systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public domain scientific applications, to prepare these critically important codes for multi-core and many-core architectures.


A Q&A with Paul Messina, Director of Science for the Argonne Leadership Computing Facility

November 6, 2014 4:22 pm | by Brian Grabowski, Argonne National Laboratory | Articles | Comments

Highly motivated to organize the Argonne Training Program on Extreme-Scale Computing, Paul Messina reflects on what makes the program unique and a can’t-miss opportunity for the next generation of HPC scientists. ATPESC is an intense, two-week program that covers most of the topics and skills necessary to conduct computational science and engineering research on today’s and tomorrow’s high-end computers.

The "Eyelander" vision training game: Researchers from the University of Lincoln, UK, and the Wesc Foundation have developed a new computer game which could hold the key to helping visually impaired children lead independent lives. It is now in clinical trials.

Computational Neuroscientist Develops Computer Game to Aid Visually Impaired

November 3, 2014 11:14 am | by University of Lincoln | News | Comments

Researchers will begin testing a new computer game that they hope could hold the key to helping visually impaired children lead independent lives. Developed by a team of neuroscientists and video game designers, the Eyelander game features exploding volcanoes, a travelling avatar and animated landscapes.

A new system lets programmers identify sections of their code that can tolerate a little error. The system then determines which program instructions to assign to unreliable hardware components, to maximize energy savings while still meeting the programmers' accuracy requirements.

Harnessing Error-prone Chips Trades Computational Accuracy for Energy Savings

October 31, 2014 2:09 pm | by Larry Hardesty, MIT | News | Comments

As transistors get smaller, they also grow less reliable. Increasing their operating voltage can help, but that means a corresponding increase in power consumption. With information technology consuming a steadily growing fraction of the world’s energy supplies, some researchers and hardware manufacturers are exploring the possibility of simply letting chips botch the occasional computation.
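
As a toy model of that trade-off (our sketch, with made-up energy and fault numbers), imagine tagging error-tolerant operations to run on cheaper, occasionally faulty hardware and comparing the energy bill against fully reliable execution:

```python
# Toy model (not MIT's system): error-tolerant ops run on simulated unreliable,
# low-power hardware; we compare energy use and the resulting output error.
import random

random.seed(1)
RELIABLE_COST, UNRELIABLE_COST, FAULT_RATE = 1.0, 0.6, 0.01  # assumed numbers

def add(a, b, tolerant, energy):
    """One addition; tolerant ops cost less but may flip a low-order bit."""
    energy[0] += UNRELIABLE_COST if tolerant else RELIABLE_COST
    out = a + b
    if tolerant and random.random() < FAULT_RATE:
        out ^= 1  # simulated hardware fault: flip the lowest bit
    return out

energy = [0.0]
pixels = list(range(1000))       # pixel averaging tolerates small errors
total = 0
for p in pixels:
    total = add(total, p, tolerant=True, energy=energy)
exact = sum(pixels)
print(f"energy used: {energy[0]:.0f} vs {float(len(pixels)):.0f} on reliable "
      f"hardware; relative error: {abs(total - exact) / exact:.2e}")
```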

MIT researchers explain their new visualization system that can project a robot's "thoughts." Video screenshot courtesy of Melanie Gonick/MIT

Projecting a Robot’s Intentions: New Spin on Virtual Reality to Read Robots’ Minds

October 30, 2014 4:46 pm | by Jennifer Chu, MIT | News | Comments

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

The software stores only the changes of the system state at specific points in time. Courtesy of Université du Luxembourg, Boshua

New Algorithm Provides Enormous Reduction in Computing Overhead

October 30, 2014 4:37 pm | by University of Luxembourg | News | Comments

The control of modern infrastructure, such as intelligent power grids, needs substantial computing capacity. Scientists have developed an algorithm that might revolutionize these processes. With their new software, researchers can forgo considerable amounts of computing capacity, enabling what they call "micro mining."
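
The article gives few specifics, but the core idea it describes (storing only state changes rather than full snapshots) can be sketched in a few lines of Python, using invented grid data:

```python
# A minimal sketch of the idea, with assumed details: keep one baseline
# snapshot plus per-step deltas, and rebuild any step's full state on demand.
baseline = {"node1": 230.0, "node2": 231.5, "node3": 229.8}  # full state at t=0
deltas = [
    {"node2": 232.0},                  # t=1: only node2 changed
    {},                                # t=2: nothing changed
    {"node1": 228.9, "node3": 230.2},  # t=3: two nodes changed
]

def state_at(t: int) -> dict:
    """Reconstruct the full state at step t by replaying the deltas."""
    state = dict(baseline)
    for step in deltas[:t]:
        state.update(step)
    return state

print(state_at(3))  # {'node1': 228.9, 'node2': 232.0, 'node3': 230.2}
```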

Researchers are expanding the applicability of biological circuits. Background: Microscopic image of human kidney cells with fluorescent proteins in cell culture.

Constructing Precisely Functioning, Programmable Bio-computers

October 23, 2014 3:40 pm | by Fabio Bergamin, ETH | News | Comments

Bio-engineers are working on the development of biological computers with the aim of designing small circuits made from biological material that can be integrated into cells to change their functions. In the future, such developments could enable cancer cells to be reprogrammed, thereby preventing them from dividing at an uncontrollable rate. Stem cells could likewise be reprogrammed into differentiated organ cells.

New software algorithms reduce the time and material needed to produce objects with 3-D printers. Here, the wheel on the left was produced with conventional software and the one on the right with the new algorithms. Courtesy of Purdue University/Bedrich B

New Software Algorithms Speed 3-D Printing, Reduce Waste

October 22, 2014 12:40 pm | by Emil Venere, Purdue University | News | Comments

New software algorithms have been shown to significantly reduce the time and material needed to produce objects with 3-D printers. Researchers from Purdue University have demonstrated one approach that reduces printing time by up to 30 percent and the quantity of support material by as much as 65 percent.

An innovative piece of research looks into the matter of machine morality, and questions whether it is “evil” for robots to masquerade as humans.

How to Train your Robot: Can We Teach Robots Right from Wrong?

October 14, 2014 12:46 pm | by Taylor & Francis | News | Comments

From performing surgery to driving cars, today’s robots can do it all. With chatbots recently hailed as passing the Turing test, it appears robots are becoming increasingly adept at posing as humans. While machines are becoming ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good? An innovative piece of research looks into the matter of machine morality and questions whether it is “evil” for robots to masquerade as humans.


2015 Rice Oil & Gas High Performance Computing Workshop

October 13, 2014 2:45 pm | by Rice University | Events

The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University, is the premier meeting place for discussion of challenges and opportunities around high performance computing, information technology, and computational science and engineering.

Rob Farber is an independent HPC expert to startups and Fortune 100 companies, as well as government and academic organizations.

High Performance Parallelism Pearls: A Teaching Juggernaut

October 13, 2014 9:52 am | by Rob Farber | Blogs | Comments

High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters designed to get readers running on the Intel Xeon Phi family of coprocessors, plus provide tools and techniques to adapt legacy codes, as well as increase application performance on Intel Xeon processors. 


Reaching the Limit of Error-Correcting Codes

October 2, 2014 3:44 pm | by Larry Hardesty, MIT | News | Comments

Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”
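
A classic minimal example of the idea is the 3x repetition code, which corrects any single flipped bit per block by majority vote; real codes achieve far better efficiency, which is the limit the work described here concerns.

```python
# The 3x repetition code: the simplest error-correcting code. Each bit is sent
# three times; majority vote on each block corrects any single bit flip.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i+3]) >= 2) for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] ^= 1                    # noise flips one bit in the second block
assert decode(sent) == message  # majority vote still recovers the message
print(decode(sent))
```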

The team recently took the MIT cheetah-bot for a test run, where it bounded across the grass at a steady clip.  Courtesy of Jose-Luis Olivares/MIT

Algorithm Enables Untethered Cheetah Robot to Run and Jump

September 16, 2014 2:14 pm | by Jennifer Chu, MIT | News | Comments

MIT researchers have developed an algorithm for bounding that they’ve successfully implemented in a robotic cheetah — a sleek, four-legged assemblage of gears, batteries and electric motors that weighs about as much as its feline counterpart. The team recently took the robot for a test run, where it bounded across the grass at a steady clip. The researchers estimate the robot may eventually reach speeds of up to 30 mph.

“Scalability and performance means taking a careful look at the code modernization opportunities that exist for both message passing and threads as well as opportunities for vectorization and SIMDization.” Rick Stevens, Argonne National Laboratory

Extending the Lifespan of Critical Resources through Code Modernization

September 9, 2014 2:05 pm | by Doug Black | Articles | Comments

As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.
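
As one small, generic illustration of what such modernization can look like (not drawn from any specific code named here), compare a legacy-style scalar loop with a vectorized NumPy equivalent that maps onto SIMD hardware:

```python
# Generic illustration of vectorization, one of the modernization opportunities
# mentioned above: the same computation as a scalar loop and as one vector op.
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Legacy-style scalar loop: one element at a time.
out = np.empty_like(a)
for i in range(len(a)):
    out[i] = 2.0 * a[i] + b[i]

# Modernized: a single vectorized expression over whole arrays, which NumPy
# executes in compiled, SIMD-friendly code.
out_vec = 2.0 * a + b
assert np.allclose(out, out_vec)
```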
