Optimization algorithms, which try to find the minimum values of mathematical functions, are everywhere in engineering. Among other things, they’re used to evaluate design tradeoffs, to assess control systems, and to find patterns in data. One way to solve a difficult optimization problem is to first reduce it to a related but much simpler problem, then gradually add complexity back in ...
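The "simplify, then add complexity back" strategy is often called a continuation method or graduated optimization. A minimal sketch of the idea, with an illustrative objective and constants of our own choosing (nothing here is taken from the research described): smooth the function until its local minima disappear, descend on the easy version, then re-sharpen it in stages while tracking the minimizer.

```python
import math

def f(x):
    # A non-convex toy objective with several local minima; its global
    # minimum sits near x ~ 1.56.
    return math.sin(3 * x) + 0.1 * (x - 1.0) ** 2

def smoothed(x, sigma):
    # Gaussian-blurred version of f. Blurring sin(3x) with a Gaussian of
    # width sigma scales it by exp(-9 * sigma**2 / 2); blurring the
    # quadratic only adds a constant, which doesn't move the minimizer.
    # Large sigma washes out the wiggles, leaving a nearly convex problem.
    return math.exp(-4.5 * sigma ** 2) * math.sin(3 * x) + 0.1 * (x - 1.0) ** 2

def descend(g, x, lr=0.05, steps=400, h=1e-4):
    # Plain gradient descent with a numerical derivative.
    for _ in range(steps):
        grad = (g(x + h) - g(x - h)) / (2 * h)
        x -= lr * grad
    return x

x = 4.0  # a starting point from which plain descent on f gets stuck
for sigma in (2.0, 1.0, 0.5, 0.0):  # gradually add the complexity back
    x = descend(lambda t, s=sigma: smoothed(t, s), x)

print(x)  # ends near the global minimum around 1.56
```

Descending directly on `f` from the same start would fall into a nearby local minimum; solving the smoothed problem first steers each stage's answer into the right basin of the next.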
IBM has announced the first winner of its Watson University Competition, part of the company's...
For household robots to be practical, they need to be able to recognize the objects they’re...
Researchers have found that, based on enough Facebook Likes, computers can judge your personality traits better than your friends, family and even your partner. Using a new algorithm, researchers have calculated the average number of Likes artificial intelligence (AI) needs to draw personality inferences about you as accurately as your partner or parents.
NASA and Nissan have announced the formation of a five-year research and development partnership to advance autonomous vehicle systems and prepare for commercial application of the technology. Researchers from NASA’s Ames Research Center and Nissan’s U.S. Silicon Valley Research Center will focus on autonomous drive systems, human-machine interface solutions, network-enabled applications, and software analysis and verification.
In 2007, Google unleashed a fleet of cars with roof-mounted cameras to provide street-level images of roads around the world. Now, an MIT spinout is bringing similar drive-by innovations to energy efficiency by deploying cars with thermal-imaging rooftop rigs that create heat maps of thousands of homes and buildings per hour, detecting fixable leaks in building envelopes — windows, doors, walls and foundations — to help curb energy loss.
Researchers are closer to creating underwater robotic creatures that not only behave like the real thing but also have a brain of their own. In the near future, it would not be too tall an order for the National University of Singapore (NUS) team to produce a swarm of tiny autonomous robotic sea turtles and fish, for example, to perform hazardous missions, such as detecting nuclear waste underwater or other tasks too dangerous for humans.
For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects. Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.
In the decade since the human genome was sequenced, scientists and doctors have struggled to answer an all-consuming question: Which DNA mutations cause disease? A new computational technique developed at the University of Toronto may now be able to tell us. A team has developed the first method for ‘ranking’ genetic mutations based on how living cells ‘read’ DNA, revealing how likely any given alteration is to cause disease.
Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples. In a paper appearing at the Neural Information Processing Systems (NIPS) conference next week, MIT researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions.
A Georgia Tech professor recently offered an alternative to the celebrated “Turing Test” to determine whether a machine or computer program exhibits human-level intelligence. The Turing Test — originally called the Imitation Game — was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person.
Cognitive apps are on the market today and continue to change the way professionals and consumers make decisions. To help accelerate this transformation, the IBM Watson Group announced an investment in Pathway Genomics, a clinical laboratory that offers genetic testing services globally, to help deliver the first-ever consumer-facing cognitive app based on a user's personal genetic makeup.
Using a computational algorithm, researchers have developed a neural network that allows a small robot to recognize different patterns, such as images, fingerprints, handwriting, faces, bodies, voice frequencies and DNA sequences. Researcher Nancy Guadalupe Arana Daniel focused on the recognition of human silhouettes in disaster situations.
From performing surgery to driving cars, today’s robots can do it all. With chatbots recently hailed as passing the Turing test, it appears robots are becoming increasingly adept at posing as humans. As machines become ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good? An innovative piece of research looks into the matter.
Next-gen leaders push themselves every day to answer this key question: How can my organization make a difference? IBM is helping to deliver the answer with new apps powered by Watson to improve the quality of life. IBM's Watson is a groundbreaking platform with the ability to interact in natural language, process vast amounts of disparate forms of big data and learn from each interaction.
IBM Watson Group's global headquarters, at 51 Astor Place in New York City's Silicon Alley, is open for business. The Watson headquarters will serve as a home base for more than 600 IBM Watson employees, just part of the more than 2,000 IBMers dedicated to Watson worldwide. In addition to a sizeable employee presence, IBM is opening its doors to area developers and entrepreneurs, hosting industry workshops, seminars and networking.
Researchers have equipped a robot with a novel tactile sensor that lets it grasp a USB cable draped freely over a hook and insert it into a USB port. The sensor is an adaptation of a technology called GelSight, which was developed at MIT. The new sensor isn’t as sensitive as the original GelSight sensor, which could resolve details on the micrometer scale. But it’s smaller, and its processing algorithm is faster.
“Seeking educational curriculum researchers. Humans need not apply.” A Washington State University professor has figured out a dramatically easier and more cost-effective way to do research on science curriculum in the classroom — and it could include playing video games. Called “computational modeling,” it involves a computer “learning” student behavior and then “thinking” as students would.
Face recognition software measures various parameters in a mug shot, such as the distance between the person’s eyes and the height from the lip to the top of the nose, and then compares those measurements with photos of people in a database that have been tagged with a given name. Now, research looks to take that a step further by recognizing the emotion portrayed by a face.
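As a toy illustration of the matching step described above, with entirely made-up measurements and names (no real system stores faces this simply): each tagged photo is reduced to a small vector of geometric measurements, and a probe photo is matched to the tagged vector it is closest to.

```python
import math

# Hypothetical normalized measurements per tagged photo, e.g.
# (eye distance, lip-to-nose height, face width).
database = {
    "alice": (0.42, 0.31, 0.88),
    "bob":   (0.51, 0.27, 0.95),
    "carol": (0.38, 0.35, 0.80),
}

def identify(probe, db):
    # Nearest-neighbor match: the tagged entry whose measurements are
    # closest to the probe's (in Euclidean distance) is the best guess.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(db, key=lambda name: dist(probe, db[name]))

print(identify((0.41, 0.32, 0.86), database))  # closest to "alice"
```

Emotion recognition extends the same idea: instead of comparing against identities, the measured features are compared against prototypes for expressions such as happiness or surprise.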
Mayo Clinic and IBM have announced plans to pilot Watson, the IBM cognitive computer, to match patients more quickly with appropriate clinical trials. A proof-of-concept phase is currently underway, with the intent to introduce it into clinical use in early 2015. Researchers hope the increased speed also will speed new discoveries.
New software launched by researchers at Birmingham City University aims to reduce the long periods of training and expensive equipment required to make music, while also giving musicians more intuitive control over the music that they produce. The developed software, showcased at the British Science Festival, trains computers to understand the language of musicians when applying effects to their music.
CAPTCHA services that require users to recognize and type in static distorted characters may be a method of the past, according to studies published by researchers at the University of Alabama at Birmingham. Nitesh Saxena led a team that investigated the security and usability of the next generation of CAPTCHAs that are based on simple computer games.
In the age of big data, visualization tools are vital. With a single glance at a graphic display, a human being can recognize patterns that a computer might fail to find even after hours of analysis. But what if there are aberrations in the patterns? Or what if there’s just a suggestion of a visual pattern that’s not distinct enough to justify any strong inferences? Or what if the pattern is clear, but not what was to be expected?
The first thousand-robot flash mob has assembled at Harvard University. Instead of one highly-complex robot, a “kilo” of robots collaborate, providing a simple platform for the enactment of complex behaviors. Called Kilobots, these extremely simple robots are each just a few centimeters across and stand on three pin-like legs.
Collecting Just the Right Data: When you can’t collect all you need, a new algorithm tells you which to target. July 28, 2014, by Larry Hardesty, MIT.
Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon. But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications there may just not be enough time to crunch all the available data.
Music fans and critics know that the music of the Beatles underwent a dramatic transformation in just a few years. But, until now, there hasn’t been a scientific way to measure the progression. Computer scientists at Lawrence Technological University have developed an artificial intelligence algorithm that can analyze and compare musical styles, enabling research into their musical progression.
Alan Turing led a team of code breakers at Bletchley Park which cracked the German Enigma machine cypher during WWII, but that is far from being his only legacy. In the year of the 100th anniversary of his birth, researchers published a series of ‘Turing tests’ in the Journal of Experimental & Theoretical Artificial Intelligence; these entailed a series of five-minute conversations between human and machine or human and human.
With over 700 new functions — the single biggest jump in new functionality in the software's history — Mathematica 10 is the first version of Mathematica based on the complete Wolfram Language. Integration with the Wolfram Cloud and access to the expanded Wolfram Knowledgebase open up new possibilities for intelligent computation and deployment.