A Georgia Tech professor recently offered an alternative to the celebrated “Turing Test” to determine whether a machine or computer program exhibits human-level intelligence. The Turing Test — originally called the Imitation Game — was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person.
Cognitive apps are in market today and continue to change the way professionals and consumers...
Through a computational algorithm, researchers have developed a neural network that allows a...
From performing surgery to driving cars, today’s robots can do it all. With chatbots recently...
Next-gen leaders push themselves every day to answer this key question: How can my organization make a difference? IBM is helping to deliver the answer with new apps powered by Watson to improve the quality of life. IBM's Watson is a groundbreaking platform with the ability to interact in natural language, process vast amounts of disparate forms of big data and learn from each interaction.
IBM Watson Group's global headquarters, at 51 Astor Place in New York City's Silicon Alley, is open for business. The Watson headquarters will serve as a home base for more than 600 IBM Watson employees, just part of the more than 2,000 IBMers dedicated to Watson worldwide. In addition to a sizeable employee presence, IBM is opening its doors to area developers and entrepreneurs, hosting industry workshops, seminars and networking.
Researchers have equipped a robot with a novel tactile sensor that lets it grasp a USB cable draped freely over a hook and insert it into a USB port. The sensor is an adaptation of a technology called GelSight, which was developed at MIT. The new sensor isn’t as sensitive as the original GelSight sensor, which could resolve details on the micrometer scale. But it’s smaller, and its processing algorithm is faster.
“Seeking educational curriculum researchers. Humans need not apply.” A Washington State University professor has figured out a dramatically easier and more cost-effective way to do research on science curriculum in the classroom — and it could include playing video games. Called “computational modeling,” it involves a computer “learning” student behavior and then “thinking” as students would.
Face recognition software measures various parameters in a mug shot, such as the distance between the person’s eyes, the height from lip to the top of the nose and various other metrics, and then compares those measurements with photos of people in a database that have been tagged with a given name. Now, research looks to take that one step further by recognizing the emotion a face portrays.
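The matching step described above amounts to a nearest-neighbor search over measurement vectors. A minimal sketch, with invented names and numbers purely for illustration:

```python
import math

# Hypothetical gallery: each tagged name maps to a vector of facial
# measurements (e.g., eye distance, lip-to-nose height), arbitrary units.
gallery = {
    "alice": [62.0, 48.5, 30.1],
    "bob":   [58.3, 52.0, 28.7],
}

def euclidean(a, b):
    """Distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, gallery):
    """Return the tagged name whose measurements are closest to the probe."""
    return min(gallery, key=lambda name: euclidean(probe, gallery[name]))

print(best_match([61.5, 48.0, 30.4], gallery))  # closest to "alice"
```

Real systems use far richer features and learned distance metrics, but the compare-against-tagged-database idea is the same.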
Mayo Clinic and IBM have announced plans to pilot Watson, the IBM cognitive computer, to match patients more quickly with appropriate clinical trials. A proof-of-concept phase is currently underway, with the intent to introduce it into clinical use in early 2015. Researchers hope the increased speed also will speed new discoveries.
New software launched by researchers at Birmingham City University aims to reduce the long periods of training and expensive equipment required to make music, while also giving musicians more intuitive control over the music that they produce. The developed software, showcased at the British Science Festival, trains computers to understand the language of musicians when applying effects to their music.
CAPTCHA services that require users to recognize and type in static distorted characters may be a method of the past, according to studies published by researchers at the University of Alabama at Birmingham. Nitesh Saxena led a team that investigated the security and usability of the next generation of CAPTCHAs that are based on simple computer games.
In the age of big data, visualization tools are vital. With a single glance at a graphic display, a human being can recognize patterns that a computer might fail to find even after hours of analysis. But what if there are aberrations in the patterns? Or what if there’s just a suggestion of a visual pattern that’s not distinct enough to justify any strong inferences? Or what if the pattern is clear, but not what was to be expected?
The first thousand-robot flash mob has assembled at Harvard University. Instead of one highly complex robot, a “kilo” of robots collaborates, providing a simple platform for the enactment of complex behaviors. Called Kilobots, these extremely simple robots are each just a few centimeters across and stand on three pin-like legs.
Collecting Just the Right Data: When you can’t collect all you need, new algorithm tells you which to target (July 28, 2014, by Larry Hardesty, MIT)
Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon. But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications there may just not be enough time to crunch all the available data.
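The general idea of deciding which measurement to take next, when collecting data is expensive, can be illustrated with a simple active-learning heuristic (not the researchers’ actual algorithm): target the point where current predictions are most uncertain. All names and numbers below are invented for illustration.

```python
# Toy "which data to collect next" heuristic: among candidate sites we
# have not yet measured, pick the one where an ensemble of simple models
# disagrees the most (highest variance across their predictions).

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Predictions from three hypothetical models for four unmeasured sites.
predictions = {
    "site_a": [1.0, 1.1, 0.9],
    "site_b": [2.0, 3.5, 0.4],   # the models disagree strongly here
    "site_c": [0.2, 0.3, 0.25],
    "site_d": [5.0, 5.1, 5.05],
}

# Target the site whose prediction is most uncertain.
next_site = max(predictions, key=lambda s: variance(predictions[s]))
print(next_site)  # "site_b"
```

Measuring where models disagree tends to shrink overall uncertainty fastest, which is why this family of heuristics is common when each measurement is costly.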
Music fans and critics know that the music of the Beatles underwent a dramatic transformation in just a few years. But, until now, there hasn’t been a scientific way to measure the progression. Computer scientists at Lawrence Technological University have developed an artificial intelligence algorithm that can analyze and compare musical styles, enabling research into their musical progression.
Alan Turing led a team of code breakers at Bletchley Park that cracked the German Enigma machine cipher during WWII, but that is far from his only legacy. In the year of the 100th anniversary of his birth, researchers published a series of ‘Turing tests’ in the Journal of Experimental & Theoretical Artificial Intelligence; these entailed a series of five-minute conversations between human and machine or human and human.
With over 700 new functions — the single biggest jump in new functionality in the software's history — Mathematica 10 is the first version of Mathematica based on the complete Wolfram Language. Integration with the Wolfram Cloud and access to the expanded Wolfram Knowledgebase open up new possibilities for intelligent computation and deployment.
Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments. That type of model-building gets complicated, however, in cases in which clusters of robots work as teams.
In today’s digitally driven world, access to information appears limitless. But when you have something specific in mind whose name you don’t know, like that niche kitchen tool you saw at a friend’s house, it can be surprisingly hard to sift through the volume of information online and know how to search for it. Or, the opposite problem can occur — we can look up anything on the Internet, but how can we be sure we're finding every...
For every thought or behavior, the brain erupts in a riot of activity, as thousands of cells communicate via electrical and chemical signals. Each nerve cell influences others within an intricate, interconnected neural network. And connections between brain cells change over time in response to our environment.
Google says its self-driving cars are motoring along: they can navigate freeways comfortably, albeit with a driver ready to take control. But city driving — with its obstacle course of stray walkers, bicyclists and blind corners — has been a far greater challenge for the cars' computers.
University of Huddersfield experts are in charge of a worldwide competition that is designed to encourage breakthroughs in the use of artificial intelligence for automated planning and scheduling. High performance computers at the University are being used to test the dozens of complex software...
Costas Bekas is managing the Foundations of Cognitive Computing group at IBM Research-Zurich. He received his B.Eng., M.Sc. and Ph.D., all from the Computer Engineering & Informatics Department, University of Patras, Greece, in 1998, 2001 and 2003, respectively. From 2003 to 2005, he worked as a postdoctoral associate with Prof. Yousef Saad at the Computer Science & Engineering Department, University of Minnesota.
A joint study by researchers at the University of California, San Diego and the University of Toronto has found that a computer system spots real or faked expressions of pain more accurately than people can. The research team found that humans could not discriminate real from faked expressions of pain better than random chance.
Big Data tools such as Grok and IBM Watson are enabling large organizations to behave more like agile startups. Of the transformative technology developments that have ushered in the current frenzy of activity along the information superhighway, the 1994 invention of the “Wiki” by Ward Cunningham is among the most disruptive.
In this segment, we look at the Never Ending Image Learner. A computer program is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.
Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.
Alan Turing: His Work and Impact was selected for the top honor, the R.R. Hawkins Award, at the 38th annual PROSE Awards. Celebrating the centenary of his birth, the book was praised as a fitting tribute to the life of the legendary mathematical and scientific genius, considered to be the father of theoretical computer science and artificial intelligence.