Teaching computers to see — by learning to see like computers

Thursday, September 19, 2013 - 03:30 in Mathematics & Economics

Object-recognition systems — software that tries to identify objects in digital images — typically rely on machine learning. They comb through databases of previously labeled images and look for combinations of visual features that seem to correlate with particular objects. Then, when presented with a new image, they try to determine whether it contains one of the previously identified combinations of features.

Even the best object-recognition systems, however, succeed only around 30 or 40 percent of the time — and their failures can be totally mystifying. Researchers are divided in their explanations: Are the learning algorithms themselves to blame? Or are they being applied to the wrong types of features? Or — the “big-data” explanation — do the systems just need more training data?

To attempt to answer these and related questions, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created a system that, in effect, allows humans to see...
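The pipeline described above — learn which feature combinations correlate with each object label, then match a new image's features against them — can be sketched in miniature. The code below is purely illustrative and is not MIT's system: it uses a toy nearest-centroid classifier over invented 2-D feature vectors, standing in for the learned feature–object associations.

```python
# Minimal sketch (assumed toy data, not the article's actual system):
# "training" averages the feature vectors seen for each label, and
# "classification" picks the label whose average is nearest.
import math
from collections import defaultdict

def train(samples):
    """samples: list of (feature_vector, label) -> {label: centroid}."""
    sums = {}
    counts = defaultdict(int)
    for features, label in samples:
        if label not in sums:
            sums[label] = list(features)
        else:
            sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in vec]
            for label, vec in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest to the new features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy "labeled image database": 2-D feature vectors per object class.
labeled = [
    ([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "car"), ([0.2, 0.8], "car"),
]
model = train(labeled)
print(classify(model, [0.85, 0.15]))  # features of a "new image" -> cat
```

Real systems replace the hand-picked vectors with learned descriptors (edges, textures, part detectors) and the nearest-centroid rule with far more powerful classifiers, but the train-then-match structure is the same.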

Read the whole article on MIT Research
