MIT News published an article commenting on the test images and standards used in object-recognition research in computer vision.
“With only two types of objects to distinguish, this test should have been easier for the ‘toy’ computer model, but it proved harder,” Cox says. The team’s conclusion: “Our model did well on the Caltech101 image set not because it is a good model but because the ‘natural’ images fail to adequately capture real-world variability.”
As a result, the researchers argue for revamping the current standards and images used by the computer-vision community to compare models and measure progress. Before computers can approach the performance of the human brain, they say, scientists must better understand why the task of object recognition is so difficult and the brain’s abilities are so impressive.
The problem mentioned here is not new; it has been the central problem all along. In computer vision, we design data to make algorithms successful: the goal is not to develop algorithms or theory that will actually work, but to design data that will show the success of a theory. A similar practice among image-retrieval researchers is to use the Corel database for testing retrieval algorithms. This is the same situation as above: carefully selected data that does little to capture the variability of the real world.
Computer vision could be one of the most powerful disciplines in computer science, serving as a central mechanism for bringing real-world data into computing. But this will not happen until we learn to face real problems, rather than ignore them, develop theories around them, and customize data to prove those theories with the sole goal of publishing a research paper.