Sometimes it is difficult to separate vision from hype. A recent report about search engines (see News.com report) is a good example of this. It reports:
“Search engines try to train us to become good keyword searchers. We dumb down our intelligence so it will be natural for the computer,” said Pell, whose company, Powerset, is based in Palo Alto, Calif.
“The big shift that will happen in society is that instead of moving human expressions and interactions into what’s easy for the computer, we’ll move computers’ abilities to handle expressions that are natural for the human,” he said.
Powerset, which hasn’t divulged its launch date yet, is using AI to train computers not just to read words on the page, but make connections between those words and make inferences in the language. That way a search engine could think through and redefine relevance beyond the most popular page or the site with the most occurrences of keywords entered in a search box.
AI researchers have harbored similar dreams since the 1960s, and Moore's law has played only a small role in realizing them; more important is our ability to structure and analyze natural language. Progress has been made, but it has also convinced researchers of just how difficult the problem is.
Later, the article gets to an area that is close to my own research and experience. It talks about Riya.com and says:
Imagine uploading a picture to the Web of your favorite ratty couch, and then asking a search engine to find another one like it. The tool wouldn’t just produce a similar couch but it might even point to a store where you could buy it.
Right now, most image search engines rely on keywords, or descriptive text that is linked to a photo in order to retrieve a list of results that match a Web surfer’s keyword query. That method can be unreliable, however, if photos or images lack sufficient descriptions.
Bradley Horowitz, where are you? Do you hear these words? They are exactly the same (in the sense of matching that this article refers to) as the ones we used in 1993 when we started Virage. And now Bradley jokes in his talks about Computer Vision being tough, hence the use of tags on Flickr.
My very best wishes to Munjal Shah in his efforts. I want him to be successful; I want to see visual search become real. My own research at UC Irvine is very much in this direction, but that is research, not a product.