The NYT has an article on the increasing use of ‘artificial intelligence’ in Web applications. It points out that many emerging systems and applications are going in that direction:
Their goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide, and even provide the foundation for systems that can reason in a human fashion. That level of artificial intelligence, with machines doing the thinking instead of simply following commands, has eluded researchers for more than half a century.
Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.
This article is referring to progress on the Semantic Web. When Web searches deal only with keywords, search complexity grows linearly, but once relations between entities or events are required, the complexity grows at least quadratically. By using metadata, and preprocessing to extract derived metadata, one can reduce this complexity at search time. The trick, then, is to anticipate users' queries when deciding what to precompute. Clearly the problem is much more complex than that, but our technology is also much more sophisticated, and our understanding of the problems and solutions has improved. It will be interesting to see what kind of technology emerges in the next few years to solve this problem.
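The complexity argument can be illustrated with a toy sketch (all names and data here are hypothetical, not from the article): finding pairs of documents that share an entity by brute force compares every pair of documents, which is quadratic, while precomputing an inverted index of derived metadata lets the query touch only the documents that actually share an entity.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical toy corpus: each document tagged with the entities it mentions.
docs = {
    "d1": {"Paris", "Louvre"},
    "d2": {"Paris", "Eiffel Tower"},
    "d3": {"Rome", "Colosseum"},
}

# Naive relational query: which document pairs mention a common entity?
# Comparing every pair is O(n^2) in the number of documents.
naive = [(a, b) for a, b in combinations(sorted(docs), 2)
         if docs[a] & docs[b]]

# Preprocessing: build an inverted index (entity -> documents) once.
# At query time we only pair up documents listed under the same entity.
index = defaultdict(set)
for doc, entities in docs.items():
    for e in entities:
        index[e].add(doc)

indexed = sorted({tuple(sorted(pair))
                  for members in index.values()
                  for pair in combinations(sorted(members), 2)})

print(naive)    # [('d1', 'd2')]
print(indexed)  # [('d1', 'd2')]
```

Both approaches return the same answer; the difference is that the index is built once, ahead of time, which is exactly where anticipating user queries matters: you can only precompute the relations you expect to be asked about.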