Digital cameras have changed our habits significantly. In the early days, we waited for 'Kodak moments' and used our cameras thoughtfully. Now we use our cameras all the time and take photos freely; photos are becoming our inexpensive secondary memory for preserving our experiences.
A major problem has been what to do with all the photos we take. People take so many photos, but organizing them so they can be retrieved when needed has become a challenging problem. Flickr, iPhoto, Picasa, and similar tools are good starts, but they lack the flexibility and power to help consumers organize these experiences easily for revisiting and sharing. Tags are much hyped, but who wants to take the time to tag? And there is no easy way to do it.
In the computer vision, multimedia, and related communities, people have realized that most people organize and retrieve photos based on events (time and location). It would be great if automatic techniques could be developed to assign tags to photos when they are loaded onto a computer or a web site. Some research has started in this area. Yesterday Bo Gong defended his doctoral thesis on this topic in the School of Information and Computer Sciences. (Disclosure: I am his advisor.) This work shows how one can take concrete steps in this direction using context information coming from the EXIF data that modern cameras store with each picture. It is early work, but a strong indicator of things that could be done.
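To make the event idea concrete, here is a minimal sketch of grouping photos into events by their capture times. It assumes the timestamps have already been extracted from EXIF (e.g. the DateTimeOriginal field), and the two-hour gap threshold is an illustrative assumption of mine, not a value from the thesis.

```python
from datetime import datetime, timedelta

def group_into_events(photos, gap=timedelta(hours=2)):
    """Group (filename, timestamp) pairs into events.

    Photos are sorted by timestamp; a new event starts whenever the
    time gap to the previous photo exceeds `gap`. The 2-hour default
    is a hypothetical threshold chosen for illustration.
    """
    ordered = sorted(photos, key=lambda p: p[1])
    events = []
    for name, ts in ordered:
        if events and ts - events[-1][-1][1] <= gap:
            events[-1].append((name, ts))   # same event as previous photo
        else:
            events.append([(name, ts)])     # start a new event
    return events

photos = [
    ("beach1.jpg", datetime(2006, 7, 4, 10, 0)),
    ("beach2.jpg", datetime(2006, 7, 4, 10, 20)),
    ("dinner1.jpg", datetime(2006, 7, 4, 19, 5)),
]
events = group_into_events(photos)
# the two morning beach shots form one event, the evening dinner shot another
```

A real system would combine such temporal clustering with GPS coordinates from EXIF and visual content, but even this simple time-gap heuristic captures much of how people mentally segment their photo collections.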
A good thing about such research is that it addresses a pain point that technology itself has created in the last few years.