This paper describes research in the domain of visual lifelogging, whereby individuals capture much of their lives using digital cameras. Potential applications of lifelogging include reviewing tourist trips, memory aids, and learning assistants. The SenseCam, developed by Microsoft Research in Cambridge, UK, is a small wearable device which incorporates a digital camera and onboard sensors (motion, ambient temperature, light level, and passive infrared to detect the presence of people).
There exist a number of challenges in managing the vast quantities of data generated by lifelogging devices such as the SenseCam. Our work concentrates on the following areas within visual lifelogging: segmenting sequences of images into events (e.g. having breakfast, attending a meeting); retrieving similar events (what other times was I at the park?); determining the most important events (meeting an old friend is more important than breakfast); selecting the ideal keyframe to summarise an event; and augmenting lifelog events with images taken by millions of users on "Web 2.0" websites (show me other pictures of the Statue of Liberty to augment my own lifelog images).
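The first of these tasks, event segmentation, can be illustrated with a minimal sketch. This is not the method used in the work described here; it simply assumes each image has been reduced to a feature vector, and starts a new event whenever the dissimilarity between consecutive images exceeds a threshold (in practice, visual and onboard-sensor cues would both contribute).

```python
from math import sqrt

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def segment_events(features, threshold):
    """Split a chronological list of image feature vectors into events.

    A new event starts whenever the distance between consecutive
    images exceeds `threshold` -- a hypothetical stand-in for a
    real boundary detector combining visual and sensor cues.
    Returns a list of events, each a list of image indices.
    """
    if not features:
        return []
    events = [[0]]
    for i in range(1, len(features)):
        if euclidean(features[i - 1], features[i]) > threshold:
            events.append([i])    # large jump: start a new event
        else:
            events[-1].append(i)  # similar image: same event continues
    return events

# Toy example: two clusters of similar "images"
frames = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
print(segment_events(frames, threshold=1.0))  # → [[0, 1], [2, 3]]
```

Real systems typically smooth such boundary decisions over a window of images rather than comparing single adjacent frames, since a single blurred or occluded image would otherwise trigger a spurious event boundary.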