
    Experiencing SenseCam: a case study interview exploring seven years living with a wearable camera

    This paper presents the findings from an interview with CG, an individual who has worn an automated camera, the SenseCam, every day for the past seven years. Of interest to the study were the participant’s day-to-day experiences of wearing the camera and whether these had changed since he first wore the camera. The findings outline the effect that wearing the camera has on his self-identity, his relationships and his interactions with people in public. Issues relating to the capture, transfer and retrieval of lifelog images are also identified. These experiences inform us of the long-term effects of digital life capture and how lifelogging could progress in the future.

    Multimodal segmentation of lifelog data

    A personal lifelog of visual and audio information can be a very helpful human memory augmentation tool. The SenseCam, a passive wearable camera, used in conjunction with an iRiver MP3 audio recorder, will capture over 20,000 images and 100 hours of audio per week. Used constantly, this soon builds into a substantial collection of personal data. To gain real value from this collection it is important to automatically segment the data into meaningful units or activities. This paper investigates the optimal combination of data sources for segmenting personal data into such activities. Five data sources were logged and processed: image processing on captured SenseCam images; audio processing on captured iRiver audio data; and processing of the temperature, white light level and accelerometer sensors onboard the SenseCam device. The results indicate that a combination of the image, light and accelerometer sensor data segments our collection of personal data better than a combination of all five data sources. The accelerometer sensor is good for detecting when the user moves to a new location, while the image and light sensors are good for detecting changes in wearer activity within the same location, as well as detecting when the wearer socially interacts with others.
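
    As a rough illustration of how this kind of sensor fusion can work, the sketch below detects event boundaries by fusing normalised change signals from three sources. It assumes per-minute streams have already been extracted and aligned; the signal names, fusion weights and threshold are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def zscore(x):
        """Normalise a signal so the three sources are comparable."""
        return (x - x.mean()) / (x.std() + 1e-9)

    def segment(image_diff, light, accel, weights=(0.5, 0.25, 0.25), threshold=1.5):
        """Fuse change signals from three sensor streams and return the
        sample indices (e.g. minutes) at which an event boundary occurs."""
        # Magnitude of change in each stream between consecutive samples.
        changes = [np.abs(np.diff(zscore(np.asarray(s, dtype=float))))
                   for s in (image_diff, light, accel)]
        fused = sum(w * c for w, c in zip(weights, changes))
        # A boundary is declared wherever the fused change score spikes.
        return np.flatnonzero(fused > threshold) + 1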

    Organising and structuring a visual diary using visual interest point detectors

    As wearable cameras become more popular, researchers are increasingly focusing on novel applications to manage the large volume of data these devices produce. One such application is the construction of a Visual Diary from an individual’s photographs. Microsoft’s SenseCam, a device designed to passively record a Visual Diary and cover a typical day of the user wearing the camera, is an example of one such device. The vast quantity of images generated by these devices means that the management and organisation of these collections is not a trivial matter. We believe wearable cameras, such as SenseCam, will become more popular in the future and the management of the volume of data generated by these devices is a key issue. Although there is a significant volume of work in the literature in the object detection and recognition and scene classification fields, there is little work in the area of setting detection. Furthermore, few authors have examined the issues involved in analysing extremely large image collections (like a Visual Diary) gathered over a long period of time. An algorithm developed for setting detection should be capable of clustering images captured at the same real world locations (e.g. in the dining room at home, in front of the computer in the office, in the park, etc.). This requires the selection and implementation of suitable methods to identify visually similar backgrounds in images using their visual features. We present a number of approaches to setting detection based on the extraction of visual interest point detectors from the images. We also analyse the performance of two of the most popular descriptors - Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). We present an implementation of a Visual Diary application and evaluate its performance via a series of user experiments. Finally, we also outline some techniques to allow the Visual Diary to automatically detect new settings, to scale as the image collection continues to grow substantially over time, and to allow the user to generate a personalised summary of their data.
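
    As a sketch of the underlying matching step, the snippet below scores the visual similarity of two lifelog images by counting SIFT interest-point matches; images from the same setting should share many matched background features. It uses OpenCV; the ratio-test threshold and the use of a raw match count as the similarity score are illustrative choices, not the thesis implementation.

    import cv2

    def sift_similarity(path_a, path_b, ratio=0.75):
        """Count ratio-test SIFT matches between two images; higher
        counts suggest the same real-world setting."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        _, desc_a = sift.detectAndCompute(img_a, None)
        _, desc_b = sift.detectAndCompute(img_b, None)
        if desc_a is None or desc_b is None:
            return 0
        # k-nearest-neighbour matching with Lowe's ratio test to
        # filter out ambiguous correspondences.
        pairs = cv2.BFMatcher().knnMatch(desc_a, desc_b, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)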

    An interactive lifelog search engine for LSC2018

    In this work, we describe an interactive lifelog search engine developed for the LSC 2018 search challenge at ACM ICMR 2018. The paper introduces the four-step process required to support lifelog search engines and describes the source data for the search engine as well as the approach to ranking chosen for the interactive search engine. Finally, the interface used is introduced before we highlight the limits of the current prototype and suggest opportunities for future work.
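
    The abstract does not detail the ranking function, but a common baseline for concept-based lifelog retrieval is TF-IDF weighting over the concept annotations of each image. The sketch below is a hedged stand-in under that assumption, not the engine actually submitted to LSC 2018.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_images(image_concepts, query, top_k=10):
        """image_concepts: dict mapping image id -> space-separated concept
        labels. Returns the top_k image ids most similar to the text query."""
        ids = list(image_concepts)
        vectorizer = TfidfVectorizer()
        doc_matrix = vectorizer.fit_transform(image_concepts[i] for i in ids)
        query_vec = vectorizer.transform([query])
        # Cosine similarity between the query and every annotated image.
        scores = cosine_similarity(query_vec, doc_matrix).ravel()
        ranked = sorted(zip(ids, scores), key=lambda p: p[1], reverse=True)
        return ranked[:top_k]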

    Narcissus to a Man: Lifelogging, Technology and the Normativity of Truth

    The growth of the practice of lifelogging, exploiting the capabilities provided by the exponential increase in computer storage, and using technologies such as SenseCam as well as location-based services, Web 2.0, social networking and photo-sharing sites, has led to a growing sense of unease, articulated in books such as Mayer-Schönberger's Delete, that the semi-permanent storage of memories could lead to problematic social consequences. This talk examines the arguments against lifelogging and storage, and argues that they seem less worrying when placed in the context of a wider debate about the nature of mind and memory and their relationship to our environment and the technology we use.

    Analysing privacy in visual lifelogging

    Visual lifelogging enables a user, the lifelogger, to passively capture images from a first-person perspective and ultimately create a visual diary encoding every possible aspect of her life in unprecedented detail. In recent years, it has gained popularity among different groups of users. However, the possible ubiquitous presence of lifelogging devices, particularly in private spheres, has raised serious concerns about personal privacy. In this article, we present a thorough discussion of privacy with respect to visual lifelogging. We re-adjust the existing definition of lifelogging to reflect different aspects of privacy and introduce a first-ever privacy threat model identifying several threats with respect to visual lifelogging. We also show how the existing privacy guidelines and approaches are inadequate to mitigate the identified threats. Finally, we outline a set of requirements and guidelines that can be used to mitigate the identified threats while designing and developing a privacy-preserving framework for visual lifelogging.

    The design of an intergenerational lifelog browser to support sharing within family groups


    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person’s daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as that taken from wearable cameras. Such wearable cameras generate an archive of a person’s day from a first-person viewpoint, but one problem with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis segments each day’s lifelog data into discrete and non-overlapping events corresponding to activities in the wearer’s day. To manage lifelog data at an event level, we define a set of concepts using an ontology appropriate to the wearer, automatically detect these concepts in each event, and then semantically enrich the detected lifelog events, with the detected concepts acting as an index into the events. Once this enrichment is complete we can use the lifelog to support semantic search for everyday media management, as a memory aid, or as part of medical analysis of the activities of daily living (ADL), and so on. In the thesis, we address the problem of how to select the concepts to be used for indexing events and propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging. We then apply activity detection to classify everyday activities, employing the selected concepts as high-level semantic features. Finally, the activity is modeled by multi-context representations and enriched using Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
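
    As an illustration of the final enrichment step, the sketch below represents one detected event as RDF using rdflib, with detected concepts attached so they can act as an index into the event. The namespace and property names are invented for illustration and are not the ontology used in the thesis.

    from rdflib import Graph, Literal, Namespace, RDF

    # Hypothetical lifelog namespace; not the thesis ontology.
    LL = Namespace("http://example.org/lifelog#")

    def enrich_event(event_id, activity, concepts, start, end):
        """Build an RDF description of a segmented, annotated event."""
        g = Graph()
        event = LL[event_id]
        g.add((event, RDF.type, LL.Event))
        g.add((event, LL.activity, Literal(activity)))
        g.add((event, LL.startTime, Literal(start)))
        g.add((event, LL.endTime, Literal(end)))
        for concept in concepts:
            # Detected concepts become the index into the event.
            g.add((event, LL.hasConcept, LL[concept]))
        return g

    g = enrich_event("event_042", "eating", ["food", "indoor", "table"],
                     "2012-05-01T12:30:00", "2012-05-01T13:05:00")
    print(g.serialize(format="turtle"))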

    VIMES: A Wearable Memory Assistance System for Automatic Information Retrieval

    The advancement of artificial intelligence and wearable computing is triggering radical innovation in cognitive applications. In this work, we propose VIMES, an augmented reality-based memory assistance system that helps recall declarative memory, such as whom the user meets and what they chat about. Through a collaborative method with 20 participants, we design VIMES, a system that runs on smartglasses, takes first-person audio and video as input, and extracts personal profiles and event information to display on the embedded display or a smartphone. We perform an extensive evaluation with 50 participants to show the effectiveness of VIMES for memory recall. VIMES outperforms (90% memory accuracy) traditional methods such as self-recall (34%) while offering the best memory experience (Vividness, Coherence, and Visual Perspective all score over 4/5). The user study results show that most participants find VIMES useful (3.75/5) and easy to use (3.46/5).
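
    As a rough sketch of the kind of person-recognition step such a system needs, the snippet below matches faces in a first-person video frame against an enrolled profile store using the open-source face_recognition library. The profile store and all names are illustrative assumptions; the paper does not specify this implementation.

    import face_recognition

    # Hypothetical profile store: encodings of enrolled people plus the
    # profile information the display would surface.
    known_encodings = []   # list of 128-d face encodings
    known_profiles = []    # parallel list of dicts, e.g. {"name": ...}

    def profiles_in_frame(frame):
        """Return profiles of enrolled people recognised in a video frame
        (an RGB numpy array), ready to render on the glasses' display."""
        found = []
        for encoding in face_recognition.face_encodings(frame):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            if True in matches:
                found.append(known_profiles[matches.index(True)])
        return found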