78 research outputs found

    Investigating older and younger peoples’ motivations for lifelogging with wearable cameras

    People have a natural tendency to collect things about themselves, their experiences and their shared experiences with people important to them, especially family. Similar to traditional objects such as photographs, lifelogs have been shown to support reminiscence. A lifelog is a digital archive of a person’s experiences and activities, and lifelog devices such as wearable cameras can automatically and continuously record events throughout a whole day. We were interested in investigating what would motivate people to lifelog. Given the importance of shared reminiscence between family members, we focused our study on comparing shared and personal motivations across ten older and ten younger family members. We found that both older and younger adults were more likely to lifelog for the purposes of information sharing, and that reviewing lifelog images supported family reminiscence, reflection and story-telling. Based on these findings, recommendations are made for the design of a novel intergenerational family lifelog system.

    LifeLogging: personal big data

    We have recently observed a convergence of technologies fostering the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advancements in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified-self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging, covering its research history, current technologies, and applications. Thus far, lifelogging research has focused predominantly on visual lifelogging in order to capture the details of life activities, hence we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist’s perspective on lifelogging and the quantified self.

    Using visual lifelogs to automatically characterise everyday activities

    Visual lifelogging is the term used to describe recording our everyday lives using wearable cameras, for applications which are personal to us and do not involve sharing our recorded data. Current applications of visual lifelogging are built around remembrance or searching for specific events from the past. The purpose of the work reported here is to extend this to allow us to characterise and measure the occurrence of everyday activities of the wearer, and in so doing to gain insights into the wearer's everyday behaviour. Our method is to capture everyday activities using a wearable camera called SenseCam, and to use an algorithm we have developed which indexes lifelog images by the occurrence of basic semantic concepts. We then use data reduction techniques to automatically generate a profile of the wearer's everyday behaviour and activities. Our algorithm has been evaluated on a large set of concepts investigated from 13 users in a user experiment, and for a group of 16 popular everyday activities we achieve an average F-score of 0.90. We conclude that the technique we have presented for unobtrusively and ambiently characterising everyday behaviour and activities across individuals is of sufficient accuracy to be usable in a range of applications.
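
    The indexing and evaluation steps described above can be pictured as a concept-frequency profile plus a standard F-score check. This is a minimal sketch, not the authors' SenseCam pipeline; the function names and data shapes are assumptions made for illustration.

```python
from collections import Counter

def concept_profile(image_concepts):
    """Normalised frequency profile of semantic concepts detected
    across a set of lifelog images (one concept list per image)."""
    counts = Counter(c for concepts in image_concepts for c in concepts)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def f_score(predicted, actual):
    """Standard F1 score between predicted and ground-truth label sets."""
    tp = len(predicted & actual)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(actual)
    return 2 * precision * recall / (precision + recall)
```

    An average F-score of 0.90, as reported, would correspond to averaging `f_score` over the 16 activity classes.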

    Experiencing SenseCam: a case study interview exploring seven years living with a wearable camera

    This paper presents the findings from an interview with CG, an individual who has worn an automated camera, the SenseCam, every day for the past seven years. Of interest to the study were the participant’s day-to-day experiences of wearing the camera and whether these had changed since he first wore it. The findings outline the effect that wearing the camera has on his self-identity, relationships and interactions with people in public. Issues relating to the capture, transfer and retrieval of lifelog images are also identified. These experiences inform us of the long-term effects of digital life capture and how lifelogging could progress in the future.

    VAISL: Visual-aware identification of semantic locations in lifelog

    Organising and preprocessing are crucial steps before analysis can be performed on lifelogs. This paper presents a method for preprocessing, enriching, and segmenting lifelogs based on GPS trajectories and images captured by wearable cameras. The proposed method consists of four components: data cleaning, stop/trip point classification, post-processing, and event characterisation. The novelty of this paper lies in the incorporation of a visual module (using a pretrained CLIP model) to improve outlier detection, correct classification errors, and identify each event’s movement mode or location name. This visual component is capable of addressing imprecise boundaries in GPS trajectories and the partitioning of clusters due to data drift. The results are encouraging, which further emphasises the importance of visual analytics for organising lifelog data.
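
    The stop/trip point classification component could, in its simplest form, threshold the instantaneous speed between consecutive GPS fixes. The sketch below is a hypothetical baseline for that step only; VAISL's actual method additionally uses the CLIP-based visual module described above, which is not reproduced here, and the threshold value is an assumption.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def classify_points(track, speed_threshold=1.0):
    """Label each GPS fix (lat, lon, unix_time) as 'stop' or 'trip'
    from the instantaneous speed (m/s) to the next fix."""
    if len(track) < 2:
        return ["stop"] * len(track)
    labels = []
    for p, q in zip(track, track[1:]):
        dt = q[2] - p[2]
        speed = haversine_m(p[:2], q[:2]) / dt if dt > 0 else 0.0
        labels.append("stop" if speed < speed_threshold else "trip")
    labels.append(labels[-1])  # last fix inherits the final segment label
    return labels
```

    Post-processing would then merge runs of identical labels into stop and trip events before characterisation.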

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person’s daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as that taken from wearable cameras. Such wearable cameras generate an archive of a person’s day from a first-person viewpoint, but one of the problems with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis segments each day’s lifelog data into discrete, non-overlapping events corresponding to activities in the wearer’s day. To manage lifelog data at an event level, we define a set of concepts using an ontology appropriate to the wearer, automatically detect these concepts in events, and then semantically enrich each detected event so that the concepts form an index into the events. Once this enrichment is complete, we can use the lifelog to support semantic search for everyday media management, as a memory aid, or as part of medical analysis of the activities of daily living (ADL), and so on. In the thesis, we address the problem of how to select the concepts to be used for indexing events and propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging. We then apply activity detection to classify everyday activities, employing the selected concepts as high-level semantic features. Finally, the activity is modeled by multi-context representations and enriched with Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
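
    Concept selection for event indexing might, in rough outline, discard concepts that occur in too few or too many events, since neither end of the distribution is discriminative. The sketch below is an illustrative stand-in, not the thesis's density-based algorithm; the thresholds and function name are assumptions.

```python
from collections import Counter

def select_concepts(event_concepts, min_df=0.05, max_df=0.8):
    """Keep concepts whose fraction of events (document frequency)
    lies strictly between min_df and max_df: very rare concepts are
    likely detector noise, near-ubiquitous ones carry little
    discriminative information for indexing."""
    n = len(event_concepts)
    df = Counter(c for event in event_concepts for c in set(event))
    return {c for c, k in df.items() if min_df < k / n < max_df}
```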

    Wearable Computing for Health and Fitness: Exploring the Relationship between Data and Human Behaviour

    Health and fitness wearable technology has recently advanced, making it easier for an individual to monitor their behaviours. Typically, self-generated data interacts with the user to motivate positive behaviour change, but issues arise when relating this to the long-term use of wearable devices. Previous studies within this area are discussed. We also consider a new approach where data is used to support rather than motivate, through monitoring and logging that encourage reflection. Based on the issues highlighted, we then make recommendations on the direction in which future work could be most beneficial.

    Ethics of lifelog technology

    In a lifelog, data from different digital sources are combined and processed to form a unified multimedia archive containing information about the quotidian activities of an individual. This dissertation aims to contribute to a responsible development of lifelog technology used by members of the general public for private reasons. Lifelog technology can benefit, but also harm, lifeloggers and their social environment. The guiding idea behind this dissertation is that if the ethical challenges can be met and the opportunities realised, the conditions will be optimised for a responsible development and application of the technology. To achieve this, it is important to reflect on these concerns at an early stage of development, before the existing rudimentary forms of lifelogs develop into more sophisticated devices with broad societal application. For this research, a normative framework based on prima facie principles is used. Lifelog technology in its current form is a relatively novel invention and a consensus about its definition is still missing; therefore, the author aims to clarify the characteristics of lifelog technology. Next, the ethical challenges and opportunities of lifelogs are analysed as they have been discussed in the scholarly literature on the ethics of lifelog technology. Against this backdrop, ethical challenges and opportunities are identified and elaborated. The normative analysis concentrates on two areas of concern, namely (1) the ethical challenges and opportunities that result from the use of lifelog technology, and (2) the conditions under which one becomes a lifelogger. For the first, three sets of key issues are discussed, namely issues to do with (a) privacy, (b) autonomy, and (c) beneficence. For the second, one key set of issues is examined, namely issues to do with autonomy. The discussion of each set of issues concludes with recommendations designed to tackle the challenges and realise the opportunities.

    Lifelog access modelling using MemoryMesh

    Recently, we have observed a convergence of technologies that has led to the emergence of lifelogging as a mainstream personal data application. Lifelogging will become ubiquitous in the near future, not just for memory enhancement and health management, but also in various other domains. While there are many devices available for gathering massive amounts of lifelogging data, there are still challenges in modelling large volumes of multi-modal lifelog data. In this thesis, we explore and address the problem of how to model lifelogs in order to make them more accessible to users from the perspectives of collection, organization and visualization. We subdivide the problem into the following steps: 1. Lifelog activity recognition: we use multiple sensor data sources, ranging from accelerometer data collected by mobile phones to images captured by wearable cameras, to analyse various daily life activities, and we propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging sensory data. 2. Visual discovery of lifelog images: most of the lifelog information we capture every day is in the form of images, so images contain significant information about our lives; we conduct experiments on visual content analysis of lifelog images, covering both image content and image metadata. 3. Linkage analysis of lifelogs: by exploring linkage analysis of lifelog data, we can connect all lifelog images using linkage models into a concept called the MemoryMesh. The thesis includes experimental evaluations using real-life data collected from multiple users and shows the performance of our algorithms in detecting the semantics of daily-life concepts and their effectiveness in activity recognition and lifelog retrieval.
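
    The linkage analysis behind the MemoryMesh could be pictured as a graph in which two lifelog images are connected whenever they share semantic concepts. The sketch below is a simplified illustration under that assumption, not the thesis's actual linkage model; the function name and threshold are hypothetical.

```python
from itertools import combinations

def build_mesh(image_concepts, min_shared=1):
    """Adjacency dict linking every pair of lifelog images that
    share at least `min_shared` semantic concepts."""
    mesh = {name: set() for name in image_concepts}
    for (a, ca), (b, cb) in combinations(image_concepts.items(), 2):
        if len(set(ca) & set(cb)) >= min_shared:
            mesh[a].add(b)
            mesh[b].add(a)
    return mesh
```

    Retrieval can then traverse the mesh from any matched image to related moments, rather than treating each image in isolation.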

    Periodicity detection and its application in lifelog data

    Wearable sensors are attracting attention not only in industry but also in the consumer market. We can now acquire sensor data from different types of health-tracking devices such as smart watches, smart bands and lifelog cameras, and most smart phones are capable of tracking and logging information using built-in sensors. As data is generated and collected from various sources constantly, researchers have focused on interpreting and understanding the semantics of this longitudinal multi-modal data. One challenge is the fusion of multi-modal data to achieve good performance on tasks such as activity recognition, event detection and event segmentation. The classical approach to processing the data generated by wearable sensors has three main parts: 1) event segmentation, 2) event recognition, and 3) event retrieval. Many papers have been published in each of these three fields. This thesis focuses on the longitudinal aspect of the data from wearable sensors, rather than on data over a short period of time. The following are several key research questions in the thesis. Does longitudinal sensor data have unique features that can distinguish the subject generating the data from other subjects? In other words, from the longitudinal perspective, does the data from different subjects share so much common structure, similarity and so many identical patterns that it is difficult to identify a subject from the data? If this is the case, what are those common patterns? If we are able to eliminate those similarities, does the data show more specific features that we can use to model the data series and predict future values? If there are repeating patterns in longitudinal data, we can use different methods to compute the periodicity of the recurring patterns and, furthermore, to identify and extract those patterns. We can then compare local data over a short time period with more global patterns in order to show the regularity of the local data. Case studies are included in the thesis to show the value of longitudinal lifelog data in relating health conditions to training performance.
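
    The periodicity of recurring patterns discussed above can be estimated in a simple, generic way via sample autocorrelation: the lag at which a series correlates most strongly with itself is a candidate period. This is an illustrative sketch, not the specific method used in the thesis.

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0:
        return 0.0
    return sum((x[i] - mean) * (x[i + lag] - mean)
               for i in range(n - lag)) / var

def dominant_period(x, max_lag=None):
    """Lag (> 0) with the highest autocorrelation: a simple
    estimate of the dominant period of a longitudinal series."""
    max_lag = max_lag or len(x) // 2
    return max(range(1, max_lag + 1),
               key=lambda lag: autocorrelation(x, lag))
```

    Applied to, say, a daily step-count series, a dominant period of 7 would indicate a weekly routine against which short-term local data can be compared.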