
    On the Place of Text Data in Lifelogs, and Text Analysis via Semantic Facets

    Current research on lifelog data has not paid enough attention to the analysis of cognitive activities in comparison to physical activities. We argue that, looking to the future, wearable devices will become cheaper and more prevalent, and textual data will play a more significant role. Data captured by lifelogging devices will increasingly include speech and text, which are potentially useful in the analysis of intellectual activities. By analysing what a person hears, reads, and sees, we should be able to measure the extent of cognitive activity a learner devotes to a given topic or subject. Text-based lifelog records can benefit from semantic analysis tools developed for natural language processing. We show how semantic analysis of such text data can be achieved through the use of taxonomic subject facets, and how these facets might be useful in quantifying the cognitive activity devoted to various topics in a person's day. We are currently developing a method to automatically create taxonomic topic vocabularies that can be applied to this detection of intellectual activity.
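The facet-based quantification described above can be illustrated with a minimal sketch. The facet names and term lists below are invented for illustration; they are not the taxonomic vocabularies the paper develops, and real facet matching would need more than bag-of-words lookup.

```python
from collections import Counter

# Hypothetical facet vocabulary: subject facet -> indicative terms.
# Both facets and terms are illustrative assumptions only.
FACETS = {
    "mathematics": {"equation", "theorem", "algebra", "integral"},
    "biology": {"cell", "protein", "enzyme", "genome"},
}

def facet_counts(captured_text: str) -> Counter:
    """Tally how many captured words fall under each subject facet."""
    tokens = captured_text.lower().split()
    counts = Counter()
    for facet, terms in FACETS.items():
        counts[facet] = sum(1 for t in tokens if t in terms)
    return counts

day_log = "reviewed the integral and a theorem then read about protein folding"
print(facet_counts(day_log))
# Counter({'mathematics': 2, 'biology': 1})
```

Summing such counts over a day's captured text would give a crude per-topic measure of cognitive activity of the kind the abstract proposes.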

    LifeLogging: personal big data

    We have recently observed a convergence of technologies to foster the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advancements in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging, covering its research history, current technologies, and applications. Thus far, lifelogging research has focused predominantly on visual lifelogging in order to capture details of life activities, hence we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist's perspective on lifelogging and the quantified self.

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person's daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centres on visual lifelog data, such as that taken from wearable cameras. Such wearable cameras generate an archive of a person's day from a first-person viewpoint, but one of the problems with this is the sheer volume of information that can be generated. In order to make this potentially very large volume of information more manageable, our analysis of this data is based on segmenting each day's lifelog data into discrete and non-overlapping events corresponding to activities in the wearer's day. To manage lifelog data at an event level, we define a set of concepts using an ontology appropriate to the wearer, automatically detect these concepts in events, and then semantically enrich each detected event to form an index into the lifelog. Once this enrichment is complete we can use the lifelog to support semantic search for everyday media management, as a memory aid, or as part of medical analysis on the activities of daily living (ADL), and so on. In the thesis, we address the problem of how to select the concepts to be used for indexing events, and we propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging. We then apply activity detection to classify everyday activities by employing the selected concepts as high-level semantic features. Finally, the activity is modelled by multi-context representations and enriched by Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
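The event segmentation step described above can be sketched as a boundary detector over per-frame visual features: when consecutive frames become dissimilar, a new event starts. This is a toy baseline under assumed cosine similarity and a made-up threshold, not the thesis's actual segmentation algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def segment_events(frame_features, threshold=0.5):
    """Split a day's image stream into non-overlapping events,
    starting a new event wherever similarity drops below threshold."""
    events, current = [], [frame_features[0]]
    for prev, cur in zip(frame_features, frame_features[1:]):
        if cosine(prev, cur) < threshold:
            events.append(current)
            current = []
        current.append(cur)
    events.append(current)
    return events

# Two visually distinct activities yield two events.
day = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]]
print(len(segment_events(day)))  # 2
```

Each resulting event could then be indexed by the ontology concepts detected within it, as the thesis proposes.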

    What are the limits to time series based recognition of semantic concepts?

    Most concept recognition in visual multimedia is based on relatively simple concepts, things which are present in the image or video. These usually correspond to objects which can be identified in images or individual frames. Yet there is also a need to recognise semantic concepts which have a temporal aspect corresponding to activities or complex events. These require some form of time series for recognition and also require some individual concepts to be detected so as to utilise their time-varying features, such as co-occurrence and re-occurrence patterns. While results are reported in the literature of using concept detections which are relatively specific and static, there are research questions which remain unanswered. What concept detection accuracies are satisfactory for time series recognition? Can recognition methods perform equally well across various concept detection performances? What affecting factors need to be taken into account when building concept-based high-level event/activity recognition? In this paper, we conducted experiments to investigate these questions. Results show that though improving concept detection accuracies can enhance the recognition of time series based concepts, they do not need to be very accurate in order to characterize the dynamic evolution of time series if appropriate methods are used. Experimental results also point out the importance of concept selection for time series recognition, which is usually ignored in the current literature.
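The co-occurrence patterns mentioned above can be made concrete with a small sketch: given noisy per-frame concept detection scores, count how often pairs of concepts are detected together. The concept names, scores, and threshold are illustrative assumptions, not the paper's experimental setup.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_features(frames, threshold=0.5):
    """Count how often pairs of concepts are detected together.
    `frames` is a list of {concept: detection_score} dicts."""
    pairs = Counter()
    for scores in frames:
        present = sorted(c for c, s in scores.items() if s >= threshold)
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs

frames = [
    {"person": 0.9, "screen": 0.8, "cup": 0.2},
    {"person": 0.7, "screen": 0.6, "cup": 0.9},
]
print(cooccurrence_features(frames))
# Counter({('person', 'screen'): 2, ('cup', 'person'): 1, ('cup', 'screen'): 1})
```

Note that such pair counts degrade gracefully as detection scores become noisier, which is consistent with the paper's finding that concept detectors need not be highly accurate for time series recognition.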

    Continuous Health Interface Event Retrieval

    Knowing the state of our health at every moment in time is critical for advances in health science. Using data obtained outside an episodic clinical setting is the first step towards building a continuous health estimation system. In this paper, we explore a system that allows users to combine events and data streams from different sources to retrieve complex biological events, such as cardiovascular volume overload. These complex events, which have been explored in the biomedical literature and which we call interface events, have a direct causal impact on relevant biological systems. They are the interface through which lifestyle events influence our health. We retrieve the interface events from existing events and data streams by encoding domain knowledge using an event operator language.

    Comment: ACM International Conference on Multimedia Retrieval 2020 (ICMR 2020), held in Dublin, Ireland, June 8-11, 2020
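The idea of composing lifestyle event streams into an interface event can be sketched with a toy operator. The event names, the proximity rule, and the health interpretation below are all invented for illustration; the paper's actual event operator language is richer than this.

```python
def within_gap(event_a, event_b, max_gap_hours=6):
    """Toy AND-with-proximity operator: fire when two contributing
    lifestyle events occur within max_gap_hours of each other.
    The rule and threshold are illustrative assumptions."""
    return abs(event_a["hour"] - event_b["hour"]) <= max_gap_hours

# Hypothetical lifestyle events detected from separate data streams.
high_sodium_meal = {"name": "high_sodium_meal", "hour": 13}
low_activity = {"name": "sedentary_afternoon", "hour": 15}

if within_gap(high_sodium_meal, low_activity):
    print("interface event candidate: combined lifestyle-event window")
```

A real operator language would let domain experts chain many such operators over heterogeneous streams rather than hard-coding one rule.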

    Providing effective memory retrieval cues through automatic structuring and augmentation of a lifelog of images

    Lifelogging is an area of research concerned with digitally capturing many aspects of an individual's life, and within this rapidly emerging field is the significant challenge of managing images passively captured by an individual of their daily life. Possible applications vary from helping those with neurodegenerative conditions recall events from memory, to the maintenance and augmentation of extensive image collections of a tourist's trips. However, a large lifelog of images can quickly accumulate, with an average of 700,000 images captured each year using a device such as the SenseCam. We address the problem of managing this vast collection of personal images by investigating automatic techniques that:
    1. Identify distinct events within a full day of lifelog images (which typically consists of 2,000 images), e.g. breakfast, working on PC, meeting, etc.
    2. Find events similar to a given event in a person's lifelog, e.g. "show me other events where I was in the park"
    3. Determine those events that are more important or unusual to the user, and select a relevant keyframe image for visual display of an event, e.g. a "meeting" is more interesting to review than "working on PC"
    4. Augment the images from a wearable camera with higher quality images from external "Web 2.0" sources, e.g. find me pictures taken by others of the U2 concert in Croke Park
    In this dissertation we discuss novel techniques to realise each of these facets and how effective they are. The significance of this work is not only of benefit to the lifelogging community, but also to cognitive psychology researchers studying the potential benefits of lifelogging devices to those with neurodegenerative diseases.
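The "find similar events" facet above can be sketched with histogram intersection, a common baseline for comparing events represented as normalised visual-word histograms. The features and values are illustrative; the dissertation's actual similarity measures may differ.

```python
def histogram_intersection(h1, h2):
    """Similarity of two events as normalised feature histograms:
    1.0 means identical distributions, 0.0 means disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Hypothetical 3-bin histograms for a stored "park" event and a query event.
park_event = [0.5, 0.3, 0.2]
query = [0.4, 0.4, 0.2]
print(round(histogram_intersection(park_event, query), 2))  # 0.9
```

Ranking all stored events by this score against a query event would answer requests like "show me other events where I was in the park".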

    Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life with many images, including the places one goes, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

    Multimodal Wearable Intelligence for Dementia Care in Healthcare 4.0: A Survey

    As a new revolution of Ubiquitous Computing and the Internet of Things, multimodal wearable intelligence is rapidly becoming a new research topic in both academic and industrial fields. Owing to the rapid spread of wearable and mobile devices, this technique is evolving healthcare from traditional hub-based systems to more personalised healthcare systems. This trend is well aligned with the recent Healthcare 4.0, a continuous process of transforming the entire healthcare value chain to be preventive, precise, predictive and personalised, with significant benefits to elder care. But realising the utility of multimodal wearable intelligence for elderly care, such as for people with dementia, is significantly challenging considering many issues, such as the shortage of cost-effective wearable sensors, the heterogeneity of connected wearable devices, and the high demand for interoperability. Focusing on these challenges, this paper gives a systematic review of advanced multimodal wearable intelligence technologies for dementia care in Healthcare 4.0. We propose a framework for reviewing current research on wearable intelligence, survey key enabling technologies, major applications, and successful case studies in dementia care, and finally point out future research trends and challenges in Healthcare 4.0.