
    Evaluating access mechanisms for multimodal representations of lifelogs

    Lifelogging, the automatic and ambient capture of daily life activities into a digital archive called a lifelog, is an increasingly popular activity with a wide range of application areas, including medical (memory support), behavioural science (analysis of quality of life), work-related (auto-recording of tasks) and more. In this paper we focus on lifelogging where there is sometimes a need to re-find something from one’s past, recent or distant, in the lifelog. To be effective, a lifelog should be accessible across a variety of access devices. In the work reported here we create eight lifelogging interfaces and evaluate their effectiveness on three access devices (laptop, smartphone and e-book reader) for a search task. Based on tests with 16 users, we identify which of the eight interfaces are most effective for each access device in a known-item search task through the lifelog, both for the lifelog owner and for other searchers. Our results are important in suggesting ways in which personal lifelogs can be most effectively used and accessed.

    Socio-technical lifelogging: deriving design principles for a future proof digital past

    Lifelogging is a technically inspired approach that attempts to address the problem of human forgetting by developing systems that ‘record everything’. Uptake of lifelogging systems has generally been disappointing, however. One reason for this lack of uptake is the absence of design principles for developing digital systems to support memory. Synthesising multiple studies, we identify and evaluate 4 new empirically motivated design principles for lifelogging: Selectivity, Embodiment, Synergy and Reminiscence. We first summarise 4 empirical studies that motivate the principles, then describe the evaluation of 4 novel systems built to embody them. The design principles were generative, leading to the development of new classes of lifelogging system, as well as providing strategic guidance about how those systems should be built. Evaluations suggest support for the Selectivity and Embodiment principles, but more conceptual and technical work is needed to refine the Synergy and Reminiscence principles.

    A lifelogging system supporting multimodal access

    Today, technology allows us to capture our lives digitally, for example by taking pictures, recording videos and sharing experiences over WiFi using smartphones. People’s lifestyles are changing; one example is the shift from traditional memo writing to the digital lifelog. Lifelogging is the process of using digital tools to collect personal data in order to illustrate the user’s daily life (Smith et al., 2011). The availability of smartphones embedded with sensors such as cameras and GPS has encouraged the development of lifelogging. It has also brought new challenges in multi-sensor data collection, large-volume data storage, data analysis and appropriate representation of lifelog data across different devices. This study is designed to address these challenges. A lifelogging system was developed to collect, store, analyse and display data from multiple sensors, i.e. to support multimodal access. In this system, the multi-sensor data (also called data streams) is first transmitted from the smartphone to a server, and only while the phone is charging. On the server side, six contexts are detected, namely personal, time, location, social, activity and environment. Events are then segmented and a related narrative is generated. Finally, lifelog data is presented differently on three widely used devices: the computer, the smartphone and the e-book reader. Lifelogging is likely to become a well-accepted technology in the coming years. Manual logging is not possible for most people and is not feasible in the long term, so automatic lifelogging is needed. This study presents a lifelogging system which can automatically collect multi-sensor data, detect contexts, segment events, generate meaningful narratives and display the appropriate data on different devices based on their unique characteristics. The work in this thesis therefore contributes to automatic lifelogging development and in doing so makes a valuable contribution to the development of the field.
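    To make the server-side pipeline concrete, the following minimal Python sketch illustrates the "detect contexts, then segment events, then narrate" steps described above. The six context names come from the abstract; the Snapshot structure, the rule that a change in any detected context value opens a new event, and the one-line narrative template are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of the server-side "detect contexts, then segment events"
# step. Data layout and segmentation rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

# The six context types named in the abstract.
CONTEXTS = ("personal", "time", "location", "social", "activity", "environment")

@dataclass
class Snapshot:
    timestamp: float            # seconds since epoch
    contexts: Dict[str, str]    # context name -> detected value, e.g. {"location": "office"}

def segment_events(snapshots: List[Snapshot]) -> List[List[Snapshot]]:
    """Group consecutive snapshots into events; a change in any detected
    context value starts a new event (assumed rule)."""
    events: List[List[Snapshot]] = []
    for snap in sorted(snapshots, key=lambda s: s.timestamp):
        if events and snap.contexts == events[-1][-1].contexts:
            events[-1].append(snap)
        else:
            events.append([snap])
    return events

def narrate(event: List[Snapshot]) -> str:
    """One-line narrative for an event (hypothetical template)."""
    c = event[0].contexts
    return f"{c.get('activity', 'something')} at {c.get('location', 'somewhere')}"
```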

    Temporal multimodal video and lifelog retrieval

    The past decades have seen exponential growth in both the consumption and production of data, with multimedia such as images and videos contributing significantly to that growth. The widespread proliferation of smartphones has given everyday users the ability to consume and produce such content easily. As the complexity and diversity of multimedia data have grown, so has the need for more complex retrieval models which address the information needs of users. Finding relevant multimedia content is central in many scenarios, from internet search engines and medical retrieval to querying one’s personal multimedia archive, also called a lifelog. Traditional retrieval models have often focused on queries targeting small units of retrieval, yet users usually remember temporal context and expect results to reflect it. However, there is little research into supporting these information needs in interactive multimedia retrieval. In this thesis, we aim to close this research gap by making several contributions to multimedia retrieval, with a focus on two scenarios: video and lifelog retrieval. We provide a retrieval model for complex information needs with temporal components, including a data model for multimedia retrieval, a query model for complex information needs, and a modular and adaptable query execution model which includes novel algorithms for result fusion. The concepts and models are implemented in vitrivr, an open-source multimodal multimedia retrieval system, which covers all aspects from extraction to query formulation and browsing. vitrivr has proven its usefulness in evaluation campaigns and is now used in two large-scale interdisciplinary research projects. We show the feasibility and effectiveness of our contributions in two ways: firstly, through results from user-centric evaluations which pit different user-system combinations against one another; secondly, through a system-centric evaluation on a new dataset for temporal information needs in video and lifelog retrieval, with which we quantitatively evaluate our models. The results show significant benefits for systems that enable users to specify more complex information needs with temporal components. Participation in interactive retrieval evaluation campaigns over multiple years provides insight into possible future developments and challenges of such campaigns.
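    As a rough illustration of temporal result fusion of the kind the abstract mentions, the sketch below chains per-sub-query result lists into scored sequences in which each matching segment must start within a bounded gap after the previous one. The Hit structure, the gap constraint and the additive scoring are assumptions for illustration; they are not vitrivr's actual algorithms.

```python
# Illustrative temporal result fusion: chain per-sub-query hits into
# time-ordered sequences. Not vitrivr's actual fusion algorithms.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hit:
    segment_id: str
    start: float    # segment start time in seconds
    score: float    # relevance to one sub-query

def temporal_fuse(stages: List[List[Hit]],
                  max_gap: float = 60.0) -> List[Tuple[List[str], float]]:
    """Fuse one result list per sub-query into scored sequences in which
    each hit starts after the previous one, within max_gap seconds
    (assumed constraint); a sequence's score is the sum of its hit scores."""
    if not stages:
        return []
    sequences = [([h.segment_id], h.score, h.start) for h in stages[0]]
    for stage in stages[1:]:
        sequences = [
            (ids + [h.segment_id], score + h.score, h.start)
            for ids, score, last in sequences
            for h in stage
            if last < h.start <= last + max_gap
        ]
    # Best-scoring sequences first; drop the bookkeeping timestamp.
    return sorted(((ids, s) for ids, s, _ in sequences), key=lambda x: -x[1])
```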

    Digital life stories: Semi-automatic (auto)biographies within lifelog collections

    Our life stories enable us to reflect upon and share our personal histories. Through emerging digital technologies, collecting life experiences digitally is increasingly feasible; consequently, so is the potential to create a digital counterpart to our personal narratives. In this work, lifelogging tools are used to collect digital artifacts continuously and passively throughout the day. These include images, documents, emails and webpages accessed, text messages and mobile activity. This range of data, when brought together, is known as a lifelog. Given the complexity, volume and multimodal nature of such collections, there are clearly significant challenges to be addressed in order to achieve coherent and meaningful digital narratives of the events from our life histories. This work investigates the construction of personal digital narratives from lifelog collections. It examines the underlying questions, issues and challenges relating to the construction of personal digital narratives from lifelogs. Fundamentally, it addresses how to organize and transform data sampled from an individual’s day-to-day activities into a coherent narrative account. This enquiry is enabled by three 20-month long-term lifelogs collected by participants, and produces a narrative system which enables the semi-automatic construction of digital stories from lifelog content. Informed by probative studies into current practices of curation, from which a set of fundamental requirements is established, this solution employs a 2-dimensional spatial framework for storytelling. It delivers integrated support for the structuring of lifelog content and its distillation into story form through information retrieval approaches, and we describe and contribute flexible algorithmic approaches to achieve both. This research inquiry also yields qualitative and quantitative insights into such digital narratives and their generation, composition and construction. The opportunities for such personal narrative accounts to enable recollection, reminiscence and reflection for collection owners are established, and their benefit in sharing past personal experiences is outlined. Finally, in a novel investigation with motivated third parties, we demonstrate the opportunities such narrative accounts may offer beyond the collection owner: in personal, societal and cultural explorations, in artistic endeavours, and as a generational heirloom.
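    A minimal sketch of a retrieval-style distillation step, under the assumption that upstream analysis has already scored events for salience: pick the most salient events and render them in time order. The Event structure and the top-k selection rule are illustrative, not the system described in the thesis.

```python
# Illustrative distillation of scored lifelog events into a short story.
# The Event fields and the top-k selection rule are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    start: float      # event start time
    caption: str      # short description produced upstream
    salience: float   # importance score produced upstream

def distil_story(events: List[Event], k: int = 5) -> str:
    """Keep the k most salient events and narrate them in time order."""
    chosen = sorted(sorted(events, key=lambda e: -e.salience)[:k],
                    key=lambda e: e.start)
    return "\n".join(f"- {e.caption}" for e in chosen)
```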

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person’s daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as that taken from wearable cameras. Such cameras generate an archive of a person’s day from a first-person viewpoint, but one of the problems with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis is based on segmenting each day’s lifelog data into discrete and non-overlapping events corresponding to activities in the wearer’s day. To manage lifelog data at an event level, we define a set of concepts using an ontology appropriate to the wearer, automatically detect these concepts within events, and then semantically enrich each detected event so that the concepts act as an index into the events. Once this enrichment is complete, the lifelog can support semantic search for everyday media management, act as a memory aid, or feed into medical analysis of the activities of daily living (ADL). In the thesis, we address the problem of how to select the concepts to be used for indexing events and propose a semantic, density-based algorithm to cope with concept selection issues in lifelogging. We then apply activity detection to classify everyday activities, employing the selected concepts as high-level semantic features. Finally, each activity is modeled by multi-context representations and enriched using Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
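    The following sketch shows one plausible shape of a density-based concept selection step: rank each detected concept by its mean semantic similarity to the other detected concepts and keep the densest. The similarity function is left pluggable (e.g. WordNet- or embedding-based); this illustrates the general idea only, not the algorithm proposed in the thesis.

```python
# Illustrative density-based concept selection: keep the concepts that
# are, on average, most semantically similar to the rest of the set.
from typing import Callable, List

def select_concepts(concepts: List[str],
                    similarity: Callable[[str, str], float],
                    keep: int = 10) -> List[str]:
    """Rank concepts by semantic density (mean pairwise similarity to
    the other detected concepts) and keep the top `keep` of them."""
    def density(c: str) -> float:
        others = [o for o in concepts if o != c]
        return sum(similarity(c, o) for o in others) / max(len(others), 1)
    return sorted(concepts, key=density, reverse=True)[:keep]
```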

    Providing effective memory retrieval cues through automatic structuring and augmentation of a lifelog of images

    Lifelogging is an area of research concerned with capturing many aspects of an individual's life digitally, and within this rapidly emerging field lies the significant challenge of managing images passively captured by an individual throughout their daily life. Possible applications vary from helping those with neurodegenerative conditions recall events from memory, to the maintenance and augmentation of extensive image collections from a tourist's trips. However, a large lifelog of images can amass quickly, with an average of 700,000 images captured each year using a device such as the SenseCam. We address the problem of managing this vast collection of personal images by investigating automatic techniques that:

    1. Identify distinct events within a full day of lifelog images (which typically consists of 2,000 images), e.g. breakfast, working on PC, meeting, etc.
    2. Find events similar to a given event in a person's lifelog, e.g. "show me other events where I was in the park".
    3. Determine those events that are more important or unusual to the user, and select a relevant keyframe image for the visual display of an event, e.g. a "meeting" is more interesting to review than "working on PC".
    4. Augment the images from a wearable camera with higher quality images from external "Web 2.0" sources, e.g. find me pictures taken by others of the U2 concert in Croke Park.

    In this dissertation we discuss novel techniques to realise each of these facets and how effective they are. The significance of this work is of benefit not only to the lifelogging community, but also to cognitive psychology researchers studying the potential benefits of lifelogging devices for those with neurodegenerative diseases.
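    For facet 1, a minimal sketch of boundary detection over a day's image stream: given one feature vector per image, start a new event wherever consecutive images are sufficiently dissimilar. The cosine measure and the fixed threshold are illustrative assumptions, not the dissertation's technique.

```python
# Illustrative event-boundary detection over one day of lifelog images:
# cut wherever consecutive image feature vectors are too dissimilar.
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def event_boundaries(features: List[Sequence[float]],
                     threshold: float = 0.5) -> List[int]:
    """Return indices at which a new event starts: wherever an image is
    sufficiently dissimilar to its predecessor (assumed fixed threshold)."""
    return [i for i in range(1, len(features))
            if cosine(features[i - 1], features[i]) < threshold]
```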

    Lifelog access modelling using MemoryMesh

    Very recently we have observed a convergence of technologies that has led to the emergence of lifelogging as a technology for personal data applications. Lifelogging will become ubiquitous in the near future, not just for memory enhancement and health management, but also in various other domains. While there are many devices available for gathering massive amounts of lifelog data, there are still challenges in modelling large volumes of multi-modal lifelog data. In this thesis, we explore and address the problem of how to model lifelogs in order to make them more accessible to users, from the perspectives of collection, organization and visualization. To subdivide our research targets, we designed and followed these steps:

    1. Lifelog activity recognition. We use multiple sensor data to analyse various daily life activities, ranging from accelerometer data collected by mobile phones to images captured by wearable cameras. We propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging sensory data.
    2. Visual discovery of lifelog images. Most of the lifelog information we capture every day is in the form of images, so images contain significant information about our lives. Here we conduct experiments on visual content analysis of lifelog images, covering both image content and image metadata.
    3. Linkage analysis of lifelogs. By exploring linkage analysis of lifelog data, we can connect all lifelog images using linkage models into a concept called the MemoryMesh.

    The thesis includes experimental evaluations using real-life data collected from multiple users and shows the performance of our algorithms in detecting the semantics of daily-life concepts and their effectiveness in activity recognition and lifelog retrieval.
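    For step 3, the sketch below gives one plausible reading of the linkage idea: connect images that share detected concepts into an adjacency structure. The min_shared rule and the data layout are assumptions for illustration, not the MemoryMesh model itself.

```python
# Illustrative linkage analysis: connect images that share detected
# concepts into an adjacency structure (one reading of the MemoryMesh idea).
from typing import Dict, List, Set

def build_memory_mesh(concepts_by_image: Dict[str, Set[str]],
                      min_shared: int = 2) -> Dict[str, List[str]]:
    """Link every pair of images sharing at least min_shared concepts."""
    mesh: Dict[str, List[str]] = {img: [] for img in concepts_by_image}
    images = list(concepts_by_image)
    for i, a in enumerate(images):
        for b in images[i + 1:]:
            if len(concepts_by_image[a] & concepts_by_image[b]) >= min_shared:
                mesh[a].append(b)
                mesh[b].append(a)
    return mesh
```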

    Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation

    This paper presents an overview of the ImageCLEF 2018 evaluation campaign, an event organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2018. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval, with the aim of providing information access to collections of images in various usage scenarios and domains. In 2018, the 16th edition of ImageCLEF ran three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity and drug resistance from CT (Computed Tomography) volumes of the lung; (3) a LifeLog task (videos, images and other sources) on understanding daily activities and retrieving moments; and (4) a pilot task on visual question answering, where systems are tasked with answering medical questions. The strong participation, with over 100 research groups registering and 31 submitting results for the tasks, shows an increasing interest in this benchmarking campaign.

    Evaluating Information Retrieval and Access Tasks

    This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew into the search engines that provide access to content on the World Wide Web, today’s smartphones that tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline and that evaluation therefore lies at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book, which shows, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.