
    Lifelogging As An Extreme Form of Personal Information Management -- What Lessons To Learn

    Personal data includes the digital footprints that we leave behind as part of our everyday activities, both online and offline in the real world. It includes data we collect ourselves, such as from wearables, as well as the data collected by others about our online behaviour and activities. Sometimes we are able to use the personal data we collect ourselves to examine parts of our lives, but for the most part our personal data is leveraged by third parties, including internet companies, for services such as targeted advertising and recommendations. Lifelogging is an extreme form of personal data gathering, and in this article we present an overview of the tools used to manage access to lifelogs, as demonstrated at the most recent of the annual Lifelog Search Challenge benchmarking workshops, where experimental systems are showcased in live, real-time information-seeking tasks with real users. This overview of the systems' capabilities shows the range of possibilities for accessing our own personal data, which may, in time, become more easily available as consumer-level services.

    An experiment in Interactive Retrieval for the lifelog moment retrieval task at imageCLEFlifelog2020.

    Advances in technology have led to increased use of mobile devices to keep track of individuals' daily activities, a practice known as lifelogging. Lifelogging has raised many research challenges, one of which is how to retrieve a specific moment in response to a user's information need. This paper presents an efficient interactive search engine for large multimodal lifelog data, evaluated in the ImageCLEFlifelog2020 Lifelog Moment Retrieval task (LMRT). The system is a modified version of the Myscéal demonstrator used in the Lifelog Search Challenge 2020, with the addition of visual similarity search and a new method of visualising results. In interactive experimentation, our system achieved an F1@10 score of 0.48 in the official submission, which can be significantly improved by a number of post-processing steps.
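
    The F1@10 reported for LMRT is, in essence, the harmonic mean of precision at 10 and event-cluster recall at 10. A minimal sketch of that computation follows, assuming ground-truth relevance and cluster labels are available; the official ImageCLEF scoring script may differ in detail.

```python
from typing import Dict, List, Set

def f1_at_k(ranked: List[str], relevant: Set[str],
            cluster_of: Dict[str, str], k: int = 10) -> float:
    """Harmonic mean of precision@k and event-cluster recall@k.

    ranked     -- the system's ranked list of image ids
    relevant   -- ground-truth relevant image ids
    cluster_of -- maps each relevant image id to its event cluster id
    (Illustrative only; the official scorer may differ in detail.)
    """
    top = ranked[:k]
    hits = [img for img in top if img in relevant]
    precision = len(hits) / k
    all_clusters = {cluster_of[img] for img in relevant}
    found_clusters = {cluster_of[img] for img in hits}
    cluster_recall = len(found_clusters) / len(all_clusters) if all_clusters else 0.0
    if precision + cluster_recall == 0:
        return 0.0
    return 2 * precision * cluster_recall / (precision + cluster_recall)
```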

    VAISL: Visual-aware identification of semantic locations in lifelog

    Organising and preprocessing are crucial steps for performing analysis on lifelogs. This paper presents a method for preprocessing, enriching, and segmenting lifelogs based on GPS trajectories and images captured by wearable cameras. The proposed method consists of four components: data cleaning, stop/trip point classification, post-processing, and event characterisation. The novelty of this paper lies in the incorporation of a visual module (using a pretrained CLIP model) to improve outlier detection, correct classification errors, and identify each event's movement mode or location name. This visual component is capable of addressing imprecise boundaries in GPS trajectories and the partitioning of clusters due to data drift. The results are encouraging, which further emphasises the importance of visual analytics for organising lifelog data.
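
    To make the pipeline concrete, the sketch below shows a naive speed-threshold stop/trip classifier and a CLIP zero-shot guess at an event's movement mode. The threshold, the prompts, and the use of OpenAI's public clip package are illustrative assumptions, not the choices made in the paper.

```python
import math

import clip
import torch
from PIL import Image

def speed_kmh(p1, p2):
    """Rough speed between two (lat, lon, unix_time) GPS samples."""
    lat1, lon1, t1 = p1
    lat2, lon2, t2 = p2
    # Equirectangular approximation; adequate over short distances.
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    dist_km = 6371.0 * math.hypot(dx, dy)
    hours = max(t2 - t1, 1) / 3600.0
    return dist_km / hours

def label_points(track, stop_threshold_kmh=2.0):
    """Mark each GPS sample in a trajectory as a stop ('S') or trip ('T') point."""
    labels = ['S']
    for prev, cur in zip(track, track[1:]):
        labels.append('S' if speed_kmh(prev, cur) < stop_threshold_kmh else 'T')
    return labels

# CLIP as the visual module: a zero-shot guess at an event's movement mode.
model, preprocess = clip.load("ViT-B/32", device="cpu")  # CPU for simplicity
modes = ["walking", "driving a car", "riding a bus", "staying inside a building"]
text = clip.tokenize([f"a photo taken while {m}" for m in modes])

def movement_mode(image_path):
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
    return modes[logits_per_image.argmax().item()]
```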

    Myscéal: an experimental interactive lifelog retrieval system for LSC'20

    The Lifelog Search Challenge (LSC) is an annual benchmarking activity for comparing approaches to interactive retrieval from multimodal lifelogs. Being an interactive search challenge, issues such as retrieval accuracy, search speed, and usability of interfaces are key challenges that must be addressed by every participant. In this paper, we introduce Myscéal, an interactive lifelog retrieval engine designed to support novice users in retrieving items of interest from a large multimodal lifelog. Additionally, we introduce a new similarity measure called “aTFIDF” to match a user’s free-text information need with the multimodal lifelog index.
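
    The abstract does not define aTFIDF itself; as a baseline point of reference, a plain TF-IDF cosine ranking over per-image concept annotations could look like the following sketch (the data layout is an assumption for illustration, not the paper's index format).

```python
import math
from collections import Counter

def build_index(docs):
    """docs: {image_id: list of concept terms detected in that image}."""
    df = Counter()
    for terms in docs.values():
        df.update(set(terms))
    n = len(docs)
    idf = {t: math.log(n / df[t]) for t in df}
    index = {}
    for img, terms in docs.items():
        tf = Counter(terms)
        vec = {t: tf[t] * idf[t] for t in tf}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        index[img] = (vec, norm)
    return index, idf

def search(query_terms, index, idf, k=10):
    """Rank images by cosine similarity between query and image TF-IDF vectors."""
    q = Counter(query_terms)
    qvec = {t: q[t] * idf.get(t, 0.0) for t in q}
    qnorm = math.sqrt(sum(w * w for w in qvec.values())) or 1.0
    scores = []
    for img, (vec, norm) in index.items():
        dot = sum(qvec[t] * vec.get(t, 0.0) for t in qvec)
        if dot:
            scores.append((dot / (qnorm * norm), img))
    return sorted(scores, reverse=True)[:k]
```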

    Myscéal 2.0: a revised experimental interactive lifelog retrieval system for LSC'21

    Building an interactive retrieval system for lifelogging presents many challenges, owing to the massive volume of multimodal personal data and the requirement for accurate and rapid responses. The Lifelog Search Challenge (LSC) is an international lifelog retrieval competition that inspires researchers to develop systems to cope with these challenges and evaluates the effectiveness of their solutions. In this paper, we upgrade our previous Myscéal and present the Myscéal 2.0 system for LSC'21, with improved features inspired by experiments with novice users. The experiments show that a novice user achieved more than half of the expert score on average. To narrow this gap, some potential enhancements were identified and integrated into the enhanced version.

    VRLE: Lifelog Interaction Prototype in Virtual Reality: Lifelog Search Challenge at ACM ICMR 2020

    The Lifelog Search Challenge (LSC) invites researchers to share their prototypes for interactive lifelog retrieval and encourages competition to develop and evaluate effective methodologies for this task. In this paper we present a novel approach to visual lifelog exploration based on our research to date, utilising virtual reality as a medium for interactive information retrieval. The VRLE prototype presented is an iteration on a previous system that won the first LSC competition at ACM ICMR 2018.

    E-Myscéal: embedding-based Interactive lifelog retrieval system for LSC'22

    Developing interactive lifelog retrieval systems is a growing research area. There are many international competitions for lifelog retrieval that encourage researchers to build effective systems that can address the multimodal retrieval challenge of lifelogs. The Lifelog Search Challenge (LSC), first organised in 2018, is currently the only interactive benchmarking evaluation for lifelog retrieval systems. Participating systems should have an accurate search engine and a user-friendly interface that can help users retrieve relevant content. In this paper, we upgrade our previous Myscéal, the top-performing system in LSC'20 and LSC'21, and present E-Myscéal for LSC'22, which includes a completely different search engine. Instead of using visual concepts for retrieval as Myscéal does, E-Myscéal employs an embedding technique that assists novice users who are not familiar with the concepts. Our experiments show that the new search engine can find relevant images at the first position in the ranked list for a quarter of the LSC'21 queries (26%) using just the first hint from the textual information need. Regarding the user interface, we keep the simple non-faceted design of the previous version but improve the event-view browsing to better support novice users.
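
    A minimal sketch of such an embedding-based engine, using OpenAI's public CLIP package as a stand-in (E-Myscéal's actual encoder and index are not described in this abstract): images are encoded offline into unit-norm vectors, and a free-text query is encoded at search time and ranked by cosine similarity.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_images(paths):
    """Encode lifelog images into unit-norm vectors, once, offline."""
    batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

def rank(query, image_feats, paths, k=10):
    """Rank images by cosine similarity between the query text and each image."""
    tokens = clip.tokenize([query]).to(device)
    with torch.no_grad():
        q = model.encode_text(tokens)
    q = q / q.norm(dim=-1, keepdim=True)
    sims = (image_feats @ q.T).squeeze(1)
    best = sims.topk(min(k, len(paths)))
    return [(paths[i], sims[i].item()) for i in best.indices.tolist()]
```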

    DCU team at the NTCIR-15 micro-activity retrieval task

    The growing attention to lifelogging research has led to the creation of many retrieval systems, most of which employ event segmentation as core functionality. While previous literature focused on splitting lifelog data into broad segments of daily living activities, less attention has been paid to micro-activities, which last for short periods of time yet carry valuable information for building a high-precision retrieval engine. In this paper, we present our efforts in addressing the NTCIR-15 MART challenge, in which participants were asked to retrieve micro-activities from a multimodal dataset. We proposed five models that investigate imagery and sensory data, both jointly and separately, using various Deep Learning and Machine Learning techniques, and achieved a maximum mAP score of 0.901 using an Image Tabular Pair-wise Similarity model, ranking second overall in the competition. Our model not only captures the information coming from the temporal visual data combined with the sensor signal, but also works as a Siamese network to discriminate micro-activities.
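
    The exact architecture is not given in this abstract, but a generic two-branch (Siamese-style) image-tabular similarity model could be sketched as follows; all layer sizes and the loss suggestion are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ImageTabularSiamese(nn.Module):
    """Two branches project image features and tabular sensor features into a
    shared space; the pair-wise similarity is the cosine of the projections.
    Dimensions here are illustrative, not taken from the paper."""

    def __init__(self, img_dim=512, tab_dim=32, shared_dim=128):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Linear(img_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
        )
        self.tab_branch = nn.Sequential(
            nn.Linear(tab_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
        )

    def forward(self, img_feat, tab_feat):
        a = nn.functional.normalize(self.img_branch(img_feat), dim=-1)
        b = nn.functional.normalize(self.tab_branch(tab_feat), dim=-1)
        return (a * b).sum(-1)  # cosine similarity per pair

# Pairs from the same micro-activity would be pushed together and other pairs
# apart, e.g. with a contrastive or BCE loss over these similarity scores.
```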

    A VR interface for browsing visual spaces at VBS2021

    The Video Browser Showdown (VBS) is an annual competition in which each participant prepares an interactive video retrieval system and partakes in a live comparative evaluation at the annual MMM conference. In this paper, we introduce Eolas, a prototype video/image retrieval system incorporating a novel virtual reality (VR) interface. For VBS'21, Eolas represented each keyframe of the collection by an embedded feature in a latent vector space, into which a query would also be projected to facilitate retrieval within a VR environment. A user could then explore the space and perform one of a number of filter operations to traverse the space and locate the correct result.

    An Exploration into the Benefits of the CLIP model for Lifelog Retrieval

    In this paper, we attempt to fine-tune the CLIP (Contrastive Language-Image Pre-Training) model on the Lifelog Question Answering dataset (LLQA) to investigate the retrieval performance of the fine-tuned model over the zero-shot baseline. We train the model using a weight-space ensembling approach with a modified loss function, to account for the differences between our dataset (LLQA) and the dataset on which CLIP was originally pretrained. We further evaluate our fine-tuned model using visual as well as multimodal queries on multiple retrieval tasks, demonstrating improved performance over the zero-shot baseline model.
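
    Weight-space ensembling in this sense (cf. WiSE-FT, Wortsman et al.) linearly interpolates the zero-shot and fine-tuned weights of the same architecture. A minimal sketch, with the mixing coefficient alpha as a tunable assumption:

```python
import torch

def weight_space_ensemble(zero_shot_sd, fine_tuned_sd, alpha=0.5):
    """theta = (1 - alpha) * theta_zero_shot + alpha * theta_fine_tuned,
    applied key-by-key to two state dicts of the same architecture.
    Non-float buffers (e.g. counters) may need special handling."""
    assert zero_shot_sd.keys() == fine_tuned_sd.keys()
    return {
        k: (1 - alpha) * zero_shot_sd[k] + alpha * fine_tuned_sd[k]
        for k in zero_shot_sd
    }

# Usage (hypothetical model objects):
# ensembled = weight_space_ensemble(zero_shot.state_dict(), fine_tuned.state_dict())
# model.load_state_dict(ensembled)
```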