33 research outputs found

    FIRST - Flexible interactive retrieval SysTem for visual lifelog exploration at LSC 2020

    Lifelogs can provide useful insights into our daily activities. It is essential to provide a flexible way for users to retrieve events or moments of interest, corresponding to a wide variety of query types. This motivates us to develop FIRST, a Flexible Interactive Retrieval SysTem, which helps users combine or integrate various query components in a flexible manner to handle different query scenarios, such as clustering visual data based on color histograms, visual similarity, GPS location, or scene attributes. We also employ personalized concept detection and image captioning to enhance image understanding from visual lifelog data, and develop an autoencoder-like approach for mapping between query text and image features. Furthermore, we refine the user interface of the retrieval system to better assist users in query expansion and in verifying sequential events at a flexible temporal resolution that controls the navigation speed through sequences of images.
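    The color-histogram cue mentioned above can be sketched as follows. This is an illustrative reconstruction, not FIRST's actual code; all function names and parameters are assumptions.

```python
# Hypothetical sketch of a color-histogram similarity cue, one of the
# visual clustering signals a FIRST-style system might combine.
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an HxWx3 uint8 RGB image into a normalized 3D color histogram."""
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.flatten()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())
```

    Images with similar color distributions score close to 1.0, which makes the measure usable both for ranking and for grouping visually similar lifelog frames.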

    Recuperação e identificação de momentos em imagens (Retrieval and identification of moments in images)

    In our modern society almost anyone is able to capture moments and record events due to the easy access to smartphones. This leads to the question: if we record so much of our lives, how can we easily retrieve specific moments? The answer to this question would open the door to a big leap in human life quality. The possibilities are endless, from trivial problems like finding a photo of a birthday cake to being capable of analyzing the progress of mental illnesses in patients or even tracking people with infectious diseases. With so much data being created every day, the answer to this question becomes more complex. There is no streamlined approach to solving the problem of moment localization in a large dataset of images, and investigations into this problem began only a few years ago. ImageCLEF is one competition where researchers participate and try to achieve new and better results in the task of moment retrieval. This complex problem, along with the interest in participating in the ImageCLEF Lifelog Moment Retrieval Task, posed a good challenge for the development of this dissertation. The proposed solution consists of developing a system capable of retrieving images automatically according to specified moments described in a corpus of text, without any sort of user interaction and using only state-of-the-art image and text processing methods. The developed retrieval system achieves this objective by extracting and categorizing relevant information from text while being able to compute a similarity score with the labels extracted during the image processing stage. In this way, the system is capable of telling whether images are related to the moment specified in the text and is therefore able to retrieve the pictures accordingly. In the ImageCLEF Lifelog Moment Retrieval 2020 subtask, the proposed automatic retrieval system achieved a score of 0.03 under the F1-measure@10 evaluation methodology.
    Even though these scores are not competitive when compared with the scores of other teams' systems, the built system presents a good baseline for future work.
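    The label-matching similarity described above can be sketched as a simple set-overlap ranking. This is a hedged illustration of the general idea, not the dissertation's actual code; all names are assumptions.

```python
# Illustrative sketch: score each image by the Jaccard overlap between
# its detected labels and the labels parsed from the query text, then
# rank images best-first.

def jaccard(query_labels, image_labels):
    q, i = set(query_labels), set(image_labels)
    return len(q & i) / len(q | i) if q | i else 0.0

def rank_images(query_labels, labeled_images):
    """labeled_images: dict of image_id -> set of labels. Returns ids, best first."""
    return sorted(labeled_images,
                  key=lambda img: jaccard(query_labels, labeled_images[img]),
                  reverse=True)
```

    Taking the top 10 of such a ranking is what an F1-measure@10 evaluation would score against the ground-truth relevant images.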

    Organiser Team at ImageCLEFlifelog 2020: A Baseline Approach for Moment Retrieval and Athlete Performance Prediction using Lifelog Data

    For the LMRT task at ImageCLEFlifelog 2020, LIFER 3.0, a new version of the LIFER system with improvements in the user interface and system affordance, is used and evaluated via feedback from a user experiment. In addition, since both tasks share a common dataset, LIFER 3.0 borrows some features from the LifeSeeker system deployed for the Lifelog Search Challenge, namely free-text search, visual similarity search and an elastic sequencing filter. For the SPLL task, we propose a naive solution that captures the rate of change in running speed and weight, then obtains the target changes for each subtask using averaging and a linear regression model. The results presented in this paper can be used as comparative baselines for other participants in the ImageCLEFlifelog 2020 challenge.
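    The linear-regression part of the baseline above can be sketched as fitting a trend line to a measurement series and extrapolating. This is a minimal sketch of the general technique, not the organisers' actual code; the variable names and data are assumptions.

```python
# Illustrative sketch: fit a linear trend to weight (or speed)
# measurements over time with plain least squares, then extrapolate
# the predicted change from the last observation to a target day.
import numpy as np

def predict_change(days, values, target_day):
    """Fit values ~ slope * day + intercept; return the predicted change
    between the last observed value and the value at target_day."""
    slope, intercept = np.polyfit(days, values, deg=1)
    return slope * target_day + intercept - values[-1]
```

    With a steadily decreasing series the prediction simply continues the trend, which matches the "rate of change plus averaging" spirit of the baseline.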

    Myscéal 2.0: a revised experimental interactive lifelog retrieval system for LSC'21

    Building an interactive retrieval system for lifelogging involves many challenges due to the massive multi-modal personal data, besides the requirement of accuracy and rapid response for such a tool. The Lifelog Search Challenge (LSC) is the international lifelog retrieval competition that inspires researchers to develop systems to cope with these challenges and evaluates the effectiveness of their solutions. In this paper, we upgrade our previous Myscéal and present the Myscéal 2.0 system for LSC'21, with improved features inspired by experiments with novice users. The experiments show that a novice user achieved, on average, more than half of the expert score. To narrow this gap, some potential enhancements were identified and integrated into the enhanced version.

    Temporal multimodal video and lifelog retrieval

    The past decades have seen exponential growth of both consumption and production of data, with multimedia such as images and videos contributing significantly to said growth. The widespread proliferation of smartphones has provided everyday users with the ability to consume and produce such content easily. As the complexity and diversity of multimedia data has grown, so has the need for more complex retrieval models which address the information needs of users. Finding relevant multimedia content is central in many scenarios, from internet search engines and medical retrieval to querying one's personal multimedia archive, also called lifelog. Traditional retrieval models have often focused on queries targeting small units of retrieval, yet users usually remember temporal context and expect results to include this. However, there is little research into enabling these information needs in interactive multimedia retrieval. In this thesis, we aim to close this research gap by making several contributions to multimedia retrieval with a focus on two scenarios, namely video and lifelog retrieval. We provide a retrieval model for complex information needs with temporal components, including a data model for multimedia retrieval, a query model for complex information needs, and a modular and adaptable query execution model which includes novel algorithms for result fusion. The concepts and models are implemented in vitrivr, an open-source multimodal multimedia retrieval system, which covers all aspects from extraction to query formulation and browsing. vitrivr has proven its usefulness in evaluation campaigns and is now used in two large-scale interdisciplinary research projects. We show the feasibility and effectiveness of our contributions in two ways: firstly, through results from user-centric evaluations which pit different user-system combinations against one another. 
    Secondly, we perform a system-centric evaluation by creating a new dataset for temporal information needs in video and lifelog retrieval, with which we quantitatively evaluate our models. The results show significant benefits for systems that enable users to specify more complex information needs with temporal components. Participation in interactive retrieval evaluation campaigns over multiple years provides insight into possible future developments and challenges of such campaigns.
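    The temporal result fusion described above can be sketched as scoring ordered pairs of candidate segments. This is a minimal illustration of the idea, not vitrivr's actual fusion algorithm; the function name and window logic are assumptions.

```python
# Illustrative sketch: given per-segment scores for two ordered
# sub-queries, keep pairs where the second segment follows the first
# within a time window, and fuse their scores by summing.

def temporal_fusion(scores_a, scores_b, max_gap):
    """scores_a, scores_b: dict of segment start time -> score.
    Returns [((t_a, t_b), fused_score), ...], best first."""
    fused = []
    for ta, sa in scores_a.items():
        for tb, sb in scores_b.items():
            if 0 < tb - ta <= max_gap:
                fused.append(((ta, tb), sa + sb))
    return sorted(fused, key=lambda pair: pair[1], reverse=True)
```

    Real systems replace the quadratic pair enumeration with sorted-list or index-based joins, but the fusion principle (order constraint plus score aggregation) is the same.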

    Digital life stories: Semi-automatic (auto)biographies within lifelog collections

    Our life stories enable us to reflect upon and share our personal histories. Through emerging digital technologies, the possibility of collecting life experiences digitally is increasingly feasible; consequently, so is the potential to create a digital counterpart to our personal narratives. In this work, lifelogging tools are used to collect digital artifacts continuously and passively throughout the day. These include images; documents, emails and webpages accessed; text messages; and mobile activity. This range of data, when brought together, is known as a lifelog. Given the complexity, volume and multimodal nature of such collections, it is clear that there are significant challenges to be addressed in order to achieve coherent and meaningful digital narratives of events from our life histories. This work investigates the construction of personal digital narratives from lifelog collections. It examines the underlying questions, issues and challenges relating to the construction of personal digital narratives from lifelogs. Fundamentally, it addresses how to organize and transform data sampled from an individual's day-to-day activities into a coherent narrative account. This enquiry is enabled by three 20-month long-term lifelogs collected by participants, and produces a narrative system which enables the semi-automatic construction of digital stories from lifelog content. Inspired by probative studies conducted into current practices of curation, from which a set of fundamental requirements are established, this solution employs a 2-dimensional spatial framework for storytelling. It delivers integrated support for the structuring of lifelog content and its distillation into story form through information retrieval approaches. We describe and contribute flexible algorithmic approaches to achieve both. Finally, this research inquiry yields qualitative and quantitative insights into such digital narratives and their generation, composition and construction.
    The opportunities for such personal narrative accounts to enable recollection, reminiscence and reflection by the collection owners are established, and their benefit in sharing past personal experiences is outlined. Finally, in a novel investigation with motivated third parties, we demonstrate the opportunities such narrative accounts may have beyond the scope of the collection owner: in personal, societal and cultural explorations, in artistic endeavours, and as a generational heirloom.

    An Interactive Lifelog Search Engine for LSC2018

    This thesis consists of developing an interactive lifelog search engine for the LSC 2018 search challenge at ACM ICMR 2018. The search engine is created to browse for images from a given lifelog dataset and display them along with related written information and four other images providing context about the searched one. First, the work introduces the relevance of this project, presenting the main social problems addressed and the aim of the project in dealing with them. It then sets out the scope of the project and its main objectives. The work also reviews the current state of similar prototypes that already exist, so the reader can see the differences our project presents. After the project approach, the work walks through the methodology and creation process, going deep into the main aspects, explaining every choice and decision, and noting the limits of the current prototype. Additionally, the project concludes with a results section where the system is tested with six users, who are asked to find three specific images using the search engine. This test is divided into two sections: first, a qualitative section where users test the system and fill out a survey on how comfortable it is for them; and second, a more quantitative section where they rate the speed of the system. Finally, the project closes by discussing the current and future ethics of lifelogging in general, with a final conclusion, further investigation, and future improvements.

    LifeLogging: personal big data

    We have recently observed a convergence of technologies fostering the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advancements in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging, covering its research history, current technologies, and applications. Thus far, most lifelogging research has focused predominantly on visual lifelogging in order to capture details of life activities, hence we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist's perspective on lifelogging and the quantified self.

    IAPMA 2011: 2nd Workshop on information access to personal media archives

    Towards e-Memories: challenges of capturing, summarising, presenting, understanding, using, and retrieving relevant information from heterogeneous data contained in personal media archives. Welcome to IAPMA 2011, the second international workshop on "Information Access for Personal Media Archives". It is now possible to archive much of our life experiences in digital form using a variety of sources, e.g. blogs written, tweets made, social network status updates, photographs taken, videos seen, music heard, physiological monitoring, locations visited and environmentally sensed data of those places, details of people met, etc. Information can be captured from a myriad of personal information devices including desktop computers, PDAs, digital cameras, video and audio recorders, and various sensors, including GPS, Bluetooth, and biometric devices.

    Stress detection in lifelog data for improved personalized lifelog retrieval system

    Stress can be categorized into acute and chronic types, with acute stress having short-term positive effects in managing hazardous situations, while chronic stress can adversely impact mental health. In a biological context, stress elicits a physiological response indicative of the fight-or-flight mechanism, accompanied by measurable changes in physiological signals such as blood volume pulse (BVP), galvanic skin response (GSR), and skin temperature (TEMP). While clinical-grade devices have traditionally been used to measure these signals, recent advancements in sensor technology enable their capture using consumer-grade wearable devices, providing opportunities for research in acute stress detection. Despite these advancements, there has been limited focus on utilizing low-resolution data obtained from sensor technology for early stress detection and evaluating stress detection models under real-world conditions. Moreover, the potential of physiological signals to infer mental stress information remains largely unexplored in lifelog retrieval systems. This thesis addresses these gaps through empirical investigations and explores the potential of utilizing physiological signals for stress detection and their integration within the state-of-the-art (SOTA) lifelog retrieval system. The main contributions of this thesis are as follows. Firstly, statistical analyses are conducted to investigate the feasibility of using low-resolution data for stress detection and emphasize the superiority of subject-dependent models over subject-independent models, thereby proposing the optimal approach to training stress detection models with low-resolution data. Secondly, longitudinal stress lifelog data is collected to evaluate stress detection models in real-world settings. It is proposed that training lifelog models on physiological signals in real-world settings is crucial to avoid detection inaccuracies caused by differences between laboratory and free-living conditions. 
    Finally, a state-of-the-art lifelog interactive retrieval system called LifeSeeker is developed, incorporating a stress-moment filter function. Experimental results demonstrate that integrating this function improves the overall performance of the system in both interactive and non-interactive modes. In summary, this thesis contributes to the understanding of stress detection applied in real-world settings and showcases the potential of integrating stress information to enhance personalized lifelog retrieval system performance.
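    A subject-dependent stress-flagging step like the stress-moment filter described above can be sketched as follows. This is a hedged illustration only, not the thesis's actual model; the thresholding rule, window format, and names are assumptions.

```python
# Illustrative sketch: flag time windows whose mean galvanic skin
# response (GSR) exceeds a per-subject baseline by a margin. The
# baseline and spread are computed from that subject's own windows,
# making the threshold subject-dependent.
import statistics

def stress_moments(gsr_windows, k=2.0):
    """gsr_windows: list of (timestamp, [gsr samples]). A window is
    flagged if its mean exceeds the subject baseline plus k standard
    deviations of the window means."""
    means = [statistics.fmean(samples) for _, samples in gsr_windows]
    baseline = statistics.fmean(means)
    spread = statistics.pstdev(means)
    return [t for (t, _), m in zip(gsr_windows, means)
            if m > baseline + k * spread]
```

    A retrieval system could then use the flagged timestamps as a filter, returning only lifelog moments that coincide with detected stress.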