
    Replay detection and multi-stream synchronization in CS:GO game streams using content-based image retrieval and image signature matching

    In GameStory: The 2019 Video Game Analytics Challenge, two main tasks were set: replay detection with multi-stream synchronization, and game story summarization. In this paper, we propose a data-driven approach to the first task. Our solution determines the replays lying between two logo-transition endpoints and synchronizes them with their source streams by extracting frames from the videos and then applying image processing and retrieval techniques. In detail, we use a Bag of Visual Words approach to detect the logo-transition endpoints, between which multiple replays lie, then employ an image signature matching algorithm for multi-stream synchronization and replay boundary refinement. The best configuration of our proposed solution achieves the second-highest scores in all evaluation metrics of the challenge.
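The signature-matching step described above can be illustrated with a minimal sketch: compute a compact per-frame signature (here, a block-averaged downsampled grayscale thumbnail, which is an illustrative stand-in, not the paper's exact descriptor) and slide one stream against the other to find the frame offset that minimises signature distance.

```python
import numpy as np

def frame_signature(frame, size=8):
    """Compact signature for a grayscale frame: block-average down to a
    size x size thumbnail, then normalise to [0, 1]. Illustrative only."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]
    bh, bw = frame.shape[0] // size, frame.shape[1] // size
    sig = frame.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return sig.ravel() / 255.0

def best_offset(sigs_a, sigs_b, max_offset=50):
    """Align stream B to stream A: try every frame offset in the window
    and return the one minimising mean signature distance over the overlap."""
    best_off, best_d = 0, float("inf")
    for off in range(-max_offset, max_offset + 1):
        if off >= 0:
            a, b = sigs_a[off:], sigs_b[: max(len(sigs_a) - off, 0)]
        else:
            a, b = sigs_a[: max(len(sigs_b) + off, 0)], sigs_b[-off:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        d = np.mean([np.abs(x - y).mean() for x, y in zip(a[:n], b[:n])])
        if d < best_d:
            best_off, best_d = off, d
    return best_off
```

In practice the same distance curve can also flag replay boundaries: a sudden jump in signature distance marks the point where the two streams stop showing the same content.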

    LIFER 2.0: discovering personal lifelog insights using an interactive lifelog retrieval system

    This paper describes the participation of the Organiser Team in the ImageCLEFlifelog 2019 Solve My Life Puzzle (Puzzle) and Lifelog Moment Retrieval (LMRT) tasks. We proposed to use LIFER 2.0, an enhanced version of LIFER, an interactive retrieval system for personal lifelog data. We used LIFER 2.0 with additional visual features, obtained with a traditional visual bag-of-words model, to solve the Puzzle task, while for LMRT we applied LIFER 2.0 with only the provided information. The results on both tasks confirm that, using faceted filters and context browsing, a user can gain insights from their personal lifelog through very simple interactions. These results also serve as baselines against which other approaches in the ImageCLEFlifelog 2019 challenge can be compared.

    Overview of NTCIR-15 MART

    MART (Micro-activity Retrieval Task) was an NTCIR-15 collaborative benchmarking pilot task. The NTCIR-15 MART pilot aimed to motivate the development of first-generation techniques for high-precision micro-activity detection and retrieval, to support the identification and retrieval of activities that occur over short time-scales such as minutes, rather than the long-duration event segmentation tasks of past work. Participating researchers developed and benchmarked approaches to retrieve micro-activities from rich time-aligned multi-modal sensor data. Groups were ranked in decreasing order of micro-activity retrieval accuracy using mAP (mean Average Precision). The dataset used for the task consisted of a detailed lifelog of activities gathered using a controlled protocol of real-world activities (e.g. using a computer, eating, daydreaming, etc.). The data included a lifelog camera data stream, biosignal activity (EOG, HR), and computer interactions (mouse movements, screenshots, etc.). This task presented a novel set of challenging micro-activity-based topics.
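The mAP metric used for ranking can be sketched in a few lines: for each topic, Average Precision accumulates precision at every rank where a relevant item appears, and mAP is the mean over topics.

```python
def average_precision(ranked, relevant):
    """AP of one ranked result list against the set of relevant items."""
    relevant = set(relevant)
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / len(relevant)

def mean_average_precision(runs):
    """runs: list of (ranked_list, relevant_set) pairs, one per topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For example, ranking the two relevant items at positions 1 and 3 of a result list yields AP = (1/1 + 2/3) / 2 ≈ 0.833.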

    Flexible interactive retrieval SysTem 3.0 for visual lifelog exploration at LSC 2022

    Building a retrieval system for lifelog data is more complicated than for ordinary data due to redundancy, blurriness, the massive amount of data, the various sources of information accompanying lifelog data, and especially the ad-hoc nature of queries. The Lifelog Search Challenge (LSC) is a benchmarking challenge that encourages researchers and developers to push the boundaries of lifelog retrieval. For LSC'22, we developed FIRST 3.0, a novel and flexible system that leverages expressive cross-domain embeddings to enhance the search process. Our system aims to adaptively capture the semantics of an image at different levels of detail. We also propose augmenting our system with an external search engine that provides initial visual examples for unfamiliar concepts. Finally, we organize image data in hierarchical clusters based on visual similarity and location to assist users in data exploration. Experiments show that our system is both fast and effective in handling various retrieval scenarios.

    DCU team at the NTCIR-15 micro-activity retrieval task

    The growing attention to lifelogging research has led to the creation of many retrieval systems, most of which employ event segmentation as core functionality. While previous literature focused on splitting lifelog data into broad segments of daily living activities, less attention was paid to micro-activities, which last for short periods of time yet carry valuable information for building a high-precision retrieval engine. In this paper, we present our efforts in addressing the NTCIR-15 MART challenge, in which participants were asked to retrieve micro-activities from a multi-modal dataset. We proposed five models that investigate imagery and sensory data, both jointly and separately, using various Deep Learning and Machine Learning techniques, and achieved a maximum mAP score of 0.901 using an Image Tabular Pair-wise Similarity model, ranking second overall in the competition. Our model not only captures the information coming from temporal visual data combined with sensor signals, but also works as a Siamese network to discriminate micro-activities.
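The core idea of scoring image-tabular pairs can be illustrated with a deliberately simplified sketch: combine the cosine similarity of the visual features with that of the sensor (tabular) features under a mixing weight. The learned Siamese model in the paper replaces this fixed weighting with trained parameters; everything below is a hypothetical stand-in.

```python
import numpy as np

def pair_similarity(img_a, tab_a, img_b, tab_b, alpha=0.5):
    """Score how likely two moments show the same micro-activity by mixing
    cosine similarity of visual and tabular feature vectors.
    alpha weights the visual channel; this fixed mix is an illustrative
    stand-in for the paper's learned pair-wise similarity model."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return alpha * cos(img_a, img_b) + (1 - alpha) * cos(tab_a, tab_b)
```

Two moments with identical features in both modalities score 1.0; orthogonal features in both modalities score 0.0, mirroring how a Siamese network discriminates same-activity from different-activity pairs.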

    FIRST - Flexible interactive retrieval SysTem for visual lifelog exploration at LSC 2020

    Lifelogs can provide useful insights into our daily activities. It is essential to provide a flexible way for users to retrieve certain events or moments of interest, corresponding to a wide variety of query types. This motivates us to develop FIRST, a Flexible Interactive Retrieval SysTem, which helps users combine or integrate various query components in a flexible manner to handle different query scenarios, such as visual data clustering based on color histogram, visual similarity, GPS location, or scene attributes. We also employ personalized concept detection and image captioning to enhance image understanding from visual lifelog data, and develop an autoencoder-like approach for mapping between query text and image features. Furthermore, we refine the user interface of the retrieval system to better assist users in query expansion and in verifying sequential events at a flexible temporal resolution that controls the navigation speed through sequences of images.

    LifeSeeker 3.0: an interactive lifelog search engine for LSC’21

    In this paper, we present the interactive lifelog retrieval engine developed for the LSC’21 comparative benchmarking challenge. The LifeSeeker 3.0 interactive lifelog retrieval engine is an enhanced version of our previous system participating in LSC’20, LifeSeeker 2.0. The system is developed jointly by Dublin City University and the Ho Chi Minh City University of Science. The implementation of LifeSeeker 3.0 focuses on searching and filtering by text query using a weighted Bag-of-Words model with visual concept augmentation and three weighted vocabularies. The visual similarity search is improved using a bag of local convolutional features, which improves on the previous version’s performance while enhancing query processing time, result display, and browsing support.
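A weighted Bag-of-Words scorer over several vocabularies can be sketched as follows. The vocabulary names and weights here are illustrative assumptions, not the system's actual configuration: each vocabulary contributes the IDF mass of its matched terms, scaled by a per-vocabulary weight.

```python
from collections import Counter

def bow_score(query_terms, doc_terms_by_vocab, idf, vocab_weights):
    """Weighted Bag-of-Words score across multiple vocabularies
    (e.g. visual concepts, locations -- hypothetical names).
    Each matched query term adds weight * term_frequency * idf."""
    total = 0.0
    for vocab, weight in vocab_weights.items():
        tf = Counter(doc_terms_by_vocab.get(vocab, []))
        for term in query_terms:
            if term in tf:
                total += weight * tf[term] * idf.get(term, 0.0)
    return total
```

Ranking documents by this score, an image annotated with a rare, query-matching concept outranks one that only matches a common location term, since rarer terms carry higher IDF and the concept vocabulary can be weighted more heavily.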

    LifeSeeker 2.0: interactive lifelog search engine at LSC 2020

    In this paper, we present our interactive lifelog retrieval engine for the LSC’20 comparative benchmarking challenge. The LifeSeeker 2.0 interactive lifelog retrieval engine, developed jointly by Dublin City University and the Ho Chi Minh City University of Science, is an enhanced version of the two corresponding interactive lifelog retrieval engines from LSC’19. The implementation of LifeSeeker 2.0 focuses on searching by text query using a Bag-of-Words model with visual concept augmentation, with additional improvements in query processing time, enhanced result display and browsing support, and interaction with visual graphs for both query and filter purposes.

    Organiser Team at ImageCLEFlifelog 2020: A Baseline Approach for Moment Retrieval and Athlete Performance Prediction using Lifelog Data

    For the LMRT task at ImageCLEFlifelog 2020, LIFER 3.0, a new version of the LIFER system with improvements in the user interface and system affordance, is used and evaluated via feedback from a user experiment. In addition, since both tasks share a common dataset, LIFER 3.0 borrows some features from the LifeSeeker system deployed for the Lifelog Search Challenge: free-text search, visual similarity search, and an elastic sequencing filter. For the SPLL task, we propose a naive solution that captures the rate of change in running speed and weight, then obtains the target changes for each subtask using average computation and a linear regression model. The results presented in this paper can be used as comparative baselines by other participants in the ImageCLEFlifelog 2020 challenge.
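The linear-regression part of the SPLL baseline can be sketched as follows: fit an ordinary least-squares line to a measured series (e.g. weight per day; the variable names are illustrative) and extrapolate the slope over the prediction horizon to obtain the target change.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a 1-D trend,
    e.g. weight measurements over days."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def predict_change(xs, ys, horizon):
    """Predicted change over `horizon` further units of x,
    assuming the fitted linear trend continues."""
    slope, _ = linear_fit(xs, ys)
    return slope * horizon
```

For instance, weights of 80, 79.5, 79, 78.5 kg on days 0 to 3 give a slope of -0.5 kg/day, predicting a 5 kg loss over the next ten days.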

    Overview of ImageCLEFlifelog 2019: Solve My Life Puzzle and Lifelog Moment Retrieval

    This paper describes ImageCLEFlifelog 2019, the third edition of the Lifelog task. In this edition, the task was composed of two subtasks (challenges): the Lifelog Moments Retrieval (LMRT) challenge, which followed the same format as the previous edition, and Solve My Life Puzzle (Puzzle), a brand-new task focused on rearranging lifelog moments in temporal order. ImageCLEFlifelog 2019 received noticeably more submissions than previous editions, with ten teams participating, resulting in a total of 109 runs.