An interactive lifelog search engine for LSC2018
In this work, we describe an interactive lifelog search engine developed for the LSC 2018 search challenge at ACM ICMR 2018. The paper introduces the four-step process required to support lifelog search engines and describes the source data for the search engine, as well as the approach to ranking chosen for the interactive search engine. Finally, the interface is introduced before we highlight the limits of the current prototype and suggest opportunities for future work.
LIFER 2.0: discovering personal lifelog insights using an interactive lifelog retrieval system
This paper describes the participation of the Organiser Team in the ImageCLEFlifelog 2019 Solve My Life Puzzle (Puzzle) and Lifelog Moment Retrieval (LMRT) tasks. We proposed to use LIFER 2.0, an enhanced version of LIFER, an interactive retrieval system for personal lifelog data. We employed LIFER 2.0 with additional visual features, obtained using a traditional visual bag-of-words approach, to solve the Puzzle task, while for LMRT we applied LIFER 2.0 with only the provided information. The results on both tasks confirmed that, by using faceted filters and context browsing, a user can gain insights from their personal lifelog through very simple interactions. These results also serve as baselines against which other approaches in the ImageCLEFlifelog 2019 challenge can be compared.
An Interactive Lifelog Search Engine for LSC2018
This thesis presents the development of an interactive lifelog search engine for the LSC 2018 search challenge at ACM ICMR 2018. The search engine is created to browse for images from a given lifelog dataset and to display them along with related written information and four further images that provide context for the retrieved one. First, the work introduces the relevance of this project, outlining the main social problems addressed and the aim of our project in dealing with them. It then defines the scope of the project and its main objectives. The work also reviews the current state of similar existing prototypes so that the reader can see the differences our project presents. After the project approach is established, the methodology and creation process are described in depth, covering the main aspects, the reasoning behind each choice and decision, and the limits of the current prototype. Additionally, the project concludes with a results section in which the system is tested with six users, who are asked to find three specific images using the search engine. This test is divided into two parts: first, a qualitative part, in which users test the system and fill out a survey measuring how comfortable it is for them; and a second, more quantitative part, in which they assess the speed of our system. Finally, the project reviews the current and future ethics of lifelogging in general and closes with a final conclusion on further investigation and future improvement.
VRLE: Lifelog Interaction Prototype in Virtual Reality: Lifelog Search Challenge at ACM ICMR 2020
The Lifelog Search Challenge (LSC) invites researchers to share their prototypes for interactive lifelog retrieval and encourages competition to develop and evaluate effective methodologies to achieve this. With this paper we present a novel approach to visual lifelog exploration based on our research to date utilising virtual reality as a medium for interactive information retrieval. The VRLE prototype presented is an iteration on a previous system which won the first LSC competition at ACM ICMR 2018.
Overview of the NTCIR-14 Lifelog-3 task
Lifelog-3 was the third instance of the lifelog task at NTCIR. At NTCIR-14, the Lifelog-3 task explored three different challenges related to lifelog data access: the search challenge, the annotation challenge, and the insights challenge. In this paper we review the activities of the participating teams who took part in the challenges, and we suggest next steps for the community.
Experiments in lifelog organisation and retrieval at NTCIR
Lifelogging can be described as the process by which individuals use various software and hardware devices to gather large archives of multimodal personal data from multiple sources and store them in a personal data archive, called a lifelog. The Lifelog task at NTCIR was a comparative benchmarking exercise with the aim of encouraging research into the organisation and retrieval of data from multimodal lifelogs. The Lifelog task ran for over four years, from NTCIR-12 until NTCIR-14 (2015.02–2019.06); it invited participants to submit to five subtasks, each tackling a different challenge related to lifelog retrieval. In this chapter, a motivation is given for the Lifelog task and a review of progress since NTCIR-12 is presented. Finally, the lessons learned and the challenges within the domain of lifelog retrieval are presented.
VieLens: an interactive search engine for LSC2019
With the appearance of many wearable devices such as smartwatches, recording glasses (such as Google Glass), and smartphones, digital personal profiles have become more readily available nowadays. However, searching and navigating these multi-source, multi-modal, and often unstructured data to extract useful information is still a relatively challenging task. Therefore, the LSC2019 competition has been organized so that researchers can demonstrate novel search engines, as well as exchange ideas and collaborate on these types of problems. In this paper we present our approach to supporting interactive searches of lifelog data, employing a new retrieval system called VieLens, an interactive retrieval system enhanced by natural language processing techniques to extend and improve search results, mainly in the context of a user's activities in their daily life.
Introduction to the Sixth Annual Lifelog Search Challenge, LSC’23
For the sixth time since 2018, the Lifelog Search Challenge (LSC) was organized as a comparative benchmarking exercise for various interactive lifelog search systems. The goal of this international competition is to test system capabilities to access large multimodal lifelogs. LSC'23 attracted twelve participating teams, each of whom had developed a competitive interactive lifelog retrieval system. The benchmark was organized in front of a live audience at the LSC workshop at ACM ICMR'23. As in previous editions, this introductory paper presents the LSC workshop and introduces the participating lifelog search systems.
Voxento 2.0: a prototype voice-controlled interactive search engine for lifelogs
In this paper, we describe an extended version of Voxento, an interactive voice-based retrieval system for lifelogs that was developed to participate in the fourth Lifelog Search Challenge, LSC'21, at ACM ICMR'21. Voxento provides a spoken interface to the lifelog dataset, which enables a novice user to interact with a personal lifelog using a range of vocal commands and interactions. For the version presented here, Voxento has been enhanced with new retrieval features and better user interaction support. In this paper, we introduce these new features, which include dynamic result filtering, predefined interactive responses, and the development of a new retrieval API. Although Voxento was proposed for wearable technologies such as Google Glass or interactive devices like smart TVs, the version of Voxento presented here uses a desktop computer in order to participate in the LSC'21 competition. In the current Voxento iteration, the user has the option to enable voice interaction or to use standard text-based retrieval.