NTCIR Lifelog: The First Test Collection for Lifelog Research
Test collections have a long history of supporting repeatable
and comparable evaluation in Information Retrieval (IR).
However, thus far, no shared test collection exists for IR
systems that are designed to index and retrieve multimodal
lifelog data. In this paper we introduce the first test collection
for personal lifelog data. We motivate the requirements for such
a test collection, describe the process of its creation, and give
an overview of the resulting collection, before finally suggesting
possible applications of the test collection, which has been
employed for the NTCIR12-Lifelog task.
Overview of NTCIR-13 Lifelog-2 Task
In this paper we review the NTCIR13-Lifelog core task, which ran at NTCIR-13. We outline the test collection employed, along with the tasks, the submissions, and the findings from this pilot task. We finish by suggesting future plans for the task.
Baseline analysis of a conventional and virtual reality lifelog retrieval system
Continuous media capture via wearable devices is currently one of the most popular methods of establishing a comprehensive record of the entirety of an individual's life experience, referred to in the research community as a lifelog. These vast multimodal corpora include visual and other sensor data and are enriched by content analysis to generate as extensive a record of an individual's life experience as possible. However, interfacing with such datasets remains an active area of research, and despite the advent of new technology and a plethora of competing media for processing digital information, there has been little focus on newly emerging platforms such as virtual reality. In this work, we suggest that the increased immersion and spatial dimensions provided by virtual reality could offer significant benefits to users compared to more conventional access methodologies. Hence, we motivate virtual reality as a viable method of exploring multimedia archives (specifically lifelogs) by performing a baseline comparative analysis using a novel application prototype built for the HTC Vive and a conventional prototype built for a standard personal computer.
Overview of the NTCIR-14 Lifelog-3 task
Lifelog-3 was the third instance of the lifelog task at NTCIR. At NTCIR-14, the Lifelog-3 task explored three different lifelog data access related challenges: the search challenge, the annotation challenge, and the insights challenge. In this paper we review the activities of the participating teams who took part in the challenges, and we suggest next steps for the community.
Experiments in lifelog organisation and retrieval at NTCIR
Lifelogging can be described as the process by which individuals use various software and hardware devices to gather large archives of multimodal personal data from multiple sources and store them in a personal data archive, called a lifelog. The Lifelog task at NTCIR was a comparative benchmarking exercise with the aim of encouraging research into the organisation and retrieval of data from multimodal lifelogs. The Lifelog task ran for over 4 years, from NTCIR-12 until NTCIR-14 (2015.02–2019.06); it supported participants in submitting to five subtasks, each tackling a different challenge related to lifelog retrieval. In this chapter, a motivation is given for the Lifelog task and a review of progress since NTCIR-12 is presented. Finally, the lessons learned and the remaining challenges within the domain of lifelog retrieval are presented.
A user-study examining visualization of lifelogs
With continuous advances in pervasive sensing and lifelogging technologies for the quantified self, users can now record their daily life activities automatically and seamlessly. In the existing lifelogging research, visualization techniques for presenting lifelogs, and the evaluation of the effectiveness of such techniques from a lifelogger's perspective, have not been adequately studied. In this paper, we investigate the effectiveness of four distinct visualization techniques for exploring lifelogs, which were collected by 22 lifeloggers who volunteered to use a wearable camera and a GPS device simultaneously for a period of 3 days. Based on a user study with these 22 lifeloggers, which required them to browse through their personal lifelogs, we seek to identify the most effective visualization technique. Our results suggest various ways to augment and improve the visualization of personal lifelogs to enrich the quality of the user experience and make lifelogging tools more engaging. We also propose a new visualization feature, a drill-down approach with details-on-demand, to make the lifelog visualization process more meaningful and informative to lifeloggers.
A privacy by design approach to lifelogging
Technologies that enable us to capture and publish data with ease are likely to pose new concerns about the privacy of the individual. In this article we examine the privacy implications of lifelogging, a new concept being explored by early adopters, which utilises wearable devices to generate a media-rich archive of their life experience. The concept of privacy and the privacy implications of lifelogging are presented and discussed in terms of the four key actors in the lifelogging universe. An initial privacy-aware lifelogging framework, based on the key principles of privacy by design, is presented and motivated.
LifeLogging: personal big data
We have recently observed a convergence of technologies to foster the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advancements in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging, covering its research history, current technologies, and applications. Thus far, most lifelogging research has focused predominantly on visual lifelogging in order to capture details of life activities, hence we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist's perspective on lifelogging and the quantified self.
A multimodal approach for event detection from lifelogs
This paper analyzes how personal lifelog data, which contains biometric, visual, and activity data, can be leveraged to detect points in time when the individual is partaking in an eating activity. To answer this question, three artificial neural network models were introduced: firstly, an image object detection model trained to detect eating-related objects using the YOLO framework; secondly, a feed-forward neural network (FANN) and a Long Short-Term Memory (LSTM) neural network model, which attempt to detect 'eating moments' in the lifelog data. The results show promise, with an F1-score of 0.489 and an AUC score of 0.796 for the FANN model, and an F1-score of 0.74 and an AUC score of 0.835 for the LSTM model. However, there is clear room for improvement in all models. The models and methods introduced can help individuals monitor their nutrition habits so that they are empowered to make healthy lifestyle decisions. Additionally, several methods for streamlining event detection in lifelog data are introduced.
Master's thesis in Information Science (Masteroppgave i informasjonsvitenskap).
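The core idea of the LSTM approach above is to run a recurrent model over a window of per-timestep lifelog features and output an eating-moment probability. The following is a minimal NumPy sketch of that pattern, not the thesis code: the feature dimension, hidden size, window length, and random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(seq, W, U, b):
    """Run a single-layer LSTM over a sequence of feature vectors
    and return the final hidden state."""
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in seq:
        z = W @ x + U @ h + b              # stacked gate pre-activations (4H,)
        i = sigmoid(z[0:H])                # input gate
        f = sigmoid(z[H:2 * H])            # forget gate
        o = sigmoid(z[2 * H:3 * H])        # output gate
        g = np.tanh(z[3 * H:4 * H])        # candidate cell update
        c = f * c + i * g                  # new cell state
        h = o * np.tanh(c)                 # new hidden state
    return h

# Hypothetical setup: 8 features per minute (e.g. heart rate, step count,
# visual concept scores), hidden size 16, a 30-minute window.
D, H, T = 8, 16, 30
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(0, 0.1, H)              # linear read-out to one logit

window = rng.normal(size=(T, D))           # one window of lifelog features
p_eating = sigmoid(w_out @ lstm_forward(window, W, U, b))
print(f"P(eating moment) = {p_eating:.3f}")
```

In practice the weights would be trained on labelled eating/non-eating windows (e.g. with a binary cross-entropy loss), and the visual features could come from the YOLO detector's eating-related object scores; this sketch only shows the forward-pass shape of the detector.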