8 research outputs found
Exquisitor at the Lifelog Search Challenge 2020
We present an enhanced version of Exquisitor, our interactive and scalable media exploration system. At its core, Exquisitor is an interactive learning system using relevance feedback on media items to build a model of the user's information need. Relying on efficient media representation and indexing, it facilitates real-time user interaction. The new features for the Lifelog Search Challenge 2020 include support for timeline browsing, search functionality for finding positive examples, and significant interface improvements. Participation in the Lifelog Search Challenge allows us to compare our paradigm, relying predominantly on interactive learning, with more traditional search-based multimedia retrieval systems.
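The relevance-feedback loop described above can be illustrated with a classic Rocchio-style update over feature vectors; this is a minimal, generic sketch, not Exquisitor's actual implementation (which relies on compressed feature indexes and interactive classifiers):

```python
import numpy as np

def rocchio_update(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.15):
    """Shift the query model toward user-marked positive examples
    and away from negative ones (classic Rocchio weights)."""
    q = alpha * query
    if len(positives):
        q = q + beta * np.mean(positives, axis=0)
    if len(negatives):
        q = q - gamma * np.mean(negatives, axis=0)
    return q

def rank(items, query):
    """Rank item feature vectors by cosine similarity to the query model."""
    norms = np.linalg.norm(items, axis=1) * np.linalg.norm(query)
    scores = items @ query / np.where(norms == 0, 1.0, norms)
    return np.argsort(-scores)
```

Each feedback round moves the query model toward items the user judged relevant, after which the collection is re-ranked against the updated model.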
Exploring Intuitive Lifelog Retrieval and Interaction Modes in Virtual Reality with vitrivr-VR
The multimodal nature of lifelog data collections poses unique challenges for multimedia management and retrieval systems. The Lifelog Search Challenge (LSC) offers an annual evaluation platform for such interactive retrieval systems, where they compete against one another in finding items of interest within a set time frame. In this paper, we present the multimedia retrieval system vitrivr-VR, the latest addition to the vitrivr stack, which participated in the LSC in recent years. vitrivr-VR leverages the 3D space in virtual reality (VR) to offer novel retrieval and user interaction models, which we describe with a special focus on design decisions taken for the participation in the LSC.
FIRST - Flexible interactive retrieval SysTem for visual lifelog exploration at LSC 2020
Lifelogs can provide useful insights into our daily activities. It is essential to provide a flexible way for users to retrieve certain events or moments of interest, corresponding to a wide variety of query types. This motivates us to develop FIRST, a Flexible Interactive Retrieval SysTem, which helps users combine or integrate various query components in a flexible manner to handle different query scenarios, such as clustering visual data based on color histograms, visual similarity, GPS location, or scene attributes. We also employ personalized concept detection and image captioning to enhance image understanding of visual lifelog data, and develop an autoencoder-like approach for mapping between query text and image features. Furthermore, we refine the user interface of the retrieval system to better assist users in query expansion and in verifying sequential events at a flexible temporal resolution that controls the navigation speed through sequences of images.
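Among the query components mentioned, clustering visual data by color histogram is simple to illustrate. The following is a generic sketch, not FIRST's actual pipeline; the bin count, per-channel histograms, and plain k-means with farthest-point initialization are all assumptions:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram of an HxWx3 uint8 image, L1-normalized."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def kmeans(features, k, iters=20):
    """Plain k-means over feature rows with deterministic
    farthest-point initialization; returns cluster labels."""
    centers = [features[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels
```

Images with similar dominant colors end up in the same cluster, which supports the kind of color-based visual grouping the abstract describes.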
VieLens: an interactive search engine for LSC2019
With the appearance of many wearable devices such as smartwatches, recording glasses (such as Google Glass), and smartphones, digital personal profiles have become more readily available nowadays. However, searching and navigating these multi-source, multi-modal, and often unstructured data to extract useful information is still a relatively challenging task. The LSC2019 competition has therefore been organized so that researchers can demonstrate novel search engines, as well as exchange ideas and collaborate on these types of problems. We present in this paper our approach for supporting interactive searches of lifelog data by employing a new retrieval system called VieLens, an interactive retrieval system enhanced by natural language processing techniques to extend and improve search results, mainly in the context of a user's activities in their daily life.
LifeSeeker 2.0: interactive lifelog search engine at LSC 2020
In this paper we present our interactive lifelog retrieval engine for the LSC'20 comparative benchmarking challenge. The LifeSeeker 2.0 interactive lifelog retrieval engine is developed jointly by Dublin City University and Ho Chi Minh University of Science, and represents an enhanced version of the two corresponding interactive lifelog retrieval engines from LSC'19. The implementation of LifeSeeker 2.0 focuses on searching by text query using a Bag-of-Words model with visual concept augmentation, with additional improvements in query processing time, an enhanced result display with browsing support, and interaction with visual graphs for both query and filter purposes.
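A Bag-of-Words search over detected visual concepts can be illustrated with a toy inverted index. The index structure, IDF scoring, and concept labels below are illustrative assumptions, not LifeSeeker 2.0's implementation:

```python
import math
from collections import defaultdict

class BowIndex:
    """Toy inverted index mapping concept words to the images
    annotated with them, scored by inverse document frequency."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, words):
        self.docs[doc_id] = set(words)
        for w in words:
            self.postings[w].add(doc_id)

    def search(self, query_words):
        n = len(self.docs)
        scores = defaultdict(float)
        for w in query_words:
            docs = self.postings.get(w, set())
            if not docs:
                continue
            idf = math.log(n / len(docs))  # rarer concepts weigh more
            for d in docs:
                scores[d] += idf
        return sorted(scores, key=scores.get, reverse=True)
```

Augmenting each image's word set with automatically detected visual concepts, as the abstract describes, lets free-text queries match images that have no manual annotations.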