Seven years of INEX interactive retrieval experiments – lessons and challenges
This paper summarizes a major effort in interactive search investigation,
the INEX i-track, a collective effort run over a seven-year period. We present
the experimental conditions, report some of the findings of the participating
groups, and examine the challenges posed by this kind of collective experimental
effort.
Overview of the CHIIR 2019 Workshop on Barriers to Interactive IR Resources Re-use (BIIRRR 2019)
This paper presents an overview of the BIIRRR 2019 workshop at CHIIR 2019, which had the explicit aim of understanding and promoting the re-use of resources for interactive IR experimentation.
Effective metadata for social book search from a user perspective
In this extended abstract we describe our participation in the INEX 2014 Interactive Social Book Search Track. In previous work, we looked at the impact of professional and user-generated metadata in the context of book search, and compared these categories of metadata in terms of retrieval effectiveness. Here, we take a different approach and study the use of professional and user-generated book metadata in an interactive setting, and the effectiveness of this metadata from a user perspective. We compare the perceived usefulness of general descriptions, publication metadata, user reviews and tags in focused and open-ended search tasks, based on data gathered in the INEX Interactive Social Book Search Track. Furthermore, we take a tentative look at the actual use of different types of metadata over time in the aggregated search tasks. Our preliminary findings in the surveyed tasks indicate that user reviews are generally perceived to be more useful than other types of metadata, and they are frequently mentioned in users' rationales for selecting books. Furthermore, we observe varying usage frequencies of traditional and user-generated metadata over time in the aggregated search tasks, providing initial indications that these types of metadata might be useful at different stages of a search task.
A Manifesto on Resource Re-Use in Interactive Information Retrieval
This perspective paper on resource re-use intends to draw the attention of the interactive information retrieval (IIR) community to the challenges of research documentation and archiving for future use. Resources are understood as encompassing research designs, research data and research infrastructures. The paper proposes eight principles for improving the re-use of resources in the IIR community and presents concrete steps on how to achieve them. A five-level system for data archiving and documentation envisions increasingly open and stable documentation and access infrastructures.
INEX Tweet Contextualization Task: Evaluation, Results and Lesson Learned
Microblogging platforms such as Twitter are increasingly used for online client and market analysis. This motivated the proposal of a new track at the CLEF INEX lab on Tweet Contextualization. The objective of this task was to help a user understand a tweet by providing a short explanatory summary (500 words). This summary should be built automatically using resources such as Wikipedia, by extracting relevant passages and aggregating them into a coherent summary. Over the four years the task ran, results show that the best systems combine NLP techniques with more traditional methods. More precisely, the best-performing systems combine passage retrieval, sentence segmentation and scoring, named entity recognition, part-of-speech (POS) analysis, anaphora detection, a content-diversity measure, and sentence reordering. This paper provides a full summary report on the four-year task. While the yearly overviews focused on system results, here we provide a detailed report on the approaches proposed by the participants, which can be considered the state of the art for this task. As an important outcome of the four-year competition, we also describe the open-access resources that have been built and collected. The evaluation measures for automatic summarization designed in DUC or MUC were not appropriate for evaluating tweet contextualization; we explain why, and describe in detail the LogSim measure used to evaluate the informativeness of the produced contexts or summaries. Finally, we also mention the lessons we learned that are worth considering when designing such a task.