410 research outputs found

    Seven years of INEX interactive retrieval experiments – lessons and challenges

    This paper summarizes a major collective effort in interactive retrieval research, the INEX i-track, run over a seven-year period. We present the experimental conditions, report some of the findings of the participating groups, and examine the challenges posed by this kind of collective experimental effort.

    Effective metadata for social book search from a user perspective

    Abstract. In this extended abstract we describe our participation in the INEX 2014 Interactive Social Book Search Track. In previous work, we looked at the impact of professional and user-generated metadata in the context of book search, and compared these categories of metadata in terms of retrieval effectiveness. Here, we take a different approach and study the use of professional and user-generated book metadata in an interactive setting, and the effectiveness of this metadata from a user perspective. We compare the perceived usefulness of general descriptions, publication metadata, user reviews, and tags in focused and open-ended search tasks, based on data gathered in the INEX Interactive Social Book Search Track. Furthermore, we take a tentative look at the actual use of different types of metadata over time in the aggregated search tasks. Our preliminary findings in the surveyed tasks indicate that user reviews are generally perceived to be more useful than other types of metadata, and they are frequently mentioned in users' rationales for selecting books. Furthermore, we observe varying usage frequency of traditional and user-generated metadata across time in the aggregated search tasks, providing initial indications that these types of metadata might be useful at different stages of a search task.

    Information Access Evaluation: Multilinguality, Multimodality, and Visual Analytics

    INEX Tweet Contextualization Task: Evaluation, Results and Lesson Learned

    Microblogging platforms such as Twitter are increasingly used for online client and market analysis. This motivated the proposal of a new Tweet Contextualization track at the CLEF INEX lab. The objective of this task was to help a user understand a tweet by providing a short explanatory summary (500 words). The summary had to be built automatically from resources such as Wikipedia, by extracting relevant passages and aggregating them into a coherent whole. Over the four years the task ran, results show that the best systems combine NLP techniques with more traditional retrieval methods. More precisely, the best-performing systems combine passage retrieval, sentence segmentation and scoring, named entity recognition, part-of-speech (POS) analysis, anaphora detection, a content diversity measure, and sentence reordering (see the sketch below). This paper provides a full summary report on the four-year task. While yearly overviews focused on system results, here we provide a detailed report on the approaches proposed by the participants, which can be considered the state of the art for this task. As an important outcome of the four-year competition, we also describe the open-access resources that were built and collected. The evaluation measures for automatic summarization designed for DUC or MUC were not appropriate for evaluating tweet contextualization; we explain why, and describe in detail the LogSim measure used to evaluate the informativeness of the produced contexts or summaries. Finally, we mention the lessons we learned, which are worth considering when designing such a task.
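    The abstract names the building blocks most successful systems shared. The following is a minimal, illustrative Python sketch of that kind of extractive pipeline: naive sentence segmentation, term-overlap scoring against the tweet, an MMR-style diversity penalty, and the 500-word budget. All names here (tokenize, contextualize, the 0.5 penalty weight) are hypothetical, and the participant systems additionally used named entity recognition, POS analysis, anaphora resolution, and sentence reordering, which are omitted.

        # Illustrative sketch of an extractive tweet-contextualization pipeline
        # (not a participant system): segment passages into sentences, score
        # each sentence against the tweet, penalize redundancy, stop at 500 words.
        import math
        import re
        from collections import Counter

        def tokenize(text):
            return re.findall(r"[a-z0-9]+", text.lower())

        def cosine(a, b):
            # Cosine similarity between two term-frequency Counters.
            shared = set(a) & set(b)
            num = sum(a[t] * b[t] for t in shared)
            den = (math.sqrt(sum(v * v for v in a.values()))
                   * math.sqrt(sum(v * v for v in b.values())))
            return num / den if den else 0.0

        def contextualize(tweet, passages, budget=500):
            """Greedy MMR-style selection of sentences relevant to the tweet."""
            query = Counter(tokenize(tweet))
            # Naive sentence segmentation; real systems used proper segmenters.
            sentences = [s.strip() for p in passages
                         for s in re.split(r"(?<=[.!?])\s+", p) if s.strip()]
            selected, summary_terms, used = [], Counter(), 0
            while sentences and used < budget:
                # Relevance to the tweet minus redundancy with the summary so far.
                best = max(sentences,
                           key=lambda s: cosine(query, Counter(tokenize(s)))
                                         - 0.5 * cosine(summary_terms, Counter(tokenize(s))))
                sentences.remove(best)
                words = tokenize(best)
                if used + len(words) > budget:
                    continue  # sentence would exceed the 500-word budget; skip it
                selected.append(best)
                summary_terms.update(words)
                used += len(words)
            return " ".join(selected)

    For example, contextualize(tweet_text, wikipedia_passages) would return an extractive summary of at most 500 words; the greedy relevance-minus-redundancy objective is a common stand-in for the diversity measures the abstract mentions.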