Overview of the CLEF 2017 personalised information retrieval pilot lab (PIR-CLEF 2017)
The Personalised Information Retrieval (PIR-CLEF) Lab workshop at CLEF 2017 is designed to provide a forum for the exploration of methodologies for the repeatable evaluation of personalised information retrieval (PIR). The PIR-CLEF 2017 Lab provides a preliminary pilot edition of a Lab task dedicated to personalised search, while the workshop at the conference is intended to provide a forum for the discussion of strategies for the evaluation of PIR and for the extension of the pilot Lab task. The PIR-CLEF 2017 Pilot Task is the first PIR evaluation benchmark based on the Cranfield paradigm, with the potential benefit of producing evaluation results that are easily reproducible. The task is based on search sessions over a subset of the ClueWeb12 collection, undertaken by 10 users following a clearly defined and novel methodology. The collection provides data gathered from the activities undertaken during the search sessions by each participant, including details of relevant documents as marked by the searchers. The PIR-CLEF 2017 workshop is intended to review the design and construction of this Pilot collection and to consider the topic of reproducible evaluation of PIR more generally, with the aim of launching a more formal PIR Lab at CLEF 2018.
Overview of the CLEF 2018 personalised information retrieval lab (PIR-CLEF 2018)
At CLEF 2018, the Personalised Information Retrieval Lab (PIR-CLEF 2018) was conceived as an initiative aimed at both providing and critically analysing a new approach to the evaluation of personalisation in Information Retrieval (PIR). PIR-CLEF 2018 is the first edition of this Lab after the successful Pilot Lab organised at CLEF 2017. PIR-CLEF 2018 has provided registered participants with the data sets originally developed for the PIR-CLEF 2017 Pilot Task; the data relate to real search sessions over a subset of the ClueWeb12 collection, gathered during search sessions undertaken by 10 volunteer searchers using a novel methodology. Activities during these search sessions included relevance assessment of retrieved documents by the searchers. 16 groups registered to participate in PIR-CLEF 2018 and were provided with the data set, allowing them to work on PIR-related tasks and to provide feedback on our proposed PIR evaluation methodology, with the aim of creating an effective evaluation task.
Information retrieval evaluation in knowledge acquisition tasks
The Cranfield Paradigm is a widely adopted, de facto standard approach to the evaluation of IR systems. However, this approach does not inherently support situations in which the user is acquiring knowledge (is learning) during an information-seeking session consisting of the submission of a sequence of queries to an information retrieval system. More specifically, it does not support situations in which the retrieval of a particular document at the beginning of a session can be considered not relevant (due to the user's lack of knowledge), while the same document can be considered relevant at a later point in the session (once the user has acquired all required prerequisite knowledge). In this position paper, we reflect on the limitations of the Cranfield Paradigm in the context of knowledge acquisition tasks and propose several alternatives. These alternatives are based on the notion of evaluating a session consisting of a sequence of individual queries created to address a specific information need as part of a knowledge acquisition task.
Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it 2020
On behalf of the Program Committee, a very warm welcome to the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020). This edition of the conference is held in Bologna and organised by the University of Bologna. The CLiC-it conference series is an initiative of the Italian Association for Computational Linguistics (AILC) which, after six years of activity, has clearly established itself as the premier national forum for research and development in the fields of Computational Linguistics and Natural Language Processing, where leading researchers and practitioners from academia and industry meet to share their research results, experiences, and challenges.
Proceedings of the Eighth Italian Conference on Computational Linguistics CLiC-it 2021
The eighth edition of the Italian Conference on Computational Linguistics (CLiC-it 2021) was held at Università degli Studi di Milano-Bicocca from 26th to 28th January 2022. After the 2020 edition, which was held in fully virtual mode due to the health emergency related to Covid-19, CLiC-it 2021 represented the first opportunity for the Italian Computational Linguistics research community to meet in person after more than a year of full or partial lockdown.