242 research outputs found

    Incorporating user search behaviour into relevance feedback

    In this paper we present five user experiments on incorporating behavioural information into the relevance feedback process. In particular we concentrate on ranking terms for query expansion and on selecting new terms to add to the user's query. Our experiments are an attempt to widen the evidence used for relevance feedback from simply the relevant documents to include information on how users are searching. We show that this information can lead to more successful relevance feedback techniques, and that how relevance feedback is presented to the user is important to its success.
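    As a rough illustration of the behaviourally weighted term ranking described above, the Python sketch below scores candidate expansion terms from the relevant documents while weighting each document's contribution by a behavioural signal such as dwell time. All names, weights and the scoring function are hypothetical stand-ins, not the method evaluated in the paper.

        from collections import Counter

        def rank_expansion_terms(relevant_docs, behaviour_weights, query_terms, k=5):
            # Score candidate expansion terms from the relevant documents,
            # weighting each document's contribution by behavioural evidence.
            # A weight of 1.0 everywhere reduces this to plain relevance feedback.
            scores = Counter()
            for doc_id, terms in relevant_docs.items():
                weight = behaviour_weights.get(doc_id, 1.0)
                for term in terms:
                    if term not in query_terms:
                        scores[term] += weight
            return [term for term, _ in scores.most_common(k)]

        # Hypothetical example: two documents marked relevant; the searcher
        # dwelled twice as long on d1, so its terms count double.
        docs = {
            "d1": ["term", "ranking", "expansion", "behaviour"],
            "d2": ["expansion", "search", "interface"],
        }
        weights = {"d1": 2.0, "d2": 1.0}
        print(rank_expansion_terms(docs, weights, query_terms={"relevance", "feedback"}))
        # ['expansion', 'term', 'ranking', 'behaviour', 'search']

    The design choice mirrored here is simply that behavioural evidence enters as a per-document weight on top of the usual relevant-document term counts; the paper's actual term-ranking and term-selection schemes may differ.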

    The effects of topic familiarity on online search behaviour and use of relevance criteria

    This paper presents an experimental study of the effect of topic familiarity on the assessment behaviour of online searchers. In particular we investigate its effect on the resources and relevance criteria that searchers use. Our results indicate that searching on an unfamiliar topic leads to the use of more generic and fewer specialised resources, and that searchers employ different relevance criteria when searching on less familiar topics.

    Interactive Information Retrieval in the Work Context: the Challenge of Evaluation

    Interactive Information Retrieval in the Work Context: the Challenge of Evaluation (Long Abstract)

    Introduction to the special issue on evaluating interactive information retrieval systems

    Evaluation has always been a strong element of Information Retrieval (IR) research, with much of the focus on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX, which have added to our knowledge of how to create test collections, of the reliability of system-based evaluation criteria, and of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching with IR systems have not yet reached the same level of maturity. Such evaluations are complex to design and assess because of the larger number of variables that must be incorporated within a study, the lack of standard tools (for example, test collections) and the difficulty of selecting appropriate evaluation criteria.

    Injecting Realism into Simulated Work Tasks: A Case Study of the Book Domain

    • …