    Understanding Mobile Search Task Relevance and User Behaviour in Context

    Improvements in mobile technologies have led to a dramatic change in how and when people access and use information, and are having a profound impact on how users address their daily information needs. Smartphones are rapidly becoming our main method of accessing information and are frequently used to perform 'on-the-go' search tasks. While research into information retrieval continues to evolve, evaluating search behaviour in context is relatively new. Previous research has studied the effects of context through either self-reported diary studies or quantitative log analysis; however, neither approach is able to accurately capture the context of use at the time of searching. In this study, we aim to gain a better understanding of task relevance and search behaviour via a task-based user study (n=31) employing a bespoke Android app. The app allowed us to accurately capture the user's context when completing tasks at different times of the day over the period of a week. Through analysis of the collected data, we gain a better understanding of how using smartphones on the go impacts search behaviour, search performance and task relevance, and whether or not the actual context is an important factor. Comment: To appear in CHIIR 2019 in Glasgow, UK.

    BroDyn’18: Workshop on analysis of broad dynamic topics over social media

    This book constitutes the refereed proceedings of the 40th European Conference on IR Research, ECIR 2018, held in Grenoble, France, in March 2018. The 39 full papers and 39 short papers, presented together with 6 demos, 5 workshops and 3 tutorials, were carefully reviewed and selected from 303 submissions. Accepted papers cover the state of the art in information retrieval, including topics such as: topic modeling, deep learning, evaluation, user behavior, document representation, recommendation systems, retrieval methods, learning and classification, and micro-blogs.

    Active Sampling for Large-scale Information Retrieval Evaluation

    Evaluation is crucial in Information Retrieval. The development of models, tools and methods has significantly benefited from the availability of reusable test collections formed through a standardized and thoroughly tested methodology, known as the Cranfield paradigm. Constructing these collections requires obtaining relevance judgments for a pool of documents retrieved by systems participating in an evaluation task, and thus involves immense human labor. To alleviate this effort, different methods for constructing collections have been proposed in the literature, falling under two broad categories: (a) sampling, and (b) active selection of documents. The former devises a smart sampling strategy by choosing only a subset of documents to be assessed and inferring evaluation measures on the basis of the obtained sample; the sampling distribution is fixed at the beginning of the process. The latter recognizes that the systems contributing documents to be judged vary in quality, and actively selects documents from good systems; the quality of systems is re-estimated every time a new document is judged. In this paper we seek to solve the problem of large-scale retrieval evaluation by combining the two approaches. We devise an active sampling method that avoids the bias of active selection methods towards good systems and, at the same time, reduces the variance of current sampling approaches by placing a distribution over systems that varies as judgments become available. We validate the proposed method using TREC data and demonstrate the advantages of this new method compared to past approaches.
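    The Python sketch below illustrates the general idea described in this abstract, not the paper's actual estimator: 'run_files' maps each system to its ranked document list, 'judge' stands in for the human assessor, system quality is approximated by judged precision in the top 'depth' results, and each judged document's inclusion probability is recorded so that downstream measure estimates can be importance-weighted. The parameter names, the quality proxy, and the update rule are illustrative assumptions.

```python
import random

def active_sample(run_files, judge, budget, depth=50):
    # Maintain a distribution over systems that is updated as judgments
    # arrive; sample documents through it and record each document's
    # inclusion probability so measure estimates can be importance-weighted
    # (and thereby de-biased) afterwards.
    quality = {s: 1.0 for s in run_files}   # prior: all systems equally good
    judged = {}                             # doc_id -> (relevance, inclusion prob)

    for _ in range(budget):
        total = sum(quality.values())
        probs = {s: q / total for s, q in quality.items()}

        # Draw a system from the current distribution over systems.
        system = random.choices(list(probs), weights=list(probs.values()))[0]

        # Judge its highest-ranked document that is still unjudged.
        doc = next((d for d in run_files[system][:depth] if d not in judged), None)
        if doc is None:
            continue

        # Probability that this draw could have surfaced `doc` via any system.
        incl = sum(p for s, p in probs.items() if doc in run_files[s][:depth])
        judged[doc] = (judge(doc), max(incl, 1e-9))

        # Update each system's quality estimate from the judged documents in
        # its top-`depth` results (a simple judged-precision proxy).
        for s, docs in run_files.items():
            rels = [judged[d][0] for d in docs[:depth] if d in judged]
            quality[s] = 1.0 + sum(rels)

    return judged
```

    The recorded inclusion probabilities can then feed an inverse-probability (Horvitz-Thompson style) estimate of an evaluation measure, which is what keeps the estimate approximately unbiased even though documents from well-performing systems are sampled more often.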