    Retrieval experiments using pseudo-desktop collections

    Workshop on evaluating personal search

    The first ECIR workshop on Evaluating Personal Search was held on 18th April 2011 in Dublin, Ireland. The workshop consisted of 6 oral paper presentations and several discussion sessions. This report presents an overview of the scope and contents of the workshop and outlines the major outcomes.

    Workshop on Desktop Search

    The first SIGIR workshop on Desktop Search was held on 23rd July 2010 in Geneva, Switzerland. The workshop consisted of 2 industrial keynotes, 10 paper presentations in a combination of oral and poster formats, and several discussion sessions. This report presents an overview of the scope and contents of the workshop and outlines the major outcomes.

    A strategy for evaluating search of “Real” personal information archives

    Personal information archives (PIAs) can include materials from many sources, e.g. desktop and laptop computers, mobile phones, etc. Evaluation of personal search over these collections is problematic for reasons relating to the personal and private nature of the data and associated information needs, and to measuring system response effectiveness. Conventional information retrieval (IR) evaluation, involving the use of Cranfield-type test collections to establish retrieval effectiveness and laboratory testing of interactive search behaviour, has to be re-thought in this situation. One key issue is that personal data and information needs are very different from those in the more public third-party datasets used in most existing evaluations. Related to this, understanding how users interact with a search system over their personal data is important for developing search in this area on a well-grounded basis. In this proposal we suggest an alternative IR evaluation strategy which preserves the privacy of user data and enables evaluation of both the accuracy of search and exploration of interactive search behaviour. The general strategy is that, instead of distributing a common search dataset to participants, we distribute standard expandable personal data collection, indexing and search tools to non-intrusively collect data from participants conducting search tasks over their own data collections on their own machines, and then perform local evaluation of individual results before central aggregation.
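    The local-evaluation-then-central-aggregation loop described in the abstract can be sketched as follows. This is a minimal illustration under assumptions of our own: the choice of precision@k as the metric and all function names are invented here, not taken from the proposal.

    ```python
    # Sketch of a privacy-preserving evaluation loop: each participant scores
    # their own queries locally; only aggregate scores leave their machine.
    # (Metric choice and names are illustrative assumptions, not the proposal's.)
    from statistics import mean

    def precision_at_k(retrieved, relevant, k=10):
        """Fraction of the top-k retrieved items the owner judged relevant."""
        top_k = retrieved[:k]
        return sum(1 for doc in top_k if doc in relevant) / k

    def local_evaluation(runs):
        """Run on the participant's own machine, over their own data."""
        return [precision_at_k(r["retrieved"], r["relevant"]) for r in runs]

    def central_aggregation(per_participant_scores):
        """Only anonymous per-participant means reach the central site."""
        return mean(mean(scores) for scores in per_participant_scores)

    # Two hypothetical participants, each evaluating locally:
    p1 = local_evaluation([{"retrieved": ["a", "b", "c"], "relevant": {"a", "c"}}])
    p2 = local_evaluation([{"retrieved": ["x", "y"], "relevant": {"y"}}])
    print(central_aggregation([p1, p2]))
    ```

    The point of the design is that raw documents, queries, and relevance judgements never leave the owner's machine; only summary statistics are pooled.
    
    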

    Managed Forgetting to Support Information Management and Knowledge Work

    Trends such as digital transformation further intensify the already overwhelming mass of information knowledge workers face in their daily lives. To counter this, we have been investigating knowledge work and information management support measures inspired by human forgetting. In this paper, we give an overview of solutions we have found during the last five years as well as challenges that still need to be tackled. Additionally, we share experiences gained with the prototype of a first forgetful information system used 24/7 in our daily work for the last three years. We also address the untapped potential of more explicated user context as well as features inspired by memory inhibition, which is our current focus of research. Comment: 10 pages, 2 figures, preprint, final version to appear in KI - Künstliche Intelligenz, Special Issue: Intentional Forgetting.

    Towards 'Cranfield' test collections for personal data search evaluation

    Desktop archives are distinct from the sources for which shared “Cranfield” information retrieval test collections have been created to date. Differences associated with desktop collections include: they are personal to the archive owner, the owner has personal memories about the items contained within them, and only the collection owner can rate the relevance of items retrieved in response to their query. In this paper we discuss these unique attributes of desktop collections and search, and the resulting challenges associated with creating test collections for desktop search. We also outline a proposed strategy for creating test collections for this space.

    What makes re-finding information difficult? A study of email re-finding

    Re-finding information that has been seen or accessed before is a task which can be relatively straightforward, but it can often be extremely challenging, time-consuming and frustrating. Little is known, however, about what makes one re-finding task harder or easier than another. We performed a user study to learn about the contextual factors that influence users' perception of task difficulty in the context of re-finding email messages. 21 participants were issued re-finding tasks to perform on their own personal collections. The participants' responses to questions about the tasks, combined with demographic data and collection statistics for the experimental population, provide a rich basis for investigating the variables that can influence the perception of difficulty. A logistic regression model was developed to examine the relationships between variables and determine whether any factors were associated with perceived task difficulty. The model reveals strong relationships between difficulty and the time elapsed since a message was read, remembering when the sought-after email was sent, remembering other recipients of the email, the experience of the user, and the user's filing strategy. We discuss what these findings mean for the design of re-finding interfaces and future re-finding research.
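    The modelling step the abstract mentions is standard binary logistic regression. A minimal self-contained sketch follows; the feature set and data below are invented for illustration and are not the study's actual variables or results.

    ```python
    # Minimal logistic regression fitted by gradient descent, illustrating the
    # kind of model used to relate task factors to perceived difficulty.
    # (Features and data are hypothetical examples, not the study's data.)
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def fit(X, y, lr=0.1, epochs=2000):
        """Fit weights (w[0] is the intercept) by stochastic gradient descent."""
        w = [0.0] * (len(X[0]) + 1)
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
                err = sigmoid(z) - yi          # gradient of the log-loss
                w[0] -= lr * err
                for j, xj in enumerate(xi):
                    w[j + 1] -= lr * err * xj
        return w

    def predict(w, xi):
        """Probability that the re-finding task is perceived as difficult."""
        return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

    # Hypothetical features: [months since message read, sender remembered (0/1)]
    X = [[1, 1], [2, 1], [6, 0], [12, 0], [3, 1], [9, 0]]
    y = [0, 0, 1, 1, 0, 1]                     # 1 = perceived as difficult
    w = fit(X, y)
    ```

    The fitted weights then play the role the abstract describes: their signs and magnitudes indicate which contextual factors are associated with higher perceived difficulty.
    
    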

    Meeting of the MINDS: an information retrieval research agenda

    Since its inception in the late 1950s, the field of Information Retrieval (IR) has developed tools that help people find, organize, and analyze information. The key early influences on the field are well-known. Among them are H. P. Luhn's pioneering work, the development of the vector space retrieval model by Salton and his students, Cleverdon's development of the Cranfield experimental methodology, Spärck Jones' development of idf, and a series of probabilistic retrieval models by Robertson and Croft. Until the development of the World Wide Web (Web), IR was of greatest interest to professional information analysts such as librarians, intelligence analysts, the legal community, and the pharmaceutical industry.