
    Query Generation as Result Aggregation for Knowledge Representation

    Knowledge representations have greatly enhanced the fundamental human task of information search, profoundly changing how queries and database information are represented for various retrieval tasks. Despite these advances, the field of query recommendation – recommending keyword queries to end users – has given little thought to a holistic approach that constructs recommended queries from relevant snippets of information; instead, pre-existing queries are reused. Can we instead determine the relevant information a user should see and aggregate it into a query? We construct a general framework that leverages various retrieval architectures to aggregate relevant information into a natural language query for recommendation. We test this framework in text retrieval, aggregating text snippets and comparing the output queries to user-generated queries. We show that an algorithm can generate queries that more closely resemble the originals and yield effective retrieval results. Our simple approach also shows promise for leveraging knowledge structures to generate effective query recommendations.
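    One way to picture the snippet-aggregation idea in this abstract is a minimal keyword-based sketch: rank terms that recur across the relevant snippets and join the top terms into a query. The function name, stopword list, and sample snippets below are illustrative assumptions, not the paper's actual method (which targets full natural language queries).

    ```python
    from collections import Counter
    import re

    # Small illustrative stopword list (assumption, not from the paper).
    STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "for", "is", "on"}

    def aggregate_query(snippets, max_terms=5):
        """Aggregate relevant text snippets into a short keyword query
        by ranking terms that recur across snippets."""
        counts = Counter()
        for snippet in snippets:
            # Count each term at most once per snippet, so cross-snippet
            # recurrence (not within-snippet repetition) drives the ranking.
            terms = set(re.findall(r"[a-z]+", snippet.lower()))
            counts.update(t for t in terms if t not in STOPWORDS)
        return " ".join(term for term, _ in counts.most_common(max_terms))

    snippets = [
        "Gene expression profiling in cancer cells",
        "Profiling gene expression with microarrays",
        "Cancer gene expression analysis",
    ]
    query = aggregate_query(snippets, max_terms=2)  # terms shared by all three snippets
    ```

    A real system would replace the term-frequency ranking with the retrieval architectures the abstract mentions, and generate fluent natural language rather than a bag of keywords.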

    Report on the First International Workshop on the Evaluation of Collaborative Information Seeking and Retrieval (ECol'2015)

    The ECol workshop on the evaluation of collaborative information retrieval and seeking was held in conjunction with the 24th Conference on Information and Knowledge Management (CIKM 2015) in Melbourne, Australia. The workshop featured three main elements. First, a keynote on the main dimensions, challenges, and opportunities in collaborative information retrieval and seeking by Chirag Shah. Second, an oral presentation session in which four papers were presented. Third, a discussion based on three seed research questions: (1) In what ways is collaborative search evaluation more challenging than individual interactive information retrieval (IIIR) evaluation? (2) Would it be possible and/or useful to standardise experimental designs and data for collaborative search evaluation? and (3) For evaluating collaborative search, can we leverage ideas from other tasks such as diversified search, subtopic mining, and/or e-discovery? The discussion was intense and raised many points and issues, leading to the proposition that a new evaluation track focused on collaborative information retrieval/seeking tasks would be worthwhile.

    Inferring User Knowledge Level from Eye Movement Patterns

    The acquisition of information and the search interaction process are strongly influenced by a person's knowledge of the domain and the task. In this paper we show that a user's level of domain knowledge can be inferred from their interactive search behaviors without considering the content of queries or documents. A technique is presented to model a user's information acquisition process during search using only measurements of eye movement patterns. In a user study (n=40) of search in the domain of genomics, a representation of each participant's domain knowledge was constructed using self-ratings of knowledge of genomics-related terms (n=409). Cognitive effort features associated with reading eye movement patterns were calculated for each reading instance during the search tasks. The results show correlations between the cognitive effort due to reading and an individual's level of domain knowledge. We construct exploratory regression models suggesting it is possible to predict a user's level of knowledge from real-time measurements of eye movement patterns during a task session.
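    The exploratory regression models described above can be illustrated with a minimal sketch: fit ordinary least squares relating a single reading-effort feature to a knowledge rating, then predict for a new observation. The feature choice (mean fixation duration), the data values, and the 1–5 rating scale are all hypothetical placeholders, not figures from the study.

    ```python
    def fit_linear(xs, ys):
        """Ordinary least squares for a single predictor: y ≈ a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
            (x - mx) ** 2 for x in xs
        )
        a = my - b * mx
        return a, b

    # Hypothetical data: mean fixation duration per reading instance (ms),
    # paired with a self-rated domain knowledge score (1-5 scale).
    fixation_ms = [180, 200, 220, 240, 260]
    knowledge = [4.2, 3.8, 3.1, 2.9, 2.4]

    a, b = fit_linear(fixation_ms, knowledge)   # b < 0: longer fixations,
                                                # lower estimated knowledge
    predicted = a + b * 210                     # estimate for a new reading instance
    ```

    The study's actual models would use multiple cognitive-effort features and proper cross-validation; this sketch only shows the basic predict-knowledge-from-eye-movements shape.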

    Investigating User Search Tactic Patterns and System Support in Using Digital Libraries

    This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different search tasks? 2) How does the system support users in applying different types of search tactics? 3) How do users' search tactic application and system support for different types of search tactics affect search outputs? Sixty student subjects were recruited from different disciplines at a state research university. Multiple methods were employed to collect data, including questionnaires, transaction logs, and think-aloud protocols. Subjects were asked to conduct three different types of search tasks, namely known-item search, specific information search, and exploratory search, using the Library of Congress Digital Libraries. To explore users' search tactic patterns (RQ1), quantitative analysis was conducted, including descriptive statistics, kernel regression, transition analysis, and clustering analysis. Types of system support were explored by analyzing system features for search tactic application. In addition, users' perceived system support, difficulty, and satisfaction with search tactic application were measured using post-search questionnaires (RQ2). Finally, the study examined the causal relationships between search process and search outputs (RQ3) based on multiple regression and structural equation modeling.
    This study uncovers unique behavior in users' search tactic application and corresponding system support in the context of digital libraries. First, search tactic selections, changes, and transitions were explored in different task situations: known-item search, specific information search, and exploratory search. Search tactic application patterns differed by task type. In known-item search tasks, users preferred to apply query creation tactics followed by search result evaluation tactics, with fewer query reformulations or iterative tactic loops observed. In specific information search tasks, iterative search result evaluation strategies were dominant. In exploratory tasks, browsing tactics were frequently selected along with search result evaluation tactics. Second, this study identified different types of system support for search tactic application. System support, difficulty, and satisfaction were measured in terms of search tactic application, focusing on the search process. Users perceived relatively high system support for accessing and browsing tactics, but less support for query reformulation and item evaluation tactics. Third, the effects of search tactic selections and system support on search outputs were examined based on multiple regression. In known-item searches, frequencies of query creation and accessing forwarding tactics positively affected search efficiency. In specific information searches, time spent on applying search result evaluation tactics had a positive impact on success rate. In exploratory searches, browsing tactics turned out to be positively associated with aspectual recall and satisfaction with search results. Based on the findings, the author discusses unique patterns of users' search tactic application as well as system design implications for digital library environments.
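    The transition analysis mentioned among the study's methods can be sketched as estimating first-order transition probabilities between search tactics from an ordered log of tactic applications. The tactic labels and the toy log below are invented for illustration; the study's own tactic taxonomy and data differ.

    ```python
    from collections import Counter, defaultdict

    def transition_probs(sequence):
        """Estimate first-order transition probabilities between search
        tactics from an ordered log of tactic applications."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(sequence, sequence[1:]):
            counts[prev][nxt] += 1
        # Normalize each row of counts into probabilities.
        return {
            tactic: {nxt: c / sum(following.values()) for nxt, c in following.items()}
            for tactic, following in counts.items()
        }

    # Toy tactic log for one hypothetical session.
    log = ["query", "evaluate", "query", "evaluate", "browse", "evaluate"]
    probs = transition_probs(log)
    # Every "query" in this log is followed by "evaluate";
    # "evaluate" is followed by "query" or "browse" equally often.
    ```

    Comparing such transition matrices across task types (known-item vs. specific vs. exploratory) is one concrete way the tactic-pattern differences reported above could be quantified.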

    Current Research in Supporting Complex Search Tasks

    ABSTRACT There is broad consensus in the field of IR that search is complex in many use cases and applications, both on the Web and in domain specific collections, and both professionally and in our daily life. Yet our understanding of complex search tasks, in comparison to simple look up tasks, is fragmented at best. The workshop addresses many open research questions: What are the obvious use cases and applications of complex search? What are essential features of work tasks and search tasks to take into account? And how do these evolve over time? With a multitude of information, varying from introductory to specialized, and from authoritative to speculative or opinionated, when to show what sources of information? How does the information seeking process evolve and what are relevant differences between different stages? With complex task and search process management, blending searching, browsing, and recommendations, and supporting exploratory search to sensemaking and analytics, UI and UX design pose an overconstrained challenge. How do we evaluate and compare approaches? Which measures should be taken into account? Supporting complex search tasks requires new collaborations across the fields of CHI and IR, and the proposed workshop will bring together a diverse group of researchers to work together on one of the greatest challenges of our field.