Query Generation as Result Aggregation for Knowledge Representation
Knowledge representations have greatly enhanced the fundamental human problem of information search, profoundly changing representations of queries and database information for various retrieval tasks. Despite new technologies, little thought has been given in the field of query recommendation (recommending keyword queries to end users) to a holistic approach that recommends constructed queries from relevant snippets of information; pre-existing queries are used instead. Can we instead determine relevant information a user should see and aggregate it into a query? We construct a general framework leveraging various retrieval architectures to aggregate relevant information into a natural language query for recommendation. We test this framework in text retrieval, aggregating text snippets and comparing output queries to user-generated queries. We show that an algorithm can generate queries that more closely resemble the original queries and give effective retrieval results. Our simple approach shows promise for also leveraging knowledge structures to generate effective query recommendations.
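The abstract does not specify the aggregation algorithm, but the core idea of turning relevant snippets into a candidate keyword query can be illustrated with a naive frequency-based sketch (all names, snippets, and the stopword list below are hypothetical, not the authors' method):

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "is", "on", "with"}

def aggregate_query(snippets, max_terms=5):
    """Aggregate relevant text snippets into a short keyword query by
    ranking non-stopword terms by frequency (a naive baseline, not the
    paper's retrieval-architecture-based framework)."""
    counts = Counter()
    for snippet in snippets:
        for term in re.findall(r"[a-z]+", snippet.lower()):
            if term not in STOPWORDS:
                counts[term] += 1
    return " ".join(term for term, _ in counts.most_common(max_terms))

snippets = [
    "Gene expression profiling in cancer cells",
    "Profiling gene expression with microarrays",
    "Cancer gene expression datasets",
]
print(aggregate_query(snippets, max_terms=3))  # gene expression profiling
```

A real system would weight terms by retrieval effectiveness rather than raw frequency, but the shape of the task (snippets in, natural-language query out) is the same.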
Report on the First International Workshop on the Evaluation of Collaborative Information Seeking and Retrieval (ECol'2015)
Report of the ECol Workshop @ CIKM 2015. The workshop on the evaluation of collaborative information retrieval and seeking (ECol) was held in conjunction with the 24th Conference on Information and Knowledge Management (CIKM) in Melbourne, Australia. The workshop featured three main elements. First, a keynote by Chirag Shah on the main dimensions, challenges, and opportunities in collaborative information retrieval and seeking. Second, an oral presentation session in which four papers were presented. Third, a discussion based on three seed research questions: (1) In what ways is collaborative search evaluation more challenging than individual interactive information retrieval (IIIR) evaluation? (2) Would it be possible and/or useful to standardise experimental designs and data for collaborative search evaluation? (3) For evaluating collaborative search, can we leverage ideas from other tasks such as diversified search, subtopic mining, and/or e-discovery? The discussion was intense and raised many points and issues, leading to the proposition that a new evaluation track focused on collaborative information retrieval/seeking tasks would be worthwhile.
Inferring User Knowledge Level from Eye Movement Patterns
The acquisition of information and the search interaction process are strongly influenced by a person's use of their knowledge of the domain and the task. In this paper we show that a user's level of domain knowledge can be inferred from their interactive search behaviors without considering the content of queries or documents. A technique is presented to model a user's information acquisition process during search using only measurements of eye movement patterns. In a user study (n=40) of search in the domain of genomics, a representation of each participant's domain knowledge was constructed using self-ratings of knowledge of genomics-related terms (n=409). Cognitive effort features associated with reading eye movement patterns were calculated for each reading instance during the search tasks. The results show correlations between the cognitive effort due to reading and an individual's level of domain knowledge. We construct exploratory regression models that suggest it is possible to build models that predict the user's level of knowledge from real-time measurements of eye movement patterns during a task session.
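The regression setup described above (knowledge score predicted from reading-effort features) can be sketched as ordinary least squares; every feature name and value below is hypothetical and the study's actual features and model are not specified here:

```python
import numpy as np

# Hypothetical per-session eye-movement effort features:
# [mean fixation duration (ms), regressions per line, reading speed (wpm)]
X = np.array([
    [220.0, 1.8, 180.0],
    [250.0, 2.4, 150.0],
    [200.0, 1.2, 210.0],
    [270.0, 2.9, 130.0],
])
# Self-rated domain knowledge (e.g., mean term familiarity on a 1-7 scale)
y = np.array([5.1, 3.8, 6.0, 2.9])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
print(predicted.round(2))
```

With real data one would hold out sessions for evaluation; this toy fit only shows the feature-matrix-to-prediction pipeline the abstract implies.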
Investigating User Search Tactic Patterns and System Support in Using Digital Libraries
This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different search tasks?; 2) How does the system support users to apply different types of search tactics?; 3) How do users' search tactic application and system support for different types of search tactics affect search outputs? Sixty student subjects were recruited from different disciplines in a state research university. Multiple methods were employed to collect data, including questionnaires, transaction logs and think-aloud protocols. Subjects were asked to conduct three different types of search tasks, namely, known-item search, specific information search and exploratory search, using Library of Congress Digital Libraries. To explore users' search tactic patterns (RQ1), quantitative analysis was conducted, including descriptive statistics, kernel regression, transition analysis, and clustering analysis. Types of system support were explored by analyzing system features for search tactic application. In addition, users' perceived system support, difficulty, and satisfaction with search tactic application were measured using post-search questionnaires (RQ2). Finally, the study examined the causal relationships between search process and search outputs (RQ3) based on multiple regression and structural equation modeling.
This study uncovers unique behavior of users' search tactic application and corresponding system support in the context of digital libraries. First, search tactic selections, changes, and transitions were explored in different task situations: known-item search, specific information search, and exploratory search. Search tactic application patterns differed by task type. In known-item search tasks, users preferred to apply search query creation and subsequent search result evaluation tactics, while fewer query reformulations or iterative tactic loops were observed. In specific information search tasks, iterative search result evaluation strategies were dominantly used. In exploratory tasks, browsing tactics were frequently selected, as were search result evaluation tactics. Second, this study identified different types of system support for search tactic application. System support, difficulty, and satisfaction were measured in terms of search tactic application, focusing on the search process. Users perceived relatively high system support for accessing and browsing tactics but less support for query reformulation and item evaluation tactics. Third, the effects of search tactic selections and system support on search outputs were examined based on multiple regression. In known-item searches, frequencies of query creation and accessing tactics positively affected search efficiency. In specific information searches, time spent on applying search result evaluation tactics had a positive impact on success rate. In exploratory searches, browsing tactics turned out to be positively associated with aspectual recall and satisfaction with search results. Based on the findings, the author discusses unique patterns of users' search tactic application as well as system design implications in digital library environments.
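The transition analysis mentioned above amounts to estimating how likely each search tactic is to follow another in a logged session. A minimal first-order sketch (tactic labels and the example session are hypothetical, not the study's coding scheme):

```python
from collections import Counter, defaultdict

def transition_probabilities(tactic_sequence):
    """Estimate first-order transition probabilities between search
    tactics from one observed sequence (a common transition analysis)."""
    counts = defaultdict(Counter)
    for current, nxt in zip(tactic_sequence, tactic_sequence[1:]):
        counts[current][nxt] += 1
    return {
        tactic: {nxt: n / sum(followers.values())
                 for nxt, n in followers.items()}
        for tactic, followers in counts.items()
    }

# Hypothetical log of one specific-information search session
session = ["query", "evaluate", "evaluate", "reformulate", "query", "evaluate"]
probs = transition_probabilities(session)
print(probs["evaluate"])  # {'evaluate': 0.5, 'reformulate': 0.5}
```

Aggregating such matrices per task type is what lets one say, e.g., that iterative evaluation loops dominate specific-information searches.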
A user-centred approach to information retrieval
A user model is a fundamental component in user-centred information retrieval systems. It enables personalization of a user's search experience. The development of such a model involves three phases: collecting information about each user, representing such information, and integrating the model into a retrieval application. Progress in this area is typically met with privacy and scalability challenges that hinder the ability to synthesize collective knowledge from each user's search behaviour. In this thesis, I propose a framework that addresses each of these three phases. The proposed framework is based on social role theory from the social science literature and at the centre of this theory is the concept of a social position. A social position is a label for a group of users with similar behavioural patterns. Examples of such positions are traveller, patient, movie fan, and computer scientist. In this thesis, a social position acts as a label for users who are expected to have similar interests. The proposed framework does not require real users' data; rather it uses the web as a resource to model users.
The proposed framework offers a data-driven and modular design for each of the three phases of building a user model. First, I present an approach to identify social positions from natural language sentences. I formulate this task as a binary classification task and develop a method to enumerate candidate social positions. The proposed classifier achieves an accuracy score of 85.8%, which indicates that social positions can be identified with good accuracy. Through an inter-annotator agreement study, I further show a reasonable level of agreement between users when identifying social positions.
Second, I introduce a novel topic modelling-based approach to represent each social position as a multinomial distribution over words. This approach estimates a topic from a document collection for each position. To construct such a collection for a particular position, I propose a seeding algorithm that extracts a set of terms relevant to the social position. Coherence-based evaluation shows that the proposed approach learns significantly more coherent representations when compared with a relevance modelling baseline.
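The representation described above is a multinomial distribution over words per social position. A toy stand-in for the estimation step (simple relative frequencies over seed-matched documents, not the thesis's topic-modelling approach; all documents and seed terms are invented):

```python
from collections import Counter

def position_word_distribution(documents, seed_terms):
    """Build a multinomial over words for one social position: keep the
    documents matching a seed term (substring match here), then normalize
    word counts into probabilities."""
    relevant = [d for d in documents if any(t in d for t in seed_terms)]
    counts = Counter(w for doc in relevant for w in doc.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

docs = ["book a flight", "hotel deals today", "python tutorial"]
dist = position_word_distribution(docs, seed_terms={"flight", "hotel"})
print(dist["flight"])  # 0.1666...
```

The thesis instead learns such distributions as topics with a seeding algorithm; this sketch only shows the data structure being estimated.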
Third, I present a diversification approach based on the proposed framework. Diversification algorithms aim to return a result list for a search query that would potentially satisfy users with diverse information needs. I propose to identify social positions that are relevant to a search query. These positions act as an implicit representation of the many possible interpretations of the search query. Then, relevant positions are provided to a diversification technique that proportionally diversifies results based on each social position's importance. I evaluate my approach using four test collections provided by the diversity task of the Text REtrieval Conference (TREC) web tracks for 2009, 2010, 2011, and 2012. Results demonstrate that my proposed diversification approach is effective and provides statistically significant improvements over various implicit diversification approaches.
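Proportional diversification over weighted social positions can be sketched with a greedy seat-allocation scheme; the Sainte-Laguë-style quotient below is one standard proportionality device (as used in PM-2-style diversification), not necessarily the thesis's exact technique, and all position names and documents are hypothetical:

```python
def diversify(candidates, weights, k):
    """candidates: {position: ranked doc list}; weights: {position: importance}.
    Greedily fill k result slots so each position's share of slots roughly
    tracks its weight (a simple proportional scheme)."""
    taken = {p: 0 for p in weights}
    results = []
    while len(results) < k:
        # Quotient shrinks as a position gains slots, favouring the
        # currently under-represented position.
        pos = max(weights, key=lambda p: weights[p] / (2 * taken[p] + 1))
        taken[pos] += 1
        for doc in candidates[pos]:
            if doc not in results:
                results.append(doc)
                break
    return results

candidates = {"traveller": ["t1", "t2", "t3"], "patient": ["p1", "p2"]}
result = diversify(candidates, weights={"traveller": 2, "patient": 1}, k=3)
print(result)  # ['t1', 'p1', 't2']
```

With position weights inferred from the query, the traveller interpretation here receives two of the three slots, matching its doubled importance.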
Fourth, I introduce a session-based search system under the framework of learning to rank. Such a system aims to improve the retrieval performance for a search query using previous user interactions during the search session. I present a method to match a search session to its most relevant social positions based on the session's interaction data. I then suggest identifying related sessions from query logs that are likely to be issued by users with similar information needs. Novel learning features are then estimated from the session's social positions, related sessions, and interaction data. I evaluate the proposed system using four test collections from the TREC session track. This approach achieves state-of-the-art results compared with effective session-based search systems. I demonstrate that such strong performance is mainly attributed to features that are derived from social positions' data.
Current Research in Supporting Complex Search Tasks
ABSTRACT There is broad consensus in the field of IR that search is complex in many use cases and applications, both on the Web and in domain-specific collections, and both professionally and in our daily life. Yet our understanding of complex search tasks, in comparison to simple look-up tasks, is fragmented at best. The workshop addresses many open research questions: What are the obvious use cases and applications of complex search? What are essential features of work tasks and search tasks to take into account? And how do these evolve over time? With a multitude of information, varying from introductory to specialized, and from authoritative to speculative or opinionated, when to show what sources of information? How does the information seeking process evolve and what are relevant differences between different stages? With complex task and search process management, blending searching, browsing, and recommendations, and supporting exploratory search to sensemaking and analytics, UI and UX design pose an overconstrained challenge. How do we evaluate and compare approaches? Which measures should be taken into account? Supporting complex search tasks requires new collaborations across the fields of CHI and IR, and the proposed workshop will bring together a diverse group of researchers to work together on one of the greatest challenges of our field.