Lexical Query Modeling in Session Search
Lexical query modeling has been the leading paradigm for session search. In
this paper, we analyze TREC session query logs and compare the performance of
different lexical matching approaches for session search. Naive methods based
on term frequency weighting perform on par with specialized session models. In
addition, we investigate the viability of lexical query models in the setting
of session search. We give important insights into the potential and
limitations of lexical query modeling for session search and propose future
directions for the field of session search.
Comment: ICTIR 2016, Proceedings of the 2nd ACM International Conference on the Theory of Information Retrieval, 2016.
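The naive term-frequency weighting the abstract refers to can be illustrated with a minimal sketch: all terms issued across a session's queries are pooled into one weighted lexical query model. The normalization and example session below are illustrative assumptions, not data or code from the paper.

```python
from collections import Counter

def session_query_model(session_queries):
    """Aggregate the terms of all queries in a search session into a
    single lexical query model (term -> relative frequency).

    A hedged sketch of naive term-frequency weighting; real session
    models typically add smoothing, decay over time, or click feedback.
    """
    counts = Counter()
    for query in session_queries:
        counts.update(query.lower().split())
    total = sum(counts.values())
    return {term: freq / total for term, freq in counts.items()}

# Hypothetical session of three reformulations:
session = ["airline safety", "airline accident statistics", "airline safety records"]
model = session_query_model(session)
# model["airline"] -> 0.375 (3 occurrences out of 8 session terms)
```

The resulting distribution can be fed to any standard lexical ranker (e.g. query-likelihood scoring) in place of the final query alone.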
Search literacy: learning to search to learn
People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don't quite know enough about the domain to know if they are submitting a good query, nor whether the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work in progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given on StackOverflow, and present plans for designing search engine support to help searchers learn as they search.
Extracting Hierarchies of Search Tasks & Subtasks via a Bayesian Nonparametric Approach
A significant number of search queries originates from a real-world
information need or task. To improve the search experience of end
users, it is important to have accurate representations of tasks. As a result,
a significant amount of research has been devoted to extracting proper
representations of tasks in order to enable search systems to help users
complete their tasks, as well as to provide the end user with better query
suggestions, recommendations, satisfaction prediction, and
improved task-based personalization. Most existing task extraction
methodologies focus on representing tasks as flat structures. However, tasks
often tend to have multiple subtasks associated with them, and a more
naturalistic representation of tasks would be a hierarchy, where
each task can be composed of multiple (sub)tasks. To this end, we propose an
efficient Bayesian nonparametric model for extracting hierarchies of such tasks
and subtasks. We evaluate our method on real-world query log data through
both quantitative and crowdsourced experiments and highlight the importance
of considering task/subtask hierarchies.
Comment: 10 pages. Accepted at SIGIR 2017 as a full paper.
Query Generation as Result Aggregation for Knowledge Representation
Knowledge representations have greatly enhanced the fundamental human problem of information search, profoundly changing how queries and database information are represented for various retrieval tasks. Despite new technologies, little thought has been given in the field of query recommendation (recommending keyword queries to end users) to a holistic approach that recommends queries constructed from relevant snippets of information; pre-existing queries are used instead. Can we instead determine relevant information a user should see and aggregate it into a query? We construct a general framework that leverages various retrieval architectures to aggregate relevant information into a natural language query for recommendation. We test this framework in text retrieval, aggregating text snippets and comparing the output queries to user-generated queries. We show that an algorithm can generate queries that more closely resemble the original and give effective retrieval results. Our simple approach shows promise for also leveraging knowledge structures to generate effective query recommendations.
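The core idea of aggregating relevant snippets into a recommendable query can be sketched with a frequency-based stand-in: pool the terms of the relevant snippets and keep the most common content words. The paper's framework routes this through retrieval architectures to produce natural language queries; the stopword list, snippets, and term-selection rule below are illustrative assumptions only.

```python
from collections import Counter

# Minimal illustrative stopword list (an assumption, not from the paper).
STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "for"}

def generate_query(snippets, length=4):
    """Aggregate relevant text snippets into a short keyword query by
    selecting the most frequent non-stopword terms.

    A hedged sketch of snippet-to-query aggregation; a full system
    would weight terms by snippet relevance and produce fluent text.
    """
    counts = Counter(
        term
        for snippet in snippets
        for term in snippet.lower().split()
        if term not in STOPWORDS and term.isalpha()
    )
    return " ".join(term for term, _ in counts.most_common(length))
```

For example, snippets about sorting Python lists would yield a query led by the terms shared across them, which can then be compared against the query the user actually issued.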
Examining Users' Knowledge Change in the Task Completion Process
This paper examines changes in information searchers' topic knowledge levels during the process of completing information tasks. Multi-session tasks were used in the study, which made it convenient to elicit users' topic knowledge throughout their process of completing the whole tasks. The study was a 3-session laboratory experiment with 24 participants, each session devoted to one subtask of an assigned 3-session general task. The general task was either parallel or dependently structured. Questionnaires were administered before and after each session to elicit users' perceptions of their knowledge levels, task attributes, and other task features, for both the overall task and the subtasks. Our results support the assumption that users' knowledge generally increases after each search session, but there were exceptions in which a "ceiling" effect was shown. We also found that knowledge was correlated with users' perceptions of task attributes and accomplishment. In addition, task type was found to affect several aspects of knowledge levels and knowledge change. These findings further our understanding of users' knowledge in information tasks and are thus helpful for information retrieval research and system design.