25 research outputs found
5 − 4 ≠ 4 − 3: On the Uneven Gaps between Different Levels of Graded User Satisfaction in Interactive Information Retrieval Evaluation
Similar to other ground-truth measures, graded user satisfaction has frequently been employed as a continuous variable in information retrieval evaluation, based on the assumption that the intervals between adjacent grades are quantitatively equal. To examine the validity of this equal-gap assumption and explore the dynamic perceptual thresholds that trigger grade changes in search evaluation, we investigate the extent to which users are sensitive to changes in search efforts and outcomes across different gaps of graded satisfaction. Experiments on four user study datasets (15,337 queries) indicate that 1) user satisfaction sensitivity, especially to offline evaluation metrics, changes significantly across gaps in the satisfaction scale; 2) the size and direction of changes in sensitivity vary across study settings, search types, and intentions, especially within the '3-5' scale subrange. This study speaks to the fundamentals of user-centered evaluation and advances knowledge of the heterogeneity in satisfaction sensitivity to search efforts and gains, and of implicit changes in evaluation thresholds.
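The equal-gap concern can be illustrated with a minimal sketch (hypothetical data, not the study's datasets or code): if satisfaction grades were truly interval-scaled, the change in a search-effort metric between adjacent grades should be roughly constant.

```python
# Illustrative sketch only: checking whether gaps between adjacent
# satisfaction grades correspond to equal changes in a search-effort metric.
# The records below are fabricated for demonstration.
from statistics import mean

# Hypothetical per-query records: (satisfaction grade 1-5, queries issued)
records = [(1, 9), (1, 8), (2, 7), (2, 6), (3, 5), (3, 5),
           (4, 4), (4, 4), (5, 4), (5, 3)]

# Mean effort observed at each grade
by_grade = {g: mean(q for gg, q in records if gg == g) for g in range(1, 6)}

# Under the equal-gap assumption, these differences should be roughly equal
gaps = {f"{g}->{g + 1}": by_grade[g] - by_grade[g + 1] for g in range(1, 5)}
print(gaps)
```

Uneven differences between adjacent grades would suggest the scale behaves ordinally rather than as equal intervals, which is the pattern the abstract reports.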
IWILDS'22 - Third International Workshop on Investigating Learning During Web Search
Since its inception, the World Wide Web has become a major information source, consulted for a diversity of informational tasks. With an abundance of information available online, Web search engines have been a main entry point, supporting users in finding suitable Web content for ever more complex information needs. The IWILDS workshop series invites research on complex search activities related to human learning. It provides an interdisciplinary platform for the presentation and discussion of recent research on human learning on the Web, welcoming perspectives from computer & information science, education, and psychology.
Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation
Conversational Recommendation System (CRS) is a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and limited existing contributions. In this study, we delve into benchmark datasets for developing CRS models and address potential biases arising from the feedback loop inherent in multi-turn interactions, including selection bias and multiple popularity bias variants. Drawing inspiration from the success of generative data via language models and data augmentation techniques, we present two novel strategies, 'Once-Aug' and 'PopNudge', to enhance model performance while mitigating biases. Through extensive experiments on the ReDial and TG-ReDial benchmark datasets, we show a consistent improvement of CRS techniques with our data augmentation approaches and offer additional insights on addressing multiple newly formulated biases. (Accepted by EMNLP 2023, Findings.)
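As an illustrative sketch only (fabricated data; the 'Once-Aug' and 'PopNudge' strategies themselves are not reproduced here), one common way to surface the popularity bias the abstract targets is to measure how concentrated item recommendations are across dialogues:

```python
# Illustrative sketch: quantifying popularity bias in multi-turn
# recommendation data by measuring how concentrated item mentions are.
# The dialogue data below is fabricated for demonstration.
from collections import Counter

# Hypothetical items recommended across multi-turn dialogues
recommended = ["A", "A", "A", "A", "B", "B", "C", "D", "A", "B"]

counts = Counter(recommended)
total = sum(counts.values())

# Share of all recommendations taken by the single most popular item;
# a value near 1/len(counts) would indicate no popularity skew
top_item, top_count = counts.most_common(1)[0]
top_share = top_count / total
print(top_item, top_share)
```

Augmentation strategies such as those proposed in the paper aim to push this distribution toward less skewed recommendations while preserving model performance.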
Information seeking behaviors in different study settings
Empirical studies of information seeking and retrieval (IS&R) behaviors that involve real users are generally conducted in two types of settings: laboratory and remote. In lab studies, participants are usually invited to a computer lab and perform search tasks on a device provided by the researcher. In remote studies, participants are instructed to work in their own work or home environment using their own devices; their information seeking behavior and experience are often captured by a browser plugin, sometimes coupled with online diaries. While the lab setting gives researchers a great amount of control, it is often criticized for being too artificial, which may lead participants to behave unnaturally. However, there has been no clear evidence on whether study settings affect how participants look for information online. This poster reports on a work in progress investigating the influence of study settings on users' online information seeking behavior. Thirty-six college students completed four search tasks individually over a 1-2-week period. Two tasks were completed in a computer lab while the other two were completed at a location of their choice. Participants were also interviewed about their experiences in the two settings. This study will provide implications for both future IS&R study design and results interpretation. Preliminary results showed that participants spent more time on web sources when they worked remotely. They felt more relaxed in the remote setting, but were also frequently distracted by text/SNS messages or other things at school or home.
Investigating users' learning and decision-making processes in search interactions: a behavioral economics approach
How users think, learn, and make decisions when interacting with search systems is central to the area of Interactive Information Retrieval (IIR). Most of the prior work is either descriptive in nature or limited to one or two factors. Existing economic models of search illustrate a promising direction for developing formal models of users' learning and search interactions. However, they were built upon numerous unrealistic assumptions about human capacity and rationality and ignored the impacts of cognitive biases (Liu & Shah, 2019). Thus, a fundamental question persists: why do users learn and behave the way they do in real-life situations? In this work, we seek to build and empirically test a behavioral economics framework, aiming to answer this question and address the limitations of previous studies by (1) linking the "isolated" insights from IIR studies together under a broader theoretical umbrella and (2) bridging IIR with insights from behavioral economics and cognitive psychology. The behavioral economics model represents a "collision" between IIR and the behavioral economics approach. This collision makes multiple contributions: (1) it generates a concise representation of users' learning process and search interactions; (2) it offers space in the formal model for explaining the biases in users' actual behavior; (3) it points to novel research questions regarding user modeling, recommendation design, and systems evaluation for future studies.
Liu, J., & Shah, C. (2019). Investigating the impacts of expectation disconfirmation on Web search. In Proceedings of CHIIR (pp. 319-323). New York, NY: ACM.