
    5 − 4 ≠ 4 − 3: On the Uneven Gaps between Different Levels of Graded User Satisfaction in Interactive Information Retrieval Evaluation

    Similar to other ground-truth measures, graded user satisfaction has frequently been employed as a continuous variable in information retrieval evaluation, on the assumption that the intervals between adjacent grades are quantitatively equal. To examine the validity of this equal-gap assumption and to explore the dynamic perceptual thresholds that trigger grade changes in search evaluation, we investigate the extent to which users are sensitive to changes in search effort and outcomes across different gaps of the graded satisfaction scale. Experiments on four user study datasets (15,337 queries) indicate that 1) user satisfaction sensitivity, especially to offline evaluation metrics, changes significantly across gaps in the satisfaction scale; and 2) the size and direction of these changes in sensitivity vary across study settings, search types, and intentions, especially within the "3-5" subrange of the scale. This study speaks to the fundamentals of user-centered evaluation and advances our knowledge of the heterogeneity in satisfaction sensitivity to search efforts and gains, and of implicit changes in evaluation thresholds.
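    The equal-gap check described above can be illustrated with a small sketch: if the satisfaction scale were truly interval, the change in an offline metric needed to move satisfaction up one grade should be roughly constant across the scale. All numbers below are fabricated for illustration and are not drawn from the study's datasets; the analysis shape is only one plausible reading of the abstract.

    ```python
    # Sketch: probing the equal-gap assumption on a graded satisfaction scale.
    # Hypothetical (satisfaction_grade, ndcg) pairs for a set of queries.
    from statistics import mean

    queries = [
        (1, 0.12), (1, 0.18), (2, 0.25), (2, 0.31), (3, 0.40),
        (3, 0.47), (4, 0.71), (4, 0.76), (5, 0.82), (5, 0.88),
    ]

    # Mean metric value observed at each satisfaction grade.
    by_grade = {}
    for grade, ndcg in queries:
        by_grade.setdefault(grade, []).append(ndcg)
    grade_means = {g: mean(vals) for g, vals in sorted(by_grade.items())}

    # Metric gap between adjacent grades: under the equal-gap assumption
    # these differences should be roughly constant across the scale.
    gaps = {
        (g, g + 1): round(grade_means[g + 1] - grade_means[g], 3)
        for g in sorted(grade_means) if g + 1 in grade_means
    }
    print(gaps)  # markedly unequal gaps would challenge the assumption
    ```

    With these fabricated values the "3-4" gap is far larger than its neighbors, which is the kind of unevenness the paper's title alludes to.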

    Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation

    The conversational recommendation system (CRS) is a rapidly growing research area that has gained significant attention alongside advances in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and the limited body of existing work. In this study, we examine benchmark datasets for developing CRS models and address potential biases arising from the feedback loop inherent in multi-turn interactions, including selection bias and several variants of popularity bias. Drawing inspiration from the success of generating data with language models and of data augmentation techniques, we present two novel strategies, 'Once-Aug' and 'PopNudge', to enhance model performance while mitigating biases. Through extensive experiments on the ReDial and TG-ReDial benchmark datasets, we show a consistent improvement of CRS techniques with our data augmentation approaches and offer additional insights on addressing multiple newly formulated biases. Comment: Accepted by EMNLP 2023 (Findings).

    Information seeking behaviors in different study settings

    Empirical studies of information seeking and retrieval (IS&R) behaviors that involve real users are generally conducted in two types of settings: the laboratory setting and the remote setting. In lab studies, participants are usually invited to a computer lab and perform search tasks on a device provided by the researcher. In remote studies, participants are instructed to work in their own work or home environment using their own devices; their information seeking behavior and experience are often captured by a browser plugin, sometimes coupled with online diaries. While the lab setting gives researchers a great amount of control, it is often criticized as too artificial, which may lead participants to behave unnaturally. However, there is no clear evidence on whether study settings affect how participants look for information online. This poster reports on work in progress investigating the influence of study settings on users' online information seeking behavior. Thirty-six college students completed four search tasks individually over a one- to two-week period: two tasks in a computer lab and the other two at a location of their choice. They were also interviewed about their experiences in the two settings. This study will provide implications for both future IS&R study design and results interpretation. Preliminary results showed that participants spent more time on web sources when they worked remotely. They felt more relaxed in the remote setting but were also frequently distracted by text/SNS messages and other things at school or home.
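    The preliminary remote-vs-lab finding lends itself to a simple within-subject comparison. The sketch below is purely illustrative: the per-participant minutes are fabricated, and the poster does not specify its statistical test; a paired t statistic is shown as one plausible choice for this design, where each participant appears in both settings.

    ```python
    # Sketch: within-subject comparison of time on web sources, lab vs remote.
    # Fabricated per-participant minutes; same participant at same index.
    from statistics import mean, stdev
    from math import sqrt

    lab    = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2]
    remote = [15.4, 11.2, 16.0, 13.5, 12.9, 15.1]

    # Per-participant differences (remote minus lab).
    diffs = [r - l for l, r in zip(lab, remote)]
    d_mean = mean(diffs)

    # Paired t statistic: mean difference divided by its standard error.
    t = d_mean / (stdev(diffs) / sqrt(len(diffs)))
    print(f"mean extra minutes remote: {d_mean:.2f}, t = {t:.2f}")
    ```

    A large positive t here would be consistent with the poster's preliminary observation that participants spent more time on web sources when working remotely, though the real analysis would also need the degrees of freedom and a significance threshold.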

    Investigating users’ learning and decision-making processes in search interactions: a behavioral economics approach

    How users think, learn, and make decisions when interacting with search systems is central to the area of Interactive Information Retrieval (IIR). Most prior work is either descriptive in nature or limited to one or two factors. Existing economic models of search illustrate a promising direction for developing formal models of users' learning and search interactions. However, they were built upon numerous unrealistic assumptions about human capacity and rationality and ignored the impacts of cognitive biases (Liu & Shah, 2019). Thus, a fundamental question persists: why do users learn and behave the way they do in real-life situations? In this work, we seek to build and empirically test a behavioral economics framework, aiming to answer this question and address the limitations of previous studies by (1) linking the "isolated" insights from IIR studies together under a broader theoretical umbrella and (2) bridging IIR with insights from behavioral economics and cognitive psychology. The behavioral economics model represents a "collision" between IIR and the behavioral economics approach. This collision makes several contributions: (1) it generates a concise representation of users' learning process and search interactions; (2) it offers space in the formal model for explaining the biases in users' actual behavior; and (3) it points to novel research questions regarding user modeling, recommendation design, and systems evaluation for future studies. Liu, J., & Shah, C. (2019). Investigating the impacts of expectation disconfirmation on Web search. In Proceedings of CHIIR (pp. 319-323). New York, NY: ACM.

    Interactive IR user study design, evaluation, and reporting
