
    Towards predicting web searcher gaze position from mouse movements

    Get PDF
    A key problem in information retrieval is inferring the searcher's interest in the results, which can be used for implicit feedback, query suggestion, and result ranking and summarization. One important indicator of searcher interest is gaze position, that is, the results or the terms in a result listing where a searcher concentrates her attention. Capturing this information normally requires eye-tracking equipment, which until now has limited the use of gaze-based feedback to the laboratory. While previous research has reported a correlation between mouse movement and gaze position, we are not aware of prior work on automatically inferring a searcher's gaze position from mouse movement or similar interface interactions. In this paper, we report the first results on automatically inferring whether the searcher's gaze position is coordinated with the mouse position, a crucial step towards predicting the searcher's gaze position by analyzing computer mouse movements.
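
    To make the inference step concrete: the sketch below trains a classifier to decide, from mouse-only features, whether gaze and mouse are coordinated within a time window. It is a minimal Python illustration under assumed features (mean speed, speed variance, fraction of near-still samples) and placeholder labels, not the paper's actual feature set or model.

        # Hypothetical sketch: classify whether gaze and mouse are coordinated
        # in a window, using mouse-only features; labels would come from an
        # eye tracker during data collection (placeholder data used here).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def mouse_features(xs, ys, ts):
            """Summarise one window of cursor samples: mean speed, speed
            variance, and the fraction of near-still samples."""
            dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
            speed = np.hypot(dx, dy) / np.maximum(dt, 1e-6)
            return np.array([speed.mean(), speed.var(), np.mean(speed < 1.0)])

        rng = np.random.default_rng(0)
        X = np.stack([mouse_features(*rng.random((3, 50))) for _ in range(200)])
        y = rng.integers(0, 2, size=200)          # placeholder coordination labels

        clf = LogisticRegression().fit(X, y)
        print("P(coordinated) for first window:", clf.predict_proba(X[:1])[0, 1])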

    Are all the frames equally important?

    Full text link
    In this work, we address the problem of measuring and predicting temporal video saliency, a metric which defines the importance of a video frame for human attention. Unlike conventional spatial saliency, which defines the location of the salient regions within a frame (as is done for still images), temporal saliency considers the importance of a frame as a whole and may not exist apart from its context. We propose an interactive cursor-based interface for collecting experimental data about temporal saliency. We collect the first human responses and analyze them. As a result, we show that, qualitatively, the produced scores clearly reflect semantic changes within a frame, while, quantitatively, they are highly correlated across observers. Apart from that, we show that the proposed tool can simultaneously collect fixations similar to those produced by an eye tracker, in a more affordable way. Further, this approach may be used to create the first temporal saliency datasets, which will allow computational predictive algorithms to be trained. The proposed interface does not rely on any special equipment, which allows it to be run remotely and to cover a wide audience. Comment: CHI'20 Late Breaking Work.
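
    To make the "highly correlated across observers" claim concrete, the following minimal Python sketch (assuming per-frame score arrays; not the authors' code) aggregates per-frame scores from several observers and reports their mean pairwise Pearson correlation.

        # Illustrative only: agreement between observers on per-frame
        # temporal saliency scores, measured as mean pairwise correlation.
        import numpy as np

        def inter_observer_correlation(scores):
            """scores: (n_observers, n_frames) array of per-frame importance."""
            corr = np.corrcoef(scores)                     # observer-by-observer matrix
            return corr[np.triu_indices_from(corr, k=1)].mean()

        rng = np.random.default_rng(1)
        shared = np.abs(np.sin(np.linspace(0, 6, 300)))    # shared "semantic change" signal
        scores = shared + 0.1 * rng.standard_normal((5, 300))
        print("mean pairwise correlation:", round(inter_observer_correlation(scores), 3))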

    Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model

    Get PDF
    Modern search engine result pages often provide immediate value to users and organize information in such a way that it is easy to navigate. The core ranking function contributes to this, and so do result snippets, smart organization of result blocks, and extensive use of one-box answers or side panels. While they are useful to the user and help search engines to stand out, such features present two big challenges for evaluation. First, the presence of such elements on a search engine result page (SERP) may lead to the absence of clicks, which, however, is not related to dissatisfaction: so-called "good abandonments." Second, the non-linear layout and visual differences of SERP items may lead to non-trivial patterns of user attention, which are not captured by existing evaluation metrics. In this paper we propose a model of user behavior on a SERP that jointly captures click behavior, user attention and satisfaction, the CAS model, and demonstrate that it gives more accurate predictions of user actions and self-reported satisfaction than existing models based on clicks alone. We use the CAS model to build a novel evaluation metric that can be applied to non-linear SERP layouts and that can account for the utility that users obtain directly on a SERP. We demonstrate that this metric shows better agreement with user-reported satisfaction than conventional evaluation metrics. Comment: CIKM 2016, Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2016.
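
    To illustrate the kind of SERP-level utility such a metric computes, here is a minimal Python sketch in the spirit of attention-weighted, click-aware evaluation; the field names (p_attention, p_click_given_attention, direct_utility, doc_utility) are assumptions for illustration and do not reproduce the actual CAS parameterization, which is estimated from click, attention, and satisfaction data.

        # Illustrative sketch only: each item's contribution is weighted by the
        # chance the user attends to it and by the utility obtained directly on
        # the SERP versus from the clicked document.
        def serp_utility(items):
            total = 0.0
            for it in items:
                examined = it["p_attention"]
                clicked = examined * it["p_click_given_attention"]
                total += examined * it["direct_utility"] + clicked * it["doc_utility"]
            return total

        serp = [
            {"p_attention": 0.9, "p_click_given_attention": 0.5,
             "direct_utility": 0.2, "doc_utility": 1.0},   # regular result
            {"p_attention": 0.6, "p_click_given_attention": 0.3,
             "direct_utility": 0.4, "doc_utility": 0.0},   # one-box answer, no click needed
        ]
        print(serp_utility(serp))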

    Validating simulated interaction for retrieval evaluation

    Get PDF
    A searcher’s interaction with a retrieval system consists of actions such as query formulation, search result list interaction and document interaction. The simulation of searcher interaction has recently gained momentum in the analysis and evaluation of interactive information retrieval (IIR). However, a key issue that has not yet been adequately addressed is the validity of such IIR simulations and whether they reliably predict the performance obtained by a searcher across the session. The aim of this paper is to determine the validity of the common interaction model (CIM) typically used for simulating multi-query sessions. We focus on search result interactions, i.e., inspecting snippets, examining documents and deciding when to stop examining the results of a single query, or when to stop the whole session. To this end, we run a series of simulations grounded in real-world behavioral data to show how accurate and responsive the model is to the various experimental conditions under which the data were produced. We then validate on a second real-world data set derived under similar experimental conditions. We seek to predict cumulated gain across the session. We find that the interaction model with a query-level stopping strategy based on consecutive non-relevant snippets leads to the highest prediction accuracy, and the lowest deviation from ground truth, around 9 to 15% depending on the experimental conditions. To our knowledge, the present study is the first validation effort for the CIM showing that the model’s acceptance and use are justified within IIR evaluations. We also identify and discuss ways to further improve the CIM and its behavioral parameters for more accurate simulations.
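
    As an illustration of a query-level stopping strategy based on consecutive non-relevant snippets, the Python sketch below simulates interaction with one ranked list and accumulates gain; it is a toy version under assumed graded relevance labels, not the CIM implementation.

        # A minimal sketch: stop examining a query's results after k consecutive
        # non-relevant snippets, accumulating gain from relevant documents read.
        def simulate_query(relevances, k=3):
            """relevances: ranked graded relevance values (0 = non-relevant).
            Returns (cumulated_gain, ranks_examined)."""
            gain, consecutive_nonrel = 0.0, 0
            for rank, rel in enumerate(relevances, start=1):
                if rel > 0:
                    gain += rel            # snippet looked relevant; document is read
                    consecutive_nonrel = 0
                else:
                    consecutive_nonrel += 1
                    if consecutive_nonrel >= k:
                        return gain, rank  # query-level stopping point
            return gain, len(relevances)

        print(simulate_query([1, 0, 2, 0, 0, 0, 3], k=3))  # stops before the last item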

    Getting the Most from Eye-Tracking: User-Interaction Based Reading Region Estimation Dataset and Models

    Full text link
    A single digital newsletter usually contains many messages (regions). Users' reading time spent on, and read level (skip/skim/read-in-detail) of, each message are important for platforms to understand their users' interests, personalize their contents, and make recommendations. Based on accurate but expensive-to-collect eye-tracker-recorded data, we built models that predict per-region reading time based on easy-to-collect JavaScript browser tracking data. With eye tracking, we collected 200k ground-truth datapoints from participants reading news in browsers. Then we trained machine learning and deep learning models to predict message-level reading time based on user interactions like mouse position, scrolling, and clicking. We reached 27% percentage error in reading time estimation with a two-tower neural network based on user interactions only, against the eye-tracking ground truth data, while heuristic baselines have around 46% percentage error. We also discovered the benefits of replacing per-session models with per-timestamp models, and of adding user pattern features. We conclude with suggestions on developing message-level reading estimation techniques based on available data. Comment: Ruoyan Kong, Ruixuan Sun, Charles Chuankai Zhang, Chen Chen, Sneha Patri, Gayathri Gajjela, and Joseph A. Konstan. Getting the most from eye-tracking: User-interaction based reading region estimation dataset and models. In Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, ETRA '23, New York, NY, USA, 2023. Association for Computing Machinery.
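
    For reference, the "percentage error" figures can be read as a mean absolute percentage error between predicted and eye-tracking ground-truth reading times; the short Python sketch below shows that computation (the paper may define the measure slightly differently).

        # Sketch of the reported "percentage error", read here as mean absolute
        # percentage error against eye-tracking ground-truth reading times.
        import numpy as np

        def mean_percentage_error(predicted_s, ground_truth_s):
            predicted_s = np.asarray(predicted_s, dtype=float)
            ground_truth_s = np.asarray(ground_truth_s, dtype=float)
            return 100.0 * np.mean(np.abs(predicted_s - ground_truth_s)
                                   / np.maximum(ground_truth_s, 1e-9))

        # Toy per-region reading times in seconds.
        print(mean_percentage_error([4.2, 0.8, 12.0], [5.0, 1.0, 10.0]))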

    Comparative analysis of relevance feedback methods based on two user studies

    Get PDF
    Rigorous analysis of user interest in web documents is essential for the development of recommender systems. This paper investigates the relationship between implicit parameters and users' explicit ratings during search and reading tasks. The objective of this paper is therefore three-fold: firstly, the paper identifies the implicit parameters which are statistically correlated with the user explicit rating through user study 1. These parameters are used to develop a predictive model which can be used to represent users' perceived relevance of documents. Secondly, it investigates the reliability and validity of the predictive model by comparing it with eye gaze during a reading task through user study 2. Our findings suggest that there is no significant difference between the predictive model based on implicit indicators and eye gaze within the context examined. Thirdly, we measured the consistency of users' explicit ratings in both studies and found significant consistency in users' explicit ratings of document relevance and interest level, which further validates the predictive model. We envisage that the results presented in this paper can help to develop recommender and personalised systems for recommending documents to users based on their previous interaction with the system.
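
    A minimal Python sketch of this two-step idea, correlating implicit indicators with explicit ratings and then fitting a predictive model on them, is shown below; the indicators (dwell time, scroll depth) and the data are hypothetical, not the study's.

        # Illustrative only: check which implicit indicators correlate with
        # explicit relevance ratings, then fit a simple predictive model.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 120
        dwell_time = rng.gamma(2.0, 10.0, n)          # seconds on the document
        scroll_depth = rng.random(n)                  # fraction of page scrolled
        explicit_rating = 0.05 * dwell_time + 2 * scroll_depth + rng.normal(0, 0.5, n)

        for name, feature in [("dwell_time", dwell_time), ("scroll_depth", scroll_depth)]:
            r = np.corrcoef(feature, explicit_rating)[0, 1]
            print(f"correlation({name}, rating) = {r:.2f}")

        X = np.column_stack([dwell_time, scroll_depth])
        model = LinearRegression().fit(X, explicit_rating)
        print("predicted relevance for a new interaction:", model.predict([[25.0, 0.7]])[0])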

    Gaze–mouse coordinated movements and dependency with coordination demands in tracing.

    Get PDF
    Eye movements have been shown to lead hand movements in tracing tasks where subjects have to move their fingers along a predefined trace. The question remained whether the leading relationship was similar when tracing with a pointing device, such as a mouse, and, more importantly, whether tasks that required more or less gaze–mouse coordination would introduce variation in this pattern of behaviour, in terms of both spatial and temporal leading of gaze position over mouse movement. A three-level gaze–mouse coordination demand paradigm was developed to address these questions. A substantial dataset of 1350 trials was collected and analysed. The linear correlation of gaze–mouse movements, the statistical distribution of the lead time, and the lead distance between gaze and mouse cursor positions were all considered, and we proposed a new method to quantify lead time in gaze–mouse coordination. The results supported and extended previous empirical findings that gaze often led mouse movements. We found that the gaze–mouse coordination demands of the task were positively correlated with the gaze lead, both spatially and temporally. However, mouse movements were synchronised with or led gaze in the simple straight-line condition, which demanded the least gaze–mouse coordination.
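
    One simple way to quantify such a lead time is to find the lag that maximises the correlation between the gaze and mouse trajectories; the Python sketch below illustrates this cross-correlation idea on synthetic signals and is not the paper's proposed method.

        # A minimal sketch: estimate how many samples gaze leads the mouse by
        # scanning lags and keeping the one with the highest correlation.
        import numpy as np

        def lead_time_samples(gaze, mouse, max_lag=60):
            """Positive result: gaze leads the mouse by that many samples."""
            best_lag, best_corr = 0, -np.inf
            n = len(gaze)
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    g, m = gaze[:n - lag], mouse[lag:]
                else:
                    g, m = gaze[-lag:], mouse[:n + lag]
                corr = np.corrcoef(g, m)[0, 1]
                if corr > best_corr:
                    best_lag, best_corr = lag, corr
            return best_lag

        t = np.linspace(0, 10, 600)
        mouse_x = np.sin(t)
        gaze_x = np.sin(t + 0.5)          # gaze runs ahead of the cursor
        print("estimated gaze lead (samples):", lead_time_samples(gaze_x, mouse_x))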

    Predicting users’ behavior using mouse movement information: an information foraging theory perspective

    Get PDF
    The prediction of users’ behavior is essential for keeping useful information on the web. Previous studies have used mouse cursor information in web usability evaluation and in designing user-oriented search interfaces. However, relatively little is known about user behavior, specifically clicking and navigation behavior, over prolonged search sessions that exhibit sophisticated search patterns. In this study, we perform an extensive analysis of a mouse movement activity dataset to capture each user’s movement pattern using effects from information foraging theory (IFT). The mouse cursor movement dataset includes the timing and position information of mouse cursors collected from several users in different sessions. The tasks vary along two dimensions: (1) determining the interactive elements (i.e., information episodes) of user interaction with the site; and (2) adopting these findings to predict users’ behavior by exploiting an LSTM model. Our model is developed to find the main patterns of a user’s movement on the site and to simulate users’ mouse movement behavior on any website. We validate our approach on a mouse movement dataset with a rich collection of time and position information of mouse pointers, in which searchers and websites are annotated as web foragers and information patches, respectively. Our evaluation shows that the proposed IFT-based effects give the LSTM model a more accurate interpretation of the patterns in the movement of users’ mouse cursors across the screen.
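
    A hedged sketch of the kind of sequence model described here is shown below: a PyTorch LSTM that reads (x, y, dt) cursor samples and predicts the next cursor position. The architecture, the dimensions, and the idea of concatenating patch-level IFT features to each timestep are assumptions for illustration, not the authors' design.

        # Illustrative sketch: an LSTM over cursor samples predicting the next
        # (x, y) position; IFT-derived features (e.g. current information patch)
        # would be appended to each timestep's input in a fuller version.
        import torch
        import torch.nn as nn

        class CursorLSTM(nn.Module):
            def __init__(self, input_dim=3, hidden_dim=64):
                super().__init__()
                self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, 2)    # predicted next (x, y)

            def forward(self, seq):                     # seq: (batch, time, input_dim)
                out, _ = self.lstm(seq)
                return self.head(out[:, -1, :])         # use the last hidden state

        model = CursorLSTM()
        batch = torch.randn(8, 50, 3)                   # 8 sessions, 50 timesteps each
        print(model(batch).shape)                       # torch.Size([8, 2])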