Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model
Modern search engine result pages often provide immediate value to users and
organize information in such a way that it is easy to navigate. The core
ranking function contributes to this and so do result snippets, smart
organization of result blocks and extensive use of one-box answers or side
panels. While they are useful to the user and help search engines to stand out,
such features present two big challenges for evaluation. First, the presence of
such elements on a search engine result page (SERP) may lead to an absence of
clicks that does not indicate dissatisfaction, the so-called "good
abandonments." Second, the non-linear layout and visual differences among SERP
items may lead to non-trivial patterns of user attention, which are not
captured by existing evaluation metrics.
In this paper we propose a model of user behavior on a SERP that jointly
captures click behavior, user attention and satisfaction, the CAS model, and
demonstrate that it gives more accurate predictions of user actions and
self-reported satisfaction than existing models based on clicks alone. We use
the CAS model to build a novel evaluation metric that can be applied to
non-linear SERP layouts and that can account for the utility that users obtain
directly on a SERP. We demonstrate that this metric shows better agreement with
user-reported satisfaction than conventional evaluation metrics.

Comment: CIKM 2016, Proceedings of the 25th ACM International Conference on
Information and Knowledge Management, 2016.
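As an illustrative sketch only (not the paper's actual CAS model), the kind of metric such a model enables can be pictured as an expected-utility score in which each SERP element contributes utility weighted by the probability that users attend to it, so a direct answer counts even when it draws no click. All numbers and names below are made-up assumptions:

```python
# Hypothetical sketch: expected-utility SERP metric weighting each
# element's utility by its examination (attention) probability.

def expected_serp_utility(items):
    """items: list of (attention_prob, utility) pairs per SERP element."""
    return sum(p_att * u for p_att, u in items)

# A SERP with a one-box answer (high attention, high direct utility,
# typically zero clicks) still scores well under this metric.
serp = [
    (0.9, 0.8),  # one-box answer examined by most users
    (0.6, 0.3),  # first organic result
    (0.3, 0.1),  # second organic result
]
score = expected_serp_utility(serp)
```

A pure click-based metric would assign such a SERP a score of zero whenever the one-box answer satisfies the user without a click, which is exactly the "good abandonment" problem the abstract describes.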
Web Browsing Behavior Analysis and Interactive Hypervideo
© ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on the Web, Vol. 7, No. 4, Article 20, October 2013. http://doi.acm.org/10.1145/2529995.2529996

Processing data on any sort of user interaction is well known to be cumbersome and mostly time-consuming.
In order to assist researchers in easily inspecting fine-grained browsing data, current tools usually display
user interactions as mouse cursor tracks, a video-like visualization scheme. However, to date, traditional
online video inspection has not explored the full capabilities of hypermedia and interactive techniques.
In response to this need, we have developed SMT2ε, a Web-based tracking system for analyzing browsing
behavior using feature-rich hypervideo visualizations. We compare our system to related work in academia
and the industry, showing that ours features unprecedented visualization capabilities. We also show that
SMT2ε efficiently captures browsing data and is perceived by users to be both helpful and usable. A series of
prediction experiments illustrate that raw cursor data are accessible and can be easily handled, providing
evidence that the data can be used to construct and verify research hypotheses. Considering its limitations,
it is our hope that SMT2ε will assist researchers, usability practitioners, and other professionals interested
in understanding how users browse the Web.

This work was partially supported by the MIPRCV Consolider Ingenio 2010 program (CSD2007-00018) and the TIN2009-14103-C03-03 project. It is also supported by the 7th Framework Program of the European Commission (FP7/2007-13) under grant agreement No. 287576 (CasMaCat).

Leiva Torres, LA.; Vivó Hernando, RA. (2013). Web Browsing Behavior Analysis and Interactive Hypervideo. ACM Transactions on the Web. 7(4):20:1-20:28. https://doi.org/10.1145/2529995.2529996
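The abstract does not specify the tracker's data format, but as a hedged illustration of the kind of fine-grained browsing data such a system records, cursor tracks can be modeled as timestamped events and downsampled so a video-like replay stays compact. The event format and rate below are assumptions, not the system's actual design:

```python
# Hypothetical sketch: cursor tracks as (t_ms, x, y) events,
# downsampled to at most one event per fixed time window.

def downsample_track(events, min_gap_ms=50):
    """Keep at most one cursor event per min_gap_ms window."""
    kept, last_t = [], None
    for t, x, y in events:
        if last_t is None or t - last_t >= min_gap_ms:
            kept.append((t, x, y))
            last_t = t
    return kept

track = [(0, 10, 10), (20, 12, 11), (60, 30, 25), (90, 31, 26), (130, 50, 40)]
replay = downsample_track(track)
```

Downsampling like this trades replay fidelity for storage, which matters when tracking many sessions server-side.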
Encyclopedia of software components
Intelligent browsing through a collection of reusable software components is facilitated with a computer having a video monitor and a user input interface, such as a keyboard or a mouse, for transmitting user selections. The system presents a picture of encyclopedia volumes with visible labels referring to types of software, following a metaphor in which each volume includes a page listing general topics under the volume's software type, and pages listing software components for each of those topics. In response to an initial user selection specifying one volume, the picture is altered to open that volume and display its page of general topics. In response to a next selection specifying one general topic, the picture is altered to display the page listing the software components under that topic. Finally, in response to a further selection specifying one of the components, a set of different informative plates is presented, depicting different types of information about that component.
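The browsing metaphor described above amounts to a three-level hierarchy: volume (software type), then general topic, then component list. A minimal sketch of that data structure, with entirely made-up example names not taken from the patent:

```python
# Hypothetical sketch of the patent's three-level browsing hierarchy:
# volume (software type) -> general topic -> list of components.

encyclopedia = {
    "Numerical": {                                  # a "volume"
        "Sorting": ["quicksort", "mergesort"],      # topic page -> components
        "Matrices": ["lu_decompose"],
    },
}

def components(volume, topic):
    """Mimic the two successive user selections: open a volume, pick a topic."""
    return encyclopedia[volume][topic]
```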
Constructing an Interaction Behavior Model for Web Image Search
User interaction behavior is a valuable source of implicit relevance
feedback. Web image search uses a different type of search result presentation
than general Web search, which leads to different interaction mechanisms and
user behavior. For example, image search results are
self-contained, so that users do not need to click the results to view the
landing page as in general Web search, which generates sparse click data. Also,
two-dimensional result placement instead of a linear result list makes browsing
behaviors more complex. Thus, it is hard to apply standard user behavior models
(e.g., click models) developed for general Web search to Web image search.
In this paper, we conduct a comprehensive image search user behavior analysis
using data from a lab-based user study as well as data from a commercial search
log. We then propose a novel interaction behavior model, called grid-based user
browsing model (GUBM), whose design is motivated by observations from our data
analysis. GUBM can both capture users' interaction behavior, including cursor
hovering, and alleviate position bias. The advantages of GUBM are two-fold: (1)
It is based on an unsupervised learning method and does not need manually
annotated data for training. (2) It is based on user interaction features on
search engine result pages (SERPs) and is easily transferable to other
scenarios that have a grid-based interface such as video search engines. We
conduct extensive experiments to test the performance of our model using a
large-scale commercial image search log. Experimental results show that in
terms of behavior prediction (perplexity) as well as topical relevance and
image quality (normalized discounted cumulative gain, NDCG), GUBM outperforms
state-of-the-art baseline models as well as the original ranking. We make the
implementation of GUBM and related datasets publicly available for future
studies.

Comment: 10 pages
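GUBM itself is an unsupervised probabilistic model; as a much simpler hedged sketch of the position-bias problem it addresses, one can estimate a per-cell examination probability for a grid layout from hover rates and use it to debias click-through rates. The counts and the hover-as-examination proxy below are illustrative assumptions, not the paper's method:

```python
# Hypothetical sketch: debias grid-cell CTR by a crude examination
# probability estimated from cursor-hover rates, so items placed in
# rarely-examined cells are not unfairly penalized.

def debiased_ctr(impressions, hovers, clicks):
    """Per grid cell: examination prob ~= hover rate; score ~= CTR / exam."""
    out = {}
    for cell in impressions:
        exam = hovers[cell] / impressions[cell]      # examination proxy
        ctr = clicks[cell] / impressions[cell]
        out[cell] = ctr / exam if exam > 0 else 0.0  # position-debiased score
    return out

imps = {(0, 0): 100, (2, 3): 100}   # (row, col) grid cells
hovs = {(0, 0): 80, (2, 3): 20}     # the top-left cell is examined far more
clks = {(0, 0): 40, (2, 3): 10}
scores = debiased_ctr(imps, hovs, clks)
```

In this example both cells end up with the same debiased score even though their raw CTRs differ by a factor of four, which is the effect a grid-aware examination model is after.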
Getting the Most from Eye-Tracking: User-Interaction Based Reading Region Estimation Dataset and Models
A single digital newsletter usually contains many messages (regions). Users'
reading time spent on, and read level (skip/skim/read-in-detail) of, each
message are important for platforms to understand their users' interests,
personalize their content, and make recommendations. Based on accurate but
expensive-to-collect eye-tracker-recorded data, we built models that predict
per-region reading time from easy-to-collect JavaScript browser-tracking
data.
With eye-tracking, we collected 200k ground-truth datapoints on participants
reading news on browsers. Then we trained machine learning and deep learning
models to predict message-level reading time based on user interactions like
mouse position, scrolling, and clicking. We reached 27% percentage error in
reading-time estimation with a two-tower neural network based on user
interactions only, against the eye-tracking ground-truth data, while the
heuristic baselines have around 46% percentage error. We also discovered the
benefits of replacing per-session models with per-timestamp models, and adding
user pattern features. We concluded with suggestions on developing
message-level reading estimation techniques based on available data.

Comment: Ruoyan Kong, Ruixuan Sun, Charles Chuankai Zhang, Chen Chen, Sneha
Patri, Gayathri Gajjela, and Joseph A. Konstan. Getting the most from
eyetracking: User-interaction based reading region estimation dataset and
models. In Proceedings of the 2023 Symposium on Eye Tracking Research and
Applications, ETRA '23, New York, NY, USA, 2023. Association for Computing
Machinery.
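The heuristic baselines the abstract mentions are not specified, but a hedged sketch of one plausible baseline of that kind is to attribute viewport dwell time, derived from scroll events, to every message region that was visible. The geometry and event format below are made-up assumptions:

```python
# Hypothetical sketch of a scroll-based baseline: credit each interval
# between scroll events to every region intersecting the viewport.

def region_dwell(scroll_events, regions, viewport_h=800):
    """scroll_events: [(t_ms, scroll_y)]; regions: {name: (top, bottom)} in px.
    Returns per-region visible dwell time in ms."""
    dwell = {name: 0 for name in regions}
    for (t0, y), (t1, _) in zip(scroll_events, scroll_events[1:]):
        for name, (top, bottom) in regions.items():
            if top < y + viewport_h and bottom > y:  # region intersects viewport
                dwell[name] += t1 - t0
    return dwell

events = [(0, 0), (3000, 600), (5000, 600)]
regs = {"msg1": (0, 500), "msg2": (500, 1200), "msg3": (1300, 2000)}
dwell = region_dwell(events, regs)
```

A baseline like this over-credits regions that were merely on screen but never looked at, which is precisely the gap an interaction-trained model (mouse position, clicks, per-timestamp features) tries to close.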
Deep Sequential Models for Task Satisfaction Prediction
Detecting and understanding implicit signals of user satisfaction are essential for experimentation aimed at predicting searcher satisfaction. As retrieval systems have advanced, search tasks have steadily emerged as accurate units not only for capturing searchers' goals but also for understanding how well a system helps users achieve those goals. However, a major portion of existing work on modeling searcher satisfaction has focused on query-level satisfaction. The few existing approaches to task satisfaction prediction have narrowly focused on simple tasks aimed at solving atomic information needs.
In this work we go beyond such atomic tasks and consider the problem of predicting users' satisfaction when they are engaged in complex search tasks composed of many different queries and subtasks. We begin by taking a holistic view of user interactions with the search engine result page (SERP) and extract detailed interaction sequences of user activity. We then look at the query-level abstraction and propose a novel deep sequential architecture which leverages the extracted interaction sequences to predict query-level satisfaction. Further, we enrich this model with auxiliary features which have traditionally been used for satisfaction prediction and propose a unified multi-view model which combines the benefit of user interaction sequences with auxiliary features.
Finally, we go beyond the query-level abstraction and consider the query sequences issued by the user to complete a complex task, in order to make task-level satisfaction predictions. We propose a number of functional composition techniques which take into account query-level satisfaction estimates along with the query sequence to predict task-level satisfaction. Through rigorous experiments, we demonstrate that the proposed deep sequential models significantly outperform established baselines at both query- and task-level satisfaction prediction. Our findings have implications for metric development for gauging user satisfaction and for designing systems which help users accomplish complex search tasks.
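The paper's composition techniques are learned; as a hedged illustration of the simplest functional composition one might try, query-level satisfaction estimates can be averaged with recency weights, on the intuition that the final queries of a task best reflect whether the goal was met. The decay scheme and values below are assumptions, not the paper's method:

```python
# Hypothetical sketch: recency-weighted composition of query-level
# satisfaction estimates into a single task-level score.

def task_satisfaction(query_sats, decay=0.5):
    """query_sats: per-query satisfaction in issue order, each in [0, 1].
    Later queries get exponentially larger weights."""
    n = len(query_sats)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * s for w, s in zip(weights, query_sats)) / sum(weights)

sats = [0.2, 0.4, 0.9]   # satisfaction rose as the task progressed
score = task_satisfaction(sats)
```

Here the task-level score lands much closer to the final query's satisfaction than a plain mean would, which is one plausible fixed-form point in the space of compositions the paper explores with learned models.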