Ranking algorithms for implicit feedback
This report presents novel algorithms that use eye movements as implicit relevance feedback to improve search performance. The algorithms are evaluated on the "Transport Rank Five" dataset, which was previously collected in Task 8.3. We demonstrate that a simple linear combination or tensor product of eye movement and image features can improve retrieval accuracy.
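The fusion step described above can be sketched as follows; the feature vectors, weights, and dimensions are hypothetical placeholders for illustration, not the report's actual features:

```python
import numpy as np

# Hypothetical feature vectors; names and dimensions are illustrative only.
eye_features = np.array([0.2, 0.5, 0.8])    # e.g. fixation-duration statistics
image_features = np.array([0.1, 0.9])       # e.g. visual descriptors

# Simple linear combination: weight each modality and concatenate into one
# joint feature vector for the ranker.
linear_combined = np.concatenate([0.5 * eye_features, 0.5 * image_features])

# Tensor product: the outer product captures all pairwise interactions
# between eye-movement and image features, then is flattened for a ranker.
tensor_combined = np.outer(eye_features, image_features).ravel()

print(linear_combined.shape)  # (5,)
print(tensor_combined.shape)  # (6,)
```

The tensor product grows as the product of the two dimensionalities, so it trades richer cross-modal interactions for a larger feature space than plain concatenation.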
Inferring User Knowledge Level from Eye Movement Patterns
The acquisition of information and the search interaction process are strongly influenced by a person's knowledge of the domain and the task. In this paper we show that a user's level of domain knowledge can be inferred from their interactive search behaviors without considering the content of queries or documents. A technique is presented to model a user's information acquisition process during search using only measurements of eye movement patterns. In a user study (n=40) of search in the domain of genomics, a representation of each participant's domain knowledge was constructed using self-ratings of knowledge of genomics-related terms (n=409). Cognitive effort features associated with reading eye movement patterns were calculated for each reading instance during the search tasks. The results show correlations between the cognitive effort due to reading and an individual's level of domain knowledge. We construct exploratory regression models suggesting that it is possible to predict a user's level of knowledge from real-time measurements of eye movement patterns during a task session.
The Role of Word-Eye-Fixations for Query Term Prediction
Throughout the search process, the user's gaze on inspected SERPs and
websites can reveal his or her search interests. Gaze behavior can be captured
with eye tracking and described with word-eye-fixations. Word-eye-fixations
contain the user's accumulated gaze fixation duration on each individual word
of a web page. In this work, we analyze the role of word-eye-fixations for
predicting query terms. We investigate the relationship between a range of
in-session features, in particular, gaze data, with the query terms and train
models for predicting query terms. We use a dataset of 50 search sessions
obtained through a lab study in the social sciences domain. Using established
machine learning models, we can predict query terms with comparatively high
accuracy, even with little training data. Feature analysis shows that the
categories Fixation, Query Relevance, and Session Topic contain the most
effective features for our task.
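The word-eye-fixation representation described above, accumulated gaze duration per word, can be sketched as a simple aggregation; the fixation log format and values are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical fixation log: (word, fixation_duration_ms) events from an
# eye tracker; the exact log format is an assumption for illustration.
fixations = [("gaze", 180), ("behavior", 220), ("gaze", 140), ("query", 90)]

# Word-eye-fixations: accumulate total fixation duration on each word.
word_eye_fixations = defaultdict(int)
for word, duration_ms in fixations:
    word_eye_fixations[word] += duration_ms

# Words with the longest accumulated gaze are candidate query terms.
ranked = sorted(word_eye_fixations.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('gaze', 320), ('behavior', 220), ('query', 90)]
```

In practice such durations would be one feature group alongside the query-relevance and session-topic features the abstract mentions.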
Factuality Checking in News Headlines with Eye Tracking
We study whether it is possible to infer if a news headline is true or false
using only the movement of the human eyes when reading news headlines. Our
study with 55 participants who are eye-tracked when reading 108 news headlines
(72 true, 36 false) shows that false headlines receive statistically
significantly less visual attention than true headlines. We further build an
ensemble learner that predicts news headline factuality using only eye-tracking
measurements. Our model yields a mean AUC of 0.688 and is better at detecting
false than true headlines. Through a model analysis, we find that eye-tracking
25 users when reading 3-6 headlines is sufficient for our ensemble learner.

Comment: Accepted to SIGIR 202
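The reported mean AUC of 0.688 refers to the ROC AUC; a minimal sketch of how that metric is computed, using hypothetical labels and scores rather than the study's data, is:

```python
def auc(labels, scores):
    """ROC AUC: the probability that a randomly chosen positive example is
    scored above a randomly chosen negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: 1 = true headline, 0 = false; the scores stand in for
# an ensemble's output over eye-tracking features such as dwell time.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.5, 0.2]
print(auc(labels, scores))  # 5/6 ~= 0.833
```

An AUC of 0.5 corresponds to random guessing, so 0.688 indicates a modest but real signal in the eye-tracking measurements.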
Relevance Prediction from Eye-movements Using Semi-interpretable Convolutional Neural Networks
We propose an image-classification method to predict the perceived-relevance
of text documents from eye-movements. An eye-tracking study was conducted where
participants read short news articles, and rated them as relevant or irrelevant
for answering a trigger question. We encode participants' eye-movement
scanpaths as images, and then train a convolutional neural network classifier
using these scanpath images. The trained classifier is used to predict
participants' perceived-relevance of news articles from the corresponding
scanpath images. This method is content-independent, as the classifier does not
require knowledge of the screen content or the user's information task. Even
with little data, the image classifier can predict perceived-relevance with up
to 80% accuracy. When compared to similar eye-tracking studies from the
literature, this scanpath image classification method outperforms previously
reported metrics by appreciable margins. We also attempt to interpret how the
image classifier differentiates between scanpaths on relevant and irrelevant
documents.
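One plausible way to encode a scanpath as an image, as the abstract describes, is to render fixations onto a grid whose pixel intensities accumulate dwell time; the coordinates, page size, and grid resolution below are illustrative assumptions, not the paper's exact encoding:

```python
import numpy as np

# Hypothetical scanpath: (x, y, duration_ms) fixations on a 100x100 page.
scanpath = [(10, 20, 150), (40, 25, 300), (70, 80, 120)]

GRID = 32  # render the scanpath onto a GRID x GRID grayscale image
image = np.zeros((GRID, GRID), dtype=np.float32)
for x, y, duration in scanpath:
    col = min(int(x / 100 * GRID), GRID - 1)
    row = min(int(y / 100 * GRID), GRID - 1)
    image[row, col] += duration  # pixel intensity encodes accumulated dwell

image /= image.max()  # normalise to [0, 1] before feeding a CNN classifier
print(image.shape)  # (32, 32)
```

Once fixations are rasterised this way, any standard image classifier can be trained on them, which is what makes the approach content-independent.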
Prediction of Search Targets From Fixations in Open-World Settings
Previous work on predicting the target of visual search from human fixations
only considered closed-world settings in which training labels are available
and predictions are performed for a known set of potential targets. In this
work we go beyond the state of the art by studying search target prediction in
an open-world setting in which we no longer assume that we have fixation data
to train for the search targets. We present a dataset containing fixation data
of 18 users searching for natural images from three image categories within
synthesised image collages of about 80 images. In a closed-world baseline
experiment we show that we can predict the correct target image out of a
candidate set of five images. We then present a new problem formulation for
search target prediction in the open-world setting that is based on learning
compatibilities between fixations and potential targets.
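A minimal sketch of scoring compatibilities between fixations and potential targets, assuming cosine similarity over hypothetical feature vectors (the paper's learned compatibility function may differ):

```python
import numpy as np

def compatibility(fixation_feats, target_feats):
    """Cosine similarity as a simple compatibility score between aggregated
    fixation features and a candidate target's visual features."""
    num = float(np.dot(fixation_feats, target_feats))
    den = float(np.linalg.norm(fixation_feats) * np.linalg.norm(target_feats))
    return num / den

# Hypothetical features; in the open-world setting the best-scoring target
# may be one never seen during training.
fixations = np.array([0.8, 0.1, 0.3])
candidates = {"cat": np.array([0.9, 0.2, 0.1]),
              "mug": np.array([0.1, 0.9, 0.8])}
best = max(candidates, key=lambda t: compatibility(fixations, candidates[t]))
print(best)  # 'cat'
```

Because the score depends only on feature geometry rather than per-target training labels, it extends to targets outside the training set, which is the crux of the open-world formulation.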