Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model
Modern search engine result pages often provide immediate value to users and
organize information in such a way that it is easy to navigate. The core
ranking function contributes to this and so do result snippets, smart
organization of result blocks and extensive use of one-box answers or side
panels. While they are useful to the user and help search engines to stand out,
such features present two big challenges for evaluation. First, the presence of
such elements on a search engine result page (SERP) may lead to the absence of
clicks that, however, does not signal dissatisfaction: so-called "good
abandonment." Second, the non-linear layout and visual differences of SERP
items may lead to non-trivial patterns of user attention, which are not
captured by existing evaluation metrics.
In this paper we propose a model of user behavior on a SERP that jointly
captures click behavior, user attention and satisfaction, the CAS model, and
demonstrate that it gives more accurate predictions of user actions and
self-reported satisfaction than existing models based on clicks alone. We use
the CAS model to build a novel evaluation metric that can be applied to
non-linear SERP layouts and that can account for the utility that users obtain
directly on a SERP. We demonstrate that this metric shows better agreement with
user-reported satisfaction than conventional evaluation metrics.
Comment: CIKM 2016, Proceedings of the 25th ACM International Conference on
Information and Knowledge Management.
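The idea of crediting utility obtained directly on the SERP can be sketched as an attention-weighted utility score. This is a minimal illustrative sketch, not the CAS model itself: the item structure, attention probabilities, and utilities below are assumptions.

```python
# Hedged sketch: score a SERP as attention-weighted utility, so that items
# consumed directly on the page (e.g. a one-box answer) earn credit even
# without a click. All numbers here are illustrative assumptions.

def serp_utility(items):
    """items: list of dicts with
       p_attend : estimated probability the user attends to the item
       utility  : utility of the item, including value obtained directly
                  on the SERP without a click."""
    return sum(it["p_attend"] * it["utility"] for it in items)

serp = [
    {"p_attend": 0.9, "utility": 1.0},  # one-box answer, read on the SERP
    {"p_attend": 0.6, "utility": 0.5},  # organic result, partially relevant
    {"p_attend": 0.2, "utility": 0.0},  # side panel, not relevant
]
score = serp_utility(serp)
```

A metric of this shape rewards pages that place useful content where attention actually lands, which a click-only metric would miss entirely on a good-abandonment session.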
The Phyre2 web portal for protein modeling, prediction and analysis
Phyre2 is a suite of tools available on the web to predict and analyze protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server, for which we previously published a paper in Nature Protocols. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites and analyze the effect of amino acid variants (e.g., nonsynonymous SNPs (nsSNPs)) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail they determine. This protocol will guide users from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional available tools is described to find a protein structure in a genome, to submit large numbers of sequences at once and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 min and 2 h after submission.
Unbiased Learning to Rank with Unbiased Propensity Estimation
Learning to rank with biased click data is a well-known challenge. A variety
of methods have been explored to debias click data for learning to rank, such as
click models, result interleaving and, more recently, the unbiased
learning-to-rank framework based on inverse propensity weighting. Despite their
differences, most existing studies separate the estimation of click bias
(namely the \textit{propensity model}) from the learning of ranking algorithms.
To estimate click propensities, they either conduct online result
randomization, which can negatively affect the user experience, or offline
parameter estimation, which has special requirements for click data and is
optimized for objectives (e.g. click likelihood) that are not directly related
to the ranking performance of the system. In this work, we address those
problems by unifying the learning of propensity models and ranking models. We
find that the problem of estimating a propensity model from click data is a
dual problem of unbiased learning to rank. Based on this observation, we
propose a Dual Learning Algorithm (DLA) that jointly learns an unbiased ranker
and an \textit{unbiased propensity model}. DLA is an automatic unbiased
learning-to-rank framework as it directly learns unbiased ranking models from
biased click data without any preprocessing. It can adapt to the change of bias
distributions and is applicable to online learning. Our empirical experiments
with synthetic and real-world data show that the models trained with DLA
significantly outperformed the unbiased learning-to-rank algorithms based on
result randomization and the models trained with relevance signals extracted by
click models.
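The inverse-propensity-weighting step that underlies this family of methods can be sketched briefly. This is an illustrative sketch of plain IPW reweighting, with made-up propensities and losses; it does not show DLA's distinguishing feature of learning the propensity model jointly with the ranker.

```python
# Hedged sketch of inverse-propensity-weighted (IPW) training for unbiased
# learning to rank: each clicked document's loss is reweighted by
# 1 / P(examined at its rank), correcting for position bias.
# Propensities and losses below are illustrative assumptions.

def ipw_loss(clicks, positions, propensity, pointwise_loss):
    """clicks: 1/0 per document; positions: rank of each document;
       propensity[k]: estimated examination probability at rank k;
       pointwise_loss[i]: the ranker's loss for document i."""
    total = 0.0
    for c, pos, l in zip(clicks, positions, pointwise_loss):
        if c:  # only clicks are observed; the weight de-biases them
            total += l / propensity[pos]
    return total

# Examination probability decays with rank (a common position-bias model).
propensity = {0: 1.0, 1: 0.5, 2: 0.25}
loss = ipw_loss(clicks=[1, 0, 1], positions=[0, 1, 2],
                propensity=propensity, pointwise_loss=[0.2, 0.9, 0.1])
```

The click at rank 2 is up-weighted by a factor of four, reflecting that a document examined only a quarter of the time contributes a quarter of the clicks its relevance warrants.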
Why People Search for Images using Web Search Engines
What are the intents or goals behind human interactions with image search
engines? Knowing why people search for images is of major concern to Web image
search engines because user satisfaction may vary as intent varies. Previous
analyses of image search behavior have mostly been query-based, focusing on
what images people search for, rather than intent-based, that is, why people
search for images. To date, there is no thorough investigation of how different
image search intents affect users' search behavior.
In this paper, we address the following questions: (1) Why do people search
for images in text-based Web image search systems? (2) How does image search
behavior change with user intent? (3) Can we predict user intent effectively
from interactions during the early stages of a search session? To this end, we
conduct both a lab-based user study and a commercial search log analysis.
We show that user intents in image search can be grouped into three classes:
Explore/Learn, Entertain, and Locate/Acquire. Our lab-based user study reveals
different user behavior patterns under these three intents, such as first click
time, query reformulation, dwell time and mouse movement on the result page.
Based on user interaction features during the early stages of an image search
session, that is, before mouse scroll, we develop an intent classifier that is
able to achieve promising results for classifying intents into our three intent
classes. Given that all features can be obtained online and unobtrusively, the
predicted intents can provide guidance for choosing ranking methods immediately
after scrolling.
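Early-session intent prediction of this kind can be sketched with a toy classifier. This is a hedged illustration only: the feature set, centroid values, and the nearest-centroid rule are stand-ins, not the classifier or feature values from the study.

```python
# Hedged sketch: classify image-search intent (Explore/Learn, Entertain,
# Locate/Acquire) from early-session interaction features, using a toy
# nearest-centroid rule. Features and centroids are invented assumptions.

import math

CENTROIDS = {
    # (first_click_time_s, query_reformulations, cursor_distance_px)
    "Explore/Learn":  (8.0, 2.0, 1500.0),
    "Entertain":      (3.0, 0.0, 3000.0),
    "Locate/Acquire": (2.0, 1.0,  500.0),
}

def predict_intent(features):
    """Return the intent class whose centroid is nearest to the features."""
    def dist(centroid):
        return math.sqrt(sum((f - c) ** 2
                             for f, c in zip(features, centroid)))
    return min(CENTROIDS, key=lambda k: dist(CENTROIDS[k]))

# A fast first click with little cursor travel suggests a targeted search.
intent = predict_intent((2.2, 1, 600.0))
```

Because every feature here is observable before the first scroll, a prediction of this shape is available early enough to influence ranking for the rest of the session.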
Restriction landmark genomic scanning (RLGS) spot identification by second generation virtual RLGS in multiple genomes with multiple enzyme combinations.
Background: Restriction landmark genomic scanning (RLGS) is one of the most successfully applied methods for the identification of aberrant CpG island hypermethylation in cancer, as well as the identification of tissue-specific methylation of CpG islands. However, a limitation to the utility of this method has been the ability to assign specific genomic sequences to RLGS spots, a process commonly referred to as "RLGS spot cloning."
Results: We report the development of a virtual RLGS method (vRLGS) that allows for RLGS spot identification in any sequenced genome and with any enzyme combination. We report significant improvements in predicting DNA fragment migration patterns by incorporating sequence information into the migration models, and demonstrate a median Euclidean distance between actual and predicted spot migration of 0.18 centimeters for the most complex human RLGS pattern. We report the confirmed identification of 795 human and 530 mouse RLGS spots for the most commonly used enzyme combinations. We also developed a method to filter the virtual spots to reduce the number of extra spots seen on a virtual profile for both the mouse and human genomes. We demonstrate use of this filter to simplify spot cloning and to assist in the identification of spots exhibiting tissue-specific methylation.
Conclusion: The new vRLGS system reported here is highly robust for the identification of novel RLGS spots. The migration models developed are not specific to the genome being studied or the enzyme combination being used, making this tool broadly applicable. The identification of hundreds of mouse and human RLGS spot loci confirms the strong bias of RLGS studies to focus on CpG islands and provides a valuable resource to rapidly study their methylation.
Constructing an Interaction Behavior Model for Web Image Search
User interaction behavior is a valuable source of implicit relevance
feedback. In Web image search a different type of search result presentation is
used than in general Web search, which leads to different interaction
mechanisms and user behavior. For example, image search results are
self-contained, so that users do not need to click the results to view the
landing page as in general Web search, which generates sparse click data. Also,
two-dimensional result placement instead of a linear result list makes browsing
behaviors more complex. Thus, it is hard to apply standard user behavior models
(e.g., click models) developed for general Web search to Web image search.
In this paper, we conduct a comprehensive image search user behavior analysis
using data from a lab-based user study as well as data from a commercial search
log. We then propose a novel interaction behavior model, called grid-based user
browsing model (GUBM), whose design is motivated by observations from our data
analysis. GUBM can both capture users' interaction behavior, including cursor
hovering, and alleviate position bias. The advantages of GUBM are two-fold: (1)
It is based on an unsupervised learning method and does not need manually
annotated data for training. (2) It is based on user interaction features on
search engine result pages (SERPs) and is easily transferable to other
scenarios that have a grid-based interface such as video search engines. We
conduct extensive experiments to test the performance of our model using a
large-scale commercial image search log. Experimental results show that in
terms of behavior prediction (perplexity), and topical relevance and image
quality (normalized discounted cumulative gain (NDCG)), GUBM outperforms
state-of-the-art baseline models as well as the original ranking. We make the
implementation of GUBM and related datasets publicly available for future
studies.
Comment: 10 pages.
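The examination hypothesis extended to a two-dimensional grid, which underlies models like GUBM, can be sketched compactly. This is an illustrative position-bias sketch under assumed examination probabilities; GUBM's actual parameters are learned from logs and it additionally models cursor hovering, which is omitted here.

```python
# Hedged sketch of a position-bias click model over a 2-D result grid:
# P(click) = P(examine at grid cell) * P(relevant). The per-cell
# examination probabilities below are invented assumptions, not learned
# GUBM parameters.

# Examination probability by (row, col): decays downward and rightward.
EXAMINE = {
    (0, 0): 0.95, (0, 1): 0.85, (0, 2): 0.70,
    (1, 0): 0.60, (1, 1): 0.50, (1, 2): 0.35,
}

def click_prob(cell, relevance):
    """Examination hypothesis: a click requires both examining the cell
    and judging the image relevant."""
    return EXAMINE[cell] * relevance

p = click_prob((1, 1), relevance=0.8)  # mid-grid image, fairly relevant
```

Factoring clicks this way is what lets such a model separate an image's attractiveness from where it happened to be placed, which is the position-bias correction the abstract refers to.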
Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundamental
challenge in daily vision. Visual search must be selective enough to
discriminate the target from distractors, invariant to changes in the
appearance of the target, efficient to avoid exhaustive exploration of the
image, and must generalize to locate novel target objects with zero-shot
training. Previous work has focused on searching for perfect matches of a
target after extensive category-specific training. Here we show for the first
time that humans can efficiently and invariantly search for natural objects in
complex scenes. To gain insight into the mechanisms that guide visual search,
we propose a biologically inspired computational model that can locate targets
without exhaustive sampling and generalize to novel objects. The model provides
an approximation to the mechanisms integrating bottom-up and top-down signals
during search in natural scenes.
Comment: 6 figures, 1 supplementary figure.