Enhancing User Personalization in Conversational Recommenders
Conversational recommenders are emerging as a powerful tool to personalize a
user's recommendation experience. Through a back-and-forth dialogue, users can
quickly home in on just the right items. Many approaches to conversational
recommendation, however, only partially explore the user preference space and
make limiting assumptions about how user feedback can be best incorporated,
resulting in long dialogues and poor recommendation performance. In this paper,
we propose a novel conversational recommendation framework with two unique
features: (i) a greedy NDCG attribute selector, to enhance user personalization
in the interactive preference elicitation process by prioritizing attributes
that most effectively represent the actual preference space of the user; and
(ii) a user representation refiner, to effectively fuse together the user
preferences collected from the interactive elicitation process to obtain a more
personalized understanding of the user. Through extensive experiments on four
frequently used datasets, we find the proposed framework not only outperforms
all the state-of-the-art conversational recommenders (in terms of both
recommendation performance and conversation efficiency), but also provides a
more personalized experience for the user under the proposed multi-groundtruth
multi-round conversational recommendation setting.
Comment: To appear at TheWebConf (WWW) 202
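The greedy NDCG attribute selector described above could be sketched roughly as follows. This is a hypothetical simplification, assuming binary item relevance and set-valued attributes; the paper's actual scoring and preference model are not specified in the abstract, and `greedy_attribute` and its data layout are illustrative names only.

```python
import math

def dcg(relevances):
    # Discounted cumulative gain over a ranked list of relevance scores.
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked, ideal):
    # Normalize DCG by the DCG of the ideal (descending) ordering.
    best = dcg(sorted(ideal, reverse=True))
    return dcg(ranked) / best if best > 0 else 0.0

def greedy_attribute(attributes, candidates, user_likes):
    """Greedily pick the attribute to ask about next: the one whose
    presence/absence split ranks the user's liked items highest by NDCG."""
    def score(attr):
        # Rank candidates carrying the attribute first, then measure how
        # well that ordering surfaces the items the user actually likes.
        ranked = sorted(candidates, key=lambda c: attr in c["attrs"], reverse=True)
        rels = [1 if c["id"] in user_likes else 0 for c in ranked]
        return ndcg(rels, rels)
    return max(attributes, key=score)
```

In this toy form, an attribute shared by all of the user's liked items scores a perfect NDCG of 1.0 and is selected first.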
How Trustworthy are the Existing Performance Evaluations for Basic Vision Tasks?
This paper examines performance evaluation criteria for basic vision tasks
involving sets of objects, namely object detection, instance-level segmentation,
and multi-object tracking. The rankings of algorithms under current criteria
fluctuate with different parameter choices, e.g., the Intersection over Union
(IoU) threshold, making their evaluations unreliable. More importantly, there
is no means to even verify whether we can trust the evaluations of a criterion.
This work advocates a notion of trustworthiness for criteria, which requires
(i) robustness to parameters for reliability, (ii) contextual meaningfulness in
sanity tests, and (iii) consistency with mathematical requirements such as the
metric properties. We show that such requirements were overlooked by many
widely-used criteria. We also explore alternative criteria using metrics for
sets of shapes, and assess them against these requirements to find trustworthy
criteria.
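To make the parameter-sensitivity point concrete, here is a toy illustration (hypothetical boxes and a bare recall criterion, not the paper's evaluation code) in which the ranking of two detectors flips when the IoU threshold changes:

```python
def iou(a, b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall_at(preds, gts, thr):
    # Fraction of ground-truth boxes matched by some prediction at IoU >= thr.
    return sum(any(iou(p, g) >= thr for p in preds) for g in gts) / len(gts)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
det_a = [(0, 0, 10, 10)]                    # one perfect box, one object missed
det_b = [(1, 1, 11, 11), (21, 21, 31, 31)]  # both objects hit, boxes shifted (IoU ~0.68)

# At IoU >= 0.5 detector B ranks higher; at IoU >= 0.9 detector A does.
```

This is exactly the kind of ranking instability that a robustness-to-parameters requirement is meant to rule out.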
K-Space at TRECVID 2008
In this paper we describe K-Space’s participation in
TRECVid 2008 in the interactive search task. For 2008
the K-Space group performed one of the largest interactive
video information retrieval experiments conducted
in a laboratory setting. We had three institutions participating
in a multi-site multi-system experiment. In
total 36 users participated, 12 each from Dublin City
University (DCU, Ireland), University of Glasgow (GU,
Scotland) and Centrum Wiskunde and Informatica (CWI,
the Netherlands). Three user interfaces were developed,
two from DCU which were also used in 2007 as well as
an interface from GU. All interfaces leveraged the same
search service. Using a Latin squares arrangement, each
user conducted 12 topics, yielding 6 runs per site and 18
in total. We officially submitted 3 of these runs to NIST
for evaluation, with an additional expert run using
a 4th system. Our submitted runs performed around
the median. In this paper we present an overview of the
search system used, the experimental setup, and a
preliminary analysis of our results.
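A Latin-squares topic assignment of the kind used here can be produced with a simple cyclic construction. This is a generic sketch, not the K-Space assignment procedure, whose details are not given in the abstract:

```python
def latin_square(n):
    # Row i is the cyclic shift of 0..n-1 by i, so every value appears
    # exactly once in each row and each column -- the counterbalancing
    # property that spreads topics evenly over users and sessions.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# e.g. 12 topics for 12 users: user i runs topic square[i][j] in session j,
# so no topic is over-represented for any user or any session position.
square = latin_square(12)
```

The same construction works for any number of conditions, which is why Latin squares are a standard counterbalancing device in interactive-retrieval user studies.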
Prioritized Multi-View Stereo Depth Map Generation Using Confidence Prediction
In this work, we propose a novel approach to prioritize the depth map
computation of multi-view stereo (MVS) to obtain compact 3D point clouds of
high quality and completeness at low computational cost. Our prioritization
approach operates before the MVS algorithm is executed and consists of two
steps. In the first step, we aim to find a good set of matching partners for
each view. In the second step, we rank the resulting view clusters (i.e. key
views with matching partners) according to their impact on the fulfillment of
desired quality parameters such as completeness, ground resolution and
accuracy. In addition to the geometric analysis, we use a novel machine learning
technique for training a confidence predictor. The purpose of this confidence
predictor is to estimate the chances of a successful depth reconstruction for
each pixel in each image for one specific MVS algorithm based on the RGB images
and the image constellation. The underlying machine learning technique does not
require any ground truth or manually labeled data for training, but instead
adapts ideas from depth map fusion for providing a supervision signal. The
trained confidence predictor allows us to evaluate the quality of image
constellations and their potential impact on the resulting 3D reconstruction
and thus builds a solid foundation for our prioritization approach. In our
experiments, we thus reach more than 70% of the maximum attainable quality
fulfillment using only 5% of the available images as key views. For
evaluating our approach within and across different domains, we use two
completely different scenarios, i.e. cultural heritage preservation and
reconstruction of single-family houses.
Comment: This paper was accepted to the ISPRS Journal of Photogrammetry and Remote
Sensing
(https://www.journals.elsevier.com/isprs-journal-of-photogrammetry-and-remote-sensing)
on March 21, 2018. The official version will be made available on
ScienceDirect (https://www.sciencedirect.com).
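The two-step prioritization (form view clusters, then rank them by expected contribution to the quality goals) might be sketched as below. The per-goal cluster scores stand in for the learned confidence predictor; all field names, weights, and the budget notion are illustrative assumptions, not the paper's actual interface:

```python
def select_key_views(clusters, weights, budget):
    """Greedy key-view selection: rank view clusters by a weighted sum of
    predicted per-goal quality gains (e.g. completeness, accuracy), then
    take clusters in that order until the image budget is spent."""
    def gain(c):
        # Weighted aggregate of the predicted quality contributions.
        return sum(weights[g] * c["scores"][g] for g in weights)
    chosen, used = [], 0
    for c in sorted(clusters, key=gain, reverse=True):
        if used + c["cost"] <= budget:
            chosen.append(c["id"])
            used += c["cost"]
    return chosen
```

Because the ranking happens before any MVS computation, a small budget (the abstract's 5% of images) can be spent on the clusters predicted to matter most.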