Learning Conditional Lexicographic Preference Trees
We introduce a generalization of lexicographic orders and argue that this generalization constitutes an interesting model class for preference learning in general and ranking in particular. We propose a learning algorithm for inducing a so-called conditional lexicographic preference tree from a given set of training data in the form of pairwise comparisons between objects. Experimentally, we validate our algorithm in the setting of multipartite ranking.
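To make the model class concrete, here is a minimal sketch of a conditional lexicographic preference tree: each node compares one attribute under a preferred value order, and the child used to break ties is conditioned on that attribute's value. The class name, attributes, and value orders are illustrative, not the paper's exact formalism.

```python
# Toy conditional lexicographic preference tree (illustrative sketch).
class CLPNode:
    def __init__(self, attr, order, children=None):
        self.attr = attr                # attribute examined at this node
        self.order = order              # preferred value order, best first
        self.children = children or {}  # child node per value of `attr`

    def compare(self, a, b):
        """Return -1 if object a is preferred, 1 if b is, 0 if the tree cannot tell."""
        ra, rb = self.order.index(a[self.attr]), self.order.index(b[self.attr])
        if ra != rb:
            return -1 if ra < rb else 1
        # Tie on this attribute: descend into the child conditioned on its value.
        child = self.children.get(a[self.attr])
        return child.compare(a, b) if child else 0

# Top priority is `category`; the preferred order over `price` is
# *conditional* on the category, which a plain lexicographic order cannot express.
tree = CLPNode("category", ["wine", "beer"], {
    "wine": CLPNode("price", ["high", "low"]),
    "beer": CLPNode("price", ["low", "high"]),
})

print(tree.compare({"category": "wine", "price": "high"},
                   {"category": "wine", "price": "low"}))   # -1: high price preferred for wine
```

Learning such a tree from pairwise comparisons then amounts to choosing, at each node, the attribute and conditional value orders that best agree with the training pairs.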
Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making
In multi-objective decision planning and learning, much attention is paid to
producing optimal solution sets that contain an optimal policy for every
possible user preference profile. We argue that the step that follows, i.e.,
determining which policy to execute by maximising the user's intrinsic utility
function over this (possibly infinite) set, is under-studied. This paper aims
to fill this gap. We build on previous work on Gaussian processes and pairwise
comparisons for preference modelling, extend it to the multi-objective decision
support scenario, and propose new ordered preference elicitation strategies
based on ranking and clustering. Our main contribution is an in-depth
evaluation of these strategies using computer and human-based experiments. We
show that our proposed elicitation strategies outperform the currently used
pairwise methods, and find that users prefer ranking the most. Our experiments
further show that utilising monotonicity information in GPs by using a linear
prior mean at the start and virtual comparisons to the nadir and ideal points
increases performance. We demonstrate our decision support framework in a
real-world study on traffic regulation, conducted with the city of Amsterdam.
Comment: AAMAS 2018, source code at
https://github.com/lmzintgraf/gp_pref_elici
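The pairwise-comparison preference model this abstract builds on can be sketched compactly: a probit likelihood over latent utilities, plus "virtual comparisons" that declare every candidate better than the nadir point and worse than the ideal point, encoding the monotonicity the authors exploit. The linear utility below stands in for a GP posterior mean; weights, noise level, and candidate points are all illustrative assumptions.

```python
# Sketch of a probit pairwise-preference likelihood with virtual comparisons.
import math

def utility(x, w=(0.6, 0.4)):
    """Linear prior mean over objectives (stand-in for a GP posterior mean)."""
    return sum(wi * xi for wi, xi in zip(w, x))

def pref_prob(x, y, noise=0.1):
    """P(user prefers x over y) under a probit pairwise-comparison model."""
    z = (utility(x) - utility(y)) / (math.sqrt(2) * noise)
    return 0.5 * (1.0 + math.erf(z))

nadir, ideal = (0.0, 0.0), (1.0, 1.0)
candidates = [(0.8, 0.3), (0.4, 0.9)]

# Virtual comparisons encode monotonicity: every candidate beats the nadir
# point and loses to the ideal point, without querying the user.
virtual = [(x, nadir) for x in candidates] + [(ideal, x) for x in candidates]
for winner, loser in virtual:
    assert pref_prob(winner, loser) > 0.5
```

In the full method these pairwise likelihoods condition a Gaussian process over utilities; the virtual pairs simply enter the data set alongside the user's real answers.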
Neural Collaborative Ranking
Recommender systems are aimed at generating a personalized ranked list of
items that an end user might be interested in. With the unprecedented success
of deep learning in computer vision and speech recognition, bridging the gap
between recommender systems and deep neural networks has recently become a hot
topic, and deep learning methods have been shown to achieve state-of-the-art
results on many recommendation tasks. For example, a recent model, NeuMF, first
projects users and items into some shared low-dimensional latent feature space,
and then employs neural nets to model the interaction between the user and item
latent features to obtain state-of-the-art performance on the recommendation
tasks. NeuMF assumes that the non-interacted items are inherently negative and
uses negative sampling to relax this assumption. In this paper, we examine an
alternative approach which does not assume that the non-interacted items are
necessarily negative, just that they are less preferred than interacted items.
Specifically, we develop a new classification strategy based on the widely used
pairwise ranking assumption. We combine our classification strategy with the
recently proposed neural collaborative filtering framework, and propose a
general collaborative ranking framework called Neural Network based
Collaborative Ranking (NCR). We resort to a neural network architecture to
model a user's pairwise preference between items, with the belief that a
neural network can effectively capture the latent structure of the underlying
factors. The
experimental results on two real-world datasets show the superior performance
of our models in comparison with several state-of-the-art approaches.Comment: Proceedings of the 2018 ACM on Conference on Information and
Knowledge Managemen
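The core pairwise-ranking idea can be sketched in a few lines: an interacted item i should merely score *higher* for user u than a non-interacted item j, rather than being treated as an outright negative, so training maximizes sigmoid(s(u,i) - s(u,j)). Plain dot-product scores stand in for the paper's neural interaction network; dimensions, learning rate, and the single training pair are toy assumptions.

```python
# Minimal pairwise-ranking update (illustrative stand-in for NCR's network).
import math, random

random.seed(0)
D = 4  # latent dimension
users = {u: [random.gauss(0, 0.1) for _ in range(D)] for u in range(3)}
items = {i: [random.gauss(0, 0.1) for _ in range(D)] for i in range(5)}

def score(u, i):
    return sum(a * b for a, b in zip(users[u], items[i]))

def sgd_step(u, i, j, lr=0.1):
    """One SGD step on -log sigmoid(s(u,i) - s(u,j)), pushing i above j."""
    x = score(u, i) - score(u, j)
    g = 1.0 / (1.0 + math.exp(x))  # = 1 - sigmoid(x), the gradient scale
    for d in range(D):
        uu, ii, jj = users[u][d], items[i][d], items[j][d]
        users[u][d] += lr * g * (ii - jj)
        items[i][d] += lr * g * uu
        items[j][d] -= lr * g * uu

# User 0 interacted with item 1 but not item 2: train on that single pair.
for _ in range(200):
    sgd_step(0, 1, 2)
print(score(0, 1) > score(0, 2))  # True: the interacted item now ranks higher
```

Note that item 2 is never forced to a negative score; the model only learns the relative ordering, which is exactly the relaxation the abstract describes.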
From Rankings to Ratings: Rank Scoring Via Active Learning
In this paper we present RaScAL, an active learning approach to predicting real-valued scores for items given access to an oracle and knowledge of the overall item-ranking. In an experiment on six different datasets, we find that RaScAL consistently outperforms the state-of-the-art. The RaScAL algorithm represents one step within a proposed overall system of preference elicitation of scores via pairwise comparisons.
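The general idea of rank scoring with a limited query budget can be illustrated as follows: given items in known ranking order, query the score oracle at the endpoints, then repeatedly query inside the widest unscored interval and linearly interpolate the rest. The query-selection rule and function names here are an illustrative stand-in, not RaScAL's actual algorithm.

```python
# Illustrative budgeted rank-scoring via midpoint queries and interpolation.
def rank_scores(n, oracle, budget):
    """Estimate scores for n ranked items using at most `budget` oracle queries."""
    known = {0: oracle(0), n - 1: oracle(n - 1)}
    for _ in range(budget - 2):
        idx = sorted(known)
        # Pick the widest gap (in rank positions) between scored items.
        lo, hi = max(zip(idx, idx[1:]), key=lambda p: p[1] - p[0])
        if hi - lo < 2:
            break
        mid = (lo + hi) // 2
        known[mid] = oracle(mid)
    # Linearly interpolate every unqueried item between its scored neighbours.
    idx = sorted(known)
    est = {}
    for lo, hi in zip(idx, idx[1:]):
        for k in range(lo, hi):
            t = (k - lo) / (hi - lo)
            est[k] = (1 - t) * known[lo] + t * known[hi]
    est[n - 1] = known[n - 1]
    return [est[k] for k in range(n)]

true = [x * x for x in range(9)]  # monotone ground-truth scores
approx = rank_scores(9, lambda i: true[i], budget=5)
```

Because the items are already ranked, the estimates stay monotone by construction; active learning then concentrates the remaining queries where interpolation error is likely largest.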
Maxing, Ranking and Preference Learning
PAC maximum selection (maxing) and ranking of elements via random pairwise comparisons have diverse applications and have been studied under many models and assumptions. We consider $(\epsilon,\delta)$-PAC maxing and ranking using pairwise comparisons for general probabilistic models. We present a comprehensive understanding of three important problems in PAC preference learning: maxing, ranking, and estimating \emph{all} pairwise preference probabilities, in the adaptive setting.

{\bf SST + STI:} We consider $(\epsilon,\delta)$-PAC maximum selection and ranking using pairwise comparisons for general probabilistic models whose comparison probabilities satisfy \emph{strong stochastic transitivity (SST)} and the \emph{stochastic triangle inequality (STI)}. Modifying the popular knockout tournament, we propose a simple maximum-selection algorithm that uses $\mathcal{O}\left(\frac{n}{\epsilon^2}\right)$ comparisons, optimal up to a constant factor. We then derive a general framework that uses noisy binary search to speed up many ranking algorithms, and combine it with merge sort to obtain a ranking algorithm that uses $\mathcal{O}\left(\frac{n}{\epsilon^2}\log n(\log \log n)^3\right)$ comparisons, optimal up to a $(\log \log n)^3$ factor.

{\bf SST +/- STI and Borda:} With just one simple natural assumption, \emph{strong stochastic transitivity (SST)}, we show that maxing can be performed with linearly many comparisons yet ranking requires quadratically many. With no assumptions at all, we show that for the Borda-score metric, maximum selection can be performed with linearly many comparisons and ranking can be performed with $\mathcal{O}(n\log n)$ comparisons.

{\bf General Transitive Models:} With just \emph{weak stochastic transitivity (WST)}, we show that maxing requires quadratically many comparisons, and with the slightly more restrictive \emph{medium stochastic transitivity (MST)}, we present a linear-complexity maxing algorithm. With \emph{strong stochastic transitivity (SST)} and the \emph{stochastic triangle inequality (STI)}, we derive a ranking algorithm with optimal complexity and an optimal algorithm that estimates all pairwise preference probabilities.

{\bf Sequential and Competitive:} We extend the well-known \emph{secretary problem} to a probabilistic setting, and apply the intuition gained to derive the first query-optimal sequential algorithm for probabilistic maxing. Furthermore, departing from previous assumptions, the algorithm and performance guarantees apply even for infinitely many items, and hence in particular do not require a-priori knowledge of the number of items. The algorithm has linear complexity, and is optimal also in the streaming setting and for both traditional and dueling bandits. In a non-streaming setting, a modification of the algorithm is \emph{competitive} in that it requires essentially the lowest number of queries not just in the worst case, but for every underlying distribution.
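The knockout-tournament idea behind the maxing result can be sketched directly: pair the items up, decide each match by a majority vote over repeated noisy comparisons, and recurse on the winners. The fixed per-match comparison budget below is a toy constant, not the paper's carefully tuned round-by-round schedule.

```python
# Toy knockout tournament for PAC maximum selection under noisy comparisons.
import random

random.seed(1)

def noisy_compare(a, b, p=0.8):
    """One noisy comparison: the truly larger item wins with probability p."""
    return (a > b) == (random.random() < p)

def duel(a, b, m=41):
    """Decide a match by majority vote over m noisy comparisons."""
    wins = sum(noisy_compare(a, b) for _ in range(m))
    return a if wins > m // 2 else b

def knockout_max(items):
    items = list(items)
    while len(items) > 1:
        nxt = [duel(items[k], items[k + 1]) for k in range(0, len(items) - 1, 2)]
        if len(items) % 2:            # odd item out gets a bye to the next round
            nxt.append(items[-1])
        items = nxt
    return items[0]

print(knockout_max(range(16)))  # almost surely the true maximum, 15
```

Since each round halves the field, the total number of comparisons is linear in the number of items for a fixed per-match budget, matching the flavour of the linear-comparison maxing guarantee.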