Pairwise Comparisons Simplified
This study examines the notion of generators of a pairwise comparisons (PC)
matrix. Such an approach decreases the number of pairwise comparisons from
n(n-1)/2 to n-1. An algorithm for reconstructing the PC matrix from its set
of generators is presented.
Comment: 15 pages, two figures.
Clustering and Inference From Pairwise Comparisons
Given a set of pairwise comparisons, the classical ranking problem computes a
single ranking that best represents the preferences of all users. In this
paper, we study the problem of inferring individual preferences, arising in the
context of making personalized recommendations. In particular, we assume that
the users fall into a small number of types; users of the same type provide
similar pairwise comparisons for the items according to the Bradley-Terry
model. We propose an efficient algorithm that accurately estimates the
individual preferences for almost all users, given sufficiently many pairwise
comparisons per type, which is near optimal in sample complexity when the
number of types grows only logarithmically with the number of users or items.
Our algorithm has three steps: first, for each user, compute the net-win
vector, which is a projection of the user's high-dimensional vector of
pairwise comparisons onto a lower-dimensional linear subspace; second, cluster
the users based on the net-win
vectors; third, estimate a single preference for each cluster separately. The
net-win vectors are much less noisy than the high dimensional vectors of
pairwise comparisons and clustering is more accurate after the projection as
confirmed by numerical experiments. Moreover, we show that, when a cluster is
only approximately correct, the maximum likelihood estimation for the
Bradley-Terry model is still close to the true preference.
Comment: Corrected typos in the abstract.
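The first and third steps of the pipeline can be sketched in a minimal form. Assuming comparisons are given as (winner, loser) index pairs, the net-win vector counts wins minus losses per item, and the per-cluster Bradley-Terry fit below uses the classic MM iteration; the function names and the MM choice are illustrative, not the paper's exact procedure, and the clustering step (e.g. k-means on the net-win vectors) is omitted.

```python
from collections import defaultdict

def net_win_vector(comparisons, n_items):
    """Per-item wins minus losses for one user's (winner, loser) pairs."""
    v = [0] * n_items
    for winner, loser in comparisons:
        v[winner] += 1
        v[loser] -= 1
    return v

def bt_scores(comparisons, n_items, iters=200):
    """Bradley-Terry MLE via the standard MM iteration on the pooled
    comparisons of one cluster; returns scores normalized to sum to 1."""
    wins = [0] * n_items
    pair_count = defaultdict(int)
    for w, l in comparisons:
        wins[w] += 1
        pair_count[(min(w, l), max(w, l))] += 1
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            denom = 0.0
            for (a, b), c in pair_count.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += c / (p[i] + p[j])
            new.append(wins[i] / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x / s for x in new]
    return p
```

With four comparisons where item 0 beats item 1 three times out of four, the MM iteration settles at scores proportional to 3:1, so item 0 ranks first.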
Dynamic Metric Learning from Pairwise Comparisons
Recent work in distance metric learning has focused on learning
transformations of data that best align with specified pairwise similarity and
dissimilarity constraints, often supplied by a human observer. The learned
transformations lead to improved retrieval, classification, and clustering
algorithms due to the better adapted distance or similarity measures. Here, we
address the problem of learning these transformations when the underlying
constraint generation process is nonstationary. This nonstationarity can be due
to changes in either the ground-truth clustering used to generate constraints
or changes in the feature subspaces in which the class structure is apparent.
We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD),
a general adaptive, online approach for learning and tracking optimal metrics
as they change over time that is highly robust to a variety of nonstationary
behaviors in the changing metric. We apply the OCELAD framework to an ensemble
of online learners. Specifically, we create a retro-initialized composite
objective mirror descent (COMID) ensemble (RICE) consisting of a set of
parallel COMID learners with different learning rates. We demonstrate
RICE-OCELAD on both real and synthetic data sets and show significant
performance improvements relative to previously proposed batch and online
distance metric learning algorithms.
Comment: To appear in Allerton 2016. arXiv admin note: substantial text overlap with arXiv:1603.0367
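The ensemble idea, parallel online learners with different learning rates combined by performance-weighted averaging, can be sketched on a toy drifting-target problem. This is a simplified stand-in, not RICE-OCELAD itself: it uses plain online gradient descent instead of COMID (no composite regularizer, no retro-initialization), and all names are illustrative.

```python
import math

class OGDLearner:
    """One online gradient-descent learner with a fixed step size."""
    def __init__(self, lr):
        self.lr = lr
        self.x = 0.0
    def predict(self):
        return self.x
    def update(self, grad):
        self.x -= self.lr * grad

def run_ensemble(stream, lrs=(0.01, 0.1, 0.5), beta=0.5):
    """Weighted ensemble of parallel learners under squared loss.
    Multiplicative weights shift mass toward whichever learning rate is
    currently tracking the target best, mirroring the ensemble idea
    behind RICE-OCELAD."""
    learners = [OGDLearner(lr) for lr in lrs]
    weights = [1.0] * len(learners)
    preds = []
    for target in stream:
        total = sum(weights)
        pred = sum(w * l.predict() for w, l in zip(weights, learners)) / total
        preds.append(pred)
        for i, l in enumerate(learners):
            loss = (l.predict() - target) ** 2
            weights[i] *= math.exp(-beta * loss)   # exponential weighting
            l.update(2.0 * (l.predict() - target))  # gradient of squared loss
        s = sum(weights)
        weights = [w / s for w in weights]          # avoid underflow
    return preds
```

On a stationary stream the fast learners converge first and absorb the weight; under drift, the weight migrates between learning rates, which is the adaptivity the ensemble is meant to provide.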