4 research outputs found

    Mallows Ranking Models: Maximum Likelihood Estimate and Regeneration

    This paper is concerned with various Mallows ranking models. We study the statistical properties of the MLE of Mallows' φ model. We also make connections between various Mallows ranking models, encompassing recent progress in mathematics. Motivated by the infinite top-t ranking model, we propose an algorithm to select the model size t automatically. The key idea relies on the renewal property of such an infinite random permutation. Our algorithm shows good performance on several data sets. Comment: 10 pages, 2 figures, 5 tables. This paper is published at http://proceedings.mlr.press/v97/tang19a.htm
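As background for the Mallows φ model discussed above, here is a minimal illustrative sketch of sampling from it via the standard repeated-insertion construction, together with the Kendall (inversion) distance that defines the model. This is generic textbook machinery, not code from the paper; the function names are my own.

```python
import random

def mallows_phi_sample(n, phi, rng=random):
    """Draw one permutation of 0..n-1 from a Mallows phi model centered
    at the identity, using repeated insertion: item i is inserted at
    position j (0 <= j <= i) with probability proportional to phi**(i-j),
    since that insertion creates i-j new inversions."""
    perm = []
    for i in range(n):
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                perm.insert(j, i)
                break
    return perm

def kendall_tau(perm):
    """Number of inversions relative to the identity ranking."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])
```

With phi close to 0 the samples concentrate on the central (identity) ranking; with phi = 1 the distribution is uniform over all n! permutations.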

    Learning Mixtures of Ranking Models

    This work concerns learning probabilistic models for ranking data in a heterogeneous population. The specific problem we study is learning the parameters of a Mallows Mixture Model. Despite being widely studied, current heuristics for this problem lack theoretical guarantees and can get stuck in bad local optima. We present the first polynomial-time algorithm which provably learns the parameters of a mixture of two Mallows models. A key component of our algorithm is a novel use of tensor decomposition techniques to learn the top-k prefix in both rankings. Before this work, even the question of identifiability in the case of a mixture of two Mallows models was unresolved.
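To make the two-component setting concrete, the sketch below computes the posterior probability (the "responsibility") that a given ranking came from the first component of a two-component Mallows mixture, assuming the parameters are already known. This is only an illustration of the model; the paper's contribution is learning these parameters, which is the hard direction.

```python
from itertools import combinations

def kendall_distance(pi, sigma):
    """Kendall tau distance: number of pairs ordered differently by pi and sigma."""
    pos = {v: i for i, v in enumerate(sigma)}
    s = [pos[v] for v in pi]
    return sum(1 for i, j in combinations(range(len(s)), 2) if s[i] > s[j])

def mallows_normalizer(n, phi):
    """Z(phi) = prod_{i=1}^{n} (1 + phi + ... + phi^(i-1))."""
    z = 1.0
    for i in range(1, n + 1):
        z *= sum(phi ** k for k in range(i))
    return z

def responsibility(pi, comp1, comp2, w1=0.5):
    """Posterior probability that ranking pi was drawn from component 1
    of a two-component Mallows mixture (parameters assumed known;
    illustrative only, not the paper's learning algorithm)."""
    (c1, phi1), (c2, phi2) = comp1, comp2
    n = len(pi)
    p1 = w1 * phi1 ** kendall_distance(pi, c1) / mallows_normalizer(n, phi1)
    p2 = (1 - w1) * phi2 ** kendall_distance(pi, c2) / mallows_normalizer(n, phi2)
    return p1 / (p1 + p2)
```

A ranking equal to one component's central ranking gets assigned to that component with high probability when the dispersion is small, which is exactly the structure EM-style heuristics exploit and where they can get stuck.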

    Multiresolution Analysis of Incomplete Rankings

    Incomplete rankings on a set of items {1, …, n} are orderings of the form a_1 ≺ ⋯ ≺ a_k, with {a_1, …, a_k} ⊂ {1, …, n} and k < n. Though they arise in many modern applications, only a few methods have been introduced to manipulate them, most of which consist in representing any incomplete ranking by the set of all its possible linear extensions on {1, …, n}. The major purpose of this paper is to introduce a completely novel approach, which makes it possible to treat incomplete rankings directly, representing them as injective words over {1, …, n}. Unexpectedly, operations on incomplete rankings have very simple equivalents in this setting, and the topological structure of the complex of injective words can be interpreted in a simple fashion from the perspective of ranking. We exploit this connection and use recent results from algebraic topology to construct a multiresolution analysis and develop a wavelet framework for incomplete rankings. Though purely combinatorial, this construction relies on the same ideas underlying multiresolution analysis on a Euclidean space, and makes it possible to localize the information related to rankings on each subset of items. It can be viewed as a crucial step toward nonlinear approximation of distributions of incomplete rankings and paves the way for many statistical applications, including preference data analysis and the design of recommender systems.
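The contrast the abstract draws can be shown in a few lines: an incomplete ranking is just an injective word (a sequence of distinct items), whereas the linear-extension representation it replaces enumerates every consistent full ranking, a set whose size grows factorially. This is a minimal sketch under my own naming, not code from the paper.

```python
from itertools import permutations

def is_injective_word(word, n):
    """An incomplete ranking on {1,...,n} is an injective word:
    a sequence of distinct items drawn from {1,...,n}."""
    return len(set(word)) == len(word) and all(1 <= a <= n for a in word)

def linear_extensions(word, n):
    """All full rankings of {1,...,n} consistent with the incomplete
    ranking `word` -- the costly representation the paper avoids."""
    k = len(word)
    out = []
    for perm in permutations(range(1, n + 1)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[word[i]] < pos[word[i + 1]] for i in range(k - 1)):
            out.append(perm)
    return out
```

For instance, the incomplete ranking 2 ≺ 4 on n = 4 items is a single injective word but has 12 linear extensions, and the gap widens rapidly as n grows relative to k.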

    Tractable Search for Learning Exponential Models of Rankings

    We consider the problem of learning the Generalized Mallows (GM) model of [Fligner and Verducci, 1986], which represents a probability distribution over all possible permutations (or rankings) of a given set of objects. The training data consists of a set of permutations. This problem generalizes the well-known rank aggregation problem. Maximum Likelihood estimation of the GM model is NP-hard. An exact but inefficient search-based method was recently proposed for this problem. Here we introduce the first nontrivial heuristic function for this search. We justify it theoretically, and show why it is admissible in practice. We experimentally demonstrate its effectiveness, and show that it is superior to existing techniques for learning the GM model. We also show good performance of a family of faster approximate methods of search.
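To see what the search is over, here is a brute-force baseline for the central-ranking part of the estimation: tabulate pairwise precedence counts from the training permutations, then minimize total pairwise disagreement over all orderings. This exhaustive version is only feasible for tiny item sets, which is precisely why the paper's heuristic-guided search matters; the code and names are my own sketch, not the paper's method.

```python
from itertools import permutations

def precedence_counts(rankings, items):
    """Q[a][b] = number of input rankings that place a before b."""
    Q = {a: {b: 0 for b in items} for a in items}
    for r in rankings:
        pos = {v: i for i, v in enumerate(r)}
        for a in items:
            for b in items:
                if a != b and pos[a] < pos[b]:
                    Q[a][b] += 1
    return Q

def exact_consensus(rankings):
    """Brute-force central ranking minimizing total pairwise disagreement
    with the training data (Kemeny-style objective). Exponential in the
    number of items -- the baseline a heuristic search improves on."""
    items = list(rankings[0])
    Q = precedence_counts(rankings, items)

    def cost(order):
        # For each pair (a before b) in the candidate order, count the
        # training rankings that put b before a.
        return sum(Q[b][a] for i, a in enumerate(order) for b in order[i + 1:])

    return min(permutations(items), key=cost)
```

An admissible heuristic lower-bounds the remaining cost of completing a partial prefix of the ordering, letting an A*-style search prune most of the n! candidates while still returning the exact minimizer.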