
    Efficient Bayesian Inference for Generalized Bradley-Terry Models

    The Bradley-Terry model is a popular approach to describing the probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications, including animal behaviour, chess ranking and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature, including models with ties, multiple comparisons, group comparisons and random graphs. From a computational point of view, Hunter (2004) has proposed efficient iterative MM (minorization-maximization) algorithms to perform maximum likelihood estimation for these generalized Bradley-Terry models, whereas Bayesian inference is typically performed using MCMC (Markov chain Monte Carlo) algorithms based on tailored Metropolis-Hastings (M-H) proposals. We show here that these MM algorithms can be reinterpreted as special instances of Expectation-Maximization (EM) algorithms associated with suitable sets of latent variables, and we propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
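To make the latent-variable idea concrete, here is a minimal sketch of how a gamma-augmentation of the Bradley-Terry likelihood yields a simple Gibbs sampler, assuming independent Gamma(a, b) priors on the skill parameters. The function name, hyperparameters and data are illustrative, not taken from the paper:

```python
import numpy as np

def bt_gibbs(wins, n_iter=500, a=1.0, b=1.0, seed=0):
    """Gibbs sampler for Bradley-Terry skills lambda_i, where
    P(i beats j) = lambda_i / (lambda_i + lambda_j).
    wins[i, j] = number of times i beat j.

    Augmentation: for each compared pair (i, j) with n_ij = wins_ij + wins_ji
    comparisons, introduce Z_ij ~ Gamma(n_ij, rate = lambda_i + lambda_j).
    Integrating Z_ij out recovers the (lambda_i + lambda_j)^{-n_ij} term of
    the likelihood, and conditionally each lambda_i is conjugate-gamma."""
    rng = np.random.default_rng(seed)
    K = wins.shape[0]
    n = wins + wins.T            # total comparisons per pair
    w = wins.sum(axis=1)         # total wins per player
    lam = np.ones(K)
    samples = np.empty((n_iter, K))
    for t in range(n_iter):
        # sample latent Z_ij | lambda for each compared pair
        Z = np.zeros((K, K))
        for i in range(K):
            for j in range(i + 1, K):
                if n[i, j] > 0:
                    Z[i, j] = rng.gamma(n[i, j], 1.0 / (lam[i] + lam[j]))
        Zsum = Z.sum(axis=1) + Z.sum(axis=0)   # total latent mass touching i
        # conjugate update: lambda_i | Z ~ Gamma(a + w_i, rate = b + sum_j Z_ij)
        lam = rng.gamma(a + w, 1.0 / (b + Zsum))
        samples[t] = lam
    return samples
```

Note that the gamma prior also fixes the overall scale of the skills, which is otherwise non-identifiable in the Bradley-Terry likelihood.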

    Sparsity-Promoting Bayesian Dynamic Linear Models

    Sparsity-promoting priors have become increasingly popular in recent years due to the growing number of regression and classification applications involving a large number of predictors. In time series applications, where observations are collected over time, it is often unrealistic to assume that the underlying sparsity pattern is fixed. We propose here an original class of flexible Bayesian linear models for dynamic sparsity modelling. The proposed class of models expands upon the existing Bayesian literature on sparse regression by using generalized multivariate hyperbolic distributions. The properties of the models are explored through both analytic results and simulation studies. We demonstrate the model in a financial application, showing that it accurately represents the patterns seen in the analysis of stock and derivative data and is able to detect major events by filtering an artificial portfolio of assets.
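The generalized hyperbolic family contains, as special cases, familiar sparsity-promoting scale mixtures such as the normal-gamma. A minimal sketch of that behaviour (the function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def normal_gamma_draws(n, shape=0.1, scale=1.0, seed=1):
    """Draw from a normal-gamma scale mixture:
    psi ~ Gamma(shape, scale), then beta | psi ~ N(0, psi).
    A small shape parameter concentrates most latent variances near zero
    (so most draws are shrunk towards 0) while keeping heavy tails --
    the hallmark of a sparsity-promoting prior."""
    rng = np.random.default_rng(seed)
    psi = rng.gamma(shape, scale, size=n)   # latent variances
    return rng.normal(0.0, np.sqrt(psi))    # conditionally Gaussian draws
```

Comparing these draws with a standard normal, a much larger fraction sits in a small neighbourhood of zero, yet occasional large values still occur.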

    Experimental Studies of Nanometer-Scaled Single-Asperity Contacts with Metal Surfaces

    The contact between two surfaces initiates at surface asperities, whose properties determine the mechanical behavior of the contact. The response of a nanometer-scaled single asperity pressed onto a flat surface is experimentally accessible using atomic force microscopy (AFM). The high spatial and force resolution of atomic force microscopy and spectroscopy makes it possible to determine the mechanisms governing plastic deformation, friction, and wear down to the atomic scale. In this chapter, we describe three experimental methods based on atomic force microscopy, together with corresponding methods for statistical data analysis, used to determine the hardness and deformation mechanisms of metallic surfaces during indentation with an AFM tip, as well as the mechanisms governing the wear and friction of metallic surfaces.
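As background on the standard indentation analysis, hardness is commonly defined as the maximum applied load divided by the projected contact area, H = P_max / A_c. A minimal sketch with made-up numbers (not values from the chapter), exploiting the convenient fact that nN/nm² equals GPa:

```python
def indentation_hardness(p_max_nN, contact_area_nm2):
    """Indentation hardness H = P_max / A_c.
    With the load in nN and the projected area in nm^2, the ratio is
    directly in GPa, since 1 nN / nm^2 = 1e-9 N / 1e-18 m^2 = 1 GPa."""
    return p_max_nN / contact_area_nm2

# hypothetical AFM-indentation numbers, for illustration only:
h_gpa = indentation_hardness(p_max_nN=2000.0, contact_area_nm2=1000.0)  # 2.0 GPa
```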

    A Hierarchical Bayesian Framework for Constructing Sparsity-inducing Priors

    Variable selection techniques have become increasingly popular amongst statisticians due to the growing number of regression and classification applications involving high-dimensional data where we expect some predictors to be unimportant. In this context, Bayesian variable selection techniques involving Markov chain Monte Carlo exploration of the posterior distribution over models can be prohibitively computationally expensive, so attention has turned to quasi-Bayesian approaches such as maximum a posteriori (MAP) estimation using priors that induce sparsity in the estimates. We focus on this latter approach, expanding on the hierarchies proposed to date to provide a Bayesian interpretation and generalization of state-of-the-art penalized optimization approaches, while simultaneously providing a natural way to include prior information about parameters within this framework. We give examples of how to use this hierarchy to compute MAP estimates for linear and logistic regression, as well as sparse precision-matrix estimates in Gaussian graphical models. In addition, an adaptive group lasso method is derived using the framework.
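The best-known instance of MAP estimation under a sparsity-inducing prior is the lasso: under independent Laplace (double-exponential) priors on the coefficients, the MAP estimate solves an L1-penalized least-squares problem, computable by iterative soft-thresholding (ISTA). A minimal sketch of that special case only, not the paper's general hierarchy:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink towards 0, exact zeros below t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_map(X, y, lam, n_iter=500):
    """ISTA for the lasso, i.e. the MAP estimate of beta under a Laplace
    prior: minimize 0.5 * ||y - X beta||^2 + lam * ||beta||_1."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)    # gradient of the quadratic term
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

The soft-thresholding step produces exact zeros, which is precisely the sparsity in the MAP estimate that the Laplace (and more general) hierarchies induce.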

    Particle approximation of the intensity measures of a spatial branching point process arising in multi-target tracking

    The aim of this paper is twofold. First, we analyze the sequence of intensity measures of a spatial branching point process arising in a multiple-target tracking context. We study its stability properties, characterize its long-time behavior and provide a series of weak Lipschitz-type functional contraction inequalities. Second, we design and analyze an original particle scheme to numerically approximate these intensity measures. Under appropriate regularity conditions, we obtain uniform, non-asymptotic estimates and a functional central limit theorem. To the best of our knowledge, these are the first sharp theoretical results available for this class of spatial branching point processes. (Revised version; INRIA technical report HAL-INRIA RR-723.)
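To fix intuition for what a particle approximation of an intensity measure means, here is a minimal, hypothetical example, not the paper's branching scheme: an intensity gamma = m * N(mu, sigma^2), whose total mass m is the expected number of targets, represented by weighted particles so that integrals against gamma become weighted sums:

```python
import numpy as np

def particle_intensity(m, mu, sigma, n_particles, seed=0):
    """Approximate the intensity measure gamma = m * N(mu, sigma^2) by
    weighted particles: int f dgamma ~= sum_i w_i f(x_i)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=n_particles)   # particle locations
    w = np.full(n_particles, m / n_particles)     # weights sum to mass m
    return x, w

x, w = particle_intensity(m=3.0, mu=2.0, sigma=1.0, n_particles=100_000)
mass_hat = w.sum()                   # estimated expected number of targets
mean_hat = (w * x).sum() / mass_hat  # intensity-weighted mean location
```

Taking f = 1 recovers the total mass (expected target count); other choices of f give spatial statistics of the target population.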
