
    Asymptotically exact data augmentation: models and Monte Carlo sampling with applications to Bayesian inference

    Numerous machine learning and signal/image processing tasks can be formulated as statistical inference problems. As an archetypal example, recommendation systems rely on the completion of a partially observed user/item matrix, which can be conducted via the joint estimation of latent factors and activation coefficients. More formally, the object to be inferred is usually defined as the solution of a variational or stochastic optimization problem. In particular, within a Bayesian framework, this solution is defined as the minimizer of a cost function referred to as the posterior loss. In the simple case where this function is quadratic, the Bayesian estimator is the posterior mean, which minimizes the mean square error and is defined as an integral with respect to the posterior distribution. In most real-world applicative contexts, computing such integrals is not straightforward. One alternative is Monte Carlo integration, which approximates any expectation with respect to the posterior distribution by an empirical average involving samples from the posterior. Monte Carlo integration therefore requires efficient algorithmic schemes able to generate samples from a desired posterior distribution. A huge literature dedicated to random variable generation has proposed various Monte Carlo algorithms. For instance, Markov chain Monte Carlo (MCMC) methods, whose best-known instances are the Gibbs sampler and the Metropolis-Hastings algorithm, define a wide class of algorithms that generate a Markov chain with the desired stationary distribution. Despite their seeming simplicity and genericity, conventional MCMC algorithms may be computationally inefficient for large-scale, distributed and/or highly structured problems.

    The main objective of this thesis is to introduce new models and related MCMC approaches to alleviate these issues. The intractability of the posterior distribution is tackled by proposing a class of approximate but asymptotically exact data augmentation (AXDA) models. Two Gibbs samplers targeting approximate posterior distributions based on the AXDA framework are then proposed, and their benefits are illustrated on challenging signal processing, image processing and machine learning problems. A detailed theoretical study of the convergence rates associated with one of these two Gibbs samplers is also conducted, revealing explicit dependencies on the dimension, the condition number of the negative log-posterior and the prescribed precision.

    In this work, we also pay attention to the feasibility of the sampling steps involved in the proposed Gibbs samplers. Since one of these steps requires sampling from a possibly high-dimensional Gaussian distribution, we review and unify existing approaches by introducing a framework that stands as the stochastic counterpart of the celebrated proximal point algorithm. This strong connection between simulation and optimization is not isolated in this thesis: we also show that the derived Gibbs samplers share tight links with quadratic penalty methods, and that the AXDA framework yields a class of envelope functions related to the Moreau envelope.
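    A minimal sketch of the Monte Carlo integration idea described above, with a toy Gaussian standing in for the posterior (in practice the samples would come from an MCMC scheme targeting the true posterior):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for posterior samples; in practice these would be
# produced by an MCMC scheme targeting the true posterior.
samples = rng.normal(loc=1.5, scale=0.7, size=10_000)

# Monte Carlo integration: a posterior expectation E[h(theta)] is
# approximated by the empirical average of h over the samples.
posterior_mean = samples.mean()        # Bayesian estimator under quadratic loss
second_moment = np.mean(samples**2)    # estimate of E[theta^2]
print(posterior_mean, second_moment)
```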
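    For reference, a compact random-walk Metropolis-Hastings sampler, one of the conventional MCMC algorithms mentioned above; the target `log_post` and the step size are illustrative placeholders, not taken from the thesis:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings targeting exp(log_post)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Illustrative target: a standard bivariate Gaussian posterior.
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(2))
print(chain.mean(axis=0))   # should be close to the true mean (0, 0)
```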
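    The abstract does not spell out the AXDA construction, but the idea can be sketched as follows, with notation chosen here for illustration (consistent with the asymptotically exact data augmentation literature, not quoted from the thesis). For a target π(θ) ∝ exp(−f(θ) − g(θ)), an auxiliary variable z and a tolerance ρ > 0 are introduced:

```latex
% Sketch of an AXDA-type augmented model (illustrative notation).
% Target: \pi(\theta) \propto \exp(-f(\theta) - g(\theta)).
\[
  \pi_{\rho}(\theta, z) \;\propto\;
  \exp\!\Big( -f(\theta) - g(z)
              - \tfrac{1}{2\rho^{2}} \lVert \theta - z \rVert^{2} \Big).
\]
% The theta-marginal of pi_rho recovers pi as rho -> 0 (hence
% "asymptotically exact"), while for fixed rho the conditionals of
% theta and z are typically easy to sample, which a Gibbs sampler
% exploits. Marginalizing z replaces g by a smoothed potential, a
% soft-min counterpart of the Moreau envelope
\[
  M_{\lambda} g(\theta) \;=\;
  \min_{z} \Big( g(z) + \tfrac{1}{2\lambda} \lVert \theta - z \rVert^{2} \Big).
\]
```

    The quadratic coupling term also makes the connection with quadratic penalty methods explicit: the same term that penalizes the splitting θ ≈ z in a penalty method plays the role of the augmentation kernel here.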
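    The high-dimensional Gaussian sampling step can be illustrated with the standard Cholesky-based sampler for N(Q⁻¹b, Q⁻¹) given a precision matrix Q (a generic baseline sketch, not the thesis's proximal framework); its O(d³) factorization cost is precisely what motivates optimization-flavoured alternatives:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sample_gaussian_precision(Q, b, rng):
    """Draw x ~ N(Q^{-1} b, Q^{-1}) via a Cholesky factorization of Q."""
    R = cholesky(Q, lower=False)   # Q = R^T R, with R upper triangular
    # Mean: solve Q mean = b via two triangular solves.
    mean = solve_triangular(R, solve_triangular(R.T, b, lower=True))
    z = rng.standard_normal(b.shape[0])
    # x = mean + R^{-1} z has covariance R^{-1} R^{-T} = Q^{-1}.
    return mean + solve_triangular(R, z)

rng = np.random.default_rng(2)
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # toy precision matrix
xs = np.stack([sample_gaussian_precision(Q, np.ones(2), rng)
               for _ in range(5000)])
print(np.cov(xs.T))   # should approximate inv(Q)
```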

    Author index to volumes 301–400


    Discrete Mathematics and Symmetry

    Some of the most beautiful studies in Mathematics are related to Symmetry and Geometry. For this reason, we select here some contributions about such aspects and Discrete Geometry. Symmetry in a system means invariance of its elements under transformations. When we consider network structures, symmetry means invariance of the adjacency of nodes under permutations of the node set. Graph isomorphism is an equivalence relation on the set of graphs; it therefore partitions the class of all graphs into equivalence classes. The underlying idea of isomorphism is that some objects have the same structure if we omit the individual character of their components. A set of graphs isomorphic to each other is called an isomorphism class of graphs. An automorphism of a graph G is an isomorphism from G onto itself, and the family of all automorphisms of G forms a permutation group.
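    A brute-force illustration of these definitions (a hypothetical helper, not from the paper; practical graph tools use far more sophisticated algorithms): enumerate the permutations of the node set and keep those that preserve adjacency.

```python
from itertools import permutations

def automorphisms(n, edges):
    """All node permutations preserving adjacency (brute force, O(n!))."""
    edge_set = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if all(frozenset({p[u], p[v]}) in edge_set for u, v in edges)]

# The 4-cycle C4: its automorphism group is the dihedral group of order 8.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
autos = automorphisms(4, c4)
print(len(autos))   # 8

# Closure under composition: composing two automorphisms gives another
# automorphism, so the set forms a permutation group, as stated above.
p, q = autos[1], autos[2]
composed = tuple(p[q[i]] for i in range(4))
print(composed in set(autos))   # True
```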