
    Random Correlations for Small Perturbations of Expanding Maps

    We consider random compositions $F_{\sigma^n \omega} \circ \cdots \circ F_\omega$ of $C^k$ expanding maps $F_\omega$ which are $C^k$-close to a given $C^k$ expanding map ($k > 1$) and not necessarily i.i.d. We study the random correlation functions $C_\omega(n)$ associated to the unique absolutely continuous stationary measures $\mu_\omega$ (with $(F_\omega)_* \mu_\omega = \mu_{\sigma\omega}$) and smooth test functions. We show $C^{k-1}$ stability of the densities of the measures $\mu_\omega$, and good uniform bounds on the exponential rate of decay of the random correlations as the error level of the smooth perturbation goes to zero. To do this, we let the associated random transfer operators $\mathcal{L}_{F_\omega}$ act on suitable cones of positive functions endowed with a Hilbert projective metric. 1. Introduction. When studying small random perturbations of a given expanding dynamical system $f : X \to X$, i.e., compositions $F_{\sigma^n \omega} \circ \cdots \circ F_\omega$ with each random variable $F_\omega$ "close" to $f$ (see Section 2 for precise definitions), one approach is to consider the Markov chain with transiti..
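
    For orientation, the random correlation functions and transfer operators named above can be written out as follows. This is only a sketch using standard conventions for random expanding maps with respect to Lebesgue measure (here $\varphi$, $\psi$ denote smooth test functions); the paper's precise definitions appear in its Section 2.

    $$
    C_\omega(n) \;=\; \Bigl|\int \bigl(\varphi \circ F_{\sigma^n\omega} \circ \cdots \circ F_\omega\bigr)\,\psi\, d\mu_\omega \;-\; \int \varphi\, d\mu_{\sigma^{n+1}\omega} \int \psi\, d\mu_\omega\Bigr|,
    \qquad
    \bigl(\mathcal{L}_{F_\omega}\varphi\bigr)(x) \;=\; \sum_{F_\omega(y)=x} \frac{\varphi(y)}{\lvert \det DF_\omega(y)\rvert}.
    $$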

    Variational Information Bottleneck for Semi-Supervised Classification

    In this paper, we consider an information bottleneck (IB) framework for semi-supervised classification with several families of priors on the latent-space representation. We apply a variational decomposition of the mutual information terms of the IB objective. Using this decomposition, we analyze several regularizers and demonstrate in practice the impact of the different components of the variational model on classification accuracy. We propose a new formulation of semi-supervised IB with hand-crafted and learnable priors and link it to previous methods such as semi-supervised versions of the VAE (M1 + M2), AAE, CatGAN, etc. We show that the resulting model allows a better understanding of the role of various previously proposed regularizers in the semi-supervised classification task in the light of the IB framework. The proposed semi-supervised IB model with hand-crafted and learnable priors is experimentally validated on MNIST under different amounts of labeled data.
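
    As a rough illustration of the kind of objective described above, the following PyTorch sketch combines a cross-entropy term on labeled data (a surrogate for the relevant-information term I(Z;Y)) with a KL term to a hand-crafted standard-normal prior on all data (a surrogate for the compression term I(X;Z)). The module names, layer sizes, and the weight beta are assumptions for illustration only; the paper's actual models, including the learnable priors and the links to M1 + M2, AAE, and CatGAN, are not reproduced here.

    # Minimal sketch of a variational IB-style semi-supervised objective.
    # All names and hyper-parameters below are illustrative assumptions,
    # not the architecture or settings used in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class Encoder(nn.Module):
        """q(z|x): maps a flattened MNIST image to a diagonal Gaussian latent."""
        def __init__(self, in_dim=784, hidden=512, z_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, z_dim)
            self.logvar = nn.Linear(hidden, z_dim)

        def forward(self, x):
            h = self.net(x)
            return self.mu(h), self.logvar(h)


    class Classifier(nn.Module):
        """q(y|z): predicts the label from the latent code."""
        def __init__(self, z_dim=64, n_classes=10):
            super().__init__()
            self.net = nn.Linear(z_dim, n_classes)

        def forward(self, z):
            return self.net(z)


    def reparameterize(mu, logvar):
        # Draw z ~ q(z|x) with the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)


    def kl_to_standard_normal(mu, logvar):
        # KL(q(z|x) || N(0, I)): the hand-crafted-prior regularizer of the bound.
        return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1)


    def semi_supervised_vib_loss(enc, clf, x_lab, y_lab, x_unlab, beta=1e-3):
        """Cross-entropy on labeled data plus a KL bottleneck term on all data."""
        # Labeled branch: classification likelihood (surrogate for I(Z;Y)).
        mu_l, logvar_l = enc(x_lab)
        z_l = reparameterize(mu_l, logvar_l)
        ce = F.cross_entropy(clf(z_l), y_lab)

        # Bottleneck branch: KL to the prior on labeled + unlabeled data
        # (surrogate for I(X;Z)).
        x_all = torch.cat([x_lab, x_unlab], dim=0)
        mu_a, logvar_a = enc(x_all)
        kl = kl_to_standard_normal(mu_a, logvar_a).mean()

        return ce + beta * kl


    if __name__ == "__main__":
        enc, clf = Encoder(), Classifier()
        x_lab = torch.randn(16, 784)          # stand-in for labeled MNIST images
        y_lab = torch.randint(0, 10, (16,))   # stand-in labels
        x_unlab = torch.randn(64, 784)        # stand-in unlabeled images
        loss = semi_supervised_vib_loss(enc, clf, x_lab, y_lab, x_unlab)
        loss.backward()
        print(float(loss))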