37 research outputs found

    Truncated Variational Sampling for "Black Box" Optimization of Generative Models

    We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach. The approach distinguishes itself from previous variational approaches by using latent states as variational parameters. Here we use efficient and general-purpose sampling procedures to vary the latent states, and we investigate the "black box" applicability of the resulting optimization procedure. For general-purpose applicability, samples are drawn from approximate marginal distributions of the considered generative model as well as from the model's prior distribution. As such, variational sampling is defined in a generic form and is directly executable for a given model. As a proof of concept, we then apply the novel procedure (A) to Binary Sparse Coding (a model with continuous observables) and (B) to basic Sigmoid Belief Networks (models with binary observables). Numerical experiments verify that the investigated approach efficiently and effectively increases a variational free energy objective without requiring any additional analytical steps.
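    The E-step of such a truncated variational sampling scheme can be sketched generically: each data point keeps a small set of binary latent states, new candidate states are drawn from the model's prior and from approximate posterior marginals estimated from the current set, and the pooled states are pruned back to those with the highest joint probability. The following Python sketch is only an illustration under these assumptions; log_joint stands for the model-specific log p(s, y | Theta), and the function name update_state_set and its parameters are hypothetical rather than taken from any published implementation.

        import numpy as np

        def update_state_set(K, y, log_joint, S, n_prior, n_marg, prior_pi, rng):
            """One truncated variational sampling step for a single data point.

            K         : (S, H) array of current binary latent states
            y         : observed data vector
            log_joint : function(s, y) -> log p(s, y) under the generative model
            S         : number of states to keep
            n_prior   : number of candidate states sampled from the prior
            n_marg    : number of candidate states sampled from approximate marginals
            prior_pi  : (H,) prior activation probabilities of the latent units
            rng       : numpy random generator, e.g. np.random.default_rng(0)
            """
            H = K.shape[1]
            # Candidates from the model prior ("black box": only p(s) is needed).
            cand_prior = rng.random((n_prior, H)) < prior_pi
            # Candidates from approximate posterior marginals, estimated as the
            # frequency with which each latent unit is active in the current set.
            q_marg = K.mean(axis=0).clip(1e-3, 1.0 - 1e-3)
            cand_marg = rng.random((n_marg, H)) < q_marg
            # Pool old and new states and remove duplicates.
            pool = np.unique(np.vstack([K, cand_prior, cand_marg]).astype(np.uint8), axis=0)
            # Keep the S states with the highest joint probability.
            lj = np.array([log_joint(s, y) for s in pool])
            keep = np.argsort(-lj)[:S]
            K_new = pool[keep]
            # Truncated posterior weights over the retained states.
            w = np.exp(lj[keep] - lj[keep].max())
            return K_new, w / w.sum()

    Expectations required by the M-step would then be computed as weighted sums over the returned states, so the procedure only ever needs the joint p(s, y) of the model, which is what makes it applicable in a "black box" fashion.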

    Truncated Variational EM for Semi-Supervised Neural Simpletrons

    Inference and learning for probabilistic generative networks are often very challenging, which typically prevents scaling to networks as large as those used for deep discriminative approaches. To obtain efficiently trainable, large-scale and well performing generative networks for semi-supervised learning, we here combine two recent developments: a neural network reformulation of hierarchical Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM approach (TV-EM). TV-EM provides theoretical guarantees for learning in generative networks, and its application to Neural Simpletrons results in particularly compact, yet approximately optimal, modifications of the learning equations. When applied to standard benchmarks, we empirically find that learning converges in fewer EM iterations, that the complexity per EM iteration is reduced, and that final likelihood values are higher on average. For the task of classification on data sets with few labels, these learning improvements result in consistently lower error rates compared to applications without truncation. Experiments on the MNIST data set allow for comparison to standard and state-of-the-art models in the semi-supervised setting. Further experiments on the NIST SD19 data set show the scalability of the approach when a wealth of additional unlabeled data is available.
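    TV-EM as referenced here is built around truncated posterior distributions. Using the notation common for such approaches (with K_n denoting the set of latent states retained for data point y^(n)), the variational distributions and the resulting free energy can be written as below; this is a generic restatement under assumed notation, not a reproduction of the Simpletron-specific update equations of the paper.

        \begin{align}
          q_n(\vec{s}) &= \frac{p(\vec{s}\mid\vec{y}^{(n)},\Theta)}
                               {\sum_{\vec{s}'\in\mathcal{K}_n} p(\vec{s}'\mid\vec{y}^{(n)},\Theta)}
                          \,\mathbb{1}[\vec{s}\in\mathcal{K}_n], \\
          \mathcal{F}(\mathcal{K},\Theta) &= \sum_n \log \sum_{\vec{s}\in\mathcal{K}_n} p(\vec{s},\vec{y}^{(n)}\mid\Theta).
        \end{align}

    Substituting q_n into the standard free energy collapses it to the second expression, so any update of K_n that increases the summed joints cannot decrease the objective; guarantees of this kind are what make the truncated E-step theoretically grounded.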

    Autonomous Cleaning of Corrupted Scanned Documents - A Generative Modeling Approach

    We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink, etc. We aim at autonomously removing dirt from a single letter-size page based only on the information the page contains. Our approach therefore has to learn character representations without supervision and requires a mechanism to distinguish learned representations from irregular patterns. To learn character representations, we use a probabilistic generative model parameterizing pattern features, feature variances, the features' planar arrangements, and pattern frequencies. The latent variables of the model describe pattern class, pattern position, and the presence or absence of individual pattern features. The model parameters are optimized using a novel variational EM approximation. After learning, the parameters represent, independently of their absolute position, planar feature arrangements and their variances. A quality measure defined on the learned representation then allows for an autonomous discrimination between regular character patterns and the irregular patterns making up the dirt. The irregular patterns can thus be removed to clean the document. For a full Latin alphabet we found that a single page does not contain sufficiently many character examples. However, we show that a page containing a smaller number of character types can, even if heavily corrupted by dirt, be cleaned efficiently and autonomously, based solely on the structural regularity of the characters it contains. In different examples using characters from different alphabets, we demonstrate the generality of the approach and discuss its implications for future developments. (Comment: oral presentation and Google Student Travel Award; IEEE Conference on Computer Vision and Pattern Recognition 2012.)
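    The cleaning step itself can be pictured as a simple filter on top of the learned model: every patch of the page is scored by the quality measure, patches the character model explains well are replaced by their model reconstruction, and everything else is treated as dirt. The sketch below only illustrates this idea; the functions log_likelihood and reconstruct, the threshold, and the white-background convention are placeholders, not the measure actually defined in the paper.

        import numpy as np

        def clean_page(patches, log_likelihood, reconstruct, threshold):
            """Keep patches the learned character model explains well; blank the rest.

            patches        : (N, D) array of image patches extracted from the page
            log_likelihood : function(patch) -> quality score under the learned model
            reconstruct    : function(patch) -> model reconstruction of the patch
            threshold      : score below which a patch is treated as dirt
            """
            cleaned = np.empty_like(patches)
            for i, patch in enumerate(patches):
                if log_likelihood(patch) >= threshold:
                    cleaned[i] = reconstruct(patch)  # regular character pattern
                else:
                    cleaned[i] = 1.0                 # irregular pattern: paint background white
            return cleaned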

    On scalable inference and learning in spike-and-slab sparse coding

    Sparse coding is a widely applied latent variable analysis technique. The standard formulation of sparse coding assumes a Laplace prior distribution for modeling the activations of latent components. In this work we study sparse coding with a spike-and-slab distribution as a prior for latent activity. A spike-and-slab distribution has its probability mass distributed across a 'spike' at zero and a 'slab' spreading over a continuous range. Because of its capacity to induce exact zeros with higher probability, a spike-and-slab prior constitutes a more accurate model of sparse coding. Such a prior also allows the sparseness of latent activity to be inferred directly from observed data, which makes spike-and-slab sparse coding more flexible and self-adaptive to a wide range of data distributions. By modeling the slab with a Gaussian distribution, we furthermore show that, in contrast to the standard approach to sparse coding, closed-form analytical expressions for exact inference and learning in linear spike-and-slab sparse coding can indeed be derived. However, since the posterior landscape under a spike-and-slab prior turns out to be highly multi-modal and prohibitively expensive to explore exhaustively, we also develop, in addition to the exact method, subspace selection and Gibbs sampling based approximate inference techniques for scalable applications of the linear model. We contrast our approximation methods with variational approximations for scalable posterior inference in linear spike-and-slab sparse coding. We further combine the Gaussian spike-and-slab prior with a nonlinear generative model, which assumes a point-wise maximum combination rule for the generation of observed data. We analyze the model as a precise encoder of low-level features such as edges and their occlusions in visual data. We again combine subspace selection with Gibbs sampling to overcome the analytical intractability of performing exact inference in the model. We numerically analyze our methods on both synthetic and real data to verify them and to compare them with other approaches. We assess the linear spike-and-slab approach on source separation and image denoising benchmarks. In most experiments we obtain competitive or state-of-the-art results, and we find that spike-and-slab sparse coding overall outperforms other comparable approaches. By extracting thousands of latent components from a large amount of training data, we further demonstrate that our subspace Gibbs sampler is among the most scalable posterior inference methods for a linear sparse coding approach. For the nonlinear model we experiment with artificial and real images to demonstrate that the components learned by the model lie closer to the ground truth and are easily interpretable as the underlying generative causes of the input. We find that, in comparison to standard sparse coding, the nonlinear spike-and-slab approach can compressively encode images using naturally sparse and discernible compositions of latent components. We also demonstrate that the components the model infers from natural image patches are, in their structure and distribution, statistically more consistent with the response patterns of simple cells in the primary visual cortex. This work thereby contributes novel methods for sophisticated inference and learning in spike-and-slab sparse coding, and it empirically showcases their functional efficacy through a variety of applications.
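    For concreteness, the linear Gaussian spike-and-slab model discussed above can be written down in a few lines: each latent is the product of a Bernoulli 'spike' variable and a Gaussian 'slab' variable, and observations are a noisy linear combination of the latents. The sketch below is only illustrative; parameter names (pi, mu, sigma_slab, sigma_noise) are chosen for readability and are not the thesis's notation.

        import numpy as np

        def sample_spike_and_slab_sc(W, pi, mu, sigma_slab, sigma_noise, n_samples, rng):
            """Draw data from a linear Gaussian spike-and-slab sparse coding model.

            W : (D, H) dictionary; pi : Bernoulli activation probability;
            mu, sigma_slab : mean and std of the Gaussian slab;
            sigma_noise : std of the observation noise;
            rng : numpy random generator, e.g. np.random.default_rng(0).
            """
            D, H = W.shape
            b = rng.random((n_samples, H)) < pi                 # 'spike': exact zeros
            z = rng.normal(mu, sigma_slab, (n_samples, H))      # 'slab': continuous values
            s = b * z                                           # spike-and-slab latents
            y = s @ W.T + rng.normal(0.0, sigma_noise, (n_samples, D))
            # The nonlinear variant of the thesis replaces the linear sum by a
            # point-wise maximum, e.g. np.max(W[None, :, :] * s[:, None, :], axis=2).
            return y, s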

    ProSper -- A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions

    ProSper is a Python library containing probabilistic algorithms to learn dictionaries. Given a set of data points, the implemented algorithms seek to learn the elementary components that have generated the data. The library widens the scope of dictionary learning approaches beyond implementations of standard approaches such as ICA, NMF or standard L1 sparse coding. The implemented algorithms are especially well suited to cases in which the data consist of components that combine non-linearly and/or to data requiring flexible prior distributions. Furthermore, the implemented algorithms go beyond standard approaches by inferring prior and noise parameters of the data, and they provide rich a-posteriori approximations for inference. The library is designed to be extensible and currently includes: Binary Sparse Coding (BSC), Ternary Sparse Coding (TSC), Discrete Sparse Coding (DSC), Maximal Causes Analysis (MCA), Maximum Magnitude Causes Analysis (MMCA), and Gaussian Sparse Coding (GSC, a recent spike-and-slab sparse coding approach). The algorithms are scalable due to a combination of variational approximations and parallelization. Implementations of all algorithms allow for parallel execution on multiple CPUs and multiple machines for medium to large-scale applications. Typical large-scale runs of the algorithms can use hundreds of CPUs to learn hundreds of dictionary elements from data with tens of millions of floating-point numbers, such that models with several hundred thousand parameters can be optimized. The library is designed to have minimal dependencies and to be easy to use. It targets users of dictionary learning algorithms and Machine Learning researchers.
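    The kind of parallelization described in the abstract follows a common data-parallel EM pattern: each worker computes sufficient statistics on its shard of the data, and the statistics are combined before the closed-form M-step. The sketch below illustrates this pattern in generic numpy/mpi4py code; it is not ProSper's actual interface, and the function and argument names are hypothetical.

        from mpi4py import MPI
        import numpy as np

        def combine_and_update_W(local_data, local_E_s, local_E_ssT):
            """Data-parallel closed-form dictionary update for a linear model.

            local_data  : (N_local, D) data shard held by this worker
            local_E_s   : (N_local, H) posterior means <s_n> for the shard
            local_E_ssT : (H, H) sum over the shard of <s_n s_n^T>
            """
            comm = MPI.COMM_WORLD
            local_ys = local_data.T @ local_E_s        # sum_n y_n <s_n>^T for the shard
            ys = comm.allreduce(local_ys, op=MPI.SUM)  # combine statistics of all workers
            ssT = comm.allreduce(local_E_ssT, op=MPI.SUM)
            # Standard linear-Gaussian M-step: W = (sum_n y_n <s_n>^T)(sum_n <s_n s_n^T>)^{-1}
            return ys @ np.linalg.inv(ssT)

    Because only small, fixed-size statistics are exchanged between workers, the communication cost is independent of the number of data points, which is what makes runs on hundreds of CPUs practical.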

    Machine Learning: Binary Non-negative Matrix Factorization

    This bachelor thesis theoretically derives and implements an unsupervised probabilistic generative model called Binary Non-Negative Matrix Factorization. It is a simplification of standard Non-Negative Matrix Factorization in which the factorization into two matrices is restricted so that one of them has only binary components instead of continuous ones. This simplifies the computation, making it exactly solvable while keeping most of the learning capabilities, and connects the algorithm to a modified version of Binary Sparse Coding. The learning phase of the model is performed using the EM algorithm, an iterative method that maximizes the likelihood function with respect to the parameters to be learned in a two-step process. The model is tested on artificial data, where it is shown to learn the hidden parameters, although it fails to work properly when applied to real data.
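    To make the exactness concrete: with binary latents, the E-step can enumerate all 2^H latent states and compute the exact posterior for every data point, after which the dictionary has a closed-form update. The sketch below is a generic illustration of this scheme under assumed Gaussian observation noise and a Bernoulli prior; the thesis's precise noise model and non-negativity handling may differ, and the clipping used here is just one simple way to keep W non-negative.

        import itertools
        import numpy as np

        def exact_em_binary_nmf(Y, H, n_iter, sigma, pi, rng):
            """Exact EM for a small binary-latent factorization (illustrative sketch).

            Assumed model: y_n = W s_n + Gaussian noise (std sigma), s_n in {0,1}^H,
            Bernoulli(pi) prior on each latent, non-negative dictionary W.
            Exactness comes from enumerating all 2^H states, feasible only for small H.
            """
            N, D = Y.shape
            W = rng.random((D, H))                                   # non-negative init
            S = np.array(list(itertools.product([0, 1], repeat=H)), dtype=float)
            log_prior = (S * np.log(pi) + (1 - S) * np.log(1 - pi)).sum(axis=1)
            for _ in range(n_iter):
                # E-step: exact posterior over all 2^H binary states per data point.
                resid = Y[:, None, :] - S @ W.T                      # (N, 2^H, D)
                log_post = log_prior - 0.5 * (resid ** 2).sum(axis=2) / sigma ** 2
                log_post -= log_post.max(axis=1, keepdims=True)
                post = np.exp(log_post)
                post /= post.sum(axis=1, keepdims=True)
                # M-step: closed-form dictionary update, clipped to stay non-negative.
                E_s = post @ S                                       # (N, H) posterior means
                E_ssT = np.einsum('nk,kh,kj->hj', post, S, S)        # sum_n <s_n s_n^T>
                W = np.clip((Y.T @ E_s) @ np.linalg.inv(E_ssT), 0.0, None)
            return W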
