
    On the Challenges of Physical Implementations of RBMs

    Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning, and some kinds of inference in the model, require sampling-based approximations which, on classical digital computers, are implemented using expensive MCMC. Physical computation offers an opportunity to reduce the cost of sampling by building physical systems whose natural dynamics correspond to drawing samples from the desired RBM distribution. Such a system avoids the burn-in and mixing cost of a Markov chain. However, hardware implementations of this kind usually entail limitations such as low precision and a limited range for the parameters, and restrictions on the size and topology of the RBM. We conduct software simulations to determine how harmful each of these restrictions is. Our simulations are designed to reproduce aspects of the D-Wave quantum computer, but the issues we investigate arise in most forms of physical computation.
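The MCMC sampling this abstract refers to can be made concrete with a toy block-Gibbs sampler for a binary RBM; a minimal sketch, with all sizes and parameter values invented for illustration (nothing here is taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary RBM: 6 visible and 4 hidden units (sizes are illustrative)
n_v, n_h = 6, 4
W = rng.normal(scale=0.1, size=(n_v, n_h))  # visible-hidden weights
b = np.zeros(n_v)                           # visible biases
c = np.zeros(n_h)                           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep: sample hidden given visible, then visible given hidden."""
    h = (rng.random(n_h) < sigmoid(c + v @ W)).astype(float)
    v = (rng.random(n_v) < sigmoid(b + W @ h)).astype(float)
    return v

# Burn-in: many sweeps are needed before samples approximate the RBM distribution,
# which is exactly the cost a physical sampler would avoid
v = rng.integers(0, 2, n_v).astype(float)
for _ in range(1000):
    v = gibbs_step(v)
```

A physical implementation would replace the explicit burn-in loop with the system's natural relaxation dynamics; the hardware limitations discussed above would correspond to, for example, quantizing `W` to a few bits or restricting which entries may be nonzero.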

    Improved learning algorithms for restricted Boltzmann machines

    A restricted Boltzmann machine (RBM) is often used as a building block for constructing deep neural networks and deep generative models, which have recently gained popularity as one way to learn complex and large probabilistic models. In these deep models, it is generally known that layer-wise pretraining of RBMs facilitates finding a more accurate model for the data. It is therefore important to have an efficient learning method for RBMs. Conventional learning is mostly performed using stochastic gradients, often with an approximate method such as contrastive divergence (CD) learning to overcome the computational difficulty. Unfortunately, training RBMs with this approach is known to be difficult, as learning easily diverges after initial convergence; this difficulty has recently been reported by many researchers. This thesis contributes important improvements that address the difficulty of training RBMs. Based on an advanced Markov chain Monte Carlo sampling method called parallel tempering (PT), the thesis proposes PT learning, which can replace CD learning. In terms of both learning performance and computational overhead, PT learning is shown to be superior to CD learning through various experiments. The thesis also tackles the problem of choosing the right learning parameter by proposing a new algorithm, the adaptive learning rate, which automatically chooses an appropriate learning rate during learning. A closer look at the update rules suggested that learning with the traditional update rules is easily disrupted by the particular representation of the data sets. Based on this observation, the thesis proposes a new set of gradient update rules that are more robust to the representation of the training data sets and to the learning parameters. Extensive experiments on various data sets confirmed that the proposed rules indeed improve learning significantly.
    Additionally, the Gaussian-Bernoulli RBM (GBRBM), a variant of the RBM that can learn continuous real-valued data sets, is reviewed, and the proposed improvements are tested on it. The experiments showed that the improvements carry over to GBRBMs as well.
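The CD learning that the thesis improves upon can be sketched as a one-step (CD-1) update on a toy binary RBM; a minimal illustration with invented sizes and a fixed learning rate (the thesis's adaptive learning rate and PT sampling would replace exactly those parts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary RBM with illustrative sizes
n_v, n_h = 6, 4
W = rng.normal(scale=0.01, size=(n_v, n_h))
b = np.zeros(n_v)
c = np.zeros(n_h)
lr = 0.1  # fixed learning rate; an adaptive scheme would tune this per step

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step: positive phase from the data, negative phase from one Gibbs sweep."""
    global W, b, c
    p_h0 = sigmoid(c + v0 @ W)                    # hidden probabilities given data
    h0 = (rng.random(n_h) < p_h0).astype(float)   # sampled hidden state
    v1 = sigmoid(b + W @ h0)                      # mean-field reconstruction
    p_h1 = sigmoid(c + v1 @ W)
    # Gradient estimate: data statistics minus reconstruction statistics
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b += lr * (v0 - v1)
    c += lr * (p_h0 - p_h1)

v0 = rng.integers(0, 2, n_v).astype(float)
for _ in range(100):
    cd1_update(v0)
```

PT learning would replace the single reconstruction sweep with samples from a ladder of chains run at different temperatures, which is what lets it avoid the divergence-after-convergence failure mode described above.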

    Emergence of Compositional Representations in Restricted Boltzmann Machines

    Automatically extracting the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically known to be efficient for this purpose, and to be able to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of hidden units, and adaptation of fields maintaining the activity in the visible layer) that allow RBMs to operate in such a compositional phase. Evidence is provided by the replica analysis of an adequate statistical ensemble of random RBMs and by RBMs trained on the handwritten digits dataset MNIST. Comment: Supplementary material available at the authors' webpage.
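The random-RBM ensemble analyzed above is built on the standard RBM energy function; a minimal sketch of that energy, with sparse random weights standing in for the sparsity condition mentioned in the abstract (all sizes and the sparsity level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes for a random RBM
n_v, n_h = 5, 3
W = rng.normal(size=(n_v, n_h))
# Sparse weights, one of the structural conditions discussed: keep ~20% of entries
W *= rng.random(W.shape) < 0.2
b = np.zeros(n_v)
c = np.zeros(n_h)

def energy(v, h):
    """Standard RBM energy; the joint distribution is P(v, h) ∝ exp(-E(v, h))."""
    return -(b @ v + c @ h + v @ W @ h)

v = rng.integers(0, 2, n_v).astype(float)
h = rng.integers(0, 2, n_h).astype(float)
e = energy(v, h)
```

In the compositional phase characterized above, typical low-energy configurations activate only a small subset of hidden units, each contributing one "feature" to the visible configuration.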

    Solving Machine Learning Problems with Biological Principles

    Spiking neural networks (SNNs) have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. While a growing number of recent works have improved their performance on benchmark discriminative tasks, most of them learn via surrogates of backpropagation, where biological features such as spikes are regarded more as defects than as merits. In this thesis, we explore the generative abilities of SNNs with built-in biological mechanisms. When sampling from high-dimensional multimodal distributions, models based on general Markov chain Monte Carlo methods often suffer from the mixing problem: the sampler easily gets trapped in local minima. Inspired by traditional annealing and tempering approaches, we demonstrate that increasing the rate of background Poisson noise in an SNN can flatten the energy landscape and facilitate mixing of the system. In addition, we show that with synaptic short-term plasticity (STP) the SNN can achieve more efficient mixing through local modulation of active attractors, eventually outperforming traditional benchmark models. We reveal diverse sampling statistics of SNNs induced by STP and finally study its application to conventional machine learning methods. Our work thereby highlights important computational consequences of biological features that might otherwise appear as mere artifacts of evolution.
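The flattening effect of increased background noise is analogous to lowering the inverse temperature in a classical sampler; a minimal Metropolis sketch on a hypothetical double-well energy (an invented stand-in, not the SNN model from the thesis) shows how a flatter landscape speeds mixing between modes:

```python
import numpy as np

rng = np.random.default_rng(3)

def E(x):
    """Double-well 1-D energy with modes at x = ±1; a chain can get trapped in either."""
    return (x**2 - 1.0)**2

def metropolis(beta, steps=20000):
    """Metropolis sampling at inverse temperature beta; lower beta = flatter landscape."""
    x, xs = 0.0, []
    for _ in range(steps):
        prop = x + rng.normal(scale=0.2)
        if rng.random() < np.exp(-beta * (E(prop) - E(x))):
            x = prop
        xs.append(x)
    return np.array(xs)

cold = metropolis(beta=20.0)  # sharp wells: the chain rarely crosses between modes
hot = metropolis(beta=1.0)    # flattened landscape: frequent switching between modes

def crossings(xs):
    """Count sign changes, i.e. transitions between the two modes."""
    return int(np.sum(np.sign(xs[:-1]) != np.sign(xs[1:])))
```

Raising the Poisson noise rate in the SNN plays the role of lowering `beta` here: barriers between attractors shrink, so the sampler switches modes far more often.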