
    Quantum annealing for the number partitioning problem using a tunable spin glass of ions

    Exploiting quantum properties to outperform classical ways of information processing is an outstanding goal of modern physics. A promising route is quantum simulation, which aims at implementing relevant and computationally hard problems in controllable quantum systems. Here we demonstrate that in a trapped ion setup, with present day technology, it is possible to realize a spin model of the Mattis type that exhibits spin glass phases. Remarkably, our method produces the glassy behavior without the need for any disorder potential, just by controlling the detuning of the spin-phonon coupling. Applying a transverse field, the system can be used to benchmark quantum annealing strategies which aim at reaching the ground state of the spin glass starting from the paramagnetic phase. In the vicinity of a phonon resonance, the problem maps onto number partitioning, and instances which are difficult to address classically can be implemented. Comment: accepted version (11 pages, 7 figures)
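    The mapping mentioned at the end of the abstract is easy to state classically: assign each number n_i a spin s_i = ±1, and the squared signed sum (Σ_i s_i n_i)² is a Mattis-type Ising energy whose ground states are the balanced partitions. A minimal brute-force sketch (the instance and the exhaustive search are illustrative, not the trapped-ion protocol):

    ```python
    import itertools
    import numpy as np

    def partition_energy(spins, numbers):
        """Mattis-type Ising energy E = (sum_i s_i n_i)^2: zero exactly when
        the spin-up and spin-down subsets have equal sums."""
        return float(np.dot(spins, numbers)) ** 2

    numbers = np.array([4, 7, 1, 9, 5, 2])  # toy instance (illustrative)

    # Brute-force ground-state search over all 2^N spin configurations;
    # a quantum annealer would instead anneal away a transverse field.
    best = min(
        (np.array(s) for s in itertools.product([-1, 1], repeat=len(numbers))),
        key=lambda s: partition_energy(s, numbers),
    )
    print("spins:", best, "energy:", partition_energy(best, numbers))
    ```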

    Statistical Physics and Representations in Real and Artificial Neural Networks

    This document presents the material of two lectures on statistical physics and neural representations, delivered by one of us (R.M.) at the Fundamental Problems in Statistical Physics XIV summer school in July 2017. In the first part, we consider the neural representations of space (maps) in the hippocampus. We introduce an extension of the Hopfield model, able to store multiple spatial maps as continuous, finite-dimensional attractors. The phase diagram and dynamical properties of the model are analyzed. We then show how spatial representations can be dynamically decoded using an effective Ising model capturing the correlation structure in the neural data, and compare applications to data obtained from hippocampal multi-electrode recordings and by (sub)sampling our attractor model. In the second part, we focus on the problem of learning data representations in machine learning, in particular with artificial neural networks. We start by introducing data representations through some illustrations. We then analyze two important algorithms, Principal Component Analysis and Restricted Boltzmann Machines, with tools from statistical physics.
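    As a pointer to the first lecture's starting point, here is a minimal sketch of the standard Hopfield model that the notes extend to multiple spatial maps; the sizes, pattern statistics, and retrieval test below are illustrative, and the continuous-attractor extension itself is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5                                # neurons, stored patterns (toy sizes)
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-coupling.
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)

    # Corrupt pattern 0, then relax with asynchronous zero-temperature updates.
    state = patterns[0].copy()
    state[rng.choice(N, size=20, replace=False)] *= -1
    for _ in range(5):
        for i in rng.permutation(N):
            state[i] = 1 if J[i] @ state >= 0 else -1

    print("overlap with stored pattern:", state @ patterns[0] / N)
    ```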

    Unstable Dynamics, Nonequilibrium Phases and Criticality in Networked Excitable Media

    Here we numerically study a model of excitable media, namely, a network with occasionally quiet nodes and connection weights that vary with activity on a short time scale. Even in the absence of stimuli, the network exhibits unstable dynamics, nonequilibrium phases (including one in which the global activity wanders irregularly among attractors) and 1/f noise as the system falls into its most irregular behavior. A net result is resilience, which enables an efficient search of the model's attractor space and can explain the origin of certain phenomenology in neural, genetic, and ill-condensed matter systems. By extensive computer simulation we also address a previously conjectured relation between observed power-law distributions and the occurrence of a "critical state" during the functioning of (e.g.) cortical networks, and describe the precise nature of such criticality in the model. Comment: 18 pages, 9 figures
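    A hedged sketch of this model class, assuming binary excitable units on a sparse random graph with short-term synaptic depression; the update rules and parameters below are illustrative stand-ins, not the paper's exact model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, steps = 200, 500
    W = (rng.random((N, N)) < 0.05) * rng.random((N, N))  # sparse weights, W[i, j]: j -> i
    np.fill_diagonal(W, 0.0)

    s = (rng.random(N) < 0.1).astype(float)   # initial binary activity
    x = np.ones(N)                            # presynaptic resources (short-term depression)
    theta, beta = 0.5, 20.0                   # firing threshold and gain (illustrative)

    activity = []
    for _ in range(steps):
        h = (W * x) @ s                       # fields through activity-depressed synapses
        s = (rng.random(N) < 1.0 / (1.0 + np.exp(-beta * (h - theta)))).astype(float)
        x += 0.1 * (1.0 - x) - 0.4 * x * s    # resources recover slowly, deplete with firing
        activity.append(s.mean())

    print("mean network activity:", np.mean(activity))
    ```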

    From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, one highly channeled and one distributed modifying, connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis, since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematic.
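    The reported effect of collateral geometry on the timing and magnitude of voltage at the spike initiation zone can be illustrated with a toy leaky integrator carrying a delayed feedback branch; the delays, gain, and time constant below are hypothetical, not fitted to macular data:

    ```python
    import numpy as np

    def spike_zone_voltage(delay, gain=0.5, T=200, tau=10.0):
        """Toy leaky integrator driven by a brief input pulse plus a delayed
        feedback branch standing in for an afferent collateral. Hypothetical
        parameters, not fitted to macular data."""
        v = np.zeros(T)
        pulse = np.zeros(T)
        pulse[10:15] = 1.0                    # brief afferent input
        for t in range(1, T):
            fb = gain * v[t - delay] if t >= delay else 0.0
            v[t] = v[t - 1] + (-v[t - 1] + pulse[t] + fb) / tau
        return v

    for d in (2, 10, 30):                     # longer collateral path -> longer delay
        v = spike_zone_voltage(delay=d)
        print(f"delay {d:2d}: peak {v.max():.3f} at t = {v.argmax()}")
    ```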

    Searching for collective behavior in a network of real neurons

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. We show that such "K-pairwise" models, systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, which allows for error correction. Comment: 24 pages, 19 figures
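    A sketch of sampling from a "K-pairwise" energy, assuming the form E(σ) = -Σ_i h_i σ_i - ½ Σ_ij J_ij σ_i σ_j - V(K), with K the total spike count; the fields, couplings, and synchrony potential below are random stand-ins, not values fitted to retinal data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 20
    h = rng.normal(-1.0, 0.3, N)                    # fields (random stand-ins)
    J = rng.normal(0.0, 0.1, (N, N))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    V = -0.05 * (np.arange(N + 1) - 3.0) ** 2       # toy synchrony potential V(K)

    def energy(s):
        return -(h @ s + 0.5 * s @ J @ s + V[s.sum()])

    # Single-spin-flip Metropolis sampling of binary spike words.
    s = (rng.random(N) < 0.2).astype(int)
    counts = np.zeros(N + 1)
    for _ in range(50_000):
        i = rng.integers(N)
        s_new = s.copy()
        s_new[i] ^= 1
        if rng.random() < np.exp(energy(s) - energy(s_new)):
            s = s_new
        counts[s.sum()] += 1

    print("sampled P(K):", np.round(counts / counts.sum(), 3))
    ```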

    On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models

    This study investigates the effects of Markov chain Monte Carlo (MCMC) sampling in unsupervised Maximum Likelihood (ML) learning. Our attention is restricted to the family of unnormalized probability densities for which the negative log density (or energy function) is a ConvNet. We find that many of the techniques used to stabilize training in previous studies are not necessary. ML learning with a ConvNet potential requires only a few hyper-parameters and no regularization. Using this minimal framework, we identify a variety of ML learning outcomes that depend solely on the implementation of MCMC sampling. On one hand, we show that it is easy to train an energy-based model which can sample realistic images with short-run Langevin. ML can be effective and stable even when MCMC samples have much higher energy than true steady-state samples throughout training. Based on this insight, we introduce an ML method with purely noise-initialized MCMC, high-quality short-run synthesis, and the same budget as ML with informative MCMC initialization such as CD or PCD. Unlike previous models, our energy model can obtain realistic high-diversity samples from a noise signal after training. On the other hand, ConvNet potentials learned with non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data, because long-run MCMC samples differ greatly from observed images. We show that it is much harder to train a ConvNet potential to learn a steady-state over realistic images. To our knowledge, long-run MCMC samples of all previous models lose the realism of short-run samples. With correct tuning of Langevin noise, we train the first ConvNet potentials for which long-run and steady-state MCMC samples are realistic images. Comment: Code available at: https://github.com/point0bar1/ebm-anatom
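    The short-run Langevin update at the heart of the study is x_{k+1} = x_k - (ε²/2) ∇U(x_k) + ε z_k with z_k ~ N(0, I); a minimal sketch with a toy quadratic energy standing in for the ConvNet potential (step size and chain length are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def grad_U(x):
        """Gradient of a toy quadratic energy U(x) = ||x||^2 / 2, standing in
        for the autograd gradient of a ConvNet potential."""
        return x

    def short_run_langevin(x0, eps=0.1, steps=100):
        """x_{k+1} = x_k - (eps^2 / 2) grad U(x_k) + eps * N(0, I)."""
        x = x0.copy()
        for _ in range(steps):
            x += -(eps ** 2 / 2.0) * grad_U(x) + eps * rng.standard_normal(x.shape)
        return x

    # Purely noise-initialized short-run chains, as in the paper's minimal recipe.
    x0 = rng.uniform(-1.0, 1.0, size=(64, 10))
    samples = short_run_langevin(x0)
    print("per-dimension std:", samples.std())  # drifts toward ~1, the steady state of this toy U
    ```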