
    Optimal coloured perceptrons

    Ashkin-Teller type perceptron models are introduced. Their maximal capacity per number of couplings is calculated within a first-step replica-symmetry-breaking Gardner approach. The results are compared with extensive numerical simulations using several algorithms.
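
    A quick way to build intuition for such capacity results is a direct storage experiment. The sketch below is an illustrative stand-in, not the paper's Ashkin-Teller model or its algorithms: it estimates how often a plain binary perceptron can store P = alpha*N random patterns as the load alpha grows, where the classic Gardner capacity is alpha_c = 2.

    ```python
    # Hedged sketch: storage-capacity probe for a plain binary perceptron.
    # The Ashkin-Teller ("coloured") variant of the paper would shift the
    # capacity threshold; this baseline only illustrates the methodology.
    import numpy as np

    rng = np.random.default_rng(0)

    def storable(N, P, epochs=200):
        """Try to store P random +/-1 patterns with perceptron learning."""
        xi = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
        sigma = rng.choice([-1.0, 1.0], size=P)     # random target labels
        w = np.zeros(N)
        for _ in range(epochs):
            fields = sigma * (xi @ w)
            wrong = fields <= 0                     # unstabilized patterns
            if not wrong.any():
                return True                         # all patterns stored
            # classic perceptron update on all unstabilized patterns
            w += (sigma[wrong, None] * xi[wrong]).sum(axis=0) / N
        return False

    N = 100
    for alpha in (1.0, 1.5, 2.0, 2.5):
        rate = np.mean([storable(N, int(alpha * N)) for _ in range(20)])
        print(f"alpha = {alpha:.1f}: stored in {rate:.0%} of runs")
    ```

    The success rate should drop sharply near alpha_c = 2 for this simple case; the replica calculation in the paper predicts where the analogous drop occurs for the coloured couplings.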

    Adaptation to criticality through organizational invariance in embodied agents

    Many biological and cognitive systems do not operate deep within one or another regime of activity. Instead, they are poised at critical points located at phase transitions in their parameter space. The pervasiveness of criticality suggests that there may be general principles inducing this behaviour, yet there is no well-founded theory of how criticality is generated across such a wide span of levels and contexts. In order to explore how criticality might emerge from general adaptive mechanisms, we propose a simple learning rule that maintains an internal organizational structure drawn from a specific family of systems at criticality. We implement the mechanism in artificial embodied agents controlled by a neural network maintaining a correlation structure randomly sampled from an Ising model at critical temperature. Agents are evaluated in two classical reinforcement learning scenarios: the Mountain Car and the Acrobot double pendulum. In both cases the neural controller appears to reach a point of criticality, which coincides with a transition point between two regimes of the agent's behaviour. These results suggest that adaptation to criticality could serve as a general adaptive mechanism in some circumstances, providing an alternative explanation for the pervasive presence of criticality in biological and cognitive systems.
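
    The central computational ingredient, driving a network's correlations toward a reference correlation structure, can be sketched as a moment-matching (Boltzmann-style) update. Everything below is an assumed illustration: the reference matrix stands in for correlations sampled from a critical Ising model, and the embodied reinforcement-learning setting of the paper is not reproduced.

    ```python
    # Hedged sketch: adapt couplings J of a small Ising-like system so that its
    # sampled pairwise correlations <s_i s_j> approach a reference matrix C_ref.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 16

    # Stand-in reference correlations (the paper samples these from an Ising
    # model at critical temperature); here, a generic valid correlation matrix.
    B = rng.normal(size=(N, 3 * N))
    C = B @ B.T
    d = np.sqrt(np.diag(C))
    C_ref = C / np.outer(d, d)

    def sample_correlations(J, steps=20000, burn=5000):
        """Metropolis estimate of <s_i s_j> for couplings J (beta = 1)."""
        s = rng.choice([-1.0, 1.0], size=N)
        acc = np.zeros((N, N))
        count = 0
        for t in range(steps):
            i = rng.integers(N)
            dE = 2.0 * s[i] * (J[i] @ s)        # energy change of flipping s_i
            if dE < 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
            if t >= burn:
                acc += np.outer(s, s)
                count += 1
        return acc / count

    J = np.zeros((N, N))
    eta = 0.1
    for epoch in range(20):
        C_model = sample_correlations(J)
        J += eta * (C_ref - C_model)            # moment-matching update
        np.fill_diagonal(J, 0.0)
        err = np.abs(C_ref - C_model)[np.triu_indices(N, 1)].mean()
        print(f"epoch {epoch:2d}: mean correlation error {err:.3f}")
    ```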

    Dynamics of Interacting Neural Networks

    The dynamics of interacting perceptrons is solved analytically. For a directed flow of information the system runs into a state which has a higher symmetry than the topology of the model. A symmetry-breaking phase transition is found with increasing learning rate. In addition, it is shown that a system of interacting perceptrons which is trained on the history of its minority decisions develops a good strategy for the problem of adaptive competition known as the Bar Problem or Minority Game.
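
    The minority-game result lends itself to a direct sketch: an ensemble of perceptrons acts on the shared history of past minority decisions, and each is trained toward the side that turned out to be in the minority. The parameter choices below (number of agents, history length, learning rate) are illustrative assumptions, not values from the paper.

    ```python
    # Hedged sketch: perceptrons playing the Minority Game on the history of
    # their own collective minority decisions.
    import numpy as np

    rng = np.random.default_rng(2)
    K, M, T = 101, 16, 20000        # odd agent count, history length, rounds
    eta = 0.05                      # assumed learning rate

    W = rng.normal(size=(K, M))                 # one perceptron per agent
    h = rng.choice([-1.0, 1.0], size=M)         # shared history of minority signs
    totals = []

    for t in range(T):
        acts = np.sign(W @ h)                   # each agent picks a side
        acts[acts == 0] = 1.0
        total = acts.sum()
        minority = -np.sign(total)              # side chosen by fewer agents
        W += eta * minority * h / M             # Hebbian step toward the minority
        h = np.roll(h, 1)
        h[0] = minority                         # prepend the newest minority sign
        totals.append(total)

    sigma2 = np.mean(np.square(totals[T // 2:])) / K
    print(f"volatility sigma^2 / K = {sigma2:.2f} (random agents give ~1)")
    ```

    A volatility below 1 indicates that the trained ensemble coordinates better than independent random agents, which is the sense in which it "develops a good strategy".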

    Global analysis of parallel analog networks with retarded feedback

    We analyze the retrieval dynamics of analog "neural" networks with clocked sigmoid elements and multiple signal delays. Proving a conjecture by Marcus and Westervelt, we show that for delay-independent symmetric coupling strengths, the only attractors are fixed points and periodic limit cycles. The same result applies to a larger class of asymmetric networks that may be utilized to store temporal associations with a cyclic structure. We discuss implications for various learning schemes in the space-time domain.
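
    A minimal sketch of such a network, under assumed couplings and delays, iterates the clocked dynamics and watches for a recurrence of the full delayed state; by the result discussed above, only fixed points and periodic limit cycles should appear. The rounding-based cycle detection is an approximation.

    ```python
    # Hedged sketch: clocked sigmoid network with per-connection delays and
    # delay-independent symmetric couplings, iterated until the trajectory
    # (including its delay history) revisits a previous state.
    import numpy as np

    rng = np.random.default_rng(3)
    N, D = 8, 3                                 # units, maximum delay

    J = rng.normal(size=(N, N)) / np.sqrt(N)
    J = (J + J.T) / 2.0                         # symmetric coupling strengths
    tau = rng.integers(0, D + 1, size=(N, N))   # per-connection signal delays

    hist = [rng.uniform(-1, 1, size=N) for _ in range(D + 1)]
    seen = {}

    for t in range(5000):
        past = np.stack(hist[-(D + 1):])        # oldest ... newest state
        # x_i(t+1) = tanh( sum_j J_ij * x_j(t - tau_ij) )
        x = np.tanh(np.array([
            sum(J[i, j] * past[D - tau[i, j], j] for j in range(N))
            for i in range(N)
        ]))
        hist.append(x)
        key = tuple(np.round(np.stack(hist[-(D + 1):]).ravel(), 6))
        if key in seen:                         # delayed state revisited
            period = t - seen[key]
            print("fixed point" if period == 1 else f"limit cycle, period {period}")
            break
        seen[key] = t
    else:
        print("no recurrence detected within the horizon")
    ```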

    Statistical Physics and Representations in Real and Artificial Neural Networks

    This document presents the material of two lectures on statistical physics and neural representations, delivered by one of us (R.M.) at the Fundamental Problems in Statistical Physics XIV summer school in July 2017. In the first part, we consider the neural representations of space (maps) in the hippocampus. We introduce an extension of the Hopfield model, able to store multiple spatial maps as continuous, finite-dimensional attractors. The phase diagram and dynamical properties of the model are analyzed. We then show how spatial representations can be dynamically decoded using an effective Ising model capturing the correlation structure in the neural data, and compare applications to data obtained from hippocampal multi-electrode recordings and by (sub)sampling our attractor model. In the second part, we focus on the problem of learning data representations in machine learning, in particular with artificial neural networks. We start by introducing data representations through some illustrations. We then analyze two important algorithms, Principal Component Analysis and Restricted Boltzmann Machines, with tools from statistical physics.
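
    Of the two algorithms analyzed in the second part, Principal Component Analysis is the easiest to sketch. The toy example below (data, dimensions, and noise level are all assumptions) extracts a low-dimensional representation from the eigendecomposition of the empirical covariance matrix.

    ```python
    # Minimal PCA sketch: project centered data onto the top principal
    # directions of its empirical covariance matrix.
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy data: noisy samples around a 2-dimensional subspace of R^20
    latent = rng.normal(size=(500, 2))
    mixing = rng.normal(size=(2, 20))
    X = latent @ mixing + 0.1 * rng.normal(size=(500, 20))

    Xc = X - X.mean(axis=0)                     # center the data
    C = Xc.T @ Xc / (len(Xc) - 1)               # empirical covariance
    eigval, eigvec = np.linalg.eigh(C)          # ascending eigenvalues
    order = np.argsort(eigval)[::-1]

    k = 2
    components = eigvec[:, order[:k]]           # top-k principal directions
    Z = Xc @ components                         # low-dimensional representation
    explained = eigval[order[:k]].sum() / eigval.sum()
    print(f"top-{k} components explain {explained:.1%} of the variance")
    ```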

    Avalanches in self-organized critical neural networks: A minimal model for the neural SOC universality class

    The brain keeps its overall dynamics in a corridor of intermediate activity, and it has been a long-standing question what mechanism could achieve this. Ideas from statistical physics have long suggested that this homeostasis of brain activity could occur even without a central regulator, via self-organization at the level of neurons and their interactions alone. Such physical mechanisms, from the class of self-organized criticality, exhibit characteristic dynamical signatures similar to the seismic activity associated with earthquakes. Measurements of cortical resting activity showed the first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent, more accurate measurements allowed for a detailed comparison with the scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. Here we compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches the recent experimental data and its interpretation in terms of dynamical signatures of criticality in the brain. The combination of signatures for criticality, namely power-law distributions of avalanche sizes and durations together with a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. The spin model is a candidate for a minimal model of a self-organized critical adaptive network in the universality class of neural criticality. As a prototype model, it provides the background for models that include more biological detail yet share the same universality class characteristic of the homeostasis of activity in the brain.
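
    The hallmark signature mentioned here, power-law distributed avalanche sizes, can be illustrated with a critical branching process. This is a standard stand-in for criticality, not the paper's adaptive spin model; at branching ratio 1 the avalanche-size distribution follows P(S) ~ S^(-3/2).

    ```python
    # Hedged sketch: avalanche sizes in a critical branching process
    # (branching ratio = n_children * p = 1 at p = 0.5, n_children = 2).
    import numpy as np

    rng = np.random.default_rng(5)

    def avalanche_size(p=0.5, n_children=2, cap=10**6):
        """One avalanche: each active unit tries to activate n_children
        units, each with probability p, until activity dies out."""
        active, size = 1, 0
        while active and size < cap:
            size += active
            active = rng.binomial(n_children * active, p)
        return size

    sizes = np.array([avalanche_size() for _ in range(20000)])
    for s in (1, 10, 100, 1000):
        frac = np.mean((sizes >= s) & (sizes < 10 * s))
        print(f"P({s} <= S < {10 * s}) = {frac:.4f}")
    ```

    For a S^(-3/2) power law, the probability mass per decade decays by roughly a factor of 10^(-1/2) ≈ 0.32, which the printed fractions should approximate.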

    Machine-learning nonstationary noise out of gravitational-wave detectors

    Signal extraction out of background noise is a common challenge in high-precision physics experiments, where the measurement output is often a continuous data stream. To improve the signal-to-noise ratio of the detection, witness sensors are often used to independently measure background noises and subtract them from the main signal. If the noise coupling is linear and stationary, optimal techniques already exist and are routinely implemented in many experiments. However, when the noise coupling is nonstationary, linear techniques often fail or are suboptimal. Inspired by the properties of the background noise in gravitational-wave detectors, this work develops a novel algorithm to efficiently characterize and remove nonstationary noise couplings, provided there exist witnesses of the noise source and of the modulation. The algorithm is described in its most general formulation, and its efficiency is demonstrated with examples from the data of the Advanced LIGO gravitational-wave observatory, where we obtain an improvement of the detector's gravitational-wave reach without introducing any bias in the source parameter estimation.
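
    The key idea, regressing the target channel on the product of the noise witness and the modulation witness rather than on the noise witness alone, can be sketched in a few lines. The toy data, coupling coefficients, and least-squares fit below are assumed illustrations of the concept, not the paper's algorithm or the LIGO pipeline.

    ```python
    # Hedged sketch: removing a nonstationary noise coupling d = s + a*w + b*m*w
    # by regressing on [w, m*w], where w witnesses the noise source and m
    # witnesses the slow modulation of its coupling.
    import numpy as np

    rng = np.random.default_rng(6)
    T = 100_000

    signal = rng.normal(scale=0.1, size=T)          # stand-in target signal
    w = rng.normal(size=T)                          # witness of the noise source
    m = 1.0 + 0.5 * np.sin(np.linspace(0, 40, T))   # witness of the modulation

    d = signal + 0.3 * w + 0.8 * m * w              # data with modulated coupling

    # Least-squares fit on [w, m*w]; the fitted part is subtracted from the data.
    basis = np.column_stack([w, m * w])
    coef, *_ = np.linalg.lstsq(basis, d, rcond=None)
    cleaned = d - basis @ coef

    lin = d - w * (np.dot(w, d) / np.dot(w, w))     # best static linear subtraction
    print(f"residual std, linear-only witness : {lin.std():.3f}")
    print(f"residual std, with modulated term : {cleaned.std():.3f}")
    print(f"true signal std                   : {signal.std():.3f}")
    ```

    The stationary fit leaves the modulated part of the coupling in the residual, while adding the m*w regressor recovers the underlying signal level.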