
    Equilibrium Propagation and (Memristor-based) Oscillatory Neural Networks

    Weakly Connected Oscillatory Networks (WCONs) are bio-inspired models that exhibit associative memory properties and can be exploited for information processing. It has been shown that the nonlinear dynamics of WCONs can be reduced to equations for the phase variables if the oscillators admit stable limit cycles with nearly identical periods. Moreover, if the connections are symmetric, the phase deviation equation admits a gradient formulation, establishing a one-to-one correspondence between phase equilibria, limit cycles of the WCON, and minima of the system’s potential function. The overall objective of this work is to provide a simulated WCON based on memristive connections and Van der Pol oscillators that exploits the devices’ mem-conductance programmability to implement a novel local supervised learning algorithm for gradient models: Equilibrium Propagation (EP). Simulations of the phase dynamics of the WCON trained with EP show that the retrieval accuracy of the proposed design outperforms the current state of the art obtained with Hebbian learning.
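
    The EP training loop referenced above contrasts two relaxed states of a gradient model. Below is a minimal NumPy sketch of that two-phase update for a phase model, assuming a Kuramoto-type potential E(phi, W) = -1/2 * sum_ij W_ij cos(phi_i - phi_j) and a cosine cost on the output phases. These choices, and all function names, are illustrative assumptions and are not taken from the paper, which works with memristive couplings and Van der Pol oscillators.

        import numpy as np

        def relax(phi, W, targets=None, beta=0.0, out=None, steps=200, dt=0.05):
            # Gradient descent on E(phi) = -0.5 * sum_ij W_ij * cos(phi_i - phi_j),
            # optionally nudged by beta * C with C = sum_out (1 - cos(phi_out - target)).
            for _ in range(steps):
                grad = (W * np.sin(phi[:, None] - phi[None, :])).sum(axis=1)  # dE/dphi
                if beta != 0.0:
                    grad[out] += beta * np.sin(phi[out] - targets)            # beta * dC/dphi
                phi = phi - dt * grad
            return phi

        def ep_update(W, phi0, targets, out, beta=0.1, lr=0.01):
            # Two-phase Equilibrium Propagation step for symmetric couplings W.
            phi_free = relax(phi0, W)                                         # free phase
            phi_nudged = relax(phi_free, W, targets, beta=beta, out=out)      # weakly clamped phase

            def dE_dW(p):
                return -0.5 * np.cos(p[:, None] - p[None, :])                 # dE/dW_ij

            dW = -(1.0 / beta) * (dE_dW(phi_nudged) - dE_dW(phi_free))        # EP gradient estimate
            W = W + lr * (dW + dW.T) / 2.0                                    # keep couplings symmetric
            np.fill_diagonal(W, 0.0)
            return W, phi_free

        # Example: 8 coupled oscillators, the last two treated as outputs.
        rng = np.random.default_rng(1)
        W = 0.1 * rng.standard_normal((8, 8)); W = (W + W.T) / 2.0; np.fill_diagonal(W, 0.0)
        out, targets = np.array([6, 7]), np.array([0.0, np.pi])
        W, phi = ep_update(W, rng.uniform(0.0, 2.0 * np.pi, size=8), targets, out)

    The 1/beta contrast of the two equilibria is the EP gradient estimate; the symmetrization step mirrors the symmetric-connection assumption that gives the phase equation its gradient structure.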

    A survey of visual preprocessing and shape representation techniques

    This survey summarizes many recent theories and methods proposed for visual preprocessing and shape representation. It brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Hierarchically Modular Dynamical Neural Network Relaxing in a Warped Space: Basic Model and its Characteristics

    We propose a hierarchically modular, dynamical neural network model whose architecture minimizes a specifically designed energy function and defines its temporal characteristics. The model has an internal and an external space connected by a layered internetwork consisting of a pair of forward and backward subnets composed of static neurons (with an instantaneous time-course). Dynamical neurons with large time constants in the internal space determine the overall time-course. The model offers a framework in which the state variables of the network relax in a warped space, owing to the cooperation between dynamic and static neurons. We assume that the system operates in either a learning or an association mode, depending on the presence or absence of feedback paths and input ports. In the learning mode, synaptic weights in the internetwork are modified by strong inputs corresponding to repetitive neuronal bursting, which represents sinusoidal or quasi-sinusoidal waves in the short-term average density of nerve impulses or in the membrane potential. A two-dimensional mapping relationship can be formed by employing signals with different frequencies, based on the same mechanism as Lissajous curves. In the association mode, the speed of convergence to a goal point varies greatly with the mapping relationship of the previously trained internetwork, and owing to this property, the convergence trajectory in the two-dimensional model with the nonlinear mapping internetwork cannot go straight but instead must curve. We further introduce a constrained association mode with a given target trajectory and elucidate that, in the internal space, an output trajectory is generated that is mapped from the external space according to the inverse of the mapping relationship of the forward subnet. (Comment: 44 pages, 22 EPS figures)
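
    One way to read the "relaxation in a warped space" picture above, offered here as an interpretation of the abstract rather than as the paper's own equations: let u denote the internal (dynamical) variables with time constant tau, and let v = f(u) be the external variables produced instantaneously by the static forward subnet. If the slow dynamics perform gradient descent on an energy E defined over the external space, then

        \tau\,\dot{u} = -\nabla_u E\bigl(f(u)\bigr) = -J_f(u)^{\top}\,\nabla_v E(v)\big|_{v=f(u)},
        \qquad
        \dot{v} = J_f(u)\,\dot{u} = -\tfrac{1}{\tau}\, J_f J_f^{\top}\, \nabla_v E(v).

    The external trajectory therefore descends E through the state-dependent metric J_f J_f^T; when f is nonlinear this metric changes along the path, consistent with the observation that the convergence trajectory curves rather than heading straight for the goal point.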

    Attractor Dynamics in Feedforward Neural Networks

    We study probabilistic generative models parameterized by feedforward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unit to those of its Markov blanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.
    1 Introduction. Attractor neural networks lend a computational purpose to continuous dynamical systems. Celebrated uses of these networks include the storage of associative memories (Amit, 1989), the reconstruction of noisy images (Koch et al., 1986), and the search for shortest paths in the traveling salesman problem…
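
    For context, the symmetric-network counterpart invoked above is the classical Cohen-Grossberg/Hopfield construction, recalled here from the standard literature (background, not a result of this paper): for a continuous network with symmetric weights W_ij = W_ji, a monotonically increasing activation g, and dynamics

        \tau\,\dot{u}_i = -u_i + \sum_j W_{ij}\, g(u_j) + I_i, \qquad v_i = g(u_i),

    the function

        E(v) = -\tfrac{1}{2}\sum_{i,j} W_{ij} v_i v_j - \sum_i I_i v_i + \sum_i \int_0^{v_i} g^{-1}(s)\, ds

    satisfies \dot{E} = -\tau \sum_i g'(u_i)\,\dot{u}_i^{\,2} \le 0, so E is a Lyapunov function and the dynamics converge to its local minima. The paper's contribution is an analogous Lyapunov construction for the case where the generative model is a layered, directed (feedforward) network rather than a symmetric one.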

    Backpropagation at the Infinitesimal Inference Limit of Energy-Based Models: Unifying Predictive Coding, Equilibrium Propagation, and Contrastive Hebbian Learning

    How the brain performs credit assignment is a fundamental unsolved problem in neuroscience. Many 'biologically plausible' algorithms have been proposed, which compute gradients that approximate those computed by backpropagation (BP) and which operate in ways that more closely satisfy the constraints imposed by neural circuitry. Many such algorithms use the framework of energy-based models (EBMs), in which all free variables in the model are optimized to minimize a global energy function. However, in the literature these algorithms exist in isolation, and no unified theory links them together. Here, we provide a comprehensive theory of the conditions under which EBMs can approximate BP, which lets us unify many of the BP approximation results in the literature (namely predictive coding, equilibrium propagation, and contrastive Hebbian learning) and demonstrate that their approximation to BP arises from a simple and general mathematical property of EBMs at free-phase equilibrium. This property can then be exploited in different ways with different energy functions, and these specific choices yield a family of BP-approximating algorithms, which both includes the known results in the literature and can be used to derive new ones. (Comment: 31/05/22 initial upload; 22/06/22 change corresponding author; 03/08/22 revision)
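
    As a concrete instance of the free-phase-equilibrium property, one of the three algorithms unified here, equilibrium propagation, is usually stated as follows (this is the standard form from the EP literature, not the paper's more general theorem): with free and weakly nudged equilibria

        s^0_\theta = \arg\min_s E(\theta, s), \qquad
        s^\beta_\theta = \arg\min_s \bigl[\, E(\theta, s) + \beta\, C(s) \,\bigr],

    the gradient of the cost at the free equilibrium is recovered from a contrast of partial derivatives of the energy alone:

        \frac{d}{d\theta}\, C\!\left(s^0_\theta\right)
        = \lim_{\beta \to 0}\, \frac{1}{\beta}
        \left[ \frac{\partial E}{\partial \theta}\!\left(\theta, s^\beta_\theta\right)
             - \frac{\partial E}{\partial \theta}\!\left(\theta, s^0_\theta\right) \right].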

    Learning without neurons in physical systems

    Learning is traditionally studied in biological or computational systems. The power of learning frameworks in solving hard inverse problems provides an appealing case for the development of 'physical learning', in which physical systems adopt desirable properties on their own, without computational design. It was recently realized that large classes of physical systems can physically learn through local learning rules, autonomously adapting their parameters in response to observed examples of use. We review recent work in the emerging field of physical learning, describing theoretical and experimental advances in areas ranging from molecular self-assembly to flow networks and mechanical materials. Physical learning machines offer several practical advantages over computer-designed ones, in particular not requiring an accurate model of the system and being able to adapt autonomously to changing needs over time. As theoretical constructs, physical learning machines afford a novel perspective on how physical constraints modify abstract learning theory. (Comment: 25 pages, 6 figures)

    The Predictive Forward-Forward Algorithm

    We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems. Specifically, we design a novel dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit. Notably, the system integrates learnable lateral competition, noise injection, and elements of predictive coding, an emerging and viable neurobiological process theory of cortical function, with the forward-forward (FF) adaptation scheme. Furthermore, PFF efficiently learns to propagate learning signals and updates synapses with forward passes only, eliminating key structural and computational constraints imposed by backpropagation-based schemes. Beyond its computational advantages, the PFF process could prove useful for understanding the learning mechanisms of biological neurons that use local signals despite lacking feedback connections. We run experiments on image data and demonstrate that the PFF procedure works as well as backpropagation, offering a promising brain-inspired algorithm for classifying, reconstructing, and synthesizing data patterns. (Comment: more revisions/edits, update to the key diagram depicting the PFF process, link to algorithm/simulation code (repo) now included)
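
    The forward-pass-only credit assignment that PFF builds on can be illustrated with the layer-local "goodness" update of the forward-forward scheme it extends. The NumPy sketch below shows only that FF ingredient, under assumed choices (a ReLU layer, squared-activity goodness, a logistic loss against a threshold, and input length-normalization); the PFF-specific generative/representation circuits, lateral competition, and noise injection are not shown, and all names and hyperparameters are illustrative.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def ff_layer_update(W, x_pos, x_neg, lr=0.03, theta=2.0):
            # One layer-local step: push the "goodness" (sum of squared ReLU activities)
            # of positive data above theta and that of negative data below it.
            grad = np.zeros_like(W)
            for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
                xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)  # length-normalize the input
                h = np.maximum(0.0, xn @ W.T)                               # (batch, hidden) ReLU activities
                g = (h ** 2).sum(axis=1)                                    # goodness per sample
                # loss = softplus(-sign * (g - theta)); d loss / d g = -sign * sigmoid(-sign * (g - theta))
                dg = -sign * sigmoid(-sign * (g - theta))
                grad += (dg[:, None] * 2.0 * h).T @ xn                      # chain rule through g -> h -> W
            return W - lr * grad / (len(x_pos) + len(x_neg))

        # Example: one 784 -> 500 layer updated on a batch of positive/negative stand-ins.
        rng = np.random.default_rng(0)
        W = 0.01 * rng.standard_normal((500, 784))
        x_pos, x_neg = rng.random((32, 784)), rng.random((32, 784))
        W = ff_layer_update(W, x_pos, x_neg)

    Stacking such layers, each trained only from its own positive/negative goodness, is what removes the need for a backward pass.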