
    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a `basin' of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. Comment: submitted to Scientific Reports
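The attractor mechanism the abstract describes can be illustrated with a minimal Hopfield-style sketch: Hebbian weights dig one point attractor per stored prototype, and a corrupted stimulus relaxes back to the memorized pattern. This is a generic binary-network toy, not the chip's spiking implementation; the network size, number of prototypes, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 3                              # neurons, stored prototypes (assumed)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian synaptic matrix: each stored pattern becomes a point attractor.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def relax(state, steps=20):
    """Deterministic relaxation: descend to the nearest attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# A noisy stimulus inside the basin of attraction of prototype 0 ...
stimulus = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
stimulus[flip] *= -1

# ... relaxes back to the memorized prototype (associative retrieval).
retrieved = relax(stimulus)
print("overlap with prototype:", retrieved @ patterns[0] / N)  # should be ~1.0
```

On-chip, the paper's point is that this weight matrix is not programmed in but emerges autonomously from stimulus-driven plasticity; the sketch above only shows the retrieval half of that loop.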

    Statistical Physics and Representations in Real and Artificial Neural Networks

    This document presents the material of two lectures on statistical physics and neural representations, delivered by one of us (R.M.) at the Fundamental Problems in Statistical Physics XIV summer school in July 2017. In the first part, we consider the neural representations of space (maps) in the hippocampus. We introduce an extension of the Hopfield model, able to store multiple spatial maps as continuous, finite-dimensional attractors. The phase diagram and dynamical properties of the model are analyzed. We then show how spatial representations can be dynamically decoded using an effective Ising model capturing the correlation structure in the neural data, and compare applications to data obtained from hippocampal multi-electrode recordings and by (sub)sampling our attractor model. In the second part, we focus on the problem of learning data representations in machine learning, in particular with artificial neural networks. We start by introducing data representations through some illustrations. We then analyze two important algorithms, Principal Component Analysis and Restricted Boltzmann Machines, with tools from statistical physics.
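Of the two algorithms analyzed in the lectures, Principal Component Analysis admits a particularly compact illustration: the learned representation is the projection of centered data onto the top eigenvectors of the empirical covariance. The sketch below uses made-up toy data; dimensions and noise level are assumptions, and it stands in for the idea rather than the lectures' statistical-physics analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 500 samples in 10 dimensions with two dominant directions (assumed).
latent = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
X = latent + 0.1 * rng.normal(size=(500, 10))

# PCA: diagonalize the empirical covariance of the centered data.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

# The top principal components define the learned low-dimensional representation.
top2 = eigvecs[:, -2:]
representation = Xc @ top2
print("variance captured by top 2 components:", eigvals[-2:].sum() / eigvals.sum())
```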

    Anticipatory Semantic Processes

    Why do anticipatory processes correspond to cognitive abilities of living systems? To be adapted to an environment, behaviors require at least i) internal representations of events occurring in the external environment; and ii) internal anticipations of possible events to occur in the external environment. Interactions of these two opposite but complementary cognitive properties lead to various patterns of experimental data on semantic processing. How can dynamic semantic processes be investigated? Experimental studies in cognitive psychology offer several advantages, such as: i) control of the semantic environment, for example words embedded in sentences; ii) methodological tools allowing the observation of anticipations and adapted oculomotor behavior during reading; and iii) the analysis of different anticipatory processes within the theoretical framework of semantic processing. What are the different types of semantic anticipations? Experimental data show that semantic anticipatory processes involve i) the coding in memory of sequences of words occurring in textual environments; ii) the anticipation of possible future words from currently perceived words; and iii) the selection of anticipated words as a function of the sequences of perceived words, achieved by anticipatory activations and inhibitory selection processes. How can anticipatory semantic processes be modeled? Localist or distributed neural network models can account for some types of semantic processes, anticipatory or not. Attractor neural networks coding temporal sequences are presented as good candidates for modeling anticipatory semantic processes, according to specific properties of the human brain such as i) auto-associative memory; ii) learning and memorization of sequences of patterns; and iii) anticipation of memorized patterns from previously perceived patterns.
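A classical way for an attractor network to anticipate the next pattern in a memorized sequence is to add asymmetric Hebbian couplings from each stored pattern to its successor. The sketch below is a generic sequence-attractor toy under that assumption, not the authors' model; the network size, sequence length, and the 1.5 anticipation gain are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

N, L = 200, 4                                   # neurons, sequence length (assumed)
seq = rng.choice([-1, 1], size=(L, N))          # "words" coded as binary patterns

# Symmetric term stabilizes each stored word; asymmetric term pushes the
# network state from word mu toward word mu+1 (the anticipation).
W_sym = seq.T @ seq / N
W_asym = seq[1:].T @ seq[:-1] / N

state = seq[0].astype(float)                    # perceive the first word
for step in range(6):
    # Synchronous update; the 1.5 gain weights anticipation against stability.
    state = np.sign(W_sym @ state + 1.5 * W_asym @ state)
    state[state == 0] = 1.0
    overlaps = seq @ state / N                  # overlap with each stored word
    print(f"step {step}: overlaps with words = {np.round(overlaps, 2)}")
```

Run as written, the overlaps show the state stepping through the sequence word by word and then settling on the last word, i.e. memorized patterns are anticipated from previously perceived ones.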

    Mammalian Brain As a Network of Networks

    Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (16-12-00077). The authors thank T. Kuznetsova for Fig. 6.

    A recurrent neural network with ever changing synapses

    A recurrent neural network with noisy input is studied analytically on the basis of a Discrete Time Master Equation, which is derived from a biologically realizable learning rule for the connection weights. A numerical study finds that the fixed points of the network dynamics are time dependent, implying that the brain's representation of a fixed piece of information (e.g., a word to be recognized) is not fixed in time. Comment: 17 pages, LaTeX, 4 figures
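The qualitative effect, a fixed point that drifts because the synapses never stop adapting, can be sketched with a toy simulation. This is not the paper's Master Equation analysis: the Hebbian-decay update, learning rate, and noise level below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100
pattern = rng.choice([-1, 1], size=N)           # the "fixed piece of information"
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0)

eta = 0.05                                      # learning rate (assumed)
snapshots = []
state = pattern.astype(float)

for t in range(200):
    # Noisy presentation of the same stimulus: ~10% of the signs flipped.
    noisy_input = pattern * np.where(rng.random(N) < 0.1, -1, 1)
    state = np.sign(W @ state + 0.5 * noisy_input)
    # The weights never stop changing: Hebbian update on the noisy activity,
    # with decay of the old couplings.
    W += eta * (np.outer(state, state) / N - W)
    np.fill_diagonal(W, 0)
    if t % 50 == 0:
        snapshots.append(np.sign(W @ pattern))  # current fixed-point estimate

# The attractor representing the same stimulus drifts over time.
for a, b in zip(snapshots, snapshots[1:]):
    print("overlap between successive attractor snapshots:", a @ b / N)
```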