12 research outputs found

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of the primary visual cortex of primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity must efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, such efficient coding is believed to be achieved through competition across neurons that generates a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair: by helping to optimize statistical competition across neurons, homeostasis is crucial for a more efficient solution to the emergence of independent components.
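
    The "fair competition" idea lends itself to a compact illustration. The sketch below pairs a greedy matching-pursuit-style encoder with a homeostatic gain that boosts rarely selected dictionary atoms so that all atoms are used at roughly equal rates; the update rule, names, and parameters are illustrative assumptions, not the exact algorithm from the paper.

```python
# A minimal sketch (not the paper's exact algorithm) of sparse coding with a
# homeostatic gain that equalizes how often each atom/neuron wins the
# competition. All names and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_atoms = 64, 32            # e.g. 8x8 image patches, 32 dictionary atoms
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms

target = 1.0 / n_atoms                # "fair" competition: equal selection rates
usage = np.full(n_atoms, target)      # running estimate of selection frequency
gain = np.ones(n_atoms)               # homeostatic gains, one per neuron

def encode(x, n_active=4, eta_h=0.01):
    """Greedy (matching-pursuit-style) sparse code; gains bias the competition."""
    residual = x.astype(float).copy()
    code = np.zeros(n_atoms)
    for _ in range(n_active):
        corr = D.T @ residual
        k = int(np.argmax(gain * np.abs(corr)))   # homeostasis modulates selection
        code[k] += corr[k]
        residual -= corr[k] * D[:, k]
        usage[:] = (1 - eta_h) * usage            # update selection statistics
        usage[k] += eta_h
    gain[:] = np.exp(target - usage)              # boost rarely selected neurons
    return code

patch = rng.standard_normal(n_pixels)
print(encode(patch).nonzero()[0])                 # indices of the active neurons
```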

    Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons

    The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in-/anti-phase locking of pairs carry over to synchronization and clustering in larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike-frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
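
    To make the PRC measurement concrete, here is a minimal sketch that Euler-integrates the aEIF model and estimates the PRC directly: a small depolarizing kick is delivered at different phases of the firing cycle and the resulting spike-time shift is recorded. Parameter values are generic illustrative choices, not those used in the study.

```python
# Minimal aEIF simulation with a direct PRC measurement; all parameter values
# are illustrative textbook-style assumptions, not the paper's.
import numpy as np

C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0    # pF, nS, mV, mV, mV
a, b, tau_w = 2.0, 40.0, 200.0                         # nS, pA, ms (adaptation)
Vr, Vcut, I0 = -58.0, 0.0, 500.0                       # mV, mV, pA
dt = 0.05                                              # ms

def run(t_max=2000.0, kick_time=None, dV_kick=2.0):
    """Euler-integrate the aEIF model; optionally add a brief voltage kick."""
    V, w, spikes, kicked = EL, 0.0, [], False
    for step in range(int(t_max / dt)):
        t = step * dt
        if kick_time is not None and not kicked and t >= kick_time:
            V += dV_kick              # transient excitatory perturbation
            kicked = True
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I0) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:                 # spike: reset plus spike-triggered adaptation
            V, w = Vr, w + b
            spikes.append(t)
    return spikes

base = run()
t0, T = base[10], base[11] - base[10]     # a reference spike and the period
for phase in (0.2, 0.5, 0.8):
    pert = run(kick_time=t0 + phase * T)
    nxt = min(s for s in pert if s > t0 + phase * T)
    shift = (base[11] - nxt) / T          # positive shift = phase advance
    print(f"PRC({phase:.1f}) ~ {shift:+.3f}")
```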

    Adaptive Robotic Control Driven by a Versatile Spiking Cerebellar Network

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded in the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different natures, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases were designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task, and a perturbed arm-reaching task operating in closed loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus, or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust the timing and gain of its motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish, and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, the robustness and generalizability of real-time control were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction, and learning functions.
    Funding: European Union (Human Brain Project): REALNET FP7-ICT270434, CEREBNET FP7-ITN238686, HBP-60410
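
    As a concrete illustration of the plasticity mechanism, the sketch below implements a rate-based, bidirectional parallel fiber-Purkinje cell rule: synapses active together with a climbing-fiber error signal are depressed (LTD), and slowly potentiated otherwise (LTP), so the Purkinje output dips during acquisition and recovers during extinction. The rates, constants, and toy task are illustrative assumptions, not the paper's spiking implementation.

```python
# Minimal rate-based sketch of a bidirectional PF->PC plasticity rule
# (an illustrative stand-in for the paper's spiking rule and robot tasks).
import numpy as np

rng = np.random.default_rng(1)
n_pf = 50
w = rng.uniform(0.4, 0.6, n_pf)                   # parallel fiber -> Purkinje weights

def step(pf_rates, cf_error, ltd=0.02, ltp=0.002):
    """One trial: error-gated LTD, slow LTP, and the resulting Purkinje rate."""
    w[:] = w - ltd * cf_error * pf_rates          # climbing-fiber-gated depression
    w[:] = w + ltp * (1.0 - cf_error) * pf_rates  # slow recovery (potentiation)
    np.clip(w, 0.0, 1.0, out=w)
    return w @ pf_rates                           # simple rate-model PC output

# Toy acquisition/extinction: a fixed "context" drives the PFs; the error
# (climbing fiber) is present during acquisition trials only.
context = rng.uniform(0.0, 1.0, n_pf)
for trial in range(100):
    cf = 1.0 if trial < 60 else 0.0               # acquisition, then extinction
    pc_rate = step(context, cf)
    if trial in (0, 59, 99):
        print(f"trial {trial:3d}: PC rate {pc_rate:.2f}")
```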

    Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open question how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in a sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach to understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
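
    The core of the BCPNN rule can be sketched in a few lines: exponentially decaying traces estimate unit and pairwise activation probabilities, and weights and biases are set to their log-odds. The version below is a simplified rate-based stand-in; the paper's spike-based rule uses additional cascaded traces, and the asymmetric associations needed for sequence order require delayed traces that this symmetric sketch omits.

```python
# Simplified rate-based BCPNN-style update; an illustrative sketch, not the
# paper's full spike-based implementation.
import numpy as np

n = 4                                    # units
eps = 1e-4                               # keeps the logarithms finite
tau_p = 100.0                            # probability-trace time constant (steps)
pi = np.full(n, 1.0 / n)                 # running estimate of P(unit i active)
pij = np.full((n, n), 1.0 / n ** 2)      # running estimate of P(i and j coactive)

def bcpnn_update(act):
    """act: activations in [0, 1] for one step; returns weights and biases."""
    k = 1.0 / tau_p
    pi[:] = pi + k * (act - pi)                         # low-pass unit probabilities
    pij[:] = pij + k * (np.outer(act, act) - pij)       # low-pass coactivations
    w = np.log((pij + eps) / (np.outer(pi, pi) + eps))  # Bayesian weights (log-odds)
    bias = np.log(pi + eps)                             # intrinsic excitability
    return w, bias

# Drive units 0 -> 1 -> 2 -> 3 cyclically and inspect the learned weights.
for t in range(1000):
    act = np.zeros(n)
    act[t % n] = 1.0
    w, bias = bcpnn_update(act)
print(np.round(w, 2))
```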

    A systematic method for configuring VLSI networks of spiking neurons

    Neftci E, Chicca E, Indiveri G, Douglas RJ. A systematic method for configuring VLSI networks of spiking neurons. Neural Computation. 2011;23(10):2457-2497.
    An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brain-like real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brain-like tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter-mapping technique that permits automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike-timing statistics and temporal dynamics that are the same as those observed in the software-simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., those implementing soft winner-take-all) can be precisely tuned. The proposed method permits seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most important, our method offers a route toward a high-level task-configuration language for neuromorphic VLSI systems.
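
    The parameter-mapping idea can be illustrated with a toy calibration loop: sweep a bias, measure the resulting neuron behavior, fit a simple transfer function, and invert it to find the bias that realizes a target taken from the software simulation. The "chip" below is a simulated stand-in, and the log-linear transfer function, names, and values are assumptions for illustration only.

```python
# Toy version of calibrating and inverting a bias -> behavior map
# (an illustrative sketch; the paper maps transistor biases to the
# parameters of VLSI neurons against software simulations).
import numpy as np

rng = np.random.default_rng(2)

def chip_rate(bias):
    """Pretend on-chip measurement: firing rate vs. bias, with noise."""
    return 40.0 * np.log1p(bias / 0.1) + rng.normal(0.0, 1.0)

# 1) Calibrate: sweep the bias and record the measured rates.
biases = np.linspace(0.05, 2.0, 20)
rates = np.array([chip_rate(bv) for bv in biases])

# 2) Fit rate = alpha * log1p(bias / beta) with a coarse grid search on beta.
best = None
for beta in np.linspace(0.01, 0.5, 50):
    x = np.log1p(biases / beta)
    alpha = (x @ rates) / (x @ x)            # least-squares slope for this beta
    err = float(np.sum((rates - alpha * x) ** 2))
    if best is None or err < best[0]:
        best = (err, alpha, beta)
_, alpha, beta = best

# 3) Invert the fitted map to configure the "chip" from a simulation target.
def bias_for_rate(target_rate):
    return beta * np.expm1(target_rate / alpha)

target = 80.0                                 # desired rate from the simulation
bias = bias_for_rate(target)
print(f"bias {bias:.3f} -> measured {chip_rate(bias):.1f} Hz (target {target} Hz)")
```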

    Polymer analysis by thermofractography

    No full text