
    Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity

    In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become functionally specific, to some degree, only later in development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity driven by visual experience, which would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, in which the receptive fields of neurons were essentially determined by feedforward mechanisms and the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
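    As a rough illustration of the mechanism described in this abstract, the sketch below (a minimal toy, not the paper's balanced spiking model) shows how a plain Hebbian co-activity rule turns initially unspecific recurrent weights into orientation-specific ones once the units already carry tuning; the tuning curve, learning rate, and weight bounds are all assumptions made for the example.

```python
# Minimal sketch, assuming rate units with pre-existing orientation tuning;
# not the paper's balanced spiking network. Hebbian co-activity updates
# make weights between similarly tuned units grow fastest.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                      # excitatory units
theta = rng.uniform(0, np.pi, N)             # preferred orientations
W = rng.uniform(0, 0.1, (N, N))              # initially unspecific weights
np.fill_diagonal(W, 0.0)
eta, w_max = 1e-3, 1.0

for trial in range(2000):
    stim = rng.uniform(0, np.pi)                        # oriented stimulus
    d = (theta - stim + np.pi / 2) % np.pi - np.pi / 2  # circular orientation distance
    r = np.exp(-d**2 / 0.1)                             # toy tuning-curve response
    W += eta * np.outer(r, r)                           # Hebbian co-activity update
    np.fill_diagonal(W, 0.0)
    np.clip(W, 0.0, w_max, out=W)                       # hard bounds stand in for homeostasis
```

    Because the update is symmetric in pre- and postsynaptic activity, reciprocal connections between similarly tuned pairs grow together, which is the intuition behind the surplus of bidirectional connections reported above.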

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that these approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus, nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
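    The principle itself is compact: the weight update is dw ∝ f(w·x) x together with a norm constraint. Below is a minimal sketch under stated assumptions (a cubic nonlinearity as one arbitrary choice of f, and Gaussian noise standing in for whitened natural image patches).

```python
# Minimal sketch of nonlinear Hebbian learning: dw ∝ f(w·x) x, with a
# norm constraint on w. The cubic f and Gaussian stand-in data are
# assumptions; the paper's point is robustness to the choice of f.
import numpy as np

rng = np.random.default_rng(1)
D, eta = 64, 1e-2                       # e.g. an 8x8 input patch, learning rate
X = rng.standard_normal((10_000, D))    # stand-in for whitened natural image patches
w = rng.standard_normal(D)
w /= np.linalg.norm(w)

f = lambda y: y**3                      # one arbitrary choice of nonlinearity

for x in X:
    y = w @ x                           # linear response
    w += eta * f(y) * x                 # nonlinear Hebbian update
    w /= np.linalg.norm(w)              # normalization keeps the rule stable
```

    On real whitened image patches this procedure converges to localized, Gabor-like receptive fields; the abstract's point is that the input statistics, rather than the particular f, dominate the resulting shape.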

    A synaptic learning rule for exploiting nonlinear dendritic computation

    Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
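    A heavily simplified sketch can convey the flavour of such a rule (this is not the paper's cable-theory derivation; the branch nonlinearity, voltage gate, and toy task are all assumptions): an error signal updates synapses in proportion to presynaptic activity and a gate on the local dendritic voltage, driving coactive synapses toward the branch's nonlinear regime.

```python
# Toy sketch of a voltage- and error-gated dendritic learning rule.
# The sigmoidal branch nonlinearity, gating width, and feature-binding
# task are illustrative assumptions, not the paper's derived rule.
import numpy as np

rng = np.random.default_rng(2)
S, eta = 50, 5e-3                             # synapses on one branch, learning rate
w = rng.uniform(0.0, 0.1, S)

def branch_voltage(x, w):
    drive = w @ x
    boost = 2.0 / (1.0 + np.exp(-4.0 * (drive - 1.5)))  # sigmoidal dendritic nonlinearity
    return drive + boost

for step in range(20_000):
    x = (rng.random(S) < 0.2).astype(float)   # binary presynaptic pattern
    target = float(x[:10].sum() >= 3)         # toy nonlinear feature-binding target
    v = branch_voltage(x, w)
    y = 1.0 / (1.0 + np.exp(-(v - 2.5)))      # somatic readout
    gate = np.exp(-0.125 * (v - 2.0) ** 2)    # plasticity strongest near the
                                              # voltage range of the nonlinearity
    w += eta * (target - y) * gate * x        # error- and voltage-gated update
    np.clip(w, 0.0, 1.0, out=w)
```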

    Reinforcement Learning in a Spiking Neural Model of Striatum Plasticity

    The basal ganglia (BG), and more specifically the striatum, have long been proposed to play an essential role in action selection based on a reinforcement learning (RL) paradigm. However, some recent findings, such as striatal spike-timing-dependent plasticity (STDP) or striatal lateral connectivity, require further research and modelling, as their respective roles are still not well understood. Theoretical models of spiking neurons with homeostatic mechanisms, lateral connectivity, and reward-modulated STDP have demonstrated a remarkable capability to learn sensory patterns that statistically correlate with a reward signal. In this article, we implement a functional and biologically inspired network model of the striatum, where learning is based on a previously proposed learning rule called spike-timing-dependent eligibility (STDE), which captures important experimental features of the striatum. The proposed computational model can recognize complex input patterns and consistently choose rewarded actions in response to such sensory inputs. Moreover, we assess the roles that different neuronal and network features, such as homeostatic mechanisms and lateral inhibitory connections, play in action selection with the proposed model. The homeostatic mechanisms make learning more robust (in terms of suitable parameters) and facilitate recovery after the rewarded policy is swapped, while lateral inhibitory connections are important when multiple input patterns are associated with the same rewarded action. Finally, according to our simulations, the optimal delay between the action and the dopaminergic feedback is around 300 ms, consistent with previous studies of RL and with biological studies.
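    The general mechanism behind rules like STDE can be sketched as reward-modulated STDP with an eligibility trace, which also makes a ~300 ms action-feedback delay natural: the pairing is remembered long enough for delayed dopamine to convert it into a weight change. The single-synapse sketch below uses illustrative time constants and amplitudes, not the paper's exact STDE rule.

```python
# Sketch of reward-modulated STDP with an eligibility trace: pre/post
# pairings write to a decaying trace, and delayed dopaminergic reward
# converts the remembered trace into a weight change.
import numpy as np

tau_e, a_plus, a_minus = 0.3, 1.0, -0.6   # trace time constant (s), STDP amplitudes
dt, eta = 0.001, 1e-2                     # simulation step (s), learning rate
w, elig = 0.5, 0.0

def stdp(dt_pre_post):
    # Classic exponential STDP window: potentiate pre-before-post pairings.
    return a_plus * np.exp(-dt_pre_post / 0.02) if dt_pre_post > 0 \
        else a_minus * np.exp(dt_pre_post / 0.02)

for t in np.arange(0.0, 1.0, dt):
    elig *= np.exp(-dt / tau_e)           # eligibility decays continuously
    if abs(t - 0.1) < dt / 2:             # a pairing occurs at t = 0.1 s:
        elig += stdp(0.01)                # post fires 10 ms after pre
    if abs(t - 0.4) < dt / 2:             # dopamine arrives ~300 ms later
        w += eta * 1.0 * elig             # reward x eligibility -> weight change

print(f"final weight: {w:.4f}")
```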

    Learning in clustered spiking networks

    Neurons spike on a millisecond time scale, while behaviour typically spans hundreds of milliseconds to seconds and longer. Neurons have to bridge this time gap when computing and learning behaviours of interest. Recent computational work has shown that neural circuits can bridge this gap when connected in specific ways, and that the required connectivity patterns can develop through plasticity rules typically considered biologically plausible. In this thesis, we focus on one type of connectivity in which excitatory neurons are grouped in clusters. Strong recurrent connectivity within the clusters reverberates the activity and prolongs the time scales in the network. In this way, the clusters of neurons become the basic functional units of the circuit, in line with a growing number of experimental studies. We study a general architecture in which plastic synapses connect the clustered network to a read-out network, and demonstrate its usefulness for two different problems: 1) learning and replaying sequences; 2) learning statistical structure. The time scales in both problems range from hundreds of milliseconds to seconds, and we address the problems through simulation and analysis of spiking networks. We show that the clustered organization circumvents the need for biologically implausible mathematical optimizations and instead allows the use of unsupervised spike-timing-dependent plasticity rules. Additionally, we make qualitative links to experimental findings and offer predictions for both problems studied. Finally, we speculate about future directions that could extend our findings.
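    The clustered architecture itself is easy to write down. Below is a minimal sketch of an excitatory weight matrix with stronger within-cluster than between-cluster coupling; the connection probability and weights are placeholder values, not the thesis's parameters.

```python
# Illustrative sketch of a clustered excitatory weight matrix: neurons
# in the same cluster are coupled more strongly than neurons in
# different clusters. All values are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_clusters, per_cluster = 10, 40
N = n_clusters * per_cluster
p, j_in, j_out = 0.2, 0.5, 0.1          # connection prob., within/between strengths

labels = np.repeat(np.arange(n_clusters), per_cluster)   # cluster membership
same = labels[:, None] == labels[None, :]                # same-cluster mask
mask = rng.random((N, N)) < p                            # sparse random connectivity
W = np.where(same, j_in, j_out) * mask
np.fill_diagonal(W, 0.0)
```

    Strong within-cluster recurrence lets activity reverberate for hundreds of milliseconds, and it is on these slow cluster states, rather than on individual spikes, that a plastic read-out network can learn with unsupervised STDP.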

    Bio-mimetic Spiking Neural Networks for unsupervised clustering of spatio-temporal data

    Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model's performance on the clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than either approach alone. The choice of fitness definition affects the network's performance on both fitness-related and unrelated tasks. We found that neuron type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness. This interdisciplinary work contributes to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
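    One concrete finding above, neuron type-specific weight homeostasis, can be sketched as a multiplicative renormalisation of each neuron's total incoming weight toward a type-dependent target; the targets, rates, and noise model below are illustrative assumptions rather than the thesis's actual mechanism.

```python
# Sketch of neuron type-specific weight homeostasis: each neuron
# multiplicatively rescales its incoming weights toward a target total
# that depends on its type, stabilising long training runs.
import numpy as np

rng = np.random.default_rng(4)
N = 100
is_inhibitory = rng.random(N) < 0.2
target = np.where(is_inhibitory, 3.0, 5.0)   # type-specific target input weight sums
W = rng.uniform(0, 0.2, (N, N))              # W[i, j]: weight from neuron j onto i

def homeostasis(W, target, rate=0.1):
    totals = W.sum(axis=1, keepdims=True)    # total incoming weight per neuron
    scale = 1.0 + rate * (target[:, None] / np.maximum(totals, 1e-9) - 1.0)
    return W * scale                         # gentle multiplicative renormalisation

for _ in range(100):
    W += rng.normal(0, 0.01, (N, N))         # stand-in for ongoing plasticity updates
    np.clip(W, 0.0, None, out=W)
    W = homeostasis(W, target)
```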

    The role of excitation and inhibition in learning and memory formation

    The neurons in the mammalian brain can be classified into two broad categories: excitatory and inhibitory neurons. The former have historically been associated with information processing, whereas the latter have been linked to network homeostasis. More recently, inhibitory neurons have been related to several computational roles, such as the gating of signal propagation, the mediation of network competition, and learning. However, the ways in which excitation and inhibition can regulate learning have not been exhaustively explored. Here we explore several model systems to investigate the role of excitation and inhibition in learning and memory formation. Additionally, we investigate the effect that third factors, such as neuromodulators and network state, exert on this process. Firstly, we explore the effect of neuromodulators on excitatory neurons and excitatory plasticity. Next, we investigate the plasticity rules governing excitatory connections while the neural network oscillates in a sleep-like cycle, shifting between Up and Down states. We observe that this plasticity rule depends on the state of the network. To study the role of inhibitory neurons in learning, we then investigate the mechanisms underlying place field emergence and consolidation. Our simulations suggest that dendrite-targeting interneurons play an important role both in promoting the emergence of new place fields and in ensuring place field stabilization. Soma-targeting interneurons, on the other hand, appear to be related to quick, context-specific changes in the assignment of place and silent cells. We next investigate the mechanisms underlying the plasticity of synaptic connections from specific types of interneurons. Our experiments suggest that different types of interneurons undergo different synaptic plasticity rules. Using a computational model, we implement these plasticity rules in a simplified network. Our simulations indicate that the interaction between the different forms of plasticity accounts for the development of stable place fields across multiple environments. Moreover, these plasticity rules seem to be gated by the postsynaptic membrane voltage. Inspired by these findings, we propose a voltage-based inhibitory synaptic plasticity rule. As a consequence of this rule, network activity is kept under control by the imposition of a maximum pyramidal-cell firing rate; remarkably, the rule does not constrain the postsynaptic firing rate to a narrow range. Overall, through multiple stages of interaction between experiments and computational simulations, we investigate the effect of excitation and inhibition on learning. We propose mechanistic explanations for experimental data, and suggest possible functional implications of experimental findings. Finally, we propose the voltage-based inhibitory synaptic plasticity model as a mechanism for flexible network homeostasis.
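    A minimal sketch of a voltage-based inhibitory plasticity rule in the spirit described above (its exact form, threshold, and rates are assumptions): presynaptic inhibitory spikes potentiate the synapse when the postsynaptic voltage exceeds a threshold near spiking and depress it slightly otherwise, so inhibition grows only enough to cap firing without pinning it to a narrow range.

```python
# Sketch of voltage-gated inhibitory plasticity: potentiation when the
# postsynaptic cell is near spike threshold, mild depression otherwise.
# Threshold, rates, and the voltage model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
v_theta = -50.0                            # voltage threshold gating potentiation (mV)
eta, w = 0.01, 1.0                         # learning rate, inhibitory weight

for step in range(10_000):
    v_post = rng.normal(-60.0, 8.0)        # stand-in for sampled membrane voltage
    pre_spike = rng.random() < 0.05        # presynaptic interneuron spike
    if pre_spike:
        if v_post > v_theta:
            w += eta * (v_post - v_theta)  # strengthen inhibition near threshold
        else:
            w -= eta * 0.1                 # mild depression otherwise
        w = max(w, 0.0)
```

    Under such a rule, inhibition increases only when the pyramidal cell approaches its firing ceiling, leaving lower firing rates unconstrained, as the abstract describes.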

    Memristance can explain Spike-Time-Dependent-Plasticity in Neural Synapses

    Interdisciplinary research broadens the view of particular problems, yielding fresh and possibly unexpected insights. This is the case in neuromorphic engineering, where technology and neuroscience cross-fertilize each other. For example, consider, on one side, the memristor, postulated in 1971 and recently discovered thanks to research in nanotechnology electronics. On the other side, consider the mechanism known as Spike-Time-Dependent-Plasticity (STDP), which describes a neuronal synaptic learning mechanism that outperforms the traditional Hebbian synaptic plasticity proposed in 1949. STDP was originally postulated as a computer learning algorithm, and is being used by the machine intelligence and computational neuroscience community. At the same time, its biological and physiological foundations have been reasonably well established during the past decade. If memristance and STDP can be related, then (a) recent discoveries in nanophysics and nanoelectronic principles may shed new light on the intricate molecular and physiological mechanisms behind STDP in neuroscience, and (b) new neuromorphic-like computers built out of nanotechnology memristive devices could incorporate the biological STDP mechanisms, yielding a new generation of self-adaptive ultra-high-density intelligent machines. Here we show that by combining memristance models with the electrical wave signals of neural impulses (spikes) converging from pre- and post-synaptic neurons into a synaptic junction, STDP behavior emerges naturally. This result helps explain how neural and memristance parameters modulate STDP, which might give neurophysiologists new insights in the search for the ultimate physiological mechanisms responsible for STDP in biological synapses. At the same time, this result also provides a direct means to incorporate STDP learning mechanisms into a new generation of nanotechnology computers employing memristors.
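    The mechanism can be sketched numerically (the waveform shape, threshold, and scaling below are assumptions, not the paper's device model): each neuron's spike waveform appears on its side of the synaptic junction, and the memristive conductance changes only when the instantaneous voltage across the device exceeds a threshold, which happens precisely when one spike's peak overlaps the other's after-hyperpolarizing tail.

```python
# Sketch: STDP emerging from a threshold-type memristive device driven
# by overlapping pre- and postsynaptic spike waveforms. Waveform shape,
# threshold v_th, and scaling k are illustrative assumptions.
import numpy as np

def spike_waveform(t, t_spike):
    # Toy action potential: brief positive peak, slow decaying negative tail.
    s = t - t_spike
    peak = np.where((s >= 0) & (s < 1e-3), 1.0, 0.0)
    tail = np.where(s >= 1e-3, -0.4 * np.exp(-(s - 1e-3) / 10e-3), 0.0)
    return peak + tail

def delta_w(dt_post_minus_pre, v_th=1.1, k=1e-5):
    t = np.arange(-0.05, 0.05, 1e-5)                    # time grid (s)
    v = spike_waveform(t, dt_post_minus_pre) - spike_waveform(t, 0.0)  # voltage across device
    over = np.clip(np.abs(v) - v_th, 0.0, None) * np.sign(v)  # supra-threshold drive only
    return k * over.sum()                               # integrated conductance change

for dt in (-10e-3, -5e-3, 5e-3, 10e-3):
    print(f"post - pre = {dt * 1e3:+.0f} ms  ->  dw = {delta_w(dt):+.2e}")
```

    Sweeping the spike-timing difference traces out an STDP-like window (potentiation for pre-before-post, depression for the reverse) whose shape is set by the spike waveform, which is how neural and memristance parameters modulate STDP in this picture.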