
    Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity

    In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening and become functionally specific, to some degree, only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience, which would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, in which the receptive fields of neurons were essentially determined by feedforward mechanisms and the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
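
    The core mechanism can be illustrated with a minimal rate-based sketch in Python: neurons with fixed orientation preferences are driven by randomly oriented stimuli, and a Hebbian update with multiplicative normalization gradually strengthens connections between similarly tuned cells. All parameters and the rate-based reduction are illustrative assumptions, not the paper's full spiking model of excitatory and inhibitory populations.

```python
# Minimal sketch: initial orientation selectivity guides Hebbian learning
# toward feature-specific connectivity (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                           # number of neurons
pref = np.linspace(0, np.pi, n, endpoint=False)   # preferred orientations
w = rng.uniform(0.0, 0.1, (n, n))                 # initially unspecific weights
np.fill_diagonal(w, 0.0)
eta = 1e-3                                        # learning rate

for _ in range(2000):
    theta = rng.uniform(0.0, np.pi)               # stimulus orientation
    r = np.exp(2.0 * (np.cos(2.0 * (pref - theta)) - 1.0))  # tuned rates
    w += eta * np.outer(r, r)                     # Hebbian: co-active pairs grow
    np.fill_diagonal(w, 0.0)
    w *= (0.1 * n) / w.sum(axis=1, keepdims=True) # normalization bounds weights

# After learning, weights correlate with tuning similarity:
dpref = np.abs(pref[:, None] - pref[None, :])
dpref = np.minimum(dpref, np.pi - dpref)          # circular orientation distance
print(np.corrcoef(-dpref.ravel(), w.ravel())[0, 1])  # positive correlation
```

    The normalization step stands in for the stabilizing role that inhibition and homeostasis play in the full model; without it, pure Hebbian growth is unstable.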

    Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons

    Advances in neural recording methods enable sampling from populations of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to theoretical models of the computations underlying performance. In the context of decision making in rodents, patterns of functional connectivity between choice-selective cortical neurons, as well as broadly distributed choice information in both excitatory and inhibitory populations, were recently reported [1]. The straightforward interpretation of these data suggests a mechanism relying on specific patterns of anatomical connectivity to achieve selective pools of inhibitory as well as excitatory neurons. We investigate an alternative mechanism for the emergence of these experimental observations using a computational approach. We find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and indistinguishable excitatory and inhibitory readout weight distributions, as observed in recorded neural populations. Furthermore, we make the readily verifiable experimental predictions that, for this type of evidence-accumulation task, there are no anatomically defined sub-populations of neurons representing choice, and that the choice preference of a particular neuron changes with the details of the task. This work suggests that distributed stimulus selectivity and patterns of functional organization in population codes could be emergent properties of randomly connected networks.
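
    A toy version of this argument can be sketched as follows; the network sizes, gain, and input structure are assumptions for illustration, not the paper's model. A randomly connected E/I rate network receives a scalar "choice" input through random feedforward weights; single-cell selectivity then appears in both populations, and a least-squares readout yields excitatory and inhibitory weight distributions of similar spread.

```python
# Sketch: random E/I connectivity alone yields distributed choice
# selectivity and similar E/I readout weights (assumed toy parameters).
import numpy as np

rng = np.random.default_rng(1)
n_e, n_i = 400, 100
n = n_e + n_i
g = 0.5
J = np.abs(rng.normal(0.0, 1.0, (n, n))) * g / np.sqrt(n)  # random weights
J[:, n_e:] *= -n_e / n_i        # inhibition scaled so mean E/I input balances
b = rng.normal(0.0, 1.0, n)     # random feedforward input weights

def steady_rates(stim, steps=300, dt=0.1):
    """Relax x' = -x + J tanh(x) + b*stim to a fixed point."""
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x) + b * stim)
    return np.tanh(x)

# Noisy "trials" with a binary choice signal:
X, y = [], []
for _ in range(200):
    s = rng.choice([-1.0, 1.0])
    X.append(steady_rates(s + 0.3 * rng.normal()))
    y.append(s)
X, y = np.array(X), np.array(y)

# Choice selectivity is distributed over E and I cells alike:
selectivity = X[y > 0].mean(axis=0) - X[y < 0].mean(axis=0)
print("selectivity spread, E vs I:", selectivity[:n_e].std(), selectivity[n_e:].std())

# Least-squares readout: E and I weight distributions look similar.
w_read, *_ = np.linalg.lstsq(X, y, rcond=None)
print("readout weight spread, E vs I:", w_read[:n_e].std(), w_read[n_e:].std())
```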

    Information content in continuous attractor neural networks is preserved in the presence of moderate disordered background connectivity

    Continuous attractor neural networks (CANN) form an appealing conceptual model for the storage of information in the brain. However, a drawback of CANN is that they require finely tuned interactions. Here we study the effect of quenched noise in the interactions on the coding of positional information within CANN. Using the replica method, we compute the Fisher information for a network with position-dependent input and recurrent connections composed of a short-range (in space) component and a disordered component. We find that the loss of positional information is small as long as the disorder strength is not too large, indicating that CANN have a regime in which the advantageous effects of local connectivity on information storage outweigh the detrimental ones. Furthermore, a substantial part of this information can be extracted with a simple linear readout.
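
    The setting can be sketched numerically as follows; this is an assumed toy ring model, whereas the paper's results are analytical, obtained via the replica method. A ring network with a short-range Gaussian kernel plus quenched Gaussian disorder is driven by position-dependent input, and a population-vector readout (one simple linear decoder) still recovers the stimulus position for moderate disorder.

```python
# Sketch: ring attractor with short-range plus disordered connectivity;
# a linear population-vector readout recovers position (toy parameters).
import numpy as np

rng = np.random.default_rng(2)
n = 256
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # circular distance
J0 = np.exp(-d**2 / (2 * 0.3**2)) / n                 # short-range component
sigma = 0.5                                           # disorder strength
J = J0 + sigma * rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)  # quenched disorder

def bump(stim_pos, steps=300, dt=0.1):
    """Relax rectified-linear rates under tuned input plus recurrence."""
    h = 0.5 * np.exp(np.cos(theta - stim_pos) - 1.0)  # position-dependent drive
    r = np.zeros(n)
    for _ in range(steps):
        r += dt * (-r + np.maximum(J @ r + h, 0.0))
    return r

r = bump(0.7)
# Population-vector readout of position:
est = np.angle(np.sum(r * np.exp(1j * theta)))
print(est)   # close to 0.7 for moderate disorder strength
```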

    Event-Based Update of Synapses in Voltage-Based Learning Rules

    Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor, in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders the scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze their advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs strongly between the rules, a large performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving information provide guidelines for the design of learning rules intended to be practically usable in large-scale networks.
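
    The event-based idea can be sketched as follows; the class names and the toy plasticity rule below are illustrative assumptions, not NEST's actual API or the Clopath/Urbanczik-Senn rules. The postsynaptic neuron archives its membrane potential in the time-driven part of the simulation, and a synapse reads and consumes the archived trace only when a presynaptic spike arrives, so the weight is advanced in a single event-based step.

```python
# Sketch of an event-based synapse update for a voltage-dependent rule
# (hypothetical names and a toy rule; not a real simulator's interface).
import math

class PostNeuron:
    """Records (time, V_m) samples: the archive a voltage-based rule
    needs, since weight changes depend on the full voltage course."""
    def __init__(self):
        self.archive = []

    def step(self, t, v_m):
        self.archive.append((t, v_m))     # time-driven part: record V_m

class Synapse:
    def __init__(self, w=0.5, eta=1e-3, theta=-55.0):
        self.w, self.eta, self.theta = w, eta, theta
        self.last_update = 0.0

    def on_pre_spike(self, t, post, dt=0.1):
        # Event-based update: consume the archived trace since the last
        # update and advance the weight once. Toy rule: potentiate in
        # proportion to the time V_m spent above the threshold theta.
        trace = [v for s, v in post.archive if self.last_update < s <= t]
        self.w += self.eta * dt * sum(v - self.theta for v in trace if v > self.theta)
        self.last_update = t
        # In a real simulator, entries read by all target synapses would
        # now be released to bound the archive's memory footprint.
        post.archive = [(s, v) for s, v in post.archive if s > t]

post, syn = PostNeuron(), Synapse()
for k in range(1000):                     # 100 ms at 0.1 ms resolution
    t = 0.1 * (k + 1)
    post.step(t, -60.0 + 10.0 * math.sin(0.05 * t))   # fake voltage trace
    if k in (400, 900):                   # presynaptic spikes at 40 and 90 ms
        syn.on_pre_spike(t, post)
print(syn.w)
```

    Compressing or subsampling the archived trace, as the paper proposes, would shrink `post.archive` between spikes at the cost of approximating the integral in `on_pre_spike`.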