
    Neural coding strategies and mechanisms of competition

    A long-running debate concerns whether neural representations are encoded using a distributed or a local coding scheme. In both schemes, individual neurons respond to certain specific patterns of pre-synaptic activity. Hence, rather than being dichotomous, both coding schemes are based on the same representational mechanism. We argue that a population of neurons needs to be capable of learning both local and distributed representations, as appropriate to the task, and should be capable of generating both local and distributed codes in response to different stimuli. Many neural network algorithms, which are often employed as models of cognitive processes, fail to meet all these requirements. In contrast, we present a neural network architecture which enables a single algorithm to efficiently learn, and respond using, both types of coding scheme.
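
    As a minimal illustration of the distinction (a toy example, not the paper's model; all activity values are invented), the same small population can carry either type of code:

```python
# Local vs. distributed coding over one population of four neurons.
import numpy as np

stimuli = ["A", "B", "C", "D"]

# Local code: each stimulus activates exactly one neuron (one-hot).
local_code = np.eye(4)

# Distributed code: each stimulus evokes a graded pattern of activity,
# and each neuron takes part in representing several stimuli.
distributed_code = np.array([
    [0.9, 0.6, 0.1, 0.0],
    [0.1, 0.8, 0.7, 0.2],
    [0.0, 0.2, 0.9, 0.6],
    [0.5, 0.0, 0.3, 0.8],
])

# In both schemes each neuron fires to a specific pattern of pre-synaptic
# activity, so the underlying representational mechanism is the same.
b = stimuli.index("B")
print("local code for B:      ", local_code[b])
print("distributed code for B:", distributed_code[b])
```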

    Pre-integration lateral inhibition enhances unsupervised learning

    A large and influential class of neural network architectures uses post-integration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through pre-integration lateral inhibition, does provide appropriate coding properties and can be used to efficiently learn such representations. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that pre-integration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
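
    The core idea can be sketched in a few lines (a simplified reading of pre-integration lateral inhibition, not the paper's exact activation or learning equations; weights and parameters here are invented):

```python
# Pre-integration lateral inhibition: each node inhibits its rivals'
# INPUTS in proportion to how strongly it claims them, so nodes compete
# for the right to receive inputs, not for the right to fire.
import numpy as np

def pre_integration_step(x, W, y, alpha=1.0):
    """One settling step. x: inputs, W: (nodes, inputs) weights, y: outputs."""
    n_nodes = W.shape[0]
    y_new = np.zeros(n_nodes)
    for j in range(n_nodes):
        rivals = np.delete(np.arange(n_nodes), j)
        # Strongest claim any rival makes on each input line.
        claim = (W[rivals] * y[rivals, None]).max(axis=0)
        x_eff = np.maximum(0.0, x - alpha * claim)  # inhibit before...
        y_new[j] = W[j] @ x_eff                     # ...integration
    return y_new

rng = np.random.default_rng(0)
W, x, y = rng.random((3, 5)), rng.random(5), np.zeros(3)
for _ in range(10):            # iterate until the competition settles
    y = pre_integration_step(x, W, y)
print(y)
```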

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to describe the network activity based on spike times alone, without reference to the membrane-potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for classical cases, such as linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in event-based descriptions of deterministic spiking neural networks.
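
    The flavour of the event-based description can be sketched as follows (a toy sketch, not the paper's derivation: the interspike-interval model and all parameters below are placeholders):

```python
# Event-based simulation: the state is just the pending spike times, and
# each neuron's next spike is drawn from an ISI distribution whose rate
# depends on the spikes it has received; the (time, neuron) sequence is
# then a Markov chain over spike events, with no membrane-potential state.
import heapq
import numpy as np

rng = np.random.default_rng(1)
n = 4
W = rng.normal(0.0, 0.5, size=(n, n))  # signed synaptic weights
base_rate = 1.0
drive = np.zeros(n)                    # accumulated synaptic drive

def sample_isi(i):
    # Placeholder ISI law: exponential with an input-dependent rate.
    rate = max(1e-3, base_rate + drive[i])
    return rng.exponential(1.0 / rate)

queue = [(sample_isi(i), i) for i in range(n)]  # one pending spike each
heapq.heapify(queue)

for _ in range(20):
    t, i = heapq.heappop(queue)        # next event = one Markov transition
    drive += W[:, i]                   # the spike of i perturbs all rates
    heapq.heappush(queue, (t + sample_isi(i), i))
    print(f"t={t:.3f}  neuron {i} spikes")
```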

    Dendritic inhibition enhances neural coding properties.

    The presence of a large number of inhibitory contacts at the soma and axon initial segment of cortical pyramidal cells has inspired a large and influential class of neural network models that use post-integration lateral inhibition as a mechanism for competition between nodes. However, inhibitory synapses also target the dendrites of pyramidal cells. The role of this dendritic inhibition in competition between neurons has not previously been addressed. We demonstrate, using a simple computational model, that such pre-integration lateral inhibition provides networks of neurons with useful representational and computational properties that are not provided by post-integration inhibition.
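
    For contrast with the pre-integration sketch above, the conventional post-integration scheme integrates all inputs first and lets competition act only on the resulting outputs (again a toy sketch with invented parameters):

```python
# Post-integration lateral inhibition: full integration, then output
# competition (here a soft winner-take-all via softmax normalisation).
import numpy as np

def post_integration(x, W, beta=5.0):
    a = W @ x                        # every node integrates ALL its inputs
    e = np.exp(beta * (a - a.max()))
    return e / e.sum()               # competition acts on outputs only

# A node cannot selectively "release" inputs it does not account for,
# which is the limitation that dendritic (pre-integration) inhibition
# is argued to overcome.
rng = np.random.default_rng(2)
print(post_integration(rng.random(5), rng.random((3, 5))))
```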

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of spikes. We give an overview of the simulators and simulation environments presently available (restricted to those that are freely available, open source, and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the code is made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks. (Review article; Journal of Computational Neuroscience, in press, 2007.)
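
    The two integration strategies the review compares can be illustrated for a single leaky integrate-and-fire neuron (a minimal sketch; all parameters are arbitrary):

```python
import numpy as np

tau, v_th, w = 20.0, 1.0, 0.4   # membrane time constant, threshold, weight

def clock_driven(input_times, dt=0.1, t_end=100.0):
    """Advance the state on a fixed grid; spike times are only resolved
    to within one step dt (the precision issue discussed in the review)."""
    v, spikes, pending = 0.0, [], sorted(input_times)
    for step in range(int(t_end / dt)):
        t = step * dt
        v += dt * (-v / tau)                 # forward-Euler leak
        while pending and pending[0] <= t:   # deliver due input spikes
            v += w
            pending.pop(0)
        if v >= v_th:
            spikes.append(t)
            v = 0.0
    return spikes

def event_driven(input_times):
    """Jump between events using the exact solution of the leak,
    v(t) = v0 * exp(-dt/tau); for this simple model a spike can only be
    triggered at an input event, so spike times are exact."""
    v, t_last, spikes = 0.0, 0.0, []
    for t in sorted(input_times):
        v = v * np.exp(-(t - t_last) / tau) + w
        t_last = t
        if v >= v_th:
            spikes.append(t)
            v = 0.0
    return spikes

times = list(np.arange(1.0, 100.0, 3.0))
print(clock_driven(times)[:3], event_driven(times)[:3])
```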

    GeNN: a code generation framework for accelerated brain simulations

    Large-scale numerical simulations of detailed brain circuit models are important for developing hypotheses about brain function and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks in order to address this challenge. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface which does not require in-depth technical knowledge from its users. We present performance benchmarks showing that a 200-fold speedup compared to a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that the speedup can differ for other models. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, wiki, in-depth example projects and all other related information can be found on the project website: http://genn-team.github.io/genn/
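
    The code-generation idea can be illustrated with a toy generator (this is emphatically not the GeNN API; the model dictionary, function, and emitted kernel below are invented for the sketch):

```python
# Toy code generation: specialise a CUDA kernel to the user's neuron model
# at build time, so the inner update loop carries no interpretation cost.
NEURON_MODELS = {
    # Update rule as a C statement over V (membrane potential) and Isyn.
    "LIF": "V[i] += DT * ((-V[i] + Isyn[i]) / TAU);",
}

def generate_kernel(model, n_neurons, dt=0.1, tau=20.0):
    body = NEURON_MODELS[model]
    return f"""__global__ void update_neurons(float *V, const float *Isyn) {{
    const float DT = {dt}f, TAU = {tau}f;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < {n_neurons}) {{
        {body}  // model-specific update, inlined at generation time
    }}
}}"""

print(generate_kernel("LIF", 1_000_000))
```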

    Cortical region interactions and the functional role of apical dendrites

    The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways that are not possible for the conventional model of a neuron in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information-processing capacities of single neurons and, in particular, how information from different cortical regions could have distinct effects on neural activity and learning.
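
    One way such a two-compartment neuron could operate is a gain-modulation scheme, in which basal input determines what the neuron responds to and apical input scales how strongly it responds (a hedged sketch of one possibility discussed in this literature, with invented weights):

```python
import numpy as np

def two_compartment_unit(x_basal, x_apical, w_basal, w_apical):
    basal = max(0.0, float(np.dot(w_basal, x_basal)))    # driving input
    apical = max(0.0, float(np.dot(w_apical, x_apical))) # modulatory input
    # Apical activity amplifies, but cannot by itself cause, an output;
    # a point neuron summing all inputs equally cannot implement this.
    return basal * (1.0 + apical)

rng = np.random.default_rng(3)
print(two_compartment_unit(rng.random(4), rng.random(4),
                           rng.random(4), rng.random(4)))
```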

    A perspective on cortical layering and layer-spanning neuronal elements

    This review article addresses the function of the layers of the cerebral cortex. We develop the perspective that cortical layering needs to be understood in terms of its functional anatomy, i.e., the terminations of synaptic inputs on distinct cellular compartments and their effect on cortical activity. The cortex is a hierarchical structure in which feed-forward and feedback pathways have layer-specific termination patterns. We take the view that the influence of synaptic inputs arriving at different cortical layers can only be understood in terms of their complex interaction with cellular biophysics and the subsequent computation that occurs at the cellular level. We use high-resolution fMRI, which can resolve activity across layers, as a case study for implementing this approach, describing how cognitive events arising from the laminar distribution of inputs can be interpreted by taking into account the properties of neurons that span different layers. This perspective is based on recent advances in measuring subcellular activity in distinct feed-forward and feedback axons, and in dendrites, as they span the layers.