
    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model that employs neurons with functionally distinct dendritic compartments to classify high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, with each synapse taking a binary value. The optimal connection pattern of a neuron is learned with a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems, and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while using 10-50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation
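    As a rough sketch of this class of model (not the paper's exact algorithm), the Python snippet below wires a two-stage neuron with binary synapses and hill-climbs the connection pattern by keeping only margin-improving swaps. The squaring branch nonlinearity, the fixed threshold, and the acceptance rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x, conn):
    # Two-stage neuron: each dendritic branch sums the binary inputs it is
    # wired to and applies a lumped nonlinearity (squaring, a common stand-in
    # for the branch nonlinearity); the soma adds up the branch outputs.
    return float((x[conn].sum(axis=1) ** 2).sum())

def margin(X, y, conn, theta):
    # Worst-case signed distance to the decision threshold over the set.
    r = np.array([response(x, conn) for x in X])
    return float(((r - theta) * (2 * y - 1)).min())

def train(X, y, n_branches=8, syn_per_branch=4, steps=2000):
    d = X.shape[1]
    conn = rng.integers(0, d, size=(n_branches, syn_per_branch))
    theta = np.mean([response(x, conn) for x in X])     # decision threshold
    best = margin(X, y, conn, theta)
    for _ in range(steps):
        b = rng.integers(n_branches)                    # pick a synapse ...
        s = rng.integers(syn_per_branch)
        old, conn[b, s] = conn[b, s], rng.integers(d)   # ... propose a swap
        m = margin(X, y, conn, theta)
        if m > best:                  # keep only margin-improving rewirings
            best = m
        else:
            conn[b, s] = old
    return conn, theta

# Toy usage: class 1 requires input lines 0 and 1 to be co-active.
X = rng.integers(0, 2, size=(40, 16))
y = (X[:, 0] & X[:, 1]).astype(int)
conn, theta = train(X, y)
acc = np.mean([(response(x, conn) > theta) == bool(t) for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```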

    NMDA-based pattern discrimination in a modeled cortical neuron

    Compartmental simulations of an anatomically characterized cortical pyramidal cell were carried out to study the integrative behavior of a complex dendritic tree. Previous theoretical (Feldman and Ballard 1982; Durbin and Rumelhart 1989; Mel 1990; Mel and Koch 1990; Poggio and Girosi 1990) and compartmental modeling (Koch et al. 1983; Shepherd et al. 1985; Koch and Poggio 1987; Rall and Segev 1987; Shepherd and Brayton 1987; Shepherd et al. 1989; Brown et al. 1991) work had suggested that multiplicative interactions among groups of neighboring synapses could greatly enhance the processing power of a neuron relative to a unit with only a single global firing threshold. This issue was investigated here, with a particular focus on the role of voltage-dependent N-methyl-D-aspartate (NMDA) channels in the generation of cell responses. First, it was found that when a large proportion of the excitatory synaptic input to dendritic spines is carried by NMDA channels, the pyramidal cell responds preferentially to spatially clustered, rather than random, distributions of activated synapses. Second, based on this mechanism, the NMDA-rich neuron is shown to be capable of solving a nonlinear pattern discrimination task. We propose that manipulation of the spatial ordering of afferent synaptic connections onto the dendritic arbor is a possible biological strategy for pattern information storage during learning.
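    The clustering preference rests on the voltage dependence of the NMDA channel's magnesium block: depolarization produced by one synapse relieves the block at its neighbors. A minimal sketch using the standard Jahr and Stevens (1990) parameterization (conventional textbook constants, not values fitted in this paper):

```python
import numpy as np

def nmda_conductance(V_mV, g_max=1.0, mg_mM=1.0):
    # Voltage-dependent Mg2+ block of the NMDA channel (Jahr & Stevens 1990).
    # Depolarization relieves the block, so co-activated neighboring synapses
    # boost one another's current: the cooperativity behind the preference
    # for spatially clustered inputs.
    block = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * V_mV))
    return g_max * block

# The same number of active synapses yields more NMDA current when they
# share (and jointly depolarize) a branch than when they are scattered:
for V in (-70.0, -50.0, -30.0):
    print(f"local V = {V:5.1f} mV -> relative g_NMDA = {nmda_conductance(V):.3f}")
```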

    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, the state of the art in readout-stage performance, our readout architecture and learning algorithm attain better performance with significantly fewer synaptic resources, making the approach attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that compared to a single perceptron using analog weights, this readout architecture can attain, even using the same number of binary-valued synapses, up to 3.3 times less error on a two-class spike train classification problem and 2.4 times less error on an input rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations. Comment: 14 pages, 19 figures, Journal
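    A hedged sketch of the rewiring idea (in the spirit of NRW, not the paper's exact rule): each branch applies a lumped quadratic nonlinearity to the sum of its binary connections, and learning greedily re-routes connections to liquid units whenever a swap lowers the readout error. Branch sizes, the candidate count, and the quadratic nonlinearity are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def readout(R, conn):
    # R: (n_samples, n_liquid) liquid firing rates. Each branch sums the
    # liquid units it is wired to, a lumped quadratic nonlinearity is applied
    # per branch, and the soma sums the branch outputs.
    return (R[:, conn].sum(axis=2) ** 2).sum(axis=1)

def rewire(R, target, conn, sweeps=10, candidates=8):
    # One improving swap per sweep: try re-routing each connection to a few
    # randomly drawn liquid units and commit the single best change.
    mse = lambda c: float(((readout(R, c) - target) ** 2).mean())
    for _ in range(sweeps):
        best, swap = mse(conn), None
        for b in range(conn.shape[0]):
            for s in range(conn.shape[1]):
                old = conn[b, s]
                for cand in rng.choice(R.shape[1], size=candidates, replace=False):
                    conn[b, s] = cand
                    if (e := mse(conn)) < best:
                        best, swap = e, (b, s, cand)
                conn[b, s] = old
        if swap is None:
            return conn                  # no improving rewiring found
        conn[swap[0], swap[1]] = swap[2]
    return conn

# Toy usage: approximate a target rate signal from 50 "liquid" units using
# 4 dendritic branches with 3 binary connections each.
R = rng.random((200, 50))
target = 2.0 * R[:, 3] + R[:, 17]
conn = rewire(R, target, rng.integers(0, 50, size=(4, 3)))
```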

    Learning intrinsic excitability in medium spiny neurons

    We present an unsupervised, local, activation-dependent learning rule for intrinsic plasticity (IP) that affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model of medium spiny striatal neurons to show the effects of the parametrization of individual ion channels on the neuronal activation function. We show that parameter changes within physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution from synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity that lets intrinsic variability determine the spike response. We briefly discuss the implications of this conditional memory for learning and addiction. Comment: 20 pages, 8 figures
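    As a loose illustration of a use-dependent IP rule (not the striatal model fitted in the paper), the sketch below scales maximal conductances multiplicatively toward a target firing rate; the channel signs, learning rate, and clipping bounds are assumed for the example.

```python
import numpy as np

def ip_update(g, rate, target_rate, channel_sign, eta=0.01, lo=None, hi=None):
    # Homeostatic intrinsic-plasticity sketch: scale each maximal conductance
    # so the firing rate moves toward a target. Spike-promoting channels
    # (sign +1, e.g. Na/Ca currents) grow when the cell is too quiet, while
    # spike-suppressing ones (sign -1, e.g. K currents) shrink, and vice
    # versa. The multiplicative rule and the signs are illustrative
    # assumptions, not the paper's fitted model.
    g = g * (1.0 + eta * (target_rate - rate) * channel_sign)
    if lo is not None:
        g = np.clip(g, lo, hi)  # keep conductances in physiological ranges
    return g

# Example: a cell firing at 2 Hz with a 5 Hz target becomes more excitable.
g = np.array([1.0, 0.5, 2.0])            # mS/cm^2 (hypothetical values)
sign = np.array([+1.0, +1.0, -1.0])      # Na-like, Ca-like, K-like
print(ip_update(g, rate=2.0, target_rate=5.0, channel_sign=sign))
```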

    Hotspots of dendritic spine turnover facilitate clustered spine addition and learning and memory.

    Modeling studies suggest that clustered structural plasticity of dendritic spines is an efficient mechanism of information storage in cortical circuits. However, why new clustered spines occur in specific locations, and how their formation relates to learning and memory (L&M), remain unclear. Using in vivo two-photon microscopy, we track spine dynamics in retrosplenial cortex before, during, and after two forms of episodic-like learning and find that spine turnover before learning predicts future L&M performance, as well as the localization and rates of spine clustering. Consistent with the idea that these measures are causally related, a genetic manipulation that enhances spine turnover also enhances both L&M and spine clustering. Biophysically inspired modeling suggests that turnover increases clustering, network sparsity, and memory capacity. These results support a hotspot model in which spine turnover drives the localization of clustered spine formation, which in turn modulates network function, thus influencing storage capacity and L&M.
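    A toy rendering of the hotspot idea (purely illustrative, not the paper's biophysical model): concentrating spine gain and loss in a few dendritic zones raises a simple neighbor-based clustering score relative to spatially uniform turnover. All sizes and the bias factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def clustering_after_turnover(hotspot_bias, n_sites=200, n_spines=40,
                              n_hotspots=5, steps=500):
    # Toy hotspot model: spines occupy sites along a dendrite. Each step,
    # one random spine is eliminated and one new spine forms, with formation
    # biased toward hotspot neighborhoods. Returns the fraction of spines
    # that have at least one occupied neighboring site.
    occupied = np.zeros(n_sites, bool)
    occupied[rng.choice(n_sites, n_spines, replace=False)] = True
    weight = np.ones(n_sites)
    for h in rng.choice(n_sites, n_hotspots, replace=False):
        weight[max(0, h - 3):h + 4] *= hotspot_bias   # elevated-turnover zone
    for _ in range(steps):
        occupied[rng.choice(np.flatnonzero(occupied))] = False       # loss
        free = np.flatnonzero(~occupied)
        occupied[rng.choice(free, p=weight[free] / weight[free].sum())] = True
    idx = np.flatnonzero(occupied)
    return np.mean(np.isin(idx + 1, idx) | np.isin(idx - 1, idx))

print(f"clustering with hotspots:     {clustering_after_turnover(8.0):.2f}")
print(f"clustering, uniform turnover: {clustering_after_turnover(1.0):.2f}")
```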

    Advanced operator-splitting-based semi-implicit spectral method to solve the binary phase-field crystal equations with variable coefficients

    We present an efficient method for numerically solving the equations of dissipative dynamics of the binary phase-field crystal model proposed by Elder et al. [Phys. Rev. B 75, 064107 (2007)], characterized by variable coefficients. Using the operator-splitting method, the problem is decomposed into sub-problems that can be solved more efficiently. A combination of non-trivial splitting with a spectral semi-implicit solution leads to sets of algebraic equations of diagonal matrix form. Extensive testing of the method has been carried out to find the optimum balance among the errors associated with time integration, spatial discretization, and splitting. We show that our method speeds up the computations by orders of magnitude relative to the conventional explicit finite difference scheme, while the cost of the pointwise implicit solution per timestep remains low. We also show that, due to its numerical dissipation, finite differencing cannot compete with spectral differencing in terms of accuracy. In addition, we demonstrate that our method can be efficiently parallelized for distributed-memory systems, where excellent scalability with the number of CPUs is observed.
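    To make the spectral semi-implicit core concrete, here is a minimal sketch for the single-field, constant-coefficient phase-field crystal equation (a simplified stand-in for the binary, variable-coefficient model treated in the paper): the stiff linear operator is diagonal in Fourier space and inverted exactly, while the nonlinearity is treated explicitly. Grid size, the parameter r, and the time step are illustrative.

```python
import numpy as np

def pfc_step(psi, dt, r=-0.25, L=32 * np.pi):
    # One semi-implicit spectral step for
    #   d(psi)/dt = laplacian[(r + (1 + laplacian)^2) psi + psi^3].
    # The stiff linear term is implicit (and diagonal in Fourier space);
    # the cubic nonlinearity is explicit.
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    denom = 1.0 + dt * k2 * (r + (1.0 - k2) ** 2)   # implicit linear part
    psi_hat = (np.fft.fft2(psi) - dt * k2 * np.fft.fft2(psi ** 3)) / denom
    return np.fft.ifft2(psi_hat).real

# Usage: relax a noisy initial state; a relatively large dt stays stable
# because the stiff operator is treated implicitly.
rng = np.random.default_rng(3)
psi = -0.2 + 0.05 * rng.standard_normal((128, 128))
for _ in range(200):
    psi = pfc_step(psi, dt=0.5)
```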

    Computing with Synchrony


    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers, as opposed to the view, prevalent among neuroscientists today, of biological neurons as simple, mainly spatial pattern recognizers. In this thesis, I attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications for the various brain circuits that neurons compose and for how information is encoded by neuronal activity in the brain; namely, that these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction, we highlight the main components that comprise a neural microcircuit capable of performing useful computations and illustrate the interdependence of these components from a system perspective. In chapter 1, we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2, we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns using a very simple, biologically plausible learning rule. In chapter 3, we use a differentiable deep-network analog of a realistic cortical neuron as a tool to approximate the gradient of the neuron's output with respect to its input, and we use this capability in an attempt to teach the neuron to perform the nonlinear XOR operation. In chapter 4, we expand on chapter 3 and describe the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons, representing either small microcircuits or entire brain regions.
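    Chapter 3's surrogate-gradient idea can be caricatured as follows (a sketch under strong simplifications: a hypothetical black-box nonlinear map stands in for the detailed compartmental neuron, and a small one-hidden-layer network stands in for the deep-network analog). The surrogate is fit to input-output samples, and its analytic gradient then approximates the gradient of the neuron's output with respect to its input:

```python
import numpy as np

rng = np.random.default_rng(4)

# Black-box "neuron" whose internals we cannot differentiate (a hypothetical
# nonlinear map standing in for a detailed compartmental model).
W_true = rng.standard_normal((20, 5))
black_box = lambda x: np.tanh(x @ W_true).sum(axis=-1)

# Fit a differentiable surrogate to input/output samples of the black box.
X = rng.standard_normal((5000, 20))
Y = black_box(X)
H = 64
W1, b1 = 0.3 * rng.standard_normal((20, H)), np.zeros(H)
w2 = 0.1 * rng.standard_normal(H)
lr = 1e-3
for _ in range(2000):                       # plain SGD on squared error
    i = rng.integers(0, len(X), 128)
    h = np.tanh(X[i] @ W1 + b1)
    err = h @ w2 - Y[i]
    g_pre = np.outer(err, w2) * (1 - h ** 2)    # backprop through tanh layer
    w2 -= lr * h.T @ err / 128
    W1 -= lr * X[i].T @ g_pre / 128
    b1 -= lr * g_pre.mean(axis=0)

def surrogate_grad(x):
    # Analytic gradient of the surrogate: approximates d(neuron)/d(input)
    # and can drive gradient-based "teaching" of the black-box neuron.
    h = np.tanh(x @ W1 + b1)
    return W1 @ (w2 * (1 - h ** 2))

print(surrogate_grad(rng.standard_normal(20))[:4])
```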

    Neural coding strategies and mechanisms of competition

    A long-running debate concerns whether neural representations are encoded using a distributed or a local coding scheme. In both schemes, individual neurons respond to certain specific patterns of presynaptic activity; hence, rather than being dichotomous, both coding schemes are based on the same representational mechanism. We argue that a population of neurons needs to be capable of learning both local and distributed representations, as appropriate to the task, and should be capable of generating both local and distributed codes in response to different stimuli. Many neural network algorithms, which are often employed as models of cognitive processes, fail to meet all of these requirements. In contrast, we present a neural network architecture that enables a single algorithm to efficiently learn, and respond using, both types of coding scheme.
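    A minimal sketch of the shared mechanism (generic, not the architecture proposed in the paper): the same pattern-matching units yield a local code under a winner-take-all readout and a distributed code when their graded activities are kept. The weights and normalization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n_units, n_inputs = 10, 50
W = rng.standard_normal((n_units, n_inputs))   # each row: a preferred pattern

def respond(x, local=False):
    # Both coding schemes rest on the same mechanism: every neuron computes
    # a match between the input and its preferred pattern of presynaptic
    # activity. A winner-take-all readout of the matches yields a local
    # code; keeping the graded activities yields a distributed code.
    a = np.maximum(W @ x, 0.0)
    if local:
        code = np.zeros_like(a)
        code[np.argmax(a)] = 1.0       # local: a single active unit
        return code
    return a / (a.sum() + 1e-12)       # distributed: activity spread out

x = rng.standard_normal(n_inputs)
print("local code:      ", respond(x, local=True))
print("distributed code:", np.round(respond(x), 2))
```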