
    Clustering Predicts Memory Performance in Networks of Spiking and Non-Spiking Neurons

    The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so results in artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong negative linear correlation with the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
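The static measure the abstract refers to can be computed directly from the connectivity graph. A minimal sketch in pure Python, with a hypothetical toy graph (the paper's actual networks and parameters are not reproduced here):

```python
# Sketch: mean local clustering coefficient of an undirected connectivity
# graph, the static measure correlated (negatively) with memory performance.
# The toy graph below is an illustrative assumption.

def clustering_coefficient(adj):
    """Mean local clustering coefficient.

    adj: dict mapping node -> set of neighbour nodes (undirected).
    """
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Count edges among the node's neighbours (each pair once).
        links = sum(1 for u in nbrs for v in nbrs
                    if u < v and v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy example: a triangle plus one pendant node.
graph = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2},
}
print(clustering_coefficient(graph))  # triangle nodes cluster, node 3 does not
```

Sweeping this value across candidate connectivity patterns and plotting it against recall performance is the kind of comparison the abstract describes.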

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron a capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10 to 50% less computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation.
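The branch-then-soma computation described above can be sketched in a few lines. This is an illustrative toy, not the paper's exact model: the squaring nonlinearity, the input vector, and the two connection patterns are assumptions chosen to show why grouping correlated inputs on one branch raises the response:

```python
# Sketch: binary inputs routed to dendritic branches, each branch applies a
# nonlinearity, and the soma sums branch outputs. The squaring nonlinearity
# and the connection patterns below are illustrative assumptions.

def dendritic_neuron(x, branches, nonlin=lambda s: s * s):
    """x: list of binary inputs (0/1).
    branches: list of lists of input indices (the learned connection
              pattern, with binary-valued synapses).
    Returns the somatic sum of nonlinearly processed branch activations."""
    return sum(nonlin(sum(x[i] for i in idx)) for idx in branches)

# Inputs 0, 2, 3 are co-active in this pattern; structural learning would
# move such correlated inputs onto the same branch.
x = [1, 0, 1, 1, 0, 1]
correlated = [[0, 2, 3], [1, 4, 5]]
scattered  = [[0, 1, 2], [3, 4, 5]]
print(dendritic_neuron(x, correlated))  # 3^2 + 1^2 = 10
print(dendritic_neuron(x, scattered))   # 2^2 + 2^2 = 8
```

Because the branch nonlinearity is superlinear, concentrating co-active synapses on one branch yields a larger somatic response than spreading them out, which is the margin the structural learning rule exploits.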

    Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex

    Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences. Comment: Submitted for publication.
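The core recognition mechanism described above can be sketched as threshold detection on dendritic segments: each segment stores a small set of presynaptic cells and responds when its overlap with the current sparse activity pattern is large enough. The segment contents, threshold, and activity pattern below are illustrative assumptions, not values from the paper:

```python
# Sketch: pattern recognition on active dendrites. A segment "fires"
# (NMDA-spike-like detection) when enough of its synapses see active cells,
# so recognition is robust to noise and missing inputs.

THRESHOLD = 3  # active synapses needed on one segment to recognize a pattern

def recognized_segments(segments, active_cells):
    """segments: list of sets of presynaptic cell ids (one per segment).
    active_cells: set of currently active cell ids (sparse code).
    Returns indices of segments whose overlap reaches THRESHOLD."""
    return [i for i, seg in enumerate(segments)
            if len(seg & active_cells) >= THRESHOLD]

segments = [
    {1, 4, 7, 9},     # stores pattern A
    {2, 3, 8, 12},    # stores pattern B
    {5, 6, 10, 11},   # stores pattern C
]
active = {1, 4, 9, 12, 20}  # noisy version of pattern A
print(recognized_segments(segments, active))  # -> [0]
```

With thousands of synapses split across many such segments, a single neuron can store hundreds of patterns, and subthreshold matches can serve as the depolarizing "predictions" the abstract describes.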

    Dopamine-modulated dynamic cell assemblies generated by the GABAergic striatal microcircuit

    The striatum, the principal input structure of the basal ganglia, is crucial to both motor control and learning. It receives convergent input from all over the neocortex, hippocampal formation, amygdala and thalamus, and is the primary recipient of dopamine in the brain. Within the striatum is a GABAergic microcircuit that acts upon these inputs, formed by the dominant medium-spiny projection neurons (MSNs) and fast-spiking interneurons (FSIs). There has been little progress in understanding the computations it performs, hampered by the non-laminar structure that prevents identification of a repeating canonical microcircuit. We here begin the identification of potential dynamically-defined computational elements within the striatum. We construct a new three-dimensional model of the striatal microcircuit's connectivity, and instantiate this with our dopamine-modulated neuron models of the MSNs and FSIs. A new model of gap junctions between the FSIs is introduced and tuned to experimental data. We introduce a novel multiple spike-train analysis method, and apply this to the outputs of the model to find groups of synchronised neurons at multiple time-scales. We find that, with realistic in vivo background input, small assemblies of synchronised MSNs spontaneously appear, consistent with experimental observations, and that the number of assemblies and the time-scale of synchronisation are strongly dependent on the simulated concentration of dopamine. We also show that feed-forward inhibition from the FSIs counter-intuitively increases the firing rate of the MSNs. Such small cell assemblies forming spontaneously only in the absence of dopamine may contribute to motor control problems seen in humans and animals following a loss of dopamine cells.

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
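The pipeline above starts from a matrix of pairwise spike-train distances, on which persistent homology is then computed. As an illustration of that first stage, here is one classic spike-train metric, the Victor–Purpura distance; the abstract does not specify which three distances the paper uses, so this choice and the toy spike trains are assumptions:

```python
# Sketch: the Victor–Purpura spike-train edit distance. Cost parameter q
# trades shifting a spike in time against deleting/inserting it (each
# deletion or insertion costs 1; a shift by dt costs q*dt).

def victor_purpura(t1, t2, q=1.0):
    """Edit distance between two sorted lists of spike times."""
    n, m = len(t1), len(t2)
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = i          # delete i spikes
    for j in range(1, m + 1):
        G[0][j] = j          # insert j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1,
                          G[i][j - 1] + 1,
                          G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n][m]

# Pairwise distance matrix over a toy population; persistent homology would
# then be computed on the metric space this matrix defines.
trains = [[0.1, 0.5, 0.9], [0.1, 0.55, 0.9], [0.3, 0.7]]
D = [[victor_purpura(a, b) for b in trains] for a in trains]
print(D)
```

Feeding such a distance matrix to a persistent-homology library (e.g. as a precomputed metric for a Rips filtration) yields the topological features the classifier is trained on.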

    Integration of Spiking Neural Networks for Understanding Interval Timing

    The ability to perceive the passage of time in the seconds-to-minutes range is a vital and ubiquitous characteristic of life. This ability allows organisms to make behavioral changes based on the temporal contingencies between stimuli and the potential rewards they predict. While the psychophysical manifestations of time perception have been well-characterized, many aspects of its underlying biology are still poorly understood. A major contributor to this is the limitations of current in vivo techniques, which do not allow for proper assessment of signaling over micro-, meso- and macroscopic spatial scales. Alternatively, the integration of biologically inspired artificial neural networks (ANNs) based on the dynamics and cyto-architecture of brain regions associated with time perception can help mitigate these limitations and, in conjunction, provide a powerful tool for progressing research in the field. To this end, this chapter aims to: (1) provide insight into the biological complexity of interval timing, (2) outline limitations in our ability to accurately assess these neural mechanisms in vivo, and (3) demonstrate potential application of ANNs for better understanding the biological underpinnings of temporal processing.

    Recurrent neural network based approach for estimating the dynamic evolution of grinding process variables

    The grinding process is widely used for manufacturing precision components by chip removal, owing to its good surface finishes and excellent tolerances. Modelling and control of the grinding process are therefore highly important for meeting customers' economic and precision requirements. However, the analytical models developed so far are far from being implementable in industry. For this reason, several studies have proposed the use of intelligent techniques for modelling the grinding process. However, these proposals a) do not generalize to new grinding wheels and b) do not take wheel wear into account, an effect essential to a good model of the grinding process. We therefore propose the use of recurrent neural networks to estimate grinding process variables in a way that a) generalizes to new wheels and b) accounts for wheel wear, i.e. that can estimate process variables as the wheel wears. Building on this general methodology, virtual sensors have been developed for measuring wheel wear and workpiece roughness, two essential grinding process variables. The general methodology is also applied to estimating, off-machine, the specific grinding energy, which can help select the wheel and the grinding parameters in advance. However, a single network is not sufficient to cover all existing wheels and grinding conditions, so a methodology is also proposed to generate ad-hoc networks by selecting specific data from the whole database, making use of Fuzzy c-Means algorithms. Finally, the results obtained improve on those reported to date.
However, these results are not good enough to control the process. We therefore propose the use of spiking neural networks. Because they operate on spikes, these networks are inherently able to handle temporal data, which makes them suitable for estimating values that evolve over time. However, such networks have so far been used only for classification, not for the prediction of temporal evolutions, owing to the lack of methods for encoding and decoding temporal data. This work therefore proposes a methodology for encoding sequential signals into spike trains and reconstructing sequential signals from spike trains. In the future, this may make it possible to use spiking neural networks for the prediction of sequential and/or temporal signals.
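One simple scheme of the kind such an encoding/decoding methodology requires is temporal-contrast (delta) coding: the signal emits an "up" or "down" spike whenever it drifts more than a threshold away from the last reconstructed level, and decoding replays the spikes as a staircase. This is a generic illustration, not the thesis's actual method; the threshold and test signal are assumptions:

```python
# Sketch: delta (temporal-contrast) encoding of a sampled signal into
# signed spike events, and reconstruction from those events.

def delta_encode(signal, threshold=0.5):
    spikes = []                      # (sample index, +1 or -1) events
    level = signal[0]
    for i, v in enumerate(signal[1:], start=1):
        while v - level >= threshold:
            level += threshold
            spikes.append((i, +1))
        while level - v >= threshold:
            level -= threshold
            spikes.append((i, -1))
    return spikes

def delta_decode(spikes, start, length, threshold=0.5):
    out, level, k = [], start, 0
    for i in range(length):
        while k < len(spikes) and spikes[k][0] == i:
            level += threshold * spikes[k][1]
            k += 1
        out.append(level)
    return out

sig = [0.0, 0.4, 1.1, 2.0, 1.2, 0.3]
sp = delta_encode(sig)
rec = delta_decode(sp, sig[0], len(sig))
print(sp)
print(rec)  # staircase approximation, within one threshold of sig
```

The reconstruction error is bounded by the threshold, which makes the spike train an invertible (lossy) temporal code and hence usable as the input/output representation of a spiking network predicting continuous process variables.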