70 research outputs found

    Hierarchical Associative Memory Based on Oscillatory Neural Network

    In this thesis we explore algorithms and develop architectures based on emerging nano-device technologies for cognitive computing tasks such as recognition, classification, and vision. In particular, we focus on pattern matching in high-dimensional vector spaces to address the nearest neighbor search problem. Recent progress in nanotechnology provides us with novel nano-devices whose special nonlinear response characteristics fit cognitive tasks better than general-purpose computing. We build an associative memory (AM) by weakly coupling nano-oscillators into an oscillatory neural network and design a hierarchical tree structure to organize groups of AM units. For hierarchical recognition, we first examine an architecture in which image patterns are partitioned into different receptive fields and processed by individual AM units at lower levels, then abstracted using sparse coding techniques for recognition at higher levels. A second, tree-structured model is developed as a more scalable AM architecture for large data sets. In this model, patterns are grouped by hierarchical k-means clustering and organized into hierarchical clusters, and recognition proceeds by comparing input patterns with the centroids identified during clustering. The tree is explored in a "depth-only" manner until the closest stored image pattern is output. We also extend this search technique to incorporate a branch-and-bound algorithm. The models and corresponding algorithms are tested on two standard face recognition data sets. We show that the depth-only hierarchical model is highly data-set dependent, achieving 97% or 67% recognition accuracy relative to a single large associative memory, while the branch-and-bound search increases search time by only a factor of two compared to the depth-only search.
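
    A minimal sketch of the depth-only tree search described above, assuming plain k-means over raw vectors rather than the thesis's oscillator-based AM hardware; all names and parameters are illustrative.

```python
# Hypothetical sketch of the "depth-only" hierarchical associative-memory search:
# patterns are grouped by k-means, and a query descends to the nearest centroid,
# then matches only against the patterns stored in that leaf (one AM unit).
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

def build_tree(X, k):
    centroids, labels = kmeans(X, k)
    leaves = [X[labels == j] for j in range(k)]   # patterns held by each AM unit
    return centroids, leaves

def depth_only_search(query, centroids, leaves):
    # descend to the single closest centroid, then match within that leaf only
    j = np.argmin(((centroids - query) ** 2).sum(-1))
    leaf = leaves[j]
    best = np.argmin(((leaf - query) ** 2).sum(-1))
    return leaf[best]

X = np.random.default_rng(1).normal(size=(200, 64))   # stand-in "image" vectors
centroids, leaves = build_tree(X, k=8)
print(depth_only_search(X[5] + 0.05, centroids, leaves)[:4])
```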

    The Performance of Associative Memory Models with Biologically Inspired Connectivity

    This thesis is concerned with one important question in artificial neural networks: how does the biologically inspired connectivity of a network affect its associative memory performance? In recent years, research on the mammalian cerebral cortex, which has the main responsibility for associative memory function in the brain, has suggested that this cortical network is far from fully connected, as is commonly assumed in traditional associative memory models. Instead, it is a sparse network with interesting connectivity characteristics, such as the "small world network" properties of short Mean Path Length, high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected. There is, however, no conclusive evidence of how these different connectivity characteristics affect the associative memory performance of a network. This thesis addresses the question using networks with different types of connectivity inspired by biological evidence. The findings of this programme are unexpected and important. The performance of a non-spiking associative memory model is found to be predicted by the network's Clustering Coefficient through a linear correlation, regardless of the detailed connectivity pattern. This is particularly important because the Clustering Coefficient is a static measure of one aspect of connectivity, whilst the associative memory performance reflects the result of a complex dynamic process. The research also reveals that improvements in the performance of a network do not necessarily rely on an increase in the network's wiring cost; it is therefore possible to construct networks with high associative memory performance but relatively low wiring cost. In particular, Gaussian-distributed connectivity achieves the best performance with the lowest wiring cost among all examined connectivity models. The results also suggest that a modular network with an appropriate configuration of Gaussian-distributed connectivity, both within each module and across modules, can perform nearly as well as the Gaussian-distributed non-modular network. Finally, a comparison between non-spiking and spiking associative memory models suggests that, in terms of associative memory performance, the implications of connectivity transcend the details of the neural model, that is, whether its neurons are spiking or non-spiking.
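
    As a rough illustration of the quantities discussed above (not the thesis's exact network models), the sketch below builds a sparse ring network with Gaussian-distributed connection distances and reports its Clustering Coefficient alongside a simple wiring-cost proxy; the construction and parameters are assumptions.

```python
# Illustrative sketch: a sparse ring network whose connection distances follow a
# Gaussian distribution, with the Clustering Coefficient and a wiring-cost proxy
# (summed ring distances of all edges) reported for it.
import numpy as np
import networkx as nx

def gaussian_ring(n=200, n_edges=1000, sigma=10.0, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    while G.number_of_edges() < n_edges:
        i = int(rng.integers(n))
        d = int(abs(rng.normal(0, sigma))) % n        # Gaussian-distributed offset
        G.add_edge(i, (i + max(d, 1)) % n)            # avoid self-loops
    return G

def wiring_cost(G, n):
    # ring distance between endpoints, summed over all edges
    return sum(min(abs(i - j), n - abs(i - j)) for i, j in G.edges())

n = 200
G = gaussian_ring(n)
print("clustering coefficient:", nx.average_clustering(G))
print("wiring cost:", wiring_cost(G, n))
```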

    Effect of Input Noise and Output Node Stochastic on Wang's k WTA


    VLSI Implementation of Olfactory Cortex Model

    This thesis implements the building blocks required to realize a biologically motivated olfactory neural model in silicon as special-purpose hardware. The olfactory model was originally developed by R. Granger, G. Lynch, and Ambros-Ingerson. CMOS analog integrated circuits were used for this purpose; all of the building blocks were fabricated through the MOSIS service and tested at our site. The results of this study can be used to realize a system-level integration of the olfactory model.

    Hardware Architectures and Implementations for Associative Memories: The Building Blocks of Hierarchically Distributed Memories

    During the past several decades, the semiconductor industry has grown into a global industry with revenues of around $300 billion. Intel no longer relies only on transistor scaling for higher CPU performance, but instead focuses more on multiple cores on a single die. It has been projected that in 2016 most CMOS circuits will be manufactured with a 22 nm process. These circuits will have a large number of defects; in particular, as transistors shrink into the deep sub-micron regime, formerly deterministic circuits begin to exhibit probabilistic characteristics. It will therefore be challenging to map traditional computational models onto probabilistic circuits, suggesting a need for fault-tolerant computational algorithms. Biologically inspired associative memories (AMs), the building blocks of the cortical hierarchically distributed memories (HDMs) discussed in this dissertation, map remarkably well onto nano-scale electronics and offer great fault tolerance. Research on the potential mapping of the HDM onto CMOL (hybrid CMOS/nanoelectronic) nanogrids provides useful insight into the development of non-von Neumann neuromorphic architectures and into the semiconductor industry. In this dissertation, we investigated implementations of AMs on different hardware platforms, including a microprocessor-based personal computer (PC), a PC cluster, field-programmable gate arrays (FPGAs), CMOS, and CMOL nanogrids. We studied two types of neural associative memory models, with and without temporal information. We first decomposed the computational models into basic, common operations, such as the matrix-vector inner product and k-winners-take-all (k-WTA). We then analyzed the baseline performance/price ratio of implementing the AMs on a PC, and continued with a similar performance/price analysis of implementations on more parallel hardware platforms, such as a PC cluster and FPGAs. The majority of the research, however, emphasized implementations in all-digital and mixed-signal full-custom CMOS and in CMOL nanogrids. We conclude that mixed-signal CMOL nanogrids exhibit the best performance/price ratio of all the hardware platforms considered. We also highlight some of the trade-offs between dedicated and virtualized hardware circuits for the HDM models: a simple time-multiplexing scheme for the digital CMOS implementations can achieve throughput comparable to that of the mixed-signal CMOL nanogrids.
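
    The recall step built from the two core operations named above (matrix-vector inner product followed by k-WTA) can be sketched in a few lines of software; this is only an illustration of the operations, not the dissertation's CMOS/CMOL circuits, and the binary pattern format is an assumption.

```python
# Minimal sketch of the two core operations: a matrix-vector inner product
# followed by k-winners-take-all (k-WTA) over the stored patterns.
import numpy as np

def am_recall(stored, query, k):
    scores = stored @ query                  # matrix-vector inner product
    winners = np.argsort(scores)[-k:]        # indices of the k best matches
    out = np.zeros(len(stored), dtype=int)
    out[winners] = 1                         # k-WTA: only the winners stay on
    return out

rng = np.random.default_rng(0)
stored = rng.integers(0, 2, size=(32, 256))       # 32 stored binary patterns
noise = (rng.random(256) < 0.05).astype(int)      # flip roughly 5% of the bits
query = stored[7] ^ noise                         # noisy copy of pattern 7
print(am_recall(stored, query, k=1))              # unit 7 should win
```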

    IST Austria Thesis

    Distinguishing between similar experiences is achieved by the brain in a process called pattern separation. In the hippocampus, pattern separation reduces the interference of memories and increases storage capacity by decorrelating similar input patterns of neuronal activity into non-overlapping output firing patterns. The winner-take-all (WTA) mechanism is a theoretical model of pattern separation in which a "winner" cell suppresses the activity of neighboring neurons through feedback inhibition. However, whether the network properties of the dentate gyrus support WTA as a biologically conceivable model has remained unknown. Here, we show that the connectivity rules of PV+ interneurons and their synaptic properties are optimized for efficient pattern separation. Using multiple whole-cell in vitro recordings, we found that PV+ interneurons mainly connect to granule cells (GCs) through lateral inhibition, a form of feedback inhibition in which a GC inhibits other GCs, but not itself, through the activation of PV+ interneurons. Lateral inhibition between GCs and PV+ interneurons was ~10 times more abundant than recurrent connections. Furthermore, GC–PV+ interneuron connectivity was more spatially confined but less abundant than PV+ interneuron–GC connectivity, leading to an asymmetrical distribution of excitatory and inhibitory connectivity. A network model of the dentate gyrus incorporating these measured connectivity rules efficiently decorrelates neuronal activity patterns, with WTA as the primary mechanism. This process relies on lateral inhibition, the fast-signaling properties of PV+ interneurons, and the asymmetrical distribution of excitatory and inhibitory connectivity. Finally, we found that silencing the activity of PV+ interneurons in vivo leads to acute deficits in discrimination between similar environments, suggesting that PV+ interneuron networks are necessary for behaviorally relevant computations. Our results demonstrate that PV+ interneurons possess unique connectivity and fast signaling properties that confer on the dentate gyrus the network properties that allow pattern separation to emerge, contributing to our knowledge of how specific forms of network organization underlie sophisticated types of information processing.
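
    A toy sketch of the winner-take-all idea described above, in which the most strongly driven granule cell recruits inhibition onto its neighbors but not onto itself; the rate-based formulation and the numbers are assumptions, not the thesis's biophysical model.

```python
# Toy winner-take-all via lateral inhibition: the most strongly driven
# "granule cell" inhibits all the others but not itself, so a dense input
# pattern is reduced to a sparse, single-winner output.
import numpy as np

def wta_lateral_inhibition(drive, inhibition=5.0, threshold=0.5):
    winner = np.argmax(drive)                          # most strongly driven cell
    net = drive.astype(float).copy()
    net[np.arange(len(net)) != winner] -= inhibition   # lateral, not recurrent, inhibition
    return (net > threshold).astype(int)

rng = np.random.default_rng(0)
input_pattern = rng.random(10)                 # excitatory drive to 10 granule cells
print(input_pattern.round(2))
print(wta_lateral_inhibition(input_pattern))   # dense drive reduced to a single winner
```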

    Taming neuronal noise with large networks

    How does reliable computation emerge from networks of noisy neurons? While individual neurons are intrinsically noisy, the collective dynamics of populations of neurons taken as a whole can be almost deterministic, supporting the hypothesis that, in the brain, computation takes place at the level of neuronal populations. Mathematical models of networks of noisy spiking neurons allow us to study the effects of neuronal noise on the dynamics of large networks. Classical mean-field models, i.e., models where all neurons are identical and each neuron receives the average spike activity of the other neurons, offer toy examples where neuronal noise is absorbed in large networks, that is, large networks behave like deterministic systems. In particular, the dynamics of these large networks can be described by deterministic neuronal population equations. In this thesis, I first generalize classical mean-field limit proofs to a broad class of spiking neuron models that can exhibit spike-frequency adaptation and short-term synaptic plasticity, in addition to refractoriness. The mean-field limit can be exactly described by a multidimensional partial differential equation, the long-time behavior of which can be rigorously studied using deterministic methods. Then, we show that there is a conceptual link between mean-field models for networks of spiking neurons and latent variable models used for the analysis of multi-neuronal recordings. More specifically, we use a recently proposed finite-size neuronal population equation, which we first mathematically clarify, to design a tractable Expectation-Maximization-type algorithm capable of inferring the latent population activities of multi-population spiking neural networks from the spike activity of only a few visible neurons, illustrating the idea that latent variable models can be seen as partially observed mean-field models. In classical mean-field models, neurons in large networks behave like independent, identically distributed processes driven by the average population activity, a deterministic quantity by the law of large numbers. The fact that the neurons are identically distributed processes implies a form of redundancy that has not been observed in the cortex and which seems biologically implausible. To show numerically that the redundancy present in classical mean-field models is unnecessary for neuronal noise absorption in large networks, I construct a disordered network model in which networks of spiking neurons behave like deterministic rate networks, despite the absence of redundancy. This last result suggests that the concentration of measure phenomenon, which generalizes the "law of large numbers" of classical mean-field models, might be an instrumental principle for understanding the emergence of noise-robust population dynamics in large networks of noisy neurons.
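
    The noise-absorption idea can be illustrated numerically under a strong simplifying assumption (independent Poisson-like neurons, no interactions): the population-averaged activity fluctuates less and less as the network grows. This is only the law-of-large-numbers intuition, not the thesis's mean-field derivations.

```python
# Population activity of N independent noisy neurons becomes nearly
# deterministic as N grows: the standard deviation of the population rate
# shrinks roughly like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
rate, dt, steps = 5.0, 0.001, 1000        # 5 Hz neurons, 1 ms bins, 1 s of activity

for n_neurons in (10, 100, 1000, 10000):
    spikes = rng.random((steps, n_neurons)) < rate * dt   # independent noisy neurons
    pop_activity = spikes.mean(axis=1) / dt               # population rate per bin (Hz)
    print(n_neurons, "neurons -> std of population rate:",
          round(pop_activity.std(), 2))
```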

    Sparse Distributed Memory is a Continual Learner

    Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural circuit with the powerful Transformer model, we create a modified Multi-Layered Perceptron (MLP) that is a strong continual learner. We find that every component of our MLP variant translated from biology is necessary for continual learning. Our solution is also free from any memory replay or task information, and introduces novel methods to train sparse networks that may be broadly applicable. Comment: 9 pages. ICLR acceptance.
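
    The paper's exact architecture is not spelled out in this abstract; as an assumption for illustration, the sketch below shows a generic top-k sparse hidden layer, the kind of winner-take-most activation commonly associated with Sparse Distributed Memory.

```python
# Generic top-k sparse hidden layer: only the k most active hidden units stay on.
# This is an illustrative assumption, not the exact architecture of the paper above.
import numpy as np

def topk_layer(x, W, k):
    h = W @ x                               # hidden pre-activations
    kth = np.sort(h)[-k]                    # value of the k-th largest activation
    return np.where(h >= kth, h, 0.0)       # zero out everything below the top k

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))               # hidden weights
x = rng.normal(size=32)                     # input
h = topk_layer(x, W, k=8)
print(int((h > 0).sum()), "active hidden units out of", len(h))
```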

    Biologically-Informed Computational Models of Harmonic Sound Detection and Identification

    Harmonic sounds, or harmonic components of sounds, are often fused into a single percept by the auditory system. Although the exact neural mechanisms for harmonic sensitivity remain unclear, it presumably arises in the auditory cortex, because subcortical neurons typically prefer only a single frequency. Pitch-sensitive units and harmonic template units found in awake marmoset auditory cortex are sensitive to temporal and spectral periodicity, respectively. This thesis is a study of possible computational mechanisms underlying cortical harmonic selectivity. To examine whether harmonic selectivity is related to statistical regularities of natural sounds, simulated auditory nerve responses to natural sounds were used in principal component analysis, in comparison with independent component analysis, which yielded harmonic-sensitive model units whose population distribution of harmonic-selectivity metrics is similar to that of real cortical neurons. This result suggests that the variability of cortical harmonic selectivity may provide an efficient population representation of natural sounds. Several network models of spectral selectivity mechanisms are investigated. As a side study, adding synaptic depletion to an integrate-and-fire model could explain the observed modulation-sensitive units, which are related to pitch-sensitive units but cannot account for precise temporal regularity. When a feed-forward network is trained to detect harmonics, the result is always a sieve, which is excited by integer multiples of the fundamental frequency and inhibited by half-integer multiples. The sieve persists over a wide variety of conditions, including changing evaluation criteria, incorporating Dale's principle, and adding a hidden layer. A recurrent network trained by Hebbian learning produces harmonic selectivity by a novel dynamical mechanism that can be explained by a Lyapunov function favoring inputs that match the learned frequency correlations. These model neurons show sieve-like effective weights, like the harmonic template units, when probed with random harmonic stimuli, despite there being no sieve pattern anywhere in the network's weights. Online stimulus design has the potential to facilitate future experiments on nonlinear sensory neurons. We accelerated the sound-from-texture algorithm to enable online adaptive experimental design that maximizes the activities of sparsely responding cortical units. We calculated the optimal stimuli for harmonic-selective units and investigated a model-based information-theoretic method for stimulus optimization.
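
    A toy version of the sieve described above can be written down directly: a weight profile that is positive at integer multiples of a fundamental frequency and negative at half-integer multiples. The frequencies, widths, and Gaussian bumps are illustrative assumptions.

```python
# Toy harmonic "sieve": excitation at integer multiples of f0, inhibition at
# half-integer multiples, applied to simple synthetic spectra.
import numpy as np

def sieve_weights(freqs, f0, width=10.0):
    w = np.zeros_like(freqs)
    for n in range(1, int(freqs.max() // f0) + 1):
        w += np.exp(-((freqs - n * f0) ** 2) / (2 * width ** 2))          # excitation
        w -= np.exp(-((freqs - (n + 0.5) * f0) ** 2) / (2 * width ** 2))  # inhibition
    return w

def spectrum(freqs, harmonic_f0, width=10.0):
    s = np.zeros_like(freqs)
    for n in range(1, 10):
        s += np.exp(-((freqs - n * harmonic_f0) ** 2) / (2 * width ** 2))
    return s

freqs = np.linspace(0, 2000, 2001)            # Hz
w = sieve_weights(freqs, f0=200.0)
print("matched f0 response:   ", round(w @ spectrum(freqs, 200.0), 1))
print("mismatched f0 response:", round(w @ spectrum(freqs, 130.0), 1))
```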

    Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity

    Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is: what does this neuron learn? With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form in which each neuron is selective to a different segment of a repeating input pattern, and the neurons are feed-forwardly connected so that both the correct stimulus and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics that compare favourably with those observed experimentally: for example, synaptic weight distributions and the presence of silent synapses match experimental data.
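
    For reference, a minimal pair-based STDP update with exponential time windows is sketched below; this is the standard textbook form, not the thesis's resource-based rule, and the constants are illustrative.

```python
# Minimal pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise, with exponential time windows.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre                     # ms; positive means pre before post
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # potentiation
    return -a_minus * np.exp(dt / tau)      # depression

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep the weight bounded
    print(round(float(w), 4))
```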