
    Six networks on a universal neuromorphic computing substrate

    In this study, we present a highly configurable neuromorphic computing substrate and use it to emulate several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and its high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks that cover a broad spectrum of both structure and functionality.
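    The calibration idea can be sketched in a few lines: if fixed-pattern noise is modeled as a neuron-specific gain and offset, two reference measurements per neuron suffice to invert the distortion in software. A minimal illustration, in which the linear mismatch model and all numbers are assumptions rather than properties of the actual chip:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed mismatch model: each analog neuron reports a firing rate that
# deviates from the ideal response by a fixed, neuron-specific gain and
# offset (fixed-pattern noise from device mismatch).
n_neurons = 192
true_gain = rng.normal(1.0, 0.1, n_neurons)
true_offset = rng.normal(0.0, 5.0, n_neurons)

def emulate(stimulus_rate):
    """Return the measured rate of every neuron for one stimulus."""
    return true_gain * stimulus_rate + true_offset

# Calibration: sweep two reference stimuli, solve for gain/offset per
# neuron, then undo the distortion in software.
r_lo, r_hi = emulate(20.0), emulate(80.0)
gain = (r_hi - r_lo) / 60.0
offset = r_lo - gain * 20.0

def calibrated(stimulus_rate):
    return (emulate(stimulus_rate) - offset) / gain

spread_raw = np.std(emulate(50.0))
spread_cal = np.std(calibrated(50.0))
print(spread_raw, spread_cal)  # calibration shrinks the across-neuron spread
```

On real analog hardware the mismatch is not perfectly linear, so calibration reduces rather than eliminates the spread; the sketch shows only the bookkeeping.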

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and forms the basis for the maturation of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
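    The biology-to-hardware translation step can be illustrated with a toy parameter mapping: an accelerated system runs a fixed factor faster than biological real time, so biological time constants must be scaled down and validated against the hardware's configurable range. The acceleration factor and parameter range below are illustrative assumptions, not the device's specification:

```python
# Assumed speed-up of the accelerated analog substrate relative to biology.
ACCELERATION = 1.0e4
# Assumed configurable range for the hardware membrane time constant, in seconds.
HW_TAU_RANGE = (0.5e-6, 20e-6)

def to_hardware_tau(tau_bio_ms):
    """Translate a biological membrane time constant (ms) to hardware seconds."""
    tau_hw = tau_bio_ms * 1e-3 / ACCELERATION
    lo, hi = HW_TAU_RANGE
    if not lo <= tau_hw <= hi:
        # Parameters outside the configurable range cannot be placed on chip.
        raise ValueError(f"tau = {tau_bio_ms} ms maps outside the hardware range")
    return tau_hw

print(to_hardware_tau(10.0))  # 10 ms of biology -> 1 microsecond on chip
```

A real mapping layer performs this kind of scaling and range check for every neuron and synapse parameter before a network is placed on the substrate.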

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain function such as memory, classification or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
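    The quantity under study, pairwise correlation between spike trains, is commonly estimated from binned spike counts. A generic numpy sketch (not the Spikey analysis toolchain) showing how shared presynaptic input elevates this estimate relative to an independent control:

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_correlation(spikes_a, spikes_b, t_max, bin_width):
    """Pearson correlation of spike counts in common time bins."""
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    counts_a, _ = np.histogram(spikes_a, bins)
    counts_b, _ = np.histogram(spikes_b, bins)
    return np.corrcoef(counts_a, counts_b)[0, 1]

# Synthetic trains over 10 s: a and b share half of their spikes
# (a crude stand-in for common presynaptic input); c is independent.
shared = rng.uniform(0.0, 10.0, 200)
a = np.sort(np.concatenate([shared, rng.uniform(0.0, 10.0, 200)]))
b = np.sort(np.concatenate([shared, rng.uniform(0.0, 10.0, 200)]))
c = np.sort(rng.uniform(0.0, 10.0, 400))

print(binned_correlation(a, b, 10.0, 0.1))  # elevated by shared input
print(binned_correlation(a, c, 10.0, 0.1))  # near zero
```

With half the spikes shared, the count correlation sits near 0.5; inhibitory feedback in a recurrent network pushes this value far below the naive shared-input expectation.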

    The Contribution of Thalamocortical Core and Matrix Pathways to Sleep Spindles.

    Sleep spindles arise from the interaction of thalamic and cortical neurons. Neurons in the thalamic reticular nucleus (TRN) inhibit thalamocortical neurons, which in turn excite the TRN and cortical neurons. A fundamental principle of the anatomical organization of thalamocortical projections is the presence of two pathways: the diffuse matrix pathway and the spatially selective core pathway. Cortical layers are differentially targeted by these two pathways, with matrix projections synapsing in superficial layers and core projections impinging on middle layers. Based on this anatomical observation, we propose that spindles can be classified into two classes, those arising from the core pathway and those arising from the matrix pathway, although this does not exclude that some spindles combine both pathways at the same time. We find evidence for this hypothesis in EEG/MEG studies, intracranial recordings, and computational models that incorporate this difference. This distinction will prove useful in accounting for the multiple functions attributed to spindles, in that spindles of different types might act on local and widespread spatial scales. Because spindle mechanisms are often hijacked in epilepsy and schizophrenia, the classification proposed in this review might provide valuable information in defining which pathways have gone awry in these neurological disorders.

    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine intelligent approach for heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy yet a significantly low energy footprint, leading to an extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects are considered from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices.
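    The unsupervised readout rests on standard Fuzzy c-Means, which assigns each state vector a graded membership in every cluster instead of a hard label. A compact implementation of the textbook update equations on synthetic two-cluster data; the data, the fuzzifier m = 2, and the iteration count are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def fuzzy_c_means(x, c, m=2.0, n_iter=100):
    """Standard FCM: alternate center and membership updates."""
    n = len(x)
    u = rng.dirichlet(np.ones(c), size=n)  # random initial memberships, rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]         # weighted means
        d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))                     # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)                    # renormalize per sample
    return centers, u

# Two toy "state" clouds standing in for liquid states of two heart-rate
# regimes (purely synthetic data).
x = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, u = fuzzy_c_means(x, c=2)
print(np.round(centers, 2))  # one center near each cloud
```

The graded memberships, rather than the hard cluster labels, are what make such a readout usable without annotated training data.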

    Neural models of learning and visual grouping in the presence of finite conduction velocities

    The hypothesis of object binding-by-synchronization in the visual cortex has been supported by recent experiments in awake monkeys. They demonstrated coherence among gamma-activities (30–90 Hz) of local neural groups and its perceptual modulation according to the rules of figure-ground segregation. Interactions within and between these neural groups are based on axonal spike conduction with finite velocities. Physiological studies confirmed that most transmission delays are comparable to the temporal scale defined by gamma-activity (11–33 ms). How do these finite velocities influence the development of synaptic connections within and between visual areas? What is the relationship between the range of gamma-coherence and the velocity of signal transmission? Are these large temporal delays compatible with the recently discovered phenomenon of gamma-waves traveling across larger parts of the primary visual cortex? The refinement of connections in the immature visual cortex depends on temporal Hebbian learning to adjust synaptic efficacies between spiking neurons. The impact of constant, finite, axonal spike conduction velocities on this process was investigated using a set of topographic network models. Random spike trains with a confined temporal correlation width mimicked cortical activity before visual experience. After learning, the lateral connectivity within one network layer became spatially restricted, the width of the connection profile being directly proportional to the lateral conduction velocity. Furthermore, restricted feedforward divergence developed between neurons of two successive layers. The size of this connection profile matched the lateral connection profile of the lower layer neuron. The mechanism in this network model is suitable to explain the emergence of larger receptive fields at higher visual areas while preserving a retinotopic mapping.
The influence of finite conduction velocities on the local generation of gamma-activities and their spatial synchronization was investigated in a model of a mature visual area. Sustained input and local inhibitory feedback were sufficient for the emergence of coherent gamma-activity that extended across a few millimeters. Conduction velocities had a direct impact on the frequency of gamma-oscillations, but affected neither gamma-power nor the spatial extent of gamma-coherence. Adding long-range horizontal connections between excitatory neurons, as found in layer 2/3 of the primary visual cortex, increased the spatial range of gamma-coherence. The range was maximal for zero transmission delays and attenuated at all distances as lateral conduction velocities decreased. Below a velocity of 0.5 m/s, gamma-power and gamma-coherence were even smaller than without these connections, i.e., slow horizontal connections actively desynchronized neural populations. In conclusion, the enhancement of gamma-coherence by horizontal excitatory connections critically depends on fast conduction velocities. Coherent gamma-activity in the primary visual cortex and the accompanying models was found to cover only small regions of the visual field. This challenges the role of gamma-synchronization in solving the binding problem for larger object representations. Further analysis of the previous model revealed that the patches of coherent gamma-activity (1.8 mm half-height decline) were part of more globally occurring gamma-waves, which coupled over much larger distances (6.3 mm half-height decline). The model gamma-waves observed here are very similar to those found in the primary visual cortex of awake monkeys, indicating that local recurrent inhibition and restricted horizontal connections with finite axonal velocities are sufficient requirements for their emergence.
Overall, since the model is consistent with the connectivity and gamma-processes of the primary visual cortex, these results support the hypothesis that gamma-waves provide a generalized concept for object binding in the visual cortex.
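    The role of conduction delays in temporal Hebbian learning can be made concrete with a toy STDP-like rule: what matters for the weight change is the presynaptic spike's arrival time, shifted by distance/velocity. All constants below are illustrative, not those of the models described above:

```python
import math

# Illustrative STDP-like parameters (amplitudes and time constant in ms).
A_PLUS, A_MINUS, TAU = 0.1, 0.12, 20.0

def stdp(t_pre, t_post, distance_mm, velocity_m_per_s):
    """Weight change for one spike pair, using the delayed arrival time."""
    delay = distance_mm / velocity_m_per_s   # mm / (m/s) conveniently equals ms
    dt = t_post - (t_pre + delay)            # arrival-corrected time difference
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)  # post after arrival: potentiation
    return -A_MINUS * math.exp(dt / TAU)     # post before arrival: depression

# A 1 mm connection at 1 m/s adds 1 ms of delay: the same spike pair can
# flip from potentiation to depression once the delay is accounted for.
print(stdp(0.0, 0.5, 1.0, 1.0))   # arrival at 1.0 ms, post at 0.5 ms -> depression
print(stdp(0.0, 0.5, 1.0, 10.0))  # arrival at 0.1 ms, post at 0.5 ms -> potentiation
```

This arrival-time shift is why, after learning, the surviving lateral connection width scales with the conduction velocity.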

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel is perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully captured by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
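    The feedback mechanism can be condensed into a two-unit linear sketch (a drastic simplification of the network models above): each unit receives partly shared input, and the summed activity is fed back with negative gain, which damps the common fluctuation and can even push the output correlation slightly negative, as in the purely inhibitory case:

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear feedback loop, solved self-consistently per time step:
#   x_i = u_i + g * s,  s = x_1 + x_2  =>  s = (u_1 + u_2) / (1 - 2g),
# so inhibitory feedback (g < 0) shrinks the shared component of the output.
def output_correlation(g, steps=20000):
    shared = rng.normal(size=steps)
    private = rng.normal(size=(steps, 2))
    u = shared[:, None] + private            # feedforward drive per unit
    s = u.sum(axis=1) / (1.0 - 2.0 * g)      # summed activity with feedback
    x = u + g * s[:, None]
    return np.corrcoef(x.T)[0, 1]

print(output_correlation(0.0))   # no feedback: shared-input correlation ~ 0.5
print(output_correlation(-0.5))  # inhibitory feedback: correlation near zero
```

With equal shared and private input variance the open-loop correlation is 0.5; the feedback term subtracts most of the common fluctuation, leaving a small (here slightly negative) residual, mirroring the cancellation by negative spike-train correlations described in the abstract.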