
    Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals

    Spiking neural networks (SNNs) enable power-efficient implementations due to their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN that uses unsupervised learning to extract discriminative features from speech signals, which can subsequently be used in a classifier. The architecture consists of a spiking convolutional/pooling layer followed by a fully connected spiking layer for feature discovery. The convolutional layer of leaky integrate-and-fire (LIF) neurons represents primary acoustic features. The fully connected layer is equipped with a probabilistic spike-timing-dependent plasticity learning rule and represents the discriminative features through probabilistic LIF neurons. To assess the discriminative power of the learned features, they are used in a hidden Markov model (HMM) for spoken digit recognition. The experimental results show performance above 96%, which compares favorably with popular statistical feature extraction methods. Our results provide a novel demonstration of unsupervised feature acquisition in an SNN.
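
    As a rough illustration of the kind of pipeline described above, the following is a minimal sketch of a leaky integrate-and-fire layer whose feed-forward weights are adjusted by a stochastic, STDP-like rule. It is not the paper's architecture or its probabilistic STDP rule; the layer sizes, time constants, and update probability (n_in, tau_m, p_update, ...) are illustrative assumptions.

    ```python
    # Minimal sketch (not the paper's exact rule): a LIF layer driven by binary
    # input spike vectors, with a stochastic STDP-style weight update.
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 64, 10                     # input channels, feature neurons (assumed)
    dt, tau_m, v_thresh = 1.0, 20.0, 1.0     # ms, membrane time constant, threshold (assumed)
    a_plus, a_minus, p_update = 0.01, 0.012, 0.5

    w = rng.uniform(0.0, 0.5, size=(n_out, n_in))   # feed-forward weights
    v = np.zeros(n_out)                              # membrane potentials
    pre_trace = np.zeros(n_in)                       # recent presynaptic activity

    def step(in_spikes):
        """Advance the layer one time step given a binary input spike vector."""
        global v, pre_trace
        pre_trace = pre_trace * np.exp(-dt / tau_m) + in_spikes
        v = v * np.exp(-dt / tau_m) + w @ in_spikes   # leaky integration
        out_spikes = (v >= v_thresh).astype(float)
        v[out_spikes > 0] = 0.0                       # reset after a spike

        # Stochastic STDP-like update: potentiate inputs that were recently
        # active when the neuron fired, depress the quiet ones.
        if out_spikes.any():
            mask = rng.random(w.shape) < p_update
            dw = np.outer(out_spikes,
                          np.where(pre_trace > 0.05, a_plus * pre_trace, -a_minus))
            w[:] = np.clip(w + mask * dw, 0.0, 1.0)
        return out_spikes

    # Example: drive the layer with sparse random input for a few steps.
    for _ in range(100):
        step((rng.random(n_in) < 0.05).astype(float))
    ```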

    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine-intelligent approach for heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (liquid states), selected using particle swarm optimization. Our approach differs from existing work by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking neuromorphic systems, offering high accuracy with a significantly lower energy footprint and thus an extended battery life for wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects from in-house clinical trials and public ECG databases is considered. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices. Comment: 51 pages, 12 figures, 6 tables, 95 references; under submission at Elsevier Neural Networks.
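
    For readers unfamiliar with the neuron model mentioned above, here is a minimal sketch of the standard Izhikevich model (the model CARLsim simulates), integrated with a simple Euler scheme. The regular-spiking parameters are the textbook values from Izhikevich (2003); the step current and time step are illustrative assumptions, not the paper's settings.

    ```python
    # Minimal sketch of the Izhikevich neuron model with Euler integration.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # standard "regular spiking" parameters
    dt, T = 0.5, 1000.0                  # time step and duration in ms (assumed)
    steps = int(T / dt)

    v, u = -65.0, b * -65.0              # membrane potential and recovery variable
    spike_times = []

    for k in range(steps):
        I = 10.0 if k * dt > 100.0 else 0.0          # step current after 100 ms (assumed)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                                 # spike: record and reset
            spike_times.append(k * dt)
            v, u = c, u + d

    print(f"{len(spike_times)} spikes in {T:.0f} ms")
    ```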

    Spike burst-pause dynamics of Purkinje cells regulate sensorimotor adaptation

    Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic firing, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje cell firing patterns in vestibulo-ocular reflex (VOR) adaptation. The model captures the cerebellar microcircuit properties and incorporates spike-based synaptic plasticity at multiple cerebellar sites. A detailed Purkinje cell model reproduces the three spike-firing patterns, which are shown to regulate the cerebellar output. Our results suggest that pauses following Purkinje complex spikes (bursts) encode transient disinhibition of target medial vestibular nuclei, critically gating the vestibular signals conveyed by mossy fibres. This gating mechanism accounts for early and coarse VOR acquisition, prior to the late reflex consolidation. In addition, properly timed and sized Purkinje cell bursts allow the ratio between long-term depression and potentiation (LTD/LTP) to be finely shaped at mossy fibre-medial vestibular nuclei synapses, which optimises VOR consolidation. Tonic Purkinje cell firing maintains the consolidated VOR through time. Importantly, pauses are crucial to facilitate VOR phase-reversal learning by reshaping previously learnt synaptic weight distributions. Altogether, these results predict that Purkinje spike burst-pause dynamics are instrumental to VOR learning and reversal adaptation. This work was supported by the European Union (www.europa.eu), Project SpikeControl 658479 (recipient NL); the Spanish Agencia Estatal de Investigación and European Regional Development Fund (www.ciencia.gob.es/portal/site/MICINN/aei), Project CEREBROT TIN2016-81041-R (recipient ER); and the French National Research Agency (www.agence-nationale-recherche.fr) and Essilor International (www.essilor.com), Chair SilverSight ANR-14-CHIN-0001 (recipient AA).
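
    To make the LTD/LTP balance referred to above concrete, here is a minimal sketch of a generic pair-based STDP window. The cerebellar model uses its own site-specific plasticity rules, so the amplitudes and time constants below are purely illustrative assumptions.

    ```python
    # Generic exponential STDP window: potentiation when the postsynaptic spike
    # follows the presynaptic spike, depression otherwise.
    import numpy as np

    A_plus, A_minus = 0.005, 0.006     # LTP / LTD amplitudes (assumed)
    tau_plus, tau_minus = 20.0, 20.0   # ms (assumed)

    def stdp_dw(delta_t):
        """Weight change for a spike pair; delta_t = t_post - t_pre in ms."""
        delta_t = np.asarray(delta_t, dtype=float)
        ltp = A_plus * np.exp(-delta_t / tau_plus)
        ltd = -A_minus * np.exp(delta_t / tau_minus)
        return np.where(delta_t >= 0.0, ltp, ltd)

    # Example: post 10 ms after pre -> LTP; post 10 ms before pre -> LTD.
    print(stdp_dw([10.0, -10.0]))
    ```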

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA
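
    As a concrete, deliberately simple example of the dynamical modeling of neural rhythms discussed above, here is a sketch of the classic FitzHugh-Nagumo model on a limit cycle. It is not a model analysed in the review itself; the drive I, time step, and threshold used to count cycles are illustrative assumptions.

    ```python
    # FitzHugh-Nagumo model with textbook parameters, integrated by Euler steps.
    import numpy as np

    a, b, tau, I = 0.7, 0.8, 12.5, 0.5   # I chosen (assumed) to give a limit cycle
    dt, steps = 0.1, 5000

    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + I      # fast, voltage-like variable
        dw = (v + a - b * w) / tau       # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v

    # Count upward threshold crossings as a crude measure of the rhythm.
    crossings = np.sum((trace[:-1] < 1.0) & (trace[1:] >= 1.0))
    print(f"{crossings} oscillation cycles in {steps * dt:.0f} time units")
    ```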

    Emulating short-term synaptic dynamics with memristive devices

    Neuromorphic architectures offer great promise for achieving computation capacities beyond conventional von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
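
    The role of rate-limiting volatility can be illustrated with a toy phenomenological model: a conductance-like state variable that is boosted by each voltage pulse and relaxes back between pulses, so only sufficiently fast pulse trains accumulate, as in short-term facilitation. This sketch is not a fit to the TiO2 devices reported here; the time constant, increment, and pulse trains are illustrative assumptions.

    ```python
    # Toy model of a volatile synaptic mimic: decay toward rest between pulses,
    # fixed conductance increment per pulse.
    import numpy as np

    tau_decay = 50.0    # ms, volatility time constant (assumed)
    delta_g = 0.2       # conductance increment per pulse, arbitrary units (assumed)
    g_rest = 1.0        # resting conductance (assumed)

    def response(pulse_times, t_end, dt=1.0):
        """Conductance trace for a train of pulses at the given times (ms)."""
        pulse_steps = set(np.round(np.asarray(pulse_times) / dt).astype(int))
        g, trace = g_rest, []
        for k in range(int(t_end / dt)):
            g = g_rest + (g - g_rest) * np.exp(-dt / tau_decay)   # relax toward rest
            if k in pulse_steps:
                g += delta_g                                       # pulse-driven boost
            trace.append(g)
        return np.array(trace)

    # High-rate pulses accumulate; low-rate pulses barely move the device.
    fast = response(np.arange(0, 200, 10.0), 300.0)
    slow = response(np.arange(0, 200, 100.0), 300.0)
    print(f"peak (fast train) = {fast.max():.2f}, peak (slow train) = {slow.max():.2f}")
    ```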