
    Deep Learning of Explainable EEG Patterns as Dynamic Spatiotemporal Clusters and Rules in a Brain-Inspired Spiking Neural Network.

    The paper proposes a new method for deep learning and knowledge discovery in a brain-inspired spiking neural network (SNN) architecture that enhances the model’s explainability while learning from streaming spatiotemporal brain data (STBD) in an incremental, online mode of operation. The method enables the extraction of spatiotemporal rules from SNN models that explain why a certain decision (output prediction) was made by the model. During learning, the SNN creates dynamic neural clusters, captured as polygons, which evolve in time and continuously change their size and shape. The dynamic patterns of the clusters are quantitatively analyzed to identify the important STBD features that correspond to the most activated brain regions. We studied the trends of the dynamically created clusters and their spike-driven events that occur together in specific space and time. The research contributes: (1) enhanced interpretability of SNN learning behavior through dynamic neural clustering; (2) feature selection and enhanced classification accuracy; (3) spatiotemporal rules to support model explainability; and (4) a better understanding of the dynamics in STBD in terms of feature interaction. The clustering method was applied to a case study of electroencephalogram (EEG) data recorded from a healthy control group (n = 21) and opiate-use subjects (n = 18) while they performed a cognitive task. The SNN models of EEG demonstrated different trends of dynamic clusters across the groups, which suggested a group of marker EEG features and improved EEG classification accuracy to 92% compared with all-feature classification. During learning of the EEG data, the areas of neurons in the SNN model that form adjacent clusters (corresponding to neighboring EEG channels) were detected as fuzzy boundaries that explain overlapping activity of brain regions in each group of subjects.

    A database and deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging

    Inference of action potentials (‘spikes’) from neuronal calcium signals is complicated by the scarcity of simultaneous measurements of action potentials and calcium signals (‘ground truth’). In this study, we compiled a large, diverse ground truth database from publicly available and newly performed recordings in zebrafish and mice covering a broad range of calcium indicators, cell types and signal-to-noise ratios, comprising a total of more than 35 recording hours from 298 neurons. We developed an algorithm for spike inference (termed CASCADE) that is based on supervised deep networks, takes advantage of the ground truth database, infers absolute spike rates and outperforms existing model-based algorithms. To optimize performance for unseen imaging data, CASCADE retrains itself by resampling ground truth data to match the respective sampling rate and noise level; therefore, no parameters need to be adjusted by the user. In addition, we developed systematic performance assessments for unseen data, openly released a resource toolbox and provided a user-friendly cloud-based implementation.
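
    The resampling idea described above can be illustrated with a minimal sketch: bring a ground-truth calcium trace onto the target dataset's sampling rate, then add just enough noise to match the target noise level. The function name, the linear-interpolation resampling, and the noise-estimation heuristic are all illustrative assumptions, not the actual CASCADE implementation.

```python
import numpy as np

def match_recording_conditions(trace, fs_ground_truth, fs_target,
                               target_noise_sd, rng=None):
    """Resample a ground-truth trace to a target sampling rate and top up
    its noise so the noise level matches the unseen imaging dataset.
    Hypothetical sketch; CASCADE's actual pipeline differs in detail."""
    rng = np.random.default_rng(rng)
    # Resample by linear interpolation onto the target time grid.
    duration = len(trace) / fs_ground_truth
    t_old = np.arange(len(trace)) / fs_ground_truth
    t_new = np.arange(int(duration * fs_target)) / fs_target
    resampled = np.interp(t_new, t_old, trace)
    # Rough estimate of noise already present (std of first differences),
    # then add only the extra variance needed to reach the target level.
    current_sd = np.std(np.diff(resampled)) / np.sqrt(2)
    extra_sd = np.sqrt(max(target_noise_sd**2 - current_sd**2, 0.0))
    return resampled + rng.normal(0.0, extra_sd, size=resampled.shape)
```

    A supervised network would then be retrained on many traces transformed this way, so the training distribution matches the conditions of the data to be inferred.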

    Making decisions based on context: models and applications in cognitive sciences and natural language processing

    It is known that humans are capable of making decisions based on context and of generalizing what they have learned. This dissertation considers two related problem areas and proposes models that take context information into account; by including context, the proposed models exhibit strong performance in each of the problem areas considered. The first problem area focuses on a context-association task studied in cognitive science, which evaluates the ability of a learning agent to associate specific stimuli with an appropriate response in particular spatial contexts. Four neural circuit models are proposed to model how stimulus and context information are processed to produce a response. The neural networks are trained by modifying the strength of neural connections (weights) using principles of Hebbian learning. Such learning is considered biologically plausible, in contrast to backpropagation techniques, which lack a solid neurophysiological basis. A series of theoretical results for the neural circuit models is established, guaranteeing convergence to an optimal configuration when all stimulus-context pairs are provided during training. Among all the models, a specific model based on ideas from recommender systems, trained with a primal-dual update rule, achieves perfect performance in learning and generalizing the mapping from context-stimulus pairs to correct responses. The second problem area focuses on clinical natural language processing (NLP), in particular the development of deep-learning models for analyzing radiology reports. Four NLP tasks are considered: anatomy named entity recognition, negation detection, incidental finding detection, and clinical concept extraction. A hierarchical recurrent neural network (RNN) is proposed for anatomy named entity recognition, which is then used to produce a set of features for incidental finding detection of pulmonary nodules. A clinical-context word embedding model is obtained and used with an RNN for clinical concept extraction. Finally, feature-enriched RNN and transformer-based models with contextual word embeddings are proposed for negation detection. All these models take (clinical) context information into account. The models are evaluated on different datasets and shown to achieve strong performance, largely outperforming the state of the art.
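
    The Hebbian principle the dissertation builds on (weights strengthened in proportion to input–output co-activity) can be sketched in a few lines. This toy associator with a winner-take-all readout is a hypothetical illustration of the general rule, not any of the four circuit models or the primal-dual variant described above.

```python
import numpy as np

def hebbian_train(stimuli, contexts, responses, epochs=20, lr=0.1):
    """Toy Hebbian associator for (stimulus, context) -> response pairs.
    Weights grow in proportion to input/target co-activity: dW_ij = lr * r_i * x_j."""
    X = np.hstack([stimuli, contexts])              # concatenated input patterns
    W = np.zeros((responses.shape[1], X.shape[1]))  # response x input weights
    for _ in range(epochs):
        for x, r in zip(X, responses):
            W += lr * np.outer(r, x)                # Hebb's rule
    return W

def predict(W, stimulus, context):
    """Winner-take-all readout: index of the most strongly driven output unit."""
    x = np.concatenate([stimulus, context])
    return int(np.argmax(W @ x))
```

    With one-hot stimulus and context codes, the learned weights recover the trained stimulus-context-to-response mapping; the convergence guarantees in the dissertation concern richer circuit models than this sketch.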

    Spiking neural models & machine learning for systems neuroscience: Learning, Cognition and Behavior.

    Learning, cognition and the ability to navigate, interact with and manipulate the world around us by performing appropriate behavior are hallmarks of artificial as well as biological intelligence. To understand how intelligent behavior can emerge from the computations of neural systems, this thesis suggests studying learning, cognition and behavior simultaneously to obtain an integrative understanding. This involves building detailed functional computational models of nervous systems that can cope with sensory processing, learning, memory and motor control to drive appropriate behavior. The work further considers how the biological computational substrate of neurons, dendrites and action potentials can be used as an alternative to current artificial systems to solve machine learning problems. It challenges the simplification of currently used rate-based artificial neurons, in which computational power is sacrificed for mathematical convenience and statistical learning. To this end, the thesis explores single spiking-neuron computations for cognition and machine learning problems, as well as detailed functional networks of such neurons that can solve a biologically relevant foraging task in flying insects. The obtained results and insights are new and relevant for machine learning, neuroscience and computational systems neuroscience. The thesis concludes with an outlook on how current machine learning methods can be applied to obtain a statistical understanding of larger-scale brain systems, in particular by investigating the functional role of the cerebellar-thalamo-cortical system for motor control in primates.
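
    The contrast drawn above between rate-based artificial neurons and spiking computation is easiest to see in the standard leaky integrate-and-fire (LIF) model: the membrane potential integrates input, leaks toward rest, and emits a discrete spike on crossing threshold. This is a textbook sketch with illustrative parameters, not the thesis's specific neuron models.

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (textbook form, parameters illustrative).
    Returns the list of time-step indices at which the neuron spiked."""
    v = v_rest
    spikes = []
    for i, I in enumerate(input_current):
        # Euler step of tau * dv/dt = -(v - v_rest) + I
        v += (-(v - v_rest) + I) * dt / tau
        if v >= v_thresh:
            spikes.append(i)   # record spike time
            v = v_reset        # reset membrane potential after firing
    return spikes
```

    Unlike a rate unit, the output here is a sequence of discrete events in time, which is what makes spike timing available as a computational resource.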

    The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces

    A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses and, when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review several of the key opportunities and challenges for calcium-based optical imaging in NHPs, focusing on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience.

    A Survey of Spiking Neural Network Accelerator on FPGA

    Owing to their ability to implement customized topologies, FPGAs are increasingly used to deploy SNNs in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recent widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementation. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. On this basis, we discuss the actual acceleration potential of implementing SNNs on FPGA, outline upcoming trends, and give a guideline for further advancement in related subjects.

    Neuromorphic computing using non-volatile memory

    Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and ‘Memcomputing’. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent-plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively-parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices – including phase change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs – have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
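
    The analog matrix–vector multiplication mentioned above follows directly from circuit laws: input voltages drive the rows, each cross-point contributes a current G_ij * V_j (Ohm's law), and currents sum along each column (Kirchhoff's current law). Because physical conductances are non-negative, signed weights are commonly encoded as differential conductance pairs. The sketch below is an idealized model that ignores device non-idealities such as nonlinearity and variability.

```python
import numpy as np

def crossbar_mvm(g_plus, g_minus, voltages):
    """Idealized NVM crossbar matrix-vector product.
    A signed weight matrix W is encoded as two non-negative conductance
    arrays with W = g_plus - g_minus; the output is the differential
    column current for each output line."""
    g_plus, g_minus = np.asarray(g_plus), np.asarray(g_minus)
    assert np.all(g_plus >= 0) and np.all(g_minus >= 0)  # physical conductances
    i_plus = g_plus @ voltages    # column currents of the positive array
    i_minus = g_minus @ voltages  # column currents of the negative array
    return i_plus - i_minus       # differential readout = W @ V
```

    In hardware, all column currents form in parallel in a single read step, which is the source of the power and speed advantage over digital multiply-accumulate loops.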