
    Neural activity classification with machine learning models trained on interspike interval series data

    The flow of information through the brain is reflected in the activity patterns of neural cells. Indeed, these firing patterns are widely used as input data to predictive models that relate stimuli and animal behavior to the activity of a population of neurons. However, relatively little attention has been paid to single-neuron spike trains as predictors of cell or network properties in the brain. In this work, we introduce an approach to neuronal spike train data mining that enables effective classification and clustering of neuron types and network activity states based on single-cell spiking patterns. This approach centers on applying state-of-the-art time series classification/clustering methods to sequences of interspike intervals recorded from single neurons. We demonstrate good performance of these methods in tasks involving classification of neuron type (e.g. excitatory vs. inhibitory cells) and/or neural circuit activity state (e.g. awake vs. REM sleep vs. non-REM sleep states) on an open-access cortical spiking activity dataset.
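    A minimal sketch of the general idea, not the authors' pipeline: interspike intervals are derived from single-neuron spike times and reduced to a few summary features for a generic classifier. The synthetic spike trains, placeholder labels, and feature set below are illustrative assumptions; the paper instead applies dedicated time series classification/clustering methods to the full ISI sequences.

    # Sketch under the assumptions stated above; data and labels are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def isi_features(spike_times):
        """Summary statistics of the interspike-interval (ISI) series of one neuron."""
        isi = np.diff(np.sort(spike_times))            # interspike intervals in seconds
        cv = isi.std() / isi.mean()                    # coefficient of variation
        return np.array([isi.mean(), isi.std(), cv,
                         np.median(isi), np.percentile(isi, 90)])

    rng = np.random.default_rng(0)
    # 60 synthetic spike trains (Poisson-like, varying rates) with placeholder class labels
    spike_trains = [np.cumsum(rng.exponential(scale=s, size=500))
                    for s in rng.uniform(0.01, 0.1, 60)]
    labels = np.arange(60) % 2                         # e.g. 0 = excitatory, 1 = inhibitory

    X = np.vstack([isi_features(st) for st in spike_trains])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())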

    Automated sleep state classification of wide-field calcium imaging data via multiplex visibility graphs and deep learning

    BACKGROUND: Wide-field calcium imaging (WFCI) allows for monitoring of cortex-wide neural dynamics in mice. When applied to the study of sleep, WFCI data are manually scored into the sleep states of wakefulness, non-REM (NREM) and REM with the aid of adjunct EEG and EMG recordings. However, this process is time-consuming and invasive and often suffers from low inter- and intra-rater reliability. Therefore, an automated sleep state classification method that operates on WFCI data alone is needed. NEW METHOD: A hybrid, two-step method is proposed. In the first step, spatial-temporal WFCI data are mapped to multiplex visibility graphs (MVGs). Subsequently, a two-dimensional convolutional neural network (2D CNN) is employed to classify the MVGs as wakefulness, NREM or REM. RESULTS: Sleep states were classified with an accuracy of 84% and a Cohen's κ of 0.67. The method was also effectively applied to a binary wakefulness/sleep classification (accuracy = 0.82, κ = 0.62) and a four-class wakefulness/sleep/anesthesia/movement classification (accuracy = 0.74, κ = 0.66). Gradient-weighted class activation maps revealed that the CNN focused on short- and long-term temporal connections of MVGs in a sleep state-specific manner. Sleep state classification performance was highest for the posterior area of the cortex among individual brain regions and when cortex-wide activity was considered. COMPARISON WITH EXISTING METHOD: On a 3-hour WFCI recording, the MVG-CNN achieved a κ of 0.65, comparable to the κ of 0.60 obtained by human EEG/EMG-based scoring. CONCLUSIONS: The hybrid MVG-CNN method accurately classifies sleep states from WFCI data and will enable future sleep-focused studies with WFCI.
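    The first stage of the method, sketched under assumptions of this note (a synthetic single-channel signal, plain NumPy): a natural visibility graph is built from a 1-D time series, and stacking such adjacency matrices across imaging channels yields the multiplex visibility graph "image" that the 2D CNN then classifies into wakefulness, NREM, or REM.

    # Natural visibility graph of one channel; not the paper's implementation.
    import numpy as np

    def visibility_adjacency(y):
        """Adjacency matrix of the natural visibility graph of a 1-D signal."""
        n = len(y)
        a = np.zeros((n, n), dtype=np.uint8)
        for i in range(n):
            for j in range(i + 1, n):
                k = np.arange(i + 1, j)
                # i and j are connected if no intermediate sample blocks the line between them
                if np.all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)):
                    a[i, j] = a[j, i] = 1
        return a

    # Synthetic calcium-like trace; real input would be one WFCI channel per graph layer
    signal = np.sin(np.linspace(0, 20, 100)) + 0.1 * np.random.default_rng(0).standard_normal(100)
    adj = visibility_adjacency(signal)
    print(adj.shape, int(adj.sum()))   # one adjacency "image" to stack per channel for the CNN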

    Applications of Silicon Retinas: from Neuroscience to Computer Vision

    Traditional visual sensor technology is firmly rooted in the concept of sequences of image frames. The sequence of stroboscopic images in these "frame cameras" is very different from the information running from the retina to the visual cortex. While conventional cameras have improved in the direction of smaller pixels and higher frame rates, the basics of image acquisition have remained the same. Event-based vision sensors were originally known as "silicon retinas" but are now widely called "event cameras." They are a new type of vision sensor that takes inspiration from the mechanisms developed by nature for the mammalian retina and suggests a different way of perceiving the world. As in the neural system, the sensed information is encoded in a train of spikes, or so-called events, comparable to the action potentials generated in the nerve. Event-based sensors produce sparse and asynchronous output that represents informative changes in the scene. These sensors have advantages in terms of fast response, low latency, high dynamic range, and sparse output. All these characteristics are appealing for computer vision and robotic applications, increasing the interest in this kind of sensor. However, since the sensor's output is very different, algorithms designed for frames need to be rethought and re-adapted. This thesis focuses on several applications of event cameras in scientific scenarios. It aims to identify where they can make a difference compared to frame cameras. The presented applications use the Dynamic Vision Sensor (DVS), the event camera developed by the Sensors Group of the Institute of Neuroinformatics, University of Zurich and ETH. To explore some applications in more extreme situations, the first chapters of the thesis focus on the characterization of several advanced versions of the standard DVS. Low light conditions represent a challenging situation for every vision sensor. Taking inspiration from standard Complementary Metal Oxide Semiconductor (CMOS) technology, DVS pixel performance in low light scenarios can be improved, increasing sensitivity and quantum efficiency, by using back-side illumination. This thesis characterizes the so-called Back Side Illumination DAVIS (BSI DAVIS) camera and shows results from its application in calcium imaging of neural activity. The BSI DAVIS has shown better performance in low light scenes due to its high Quantum Efficiency (QE) of 93% and proved to be the best-suited technology for microscopy applications. The BSI DAVIS allows detecting fast dynamic changes in neural fluorescence imaging using the green fluorescent calcium indicator GCaMP6f. Event camera advances have pushed the exploration of event-based cameras in computer vision tasks. Chapters of this thesis focus on two of the most active research areas in computer vision: human pose estimation and hand gesture classification. Both chapters report the datasets collected to achieve the task, fulfilling the continuous need for data for this kind of new technology. The Dynamic Vision Sensor Human Pose dataset (DHP19) is an extensive collection of 33 whole-body human actions from 17 subjects. The chapter presents the first benchmark neural network model for 3D pose estimation using DHP19. The network achieves a mean error of less than 8 mm in 3D space, which is comparable with frame-based Human Pose Estimation (HPE) methods.
    The gesture classification chapter reports an application running on a mobile device and explores future developments in the direction of embedded, portable, low-power devices for online processing. The sparse output from the sensor suggests using a small model with a reduced number of parameters and low power consumption. The thesis also describes pilot results from two other scientific imaging applications, raindrop size measurement and laser speckle analysis, presented in the appendices.
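    Frame-based models are often reused on event-camera output by first accumulating events into an image-like representation. The sketch below, with synthetic events and a DAVIS346-like resolution as assumptions (it is not code from the thesis), illustrates the sparse, asynchronous (timestamp, x, y, polarity) format the abstract describes and one simple event-count conversion.

    # Accumulate DVS events into a 2-channel count image (ON / OFF polarities).
    import numpy as np

    def events_to_frame(events, height, width):
        """Count events per pixel, separately for ON (+1) and OFF (-1) polarity."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        for t, x, y, p in events:
            frame[1 if p > 0 else 0, y, x] += 1.0
        return frame

    # Illustrative synthetic event stream: (timestamp_us, x, y, polarity)
    rng = np.random.default_rng(0)
    n = 1000
    events = list(zip(np.sort(rng.integers(0, 10_000, n)),
                      rng.integers(0, 346, n),        # assumed sensor width
                      rng.integers(0, 260, n),        # assumed sensor height
                      rng.choice([-1, 1], n)))

    frame = events_to_frame(events, height=260, width=346)
    print(frame.shape, frame.sum())    # dense input usable by frame-based CNNs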

    Deep Learning Model With Adaptive Regularization for EEG-Based Emotion Recognition Using Temporal and Frequency Features

    Since EEG signal acquisition is non-invasive and portable, it is convenient for use in many different applications. Emotion recognition based on a Brain-Computer Interface (BCI) is an important active BCI paradigm for recognizing the inner state of a person. There are extensive studies on emotion recognition, most of which rely heavily on staged, complex, handcrafted EEG feature extraction and classifier design. In this paper, we propose a hybrid multi-input deep model with convolutional neural networks (CNNs) and a bidirectional Long Short-Term Memory (Bi-LSTM) network. The CNNs extract time-invariant features from raw EEG data, and the Bi-LSTM allows long-range lateral interactions between features. First, we propose a novel hybrid multi-input deep learning approach for emotion recognition from raw EEG signals. Second, in the first layers, we use two CNNs with small and large filter sizes to extract temporal and frequency features from each 2-s, 62-channel raw EEG epoch and merge them with the differential entropy of the EEG frequency bands. Third, we apply an adaptive regularization method over each parallel CNN layer to account for the spatial information of the EEG acquisition electrodes. The proposed method is evaluated on two public datasets, SEED and DEAP. Our results show that our technique significantly improves accuracy compared with a baseline in which no adaptive regularization techniques are used.
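    A rough Keras sketch of the described multi-input architecture, not the paper's implementation: the sampling rate, filter counts, kernel widths, and number of differential-entropy bands are assumptions, and the adaptive regularization over the CNN layers is omitted. It only illustrates the parallel small/large-kernel CNNs, the merge with DE features, and the Bi-LSTM.

    # Illustrative architecture sketch (assumed shapes and layer sizes, no adaptive regularization).
    from tensorflow.keras import layers, Model

    T, C, DE_DIM = 400, 62, 62 * 5                  # 2-s epoch at an assumed 200 Hz, 62 channels, 5 DE bands

    raw = layers.Input(shape=(T, C), name="raw_eeg")
    de = layers.Input(shape=(DE_DIM,), name="differential_entropy")

    small = layers.Conv1D(32, kernel_size=8, padding="same", activation="relu")(raw)    # fine temporal detail
    large = layers.Conv1D(32, kernel_size=64, padding="same", activation="relu")(raw)   # slower rhythms
    merged = layers.Concatenate()([small, large])
    merged = layers.MaxPooling1D(pool_size=4)(merged)

    seq = layers.Bidirectional(layers.LSTM(64))(merged)      # long-range temporal interactions
    fused = layers.Concatenate()([seq, de])                   # add handcrafted DE band features
    out = layers.Dense(3, activation="softmax")(fused)        # e.g. negative / neutral / positive

    model = Model(inputs=[raw, de], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()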

    Brain-Computer Interfaces using Machine Learning

    This thesis explores machine learning models for the analysis and classification of electroencephalographic (EEG) signals used in Brain-Computer Interface (BCI) systems. The goal is 1) to develop a system that allows users to control home-automation devices using their mind, and 2) to investigate whether it is possible to achieve this using low-cost EEG equipment. The thesis includes both a theoretical and a practical part. In the theoretical part, we review the underlying principles of Brain-Computer Interface systems, as well as different approaches for the interpretation and classification of brain signals. We also discuss the emerging availability of low-cost EEG equipment on the market and its use beyond clinical research. We then dive into more technical details that involve signal processing and classification of EEG patterns using machine learning. The purpose of the practical part is to create a brain-computer interface that is able to control a smart home environment. As a first step, we investigate the generalizability of different classification methods, conducting a preliminary study on two public datasets of electroencephalographic data. The obtained classification accuracy on 9 different subjects was similar to, and in some cases superior to, the reported state of the art. Having achieved relatively good offline classification results during our study, we move on to the last part: designing and implementing an online BCI system in Python. Our system consists of three modules. The first module communicates with the MUSE (a low-cost EEG device) to acquire the EEG signals in real time, the second module processes those signals using machine learning techniques and trains a learning model, and the model is used by the third module, which controls cloud-based home-automation devices. Experiments using the MUSE resulted in significantly lower classification performance and revealed the limitations of the low-cost EEG signal acquisition device for online BCIs.
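    A minimal sketch of the second module only (signal classification); acquisition from the MUSE and the cloud device control are left out, and the sampling rate, frequency bands, and synthetic epochs are assumptions rather than the thesis setup. It illustrates one common pipeline: band-power features per channel followed by a linear classifier.

    # Band-power features + LDA on synthetic EEG epochs (placeholder for real MUSE data).
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    FS = 256                                    # assumed sampling rate
    BANDS = [(4, 8), (8, 13), (13, 30)]         # theta, alpha, beta

    def band_power_features(epochs):
        """Mean spectral power per channel and band for each (channels, samples) epoch."""
        feats = []
        for ep in epochs:
            f, psd = welch(ep, fs=FS, nperseg=FS, axis=-1)
            feats.append(np.concatenate([psd[:, (f >= lo) & (f < hi)].mean(axis=-1)
                                         for lo, hi in BANDS]))
        return np.array(feats)

    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((80, 4, 2 * FS))     # 80 placeholder 2-s epochs, 4 channels (MUSE-like)
    labels = rng.integers(0, 2, 80)                   # placeholder command labels

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    print("cross-validated accuracy:",
          cross_val_score(clf, band_power_features(epochs), labels, cv=5).mean())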