81 research outputs found

    Independent Component Analysis in a convoluted world


    Studying memory processes at different levels with simultaneous depth and surface EEG recordings

    Investigating cognitive brain functions using non-invasive electrophysiology can be challenging because of the particularities of task-related EEG activity, the depth of the activated brain areas, and the extent of the networks involved. Stereoelectroencephalographic (SEEG) investigations in patients with drug-resistant epilepsy offer an extraordinary opportunity to validate information derived from non-invasive recordings at macro scales. The SEEG approach can record brain activity with high spatial specificity during tasks that target specific cognitive processes (e.g., memory). Full validation is possible only with simultaneous scalp and SEEG recordings, which capture signals in exactly the same brain state. This is the approach we have taken in 12 subjects performing a visual memory task that requires the recognition of previously viewed objects. The intracranial signals on 965 contact pairs were compared to 391 simultaneously recorded scalp signals at the regional and whole-brain levels using multivariate pattern analysis. The results show that the task conditions are best captured by the intracranial sensors, despite the limited spatial coverage of the SEEG electrodes, compared with the whole-brain non-invasive recordings. Applying beamformer source reconstruction or independent component analysis does not improve multivariate task decoding performance from the surface sensor data. By analyzing a joint scalp and SEEG dataset, we investigated whether the two types of signals carry complementary information that might improve machine-learning classifier performance. This joint analysis revealed that the results are driven by the modality with the best individual performance, namely SEEG.
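    As an illustration of the multivariate pattern analysis the abstract describes, the sketch below decodes a binary task condition separately per modality and from the joint feature set, so per-modality accuracies can be compared. All data, channel counts as array shapes, and signal-to-noise levels are synthetic stand-ins, not the study's recordings.

```python
# Minimal MVPA sketch: per-modality and joint decoding of a task condition.
# Synthetic data only; the real study used 965 SEEG contact pairs and 391
# simultaneously recorded scalp channels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200
y = rng.integers(0, 2, n_trials)             # condition label per trial

def simulate(n_channels, snr):
    """Fake trial-by-channel features with a condition-dependent pattern."""
    pattern = rng.normal(size=n_channels)
    noise = rng.normal(size=(n_trials, n_channels))
    return noise + snr * np.outer(y - 0.5, pattern)

X_seeg = simulate(n_channels=965, snr=0.6)   # intracranial: assumed higher SNR
X_scalp = simulate(n_channels=391, snr=0.2)  # scalp: assumed lower SNR

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, X in [("SEEG", X_seeg), ("scalp", X_scalp),
                ("joint", np.hstack([X_seeg, X_scalp]))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:6s} decoding accuracy: {acc:.2f}")
```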

    Decomposition and classification of electroencephalography data


    Decoding Neural Signals with Computational Models: A Systematic Review of Invasive BMI

    There have been significant milestones in human civilization at which mankind stepped into a new level of life, with a new spectrum of possibilities and comfort. From fire-lighting technology and wheeled wagons to writing, electricity, and the Internet, each changed our lives dramatically. In this paper, we take a deep look at the invasive Brain Machine Interface (BMI), an ambitious and cutting-edge technology with the potential to become another important milestone in human civilization. Beyond its benefits for patients with severe medical conditions, invasive BMI technology can significantly impact other technologies and almost every aspect of human life. We review the biological and engineering concepts that underpin the implementation of BMI applications, together with the essential techniques needed to make invasive BMI applications a reality. We do so by providing an analysis of (i) possible applications of invasive BMI technology, (ii) methods and devices for detecting and decoding brain signals, and (iii) possible options for stimulating the human brain. Finally, we discuss the challenges and opportunities of invasive BMI for further development in this area. Comment: 51 pages, 14 figures, review article.
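    One family of decoding methods that such reviews cover is linear regression from binned spike counts to movement variables. The toy sketch below fits a ridge-regression decoder on simulated cosine-tuned neurons; the tuning model and all numbers are illustrative assumptions, not taken from the paper.

```python
# Toy linear BMI decoder: binned firing rates -> 2-D cursor velocity.
# Synthetic tuning model; purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_neurons = 2000, 60
velocity = rng.normal(size=(n_samples, 2))        # true 2-D hand velocity
tuning = rng.normal(size=(2, n_neurons))          # each neuron's direction tuning
rates = velocity @ tuning + rng.normal(scale=0.5, size=(n_samples, n_neurons))

X_tr, X_te, y_tr, y_te = train_test_split(rates, velocity, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", decoder.score(X_te, y_te))  # linear decodability
```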

    Neural network based image representation for small scale object recognition

    Object recognition can be abstractly viewed as a two-stage process. The feature learning stage selects key information that can represent the input image in a compact, robust, and discriminative manner in some feature space. The classification stage then learns the rules to differentiate object classes based on the representations of their images in that feature space. Consequently, if the first stage produces a highly separable feature set, simple and cost-effective classifiers can be used, making the recognition system more applicable in practice. Features, or representations, used to be engineered manually, with different assumptions about the data population keeping the complexity in a manageable range. As more practical problems are tackled, those assumptions are no longer valid, and neither are the representations built on them. More parameters and test cases have to be considered in these new challenges, which makes manual engineering too complicated. Machine learning approaches ease those difficulties by allowing the computer to learn the appropriate representation automatically. As the number of parameters increases with the diversity of the data, it is always beneficial to eliminate irrelevant information from the input to reduce the complexity of learning. Chapter 3 of the thesis reports a case study in which the removal of colour leads to an improvement in recognition accuracy.

    Deep learning has proven to be a very strong representation learner, with new achievements arriving on a monthly basis. While the training phase of deep structures requires huge amounts of data, tremendous computation, and careful calibration, the inference phase is affordable and straightforward. Utilizing the knowledge in trained deep networks is therefore promising for efficient feature extraction in smaller systems. Many approaches have been proposed under the name of "transfer learning" to take advantage of this "deep knowledge"; however, the results achieved so far leave room for improvement. Chapter 4 presents a new method that utilizes a trained deep convolutional structure as a feature extractor and achieves state-of-the-art accuracy on the Washington RGB-D dataset.

    Despite these good results, the potential of transfer learning is only barely exploited. On one hand, dimensionality reduction can make the deep neural network representation even more computationally efficient and enable a wider range of use cases. Inspired by the structure of the network itself, a new random orthogonal projection method for dimensionality reduction is presented in the first half of Chapter 5; a t-SNE-mimicking neural network for low-dimensional embedding is also discussed in this part, with promising results. In another approach, feature encoding can be used to improve deep neural network features for classification applications. Thanks to their spatially organized structure, deep neural network features can be considered local image descriptors, and thus traditional feature encoding approaches such as the Fisher vector can be applied to improve them. This method combines the advantages of both discriminative and generative learning to boost feature performance in difficult scenarios, such as when data is noisy or incomplete. The problem of high dimensionality in deep neural network features is alleviated with a Fisher vector based on sparse coding, where an infinite number of Gaussian mixture components is used to model the feature space. In the second half of Chapter 5, the regularized Fisher encoding is shown to be effective in improving classification results on difficult classes, and low-cost incremental k-means learning is shown to be a potential dictionary learning approach that can replace the slow and computationally expensive sparse coding method.
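    The general idea behind the random orthogonal projection mentioned above can be sketched as follows: a random Gaussian matrix is orthogonalized via QR so that the projection approximately preserves pairwise distances. The feature dimensions and matrix construction here are assumptions for illustration, not the exact method of the thesis.

```python
# Random orthogonal projection as cheap dimensionality reduction for
# deep-network features. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def random_orthogonal_projection(dim_in, dim_out):
    """Orthonormal columns via reduced QR of a random Gaussian matrix."""
    A = rng.normal(size=(dim_in, dim_out))
    Q, _ = np.linalg.qr(A)              # Q: (dim_in, dim_out)
    return Q

features = rng.normal(size=(1000, 4096))  # stand-in for deep CNN features
P = random_orthogonal_projection(4096, 256)
reduced = features @ P                     # 4096-D -> 256-D

# Orthogonality approximately preserves distances, so classifiers trained
# on `reduced` lose little accuracy at a fraction of the cost.
print(reduced.shape)
```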

    Hybrid Advanced Optimization Methods with Evolutionary Computation Techniques in Energy Forecasting

    More accurate and precise energy demand forecasts are required when energy decisions are made in a competitive environment. Particularly in the Big Data era, forecasting models are always based on complex combinations of functions, and energy data are complicated, exhibiting seasonality, cyclicity, fluctuation, dynamic nonlinearity, and so on. Lacking the ability to determine data characteristics and patterns, such forecasting models have resulted in an over-reliance on informal judgment and higher expenses. Hybridizing optimization methods with superior evolutionary algorithms can provide important improvements through good parameter determination in the optimization process, which is of great assistance to energy decision-makers. This book aimed to attract researchers interested in the research areas described above. Specifically, it sought contributions on the development of hybrid optimization methods (e.g., quadratic programming techniques, chaotic mapping, fuzzy inference theory, quantum computing, etc.) combined with advanced algorithms (e.g., genetic algorithms, ant colony optimization, particle swarm optimization, etc.) that outperform traditional optimization approaches by overcoming some of their embedded drawbacks, and on the application of these advanced hybrid approaches to significantly improve forecasting accuracy.
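    As a compact sketch of the hybrid idea, the example below uses a bare-bones particle swarm optimizer to tune a forecasting model's parameter, here the smoothing constant of simple exponential smoothing on a synthetic seasonal series. Both the model and the data are illustrative stand-ins, not from any chapter of the book.

```python
# PSO tuning the smoothing constant of exponential smoothing (illustrative).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 200)

def forecast_error(alpha):
    """One-step-ahead MSE of exponential smoothing with constant alpha."""
    level, err = series[0], 0.0
    for x in series[1:]:
        err += (x - level) ** 2
        level = alpha * x + (1 - alpha) * level
    return err / (len(series) - 1)

# Minimal particle swarm over alpha in (0, 1)
n_particles, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
pos = rng.uniform(0.01, 0.99, n_particles)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_f = np.array([forecast_error(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 0.99)
    f = np.array([forecast_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print(f"best alpha = {gbest:.3f}, MSE = {forecast_error(gbest):.4f}")
```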

    An overview of deep learning techniques for epileptic seizures detection and prediction based on neuroimaging modalities: Methods, challenges, and future works

    Epilepsy is a brain disorder characterized by frequent seizures, with symptoms that include confusion, abnormal staring, and rapid, sudden, and uncontrollable hand movements. Epileptic seizure detection methods involve neurological exams, blood tests, neuropsychological tests, and neuroimaging modalities; among these, neuroimaging modalities have received considerable attention from specialist physicians. One way to facilitate the accurate and fast diagnosis of epileptic seizures is to employ computer-aided diagnosis systems (CADS) based on deep learning (DL) and neuroimaging modalities. This paper presents a comprehensive overview of the DL methods employed for epileptic seizure detection and prediction using neuroimaging modalities. First, DL-based CADS for epileptic seizure detection and prediction are discussed, with descriptions of the various datasets, preprocessing algorithms, and DL models that have been used. Research on rehabilitation tools is then presented, covering brain-computer interfaces (BCI), cloud computing, the Internet of Things (IoT), hardware implementation of DL techniques on field-programmable gate arrays (FPGA), and so on. The discussion section compares research on epileptic seizure detection with research on prediction, and describes the challenges of detection and prediction using neuroimaging modalities and DL models. In addition, possible directions for future work in this field are proposed, specifically for addressing challenges in datasets, DL, rehabilitation, and hardware models. The final section is dedicated to the conclusion, which summarizes the significant findings of the paper.
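    A toy example of the kind of DL-based detector such surveys cover is a small 1-D CNN that classifies fixed-length multichannel EEG windows as seizure versus non-seizure. The architecture, channel count, and window length below are illustrative assumptions only.

```python
# Small 1-D CNN for EEG window classification (seizure vs. non-seizure).
# Architecture and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = SeizureCNN()
windows = torch.randn(8, 19, 512)           # e.g., 2 s windows at 256 Hz
print(model(windows).shape)                  # -> torch.Size([8, 2])
```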

    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. We present a progressive multiple-target detection approach that estimates the number of targets sequentially and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimated results from different clusters are fused. Experimental results show that the distributed scheme with Bayesian fusion performs better, in the sense that it has the highest detection probability and the most stable performance. In addition, progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared with the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement in classification accuracy across the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. We present a collaborative method to solve this problem among multiple sensors. When applied to the military-vehicle dataset collected in a field demo, about 80% of unknown target samples can be recognized correctly, while the known-target classification accuracy stays above 95%.
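    The Bayesian fusion step mentioned above can be illustrated schematically: each cluster produces a posterior over the number of targets, and under conditional independence of the clusters' observations, fusion multiplies the per-cluster likelihoods and renormalizes. The numbers below are made up for illustration, not taken from the dissertation's experiments.

```python
# Bayesian fusion of per-cluster posteriors over the number of targets.
# Fabricated example numbers, for illustration only.
import numpy as np

# P(n_targets = 0..3) estimated independently by three clusters
cluster_posteriors = np.array([
    [0.10, 0.60, 0.25, 0.05],
    [0.05, 0.50, 0.35, 0.10],
    [0.15, 0.55, 0.20, 0.10],
])
prior = np.full(4, 0.25)   # uniform prior over target counts

# Each posterior already includes the prior once; divide it out of all
# but one factor so it is not double-counted in the product.
fused = cluster_posteriors.prod(axis=0) / prior ** (len(cluster_posteriors) - 1)
fused /= fused.sum()
print("fused P(n targets):", np.round(fused, 3))
print("MAP estimate:", fused.argmax(), "targets")
```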

    Impact of Machine Learning Pipeline Choices in Autism Prediction from Functional Connectivity Data

    Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental condition with a significant social and economic impact that affects the entire life of families. There is an intense search for biomarkers that can be assessed as early as possible in order to initiate treatment and prepare the family to deal with the challenges imposed by the condition. Brain imaging biomarkers are of special interest. Specifically, functional connectivity data extracted from resting-state functional magnetic resonance imaging (rs-fMRI) should allow the detection of brain connectivity alterations. Machine learning pipelines encompass the estimation of the functional connectivity matrix from brain parcellations, feature extraction, and the building of classification models for ASD prediction. The works reported in the literature are very heterogeneous from a computational and methodological point of view. In this paper, we carry out a comprehensive computational exploration of the impact of the choices involved in building these machine learning pipelines. Specifically, we consider six brain parcellation definitions, five methods for functional connectivity matrix construction, six feature extraction/selection approaches, and nine classifier-building algorithms. We report the sensitivity of prediction performance to each of these choices, as well as best results that are comparable with the state of the art. This work has been partially supported by the FEDER funds through MINECO project TIN2017-85827-P. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 77772.
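    The pipeline stages the paper benchmarks can be sketched as a single chained configuration: correlation-based connectivity from parcellated time series, upper-triangle vectorization, univariate feature selection, and a linear classifier. The data below are random stand-ins for parcellated rs-fMRI time series, and the specific parameter values are assumptions for illustration.

```python
# One example configuration of a connectivity-based ASD prediction pipeline.
# Synthetic data; illustrative parameter choices.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_subjects, n_rois, n_timepoints = 100, 116, 150
labels = rng.integers(0, 2, n_subjects)       # ASD vs. control (synthetic)

def connectivity_features(ts):
    """Vectorize the upper triangle of the ROI-by-ROI correlation matrix."""
    c = np.corrcoef(ts.T)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

X = np.stack([
    connectivity_features(rng.normal(size=(n_timepoints, n_rois)))
    for _ in range(n_subjects)
])

pipe = make_pipeline(SelectKBest(f_classif, k=500), LinearSVC(max_iter=5000))
print("CV accuracy:", cross_val_score(pipe, X, labels, cv=5).mean())
```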

    A latent variable modeling framework for analyzing neural population activity

    Neuroscience is entering the age of big data, thanks to technological advances in electrical and optical recording techniques. Where historically neuroscientists could record activity from only single neurons at a time, recent advances allow the activity of many neurons to be measured simultaneously. In fact, this advancement follows a Moore's-Law-style trend, in which the number of simultaneously recorded neurons more than doubles every seven years, and it is now common to see simultaneous recordings from hundreds and even thousands of neurons. The consequences of this data revolution for our understanding of brain structure and function cannot be overstated. Not only is there an opportunity to address old questions in new ways, but more importantly these experimental techniques will allow neuroscientists to address entirely new questions. Addressing these questions successfully, however, requires the development of a wide range of new data analysis tools. Many of these tools will draw on recent advances in machine learning and statistics, and in particular there has been a push to develop methods that can accurately model the statistical structure of high-dimensional neural activity. In this dissertation I develop a latent variable modeling framework for analyzing such high-dimensional neural data. First, I demonstrate how this framework can be used in an unsupervised fashion as an exploratory tool for large datasets. Next, I extend the framework to incorporate nonlinearities in two distinct ways and show that the resulting models far outperform standard linear models at capturing the structure of neural activity. Finally, I use this framework to develop a new algorithm for decoding neural activity, and use it as a tool to address questions about how information is represented in populations of neurons.
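    A minimal instance of the latent variable modeling idea is to fit a linear factor-analysis model to simulated population activity and check how well a few latent dimensions are recovered. The dissertation's models are richer (nonlinear, decoding-oriented); this sketch is only the linear baseline, with made-up dimensions.

```python
# Linear latent variable baseline: factor analysis on simulated population
# activity, with recovery of the low-dimensional latents checked by correlation.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_timebins, n_neurons, n_latents = 1000, 120, 3

latents = rng.normal(size=(n_timebins, n_latents))   # shared low-D drive
loading = rng.normal(size=(n_latents, n_neurons))
activity = latents @ loading + rng.normal(scale=1.0, size=(n_timebins, n_neurons))

fa = FactorAnalysis(n_components=n_latents).fit(activity)
recovered = fa.transform(activity)

# Correlate each true latent with its best-matching recovered factor
corr = np.corrcoef(latents.T, recovered.T)[:n_latents, n_latents:]
print("best |corr| per true latent:", np.abs(corr).max(axis=1).round(2))
```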