
    Time-varying effective EEG source connectivity: the optimization of model parameters

    Adaptive estimation methods based on the general Kalman filter are powerful tools for investigating brain network dynamics, given the non-stationary nature of neural signals. These methods rely on two parameters, the model order p and the adaptation constant c, which determine the resolution and smoothness of the time-varying multivariate autoregressive estimates. Sub-optimal filtering may introduce consistent biases in the frequency domain and temporal distortions, leading to fallacious interpretations. The performance of these methods therefore depends heavily on the accurate choice of these two parameters in the filter design. In this work, we sought to define an objective criterion for the optimal choice of these parameters. Since residual- and information-based criteria are not guaranteed to reach an absolute minimum, we propose to study the partial derivatives of these functions to guide the choice of p and c. To validate the performance of our method, we used a dataset of human visual evoked potentials during face perception, where the generation and propagation of information in the brain is well understood, and a set of simulated data where the ground truth is available.
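The trade-off between the adaptation constant c and estimate smoothness can be seen in a minimal univariate sketch (the paper treats multivariate models; the jump scenario and parameter values below are illustrative assumptions, not taken from the study). The AR coefficients are tracked by a Kalman filter in which c scales the random-walk process noise: a larger c follows changes faster but yields noisier estimates.

```python
import numpy as np

def adaptive_ar_kalman(y, p, c, r=1.0):
    """Track time-varying AR(p) coefficients with a random-walk Kalman filter.

    p : model order (state dimension)
    c : adaptation constant (process-noise scale); larger c adapts faster
        but produces noisier, less smooth coefficient estimates.
    """
    n = len(y)
    a = np.zeros(p)              # current coefficient estimate
    P = np.eye(p)                # coefficient covariance
    A = np.zeros((n, p))         # coefficient trajectory
    for t in range(p, n):
        x = y[t - p:t][::-1]     # regressor: the p most recent samples
        P = P + c * np.eye(p)    # prediction: coefficients follow a random walk
        e = y[t] - x @ a         # innovation
        S = x @ P @ x + r        # innovation variance
        K = P @ x / S            # Kalman gain
        a = a + K * e            # measurement update
        P = P - np.outer(K, x @ P)
        A[t] = a
    return A

# Toy example: an AR(1) process whose coefficient jumps from 0.5 to -0.5.
rng = np.random.default_rng(0)
n = 4000
y = np.zeros(n)
for t in range(1, n):
    coef = 0.5 if t < n // 2 else -0.5
    y[t] = coef * y[t - 1] + rng.standard_normal()

A = adaptive_ar_kalman(y, p=1, c=1e-3)
```

Re-running with a larger or smaller c shows the resolution/smoothness trade-off the abstract describes.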

    Brain Model State Space Reconstruction Using an LSTM Neural Network

    Objective: Kalman filtering has previously been applied to track neural model states and parameters, particularly at the scale relevant to EEG. However, this approach lacks a reliable method to determine the initial filter conditions and assumes that the distribution of states remains Gaussian. This study presents an alternative, data-driven method to track the states and parameters of neural mass models (NMMs) from EEG recordings using deep learning techniques, specifically an LSTM neural network. Approach: An LSTM filter was trained on simulated EEG data generated by a neural mass model using a wide range of parameters. With an appropriately customised loss function, the LSTM filter can learn the behaviour of NMMs. As a result, it can output the state vector and parameters of NMMs given observation data as the input. Main results: Test results using simulated data yielded correlations with R-squared of around 0.99, and verified that the method is robust to noise and can be more accurate than a nonlinear Kalman filter when the initial conditions of the Kalman filter are not accurate. As an example of real-world application, the LSTM filter was also applied to real EEG data that included epileptic seizures, and revealed changes in connectivity strength parameters at the beginnings of seizures. Significance: Tracking the state vector and parameters of mathematical brain models is of great importance in the areas of brain modelling, monitoring, imaging and control. This approach has no need to specify the initial state vector and parameters, which is very difficult to do in practice because many of the variables being estimated cannot be measured directly in physiological experiments. This method may be applied using any neural mass model and, therefore, provides a general, novel, efficient approach to estimating brain model variables that are often difficult to measure.
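The baseline the study compares against, a nonlinear Kalman filter whose accuracy depends on its initial conditions, can be sketched on a toy system (not a neural mass model; the dynamics, noise levels and names below are illustrative assumptions):

```python
import numpy as np

def ekf_tanh(y, x0, q=0.05, r=0.5, a=2.0):
    """Extended Kalman filter for x[t+1] = a*tanh(x[t]) + w, y[t] = x[t] + v.

    x0 is the initial state guess; a poor choice degrades the early
    estimates, which is the initialization sensitivity the abstract notes.
    """
    xhat, P = x0, 1.0
    out = np.zeros(len(y))
    for t, yt in enumerate(y):
        # measurement update (linear observation model)
        K = P / (P + r)
        xhat = xhat + K * (yt - xhat)
        P = (1 - K) * P
        out[t] = xhat
        # time update, linearizing with the Jacobian a*(1 - tanh(x)^2)
        F = a * (1 - np.tanh(xhat) ** 2)
        xhat = a * np.tanh(xhat)
        P = F * P * F + q
    return out

# simulate the toy system and filter it
rng = np.random.default_rng(1)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 2.0 * np.tanh(x[t - 1]) + np.sqrt(0.05) * rng.standard_normal()
y = x + np.sqrt(0.5) * rng.standard_normal(n)

xhat = ekf_tanh(y, x0=0.0)
mse_filter = np.mean((xhat - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
```

Re-running with a badly chosen x0 slows early convergence; the LSTM filter sidesteps this by learning a direct mapping from observations to states.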

    Dynamic causal modelling of fluctuating connectivity in resting-state EEG

    Functional and effective connectivity are known to change systematically over time. These changes might be explained by several factors, including intrinsic fluctuations in activity-dependent neuronal coupling and contextual factors, like experimental condition and time. Furthermore, contextual effects may be subject-specific or conserved over subjects. To characterize fluctuations in effective connectivity, we used dynamic causal modelling (DCM) of cross-spectral responses over 1 min of electroencephalogram (EEG) recordings during rest, divided into 1-s windows. We focused on two intrinsic networks: the default mode network and the saliency network. DCM was applied to estimate connectivity in each time window for both networks. Fluctuations in DCM connectivity parameters were assessed using hierarchical parametric empirical Bayes (PEB). Within-subject, between-window effects were modelled with a second-level linear model with temporal basis functions as regressors. This procedure was conducted for every subject separately. Bayesian model reduction was then used to assess which (combination of) temporal basis functions best explains dynamic connectivity over windows. A third (between-subject) level model was used to infer which dynamic connectivity parameters are conserved over subjects. Our results indicate that connectivity fluctuations in the default mode network, and to a lesser extent the saliency network, comprised both subject-specific components and a common component. For both networks, connections to higher-order regions appear to increase monotonically during the 1-min period. These results not only establish the predictive validity of dynamic connectivity estimates - in virtue of detecting systematic changes over subjects - they also suggest a network-specific dissociation in the relative contribution of fluctuations in connectivity that depend upon experimental context. We envisage these procedures could be useful for characterizing brain state transitions that may be explained by their cognitive or neuropathological underpinnings.
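The second-level, within-subject model can be sketched as an ordinary least-squares fit of per-window connectivity estimates onto a temporal basis set (a cosine set is used here as one plausible choice; the window count and values are synthetic, not from the study):

```python
import numpy as np

def dct_basis(n_windows, n_funcs):
    """Discrete-cosine temporal basis set: a constant term plus slow
    cosines, one regressor per column."""
    t = np.arange(n_windows)
    B = np.ones((n_windows, n_funcs))
    for k in range(1, n_funcs):
        B[:, k] = np.cos(np.pi * k * (t + 0.5) / n_windows)
    return B

# Hypothetical per-window connectivity estimates for one subject: a slow
# monotonic increase plus estimation noise, mimicking the reported rise in
# connections to higher-order regions over the 1-min period.
rng = np.random.default_rng(2)
n_windows = 60                              # 1-s windows over 1 min
trend = np.linspace(0.2, 0.6, n_windows)    # ground-truth drift
theta = trend + 0.05 * rng.standard_normal(n_windows)

B = dct_basis(n_windows, n_funcs=4)
beta, *_ = np.linalg.lstsq(B, theta, rcond=None)
fitted = B @ beta
```

In the paper this regression is embedded in PEB, so the basis coefficients also carry posterior uncertainty; the least-squares fit is only the point-estimate skeleton of that idea.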

    Modeling and Simulation Methods of Neuronal Populations and Neuronal Networks

    This thesis presents numerical methods and modeling related to simulating neurons. Two approaches to the simulation are taken: a population density approach and a neuronal network approach. The first two chapters present the results from the population density approach and its applications. The population density approach assumes that each neuron can be identified by its states (e.g., membrane potential, conductance of ion channels). Additionally, it assumes the population is large enough that it can be approximated by a continuous population density distribution in the state space. By updating this population density, we can learn the macroscopic behavior of the population, such as the average firing rate and average membrane potential. The population density approach avoids the need to simulate every single neuron when the population is large. While many previous population-density methods, such as the mean-field method, make further simplifications to the models, we developed the Asymmetric Particle Population Density (APPD) method to simulate the population density directly without the need to simplify the dynamics of the model. This enables us to simulate the macroscopic properties of coupled neuronal populations as accurately as a direct simulation. The APPD method tracks multiple asymmetric Gaussians as they advance in time due to a convection-diffusion equation, and our main theoretical innovation is deriving this update algorithm by tracking a level set. Tracking a single Gaussian is also applicable to Bayesian filtering for continuous-discrete systems. By adding a measurement-update step, we reformulated our tracking method as the Level Set Kalman Filter (LSKF) method and found that it offers greater accuracy than state-of-the-art methods. Chapter IV presents the methods for direct simulation of a neuronal network. For this approach, the aim is to build a high-performance and expandable framework that can be used to simulate various neuronal networks. The implementation is done on GPUs using CUDA, and this framework enables simulation of millions of neurons on a high-performance desktop computer. Additionally, real-time visualization of neuron activities is implemented. Paired with the simulation framework, a detailed mouse cortex model with experiment-determined morphology using the CUBIC-Atlas and neuron connectome information from the Allen brain atlas is generated.
    PhD thesis, Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169840/1/nywang_1.pd
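The core idea of tracking a Gaussian through a convection-diffusion equation can be sketched for the linear case, where the density stays exactly Gaussian and only its mean and covariance evolve (the dynamics matrix and noise level below are illustrative assumptions, not the thesis's model, and this is a plain Euler propagation, not the level-set derivation):

```python
import numpy as np

def gaussian_predict(m, P, A, Q, dt, n_steps):
    """Propagate a Gaussian N(m, P) through the linear SDE dx = A x dt + dW,
    cov(dW) = Q dt, with Euler steps: the density remains Gaussian, so only
    the mean and covariance need updating -- the reason tracking a Gaussian
    component is so much cheaper than evolving the full density."""
    F = np.eye(len(m)) + A * dt          # one-step Euler transition matrix
    for _ in range(n_steps):
        m = F @ m                        # convection moves the mean
        P = F @ P @ F.T + Q * dt         # diffusion inflates the covariance
    return m, P

# a damped, rotating 2-D system standing in for a stable population
A = np.array([[-1.0, 0.5],
              [-0.5, -1.0]])
Q = 0.1 * np.eye(2)
m, P = gaussian_predict(np.array([1.0, 0.0]), 0.01 * np.eye(2),
                        A, Q, dt=0.01, n_steps=1000)
```

Adding a measurement update on top of this prediction step is exactly the move that turns a density tracker into a continuous-discrete filter, as the thesis does with the LSKF.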

    Epileptic focus localization using functional brain connectivity


    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Though recent developments in machine learning and deep learning have enabled remarkable improvements in processing large scale datasets in many application domains, most are designed and regulated to handle inputs that are static in time. Many real-world data, such as in biomedical, surveillance and security, financial, manufacturing and engineering applications, are rarely static in time, and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimensionality of time. Specifically, the biologically inspired learning based models known as artificial neural networks that have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome in the presence of large scale multi-dimensional time series biomedical data such as EEG. Consequently, this work aims to develop representative ML and DL models for robust and efficient large scale time series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. With the use of a sophisticated yet computationally efficient time-frequency analysis technique known as harmonic wavelet packet transform and an efficient self-similarity computation based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate the development of a novel efficient deep recurrent learning model for large scale time series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as cellular simultaneous recurrent neural network (CSRN). 
We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture based on the biologically inspired distributed processing used in CSRN for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large-scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
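The weight sharing that keeps the number of trainable recurrent units small can be illustrated with a minimal shared-weight recurrent cell applied independently to every channel (a plain tanh RNN stand-in, not the actual DCRNN; shapes and names are assumptions):

```python
import numpy as np

def shared_cell_forward(X, Wx, Wh, b):
    """Run one shared-weight recurrent cell independently over every channel.

    X : (channels, time) multi-channel series, e.g. scalp-EEG electrodes.
    The same (Wx, Wh, b) are reused for all channels -- the weight sharing
    that keeps the parameter count independent of the channel count.
    Returns the final hidden state per channel, shape (channels, hidden).
    """
    channels, T = X.shape
    hidden = Wh.shape[0]
    H = np.zeros((channels, hidden))
    for t in range(T):
        # vanilla tanh RNN update, vectorized across channels
        H = np.tanh(X[:, t:t + 1] @ Wx + H @ Wh + b)
    return H

rng = np.random.default_rng(3)
X = rng.standard_normal((22, 256))      # 22 channels, 256 samples
Wx = rng.standard_normal((1, 8)) * 0.5  # shared input weights
Wh = rng.standard_normal((8, 8)) * 0.1  # shared recurrent weights
b = np.zeros(8)
H = shared_cell_forward(X, Wx, Wh, b)
```

Doubling the channel count here adds no parameters at all, only more rows in X, which is the efficiency argument the abstract makes.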

    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. Design of the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training of mobile robot operators is also a very difficult task, because of several factors related to different task execution. The presented improvement is related to environment model generation based on autonomous mobile robot observations.

    Minimum-entropy causal inference and its application in brain network analysis

    Identification of the causal relationship between multivariate time series is a ubiquitous problem in data science. The Granger causality measure (GCM) and conditional Granger causality measure (cGCM) are widely used statistical methods for causal inference and effective connectivity analysis in neuroimaging research. Both GCM and cGCM have frequency-domain formulations that were developed based on a heuristic algorithm for matrix decompositions. The goal of this work is to generalize the GCM and cGCM measures and their frequency-domain formulations using a theoretical framework for minimum entropy (ME) estimation. The proposed ME-estimation method extends the classical theory of minimum mean squared error (MMSE) estimation for stochastic processes. It provides three formulations of cGCM that include Geweke's original time-domain cGCM as a special case, but all three frequency-domain formulations of cGCM differ from previous methods. Experimental results based on simulations show that one of the proposed frequency-domain cGCM formulations has enhanced sensitivity and specificity in detecting network connections compared to other methods. In an example based on in vivo functional magnetic resonance imaging, the proposed frequency-domain cGCM measure significantly enhances the consistency between the structural and effective connectivity of human brain networks.
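The classical time-domain Granger causality measure that this work generalizes can be sketched as a log ratio of residual variances from nested autoregressions (Geweke's formulation; the simulated coupling below is illustrative, not from the paper):

```python
import numpy as np

def residual_var(target, sources, p=2):
    """Residual variance of an order-p autoregression predicting `target`
    from the past of every series in `sources` (which includes target)."""
    n = len(target)
    # lag-k values of each source, aligned with target[p:]
    cols = [s[p - k - 1:n - k - 1] for s in sources for k in range(p)]
    X = np.column_stack(cols)
    yv = target[p:]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return np.var(yv - X @ beta)

def granger(x_from, x_to, p=2):
    """Time-domain GCM: log ratio of restricted (own past only) to full
    (own past plus source past) residual variance."""
    full = residual_var(x_to, [x_to, x_from], p)
    restricted = residual_var(x_to, [x_to], p)
    return np.log(restricted / full)

# simulate a bivariate VAR(1) with coupling x -> y only
rng = np.random.default_rng(4)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()

gc_xy = granger(x, y)   # clearly positive: x helps predict y
gc_yx = granger(y, x)   # near zero: y does not help predict x
```

The ME framework of the paper replaces the MMSE prediction underlying these residual variances with minimum-entropy estimation; this sketch shows only the classical special case.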

    Bayesian inference for stable differential equation models with applications in computational neuroscience

    Inference for mechanistic models is challenging because of nonlinear interactions between model parameters and a lack of identifiability. Here we focus on a specific class of mechanistic models, which we term stable differential equations. The dynamics in these models are approximately linear around a stable fixed point of the system. We exploit this property to develop fast approximate methods for posterior inference. We first illustrate our approach using simulated EEG data on the Liley et al. model, a mechanistic neural population model. Then we apply our methods to experimental EEG data from rats to estimate how parameters in the Liley et al. model vary with the level of isoflurane anaesthesia. More generally, stable differential equation models and the corresponding inference methods are useful for the analysis of stationary time-series data. Compared to the existing state-of-the-art, our methods are several orders of magnitude faster, and are particularly suited to the analysis of long time series (>10,000 time-points) and models of moderate dimension (10-50 state variables and 10-50 parameters).
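Approximate linearity around a stable fixed point is what makes fast inference possible: the linearized model is a stable linear SDE whose stationary distribution is Gaussian, with covariance given by a Lyapunov equation. A minimal sketch (the matrices below are toy values, not Liley-model parameters):

```python
import numpy as np

def stationary_cov(A, Q):
    """Solve the Lyapunov equation A P + P A^T + Q = 0 for the stationary
    covariance of the stable linear SDE dx = A x dt + dW, cov(dW) = Q dt,
    via the Kronecker-product (vectorized) form of the equation."""
    d = A.shape[0]
    M = np.kron(np.eye(d), A) + np.kron(A, np.eye(d))
    return np.linalg.solve(M, -Q.flatten()).reshape(d, d)

# hypothetical linearized dynamics: all eigenvalues in the left half-plane
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
Q = np.eye(2)
P = stationary_cov(A, Q)
```

Once P is available, the likelihood of a long stationary series reduces to Gaussian computations, which is why these methods scale to >10,000 time points.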