
    A statistical approach to the inverse problem in magnetoencephalography

    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed: there are an infinite number of mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by considering time-varying locations for the sources and their electrical moments, and we model their time evolution using a state space model. Based on our predictive model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time, rather than fitting fixed source parameters. Our new model is more realistic than common models and allows us to estimate the variation of the strength, orientation and position of the sources. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, our new methods work in this situation without needing to tune a high-dimensional transition kernel, which has a very high cost. The dimensionality of the unknown parameters is extremely large and the size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed up the computation.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/, http://dx.doi.org/10.1214/14-AOAS716) by the Institute of Mathematical Statistics (http://www.imstat.org).
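The sequential importance sampling idea behind the paper's Monte Carlo methods can be illustrated on a toy one-dimensional state-space model; the linear-Gaussian model, noise levels and particle count below are illustrative choices, not the paper's MEG formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear-Gaussian state-space model: x_t = a x_{t-1} + w_t, y_t = x_t + v_t.
a, q, r, T, N = 0.9, 0.1, 0.5, 50, 1000

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Bootstrap particle filter: sequential importance sampling with resampling.
particles = rng.normal(0, 1, N)
estimates = np.zeros(T)
for t in range(T):
    particles = a * particles + rng.normal(0, np.sqrt(q), N)  # propagate
    logw = -0.5 * (y[t] - particles) ** 2 / r                 # likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                      # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)            # resample

err_filter = np.mean((estimates - x) ** 2)
err_raw = np.mean((y - x) ** 2)
```

No high-dimensional transition kernel needs to be tuned here: the weights do all the work, which is the advantage over MCMC that the abstract highlights. The filtered estimate tracks the latent state more closely than the raw observations.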

    New Methods for Network Traffic Anomaly Detection

    In this thesis we examine the efficacy of applying outlier detection techniques to understand the behaviour of anomalies in communication network traffic. We have identified several shortcomings. Our most important finding is that known techniques focus on characterizing either the spatial or the temporal behaviour of traffic, but rarely both. For example, DoS attacks are anomalies which violate temporal patterns, while port scans violate the spatial equilibrium of network traffic. To address this observed weakness we have designed a new method for outlier detection based on spectral decomposition of the Hankel matrix. The Hankel matrix is a spatio-temporal correlation matrix and has been used in many other domains, including climate data analysis and econometrics. Using our approach we can seamlessly integrate the discovery of both spatial and temporal anomalies. Comparison with other state-of-the-art methods in the networks community confirms that our approach can discover both DoS and port scan attacks. The spectral decomposition of the Hankel matrix is closely tied to the problem of inference in Linear Dynamical Systems (LDS). We introduce a new problem, the Online Selective Anomaly Detection (OSAD) problem, to model the situation where the objective is to report new anomalies in the system and suppress known faults. For example, in the network setting an operator may be interested in triggering an alarm for malicious attacks but not for faults caused by equipment failure. In order to solve OSAD we combine techniques from machine learning and control theory in a unique fashion. Machine learning ideas are used to learn the parameters of an underlying data-generating system. Control theory techniques are used to model the feedback and modify the residual generated by the data-generating state model.
Experiments on synthetic and real data sets confirm that the OSAD problem captures a general scenario and tightly integrates machine learning and control theory to solve a practical problem.
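The Hankel-matrix construction can be sketched as follows; the window length, the synthetic traffic trace and the residual-based score are illustrative stand-ins for the thesis's actual spatio-temporal setup:

```python
import numpy as np

def hankel_matrix(series, window):
    """Stack lagged windows of a time series into a Hankel matrix."""
    n = len(series) - window + 1
    return np.array([series[i:i + window] for i in range(n)])

rng = np.random.default_rng(1)
traffic = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.normal(size=500)
traffic[300] += 5.0  # inject a spike anomaly (a DoS-like burst)

H = hankel_matrix(traffic, window=20)
# Low-rank reconstruction via truncated SVD; residual energy flags anomalies.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 2
H_lowrank = (U[:, :k] * s[:k]) @ Vt[:k]
residual = np.linalg.norm(H - H_lowrank, axis=1)

anomalous_row = int(np.argmax(residual))  # rows 281-300 contain the spike
```

The periodic traffic embeds into a low-rank matrix, so windows containing the injected burst stand out in the reconstruction residual; the same decomposition applied across multiple links would expose spatial anomalies as well.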

    Reduction of conductivity uncertainty propagations in the inverse problem of EEG source analysis

    In computer simulations, the response of a system under study depends on the input parameters. Each of these parameters can be assigned a fixed value or a range of values within the input parameter space for system performance evaluations. Starting from values of the input parameters and a certain given model, the so-called forward problem can be solved, which approximates the output of the system. Starting from measurements related to the output of the system model, it is possible to determine the state of the system by solving the so-called inverse problem. In the case of a non-linear inverse problem, non-linear minimization techniques need to be used, in which the forward model is iteratively evaluated for different input parameters. The accuracy of the solution of the inverse problem is, however, decreased by the noise present in the measurements and by uncertainties in the system model. Uncertainties are parameters whose values are not exactly known and/or that can vary in time and/or depend on the environment. These uncertainties have, for given input parameter values, an influence on the forward problem solution. This forward uncertainty propagation then leads to errors in the inverse solutions, because the forward model is iteratively evaluated when recovering the inverse solutions. Until now, it was assumed that these recovery errors could not be reduced. The only option was either to quantify the uncertain parameter values as accurately as possible or to reflect the uncertainty in the inverse solutions, i.e. to determine the region in parameter space wherein the inverse solution is likely to be situated. The overall aim of this thesis was to develop techniques for reducing inverse reconstruction errors, so that the inverse problem is solved in a more robust and thus more accurate way. Methodologies were specifically developed for electroencephalography (EEG) source analysis.
EEG is a non-invasive technique that measures, on the scalp, the electric potentials induced by neuronal activity. EEG has several applications in biomedical engineering and is an important diagnostic tool in clinical neurophysiology. In epilepsy, EEG is used to map brain areas and to obtain source localization information that can be used prior to a surgical operation. Starting from Maxwell's equations in their quasi-static formulation and from a physical model of the head, the forward problem predicts the measurements that would be obtained for a given configuration of current sources. The head models used in this thesis are multi-layered spherical head models. The neural sources are parameterized by the location and orientation of electrical dipoles. In this thesis, a limited number of dipole sources is used as the source model, leading to a well-posed inverse problem. The inverse problem starts from measured EEG data and recovers the locations and orientations of the electrical dipole sources. A loss in accuracy of the recovered neural sources occurs because of noise in the EEG measurements and uncertainties in the forward model. In particular, the conductivity values of scalp, skull and brain are not well known, since these values are difficult to measure. Moreover, these uncertainties can vary from person to person, in time, etc. In this thesis, novel numerical methods are developed to provide new approaches for improving the spatial accuracy of EEG source analysis, taking model uncertainties into account. Nowadays, the localization of electrical activity in the brain is still a challenging research topic, due to the many difficulties arising, e.g., in modeling the head and dealing with the poorly known conductivity values of its different tissues. Due to uncertainty in the conductivity values of the head tissues, large errors are introduced when solving the EEG inverse problem.
In order to improve the accuracy of the solution of the inverse problem while taking into account the uncertainty of the conductivity values, a new mathematical approach to the definition of the cost function is introduced and new techniques in the iterative scheme of the inverse reconstruction are proposed. The work in this thesis comprises three important phases. In a first stage, we developed a robust methodology for the reduction of errors when reconstructing a single electrical dipole in the case of a single uncertainty. This uncertainty concerns the skull to soft tissue conductivity ratio, which is an important parameter in the forward model. This conductivity ratio is difficult to quantify and varies from person to person. The forward model that we employed is a three-shell spherical head model in which the forward potentials depend on the conductivity ratio. We reformulated the solution of the forward problem using a Taylor expansion around an assumed value of the conductivity ratio, which led to a model of the simulated potentials that is linear in the conductivity ratio. The introduction of this expanded forward model led to a sensitivity analysis which provided relevant information for the reconstruction of the sources in EEG source analysis. In order to develop a technique for reducing the errors in inverse solutions, some challenging mathematical questions and computational problems needed to be tackled. We propose in this thesis the Reduced Conductivity Dependence (RCD) method, in which we reformulate the traditional cost function and incorporate some changes in the iterative scheme. More specifically, in each iteration we include an internal fitting procedure and we propose a selection of sensors. The fitting procedure makes it possible to have a forward model that is as accurate as possible, while the selection procedure eliminates the sensors that have the highest sensitivity to the uncertain skull to brain conductivity ratio.
Using numerical experiments we showed that errors in reconstructed electrical dipoles are reduced by the RCD methodology, both with and without noise in the measurements. Moreover, the procedure for the selection of electrodes was thoroughly investigated, as well as the influence of using different EEG caps (with different numbers of electrodes). When using traditional reconstruction methods, the number of electrodes does not have a strong influence on the spatial accuracy of the reconstructed single electrical dipole. However, we showed that when using the RCD methodology the spatial accuracy can be increased even further. This is because of the selection procedure that is included within the RCD methodology. In a second stage, we proposed an RCD method that can be applied to the reconstruction of a limited number of dipoles in the case of a single uncertainty. The same ideas were applied to the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. The three-shell spherical head model was employed with the skull to brain conductivity ratio as the single uncertainty. We showed using numerical experiments that the spatial accuracy of each reconstructed dipole is increased, i.e. the conductivity dependence of the inverse solutions is reduced. Moreover, we illustrated that the use of the RCD-based subspace correlation cost function remains highly efficient even for high noise levels. Finally, in a third stage, we developed an RCD methodology for the reduction of errors in the case of multiple uncertainties. We used a five-shell spherical head model in which the conductivity ratios with respect to skull, cerebrospinal fluid, and white matter were uncertain. The cost function as well as the fitting and selection procedures of the RCD method were extended.
The numerical experiments showed reductions in the errors of the reconstructed electrical dipoles in comparison with the traditional methodology, and also compared to the RCD methodology developed for dealing with a single uncertainty.
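The Taylor-expansion and sensor-selection ideas of the RCD method can be caricatured on a toy forward model; the linear dependence on a conductivity-like ratio, the sensor gains and the half-and-half sensitivity split below are all invented for illustration and are not the thesis's spherical head model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors = 32
gains = rng.normal(1.0, 0.3, n_sensors)
# Half the sensors depend strongly on the uncertain ratio c, half only weakly.
sens = np.concatenate([rng.normal(0.0, 2.0, 16), rng.normal(0.0, 0.05, 16)])

def forward(p, c):
    """Toy forward model: potentials linear in source strength p,
    with a first-order dependence on the conductivity-like ratio c."""
    return gains * p + sens * (c - 1.0) * p

# First-order Taylor expansion around the assumed ratio c0 = 1:
# V(p, c) ~ V(p, c0) + (dV/dc)|_{c0} (c - c0), with dV/dc = sens * p here.
dV_dc = sens  # per-sensor sensitivity at p = 1

# RCD-style selection: keep the half of the sensors least sensitive to c.
keep = np.argsort(np.abs(dV_dc))[:16]

true_p, true_c = 2.0, 1.2
measured = forward(true_p, true_c)

def fit(idx):
    """Least-squares fit of p using the forward model at the assumed c0."""
    model = gains[idx]
    return model @ measured[idx] / (model @ model)

err_all = abs(fit(np.arange(n_sensors)) - true_p)
err_selected = abs(fit(keep) - true_p)
```

Restricting the fit to the low-sensitivity sensors largely removes the bias introduced by the wrong assumed ratio, whereas the full-array fit inherits an error of order sens * (c - c0); this is the mechanism the selection procedure exploits.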

    Tensor Analysis and Fusion of Multimodal Brain Images

    Current high-throughput data acquisition technologies probe dynamical systems with different imaging modalities, generating massive data sets at different spatial and temporal resolutions and posing challenging problems in multimodal data fusion. A case in point is the attempt to parse out the brain structures and networks that underpin human cognitive processes by analysis of different neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the multimodal, multi-scale nature of neuroimaging data is well reflected by a multi-way (tensor) structure where the underlying processes can be summarized by a relatively small number of components or "atoms". We introduce Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network notation, in order to analyze these models. These diagrams not only clarify matrix and tensor EEG and fMRI time/frequency analysis and inverse problems, but also help understand multimodal fusion via Multiway Partial Least Squares and Coupled Matrix-Tensor Factorization. We show here, for the first time, that Granger causal analysis of brain networks is a tensor regression problem, thus allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI recordings shows the potential of the methods and suggests their use in other scientific domains.
    Comment: 23 pages, 15 figures, submitted to Proceedings of the IEE
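The claim that Granger causal analysis reduces to a regression problem can be illustrated in its simplest bivariate form; the VAR system and the variance-ratio score below are illustrative, the paper's contribution being the tensor-valued generalisation:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
# Two channels where x1 drives x2 with a one-step lag (toy VAR system).
x1 = np.zeros(T)
x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.5 * x1[t - 1] + rng.normal()
    x2[t] = 0.5 * x2[t - 1] + 0.8 * x1[t - 1] + rng.normal()

def granger_gain(target, driver):
    """Variance ratio: how much the driver's past improves prediction."""
    y = target[1:]
    own = target[:-1]
    full = np.column_stack([own, driver[:-1]])
    res_own = y - own * (own @ y / (own @ own))   # own past only
    beta, *_ = np.linalg.lstsq(full, y, rcond=None)
    res_full = y - full @ beta                    # own + driver past
    return np.var(res_own) / np.var(res_full)     # > 1 means the driver helps

g12 = granger_gain(x2, x1)  # x1 -> x2: should show a clear gain
g21 = granger_gain(x1, x2)  # reverse direction: should stay near 1
```

Each directed influence is just a least-squares regression coefficient pattern; stacking these regressions over many channels and lags yields the tensor regression structure the paper exploits.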

    Solving large-scale MEG/EEG source localisation and functional connectivity problems simultaneously using state-space models

    State-space models are widely employed across various research disciplines to study unobserved dynamics. Conventional estimation techniques, such as Kalman filtering and expectation maximisation, offer valuable insights but incur high computational costs in large-scale analyses. Sparse inverse covariance estimators can mitigate these costs, but at the expense of a trade-off between enforced sparsity and increased estimation bias, necessitating careful assessment in low signal-to-noise ratio (SNR) situations. To address these challenges, we propose a three-fold solution: (1) introducing multiple penalised state-space (MPSS) models that leverage data-driven regularisation; (2) developing novel algorithms derived from backpropagation, gradient descent, and alternating least squares to solve MPSS models; (3) presenting a K-fold cross-validation extension for evaluating regularisation parameters. We validate this MPSS regularisation framework through simulations of both lower and higher complexity under varying SNR conditions, including a large-scale synthetic magneto- and electro-encephalography (MEG/EEG) data analysis. In addition, we apply MPSS models to concurrently solve brain source localisation and functional connectivity problems for real event-related MEG/EEG data, encompassing thousands of sources on the cortical surface. The proposed methodology overcomes the limitations of existing approaches, such as constraints to small-scale and region-of-interest analyses. Thus, it may enable a more accurate and detailed exploration of cognitive brain functions.
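The conventional Kalman filtering baseline mentioned above can be sketched for a scalar state-space model; the parameters are illustrative, whereas the paper's models couple thousands of sources, which is exactly what makes these recursions costly at scale:

```python
import numpy as np

rng = np.random.default_rng(4)
# Scalar linear-Gaussian state-space model: x_t = a x_{t-1} + w_t, y_t = x_t + v_t.
a, q, r, T = 0.95, 0.05, 0.4, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Standard Kalman filter recursions (predict, then measurement update).
m, P = 0.0, 1.0
means = np.zeros(T)
for t in range(T):
    m_pred, P_pred = a * m, a * a * P + q
    K = P_pred / (P_pred + r)        # Kalman gain
    m = m_pred + K * (y[t] - m_pred)
    P = (1 - K) * P_pred
    means[t] = m

mse_kf = np.mean((means - x) ** 2)   # filtered error
mse_obs = np.mean((y - x) ** 2)      # raw observation error
```

In the multivariate case P becomes a dense covariance matrix whose update costs grow cubically with the state dimension; the penalised estimators in the abstract are aimed at taming precisely that growth.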

    Sparse EEG Source Localization Using Bernoulli Laplacian Priors

    Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be concentrated in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, which are easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudonorm handles the positions of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli-Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2 and l1 norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted.
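The abstract's point that the l2 norm spreads focal activity while sparsity-promoting penalties recover it can be reproduced on a toy lead field; here ISTA with an l1 penalty stands in for the paper's Bernoulli-Laplace MCMC scheme, which is more involved, and the lead field, source indices and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 32, 128
A = rng.normal(size=(n_sensors, n_sources)) / np.sqrt(n_sensors)  # toy lead field
x_true = np.zeros(n_sources)
x_true[[10, 70]] = [2.0, -1.5]          # two focal ("pointwise") sources
y = A @ x_true + 0.01 * rng.normal(size=n_sensors)

# l2 (minimum-norm) solution: tends to spread energy over many sources.
x_l2 = A.T @ np.linalg.solve(A @ A.T + 0.01 * np.eye(n_sensors), y)

# l1 solution via ISTA (proximal gradient descent with soft-thresholding).
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_l1 = np.zeros(n_sources)
for _ in range(500):
    z = x_l1 - step * (A.T @ (A @ x_l1 - y))                     # gradient step
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink

support_l2 = int(np.sum(np.abs(x_l2) > 0.1))
support_l1 = int(np.sum(np.abs(x_l1) > 0.1))
```

The minimum-norm solution activates dozens of spurious sources, while the sparse solution concentrates on the two true locations; the paper's l0 + l1 prior additionally debiases the nonzero amplitudes, which plain l1 shrinkage does not.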

    Optimal Resource Allocation Using Deep Learning-Based Adaptive Compression For Mhealth Applications

    In the last few years, the number of patients with chronic diseases that require constant monitoring has increased rapidly, which motivates researchers to develop scalable remote health applications. Nevertheless, transmitting big real-time data through a dynamic network limited by bandwidth, end-to-end delay and transmission energy is an obstacle to efficient transmission of the data. The problem can be resolved by applying data reduction techniques to the vital signs at the transmitter side and reconstructing the data at the receiver side (i.e. the m-Health center). However, this introduces a new problem: receiving the vital signs at the server side with an acceptable distortion rate (i.e. deformation of the vital signs caused by aggressive data reduction). In this thesis, we integrate efficient data reduction with wireless networking to deliver adaptive compression with an acceptable distortion, while reacting to wireless network dynamics such as channel fading and user mobility. A Deep Learning (DL) approach was used to implement an adaptive compression technique to compress and reconstruct vital signs in general, and specifically the electroencephalogram (EEG) signal, with minimum distortion. Then, a resource allocation framework was introduced to minimize the transmission energy along with the distortion of the reconstructed signal.
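A minimal sketch of compression with a distortion check, using PCA as a stand-in for the thesis's deep-learning encoder; the synthetic "EEG" epochs, the number of retained components and the use of the PRD distortion measure are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic "EEG" epochs: one alpha-band sinusoid per epoch plus noise.
t = np.linspace(0.0, 1.0, 256)
freqs = rng.choice([8, 9, 10, 11, 12], size=100)
epochs = np.array([np.sin(2 * np.pi * f * t) + 0.05 * rng.normal(size=256)
                   for f in freqs])

# "Encoder": project each epoch onto the top-k principal components.
mean = epochs.mean(axis=0)
X = epochs - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 4
codes = X @ Vt[:k].T            # 256 samples -> k coefficients per epoch
recon = codes @ Vt[:k] + mean   # "decoder"

# PRD (percentage root-mean-square difference): a standard distortion
# measure for compressed biomedical signals.
prd = 100 * np.linalg.norm(epochs - recon) / np.linalg.norm(epochs)
ratio = 256 / k                 # compression ratio
```

An adaptive scheme of the kind the thesis describes would vary k (or the learned code size) with the channel state, trading compression ratio against PRD on the fly.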

    Optimal design of on-scalp electromagnetic sensor arrays for brain source localisation

    Optically pumped magnetometers (OPMs) are quickly widening the scope of noninvasive neurophysiological imaging. The possibility of placing these magnetic field sensors on the scalp allows not only acquiring signals from people in movement, but also reducing the distance between the sensors and the brain, with a consequent gain in the signal-to-noise ratio. These advantages make the technique particularly attractive for characterising sources of brain activity in demanding populations, such as children and patients with epilepsy. However, the technology is currently at an early stage, presenting new design challenges around the optimal sensor arrangement and its complementarity with other techniques such as electroencephalography (EEG). In this article, we present an optimal array design strategy focussed on minimising the brain source localisation error. The methodology is based on the Cramér-Rao bound, which provides lower error bounds on the estimation of source parameters regardless of the algorithm used. We utilise this framework to compare whole-head OPM arrays with commercially available electro/magnetoencephalography (E/MEG) systems for localising brain signal generators. In addition, we study the complementarity between EEG and OPM-based MEG, and design optimal whole-head systems based on OPMs only, and on a combination of OPMs and EEG electrodes, for characterising deep and superficial sources alike. Finally, we show the usefulness of the approach for finding nearly optimal sensor positions that minimise the estimation error bound in a given cortical region when a limited number of OPMs is available. This is of special interest for maximising the performance of small-scale systems tailored to ad hoc neurophysiological experiments, a common situation arising in most OPM labs.
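The Cramér-Rao bound machinery can be sketched for a toy two-dimensional array; the 1/d^2 forward model, sensor ring and source positions are invented for illustration, real OPM lead fields being far more complex:

```python
import numpy as np

# 16 sensors on a unit ring; the source field falls off as 1/d^2
# (an invented forward model standing in for a real OPM lead field).
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
sensors = np.column_stack([np.cos(angles), np.sin(angles)])

def field(src):
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / d ** 2

def crb_trace(src, sigma=0.01, eps=1e-6):
    """Trace of the position CRB from a numerical Jacobian, Gaussian noise."""
    dx = np.array([eps, 0.0])
    dy = np.array([0.0, eps])
    J = np.column_stack([
        (field(src + dx) - field(src - dx)) / (2 * eps),
        (field(src + dy) - field(src - dy)) / (2 * eps),
    ])
    fisher = J.T @ J / sigma ** 2   # Fisher information for the position
    return np.trace(np.linalg.inv(fisher))

shallow = crb_trace(np.array([0.6, 0.0]))  # source close to the sensor ring
deep = crb_trace(np.array([0.1, 0.0]))     # "deep" source near the centre
```

The bound is larger for the deep source, independently of any localisation algorithm; ranking candidate sensor layouts by such bounds is the design principle the article applies to whole-head OPM and combined OPM/EEG arrays.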