178 research outputs found

    Spatio-temporal prediction of wind fields

    Short-term wind and wind power forecasts are required for the reliable and economic operation of power systems with significant wind power penetration. This thesis presents new statistical techniques for producing forecasts at multiple locations using spatio-temporal information. Forecast horizons of up to 6 hours are considered, for which statistical methods generally outperform physical models. Several methods for producing hourly wind speed and direction forecasts from 1 to 6 hours ahead are presented, in addition to a method for producing five-minute-ahead probabilistic wind power forecasts. The former have applications in areas such as energy trading and defining reserve requirements, and the latter in power system balancing and wind farm control. Spatio-temporal information is captured by vector autoregressive (VAR) models that incorporate wind direction by modelling the wind time series using complex numbers. In a further development, the VAR coefficients are replaced with coefficient functions in order to capture the dependence of the predictor on external variables, such as the time of year or wind direction. The complex-valued approach is found to produce accurate speed predictions, and the conditional predictors offer improved performance with little additional computational cost. Two non-linear algorithms have been developed for wind forecasting. In the first, the predictor is derived from an ensemble of particle swarm optimised candidate solutions. This approach is low cost and requires very little training data but fails to capitalise on spatial information. The second approach uses kernelised forms of popular linear algorithms, which are shown to produce more accurate forecasts than their linear equivalents for multi-step-ahead prediction. Finally, very-short-term wind power forecasting is considered. Five-minute-ahead parametric probabilistic forecasts are produced by modelling the predictive distribution as logit-normal and forecasting its parameters using a sparse-VAR (sVAR) approach. Development of the sVAR is motivated by the desire to produce forecasts on a large spatial scale, i.e. hundreds of locations, which is critical during periods of high instantaneous wind penetration.
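
    As a rough illustration of the complex-valued VAR idea, the sketch below encodes each site's wind as speed·exp(i·direction) and fits a lag-one coefficient matrix by ordinary least squares. The toy data, the lag order, and the plain least-squares fit are illustrative assumptions, not the estimators or the conditional coefficient functions developed in the thesis.

        import numpy as np

        def fit_complex_var1(Z):
            """Least-squares fit of a lag-1 VAR, Z[t] ~= A @ Z[t-1], for complex-valued
            data Z of shape (T, n_sites). Illustrative only: the thesis estimators,
            regularisation and coefficient-function variants differ."""
            X, Y = Z[:-1], Z[1:]
            # Solve Y ~= X @ A.T in the least-squares sense (lstsq handles complex data).
            A_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return A_T.T

        def forecast(A, z_last, steps=6):
            """Iterate the fitted model to produce 1- to `steps`-step-ahead forecasts."""
            out, z = [], z_last
            for _ in range(steps):
                z = A @ z
                out.append(z)
            return np.array(out)

        # Toy usage: wind at 3 sites encoded as speed * exp(1j * direction).
        rng = np.random.default_rng(0)
        speed = 5 + rng.random((500, 3))
        direction = rng.uniform(0, 2 * np.pi, (500, 3))
        Z = speed * np.exp(1j * direction)
        A = fit_complex_var1(Z)
        pred = forecast(A, Z[-1])    # complex forecasts; np.abs(pred) gives predicted speeds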

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity owing to its noninvasive nature, is the main modality considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A longer-term goal is to enable an alternative EEG recording strategy that supports long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are used to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial pattern (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity/noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source within a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasantness or unpleasantness of a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
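
    For context, the sketch below shows the standard real-valued common spatial pattern computation that the thesis extends to the complex domain: spatial filters are obtained from a generalised eigendecomposition of the two class-wise covariance matrices, and log-variance features are extracted from the filtered trials. The array shapes and function names are illustrative assumptions; the complex CSP variants additionally use augmented complex statistics.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """Common spatial patterns for two-class EEG.
            trials_*: arrays of shape (n_trials, n_channels, n_samples).
            Returns spatial filters (as columns) that maximise variance for one class
            while minimising it for the other. Standard real-valued CSP only."""
            def mean_cov(trials):
                return np.mean([np.cov(tr) for tr in trials], axis=0)   # channel covariances

            Sa, Sb = mean_cov(trials_a), mean_cov(trials_b)
            # Generalised eigenproblem Sa w = lambda (Sa + Sb) w.
            vals, vecs = eigh(Sa, Sa + Sb)
            order = np.argsort(vals)                             # ascending eigenvalues
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]     # most discriminative filters
            return vecs[:, picks]

        def csp_features(trial, W):
            """Log-variance features of a spatially filtered trial (n_channels, n_samples)."""
            var = np.var(W.T @ trial, axis=1)
            return np.log(var / var.sum())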

    Tensor Analysis and Fusion of Multimodal Brain Images

    Current high-throughput data acquisition technologies probe dynamical systems with different imaging modalities, generating massive data sets at different spatial and temporal resolutions, posing challenging problems in multimodal data fusion. A case in point is the attempt to parse out the brain structures and networks that underpin human cognitive processes by analysis of different neuroimaging modalities (functional MRI, EEG, NIRS, etc.). We emphasize that the multimodal, multi-scale nature of neuroimaging data is well reflected by a multi-way (tensor) structure where the underlying processes can be summarized by a relatively small number of components or "atoms". We introduce Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network notation, in order to analyze these models. These diagrams not only clarify matrix and tensor EEG and fMRI time/frequency analysis and inverse problems, but also help understand multimodal fusion via Multiway Partial Least Squares and Coupled Matrix-Tensor Factorization. We show here, for the first time, that Granger causal analysis of brain networks is a tensor regression problem, thus allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI recordings shows the potential of the methods and suggests their use in other scientific domains. Comment: 23 pages, 15 figures, submitted to Proceedings of the IEEE.
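
    The notion of summarising a multi-way array by a small number of "atoms" corresponds to a low-rank canonical polyadic (CP) decomposition. The sketch below is a minimal alternating-least-squares CP routine for a 3-way tensor, given purely as an illustration of that idea; it is not the Coupled Matrix-Tensor Factorization, Multiway Partial Least Squares, or Markov-Penrose machinery described in the paper.

        import numpy as np

        def khatri_rao(A, B):
            """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
            return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

        def cp_als(X, rank, n_iter=100, seed=0):
            """Minimal CP decomposition of a 3-way tensor X by alternating least squares.
            Returns factors A, B, C with X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
            I, J, K = X.shape
            rng = np.random.default_rng(seed)
            A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
            X0 = X.reshape(I, J * K)                        # mode-0 unfolding
            X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)     # mode-1 unfolding
            X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)     # mode-2 unfolding
            for _ in range(n_iter):
                A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
                B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
                C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
            return A, B, C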

    Extraction and Detection of Fetal Electrocardiograms from Abdominal Recordings

    The non-invasive fetal ECG (NIFECG), derived from abdominal surface electrodes, offers novel diagnostic possibilities for prenatal medicine. Despite its straightforward applicability, NIFECG signals are usually corrupted by many interfering sources, most significantly by the maternal ECG (MECG), whose amplitude usually exceeds that of the fetal ECG (FECG) several times over. The presence of additional noise sources (e.g. muscular/uterine noise, electrode motion, etc.) further affects the signal-to-noise ratio (SNR) of the FECG. These interfering sources, which typically show a strong non-stationary behavior, render FECG extraction and fetal QRS (FQRS) detection demanding signal processing tasks. In this thesis, several of the challenges regarding NIFECG signal analysis were addressed. In order to improve NIFECG extraction, the dynamic model of a Kalman filter approach was extended, thus providing a more adequate representation of the mixture of FECG, MECG, and noise. In addition, novel metrics for FECG signal quality assessment were proposed and evaluated. Further, these quality metrics were applied to improve FQRS detection and fetal heart rate estimation, based on an innovative evolutionary algorithm and Kalman filtering signal fusion, respectively. The developed methods were characterized in depth using both simulated and clinical data produced throughout this thesis. To stress-test extraction algorithms under ideal circumstances, a comprehensive benchmark protocol was created and contributed to an extensively improved NIFECG simulation toolbox. The developed toolbox and a large simulated dataset were released under an open-source license, allowing researchers to compare results in a reproducible manner. Furthermore, to validate the developed approaches under more realistic and challenging situations, a clinical trial was performed in collaboration with the University Hospital of Leipzig. Aside from serving as a test set for the developed algorithms, the clinical trial enabled exploratory research into the pathophysiological variables and measurement setup configurations that lead to changes in the abdominal signal's SNR.
    With such broad scope, this dissertation addresses many of the current aspects of NIFECG analysis and provides suggestions for future work to establish NIFECG in clinical settings. The thesis is organised as follows:
    Front matter: Abstract; Acknowledgment; Contents; List of Figures; List of Tables; List of Abbreviations; List of Symbols
    (1) Introduction: 1.1 Background and Motivation; 1.2 Aim of this Work; 1.3 Dissertation Outline; 1.4 Collaborators and Conflicts of Interest
    (2) Clinical Background: 2.1 Physiology (2.1.1 Changes in the maternal circulatory system; 2.1.2 Intrauterine structures and feto-maternal connection; 2.1.3 Fetal growth and presentation; 2.1.4 Fetal circulatory system; 2.1.5 Fetal autonomic nervous system; 2.1.6 Fetal heart activity and underlying factors); 2.2 Pathology (2.2.1 Premature rupture of membrane; 2.2.2 Intrauterine growth restriction; 2.2.3 Fetal anemia); 2.3 Interpretation of Fetal Heart Activity (2.3.1 Summary of clinical studies on FHR/FHRV; 2.3.2 Summary of studies on heart conduction); 2.4 Chapter Summary
    (3) Technical State of the Art: 3.1 Prenatal Diagnostic and Measuring Technique (3.1.1 Fetal heart monitoring; 3.1.2 Related metrics); 3.2 Non-Invasive Fetal ECG Acquisition (3.2.1 Overview; 3.2.2 Commercial equipment; 3.2.3 Electrode configurations; 3.2.4 Available NIFECG databases; 3.2.5 Validity and usability of the non-invasive fetal ECG); 3.3 Non-Invasive Fetal ECG Extraction Methods (3.3.1 Overview on the non-invasive fetal ECG extraction methods; 3.3.2 Kalman filtering basics; 3.3.3 Nonlinear Kalman filtering; 3.3.4 Extended Kalman filter for FECG estimation); 3.4 Fetal QRS Detection (3.4.1 Merging multichannel fetal QRS detections; 3.4.2 Detection performance); 3.5 Fetal Heart Rate Estimation (3.5.1 Preprocessing the fetal heart rate; 3.5.2 Fetal heart rate statistics); 3.6 Fetal ECG Morphological Analysis; 3.7 Problem Description; 3.8 Chapter Summary
    (4) Novel Approaches for Fetal ECG Analysis: 4.1 Preliminary Considerations; 4.2 Fetal ECG Extraction by means of Kalman Filtering (4.2.1 Optimized Gaussian approximation; 4.2.2 Time-varying covariance matrices; 4.2.3 Extended Kalman filter with unknown inputs; 4.2.4 Filter calibration); 4.3 Accurate Fetal QRS and Heart Rate Detection (4.3.1 Multichannel evolutionary QRS correction; 4.3.2 Multichannel fetal heart rate estimation using Kalman filters); 4.4 Chapter Summary
    (5) Data Material: 5.1 Simulated Data (5.1.1 The FECG Synthetic Generator (FECGSYN); 5.1.2 The FECG Synthetic Database (FECGSYNDB)); 5.2 Clinical Data (5.2.1 Clinical NIFECG recording; 5.2.2 Scope and limitations of this study; 5.2.3 Data annotation: signal quality and fetal amplitude; 5.2.4 Data annotation: fetal QRS annotation); 5.3 Chapter Summary
    (6) Results for Data Analysis: 6.1 Simulated Data (6.1.1 Fetal QRS detection; 6.1.2 Morphological analysis); 6.2 Own Clinical Data (6.2.1 FQRS correction using the evolutionary algorithm; 6.2.2 FHR correction by means of Kalman filtering)
    (7) Discussion and Prospective: 7.1 Data Availability (7.1.1 New measurement protocol); 7.2 Signal Quality; 7.3 Extraction Methods; 7.4 FQRS and FHR Correction Algorithms
    (8) Conclusion
    References
    (A) Appendix A - Signal Quality Annotation; (B) Appendix B - Fetal QRS Annotation; (C) Appendix C - Data Recording GU
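
    For orientation, the sketch below shows a single predict/update cycle of a generic linear Kalman filter, the building block that the thesis extends (with an ECG-specific dynamic model, nonlinear formulations, and time-varying covariances) for FECG extraction. All matrices and variable names are placeholders, not the thesis parameterisation.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter.
            x, P : state mean and covariance from the previous step
            z    : current observation vector (e.g. an abdominal ECG sample vector)
            F, H : state-transition and observation matrices
            Q, R : process and measurement noise covariances
            Illustrative only: the thesis uses extended (nonlinear) formulations."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            y = z - H @ x_pred                       # innovation (residual)
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new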

    Widely Linear State Space Filtering of Improper Complex Signals

    Complex signals are the backbone of many modern applications, such as power systems, communication systems, biomedical sciences and military technologies. However, standard complex-valued signal processing approaches are suited only to a subset of complex signals known as proper, and are inadequate for the generality of complex signals, as they do not fully exploit the available information. This is mainly due to the inherent blindness of the algorithms to the complete second-order statistics of the signals, or due to under-modelling of the underlying system. The aim of this thesis is to provide enhanced complex-valued, state-space-based signal processing solutions for the generality of complex signals and systems. This is achieved based on the recent advances in the so-called augmented complex statistics and widely linear modelling, which have brought to light the limitations of conventional statistical complex signal processing approaches. Exploiting these developments, we propose a class of widely linear adaptive state space estimation techniques, which provide a unified framework and enhanced performance for the generality of complex signals, compared with conventional approaches. These include the linear and nonlinear Kalman and particle filters, whereby it is shown that catering for the complete second-order information and system models leads to significant performance gains. The proposed techniques are also extended to the case of cooperative distributed estimation, where nodes in a network collaborate locally to estimate signals, under a framework that caters for general complex signals, as well as the cross-correlations between observation noises, unlike earlier solutions. The analysis of the algorithms is supported by numerous case studies, including frequency estimation in three-phase power systems, DIFAR sonobuoy underwater target tracking, and real-world wind modelling and prediction.
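
    The widely linear idea can be illustrated with a small regression sketch: instead of fitting the target with the regressors alone (strictly linear), one augments them with their complex conjugates, which captures the pseudo-covariance of improper signals. The toy data and the plain least-squares fit below are illustrative assumptions; the thesis develops the corresponding state space (Kalman and particle filter) formulations.

        import numpy as np

        def widely_linear_fit(X, d):
            """Widely linear least-squares regression for improper complex data.
            X : (N, p) complex regressors, d : (N,) complex target.
            Fits d ~= X @ h + conj(X) @ g by working in the augmented space [X, conj(X)],
            which exploits both the covariance and the pseudo-covariance.
            A strictly linear fit would use X alone and discard the pseudo-covariance."""
            Xa = np.hstack([X, np.conj(X)])          # augmented regressors
            w, *_ = np.linalg.lstsq(Xa, d, rcond=None)
            p = X.shape[1]
            return w[:p], w[p:]                      # h (linear) and g (conjugate) weights

        # Toy usage on an improper (noncircular) signal, where the conjugate term helps.
        rng = np.random.default_rng(1)
        x = rng.standard_normal(1000) + 1j * 0.1 * rng.standard_normal(1000)   # improper
        X = np.column_stack([x[1:-1], x[:-2]])       # two past samples as regressors
        d = x[2:]                                    # one-step-ahead prediction target
        h, g = widely_linear_fit(X, d)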

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing tasks such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent. In the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered as signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measurements. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein, and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We validate experimentally this statistical measurement model, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
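
    The driving principle, estimating the MSE without access to the noise-free data, can be illustrated for plain soft-thresholding, the classic SureShrink setting. The sketch below computes SURE over a grid of thresholds and picks the minimiser; the actual SURE-LET approach instead optimises a linear expansion of thresholds in closed form, so the grid search and the toy signal here are purely illustrative.

        import numpy as np

        def sure_soft_threshold(y, t, sigma):
            """Stein's unbiased risk estimate of the MSE of soft-thresholding y at level t,
            for y = x + Gaussian noise of known std sigma. No access to the clean x needed."""
            return (len(y) * sigma**2
                    - 2 * sigma**2 * np.sum(np.abs(y) <= t)
                    + np.sum(np.minimum(np.abs(y), t) ** 2))

        def best_threshold(y, sigma, grid=None):
            """Pick the threshold minimising SURE over a grid (data-adaptive)."""
            if grid is None:
                grid = np.linspace(0, np.max(np.abs(y)), 200)
            risks = [sure_soft_threshold(y, t, sigma) for t in grid]
            return grid[int(np.argmin(risks))]

        # Toy check: SURE tracks the true MSE of soft-thresholding without seeing x.
        rng = np.random.default_rng(2)
        x = np.concatenate([5 * rng.standard_normal(50), np.zeros(950)])   # sparse clean signal
        sigma = 1.0
        y = x + sigma * rng.standard_normal(x.size)
        t = best_threshold(y, sigma)
        x_hat = np.sign(y) * np.maximum(np.abs(y) - t, 0)                  # denoised estimate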

    Measuring Directed Functional Connectivity Using Non-Parametric Directionality Analysis : Validation and Comparison with Non-Parametric Granger Causality

    BACKGROUND: 'Non-parametric directionality' (NPD) is a novel method for estimation of directed functional connectivity (dFC) in neural data. The method has previously been verified in its ability to recover causal interactions in simulated spiking networks in Halliday et al. (2015). METHODS: This work presents a validation of NPD in continuous neural recordings (e.g. local field potentials). Specifically, we use autoregressive models to simulate time-delayed correlations between neural signals. We then test for the accurate recovery of networks in the face of several confounds typically encountered in empirical data. We examine the performance of NPD under varying: a) signal-to-noise ratios, b) asymmetries in signal strength, c) instantaneous mixing, d) common drive, e) data length, and f) parallel/convergent signal routing. We also apply NPD to data from a patient who underwent simultaneous magnetoencephalography and deep brain recording. RESULTS: We demonstrate that NPD can accurately recover directed functional connectivity from simulations with known patterns of connectivity. The performance of the NPD measure is compared with non-parametric estimators of Granger causality (NPG), a well-established methodology for model-free estimation of dFC. A series of simulations investigating synthetically imposed confounds demonstrate that NPD provides estimates of connectivity that are equivalent to NPG, albeit with an increased sensitivity to data length. However, we provide evidence that: i) NPD is less sensitive than NPG to degradation by noise; ii) NPD is more robust against false-positive identifications of connectivity resulting from SNR asymmetries; iii) NPD is more robust to corruption via moderate amounts of instantaneous signal mixing. CONCLUSIONS: The results in this paper highlight that, to be practically applied to neural data, connectivity metrics should not only be accurate in their recovery of causal networks but also resistant to the confounding effects often encountered in experimental recordings of multimodal data. Taken together, these findings position NPD at the state of the art with respect to the estimation of directed functional connectivity in neuroimaging.
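
    As background, the sketch below shows the simplest time-domain, model-based form of Granger causality that NPG and NPD generalise: x is said to Granger-cause y if adding past samples of x to an autoregressive model of y reduces the residual variance. It is a generic least-squares illustration, not the non-parametric, spectrally resolved estimators compared in the paper.

        import numpy as np

        def lag_matrix(s, order):
            """Design matrix whose row for time t holds s[t-1], ..., s[t-order]."""
            return np.column_stack([s[order - k - 1 : len(s) - k - 1] for k in range(order)])

        def granger_causality(x, y, order=5):
            """Time-domain Granger causality x -> y as log(var_restricted / var_full).
            Positive values mean past x improves the AR prediction of y."""
            target = y[order:]
            Ly, Lx = lag_matrix(y, order), lag_matrix(x, order)

            def resid_var(design):
                coef, *_ = np.linalg.lstsq(design, target, rcond=None)
                return np.var(target - design @ coef)

            restricted = resid_var(Ly)                   # y's own past only
            full = resid_var(np.hstack([Ly, Lx]))        # plus past of x
            return np.log(restricted / full)

        # Toy usage: x drives y with a one-sample delay, so GC(x -> y) > GC(y -> x).
        rng = np.random.default_rng(3)
        x = rng.standard_normal(2000)
        y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(2000)
        print(granger_causality(x, y), granger_causality(y, x))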

    Connectivity Analysis in EEG Data: A Tutorial Review of the State of the Art and Emerging Trends

    Understanding how different areas of the human brain communicate with each other is a crucial issue in neuroscience. The concepts of structural, functional and effective connectivity have been widely exploited to describe the human connectome, consisting of brain networks, their structural connections and functional interactions. While high-spatial-resolution imaging techniques such as functional magnetic resonance imaging (fMRI) are widely used to map this complex network of multiple interactions, electroencephalographic (EEG) recordings offer high temporal resolution and are thus well suited to describing both spatially distributed and temporally dynamic patterns of neural activation and connectivity. In this work, we provide a technical account and a categorization of the most-used data-driven approaches to assess brain functional connectivity, intended as the study of the statistical dependencies between the recorded EEG signals. Different pairwise and multivariate, directed and non-directed connectivity metrics are discussed in terms of their pros and cons, in the time, frequency, and information-theoretic domains. The establishment of conceptual and mathematical relationships between metrics from these three frameworks, and the discussion of novel methodological approaches, allow the reader to delve into the problem of inferring functional connectivity in complex networks. Furthermore, emerging trends for the description of extended forms of connectivity (e.g., high-order interactions) are also discussed, along with graph-theory tools exploring the topological properties of the network of connections provided by the proposed metrics. Applications to EEG data are reviewed. In addition, the importance of source localization and the impact of signal acquisition and pre-processing choices (e.g., filtering and artifact rejection) on the connectivity estimates are recognized and discussed. By going through this review, the reader can follow the entire process of EEG pre-processing and analysis for the study of brain functional connectivity, and learn to exploit novel methodologies and approaches to the problem of inferring connectivity within complex networks.
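
    As a concrete example of one of the simplest non-directed, frequency-domain metrics covered by such reviews, the sketch below builds a channel-by-channel magnitude-squared coherence matrix in a band of interest and thresholds it into an adjacency matrix on which graph-theoretic analysis could proceed. The band, threshold, and channel count are arbitrary illustrative choices.

        import numpy as np
        from scipy.signal import coherence

        def coherence_matrix(eeg, fs, band=(8.0, 13.0), nperseg=256):
            """Pairwise magnitude-squared coherence averaged over a frequency band.
            eeg : (n_channels, n_samples) array. Returns a symmetric connectivity matrix,
            a simple non-directed functional connectivity estimate."""
            n_ch = eeg.shape[0]
            C = np.eye(n_ch)
            for i in range(n_ch):
                for j in range(i + 1, n_ch):
                    f, coh = coherence(eeg[i], eeg[j], fs=fs, nperseg=nperseg)
                    mask = (f >= band[0]) & (f <= band[1])
                    C[i, j] = C[j, i] = coh[mask].mean()
            return C

        # Toy usage: threshold the alpha-band coherence matrix into a binary graph.
        rng = np.random.default_rng(4)
        eeg = rng.standard_normal((8, 10 * 256))           # 8 channels, 10 s at 256 Hz
        C = coherence_matrix(eeg, fs=256)
        adjacency = (C > 0.5) & ~np.eye(8, dtype=bool)     # graph-theory tools start here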