
    Optimal embedding parameters: A modelling paradigm

    Reconstruction of a dynamical system from a time series requires the selection of two parameters, the embedding dimension $d_e$ and the embedding lag $\tau$. Many competing criteria to select these parameters exist, and all are heuristic. Within the context of modeling the evolution operator of the underlying dynamical system, we show that one only need be concerned with the product $d_e\tau$. We introduce an information theoretic criterion for the optimal selection of the embedding window $d_w = d_e\tau$. For infinitely long time series this method is equivalent to selecting the embedding lag that minimises the nonlinear model prediction error. For short and noisy time series we find that the results of this new algorithm are data dependent and superior to estimation of embedding parameters with the standard techniques.
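    As a loose illustration of selecting an embedding window, the sketch below scores candidate pairs $(d_e, \tau)$ by a simple nearest-neighbour one-step prediction error; this is a stand-in for the nonlinear model prediction error mentioned above, not the paper's information-theoretic criterion, and the function names and parameter values are our own.

```python
# Illustrative sketch only: choose the embedding window d_w = d_e * tau by minimising
# a crude nearest-neighbour one-step prediction error in the reconstructed state space.
import numpy as np

def delay_embed(x, d_e, tau):
    """Rows are delay vectors (x[t], x[t+tau], ..., x[t+(d_e-1)*tau])."""
    n = len(x) - (d_e - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d_e)])

def prediction_error(x, d_e, tau):
    """Mean squared error of a nearest-neighbour one-step-ahead predictor."""
    emb = delay_embed(x, d_e, tau)
    targets = x[(d_e - 1) * tau + 1:]   # value one step ahead of each delay vector
    emb = emb[:-1]                      # last delay vector has no target
    errs = []
    for i in range(len(emb)):
        dists = np.linalg.norm(emb - emb[i], axis=1)
        dists[i] = np.inf               # exclude the point itself
        j = int(np.argmin(dists))       # nearest neighbour in the embedding
        errs.append((targets[i] - targets[j]) ** 2)
    return float(np.mean(errs))

# Example: scan a few (d_e, tau) pairs on a noisy oscillation and pick the best window.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40, 1200)) + 0.05 * rng.standard_normal(1200)
candidates = [(d_e, tau) for d_e in (2, 3, 4) for tau in (1, 2, 4, 8)]
d_e, tau = min(candidates, key=lambda p: prediction_error(x, *p))
print("chosen d_e, tau, window d_w:", d_e, tau, d_e * tau)
```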

    Bayesian Framework for Simultaneous Registration and Estimation of Noisy, Sparse and Fragmented Functional Data

    Mathematical and Physical Sciences: 3rd Place (The Ohio State University Edward F. Hayes Graduate Research Forum)
    In many applications, smooth processes generate data that are recorded under a variety of observation regimes, such as dense sampling and sparse or fragmented observations that are often contaminated with error. The statistical goal of registering and estimating the individual underlying functions from discrete observations has thus far been mainly approached sequentially without formal uncertainty propagation, or in an application-specific manner by pooling information across subjects. We propose a unified Bayesian framework for simultaneous registration and estimation, which is flexible enough to accommodate inference on individual functions under general observation regimes. Our ability to do this relies on the specification of strongly informative prior models over the amplitude component of function variability. We provide two strategies for this critical choice: a data-driven approach that defines an empirical basis for the amplitude subspace based on available training data, and a shape-restricted approach when the relative location and number of local extrema are well understood. The proposed methods build on the elastic functional data analysis framework to separately model amplitude and phase variability inherent in functional data. We emphasize the importance of uncertainty quantification and visualization of these two components as they provide complementary information about the estimated functions. We validate the proposed framework using simulation studies, and real applications to the estimation of fractional anisotropy profiles based on diffusion tensor imaging measurements, growth velocity functions and bone mineral density curves.
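    As a rough sketch of the data-driven strategy for the amplitude prior, one could form an empirical basis from registered training curves with a plain PCA/SVD, as below. This is only a stand-in for the elastic FDA construction used in the paper; the function name and the choice of five components are our own.

```python
# Rough illustration (not the paper's construction): build an empirical basis for the
# amplitude subspace from registered training curves via SVD, for use in an
# informative Gaussian prior on basis coefficients.
import numpy as np

def empirical_amplitude_basis(training_curves, n_components=5):
    """training_curves: array (n_subjects, n_gridpoints), assumed already registered."""
    mean_curve = training_curves.mean(axis=0)
    centered = training_curves - mean_curve
    # Right-singular vectors of the centred data matrix give an orthonormal basis of curves.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                                        # (n_components, n_gridpoints)
    prior_sd = s[:n_components] / np.sqrt(training_curves.shape[0])  # per-component prior scale
    return mean_curve, basis, prior_sd

# A new curve is then modelled as mean_curve + coefficients @ basis, with independent
# Gaussian priors on the coefficients whose standard deviations are prior_sd.
```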

    Bayesian wavelet de-noising with the caravan prior

    According to both domain expert knowledge and empirical evidence, wavelet coefficients of real signals tend to exhibit clustering patterns, in that they contain connected regions of coefficients of similar magnitude (large or small). A wavelet de-noising approach that takes into account such a feature of the signal may in practice outperform other, more vanilla methods, both in terms of the estimation error and the visual appearance of the estimates. Motivated by this observation, we present a Bayesian approach to wavelet de-noising, where dependencies between neighbouring wavelet coefficients are a priori modelled via a Markov chain-based prior, which we term the caravan prior. Posterior computations in our method are performed via the Gibbs sampler. Using representative synthetic and real data examples, we conduct a detailed comparison of our approach with a benchmark empirical Bayes de-noising method (due to Johnstone and Silverman). We show that the caravan prior fares well and is therefore a useful addition to the wavelet de-noising toolbox.
    Comment: 32 pages, 15 figures, 4 tables
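    The following toy sketch conveys the clustering idea in a crude, non-Bayesian way: a coefficient is kept when the local energy of it and its neighbours is large, so connected regions of large coefficients survive de-noising. It is not the caravan prior or the Gibbs sampler described above; the function, wavelet choice and threshold constant are our own, and it assumes the PyWavelets package is available.

```python
# Toy neighbour-aware wavelet de-noising (illustration only, not the caravan prior).
import numpy as np
import pywt

def neighbour_aware_denoise(signal, wavelet="db4", level=5, k=3.0):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from the finest level
    out = [coeffs[0]]                                    # keep approximation coefficients
    for c in coeffs[1:]:
        padded = np.pad(c, 1, mode="edge")
        # Local energy of each coefficient together with its two neighbours.
        local = padded[:-2] ** 2 + padded[1:-1] ** 2 + padded[2:] ** 2
        out.append(c * (local > (k * sigma) ** 2))       # keep clusters of large coefficients
    return pywt.waverec(out, wavelet)[:len(signal)]

# Example on a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
x = np.repeat([0.0, 4.0, -2.0, 3.0, 0.0], 200) + rng.standard_normal(1000)
x_hat = neighbour_aware_denoise(x)
```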

    fMRI activation detection with EEG priors

    The purpose of brain mapping techniques is to advance the understanding of the relationship between structure and function in the human brain in so-called activation studies. In this work, an advanced statistical model for combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) recordings is developed to fuse complementary information about the location of neuronal activity. More precisely, a new Bayesian method is proposed for enhancing fMRI activation detection through the use of EEG-based spatial prior information in stimulus-based experimental paradigms. Specifically, we model and analyse stimulus influence by a spatial Bayesian variable selection scheme, and extend existing high-dimensional regression methods by incorporating prior information on binary selection indicators via a latent probit regression with either a spatially-varying or constant EEG effect. Spatially-varying effects are regularized by intrinsic Markov random field priors. Inference is based on a full Bayesian Markov chain Monte Carlo (MCMC) approach. Whether the proposed algorithm is able to increase the sensitivity of fMRI-only models is examined in both a real-world application and a simulation study. We observe that carefully selected EEG prior information additionally increases sensitivity in activation regions that have been distorted by a low signal-to-noise ratio.
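    In our own notation (not taken from the paper), the latent probit variable-selection structure sketched above can be written roughly as follows, for voxel $v$ with activation-selection indicator $\gamma_v$ and EEG-derived quantity $z_v$:

```latex
% Hypothetical notation sketching the latent probit prior on the selection indicators.
\begin{aligned}
\gamma_v \mid \omega_v &\sim \mathrm{Bernoulli}\bigl(\Phi(\omega_v)\bigr), \\
\omega_v &= \alpha + \beta_v z_v, \qquad v = 1,\dots,V, \\
(\beta_1,\dots,\beta_V) &\sim \text{intrinsic Markov random field prior (spatially-varying EEG effect)}, \\
\text{or}\quad \beta_v &\equiv \beta \quad \text{(constant EEG effect)},
\end{aligned}
```

    where $\Phi$ is the standard normal distribution function giving the probit link, and the indicators $\gamma_v$ feed into the high-dimensional regression for the fMRI signal.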

    Modeling Binary Time Series Using Gaussian Processes with Application to Predicting Sleep States

    Motivated by the problem of predicting sleep states, we develop a mixed effects model for binary time series with a stochastic component represented by a Gaussian process. The fixed component captures the effects of covariates on the binary-valued response. The Gaussian process captures the residual variations in the binary response that are not explained by covariates and past realizations. We develop a frequentist modeling framework that provides efficient inference and accurate predictions. Results demonstrate the advantages of improved prediction rates over existing approaches such as logistic regression, generalized additive mixed models, models for ordinal data, gradient boosting, decision trees and random forests. Using our proposed model, we show that previous sleep state and heart rate are significant predictors of future sleep states. Simulation studies also show that our proposed method is promising and robust. To handle computational complexity, we utilize the Laplace approximation, golden section search and successive parabolic interpolation. With this paper, we also submit an R package (HIBITS) that implements the proposed procedure.
    Comment: Journal of Classification (2018)
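    The model structure can be sketched, in our own notation rather than the paper's, as a binary observation model with a covariate-driven fixed part and a Gaussian-process residual part (a probit link is shown for concreteness; the paper's link choice may differ):

```latex
% Hypothetical notation for the mixed-effects binary time-series model described above.
\begin{aligned}
y_t \mid p_t &\sim \mathrm{Bernoulli}(p_t), \\
\Phi^{-1}(p_t) &= x_t^\top \beta + f(t), \qquad f \sim \mathcal{GP}\bigl(0,\, k(\cdot,\cdot)\bigr),
\end{aligned}
```

    with $x_t$ collecting covariates such as the previous sleep state and heart rate, $\beta$ the fixed effects, and $f$ capturing residual serial dependence; the intractable likelihood is then handled with the Laplace approximation mentioned above.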

    Non-invasive fetal electrocardiogram : analysis and interpretation

    High-risk pregnancies are becoming more and more prevalent because of the progressively higher age at which women get pregnant. Nowadays about twenty percent of all pregnancies are complicated to some degree, for instance because of preterm delivery, fetal oxygen deficiency, fetal growth restriction, or hypertension. Early detection of these complications is critical to permit timely medical intervention, but is hampered by strong limitations of existing monitoring technology. This technology is either only applicable in hospital settings, is obtrusive, or is incapable of providing, in a robust way, reliable information for diagnosis of the well-being of the fetus. The most prominent method for monitoring of the fetal health condition is monitoring of heart rate variability in response to activity of the uterus (cardiotocography; CTG). Generally, in obstetrical practice, the heart rate is determined in either of two ways: unobtrusively with a (Doppler) ultrasound probe on the maternal abdomen, or obtrusively with an invasive electrode fixed onto the fetal scalp. The first method is relatively inaccurate but is non-invasive and applicable in all stages of pregnancy. The latter method is far more accurate but can only be applied following rupture of the membranes and sufficient dilatation, restricting its applicability to only the very last phase of pregnancy. Besides these accuracy and applicability issues, the use of CTG in obstetrical practice also has another limitation: despite its high sensitivity, the specificity of CTG is relatively low. This means that in most cases of fetal distress the CTG reveals specific patterns of heart rate variability, but that these specific patterns can also be encountered for healthy fetuses, complicating accurate diagnosis of the fetal condition. Hence, a prerequisite for preventing unnecessary interventions that are based on CTG alone is the inclusion of additional information in diagnostics. Monitoring of the fetal electrocardiogram (ECG), as a supplement to CTG, has been demonstrated to have added value for monitoring of the fetal health condition. Unfortunately, the application of the fetal ECG in obstetrical diagnostics is limited because at present the fetal ECG can only be measured reliably by means of an invasive scalp electrode. To overcome this limited applicability, many attempts have been made to record the fetal ECG non-invasively from the maternal abdomen, but these attempts have not yet led to approaches that permit widespread clinical application. One key difficulty is that the signal-to-noise ratio (SNR) of the transabdominal ECG recordings is relatively low. Perhaps even more importantly, the abdominal ECG recordings yield ECG signals for which the morphology depends strongly on the orientation of the fetus within the maternal uterus. Accordingly, for any fetal orientation, the ECG morphology is different. This renders correct clinical interpretation of the recorded ECG signals complicated, if not impossible. This thesis aims to address these difficulties and to provide new contributions on the clinical interpretation of the fetal ECG. First, the SNR of the recorded signals is enhanced through a series of signal processing steps that exploit specific and a priori known properties of the fetal ECG. More particularly, the dominant interference (i.e. the maternal ECG) is suppressed by exploiting the absence of temporal correlation between the maternal and fetal ECG.
In this suppression, the maternal ECG complex is dynamically segmented into individual ECG waves and each of these waves is estimated by averaging corresponding waves from preceding ECG complexes. The maternal ECG template generated by combining the estimated waves is subsequently subtracted from the original signal to yield a non-invasive recording in which the maternal ECG has been suppressed. This suppression method is demonstrated to be more accurate than existing methods. Other interferences and noise are (partly) suppressed by exploiting the quasi-periodicity of the fetal ECG through averaging consecutive ECG complexes, or by exploiting the spatial correlation of the ECG. The averaging of several consecutive ECG complexes, synchronized on their QRS complex, enhances the SNR of the ECG but can also suppress morphological variations in the ECG that are clinically relevant. The number of ECG complexes included in the average hence constitutes a trade-off between SNR enhancement on the one hand and loss of morphological variability on the other hand. To relax this trade-off, this thesis presents a method that can adaptively estimate the number of ECG complexes included in the average. In cases of morphological variation, this number is decreased, ensuring that the variations are not suppressed. In the absence of morphological variability, this number is increased to ensure adequate SNR enhancement. The further suppression of noise by exploiting the spatial correlation of the ECG is based on the fact that all ECG signals recorded at several locations on the maternal abdomen originate from the same electrical source, namely the fetal heart. The electrical activity of the fetal heart at any point in time can be modeled as a single electrical field vector with stationary origin. This vector varies in both amplitude and orientation in three-dimensional space during the cardiac cycle, and the time-path described by this vector is referred to as the fetal vectorcardiogram (VCG). In this model, the abdominal ECG constitutes the projection of the VCG onto the vector that describes the position of the abdominal electrode with respect to a reference electrode. This means that when the VCG is known, any desired ECG signal can be calculated. Equivalently, this also means that when enough ECG signals (i.e. at least three independent signals) are known, the VCG can be calculated. By using more than three ECG signals for the calculation of the VCG, redundancy in the ECG signals can be exploited for added noise suppression. Unfortunately, when calculating the fetal VCG from the ECG signals recorded from the maternal abdomen, the distance between the fetal heart and the electrodes is not the same for each electrode. Because the amplitude of the ECG signals decreases with propagation to the abdominal surface, these different distances yield a specific, unknown attenuation for each ECG signal. Existing methods for estimating the VCG operate with a fixed linear combination of the ECG signals and, hence, cannot account for variations in signal attenuation. To overcome this problem and to account for fetal movement, this thesis presents a method that estimates both the VCG and, to some extent, also the signal attenuation. This is done by determining for which VCG and signal attenuation the joint probability over both these variables is maximal given the observed ECG signals.
The underlying joint probability distribution is determined by assuming the ECG signals to originate from scaled VCG projections and additive noise. With this method, a VCG tailored to each specific patient is determined. Compared with the fixed linear combinations, the presented method performs significantly better in the accurate estimation of the VCG. Besides describing the electrical activity of the fetal heart in three dimensions, the fetal VCG also provides a framework to account for the fetal orientation in the uterus. This framework enables the detection of the fetal orientation over time and allows for rotating the fetal VCG towards a prescribed orientation. From the normalized fetal VCG obtained in this manner, standardized ECG signals can be calculated, facilitating correct clinical interpretation of the non-invasive fetal ECG signals. The potential of the presented approach (i.e. the combination of all methods described above) is illustrated for three different clinical cases. In the first case, the fetal ECG is analyzed to demonstrate that the electrical behavior of the fetal heart differs significantly from that of the adult heart. In fact, this difference is so substantial that diagnostics based on the fetal ECG should be based on different guidelines than those for adult ECG diagnostics. In the second case, the fetal ECG is used to visualize the origin of fetal supraventricular extrasystoles, and the results suggest that the fetal ECG might in the future serve as a diagnostic tool for relating fetal arrhythmia to congenital heart diseases. In the last case, the non-invasive fetal ECG is compared to the invasively recorded fetal ECG to gauge the SNR of the transabdominal recordings and to demonstrate the suitability of the non-invasive fetal ECG in clinical applications that, as yet, are only possible with the invasive fetal ECG.
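The projection model underlying the VCG calculation admits a simple least-squares illustration: with a known, fixed lead matrix, each abdominal signal is a noisy projection of the three-dimensional VCG, and more than three leads give redundancy for noise suppression. The sketch below shows only this simplified fixed-lead step, with our own names and synthetic data; the thesis itself additionally estimates the per-lead attenuation jointly with the VCG.

```python
# Simplified sketch of the VCG projection model (fixed, known lead vectors; no
# attenuation estimation, unlike the method described in the thesis).
import numpy as np

def estimate_vcg(ecg, lead_matrix):
    """
    ecg:          array (n_leads, n_samples) of abdominal ECG signals.
    lead_matrix:  array (n_leads, 3); each row is an assumed electrode lead vector.
    Returns the least-squares VCG estimate, shape (3, n_samples).
    """
    # Solve lead_matrix @ vcg[:, t] ~= ecg[:, t] for all samples at once; using more
    # than three leads exploits redundancy in the recordings to suppress noise.
    vcg, *_ = np.linalg.lstsq(lead_matrix, ecg, rcond=None)
    return vcg

# Example with synthetic data: six leads observing a toy three-dimensional loop plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
true_vcg = np.vstack([np.sin(2 * np.pi * t),
                      np.cos(2 * np.pi * t),
                      0.3 * np.sin(4 * np.pi * t)])
lead_matrix = rng.standard_normal((6, 3))        # hypothetical lead vectors
observed = lead_matrix @ true_vcg + 0.05 * rng.standard_normal((6, 500))
vcg_hat = estimate_vcg(observed, lead_matrix)
```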

    Bayesian Inference with Combined Dynamic and Sparsity Models: Application in 3D Electrophysiological Imaging

    Data-driven inference is widely encountered in various scientific domains to convert observed measurements into information about a system that cannot be observed directly. Despite rapidly developing sensor and imaging technologies, in many domains data collection remains an expensive endeavor due to financial and physical constraints. To overcome the limits in data and to reduce the demand for expensive data collection, it is important to incorporate prior information in order to place the data-driven inference in a domain-relevant context and to improve its accuracy. Two sources of assumptions have been used successfully in many inverse problem applications. One is the temporal dynamics of the system (dynamic structure). The other is the low-dimensional structure of a system (sparsity structure). In existing work, these two structures have often been explored separately, while in most high-dimensional dynamic systems they co-exist and contain complementary information. In this work, our main focus is to build a robust inference framework that combines dynamic and sparsity constraints. The driving application in this work is a biomedical inverse problem of electrophysiological (EP) imaging, which noninvasively and quantitatively reconstructs transmural action potentials from body-surface voltage data with the goal of improving cardiac disease prevention, diagnosis, and treatment. The general framework can be extended to a variety of applications that deal with the inference of high-dimensional dynamic systems.
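    A generic way to write down the combination of the two structures is a penalised least-squares surrogate such as the one below; the symbols are ours, and the thesis develops a Bayesian formulation rather than this deterministic objective:

```latex
% Illustrative objective combining a dynamic (temporal) penalty and a sparsity penalty.
\hat{x}_{1:T} \;=\; \arg\min_{x_{1:T}}\;
\sum_{t=1}^{T} \bigl\| y_t - H x_t \bigr\|_2^2
\;+\; \lambda_{\mathrm{dyn}} \sum_{t=2}^{T} \bigl\| x_t - F x_{t-1} \bigr\|_2^2
\;+\; \lambda_{\mathrm{sp}} \sum_{t=1}^{T} \bigl\| x_t \bigr\|_1
```

    Here $y_t$ are body-surface voltage measurements, $H$ the forward (lead-field) operator, $x_t$ the transmural action potentials at time $t$, and $F$ a model of the temporal dynamics; the two penalty weights trade off the dynamic and sparsity structures.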