
    Uncertainty Quantification and Reduction in Cardiac Electrophysiological Imaging

    Cardiac electrophysiological (EP) imaging involves solving an inverse problem that infers cardiac electrical activity from body-surface electrocardiography data on a physical domain defined by the body torso. To avoid unreasonable solutions that may fit the data, this inference is often guided by data-independent prior assumptions about different properties of cardiac electrical sources as well as the physical domain. However, these prior assumptions may involve errors and uncertainties that could affect the inference accuracy. For example, common prior assumptions on the source properties, such as fixed spatial and/or temporal smoothness or sparseness assumptions, may not necessarily match the true source property at different conditions, leading to uncertainties in the inference. Furthermore, prior assumptions on the physical domain, such as the anatomy and tissue conductivity of different organs in the thorax model, represent an approximation of the physical domain, introducing errors to the inference. To determine the robustness of the EP imaging systems for future clinical practice, it is important to identify these errors/uncertainties and assess their impact on the solution. This dissertation focuses on the quantification and reduction of the impact of uncertainties caused by prior assumptions/models on cardiac source properties as well as anatomical modeling uncertainties on the EP imaging solution. To assess the effect of fixed prior assumptions/models about cardiac source properties on the solution of EP imaging, we propose a novel yet simple Lp-norm regularization method for volumetric cardiac EP imaging. This study reports the necessity of an adaptive prior model (rather than fixed model) for constraining the complex spatiotemporally changing properties of the cardiac sources. 
We then propose a multiple-model Bayesian approach to cardiac EP imaging that employs a continuous combination of prior models, each reflecting a specific spatial property of volumetric sources. The 3D source estimate is then obtained as a weighted combination of solutions across all models. By including a continuous combination of prior models, our proposed method reduces the chance of mismatch between prior models and true source properties, which in turn enhances the robustness of the EP imaging solution. To quantify the impact of anatomical modeling uncertainties on the EP imaging solution, we propose a systematic statistical framework. Built on statistical shape modeling and the unscented transform, our method quantifies anatomical modeling uncertainties and establishes their relation to the EP imaging solution. Applied to anatomical models generated from different image resolutions and different segmentations, it reports the robustness of the EP imaging solution to these anatomical shape-detail variations. We then propose a simplified anatomical model for the heart that incorporates only certain subject-specific anatomical parameters, while discarding local shape details. Requiring fewer resources and less processing for successful EP imaging, this simplified model offers a simple, clinically compatible anatomical modeling workflow for EP imaging systems. Different components of our proposed methods are validated through a comprehensive set of synthetic and real-data experiments, including various typical pathological conditions and/or diagnostic procedures, such as myocardial infarction and pacing. Overall, the methods presented in this dissertation for the quantification and reduction of uncertainties in cardiac EP imaging enhance the robustness of EP imaging, helping to close the gap between EP imaging in research and its clinical application.
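In the linear-Gaussian special case, a weighted combination of solutions across prior models reduces to Bayesian model averaging with closed-form evidence. The sketch below is a toy illustration only: the forward matrix, dimensions, and the family of exponential smoothness priors (indexed by a length scale) are invented for the example and are not the dissertation's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward problem y = H x + noise (hypothetical dimensions).
n_meas, n_src = 40, 100
H = rng.standard_normal((n_meas, n_src))
x_true = np.zeros(n_src)
x_true[20:30] = 1.0                      # a localized "source"
sigma = 0.05
y = H @ x_true + sigma * rng.standard_normal(n_meas)

def gaussian_prior_cov(length_scale):
    """Smoothness prior: covariance decaying with source-index distance."""
    idx = np.arange(n_src)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / length_scale)

log_evidences, posterior_means = [], []
for ls in [1.0, 5.0, 20.0]:              # candidate prior models
    P = gaussian_prior_cov(ls)
    S = H @ P @ H.T + sigma**2 * np.eye(n_meas)   # marginal covariance of y
    _, logdet = np.linalg.slogdet(S)
    log_ev = -0.5 * (y @ np.linalg.solve(S, y) + logdet
                     + n_meas * np.log(2 * np.pi))
    log_evidences.append(log_ev)
    # Posterior mean under this prior: P H^T (H P H^T + sigma^2 I)^-1 y
    posterior_means.append(P @ H.T @ np.linalg.solve(S, y))

# Weight each model's solution by its normalized evidence.
w = np.exp(np.array(log_evidences) - max(log_evidences))
w /= w.sum()
x_bma = sum(wi * mi for wi, mi in zip(w, posterior_means))
```

Mismatched length scales receive low evidence automatically, which is the sense in which averaging over a family of priors hedges against a wrong fixed prior.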

    Bayesian Inference with Combined Dynamic and Sparsity Models: Application in 3D Electrophysiological Imaging

    Data-driven inference is widely encountered in various scientific domains to convert observed measurements into information about a system that cannot be directly observed. Despite quickly developing sensor and imaging technologies, in many domains data collection remains an expensive endeavor due to financial and physical constraints. To overcome the limits in data and to reduce the demand for expensive data collection, it is important to incorporate prior information that places the data-driven inference in a domain-relevant context and improves its accuracy. Two sources of assumptions have been used successfully in many inverse problem applications. One is the temporal dynamics of the system (dynamic structure). The other is the low-dimensional structure of a system (sparsity structure). In existing work, these two structures have often been explored separately, while in most high-dimensional dynamic systems they coexist and contain complementary information. In this work, our main focus is to build a robust inference framework that combines dynamic and sparsity constraints. The driving application in this work is a biomedical inverse problem of electrophysiological (EP) imaging, which noninvasively and quantitatively reconstructs transmural action potentials from body-surface voltage data with the goal of improving cardiac disease prevention, diagnosis, and treatment. The general framework can be extended to a variety of applications that deal with the inference of high-dimensional dynamic systems.
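As a toy illustration of combining the two structures, the sketch below runs a proximal-gradient (ISTA) update at each time step that penalizes both deviation from a dynamic prediction and the L1 norm of the state. The dimensions, linear dynamics matrix, and penalty weights are invented for the example and are not taken from the work.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_src, T = 20, 50, 30
H = rng.standard_normal((n_meas, n_src)) / np.sqrt(n_meas)  # toy forward model
A = 0.95 * np.eye(n_src)                 # hypothetical slow linear dynamics

# Sparse, slowly decaying ground-truth state sequence.
X = np.zeros((n_src, T))
X[10:15, 0] = 1.0
for t in range(1, T):
    X[:, t] = A @ X[:, t - 1]
Y = H @ X + 0.01 * rng.standard_normal((n_meas, T))

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dynamic_sparse_step(y, x_pred, lam_dyn=1.0, lam_sparse=0.02, n_iter=200):
    """ISTA for min_x ||y - Hx||^2 + lam_dyn*||x - x_pred||^2
       + lam_sparse*||x||_1: a dynamic prior plus a sparsity prior."""
    step = 1.0 / (2.0 * (np.linalg.norm(H, 2) ** 2 + lam_dyn))
    x = x_pred.copy()
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ x - y) + 2.0 * lam_dyn * (x - x_pred)
        x = soft_threshold(x - step * grad, step * lam_sparse)
    return x

X_hat = np.zeros_like(X)
x = np.zeros(n_src)
for t in range(T):
    x = dynamic_sparse_step(Y[:, t], A @ x)   # predict with A, then correct
    X_hat[:, t] = x
```

Each term pulls the estimate in a complementary direction: the dynamic term toward the prediction from the previous state, the sparsity term toward a low-dimensional support.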

    On Learning and Generalization to Solve Inverse Problem of Electrophysiological Imaging

    In this dissertation, we are interested in solving a linear inverse problem: inverse electrophysiological (EP) imaging, where our objective is to computationally reconstruct personalized cardiac electrical signals from body-surface electrocardiogram (ECG) signals. EP imaging has shown promise in the diagnosis and treatment planning of cardiac dysfunctions such as atrial flutter, atrial fibrillation, ischemia, infarction, and ventricular arrhythmia. Towards this goal, we frame it as a problem of learning a function from the domain of measurements to signals. Depending upon the assumptions, we present two classes of solutions: 1) Bayesian inference in a probabilistic graphical model, and 2) learning from samples using deep networks. In both of these approaches, we emphasize learning the inverse function with good generalization ability, which becomes a main theme of the dissertation. In a Bayesian framework, we argue that this translates to appropriately integrating different sources of knowledge into a common probabilistic graphical model framework and using it for patient-specific signal estimation through Bayesian inference. In the learning-from-samples setting, this translates to designing a deep network with good generalization ability, where good generalization refers to the ability to reconstruct inverse EP signals in a distribution of interest (which could very well be outside the sample distribution used during training). By drawing ideas from different areas like functional analysis (e.g. Fenchel duality), variational inference (e.g. variational Bayes), and deep generative modeling (e.g. the variational autoencoder), we show how we can incorporate different prior knowledge in a principled manner in a probabilistic graphical model framework to obtain a good inverse solution with generalization ability. Similarly, to improve the generalization of deep networks learning from samples, we use ideas from information theory (e.g.
information bottleneck), learning theory (e.g. analytical learning theory), adversarial training, complexity theory, and functional analysis (e.g. RKHS). We test our algorithms on synthetic data and on real data from patients who had undergone catheter ablation, and show that our approach yields significant improvement over existing methods. Towards the end of the dissertation, we investigate general questions on the generalization and stabilization of adversarial training of deep networks and try to understand the role of smoothness and function-space complexity in answering those questions. We conclude by identifying limitations of the proposed methods, areas of further improvement, and open questions that are specific to inverse electrophysiological imaging as well as the broader, encompassing theory of learning and generalization.

    The Application of Computer Techniques to ECG Interpretation

    This book presents some of the latest available information on automated ECG analysis, written by many of the leading researchers in the field. It contains a historical introduction, an outline of the latest international standards for signal processing and communications, and an exciting variety of studies on electrophysiological modelling, ECG imaging, artificial intelligence applied to resting and ambulatory ECGs, body surface mapping, big data in ECG-based prediction, enhanced reliability of patient monitoring, and atrial abnormalities on the ECG. It provides an extremely valuable contribution to the field.

    Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activity is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness of the reconstruction problem and its lack of a unique solution. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffuse or too scattered to reflect the complex spatial structure of the current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of the source distribution is important for revealing potential disruptions to normal heart excitation.
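One standard way to handle such an Lp-norm penalty (the abstract does not specify the authors' solver) is iteratively reweighted least squares, where the Lp term is repeatedly majorized by a quadratic. The toy forward matrix and source below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_irls(H, y, p=1.5, lam=1e-2, n_iter=30, eps=1e-8):
    """Iteratively reweighted least squares (IRLS) for
       min_x ||y - Hx||^2 + lam * sum_i |x_i|^p,
       a common generic scheme for Lp penalties."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]   # min-norm start
    for _ in range(n_iter):
        # Quadratic majorizer of |x_i|^p has weight (p/2)*(x_i^2+eps)^(p/2-1).
        w = (p / 2.0) * (x ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
    return x

# Toy underdetermined reconstruction of an extended "source" patch.
H = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[30:45] = 1.0
y = H @ x_true + 0.01 * rng.standard_normal(40)

x_p = lp_irls(H, y, p=1.5)
```

Varying p between 1 and 2 interpolates between the focal behavior of L1 and the smeared behavior of L2, which is the trade-off the abstract describes.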

    Doctor of Philosophy

    Inverse Electrocardiography (ECG) aims to noninvasively estimate the electrophysiological activity of the heart from the voltages measured at the body surface, with promising clinical applications in diagnosis and therapy. The main challenge of this emerging technique lies in its mathematical foundation: an inverse source problem governed by partial differential equations (PDEs) which is severely ill-conditioned. Essential to the success of inverse ECG are computational methods that reliably achieve accurate inverse solutions while harnessing the ever-growing complexity and realism of the bioelectric simulation. This dissertation focuses on the formulation, optimization, and solution of the inverse ECG problem based on finite element methods, consisting of two research thrusts. The first thrust explores the optimal finite element discretization specifically oriented towards the inverse ECG problem. In contrast, most existing discretization strategies are designed for forward problems and may become inappropriate for the corresponding inverse problems. Based on a Fourier analysis of how discretization relates to ill-conditioning, this work proposes refinement strategies that optimize approximation accuracy of the inverse ECG problem while mitigating its ill-conditioning. To fulfill these strategies, two refinement techniques are developed: one uses hybrid-shaped finite elements whereas the other adapts high-order finite elements. The second research thrust involves a new methodology for inverse ECG solutions called PDE-constrained optimization, an optimization framework that flexibly allows convex objectives and various physically-based constraints.
This work features three contributions: (1) fulfilling optimization in the continuous space, (2) formulating rigorous finite element solutions, and (3) fulfilling subsequent numerical optimization by a primal-dual interior-point method tailored to the given optimization problem's specific algebraic structure. The efficacy of this new method is shown by its application to localization of cardiac ischemic disease, in which the method, under realistic settings, achieves promising solutions to a previously intractable inverse ECG problem involving the bidomain heart model. In summary, this dissertation advances the computational research of inverse ECG, making it evolve toward an image-based, patient-specific modality for biomedical research.

    Non-invasive fetal electrocardiogram: analysis and interpretation

    High-risk pregnancies are becoming more and more prevalent because of the progressively higher age at which women get pregnant. Nowadays about twenty percent of all pregnancies are complicated to some degree, for instance because of preterm delivery, fetal oxygen deficiency, fetal growth restriction, or hypertension. Early detection of these complications is critical to permit timely medical intervention, but is hampered by strong limitations of existing monitoring technology. This technology is either only applicable in hospital settings, obtrusive, or incapable of providing, in a robust way, reliable information for diagnosing the well-being of the fetus. The most prominent method for monitoring the fetal health condition is monitoring of heart rate variability in response to activity of the uterus (cardiotocography; CTG). Generally, in obstetrical practice, the heart rate is determined in either of two ways: unobtrusively with a (Doppler) ultrasound probe on the maternal abdomen, or obtrusively with an invasive electrode fixed onto the fetal scalp. The first method is relatively inaccurate but is non-invasive and applicable in all stages of pregnancy. The latter method is far more accurate but can only be applied after rupture of the membranes and sufficient dilatation, restricting its applicability to only the very last phase of pregnancy. Besides these accuracy and applicability issues, the use of CTG in obstetrical practice has another limitation: despite its high sensitivity, the specificity of CTG is relatively low. This means that in most cases of fetal distress the CTG reveals specific patterns of heart rate variability, but that these specific patterns can also be encountered in healthy fetuses, complicating accurate diagnosis of the fetal condition. Hence, a prerequisite for preventing unnecessary interventions based on CTG alone is the inclusion of additional information in diagnostics.
Monitoring of the fetal electrocardiogram (ECG), as a supplement to CTG, has been demonstrated to have added value for monitoring the fetal health condition. Unfortunately, the application of the fetal ECG in obstetrical diagnostics is limited because at present the fetal ECG can only be measured reliably by means of an invasive scalp electrode. To overcome this limited applicability, many attempts have been made to record the fetal ECG non-invasively from the maternal abdomen, but these attempts have not yet led to approaches that permit widespread clinical application. One key difficulty is that the signal-to-noise ratio (SNR) of the transabdominal ECG recordings is relatively low. Perhaps even more importantly, the abdominal ECG recordings yield ECG signals whose morphology depends strongly on the orientation of the fetus within the maternal uterus; for any fetal orientation, the ECG morphology is different. This renders correct clinical interpretation of the recorded ECG signals complicated, if not impossible. This thesis aims to address these difficulties and to provide new contributions to the clinical interpretation of the fetal ECG. First, the SNR of the recorded signals is enhanced through a series of signal processing steps that exploit specific, a priori known properties of the fetal ECG. More particularly, the dominant interference (i.e. the maternal ECG) is suppressed by exploiting the absence of temporal correlation between the maternal and fetal ECG. In this suppression, the maternal ECG complex is dynamically segmented into individual ECG waves and each of these waves is estimated by averaging corresponding waves from preceding ECG complexes. The maternal ECG template generated by combining the estimated waves is subsequently subtracted from the original signal to yield a non-invasive recording in which the maternal ECG has been suppressed.
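A much-simplified sketch of this template-subtraction idea follows: it averages one fixed window around each maternal R peak into a template and subtracts it at every beat, whereas the method described above segments each complex into individual waves. The synthetic signal, rates, and amplitudes are illustrative assumptions only.

```python
import numpy as np

def subtract_maternal_template(signal, r_peaks, half_win=60):
    """Average the beats around the maternal R peaks into a template
    and subtract it at each peak (fixed-window simplification)."""
    ok = [r for r in r_peaks
          if r - half_win >= 0 and r + half_win <= len(signal)]
    beats = np.stack([signal[r - half_win:r + half_win] for r in ok])
    template = beats.mean(axis=0)
    cleaned = signal.astype(float).copy()
    for r in ok:
        cleaned[r - half_win:r + half_win] -= template
    return cleaned

# Synthetic abdominal recording: large maternal beats plus a smaller,
# faster "fetal" component (shapes and rates are made up).
fs, dur = 500, 8
n = fs * dur
pulse = np.exp(-0.5 * (np.arange(-60, 60) / 8.0) ** 2)   # beat shape
abd = np.zeros(n)
m_peaks = np.arange(fs, n - fs, int(0.8 * fs))           # ~75 bpm maternal
for r in m_peaks:
    abd[r - 60:r + 60] += 1.0 * pulse
f_peaks = np.arange(fs + 150, n - fs, int(0.45 * fs))    # ~133 bpm fetal
for r in f_peaks:
    abd[r - 60:r + 60] += 0.2 * pulse
cleaned = subtract_maternal_template(abd, m_peaks)
```

Because the maternal and fetal rhythms are uncorrelated, the fetal beats average out of the maternal template, so subtraction removes the maternal component while largely preserving the fetal one.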
This suppression method is demonstrated to be more accurate than existing methods. Other interferences and noise are (partly) suppressed by exploiting the quasiperiodicity of the fetal ECG through averaging consecutive ECG complexes, or by exploiting the spatial correlation of the ECG. The averaging of several consecutive ECG complexes, synchronized on their QRS complex, enhances the SNR of the ECG but can also suppress morphological variations in the ECG that are clinically relevant. The number of ECG complexes included in the average hence constitutes a trade-off between SNR enhancement on the one hand and loss of morphological variability on the other hand. To relax this trade-off, in this thesis a method is presented that can adaptively estimate the number of ECG complexes included in the average. In cases of morphological variations, this number is decreased, ensuring that the variations are not suppressed. In cases of no morphological variability, this number is increased to ensure adequate SNR enhancement. The further suppression of noise by exploiting the spatial correlation of the ECG is based on the fact that all ECG signals recorded at several locations on the maternal abdomen originate from the same electrical source, namely the fetal heart. The electrical activity of the fetal heart at any point in time can be modeled as a single electrical field vector with stationary origin. This vector varies in both amplitude and orientation in three-dimensional space during the cardiac cycle, and the time-path described by this vector is referred to as the fetal vectorcardiogram (VCG). In this model, the abdominal ECG constitutes the projection of the VCG onto the vector that describes the position of the abdominal electrode with respect to a reference electrode. This means that when the VCG is known, any desired ECG signal can be calculated. Equivalently, this also means that when enough ECG signals (i.e.
at least three independent signals) are known, the VCG can be calculated. By using more than three ECG signals for the calculation of the VCG, redundancy in the ECG signals can be exploited for additional noise suppression. Unfortunately, when calculating the fetal VCG from the ECG signals recorded on the maternal abdomen, the distance between the fetal heart and the electrodes is not the same for each electrode. Because the amplitude of the ECG signals decreases with propagation to the abdominal surface, these different distances yield a specific, unknown attenuation for each ECG signal. Existing methods for estimating the VCG operate with a fixed linear combination of the ECG signals and, hence, cannot account for variations in signal attenuation. To overcome this problem and to account for fetal movement, in this thesis a method is presented that estimates both the VCG and, to some extent, also the signal attenuation. This is done by determining for which VCG and signal attenuation the joint probability over both these variables is maximal given the observed ECG signals. The underlying joint probability distribution is determined by assuming the ECG signals to originate from scaled VCG projections and additive noise. With this method, a VCG tailored to each specific patient is determined. Compared with the fixed linear combinations, the presented method estimates the VCG significantly more accurately. Besides describing the electrical activity of the fetal heart in three dimensions, the fetal VCG also provides a framework to account for the fetal orientation in the uterus. This framework enables the detection of the fetal orientation over time and allows for rotating the fetal VCG towards a prescribed orientation. From the normalized fetal VCG obtained in this manner, standardized ECG signals can be calculated, facilitating correct clinical interpretation of the non-invasive fetal ECG signals.
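The projection model and the least-squares recovery of the VCG from redundant leads can be sketched as follows; the lead vectors and the simulated VCG loop below are invented for illustration (a fixed linear combination, not the thesis's joint attenuation-and-VCG estimator).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical lead vectors: the projection direction of each abdominal
# electrode with respect to a reference, rows of shape (n_leads, 3).
L = rng.standard_normal((6, 3))

# Simulated fetal VCG: a 3-D loop over one cardiac cycle of T samples.
T = 200
theta = np.linspace(0, 2 * np.pi, T)
vcg_true = np.stack([np.sin(theta),
                     0.5 * np.sin(2 * theta),
                     0.2 * np.cos(theta)])

# Each abdominal ECG lead is a projection of the VCG plus noise.
ecg = L @ vcg_true + 0.01 * rng.standard_normal((6, T))

# With more than three leads, the VCG follows from least squares, and the
# redundancy averages down the measurement noise.
vcg_hat, *_ = np.linalg.lstsq(L, ecg, rcond=None)
```

Once `vcg_hat` is known, any desired lead can be synthesized as a projection `l @ vcg_hat` for a chosen direction `l`, which is the model property the text exploits for standardizing the fetal ECG.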
The potential of the presented approach (i.e. the combination of all methods described above) is illustrated for three different clinical cases. In the first case, the fetal ECG is analyzed to demonstrate that the electrical behavior of the fetal heart differs significantly from that of the adult heart. In fact, this difference is so substantial that diagnostics based on the fetal ECG should be based on different guidelines than those for adult ECG diagnostics. In the second case, the fetal ECG is used to visualize the origin of fetal supraventricular extrasystoles, and the results suggest that the fetal ECG might in the future serve as a diagnostic tool for relating fetal arrhythmia to congenital heart diseases. In the last case, the non-invasive fetal ECG is compared to the invasively recorded fetal ECG to gauge the SNR of the transabdominal recordings and to demonstrate the suitability of the non-invasive fetal ECG in clinical applications that, as yet, are only possible with the invasive fetal ECG.

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    http://ea2013.inria.fr//proceedings.pdf
    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms existing deterministic and stochastic approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
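The evolutionary half of such a hybrid can be sketched with a minimal DE/rand/1/bin loop on a highly multimodal benchmark; the interval branch-and-bound component, which provides the certificate of optimality, is omitted here, and all parameter values are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=40, F=0.7, CR=0.9,
                           n_gen=300, seed=0):
    """Minimal DE/rand/1/bin sketch (no interval branch and bound,
    so no proof of optimality is produced)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals (rand/1).
            a, b, c = pop[rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]

def rastrigin(x):                        # highly multimodal benchmark
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

In a Charibde-style hybrid, the best evaluation found by such a loop would be shared with the branch-and-bound search as an upper bound, letting interval propagation prune boxes that cannot contain a better solution.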