
    Uncertainty Quantification and Reduction in Cardiac Electrophysiological Imaging

    Cardiac electrophysiological (EP) imaging involves solving an inverse problem that infers cardiac electrical activity from body-surface electrocardiography data on a physical domain defined by the torso. To rule out unreasonable solutions that may nevertheless fit the data, this inference is often guided by data-independent prior assumptions about properties of the cardiac electrical sources as well as the physical domain. However, these prior assumptions may involve errors and uncertainties that affect the accuracy of the inference. For example, common prior assumptions on source properties, such as fixed spatial and/or temporal smoothness or sparseness, may not match the true source properties under different conditions, leading to uncertainties in the inference. Furthermore, prior assumptions on the physical domain, such as the anatomy and tissue conductivity of the different organs in the thorax model, represent an approximation of the physical domain, introducing errors into the inference. To determine the robustness of EP imaging systems for future clinical practice, it is important to identify these errors and uncertainties and assess their impact on the solution. This dissertation focuses on quantifying and reducing the impact, on the EP imaging solution, of uncertainties caused by prior assumptions and models of the cardiac source properties, as well as of anatomical modeling uncertainties. To assess the effect of fixed prior assumptions about cardiac source properties on the solution of EP imaging, we propose a novel yet simple Lp-norm regularization method for volumetric cardiac EP imaging. This study demonstrates the necessity of an adaptive prior model (rather than a fixed model) for constraining the complex, spatiotemporally changing properties of the cardiac sources.
    We then propose a multiple-model Bayesian approach to cardiac EP imaging that employs a continuous combination of prior models, each reflecting a specific spatial property of the volumetric sources. The 3D source estimate is then obtained as a weighted combination of solutions across all models. By including a continuous combination of prior models, our proposed method reduces the chance of a mismatch between the prior models and the true source properties, which in turn enhances the robustness of the EP imaging solution. To quantify the impact of anatomical modeling uncertainties on the EP imaging solution, we propose a systematic statistical framework. Built on statistical shape modeling and the unscented transform, our method quantifies anatomical modeling uncertainties and establishes their relation to the EP imaging solution. Applied to anatomical models generated from different image resolutions and different segmentations, it reports the robustness of the EP imaging solution to these variations in anatomical shape detail. We then propose a simplified anatomical model of the heart that incorporates only certain subject-specific anatomical parameters while discarding local shape details. Requiring fewer resources and less processing for successful EP imaging, this simplified model offers a simple, clinically compatible anatomical modeling workflow for EP imaging systems. The different components of our proposed methods are validated through a comprehensive set of synthetic and real-data experiments, covering typical pathological conditions and diagnostic procedures such as myocardial infarction and pacing. Overall, the methods presented in this dissertation for the quantification and reduction of uncertainties in cardiac EP imaging enhance its robustness, helping to close the gap between EP imaging in research and its clinical application.
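    The unscented-transform step referenced above can be sketched in a few lines. Everything below is illustrative: a toy two-parameter input and an arbitrary nonlinear map stand in for the shape-parameter-to-solution forward model, and is not the dissertation's actual pipeline.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear map f
    using 2n+1 deterministically chosen sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)        # matrix square root
    pts = [mean] + [mean + root[:, i] for i in range(n)] \
                 + [mean - root[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))    # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(p) for p in pts])                # forward evaluations
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# toy stand-in: two "shape parameters" pushed through a nonlinear forward map
f = lambda x: np.array([np.sin(x[0]) + x[1], x[0] * x[1]])
mu = np.array([0.3, 1.0])
P = np.diag([0.01, 0.04])                             # input (shape) uncertainty
m, C = unscented_transform(mu, P, f)                  # output mean and covariance
```

    The appeal of the transform in this setting is that it needs only 2n+1 forward evaluations, which matters when each evaluation is a full EP imaging solve.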

    Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activity remains limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness of the reconstruction problem and its lack of a unique solution. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffuse or too scattered to reflect the complex spatial structure of the current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm constraint to bridge the gap between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources of increasing extent. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the post-infarction scar border. This ability to preserve the spatial structure of the source distribution is important for revealing potential disruptions to normal heart excitation.
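    One common way to solve such an Lp-regularized least-squares problem is iteratively reweighted least squares (IRLS). The sketch below is a generic illustration with an assumed value of p and a random toy system; it is not the paper's specific solver, forward model, or choice of p.

```python
import numpy as np

def lp_regularized_solve(H, y, p=1.2, lam=1e-2, iters=50, eps=1e-6):
    """Approximately solve  min_x ||H x - y||^2 + lam * ||x||_p^p
    by iteratively reweighted least squares: each iterate solves a
    weighted Tikhonov problem with weights w_i = p * |x_i|^(p - 2)."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]          # least-squares start
    for _ in range(iters):
        w = p * np.maximum(np.abs(x), eps) ** (p - 2)  # eps avoids division by 0
        x = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
    return x

# toy underdetermined system with a few active "sources"
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.0, -2.0, 1.5]
y = H @ x_true
x_hat = lp_regularized_solve(H, y, p=1.2)
```

    Varying p trades off between the diffuse solutions favored by L2 and the focal ones favored by L1, which is the balance the abstract describes.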

    Doctor of Philosophy

    Inverse electrocardiography (ECG) aims to noninvasively estimate the electrophysiological activity of the heart from the voltages measured at the body surface, with promising clinical applications in diagnosis and therapy. The main challenge of this emerging technique lies in its mathematical foundation: an inverse source problem governed by partial differential equations (PDEs) that is severely ill-conditioned. Essential to the success of inverse ECG are computational methods that reliably achieve accurate inverse solutions while harnessing the ever-growing complexity and realism of bioelectric simulation. This dissertation focuses on the formulation, optimization, and solution of the inverse ECG problem based on finite element methods, and consists of two research thrusts. The first thrust explores finite element discretization oriented specifically toward the inverse ECG problem; most existing discretization strategies, by contrast, are designed for forward problems and may be inappropriate for the corresponding inverse problems. Based on a Fourier analysis of how discretization relates to ill-conditioning, this work proposes refinement strategies that optimize the approximation accuracy of the inverse ECG problem while mitigating its ill-conditioning. To realize these strategies, two refinement techniques are developed: one uses hybrid-shaped finite elements, whereas the other adapts high-order finite elements. The second research thrust develops a new methodology for inverse ECG solutions called PDE-constrained optimization, a framework that flexibly allows convex objectives and various physically based constraints.
    This work features three contributions: (1) formulating the optimization in the continuous space, (2) deriving rigorous finite element solutions, and (3) carrying out the subsequent numerical optimization with a primal-dual interior-point method tailored to the specific algebraic structure of the given optimization problem. The efficacy of this new method is shown by its application to the localization of cardiac ischemic disease, in which the method, under realistic settings, achieves promising solutions to a previously intractable inverse ECG problem involving the bidomain heart model. In summary, this dissertation advances the computational research of inverse ECG, helping it evolve toward an image-based, patient-specific modality for biomedical research.
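    The ill-conditioning at the heart of the problem can be illustrated with a toy smoothing operator standing in for the torso transfer matrix. The Gaussian-blur matrix, noise level, and regularization weight below are all assumptions chosen for illustration, not values from the dissertation.

```python
import numpy as np

# Hypothetical 1-D forward operator: a Gaussian blur matrix standing in for
# the torso transfer matrix; its rapid singular-value decay is the source
# of the ill-conditioning discussed above.
n = 60
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                     # row-normalized blur

x_true = np.sin(2 * np.pi * t / n)                    # smooth "cardiac source"
y = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]                                   # enormous condition number

# an unregularized (pseudo-inverse) solve amplifies the measurement noise
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]
# zeroth-order Tikhonov regularization damps that amplification
lam = 1e-3
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

    The same phenomenon motivates the dissertation's discretization and regularization strategies: small singular values of the forward map turn small data noise into large solution errors.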

    On Learning and Generalization to Solve Inverse Problem of Electrophysiological Imaging

    In this dissertation, we are interested in solving a linear inverse problem: inverse electrophysiological (EP) imaging, where the objective is to computationally reconstruct personalized cardiac electrical signals from body-surface electrocardiogram (ECG) signals. EP imaging has shown promise in the diagnosis and treatment planning of cardiac dysfunctions such as atrial flutter, atrial fibrillation, ischemia, infarction, and ventricular arrhythmia. Toward this goal, we frame it as the problem of learning a function from the domain of measurements to signals. Depending on the assumptions made, we present two classes of solutions: (1) Bayesian inference in a probabilistic graphical model, and (2) learning from samples using deep networks. In both approaches, we emphasize learning the inverse function with good generalization ability, which becomes the main theme of the dissertation. In the Bayesian framework, we argue that this translates to appropriately integrating different sources of knowledge into a common probabilistic graphical model and using it for patient-specific signal estimation through Bayesian inference. In the learning-from-samples setting, it translates to designing a deep network with good generalization ability, where good generalization refers to the ability to reconstruct inverse EP signals in a distribution of interest (which may well lie outside the sample distribution used during training). Drawing on ideas from functional analysis (e.g., Fenchel duality), variational inference (e.g., variational Bayes), and deep generative modeling (e.g., the variational autoencoder), we show how different kinds of prior knowledge can be incorporated in a principled manner into a probabilistic graphical model framework to obtain a good inverse solution with generalization ability. Similarly, to improve the generalization of deep networks learning from samples, we use ideas from information theory (e.g., the information bottleneck), learning theory (e.g., analytical learning theory), adversarial training, complexity theory, and functional analysis (e.g., RKHS). We test our algorithms on synthetic data and on real data from patients who had undergone catheter ablation in the clinic, and show that our approach yields significant improvements over existing methods. Toward the end of the dissertation, we investigate general questions about the generalization and stabilization of adversarial training of deep networks and try to understand the role of smoothness and function-space complexity in answering them. We conclude by identifying limitations of the proposed methods, areas for further improvement, and open questions, both specific to inverse electrophysiological imaging and in the broader theory of learning and generalization.
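    The linear-Gaussian building block behind such Bayesian formulations has a closed-form posterior. The sketch below uses an assumed smoothness prior and a random stand-in for the ECG lead-field matrix; it illustrates the conjugate-Gaussian core of the approach, not the dissertation's full graphical model.

```python
import numpy as np

def gaussian_posterior(H, y, prior_cov, noise_var):
    """Closed-form posterior for the linear-Gaussian model
    x ~ N(0, prior_cov),  y = H x + N(0, noise_var * I)."""
    precision = H.T @ H / noise_var + np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(precision)
    post_mean = post_cov @ (H.T @ y) / noise_var
    return post_mean, post_cov

rng = np.random.default_rng(0)
n, m = 30, 15
t = np.arange(n)
# assumed smoothness prior: squared-exponential covariance over source sites
prior_cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2) + 1e-6 * np.eye(n)
H = rng.standard_normal((m, n))                       # stand-in lead-field matrix
x_true = np.sin(2 * np.pi * t / n)                    # smooth ground truth
y = H @ x_true + 0.05 * rng.standard_normal(m)
mean, cov = gaussian_posterior(H, y, prior_cov, 0.05**2)
```

    In this view, "integrating different sources of knowledge" amounts to choosing the prior; the posterior covariance then quantifies how much the data actually constrain the reconstruction.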

    Integrated Cardiac Electromechanics: Modeling and Personalization

    Cardiac disease remains the leading cause of morbidity and mortality in the world. A variety of heart-diagnosis techniques developed over the last century fall broadly into two groups: the first evaluates the electrical function of the heart using electrophysiological data such as the electrocardiogram (ECG), while the second assesses the mechanical function of the heart through medical imaging data. Nevertheless, the heart is an integrated electromechanical organ whose cyclic pumping arises from the synergy of its electrical and mechanical functions: the heart must first be electrically excited in order to contract, and at the same time its electrical function experiences feedback from mechanical contraction. This interdependence means that neither electrical nor mechanical function alone can completely reflect the pathophysiological condition of the heart. The aim of this thesis is to work toward an integrated framework for heart diagnosis that evaluates electrical and mechanical function simultaneously. The basic rationale is to obtain a quantitative interpretation of a subject-specific heart system by combining an electromechanical heart model with individual clinical measurements of the heart. To this end, we first develop a biologically inspired mathematical model of the heart that provides a general, macroscopic description of cardiac electromechanics. The intrinsic electromechanical coupling arises from both excitation-induced contraction and deformation-induced mechano-electrical feedback. Then, as a first step toward a fully integrated electromechanical framework, we develop a model-based approach for investigating the effect of cardiac motion on noninvasive transmural imaging of cardiac electrophysiology.
    Specifically, we use the proposed heart model to obtain updated heart geometry through simulation, and then recover the electrical activity of the heart from body surface potential maps (BSPMs) by solving an optimization problem. Various simulations of the heart under healthy and abnormal conditions demonstrate the physiological plausibility of the proposed integrated electromechanical heart model. Moreover, this work demonstrates the effect of cardiac motion on the noninvasive estimation of cardiac electrophysiology and shows the importance of integrating cardiac electrical and mechanical function for heart diagnosis. This thesis also paves the way for noninvasive evaluation of cardiac electromechanics.
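    The excitation-to-contraction coupling described above can be caricatured at the single-cell level. The sketch below couples a FitzHugh-Nagumo excitation model to a first-order active-tension variable; it is a deliberately oversimplified stand-in for the thesis's macroscopic electromechanical model, with all parameters chosen for illustration, and it omits the mechano-electrical feedback direction.

```python
import numpy as np

def simulate_cell(T=500.0, dt=0.05):
    """FitzHugh-Nagumo excitation driving a first-order active-tension
    variable: a brief stimulus triggers one action potential, and tension
    rises while the cell is depolarized (all parameters illustrative)."""
    steps = int(T / dt)
    v, w, ta = -1.2, -0.6, 0.0          # voltage-like, recovery, active tension
    vs, tas = np.empty(steps), np.empty(steps)
    for i in range(steps):
        stim = 0.8 if i * dt < 5.0 else 0.0           # brief stimulus
        dv = v - v**3 / 3.0 - w + stim                # excitation dynamics
        dw = 0.08 * (v + 0.7 - 0.8 * w)               # slow recovery
        dta = 0.05 * (max(v, 0.0) - ta)               # tension tracks depolarization
        v, w, ta = v + dt * dv, w + dt * dw, ta + dt * dta
        vs[i], tas[i] = v, ta
    return vs, tas

vs, tas = simulate_cell()
```

    Even this toy reproduces the ordering the abstract relies on: electrical excitation leads, and mechanical tension follows with a delay.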

    Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis

    Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in the diagnosis and characterization of disease or to guide interventions. To be integrated into a clinical workflow, accurate and robust methods of analysis must be developed that can manage this increase in data. Recent improvements in inexpensive, commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time, and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI), and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization that solves the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems without the particular drawbacks of their discrete counterparts, graph cuts, which typically present metrication artefacts. Max-flow solvers generally produce robust results but are known to be computationally expensive, especially on large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization, smoothing the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed.
    We compare these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases, demonstrating highly accurate performance with low run times. The best-performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue, enabling fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance, thus facilitating deployment into clinical workflows. We evaluate all proposed algorithms in terms of run time, accuracy, repeatability, and errors arising from user interactions, and demonstrate that they outperform established methods in accuracy and repeatability while largely reducing run times through the use of GPU hardware.
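    The total-variation term used to regularize the deformation fields can be illustrated on a 1-D signal. The smoothed-TV gradient-descent sketch below is a generic toy (assumed smoothing parameter, weight, and step size), not the thesis's Gauss-Newton solver or its GPU implementation.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, iters=1000, step=0.05):
    """Gradient descent on  0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = x[i+1] - x[i] (a smoothed total-variation penalty)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d**2 + eps)     # derivative of the smoothed |d_i|
        grad = x - y                    # data-fidelity gradient
        grad[:-1] -= lam * g            # each d_i pulls on its two endpoints
        grad[1:] += lam * g
        x = x - step * grad
    return x

# piecewise-constant signal: TV flattens the noise while keeping the jump
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

    This edge-preserving behavior is why TV is attractive for deformation fields: it smooths noise without blurring genuine discontinuities.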

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1