
    Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

    Total variation (TV) is a powerful regularization method that has been widely applied in different imaging applications, but is difficult to apply to diffuse optical tomography (DOT) image reconstruction (inverse problem) due to complex and unstructured geometries, non-linearity of the data fitting and regularization terms, and non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization and the over-sparsifying effect of L1 regularization. The graph representation was found to out-perform the FEM method for low-resolution meshes, and the FEM method was found to be more accurate for high-resolution meshes. Comment: 24 pages, 11 figures. Revised version includes revised figures and improved clarity.
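
    To make the splitting concrete, the sketch below applies ADMM to a TV-regularized linear least-squares problem on a toy chain graph; the operator names (A, D), the penalty weights, and the synthetic data are illustrative assumptions, and the paper's actual DOT forward model is non-linear.

        import numpy as np

        def soft_threshold(v, t):
            """Element-wise shrinkage operator used in the z-update."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def graph_tv_admm(A, b, D, lam=0.1, rho=1.0, n_iter=200):
            """ADMM for min_x 0.5*||A x - b||^2 + lam*||D x||_1 (anisotropic graph TV)."""
            x = np.zeros(A.shape[1])
            z = np.zeros(D.shape[0])
            u = np.zeros(D.shape[0])
            Q = A.T @ A + rho * D.T @ D          # fixed quadratic system for the x-update
            for _ in range(n_iter):
                x = np.linalg.solve(Q, A.T @ b + rho * D.T @ (z - u))
                z = soft_threshold(D @ x + u, lam / rho)   # edge-wise shrinkage
                u = u + D @ x - z                          # scaled dual update
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n = 50
            # Toy chain graph: each row of D is the signed difference across one edge.
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
            x_true = np.zeros(n)
            x_true[20:30] = 1.0                  # piecewise-constant ground truth
            A = rng.standard_normal((80, n))
            b = A @ x_true + 0.01 * rng.standard_normal(80)
            print(np.round(graph_tv_admm(A, b, D, lam=0.5), 2))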

    Space-time adaptive solution of inverse problems with the discrete adjoint method

    Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not extensively benefited from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time-dependent, adaptive-grid, adaptive-step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the intergrid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided for the discontinuous Galerkin (DG) method. The adjoint model development is considerably simplified by decoupling the adaptive mesh refinement mechanism from the forward model solver, and by selectively applying automatic differentiation to individual algorithms. In forward models, discontinuous Galerkin discretizations can efficiently handle high orders of accuracy, h/p-refinement, and parallel computation. The analysis reveals that this approach, paired with Runge-Kutta time stepping, is well suited for the adaptive solution of inverse problems. The usefulness of discrete discontinuous Galerkin adjoints is illustrated on a two-dimensional adaptive data assimilation problem.
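
    The discrete adjoint idea, differentiating the time-stepping scheme itself rather than the continuous equations, can be sketched on a fixed-step forward Euler model; the decaying ODE, the terminal-misfit cost, and all parameter values below are assumptions standing in for the paper's adaptive discontinuous Galerkin / Runge-Kutta setting.

        import numpy as np

        def forward(x0, p, h, n_steps):
            """Forward Euler for dx/dt = -p*x; keep the trajectory for the adjoint sweep."""
            traj = [x0]
            x = x0
            for _ in range(n_steps):
                x = x + h * (-p * x)
                traj.append(x)
            return traj

        def cost_and_gradient(x0, p, h, n_steps, x_obs):
            traj = forward(x0, p, h, n_steps)
            xN = traj[-1]
            J = 0.5 * np.sum((xN - x_obs) ** 2)
            # Adjoint sweep: propagate dJ/dx backwards through the exact discrete steps.
            x_bar = xN - x_obs
            p_bar = 0.0
            for k in range(n_steps - 1, -1, -1):
                xk = traj[k]
                p_bar += h * (-xk) @ x_bar      # contribution of step k to dJ/dp
                x_bar = (1.0 - h * p) * x_bar   # transpose of the step Jacobian
            return J, p_bar

        if __name__ == "__main__":
            x0, x_obs = np.array([1.0, 2.0]), np.array([0.5, 1.0])
            h, n, p = 0.01, 100, 0.8
            J, g = cost_and_gradient(x0, p, h, n, x_obs)
            eps = 1e-6                           # finite-difference check of the gradient
            Jp, _ = cost_and_gradient(x0, p + eps, h, n, x_obs)
            print(g, (Jp - J) / eps)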

    Doctor of Philosophy

    Inverse Electrocardiography (ECG) aims to noninvasively estimate the electrophysiological activity of the heart from the voltages measured at the body surface, with promising clinical applications in diagnosis and therapy. The main challenge of this emerging technique lies in its mathematical foundation: an inverse source problem governed by partial differential equations (PDEs) which is severely ill-conditioned. Essential to the success of inverse ECG are computational methods that reliably achieve accurate inverse solutions while harnessing the ever-growing complexity and realism of bioelectric simulation. This dissertation focuses on the formulation, optimization, and solution of the inverse ECG problem based on finite element methods, and consists of two research thrusts. The first thrust explores optimal finite element discretization specifically oriented towards the inverse ECG problem; in contrast, most existing discretization strategies are designed for forward problems and may become inappropriate for the corresponding inverse problems. Based on a Fourier analysis of how discretization relates to ill-conditioning, this work proposes refinement strategies that optimize the approximation accuracy of the inverse ECG problem while mitigating its ill-conditioning. To fulfill these strategies, two refinement techniques are developed: one uses hybrid-shaped finite elements whereas the other adapts high-order finite elements. The second research thrust involves a new methodology for inverse ECG solutions called PDE-constrained optimization, an optimization framework that flexibly allows convex objectives and various physically-based constraints. This work features three contributions: (1) fulfilling optimization in the continuous space, (2) formulating rigorous finite element solutions, and (3) carrying out the subsequent numerical optimization with a primal-dual interior-point method tailored to the given optimization problem's specific algebraic structure. The efficacy of this new method is shown by its application to the localization of cardiac ischemic disease, in which the method, under realistic settings, achieves promising solutions to a previously intractable inverse ECG problem involving the bidomain heart model. In summary, this dissertation advances the computational research of inverse ECG, making it evolve toward an image-based, patient-specific modality for biomedical research.
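
    For context, the sketch below shows the classical unconstrained baseline that the dissertation moves beyond: Tikhonov regularization of a severely ill-conditioned linear transfer-matrix problem, solved through the SVD. The synthetic matrix A, the regularization weight, and the source profile are assumptions, not the dissertation's PDE-constrained interior-point formulation.

        import numpy as np

        def tikhonov_solve(A, y, lam):
            """Solve min_h ||A h - y||^2 + lam^2 ||h||^2 via the SVD of A."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            # Filter factors damp the singular components that ill-conditioning amplifies.
            filt = s / (s**2 + lam**2)
            return Vt.T @ (filt * (U.T @ y))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            # Synthetic, severely ill-conditioned map from heart sources to body-surface voltages.
            A = rng.standard_normal((120, 80)) @ np.diag(np.logspace(0, -6, 80)) \
                @ rng.standard_normal((80, 80))
            h_true = np.sin(np.linspace(0, 3 * np.pi, 80))
            y = A @ h_true + 1e-4 * rng.standard_normal(120)
            h_est = tikhonov_solve(A, y, lam=1e-3)
            print(np.linalg.norm(h_est - h_true) / np.linalg.norm(h_true))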

    Improving the forward model for electrical impedance tomography of brain function through rapid generation of subject specific finite element models

    Electrical Impedance Tomography (EIT) is a non-invasive imaging method which allows the internal electrical impedance of any conductive object to be imaged by means of current injection and surface voltage measurements through an array of externally applied electrodes. Successful generation of the image requires simulating the current injection patterns on either an analytical or a numerical model of the domain under examination, known as the forward model, and using the resulting voltage data in the inverse solution from which images of conductivity changes can be constructed. Recent research strongly indicates that geometric and anatomical conformance of the forward model to the subject under investigation significantly affects the quality of the images. This thesis focuses mainly on EIT of brain function and describes a novel approach for the rapid generation of patient- or subject-specific finite element models for use as the forward model. After an introduction to the topic, methods of generating accurate finite element (FE) models using commercially available Computer-Aided Design (CAD) tools are described; such methods, though effective and successful, are shown to be inappropriate for time-critical clinical use. The feasibility of warping or morphing a finite element mesh as a means of reducing the lead time for model generation is then presented and demonstrated. This leads on to a description of methods for acquiring and utilising known system geometry, namely the positions of electrodes and registration landmarks, to construct an accurate surface of the subject, the results of which are successfully validated. The outcome of this procedure is then used to specify boundary conditions for a mesh warping algorithm based on elastic deformation using well-established continuum mechanics procedures. The algorithm is applied to a range of source models to empirically establish optimum values for the parameters defining the problem, so that it can generate meshes of acceptable quality in terms of discretization error while more accurately capturing the geometry of the target subject. Further validation of the algorithm is performed by comparison of boundary voltages and image reconstructions from simulated and laboratory data, demonstrating that benefits can be gained in terms of image artefact reduction and localisation of conductivity changes. The processes described in the thesis are evaluated and discussed, and topics for further work and application are described.
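
    As a rough illustration of subject-specific mesh adaptation from measured electrode and landmark positions, the sketch below fits a least-squares affine map between corresponding landmarks and applies it to the mesh nodes; this is a simpler stand-in for the elastic-deformation warping developed in the thesis, and the landmark arrays and mesh are synthetic assumptions.

        import numpy as np

        def fit_affine(src_pts, dst_pts):
            """Least-squares affine transform (A, t) with dst ~ src @ A.T + t."""
            src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous coordinates
            M, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)       # shape (4, 3) for 3-D points
            return M[:3].T, M[3]

        def warp_mesh(nodes, A, t):
            """Apply the fitted map to every node of the generic mesh."""
            return nodes @ A.T + t

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            generic_landmarks = rng.standard_normal((32, 3))           # e.g. electrode sites on a generic head
            A_true = np.diag([1.1, 0.9, 1.05])
            t_true = np.array([2.0, -1.0, 0.5])
            subject_landmarks = generic_landmarks @ A_true.T + t_true  # measured on the subject
            A, t = fit_affine(generic_landmarks, subject_landmarks)
            mesh_nodes = rng.standard_normal((1000, 3))                # generic mesh nodes
            warped = warp_mesh(mesh_nodes, A, t)
            print(np.allclose(A, A_true), np.allclose(t, t_true))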

    Numerical modelling of the fluid-structure interaction in complex vascular geometries

    A complex network of vessels is responsible for the transportation of blood throughout the body and back to the heart. Fluid mechanics and solid mechanics play a fundamental role in this transport phenomenon and are particularly suited for computer simulations. The latter may contribute to a better comprehension of the physiological processes and mechanisms leading to cardiovascular diseases, which are currently the leading cause of death in the western world. When these computational models include patient-specific geometries and/or the interaction between the blood flow and the arterial wall, they become challenging to develop and to solve, increasing both the operator time and the computational time. This is especially true when the domain of interest involves vascular pathologies such as a local narrowing (stenosis) or a local dilatation (aneurysm) of the arterial wall. To overcome these issues of high operator times and high computational times when addressing the bio(fluid)mechanics of complex geometries, this PhD thesis focuses on the development of computational strategies which improve the generation and the accuracy of image-based, fluid-structure interaction (FSI) models. First, a robust procedure is introduced for the generation of hexahedral grids, which allows for local grid refinements and automation. Secondly, a straightforward algorithm is developed to obtain the prestress which is implicitly present in the arterial wall of a geometry that is loaded by the blood pressure at the moment of medical image acquisition. Both techniques are validated, applied to relevant cases, and finally integrated into a fluid-structure interaction model of an abdominal mouse aorta, based on in vivo measurements.
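
    The prestress idea, recovering the unloaded configuration hidden behind an imaged, pressure-loaded geometry, can be illustrated with a backward-displacement-style fixed-point iteration on a toy thin-walled tube. The Laplace-law forward model and all material values below are assumptions, and this is one common way to pose the problem rather than necessarily the thesis's algorithm, which operates on a full finite element arterial wall model.

        def loaded_radius(R0, p, E, t):
            """Forward model: radius of a thin-walled elastic tube inflated to pressure p."""
            # Hoop strain (r - R0)/R0 = p*r/(t*E)  =>  r = R0 / (1 - p*R0/(t*E))
            return R0 / (1.0 - p * R0 / (t * E))

        def unloaded_radius(r_imaged, p, E, t, n_iter=50):
            """Fixed point: subtract the predicted load-induced displacement from the imaged radius."""
            R0 = r_imaged                       # start from the imaged (loaded) geometry
            for _ in range(n_iter):
                u = loaded_radius(R0, p, E, t) - R0
                R0 = r_imaged - u
            return R0

        if __name__ == "__main__":
            p, E, t = 13e3, 1.0e6, 0.5e-3       # Pa, Pa, m (order-of-magnitude values only)
            r_imaged = 1.5e-3                   # imaged, pressure-loaded radius in m
            R0 = unloaded_radius(r_imaged, p, E, t)
            print(R0, loaded_radius(R0, p, E, t))   # second value should match r_imaged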

    Doctor of Philosophy

    Computational simulation has become an indispensable tool in the study of both the basic mechanisms and the pathophysiology of all forms of cardiac electrical activity. Because the heart is composed of approximately 4 billion electrically active cells, it is not possible to geometrically model or computationally simulate each individual cell. As a result, computational models of the heart are, of necessity, abstractions that approximate electrical behavior at the cell, tissue, and whole-body level. The goal of this PhD dissertation was to evaluate several aspects of these abstractions by exploring a set of modeling approaches in the field of cardiac electrophysiology, and to develop means to evaluate both the magnitude of the resulting errors from a purely technical perspective and the impact of those errors in terms of physiological parameters. The first project used subject-specific models and experiments with acute myocardial ischemia to show that one common simplification used to model myocardial ischemia, the simplest form of the border zone between healthy and ischemic tissue, was not supported by the experimental results. We propose an alternative approximation of the border zone that better simulates the experimental results. The second study examined the impact of simplifications in geometric models on simulations of cardiac electrophysiology. Such models consist of a connected mesh of polygonal elements and must often capture complex external and internal boundaries. A conforming mesh contains elements that closely follow the shapes of boundaries; nonconforming meshes fit the boundaries only approximately and are easier to construct, but their impact on simulation accuracy has, to our knowledge, remained unknown. We evaluated the impact of this simplification on a set of three different forms of bioelectric field simulations. The third project evaluated the impact of an additional geometric modeling error: positional uncertainty of the heart in simulations of the ECG. We applied a relatively novel and highly efficient statistical approach, the generalized Polynomial Chaos-Stochastic Collocation method (gPC-SC), to a boundary element formulation of the electrocardiographic forward problem to carry out the necessary comprehensive sensitivity analysis. We found variations large enough to mask or to mimic signs of ischemia in the ECG.
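
    The gPC-SC approach is non-intrusive: the forward solver is evaluated only at a small set of collocation nodes and statistics are recovered by quadrature. The sketch below does this for one uncertain parameter with Gauss-Legendre nodes; the quadratic stand-in for the forward model and the uniform position uncertainty are assumptions, not the boundary element ECG solver used in the dissertation.

        import numpy as np

        def forward_model(offset_cm):
            """Stand-in for an expensive forward simulation (e.g. one body-surface voltage)."""
            return 1.0 + 0.3 * offset_cm + 0.05 * offset_cm**2

        def collocation_moments(model, n_nodes=5):
            # Gauss-Legendre nodes/weights on [-1, 1]; the weights sum to 2, so divide
            # by 2 to obtain expectations under the uniform density on [-1, 1].
            nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
            samples = np.array([model(x) for x in nodes])
            mean = np.sum(weights * samples) / 2.0
            var = np.sum(weights * samples**2) / 2.0 - mean**2
            return mean, var

        if __name__ == "__main__":
            mean, var = collocation_moments(forward_model)
            print(mean, var)   # a handful of solver runs replaces a large Monte Carlo sweep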

    Analysis of uncertainty and variability in finite element computational models for biomedical engineering: characterization and propagation

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their application in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, the pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
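
    The simplest non-intrusive propagation method mentioned above is plain Monte Carlo sampling through a black-box model, sketched below with a one-line stand-in for a finite element solve (cantilever tip deflection); the input distributions and material values are illustrative assumptions.

        import numpy as np

        def fe_model(E, load):
            """Stand-in for a FE solve: cantilever tip deflection w = load*L^3 / (3*E*I)."""
            L, I = 0.1, 2.0e-10                 # fixed geometry (m, m^4)
            return load * L**3 / (3.0 * E * I)

        def monte_carlo(n_samples=10_000, seed=0):
            rng = np.random.default_rng(seed)
            # Uncertain inputs: Young's modulus (lognormal) and applied load (normal).
            E = rng.lognormal(mean=np.log(17e9), sigma=0.1, size=n_samples)
            load = rng.normal(loc=100.0, scale=10.0, size=n_samples)   # N
            out = fe_model(E, load)
            return out.mean(), out.std()

        if __name__ == "__main__":
            mean, std = monte_carlo()
            print(f"tip deflection: mean={mean:.2e} m, std={std:.2e} m")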

    Numerical and statistical methods for trajectory analysis in a Riemannian geometry framework

    This PhD thesis proposes new Riemannian geometry tools for the analysis of longitudinal observations of subjects with neurodegenerative disease. First, we propose a numerical scheme to compute parallel transport along geodesics and prove its convergence; the scheme remains efficient as long as the co-metric (the inverse of the metric) can be computed efficiently. Then, we tackle the issue of Riemannian manifold learning. We provide some minimal theoretical sanity checks to illustrate that the procedure of Riemannian metric estimation can be relevant. Finally, we propose to learn a Riemannian manifold and metric so as to model each subject's progression as a geodesic on this manifold, which allows fast inference, extrapolation and classification of the subjects.
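
    A classical construction for parallel transport along a curve is Schild's ladder, sketched below on the unit sphere where the exponential and logarithm maps have closed forms; this is an illustrative alternative, not the scheme whose convergence is proven in the thesis, and the step counts and scaling factors are assumptions.

        import numpy as np

        def exp_sphere(p, v):
            """Exponential map on the unit sphere (v tangent at p)."""
            nv = np.linalg.norm(v)
            if nv < 1e-12:
                return p
            return np.cos(nv) * p + np.sin(nv) * v / nv

        def log_sphere(p, q):
            """Logarithm map on the unit sphere: tangent vector at p pointing towards q."""
            w = q - np.dot(p, q) * p
            nw = np.linalg.norm(w)
            if nw < 1e-12:
                return np.zeros(3)
            return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / nw

        def schilds_ladder(x0, v, w, n_steps=200, scale=0.05):
            """Transport tangent vector v at x0 along the geodesic t -> exp_x0(t*w), t in [0, 1]."""
            pts = [exp_sphere(x0, (i / n_steps) * w) for i in range(n_steps + 1)]
            for i in range(n_steps):
                x, x_next = pts[i], pts[i + 1]
                a = exp_sphere(x, scale * v)                      # small step along v
                m = exp_sphere(a, 0.5 * log_sphere(a, x_next))    # geodesic midpoint
                a_next = exp_sphere(x, 2.0 * log_sphere(x, m))    # complete the geodesic parallelogram
                v = log_sphere(x_next, a_next) / scale
            return v

        if __name__ == "__main__":
            x0 = np.array([1.0, 0.0, 0.0])
            w = 0.5 * np.pi * np.array([0.0, 1.0, 0.0])   # quarter turn along the equator
            v = np.array([0.0, 0.0, 1.0])                 # "north-pointing" tangent vector
            vt = schilds_ladder(x0, v, w)
            print(vt, np.linalg.norm(vt))                 # expected: approximately (0, 0, 1), norm 1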