
    On the regularizing power of multigrid-type algorithms

    We consider the deblurring problem of noisy and blurred images in the case of known space-invariant point spread functions with four choices of boundary conditions. We combine an algebraic multigrid previously defined ad hoc for structured matrices related to space-invariant operators (Toeplitz, circulant, trigonometric matrix algebras, etc.) with the classical geometric multigrid studied in the partial differential equations context. The resulting technique is parameterized in order to have more degrees of freedom: a simple choice of the parameters allows us to devise a quite powerful regularizing method. It defines an iterative regularizing method in which the smoother itself must be an iterative regularizing method (e.g., conjugate gradient, Landweber, conjugate gradient for normal equations, etc.). More precisely, with respect to the smoother alone, the regularization properties are improved and the total complexity is lower. Furthermore, in several cases, when it is applied directly to the system A f = g, the quality of the restored image is comparable with that of the best known techniques for the normal equations A^T A f = A^T g, but the convergence is substantially faster. Finally, the associated curves of relative error versus iteration number are "flatter" than those of the smoother (the estimation of the stopping iteration is less crucial). Therefore, we can choose multigrid procedures that are much more efficient than classical techniques without losing accuracy in the restored image (as often occurs when using preconditioning). Several numerical experiments show the effectiveness of our proposals.
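    The smoothers named above (Landweber, conjugate gradient, CGNE) all regularize through early stopping, and the multigrid described here inherits that behavior. A minimal, self-contained sketch of Landweber iteration on a toy 1-D deblurring problem; the Gaussian PSF, sizes, noise level, and step size below are illustrative choices, not taken from the paper:

```python
import numpy as np

def landweber(A, g, omega, iters):
    """Landweber iteration: f <- f + omega * A^T (g - A f).

    Stopping after a modest number of iterations acts as regularization:
    smooth components of f converge first, noise-dominated ones last.
    """
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        f = f + omega * (A.T @ (g - A @ f))
    return f

# Toy 1-D Gaussian blur (space-invariant PSF), illustrative only.
n = 64
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
f_true = np.where((idx >= 20) & (idx < 40), 1.0, 0.0)
g = A @ f_true + 1e-3 * rng.standard_normal(n)

omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
f_rec = landweber(A, g, omega, 200)
rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
```

In the paper's setting, such an iteration plays the role of the smoother inside the multigrid cycle rather than being run to convergence on its own.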

    Doctor of Philosophy

    Inverse electrocardiography (ECG) aims to noninvasively estimate the electrophysiological activity of the heart from the voltages measured at the body surface, with promising clinical applications in diagnosis and therapy. The main challenge of this emerging technique lies in its mathematical foundation: an inverse source problem governed by partial differential equations (PDEs) which is severely ill-conditioned. Essential to the success of inverse ECG are computational methods that reliably achieve accurate inverse solutions while harnessing the ever-growing complexity and realism of bioelectric simulation. This dissertation focuses on the formulation, optimization, and solution of the inverse ECG problem based on finite element methods, and consists of two research thrusts. The first thrust explores the optimal finite element discretization specifically oriented towards the inverse ECG problem. By contrast, most existing discretization strategies are designed for forward problems and may become inappropriate for the corresponding inverse problems. Based on a Fourier analysis of how discretization relates to ill-conditioning, this work proposes refinement strategies that optimize the approximation accuracy of the inverse ECG problem while mitigating its ill-conditioning. To fulfill these strategies, two refinement techniques are developed: one uses hybrid-shaped finite elements whereas the other adapts high-order finite elements. The second research thrust involves a new methodology for inverse ECG solutions called PDE-constrained optimization, an optimization framework that flexibly allows convex objectives and various physically based constraints.
This work features three contributions: (1) fulfilling the optimization in the continuous space, (2) formulating rigorous finite element solutions, and (3) carrying out the subsequent numerical optimization with a primal-dual interior-point method tailored to the given optimization problem's specific algebraic structure. The efficacy of this new method is shown by its application to the localization of cardiac ischemic disease, in which the method, under realistic settings, achieves promising solutions to a previously intractable inverse ECG problem involving the bidomain heart model. In summary, this dissertation advances the computational research of inverse ECG, making it evolve toward an image-based, patient-specific modality for biomedical research.
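    After discretization, a PDE-constrained formulation of this kind leads to constrained optimization problems whose first-order (KKT) conditions form a saddle-point linear system. The following tiny dense sketch solves an equality-constrained regularized least-squares problem this way; the operators H, L, C and all dimensions are made-up placeholders, not the dissertation's finite element matrices, and a production solver would use a tailored interior-point method rather than a direct solve:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 10
H = rng.standard_normal((m, n))   # observation/forward operator (placeholder)
L = np.eye(n)                     # Tikhonov-type regularization operator
C = np.ones((1, n))               # one linear "physical" equality constraint
b = np.array([1.0])               # e.g. prescribed total source strength
d = rng.standard_normal(m)        # measured data (synthetic)
lam = 1e-2

# Problem: minimize ||H f - d||^2 + lam * ||L f||^2  subject to  C f = b.
# KKT system:  [2(H^T H + lam L^T L)  C^T] [f ]   [2 H^T d]
#              [C                     0  ] [mu] = [b      ]
K = np.block([[2 * (H.T @ H + lam * L.T @ L), C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * H.T @ d, b])
sol = np.linalg.solve(K, rhs)
f, mu = sol[:n], sol[n:]          # primal solution and Lagrange multiplier
```

The saddle-point structure visible in K is exactly what the dissertation's primal-dual method exploits at scale.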

    Analytical and Iterative Regularization Methods for Nonlinear Ill-posed Inverse Problems: Applications to Diffuse Optical and Electrical Impedance Tomography

    Electrical impedance tomography (EIT) and diffuse optical tomography (DOT) are imaging methods that have been gaining popularity due to their ease of use and non-invasiveness. EIT and DOT can potentially be used as alternatives to traditional imaging techniques, such as computed tomography (CT) scans, to reduce the damaging effects of radiation on tissue. Imaging with either EIT or DOT involves measuring the ability of tissue to impede electrical flow or absorb light, respectively. For EIT, the inner distribution of resistivity, which corresponds to the different resistivity properties of different tissues, is estimated from the voltage potentials measured on the boundary of the object being imaged. In DOT, the optical properties of the tissue, mainly scattering and absorption, are estimated by measuring the light on the boundary of the tissue illuminated by a near-infrared source at the tissue's surface. In this dissertation, we investigate a direct method for solving the EIT inverse problem using mollifier regularization, which is then modified and extended to solve the inverse problem in DOT. First, the mollifier method is formulated and its efficacy is verified by developing an appropriate algorithm. For EIT and DOT, a comprehensive numerical and computational comparison is performed using several types of regularization techniques, ranging from analytical to iterative to statistical methods. Based on the comparative results, a novel hybrid method combining deterministic (mollifier and iterative) and statistical approaches is proposed. The efficacy of the proposed method is then further investigated via simulations and using experimental data for damage detection in concrete.
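    The analytical-versus-iterative comparison described above can be illustrated on a generic, severely ill-conditioned linear system. The sketch below contrasts Tikhonov regularization and truncated SVD with naive inversion on a Hilbert-matrix test problem, a standard stand-in for a linearized sensitivity matrix; it is not the dissertation's mollifier operator, and the noise level and thresholds are invented:

```python
import numpy as np

# Severely ill-conditioned model problem: the 10x10 Hilbert matrix.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.sin(np.linspace(0, np.pi, n))     # smooth "true" parameter
rng = np.random.default_rng(2)
y = A @ x_true + 1e-6 * rng.standard_normal(n)

# Analytical route: Tikhonov with a fixed penalty.
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Spectral route: truncated SVD, discarding noise-dominated modes.
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-5))                     # truncation level
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# Naive inversion amplifies the noise catastrophically.
x_naive = np.linalg.solve(A, y)

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Both regularized reconstructions stay close to x_true, while the naive solve is dominated by amplified noise, which is the behavior that motivates every regularization family compared in the dissertation.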

    Inverse problems and medical imaging: lecture notes

    Lecture notes for the curricular unit of Inverse Problems and Medical Imaging (23036 - https://guiadoscursos.uab.pt/en/ucs/problemas-inversos-e-imagiologia-medica/ ) of the Doctor's Degree in Applied Mathematics and Modelling of Universidade Aberta.

    Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials

    Measuring displacement and strain fields at low observable scales in complex microstructures still remains a challenge in experimental mechanics, often because of the combination of low-definition images with poor texture at this scale.
The problem is particularly acute in the case of cellular materials, when imaged by conventional micro-tomographs, for which complex, highly non-linear local phenomena can occur. As the validation of numerical models and the identification of mechanical properties of materials must rely on accurate measurements of displacement and strain fields, robust and faithful image correlation algorithms must be designed and implemented. With cellular materials, the use of digital volume correlation (DVC) faces a paradox: in the absence of markings or exploitable texture on or in the struts or cell walls, the available speckle is formed by the material architecture itself. This leads to the inability of classical DVC codes to measure kinematics at the cellular and, a fortiori, sub-cellular scales, precisely because the interpolation basis of the displacement field cannot account for the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of this thesis is to develop a DVC technique for the measurement of displacement fields in cellular materials at the scale of their architecture. The proposed solution consists in assisting DVC with a weak elastic regularization based on an automatically generated, image-based mechanical model. The method introduces a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two particular methods are considered: the finite element method (FEM) and the finite cell method (FCM). The FCM is a fictitious-domain method that immerses the complex geometry in a high-order structured grid and does not require meshing. In this context, various discretization parameters are delicate to choose.
In this work, these parameters are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the mechanical image-based models to regularize DIC, several virtual experiments are performed in two dimensions in order to finely analyze the influence of the introduced regularization lengths for different input mechanical behaviors (elastic, elasto-plastic, and geometrically non-linear) and in comparison with ground truth. We show that the method can estimate complex local displacement and strain fields from speckle-free, low-definition images, even in non-linear regimes such as local buckling. Finally, a three-dimensional generalization is performed through the development of a DVC framework. It takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry, considering either an immersed structured B-spline grid of arbitrary order or a finite element mesh. Experimental evidence is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
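    The core idea of assisting image correlation with a weak elastic-like regularization can be sketched in one dimension: Gauss-Newton registration of a deformed signal onto a reference, with a second-difference penalty that supplies information wherever the image gradient vanishes. Everything below (signal, displacement field, weights) is a toy illustration, not the thesis's FE/FCM-coupled DVC:

```python
import numpy as np

n = 100
x = np.arange(n, dtype=float)
u_true = 2.0 * np.sin(2 * np.pi * x / n)        # smooth displacement field
f = np.cos(2 * np.pi * x / 17.0)                # reference "texture"
g = np.cos(2 * np.pi * (x - u_true) / 17.0)     # deformed image

D2 = np.diff(np.eye(n), 2, axis=0)              # second-difference operator
alpha = 10.0                                    # regularization weight

# Regularized Gauss-Newton: minimize sum (f - g(x+u))^2 + alpha*||D2 u||^2.
u = np.zeros(n)
for _ in range(20):
    gw = np.interp(x + u, x, g)                 # warp deformed image by u
    dgw = np.gradient(gw, x)                    # image gradient of the warp
    lhs = np.diag(dgw ** 2) + alpha * D2.T @ D2
    u += np.linalg.solve(lhs, dgw * (f - gw))
```

The penalty term barely affects the long-wavelength displacement but suppresses high-frequency noise, which is the scale separation the thesis exploits (with a mechanical model in place of the generic D2 operator).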

    Mini-Workshop: Deep Learning and Inverse Problems

    Machine learning, and in particular deep learning, offers several data-driven methods to amend the typical shortcomings of purely analytical approaches. Mathematical research on these combined models is presently exploding on the experimental side but is still lacking from the theoretical point of view. This workshop addresses the challenge of developing a solid mathematical theory for analyzing deep neural networks for inverse problems.

    Novel Methods to Incorporate Physiological Prior Knowledge into the Inverse Problem of Electrocardiography - Application to Localization of Ventricular Excitation Origins

    Cardiovascular diseases account for 17 million deaths every year. Sudden cardiac death occurs in roughly 25% of patients with cardiovascular disease and can be linked to ventricular tachycardia. An important step in the treatment of ventricular tachycardia is the detection of so-called exit points, i.e., the spatial origin of the excitation. Since this process is very time-consuming and can only be performed by skilled cardiologists, there is a need for assistive localization tools, ideally automatic and noninvasive. Electrocardiographic imaging aims to meet these clinical requirements by reconstructing the electrical activity of the heart from measurements of the potentials on the body surface. The resulting information can be used to detect the origin of excitation. Current methods for solving the inverse problem, however, exhibit either low accuracy or low robustness, which limits their clinical utility. This work first analyzes the forward problem in connection with two source models: transmembrane voltages and extracellular potentials. The mathematical properties of the relation between the cardiac sources and the body surface potentials are systematically analyzed and their influence on the inverse problem is illustrated. This knowledge is then used to solve the inverse problem. To this end, three new methods are introduced: a delay-based regularization, a method based on a regression of body surface potentials, and a deep-learning-based localization method. These three methods are compared against four established methods and evaluated in one simulated and two clinical setups. 
On the simulated dataset and on one of the two clinical datasets, one of the new methods achieved better results than the conventional approaches, while Tikhonov regularization achieved the best results on the remaining clinical dataset. Potential causes for these results are discussed and related to properties of the forward problem.
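    Tikhonov regularization, the established baseline that performed best on one of the clinical datasets, is simple to state: minimize ||Ax - y||^2 + lambda * ||x||^2 for a transfer matrix A mapping cardiac sources x to body-surface potentials y. A sketch on a synthetic ill-conditioned matrix; the spectrum, dimensions, and noise level are invented for illustration, since real transfer matrices come from a torso model:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 120, 80                              # electrodes x sources (made up)

# Synthetic transfer matrix with a rapidly decaying singular spectrum.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)
A = (U[:, :n] * s) @ V.T

coeff = np.zeros(n)
coeff[:10] = rng.standard_normal(10)        # "source" in well-resolved modes
x_true = V @ coeff
y = A @ x_true + 1e-3 * rng.standard_normal(m)

def tikhonov(A, y, lam):
    # Solve the regularized normal equations (A^T A + lam I) x = A^T y.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_reg = tikhonov(A, y, 1e-4)
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized solve

rel = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The regularized solution stays close to the true sources while the unregularized one is destroyed by the small singular values, the same ill-conditioning the forward-problem analysis above characterizes.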