Unique reconstruction of simple magnetizations from their magnetic potential
Inverse problems arising in (geo)magnetism are typically ill-posed, in
particular they exhibit non-uniqueness. Nevertheless, there exist nontrivial
model spaces on which the problem is uniquely solvable. Our goal is here to
describe such spaces that accommodate constraints suited for applications. In
this paper we treat the inverse magnetization problem on a Lipschitz domain
with fairly general topology. We characterize the subspace of vector
fields that causes non-uniqueness, and identify a subspace of harmonic
gradients on which the inversion becomes unique. This classification has
consequences for applications and we present some of them in the context of
geo-sciences. In the second part of the paper, we discuss the space of
piecewise constant vector fields. This vector space is too large for the
inversion to be unique. But, as we show, it contains a dense subspace on
which the problem becomes uniquely solvable, i.e., magnetizations from this
subspace are uniquely determined by their magnetic potential.
Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices
In this dissertation, we study iterative methods for computing eigenvalues and exponentials of large matrices. These types of computational problems arise in a large number of applications, including mathematical models in economics and physical and biological processes. Although numerical methods for computing eigenvalues and matrix exponentials have been well studied in the literature, there is a lack of analysis of inexact iterative methods for eigenvalue computation and of certain variants of the Krylov subspace methods for approximating the matrix exponential. In this work, we propose an inexact inverse subspace iteration method that generalizes the inexact inverse iteration for computing multiple and clustered eigenvalues of a generalized eigenvalue problem. Compared with other methods, the inexact inverse subspace iteration method is generally more robust. Convergence analysis shows that the linear convergence rate of the exact case is preserved. The second part of the work presents an inverse Lanczos method to approximate the product of a matrix exponential and a vector. This is proposed to allow the use of a larger time step in a time-propagation scheme for solving linear initial value problems. Error analysis is given for the inverse Lanczos method, the standard Lanczos method, and the shift-and-invert Lanczos method. The analysis demonstrates the different behaviors of these variants and helps in choosing which variant to use in practice.
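The inverse Lanczos method above is a variant of the standard Lanczos approximation of a matrix exponential times a vector. As a point of reference, here is a minimal numpy/scipy sketch of the standard (non-inverse, non-shift-and-invert) Lanczos approximation for a symmetric matrix; this is an illustrative sketch, not the dissertation's implementation:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expv(A, v, t=1.0, m=20):
    """Approximate exp(t*A) @ v with an m-step Lanczos process (A symmetric):
    exp(t*A) v ~ beta0 * V exp(t*T) e1, where T is the small tridiagonal
    projection of A onto the Krylov subspace K_m(A, v)."""
    beta0 = np.linalg.norm(v)
    n = v.size
    V = np.zeros((n, m))   # orthonormal Lanczos basis
    T = np.zeros((m, m))   # tridiagonal projection
    V[:, 0] = v / beta0
    k = m
    for j in range(m):
        w = A @ V[:, j]
        # full reorthogonalization against all previous basis vectors
        h = V[:, :j + 1].T @ w
        w -= V[:, :j + 1] @ h
        T[j, j] = h[j]
        if j > 0:
            T[j, j - 1] = T[j - 1, j]
        if j + 1 < m:
            beta = np.linalg.norm(w)
            if beta < 1e-12:          # invariant subspace found
                k = j + 1
                break
            T[j, j + 1] = beta
            V[:, j + 1] = w / beta
    V, T = V[:, :k], T[:k, :k]
    # exponentiate only the small k-by-k matrix T
    return beta0 * (V @ expm(t * T)[:, 0])
```

The key cost trade-off the dissertation analyzes is visible here: the large matrix A enters only through matrix-vector products, and the exponential is taken of the small projected matrix T.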
Inverse problems with inexact forward operator: iterative regularization and application in dynamic imaging
The classic regularization theory for solving inverse problems is built on the
assumption that the forward operator perfectly represents the underlying physical model of the data acquisition. However, in many applications, for instance in
microscopy or magnetic particle imaging, this is not the case. Another important
class of examples is dynamic inverse problems, where changes of the searched-for
quantity during data collection can be interpreted as model uncertainties. In
this article, we propose a regularization strategy for linear inverse problems with
inexact forward operator based on sequential subspace optimization methods
(SESOP). In order to account for local modelling errors, we suggest combining
SESOP with Kaczmarz's method. We study the convergence and regularization
properties of the proposed method and discuss several practical realizations.
The relevance and performance of our approach are evaluated on simulated data
from dynamic computerized tomography with various dynamic scenarios.
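As an illustration of the Kaczmarz component of the proposed scheme, here is a minimal numpy sketch of the classical cyclic Kaczmarz iteration for a linear system Ax = b. This is illustrative only; the article's regularizing SESOP-Kaczmarz method for inexact forward operators is considerably more involved:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Cyclic Kaczmarz: successively project the iterate onto each
    hyperplane {x : a_i^T x = b_i} defined by one row of A."""
    m, n = A.shape
    x = np.zeros(n)
    # squared Euclidean norm of each row, precomputed once
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] > 0.0:
                # orthogonal projection onto the i-th hyperplane
                x += ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
    return x
```

Each inner step touches only one row of A, which is what makes Kaczmarz-type methods attractive for the sequential, subproblem-wise treatment of local modelling errors described above.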
Singular Value Computation and Subspace Clustering
In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem or a few extreme singular values of a large and sparse matrix. The standard method of choice for computing a few extreme eigenvalues of a large symmetric matrix is the Lanczos method or the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate convergence. To compute several eigenvalues, Wielandt deflation is used in a straightforward manner. However, Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications, such as singular value computations. We therefore first propose a deflation-by-restriction scheme for the inverse-free Krylov subspace method. We generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this deflation scheme. We next extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem. We consider preconditioning based on robust incomplete factorization to accelerate convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm.
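The basic inverse-free Krylov step of Golub and Ye can be sketched as follows: at each outer iteration one builds a Krylov subspace of A - rho*B, where rho is the current Rayleigh quotient, and extracts the smallest Ritz pair by Rayleigh-Ritz projection. Below is a minimal dense numpy/scipy sketch of one such step (unpreconditioned and undeflated; the dissertation's method adds preconditioning and deflation by restriction):

```python
import numpy as np
from scipy.linalg import eigh

def inverse_free_step(A, B, x, m=10):
    """One outer step of the (unpreconditioned) inverse-free Krylov method
    for the smallest eigenvalue of A x = lambda B x, A symmetric, B SPD."""
    rho = (x @ (A @ x)) / (x @ (B @ x))   # current Rayleigh quotient
    C = A - rho * B                       # no inversion of A or B needed
    # Build an orthonormal basis Z of the Krylov subspace K_m(C, x).
    Z = np.zeros((x.size, m))
    Z[:, 0] = x / np.linalg.norm(x)
    k = 1
    for j in range(1, m):
        w = C @ Z[:, j - 1]
        # Gram-Schmidt, applied twice for numerical stability
        w -= Z[:, :j] @ (Z[:, :j].T @ w)
        w -= Z[:, :j] @ (Z[:, :j].T @ w)
        nw = np.linalg.norm(w)
        if nw < 1e-12:                    # subspace became invariant
            break
        Z[:, j] = w / nw
        k = j + 1
    Z = Z[:, :k]
    # Rayleigh-Ritz projection of the pencil (A, B) onto the subspace
    vals, vecs = eigh(Z.T @ A @ Z, Z.T @ B @ Z)
    return Z @ vecs[:, 0], vals[0]
```

Iterating this step drives the Rayleigh quotient monotonically down to the smallest eigenvalue, while only matrix-vector products with A and B are required.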
In the second part of this thesis, we consider the so-called subspace clustering problem, which aims to extract a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressiveness property (SEP), such as Sparse Subspace Clustering and Low Rank Representation, have been shown to enjoy superior performance compared to other methods. However, methods with SEP may result in representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis. The orthonormal basis is optimally chosen in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
On Krylov Methods for Large Scale CBCT Reconstruction
Krylov subspace methods are a powerful family of iterative solvers for linear
systems of equations, which are commonly used for inverse problems due to their
intrinsic regularization properties. Moreover, these methods are naturally
suited to solve large-scale problems, as they only require matrix-vector
products with the system matrix (and its adjoint) to compute approximate
solutions, and they display very fast convergence. Although this class of
methods has been widely researched and studied in the numerical linear algebra
community, its use in applied medical physics and applied engineering is still
very limited, e.g., in realistic large-scale Computed Tomography (CT) problems,
and more specifically in Cone Beam CT (CBCT). This work attempts to bridge this
gap by providing a general framework for the most relevant Krylov subspace
methods applied to 3D CT problems, including the most well-known Krylov solvers
for non-square systems (CGLS, LSQR, LSMR), possibly in combination with
Tikhonov regularization, and methods that incorporate total variation (TV)
regularization. This is provided within an open source framework: the
Tomographic Iterative GPU-based Reconstruction (TIGRE) toolbox, with the idea
of promoting accessibility and reproducibility of the results for the
algorithms presented. Finally, numerical results in synthetic and real-world 3D
CT applications (medical CBCT and μ-CT datasets) are provided to showcase
and compare the different Krylov subspace methods presented in the paper, as
well as their suitability for different kinds of problems.
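For reference, CGLS, one of the Krylov solvers for non-square systems covered by the framework, is mathematically conjugate gradients applied to the normal equations A^T A x = A^T b while only forming products with A and its adjoint. A minimal dense numpy sketch follows (illustrative only; TIGRE's matrix-free GPU implementation of the projector and backprojector differs):

```python
import numpy as np

def cgls(A, b, iters=50):
    """CGLS: conjugate gradients on the normal equations A^T A x = A^T b,
    implemented using only products with A and A^T (never forming A^T A)."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()   # residual b - A x
    s = A.T @ r                  # normal-equations residual A^T (b - A x)
    p = s.copy()                 # search direction
    gamma = s @ s
    for _ in range(iters):
        if gamma == 0.0:         # exact least-squares solution reached
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In CT terms, the products with A and A^T correspond to one forward projection and one backprojection per iteration, which is why such solvers scale to 3D CBCT geometries where the system matrix is never stored explicitly.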