
    Dynamical systems method for solving operator equations

    Consider an operator equation F(u)=0 in a real Hilbert space. The problem of solving this equation is ill-posed if the operator F'(u) is not boundedly invertible, and well-posed otherwise. A general method, the dynamical systems method (DSM), for solving linear and nonlinear ill-posed problems in a Hilbert space is presented. The method consists of constructing a nonlinear dynamical system, that is, a Cauchy problem, with the following properties: 1) it has a global solution; 2) this solution tends to a limit as time tends to infinity; 3) the limit solves the original linear or nonlinear problem. New convergence and discretization theorems are obtained, and examples of applications of this approach are given. The method also works for a wide range of well-posed problems.
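    As a minimal illustration of the idea (a sketch under stated assumptions, not the paper's construction), for a well-posed scalar equation the DSM reduces to the continuous Newton flow du/dt = -[F'(u)]^{-1}F(u), whose limit as t → ∞ solves F(u)=0. The toy below integrates this flow with explicit Euler for the hypothetical example F(u) = u³ - 8:

    ```python
    # Toy sketch of the dynamical systems method as a continuous Newton flow
    # for a well-posed scalar equation F(u) = 0. Hypothetical illustration;
    # the paper treats the general Hilbert-space, ill-posed case.

    def F(u):
        return u**3 - 8.0          # root at u = 2

    def dF(u):
        return 3.0 * u**2          # derivative (Frechet derivative in 1D)

    def dsm_solve(u0, h=0.01, steps=2000):
        """Integrate du/dt = -F'(u)^{-1} F(u) by explicit Euler."""
        u = u0
        for _ in range(steps):
            u = u - h * F(u) / dF(u)
        return u

    print(dsm_solve(5.0))  # tends to the root u = 2 as t -> infinity
    ```

    Along the exact flow, F(u(t)) = F(u(0))·e^(-t), so the trajectory approaches the root exponentially; the Euler step size trades accuracy against the number of steps.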

    Dynamical Systems Method for solving ill-conditioned linear algebraic systems

    A new method, the Dynamical Systems Method (DSM), justified recently, is applied to solving ill-conditioned linear algebraic systems (ICLAS). The DSM gives a new approach to solving a wide class of ill-posed problems. In this paper a new iterative scheme for solving ICLAS is proposed, based on the DSM solution. An a posteriori stopping rule for the proposed method is justified. The paper also gives an a posteriori stopping rule for a modified iterative scheme developed in A. G. Ramm, JMAA, 330 (2007), 1338-1346, and proves convergence of the solution obtained by the iterative scheme.
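    To make the setting concrete, the sketch below is a generic iterated-Tikhonov iteration with a discrepancy-style a posteriori stopping rule on the classically ill-conditioned Hilbert matrix; it is a hypothetical illustration of the problem class, not the paper's DSM-based scheme, and the noise level `delta` and parameters are assumed for the toy:

    ```python
    import numpy as np

    # Generic iterated-Tikhonov sketch for an ill-conditioned linear system
    # A x = b with a discrepancy-style a posteriori stopping rule
    # ||A x_k - b_delta|| <= tau * delta. Not the paper's DSM-based scheme.

    n = 8
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
    x_true = np.ones(n)
    rng = np.random.default_rng(0)
    e = rng.standard_normal(n)
    delta = 1e-2                                   # assumed known noise level
    b_delta = A @ x_true + delta * e / np.linalg.norm(e)
    tau = 1.5

    a = 1e-3                                       # regularization parameter
    x = np.zeros(n)
    M = A.T @ A + a * np.eye(n)
    for k in range(200):
        x = x + np.linalg.solve(M, A.T @ (b_delta - A @ x))
        if np.linalg.norm(A @ x - b_delta) <= tau * delta:  # discrepancy reached
            break
    ```

    The stopping rule halts the iteration once the residual falls to the order of the noise, preventing the iterate from fitting the noise in the ill-conditioned directions.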

    Embedded techniques for choosing the parameter in Tikhonov regularization

    This paper introduces a new strategy for setting the regularization parameter when solving large-scale discrete ill-posed linear problems by means of the Arnoldi-Tikhonov method. The new rule is essentially based on the discrepancy principle, although no initial knowledge of the norm of the error that affects the right-hand side is assumed; an increasingly accurate approximation of this quantity is recovered during the Arnoldi algorithm. Some theoretical estimates are derived in order to motivate the approach. Many numerical experiments, performed on classical test problems as well as on image deblurring, are presented.
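    As a simpler illustration of the discrepancy principle the rule builds on (a full-space toy; the paper works in Arnoldi/Krylov subspaces and estimates the noise norm on the fly rather than assuming it known), one can pick the Tikhonov parameter as the largest λ on a grid whose residual falls within τδ:

    ```python
    import numpy as np

    # Discrepancy-principle sketch for choosing the Tikhonov parameter.
    # Full-space toy with an assumed known noise level delta; the paper's
    # method needs no such knowledge and works in Krylov subspaces.

    rng = np.random.default_rng(1)
    n = 32
    A = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # smoothing kernel
    x_true = np.sin(np.linspace(0, np.pi, n))
    delta = 1e-2
    e = rng.standard_normal(n)
    b = A @ x_true + delta * e / np.linalg.norm(e)
    tau = 1.1

    def tikhonov(lam):
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Scan a geometric grid from large to small lambda; stop at the first
    # lambda whose residual falls below tau * delta.
    for lam in np.geomspace(1e2, 1e-12, 60):
        x = tikhonov(lam)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            break
    ```

    Since the Tikhonov residual is monotone in λ, the first grid value satisfying the discrepancy condition gives the most regularized solution consistent with the noise.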

    On convergence rates of proximal alternating direction method of multipliers

    In this paper we consider the proximal alternating direction method of multipliers (ADMM) in Hilbert spaces from two different aspects. We first consider the application of the proximal ADMM to well-posed linearly constrained two-block separable convex minimization problems in Hilbert spaces and obtain new and improved non-ergodic convergence rate results, including linear and sublinear rates under certain regularity conditions. We next consider the proximal ADMM as a regularization method for solving linear ill-posed inverse problems in Hilbert spaces. When the data are corrupted by additive noise, we establish, under a benchmark source condition, a convergence rate in terms of the noise level when the number of iterations is properly chosen.
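    The two-block separable setting covers, for example, the lasso. The sketch below shows the standard scaled-form ADMM updates for that instance (textbook ADMM, not the paper's proximal variant; problem sizes and parameters are made up for the toy):

    ```python
    import numpy as np

    # Minimal scaled-form ADMM for the lasso
    #   min_x 0.5 * ||A x - b||^2 + lam * ||z||_1   s.t.  x = z,
    # a standard two-block separable instance of the setting above.

    rng = np.random.default_rng(2)
    m, n = 40, 20
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    lam, rho = 0.5, 1.0

    def soft(v, t):
        """Soft-thresholding: the proximal operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                   # scaled dual variable
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))      # cache the x-update solve
    for _ in range(300):
        x = Q @ (A.T @ b + rho * (z - u))             # quadratic block
        z = soft(x + u, lam / rho)                    # l1 block
        u = u + x - z                                 # dual ascent on x = z
    ```

    The primal residual x - z going to zero is the usual convergence diagnostic; the non-ergodic rates in the abstract concern exactly how fast such quantities decay.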

    An Accelerated Iterative Reweighted Least Squares Algorithm for Compressed Sensing MRI

    Compressed sensing for MRI (CS-MRI) attempts to recover an object from undersampled k-space data by minimizing sparsity-promoting regularization criteria. The iterative reweighted least squares (IRLS) algorithm can perform this minimization by recursively solving iteration-dependent linear systems. However, this process can be slow, as the associated linear systems are often poorly conditioned for ill-posed problems. We propose a new scheme based on the matrix inversion lemma (MIL) to accelerate the solves. We demonstrate numerically for CS-MRI that our method provides a significant speed-up over linear and nonlinear conjugate gradient algorithms, making it a promising alternative for such applications.
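    For context, the classic noiseless IRLS iteration for sparse recovery is sketched below (a generic textbook scheme with made-up dimensions, not the paper's MIL-accelerated CS-MRI solver). Note that each iteration solves an m × m weighted system rather than an n × n one; this kind of small-system reformulation is where a matrix-inversion-lemma-style rewrite pays off:

    ```python
    import numpy as np

    # Classic IRLS sketch for noiseless sparse recovery,
    #   min ||x||_1  s.t.  A x = b.
    # Each iteration solves a small m x m weighted system.

    rng = np.random.default_rng(3)
    m, n = 15, 40
    A = rng.standard_normal((m, n))
    x0 = np.zeros(n)
    x0[[5, 17, 30]] = [1.0, -2.0, 1.5]             # 3-sparse ground truth
    b = A @ x0

    x = A.T @ np.linalg.solve(A @ A.T, b)          # minimum-norm starting point
    eps = 1.0
    for _ in range(50):
        D = np.diag(np.abs(x) + eps)               # inverse weights diag(|x_i| + eps)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)   # weighted min-norm step
        eps = max(eps * 0.5, 1e-6)                 # gradually tighten smoothing
    ```

    Every iterate satisfies A x = b exactly by construction; the reweighting progressively concentrates the solution on a sparse support.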

    Identification of linear response functions from arbitrary perturbation experiments in the presence of noise - Part I. Method development and toy model demonstration

    Existing methods to identify linear response functions from data require tailored perturbation experiments, e.g., impulse or step experiments, and if the system is noisy, these experiments need to be repeated several times to obtain good statistics. In contrast, for the method developed here, data from only a single perturbation experiment with an arbitrary perturbation are sufficient, provided data from an unperturbed (control) experiment are also available. To identify the linear response function for this ill-posed problem, we invoke regularization theory. The main novelty of our method lies in the determination of the level of background noise needed for a proper estimation of the regularization parameter: this is achieved by comparing the frequency spectrum of the perturbation experiment with that of the additional control experiment. The resulting noise-level estimate can be further improved for linear response functions known to be monotonic. The robustness of our method and its advantages are investigated by means of a toy model. We discuss in detail the dependence of the identified response function on the quality of the data (signal-to-noise ratio) and on possible nonlinear contributions to the response. The method development presented here prepares in particular for the identification of carbon cycle response functions in Part 2 of this study (Torres Mendonça et al., 2021a). However, the core of our method, namely our new approach to obtaining the noise level for a proper estimation of the regularization parameter, may also find applications in solving other types of linear ill-posed problems.
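    The inverse problem here is a deconvolution: the observed response is the convolution of the unknown response function with the applied perturbation. A hypothetical toy (my own, not the paper's setup; it omits the spectrum-based noise-level estimation that is the paper's main novelty) recovers a decaying kernel by Tikhonov-regularized inversion of the convolution matrix:

    ```python
    import numpy as np

    # Toy linear-response identification: the response y is the discrete
    # convolution of a kernel r with an arbitrary perturbation f,
    #   y_i = sum_{j <= i} f_{i-j} r_j,
    # and r is recovered by Tikhonov-regularized inversion.

    rng = np.random.default_rng(4)
    n = 100
    t = np.arange(n)
    r_true = np.exp(-t / 10.0)                       # response function to identify
    f = rng.standard_normal(n)                       # arbitrary perturbation forcing
    C = np.array([[f[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])                # lower-triangular convolution matrix
    y = C @ r_true + 1e-3 * rng.standard_normal(n)   # noisy response data

    lam = 1e-2                                       # regularization parameter (assumed)
    r_est = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ y)
    ```

    Choosing `lam` well is precisely the point where the paper's noise-level estimate from the control experiment would enter.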

    An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems

    We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), in which a (possibly non-smooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several features of these problems (huge dimensionality, non-smoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising), tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called "alternating direction method of multipliers", for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
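    In the simplest special case of this constrained formulation, pure denoising with an identity forward operator, the problem min ||x||₁ s.t. ||x - b|| ≤ ε admits a compact ADMM: soft-thresholding for the regularizer and a Euclidean-ball projection for the constraint. The sketch below is that toy case only (assumed data and parameters), not the paper's general algorithm for deconvolution or compressive observations:

    ```python
    import numpy as np

    # ADMM sketch for the constrained (basis-pursuit-denoising-style) form
    #   min ||x||_1   s.t.  ||x - b|| <= eps
    # in the special case A = I (pure denoising). Split as x = z with the
    # l1 term on x and the constraint indicator on z.

    def soft(v, t):
        """Proximal operator of t * ||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def project_ball(v, center, eps):
        """Euclidean projection onto the ball ||v - center|| <= eps."""
        d = v - center
        nd = np.linalg.norm(d)
        return v if nd <= eps else center + eps * d / nd

    rng = np.random.default_rng(5)
    n = 50
    b = rng.standard_normal(n)          # noisy observation
    eps, rho = 1.0, 1.0

    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                     # scaled dual variable
    for _ in range(500):
        x = soft(z - u, 1.0 / rho)      # prox of the l1 regularizer
        z = project_ball(x + u, b, eps) # prox of the constraint indicator
        u = u + x - z
    ```

    A general forward operator makes the subproblems non-separable, which is exactly what motivates the paper's more elaborate augmented Lagrangian construction.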