
    Regularization by conjugate gradient of nonnegatively constrained least squares

    In many image deconvolution applications the nonnegativity of the computed solution is required. Conjugate Gradient (CG), often used as a reliable regularization tool, may give solutions with negative entries, which are particularly evident when large nearly zero plateaus are present. The active constraints set, detected by projection onto the nonnegative quadrant, turns out to be largely incomplete, and poor effects on the accuracy of the reconstructed image may occur. In this paper an inner-outer method based on CG is proposed to compute nonnegative reconstructed images with a strategy which successively enlarges the active constraints set. This method appears to be especially suitable for the deconvolution of images having large nearly zero backgrounds. The numerical experimentation validates the effectiveness of the proposed method with respect to widely used classical algorithms for nonnegative reconstruction.
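    The inner-outer pattern the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names, iteration counts, and the simple rule for enlarging the active set are all assumptions for the sketch.

```python
import numpy as np

def cg(A, b, x0, iters):
    """Plain conjugate gradient for the normal equations A^T A x = A^T b."""
    x = x0.copy()
    r = A.T @ (b - A @ x)
    p = r.copy()
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        denom = p @ Ap
        if denom <= 1e-14:
            break
        alpha = (r @ r) / denom
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

def nonneg_cg(A, b, outer=5, inner=20):
    """Inner-outer sketch: run CG on the free variables, project onto the
    nonnegative quadrant, and freeze the detected zero entries, so the
    active constraints set grows across outer iterations."""
    n = A.shape[1]
    x = np.zeros(n)
    active = np.zeros(n, dtype=bool)      # indices currently fixed at zero
    for _ in range(outer):
        free = ~active
        xf = cg(A[:, free], b, x[free], inner)
        xf[xf < 0] = 0.0                  # projection onto the quadrant
        x[:] = 0.0
        x[free] = xf
        active |= (x <= 0)                # enlarge the active set
    return x
```

    On a trivial identity-blur example the negative entries of the unconstrained solution end up frozen at zero while the positive entries are recovered.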

    Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    We consider the high energy physics unfolding problem where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem arising in data analysis at the Large Hadron Collider at CERN consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation, which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem due to the inherent bias of the regularized point estimate, we introduce an iteratively bias-corrected bootstrap technique for constructing improved confidence intervals. We show using simulations that this enables us to achieve nearly nominal frequentist coverage with only a modest increase in interval length. The proposed methodology is applied to unfolding the Z boson invariant mass spectrum as measured in the CMS experiment at the Large Hadron Collider.

    Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1401.827
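    The basic building block behind the paper's iterated correction is the one-step bootstrap bias correction, sketched here on a toy estimator. Everything below (the plug-in variance estimator, the replicate count) is a hypothetical stand-in chosen only to make the idea concrete; the paper iterates this correction inside the bootstrap to build confidence intervals.

```python
import numpy as np

rng = np.random.default_rng(0)

def plug_in_variance(sample):
    # Plug-in variance (divides by n): biased low by a factor (n-1)/n.
    return np.mean((sample - sample.mean()) ** 2)

def bootstrap_bias_corrected(sample, n_boot=4000):
    """One step of bootstrap bias correction: estimate the estimator's
    bias by resampling from the data and subtract that estimate."""
    theta = plug_in_variance(sample)
    boot = np.array([plug_in_variance(rng.choice(sample, sample.size))
                     for _ in range(n_boot)])
    bias = boot.mean() - theta        # bootstrap estimate of the bias
    return theta - bias               # bias-corrected estimate
```

    Since the plug-in variance is biased downward, the corrected value is pushed upward, toward the unbiased estimate.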

    Generalized Cross-Validation applied to Conjugate Gradient for discrete ill-posed problems

    To apply the Generalized Cross-Validation (GCV) as a stopping rule for an iterative method, we must estimate the trace of the so-called influence matrix which appears in the denominator of the GCV function. In the case of conjugate gradient, unlike what happens with stationary iterative methods, the regularized solution has a nonlinear dependence on the noise which affects the data of the problem. This fact is often pointed out as a cause of poor performance of GCV. To overcome this drawback, in this paper we propose a new method which linearizes the dependence by computing the derivatives through iterative formulas along the lines of Perry and Reeves (1994) and Bardsley (2008). We compare the proposed method with other methods suggested in the literature by an extensive numerical experimentation both on 1D and on 2D test problems.
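    For context, here is GCV in its basic form for Tikhonov regularization, where the trace of the influence matrix has a closed form via the SVD; the paper's contribution concerns the much harder CG case, where no such closed form exists. The function name and the candidate-grid interface are assumptions of this sketch.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Pick the Tikhonov parameter minimizing the GCV function
    ||b - A x_lam||^2 / trace(I - A_lam)^2, where A_lam is the
    influence matrix A (A^T A + lam I)^{-1} A^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)            # Tikhonov filter factors
        residual = np.sum(((1 - f) * beta) ** 2)
        trace = A.shape[0] - np.sum(f)     # trace(I - influence matrix)
        scores.append(residual / trace**2)
    return lambdas[int(np.argmin(scores))]
```

    For CG the filter factors depend nonlinearly on the data, which is exactly why the trace must instead be estimated, e.g. through the iterative derivative formulas the abstract mentions.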

    Joint Image and Pupil Plane Reconstruction Algorithm based on Bayesian Techniques

    The focus of this research was to develop a joint pupil and focal plane image recovery algorithm for use with coherent LADAR systems. The benefits of such a system would include increased resolution with little or no increase in system weight and volume, as well as allowing for operation in the absence of natural light, since the target of interest would be actively illuminated. Since a pupil plane collection aperture can be conformal, such a system would also potentially allow for the formation of large synthetic apertures. The system is demonstrated to be robust and, in all but extreme cases, to yield better results than algorithms using a single data set (such as deconvolution). It was shown that the joint algorithm had a resolution increase of 70% over deconvolution alone and a 40% increase over traditional pupil plane algorithms. It is also demonstrated that the new algorithm does not suffer as severely from stagnation problems typical of pupil plane algorithms.

    Stopping rules for iterative methods in nonnegatively constrained deconvolution

    We consider the two-dimensional discrete nonnegatively constrained deconvolution problem, whose goal is to reconstruct an object x from its image b obtained through an optical system and affected by noise. When the large size of the problem prevents regularization by filtering, iterative methods enjoying the semiconvergence property, coupled with suitable strategies for enforcing nonnegativity, are suggested. For these methods an accurate detection of the stopping index is essential. In this paper we analyze various stopping rules and test their effect on three different iterative regularizing methods, through extensive numerical experimentation.
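    A classical instance of the semiconvergent-method-plus-stopping-rule pattern studied here is Landweber iteration stopped by the discrepancy principle; the sketch below is one simple such rule, not one of the three methods the paper compares, and its parameter names are assumptions.

```python
import numpy as np

def landweber_discrepancy(A, b, delta, tau=1.01, max_iter=500):
    """Landweber iteration stopped by the discrepancy principle: stop at
    the first k with ||A x_k - b|| <= tau * delta, where delta bounds the
    norm of the noise in the data b."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size < 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:
            return x, k
        x = x + omega * (A.T @ r)
    return x, max_iter
```

    Stopping too late lets the noise-dominated components re-enter the iterate, which is exactly the semiconvergence effect that makes an accurate stopping index essential.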

    Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications

    Ill-posed inverse problems arise in many fields of science and engineering. The ill-conditioning and the large dimension make the task of numerically solving this kind of problem very challenging. In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that enhance the performance of the originating method. In order to ensure the accuracy of the constructed algorithms we insert a priori knowledge on the exact solution and strengthen the regularization term. By exploiting the structure of the problem we are also able to achieve fast computation even when the size of the problem becomes very large. We construct algorithms that enforce constraints on the reconstruction, like nonnegativity or flux conservation, and exploit enhanced versions of the Euclidean norm using a regularization operator and different semi-norms, like the Total Variation, for the regularization term. For most of the proposed algorithms we provide efficient strategies for the choice of the regularization parameters, which, most of the time, rely on knowledge of the norm of the noise that corrupts the data. For each method we analyze the theoretical properties in the finite-dimensional case or in the more general case of Hilbert spaces. Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency.
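    The simplest example of iterating a Tikhonov step while enforcing a constraint is iterated Tikhonov with projection, sketched below. This is a generic textbook scheme meant only to illustrate the thesis theme; it is not any specific algorithm from the thesis, and the projection-by-clipping constraint handling is an assumption of the sketch.

```python
import numpy as np

def iterated_tikhonov(A, b, lam, n_iter=5, lower=0.0):
    """Iterated Tikhonov: each step solves the regularized normal
    equations (A^T A + lam I) h = A^T (b - A x) for a correction h,
    then projects x + h onto the constraint set (here x >= lower)."""
    n = A.shape[1]
    AtA = A.T @ A
    x = np.zeros(n)
    for _ in range(n_iter):
        h = np.linalg.solve(AtA + lam * np.eye(n), A.T @ (b - A @ x))
        x = np.maximum(x + h, lower)   # enforce the constraint by projection
    return x
```

    Iterating reduces the bias a single Tikhonov step introduces, which is one sense in which such schemes "enhance the originating method".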

    Regularization parameter selection methods for ill-posed Poisson imaging problems

    A common problem in imaging science is to estimate some underlying true image given noisy measurements of image intensity. When image intensity is measured by counting incident photons emitted by the object of interest, the data noise is accurately modeled by a Poisson distribution, which motivates the use of Poisson maximum likelihood estimation. When the underlying model equation is ill-posed, regularization must be employed. I will present a computational framework for solving such problems, including statistically motivated methods for choosing the regularization parameter. Numerical examples will be included.
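    The Poisson maximum likelihood objective, and the classical Richardson-Lucy (EM) iteration that minimizes it, can be sketched as follows. This is standard background, not the thesis framework itself; early stopping of the iteration then plays the role of regularization, and the choice of when to stop is one form of the parameter selection problem.

```python
import numpy as np

def poisson_neg_log_likelihood(x, A, b):
    """Negative Poisson log-likelihood (up to a constant) for counts b
    with mean A x; Poisson maximum likelihood minimizes this in x."""
    Ax = A @ x
    return np.sum(Ax - b * np.log(Ax))

def richardson_lucy(A, b, n_iter=50):
    """Multiplicative EM / Richardson-Lucy iteration for Poisson ML;
    the update preserves nonnegativity automatically."""
    x = np.ones(A.shape[1])
    ones = np.ones(A.shape[0])
    for _ in range(n_iter):
        x = x * (A.T @ (b / (A @ x))) / (A.T @ ones)
    return x
```

    When A is well conditioned the iteration converges to the ML estimate; for ill-posed A the early iterates are the useful, regularized ones.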

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we shall provide an overview of methods currently developed and recent results, as well as some computational studies providing a comparison of different methods and also illustrating their success in applications.

    Comment: 60 pages, 33 figures
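    The canonical splitting method for this smooth-plus-nonsmooth structure is forward-backward splitting (ISTA), sketched here for the $\ell_1$ regularizer, whose proximal operator is soft-thresholding. This is a generic illustration of the class of algorithms the survey covers, with names and defaults chosen for the sketch.

```python
import numpy as np

def ista(K, f, alpha, n_iter=200):
    """Forward-backward splitting for 0.5 ||K u - f||^2 + alpha ||u||_1:
    a gradient (forward) step on the smooth fidelity term, then the prox
    (backward) step of the nonsmooth l1 term, i.e. soft-thresholding."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1 / Lipschitz constant of the gradient
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)             # gradient of the fidelity term
        v = u - step * grad                  # forward step
        u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)  # prox step
    return u
```

    Replacing the soft-thresholding prox with the prox of another convex regularizer (e.g. total variation, computed by an inner solver) gives the corresponding splitting method for that model.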
