12 research outputs found

    On the choice of regularization matrix for an ℓ2-ℓq minimization method for image restoration

    Ill-posed problems arise in many areas of science and engineering. Their solutions, if they exist, are very sensitive to perturbations in the data. To reduce this sensitivity, the original problem may be replaced by a minimization problem with a fidelity term and a regularization term. We consider minimization problems of this kind, in which the fidelity term is the square of the ℓ2-norm of a discrepancy and the regularization term is the qth power of the ℓq-norm of the size of the computed solution measured in some manner. We are interested in the situation when
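    As a sketch of the functional described above (with hypothetical symbols, since the abstract fixes none of them: A the forward operator, b the data, L the regularization matrix of the title, and μ > 0 the regularization parameter), the minimization problem takes the form

        \min_{x} \; \|Ax - b\|_2^2 + \mu \, \|Lx\|_q^q

    with the admissible range of q left open here, since the abstract is truncated.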

    Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum

    This paper applies the idea of adaptive momentum from the nonlinear conjugate gradient method to accelerate optimization problems in sparse recovery. Specifically, we consider two types of minimization problems: a (single) differentiable function and the sum of a non-smooth function and a differentiable function. In the first case, we adopt a fixed step size to avoid the traditional line search and establish the convergence analysis of the proposed algorithm for a quadratic problem. This acceleration is further combined with an operator splitting technique to deal with the non-smooth function in the second case. We use the convex ℓ1 and the nonconvex ℓ1−ℓ2 functionals as two case studies to demonstrate the efficiency of the proposed approaches over traditional methods
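    A minimal sketch of the second setting (smooth plus non-smooth) for the ℓ1 case, assuming a least-squares fidelity term. Plain Nesterov-style momentum stands in for the paper's NCG-derived momentum, and the fixed step size 1/||A||² replaces a line search; all names are placeholders:

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def accelerated_prox_grad(A, b, lam, iters=500):
        # min_x 0.5*||Ax - b||^2 + lam*||x||_1 via operator splitting:
        # gradient step on the smooth term, prox step on the ell_1 term,
        # and a momentum extrapolation between iterates.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # fixed step, no line search
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(iters):
            grad = A.T @ (A @ y - b)             # gradient of the smooth part
            x_new = soft_threshold(y - step * grad, step * lam)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x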

    A comparison of parameter choice rules for ℓp - ℓq minimization

    Images that have been contaminated by various kinds of blur and noise can be restored by the minimization of an ℓp-ℓq functional. The quality of the reconstruction depends on the choice of a regularization parameter. Several approaches to determine this parameter have been described in the literature. This work presents a numerical comparison of known approaches as well as of a new one
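    The abstract does not name the rules it compares; as one concrete example of a parameter choice rule, here is a sketch of the discrepancy principle for the quadratic special case p = q = 2 (Tikhonov), assuming the noise level delta is known:

    import numpy as np

    def discrepancy_principle_mu(A, b, delta, tau=1.01, lo=1e-10, hi=1e10, iters=60):
        # Pick mu so that the residual of the Tikhonov solution
        #   x_mu = argmin ||Ax - b||^2 + mu*||x||^2
        # roughly matches the noise level: ||A x_mu - b|| ~ tau*delta.
        AtA, Atb = A.T @ A, A.T @ b
        eye = np.eye(AtA.shape[0])
        def residual(mu):
            x = np.linalg.solve(AtA + mu * eye, Atb)
            return np.linalg.norm(A @ x - b)
        for _ in range(iters):
            mid = np.sqrt(lo * hi)               # bisection on a log scale
            if residual(mid) < tau * delta:      # residual increases with mu
                lo = mid                         # mu too small: under-regularized
            else:
                hi = mid
        return np.sqrt(lo * hi)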

    Fractional graph Laplacian for image reconstruction

    Image reconstruction problems, like image deblurring and computed tomography, are usually ill-posed and require regularization. A popular approach to regularization is to substitute the original problem with an optimization problem that minimizes the sum of two terms, an ℓ2 term and an ℓq term with 0 < q ≤ 1. The first penalizes the distance between the measured data and the reconstructed data, the latter imposes sparsity on some features of the computed solution. In this work, we propose to use the fractional Laplacian of a properly constructed graph in the ℓq term to compute extremely accurate reconstructions of the desired images. A simple model with a fully automatic method, i.e., one that does not require the tuning of any parameter, is used to construct the graph, and enhanced diffusion on the graph is achieved with the use of a fractional exponent in the Laplacian operator. Since the fractional Laplacian is a global operator, i.e., its matrix representation is completely full, it cannot be formed and stored. We propose to replace it with an approximation in an appropriate Krylov subspace. We show that the algorithm is a regularization method under some reasonable assumptions. Some selected numerical examples in image deblurring and computed tomography show the performance of our proposal
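    A minimal sketch of the fractional graph Laplacian itself, for a graph small enough to diagonalize; the weight matrix W and exponent alpha are generic placeholders. As the abstract notes, for image-sized graphs L^alpha is completely full, so the paper approximates its action in a Krylov subspace instead:

    import numpy as np

    def fractional_laplacian_apply(W, alpha, x):
        # Apply L^alpha to x, where L = D - W is the Laplacian of a small
        # weighted graph. Dense eigendecomposition here; a Krylov-subspace
        # approximation would replace this step for large graphs.
        L = np.diag(W.sum(axis=1)) - W
        vals, vecs = np.linalg.eigh(L)           # L is symmetric PSD
        frac_vals = np.clip(vals, 0.0, None) ** alpha
        return vecs @ (frac_vals * (vecs.T @ x))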

    Composite Minimization: Proximity Algorithms and Their Applications

    Image and signal processing problems of practical importance, such as incomplete data recovery and compressed sensing, are often modeled as nonsmooth optimization problems whose objective functions are the sum of two terms, each of which is the composition of a prox-friendly function with a matrix. Therefore, there is a practical need to solve such optimization problems. Besides the nondifferentiability of the objective functions of the associated optimization problems and the large dimension of the underlying images and signals, the sum of the objective functions is not, in general, prox-friendly, which makes solving the problems challenging. Many algorithms have been proposed in the literature to attack these problems by making use of the prox-friendly functions in the problems. However, the efficiency of these algorithms relies heavily on the underlying structures of the matrices, particularly for large scale optimization problems. In this dissertation, we propose a novel algorithmic framework that exploits the availability of the prox-friendly functions without requiring any structural information of the matrices. This makes our algorithms suitable for large scale optimization problems of interest. We also prove the convergence of the developed algorithms. This dissertation has three main parts.
    In part 1, we consider the minimization of functions that are the sum of the compositions of prox-friendly functions with matrices. We characterize the solutions to the associated optimization problems as the solutions of fixed point equations that are formulated in terms of the proximity operators of the duals of the prox-friendly functions. By making use of the flexibility provided by this characterization, we develop a block Gauss-Seidel iterative scheme for finding a solution to the optimization problem and prove its convergence. We discuss the connection of our developed algorithms with some existing ones and point out the advantages of our proposed scheme.
    In part 2, we give a comprehensive study of the computation of the proximity operator of the ℓp-norm with 0 ≤ p < 1. Nonconvexity and nonsmoothness have been recognized as important features of many optimization problems in image and signal processing. The nonconvex, nonsmooth ℓp-regularization has been recognized as an efficient tool to identify the sparsity of wavelet coefficients of an image or signal under investigation. To solve an ℓp-regularized optimization problem, the proximity operator of the ℓp-norm needs to be computed in an accurate and computationally efficient way. We first study the general properties of the proximity operator of the ℓp-norm. Then, we derive the explicit form of the proximity operators of the ℓp-norm for p ∈ {0, 1/2, 2/3, 1}. Using these explicit forms and the properties of the proximity operator of the ℓp-norm, we develop an efficient algorithm to compute the proximity operator of the ℓp-norm for any p between 0 and 1.
    In part 3, the usefulness of the research results developed in the previous two parts is demonstrated in two types of applications, namely image restoration and compressed sensing. A comparison with the results from some existing algorithms is also presented. For image restoration, the results developed in part 1 are applied to solve the ℓ2-TV and ℓ1-TV models. The resulting restored images have higher peak signal-to-noise ratios, and the developed algorithms require less CPU time, than state-of-the-art algorithms. In addition, for compressed sensing applications, our algorithm has smaller ℓ2- and ℓ∞-errors and shorter computation times than state-of-the-art algorithms. For compressed sensing with the ℓp-regularization, our numerical simulations show smaller ℓ2- and ℓ∞-errors than those from the ℓ0-regularization and ℓ1-regularization. In summary, our numerical simulations indicate that not only can our developed algorithms be applied to a wide variety of important optimization problems, but they are also more accurate and computationally efficient than state-of-the-art algorithms
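    Of the explicit forms mentioned in part 2, the endpoint cases p = 1 and p = 0 are the simplest to state; a minimal sketch (the closed forms for p = 1/2 and p = 2/3 derived in the dissertation are omitted here):

    import numpy as np

    def prox_l1(x, lam):
        # Proximity operator of lam*||.||_1: soft thresholding, componentwise.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def prox_l0(x, lam):
        # Proximity operator of lam*||.||_0: hard thresholding; an entry is
        # kept only when x_i^2/2 > lam, i.e. keeping it costs less than
        # zeroing it out.
        return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)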

    Sparse and Redundant Representations for Inverse Problems and Recognition

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts.
    In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method.
    In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique handles both random and restricted sampling scenarios more flexibly than its competitors.
    In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements.
    The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class from given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented
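    A minimal sketch of the recognition rule in the last part: a test sample is represented in each class dictionary and assigned to the class with the smallest residual. Least squares stands in here for the sparse approximation the dissertation uses, and all names are placeholders:

    import numpy as np

    def classify_by_residual(y, class_dictionaries):
        # One learned dictionary per class; pick the class whose atoms
        # represent y with the smallest reconstruction error.
        residuals = []
        for D in class_dictionaries:
            coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)
            residuals.append(np.linalg.norm(y - D @ coeffs))
        return int(np.argmin(residuals))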

    Multiscale and High-Dimensional Problems

    High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples are big data analysis, including regression and classification, which typically encounters high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both theoretical and numerical perspectives. The last decade has seen the emergence of several new computational methodologies that address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution methods, which prioritize variables and separate solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes. This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume