    A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration

    Multilevel Approach For Signal Restoration Problems With Toeplitz Matrices

    We present a multilevel method for discrete ill-posed problems arising from the discretization of Fredholm integral equations of the first kind. In this method, we use the Haar wavelet transform to define restriction and prolongation operators within a multigrid-type iteration. The choice of the Haar wavelet operator has the advantage of preserving matrix structure, such as Toeplitz structure, between grids, which can be exploited to obtain faster solvers on each level, where an edge-preserving Tikhonov regularization is applied. Finally, we present results that indicate the promise of this approach for the restoration of signals and images with edges.
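
    As a quick illustration of the structure-preservation claim above, the sketch below builds a symmetric Toeplitz blurring matrix from a Gaussian point-spread function, forms a Haar (pairwise-averaging) restriction operator, and checks that the Galerkin coarse-grid operator R A R^T is still Toeplitz. The Gaussian kernel, the matrix size, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

def haar_restriction(n):
    """Haar low-pass restriction: averages adjacent pairs of entries (n assumed even)."""
    R = np.zeros((n // 2, n))
    for i in range(n // 2):
        R[i, 2 * i] = R[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)
    return R

def is_toeplitz(M, tol=1e-10):
    """True if every diagonal of M is (numerically) constant."""
    n = M.shape[0]
    return all(np.ptp(np.diag(M, k)) < tol for k in range(-n + 1, n))

# Fine-level Toeplitz blurring matrix from an assumed Gaussian point-spread function.
n = 64
psf = np.exp(-0.1 * np.arange(n) ** 2)
A = toeplitz(psf)

R = haar_restriction(n)
A_coarse = R @ A @ R.T        # Galerkin coarse-grid operator

print(is_toeplitz(A), is_toeplitz(A_coarse))   # both True: Toeplitz structure survives coarsening
```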

    Iterative algorithms based on decoupling of deblurring and denoising for image restoration

    In this paper, we propose iterative algorithms for solving image restoration problems. The iterative algorithms are based on decoupling the deblurring and denoising steps in the restoration process. In the deblurring step, an efficient deblurring method using fast transforms can be employed. In the denoising step, effective methods such as the wavelet shrinkage denoising method or the total variation denoising method can be used. The main advantage of this proposal is that the resulting algorithms can be very efficient and can produce restored images of better visual quality and signal-to-noise ratio than those obtained by restoration methods that combine a data-fitting term and a regularization term. The convergence of the proposed algorithms is shown in the paper. Numerical examples are also given to demonstrate the effectiveness of these algorithms. © 2008 Society for Industrial and Applied Mathematics.
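
    The decoupling idea can be sketched as alternating a fast FFT-based deblurring step with a simple one-level Haar shrinkage denoising step, as below. The periodic boundary model, the choice of Haar shrinkage as the denoiser, the function names, and all parameter values are assumptions made for illustration; this is not the paper's exact algorithm or its convergence conditions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink(x, t):
    """One-level 2-D Haar transform, soft-threshold the detail bands, invert."""
    p00, p10 = x[0::2, 0::2], x[1::2, 0::2]
    p01, p11 = x[0::2, 1::2], x[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 2            # approximation band (kept)
    h = (p00 - p10 + p01 - p11) / 2
    v = (p00 + p10 - p01 - p11) / 2
    d = (p00 - p10 - p01 + p11) / 2
    h, v, d = (soft_threshold(c, t) for c in (h, v, d))
    out = np.empty_like(x)
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[1::2, 0::2] = (a - h + v - d) / 2
    out[0::2, 1::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out

def restore(y, H, n_iter=30, beta=0.1, thresh=0.02):
    """Alternate a denoising step with an FFT deblurring step that solves
    (H^* H + beta I) x = H^* y + beta z exactly in the Fourier domain."""
    x = y.copy()
    for _ in range(n_iter):
        z = haar_shrink(x, thresh)             # denoising step (stand-in for wavelet/TV denoising)
        num = np.conj(H) * np.fft.fft2(y) + beta * np.fft.fft2(z)
        x = np.real(np.fft.ifft2(num / (np.abs(H) ** 2 + beta)))   # deblurring step
    return x

# Illustrative use: blur an image with a Gaussian PSF (periodic model), add noise, restore.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
g = np.exp(-0.5 * (np.arange(128) - 64) ** 2 / 4.0)
psf = np.outer(g, g); psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img))) + 0.01 * rng.standard_normal((128, 128))
restored = restore(blurred, H)
```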

    Image Fusion via Sparse Regularization with Non-Convex Penalties

    The L1-norm regularized least squares method is often used for finding sparse approximate solutions and is widely used in 1-D signal restoration. Basis pursuit denoising (BPD) performs noise reduction in this way. However, a shortcoming of L1-norm regularization is that it underestimates the true solution. Recently, a class of non-convex penalties has been proposed to improve this situation. Each such penalty function is non-convex itself but preserves the convexity of the overall cost function. This approach has been shown to offer good performance in 1-D signal denoising. This paper extends the aforementioned method to 2-D signals (images) and applies it to multisensor image fusion. The problem is posed as an inverse problem, and a corresponding cost function is judiciously designed to include two data-attachment terms. The whole cost function is proved to be convex for a suitably chosen non-convex penalty, so that its minimization can be tackled by convex optimization approaches, which involve only simple computations. The performance of the proposed method is benchmarked against a number of state-of-the-art image fusion techniques, and superior performance is demonstrated both visually and in terms of various assessment measures.
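
    A minimal 1-D sketch of the convexity-preserving non-convex penalty idea, assuming the minimax-concave (MC) penalty, whose proximal operator is the firm threshold: a forward-backward iteration is applied to 0.5||y - Ax||^2 plus the separable MC penalty, and the overall cost stays convex when the penalty parameter a does not exceed the smallest eigenvalue of A^T A. The single data term, the solver, the function names, and the parameter values are illustrative assumptions; the paper's fusion cost with two data-attachment terms is not reproduced here.

```python
import numpy as np

def firm_threshold(v, lo, hi):
    """Firm (two-threshold) shrinkage: the proximal operator of the minimax-concave
    (MC) penalty. Zero below lo, identity above hi, linear ramp in between, so large
    entries are shrunk less than with soft-thresholding."""
    return np.where(np.abs(v) <= lo, 0.0,
           np.where(np.abs(v) >= hi, v,
                    np.sign(v) * hi * (np.abs(v) - lo) / (hi - lo)))

def fb_mc(A, y, lam, a, n_iter=500):
    """Forward-backward iteration for 0.5*||y - A x||^2 + sum_i MC(x_i; lam, a).
    The overall cost remains convex when a <= smallest eigenvalue of A^T A."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    t = 1.0 / L                            # step size; the prox below needs t * a < 1
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic data term
        x = firm_threshold(x - t * g, t * lam, lam / a)
    return x

# Illustrative use: recover a sparse spike train from noisy random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50)) / np.sqrt(100)
x_true = np.zeros(50); x_true[[5, 20, 40]] = [2.0, -3.0, 1.5]
y = A @ x_true + 0.05 * rng.standard_normal(100)
a = np.linalg.eigvalsh(A.T @ A).min()      # largest a that keeps the cost convex
x_hat = fb_mc(A, y, lam=0.1, a=a)
```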

    MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization

    Composite convex optimization models arise in several applications and are especially prevalent in inverse problems with a sparsity-inducing norm and in general convex optimization with simple constraints. The most widely used algorithms for convex composite models are accelerated first-order methods; however, they can take a large number of iterations to compute an acceptable solution for large-scale problems. In this paper we propose to speed up first-order methods by taking advantage of the structure present in many applications, and in image processing in particular. Our method is based on multi-level optimization methods and exploits the fact that many applications that give rise to large-scale models can be modelled using varying degrees of fidelity. We use Nesterov's acceleration techniques together with the multi-level approach to achieve a convergence rate of $\mathcal{O}(1/\sqrt{\epsilon})$, where $\epsilon$ denotes the desired accuracy. The proposed method has a better convergence rate than any other existing multi-level method for convex problems and, in addition, has the same rate as accelerated methods, which is known to be optimal for first-order methods. Moreover, as our numerical experiments show, on large-scale face recognition problems our algorithm is several times faster than the state of the art.
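
    The flavour of combining Nesterov extrapolation with coarse-level corrections can be sketched on a least-squares toy problem, as below. The pairwise-averaging restriction, the exact coarse solve, the fixed schedule of coarse steps, and the function name are assumptions made for illustration; this is a sketch of the multilevel-plus-acceleration idea, not MAGMA itself.

```python
import numpy as np

def nesterov_multilevel(A, b, n_iter=100, coarse_every=10):
    """Toy two-level accelerated gradient method for f(x) = 0.5*||A x - b||^2:
    Nesterov extrapolation on the fine level, plus an occasional coarse correction
    obtained by restricting the gradient, minimizing the coarse quadratic model
    exactly, and prolonging the step back to the fine level."""
    n = A.shape[1]                                   # assumed even
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad f
    R = np.zeros((n // 2, n))                        # pairwise-averaging restriction
    for i in range(n // 2):
        R[i, 2 * i] = R[i, 2 * i + 1] = 0.5
    H = A.T @ A
    H_c = R @ H @ R.T                                # Galerkin coarse Hessian
    x = np.zeros(n); v = x.copy(); tk = 1.0
    for k in range(n_iter):
        g = H @ v - A.T @ b                          # gradient at the extrapolated point
        if k > 0 and k % coarse_every == 0:
            d_c = np.linalg.solve(H_c, R @ g)        # exact coarse-model minimizer
            x_new = v - R.T @ d_c                    # prolonged coarse correction
        else:
            x_new = v - g / L                        # standard gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2))
        v = x_new + ((tk - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, tk = x_new, t_new
    return x

# Illustrative use on a random overdetermined least-squares problem.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 64))
b = rng.standard_normal(200)
x_hat = nesterov_multilevel(A, b)
```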