64 research outputs found

    A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection

    We propose a new space-variant anisotropic regularisation term for variational image restoration, based on the statistical assumption that the gradients of the target image are locally distributed according to a bivariate generalised Gaussian distribution. The highly flexible variational structure of the corresponding regulariser encodes several free parameters, which hold the potential for faithfully modelling the local geometry of the image and describing local orientation preferences. For an automatic estimation of such parameters, we design a robust maximum-likelihood approach and report results on its reliability on synthetic data and natural images. For the numerical solution of the corresponding image restoration model, we use an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM). A suitable preliminary variable splitting, together with a novel result in multivariate non-convex proximal calculus, yields a very efficient minimisation algorithm. Several numerical results are reported, showing significant quality improvement of the proposed model over related state-of-the-art competitors, in particular in terms of texture and detail preservation.
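The ADMM machinery the abstract refers to can be illustrated on a much simpler relative: a 1-D TV-regularised denoiser with the classical splitting z = Dx. This is a generic sketch with plain (non-weighted) TV, dense matrices, and made-up parameter values, not the paper's space-variant anisotropic model:

```python
import numpy as np

def admm_tv_denoise(y, lam=0.5, rho=1.0, n_iter=300):
    """ADMM for min_x 0.5||x - y||^2 + lam*||D x||_1, with splitting z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # forward-difference matrix, (n-1) x n
    x = y.copy()
    z = D @ x
    u = np.zeros(n - 1)                     # scaled dual variable
    A_inv = np.linalg.inv(np.eye(n) + rho * (D.T @ D))
    for _ in range(n_iter):
        x = A_inv @ (y + rho * D.T @ (z - u))                     # quadratic x-update
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # soft-threshold z-update
        u += D @ x - z                                            # dual ascent
    return x
```

The x-update solves a linear system (precomputed inverse here for brevity), while the z-update is the closed-form proximal map of the ℓ1 term; more elaborate regularisers change only that proximal step.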

    Joint Image Reconstruction and Segmentation Using the Potts Model

    We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called the piecewise-constant Mumford-Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method requires no a priori knowledge of the gray levels or of the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we consider in particular limited-data situations. For instance, our method is able to recover all segments of the Shepp-Logan phantom from 77 angular views only. We illustrate the practical applicability on a real PET dataset. As further applications, we consider spherical Radon data as well as blurred data.
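The discrete 1-D Potts problem underlying this line of work admits an exact dynamic-programming solution; the sketch below is that classical toy solver (our own implementation, not the paper's splitting method for general inverse problems):

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact dynamic program for min_x ||x - y||^2 + gamma * (#jumps of x)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y * y)])  # prefix sums of squares

    def seg_cost(l, r):
        # squared error of the best constant fit (the mean) on y[l:r]
        m = s[r] - s[l]
        return (s2[r] - s2[l]) - m * m / (r - l)

    B = np.full(n + 1, np.inf)      # B[r]: optimal energy of y[:r]
    prev = np.zeros(n + 1, dtype=int)
    B[0] = -gamma                   # so the first segment incurs no jump penalty
    for r in range(1, n + 1):
        for l in range(r):
            c = B[l] + gamma + seg_cost(l, r)
            if c < B[r]:
                B[r], prev[r] = c, l
    x = np.empty(n)
    r = n
    while r > 0:                    # backtrack the optimal segment boundaries
        l = prev[r]
        x[l:r] = (s[r] - s[l]) / (r - l)
        r = l
    return x
```

Note that, as the abstract emphasises, no number of segments or set of gray levels is fixed in advance: both emerge from the minimisation.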

    Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization

    We consider the restoration of piecewise constant images, where the number of regions and their values are not fixed in advance and the values of neighboring regions differ markedly, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). We thus also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method in which one tracks the minimizers along a sequence of approximate nonsmooth energies {J_ε}, the first of which is strictly convex and the last one the original energy to minimize. Given the importance of the nonsmoothness of the regularization term for the segmentation task, each J_ε is nonsmooth and is expressed as the sum of an l1 regularization term and a smooth nonconvex function. Furthermore, the local minimization of each J_ε is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by a modified primal-dual interior point method, which guarantees a descent direction at each step. Experimental results are presented and show the effectiveness and efficiency of the proposed method. Comparison with simulated annealing methods further shows the advantage of our method.
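The continuation idea (a convex first energy, progressively more non-convex successors, each warm-started from the last) can be sketched in a toy denoising setting. The penalty g_eps below is our own illustrative choice of "l1 plus smooth nonconvex" decomposition, and the inner solver is plain proximal gradient rather than the paper's interior-point method:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def continuation_denoise(y, lam=1.0, eps_seq=(0.0, 0.5, 1.0, 2.0, 4.0),
                         step=0.2, n_inner=300):
    """Each stage minimises J_eps(x) = 0.5||x - y||^2 + lam*||x||_1 + lam*g_eps(x)
    by proximal gradient, warm-started from the previous stage. Here
    g_eps(x) = (1/eps)*log(1 + eps*|x|) - |x| is smooth and concave, so J_0 is
    convex (pure l1) and the stages grow progressively more non-convex."""
    x = np.asarray(y, dtype=float).copy()
    for eps in eps_seq:
        for _ in range(n_inner):
            if eps == 0.0:
                g = np.zeros_like(x)
            else:
                # gradient of the smooth concave correction g_eps
                g = -np.sign(x) * eps * np.abs(x) / (1.0 + eps * np.abs(x))
            x = soft(x - step * (x - y + lam * g), step * lam)
    return x
```

The effect the abstract relies on is visible here: the final non-convex stages shrink large entries far less than the convex l1 stage does, while small entries stay exactly at zero.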

    On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    We consider a variable-metric linesearch-based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm proves to be flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
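A minimal fixed-metric version of such a linesearch-based proximal-gradient scheme, with the nonsmooth term fixed to lam*||.||_1 and an Armijo-type backtracking on the smooth part f (the variable-metric and KŁ-based convergence machinery of the paper is omitted):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_backtrack(x0, f, grad_f, lam, n_iter=50, shrink=0.5):
    """Proximal gradient with backtracking: halve the step t until the
    standard sufficient-decrease condition on f holds, then accept the
    soft-thresholded (l1-proximal) point."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        g = grad_f(x)
        t = 1.0
        while True:
            z = soft(x - t * g, t * lam)
            d = z - x
            # quadratic upper-bound test: f(z) <= f(x) + <g, d> + ||d||^2/(2t)
            if f(z) <= f(x) + g @ d + 0.5 / t * (d @ d) or t < 1e-12:
                break
            t *= shrink
        x = z
    return x

# Toy problem: min_x 0.5||x - y||^2 + lam*||x||_1  (solution: soft(y, lam))
y = np.array([3.0, 0.2])
f = lambda x: 0.5 * np.sum((x - y) ** 2)
grad_f = lambda x: x - y
x = prox_grad_backtrack(np.zeros(2), f, grad_f, lam=1.0)
```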

    ์ฝ”์‹œ์žก์Œ ์ œ๊ฑฐ๋ฅผ ์œ„ํ•œ ๋ณ€๋ถ„๋ฒ•์  ์ ‘๊ทผ

    Doctoral dissertation, Department of Mathematical Sciences, College of Natural Sciences, Seoul National University, February 2020 (advisor: Myungjoo Kang). In image processing, noise removal is one of the most important problems. In this thesis, we study Cauchy noise removal by variational approaches. Cauchy noise occurs often in engineering applications; however, because the corresponding variational model is non-convex, it is difficult to solve and has not been studied much. To denoise Cauchy noise, we use the non-convex alternating direction method of multipliers (ADMM) and present two variational models. The first is the fractional total variation (FTV) model; FTV is built on the fractional derivative, which extends integer-order differentiation to real orders. The second is the weighted nuclear norm model; the weighted nuclear norm performs excellently in low-level vision tasks. We combine our new ideas with weighted nuclear norm minimization to achieve better results than existing models in Cauchy noise removal. Finally, we demonstrate the superiority of the proposed models in numerical experiments.
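Why the Cauchy data term is non-convex yet worth the trouble can be seen in a scalar toy problem: estimating a location from heavy-tailed samples by gradient descent on the Cauchy negative log-likelihood. This is our own illustration, not a method from the thesis:

```python
import numpy as np

def cauchy_location(samples, gamma=1.0, step=0.1, n_iter=500):
    """Gradient descent on the non-convex Cauchy negative log-likelihood
    sum_i log(gamma^2 + (x - y_i)^2). In non-convex problems a sensible
    initialisation (here: the median) matters."""
    x = np.median(samples)
    for _ in range(n_iter):
        d = x - samples
        x -= step * np.mean(2.0 * d / (gamma ** 2 + d ** 2))
    return x
```

Because the per-sample gradient 2d/(gamma^2 + d^2) decays for large residuals, a gross outlier barely moves the estimate, whereas it wrecks the least-squares (Gaussian) estimate, i.e., the sample mean.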

    A Unified Bregman Alternating Minimization Algorithm for Generalized DC Programming with Application to Imaging Data

    In this paper, we consider a class of nonconvex (not necessarily differentiable) optimization problems called generalized DC (Difference-of-Convex functions) programming, which minimizes the sum of two separable DC parts and one two-block-variable coupling function. To circumvent the nonconvexity and nonseparability of the problem under consideration, we introduce a Unified Bregman Alternating Minimization Algorithm (UBAMA) that maximally exploits the favorable DC structure of the objective. Specifically, we first follow the spirit of alternating minimization to update each block variable in sequential order, which efficiently tackles the nonseparability caused by the coupling function. Then, we employ the Fenchel-Young inequality to approximate the second DC components (i.e., the concave parts) so that each subproblem reduces to a convex optimization problem, thereby alleviating the computational burden of the nonconvex DC parts. Moreover, each subproblem absorbs a Bregman proximal regularization term, which in many cases induces closed-form subproblem solutions for appropriate choices of the Bregman kernel functions. Remarkably, our algorithm not only provides a framework for understanding the iterative schemes of some existing algorithms, but also admits implementable schemes with easier subproblems than some state-of-the-art first-order algorithms developed for generic nonconvex and nonsmooth optimization problems. Theoretically, we prove that the sequence generated by our algorithm globally converges to a critical point under the Kurdyka-Łojasiewicz (KŁ) condition. Besides, we estimate the local convergence rates of our algorithm when the KŁ exponent is known a priori. Comment: 44 pages, 7 figures, 5 tables. Any comments are welcome.
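The core DC mechanism that UBAMA builds on (linearise the concave part with a subgradient, then solve the resulting convex subproblem in closed form) can be sketched on the classical l1-minus-l2 sparsity penalty. This is generic DCA on a made-up instance, not UBAMA itself:

```python
import numpy as np

def dca_l1_minus_l2(y, lam=0.5, n_iter=50):
    """DC algorithm for min_x 0.5||x - y||^2 + lam*(||x||_1 - ||x||_2).
    The concave part -lam*||x||_2 is replaced by its tangent at the current
    iterate, leaving a convex l1 problem solved by soft-thresholding."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        nrm = np.linalg.norm(x)
        w = x / nrm if nrm > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        v = y + lam * w                                # linearised subproblem data
        x = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    return x
```

Note how the DC structure turns a nonconvex problem into a short loop of closed-form convex steps; the Bregman proximal terms in the paper play the same subproblem-simplifying role in a far more general setting.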

    Combining Weighted Total Variation and Deep Image Prior for natural and medical image restoration via ADMM

    In recent decades, unsupervised deep-learning-based methods have caught researchers' attention, since in many real applications, such as medical imaging, collecting a large number of training examples is not always feasible. Moreover, constructing a good training set is time-consuming and hard, because the selected data have to be sufficiently representative of the task. In this paper, we focus on the Deep Image Prior (DIP) framework and propose to combine it with a space-variant Total Variation regularizer featuring automatic estimation of the local regularization parameters. Unlike other existing approaches, we solve the arising minimization problem via the flexible Alternating Direction Method of Multipliers (ADMM). Furthermore, we provide a specific implementation also for the standard isotropic Total Variation. The promising performance of the proposed approach, in terms of PSNR and SSIM values, is demonstrated through several experiments on simulated as well as real natural and medical corrupted images. Comment: conference paper.
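In such an ADMM splitting, the space-variant TV term typically enters only through its proximal map, which with per-pixel weights w_i reduces to a per-pixel soft-threshold. A minimal sketch in our own notation (not the paper's exact update, which also involves the DIP network):

```python
import numpy as np

def weighted_tv_prox(v, w, rho):
    """Proximal map of sum_i (w_i/rho)*|z_i| evaluated at v: the z-update of
    an ADMM scheme for space-variant (weighted) TV. Larger local weights w_i
    threshold more aggressively, i.e., regularize that pixel more strongly."""
    return np.sign(v) * np.maximum(np.abs(v) - w / rho, 0.0)
```

Choosing all w_i equal recovers the standard isotropic-TV update mentioned in the abstract; the automatic parameter estimation amounts to picking the map w.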