Variational models for multiplicative noise removal
Thesis (Ph.D.) -- Seoul National University, College of Natural Sciences, Department of Mathematical Sciences, August 2017.

This dissertation discusses variational partial differential equation (PDE) models for the restoration of images corrupted by multiplicative Gamma noise. The two proposed models are suited to the heavy multiplicative noise often seen in applications. First, we propose a total variation (TV) based model with local constraints; the local constraints involve multiple local windows and are related to a spatially adaptive regularization parameter (SARP). Convergence analysis, including the existence and uniqueness of a solution, is also provided. The second model extends the first one using a nonconvex version of the total generalized variation (TGV). The nonconvex TGV regularization efficiently denoises smooth regions, avoiding the staircasing artifacts that appear in TV-based models, while preserving edges and details.

1. Introduction
2. Previous works
2.1 Variational models for image denoising
2.1.1 Convex and nonconvex regularizers
2.1.2 Variational models for multiplicative noise removal
2.2 Proximal linearized alternating direction method of multipliers
3. Proposed models
3.1 Proposed model 1: exp-TV model with SARP
3.1.1 Derivation of our model
3.1.2 Proposed TV model with local constraints
3.1.3 A SARP algorithm for solving model (3.1.16)
3.1.4 Numerical results
3.2 Proposed model 2: exp-NTGV model with SARP
3.2.1 Proposed NTGV model
3.2.2 Updating rule for the regularization parameter in (3.2.1)
3.2.3 Algorithm for solving the proposed model (3.2.1)
3.2.4 Numerical results
3.2.5 Selection of parameters
3.2.6 Image denoising
4. Conclusion
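To make the setting concrete, a TV model for multiplicative Gamma noise can be minimized by plain gradient descent in the log domain w = log u, where the Gamma fidelity takes the form lam * sum(w + f * exp(-w)). This is a minimal generic sketch, not the dissertation's SARP scheme; the function name, step size, and TV smoothing parameter `eps` are assumptions:

```python
import numpy as np

def tv_gamma_denoise(f, lam=1.0, eps=1e-3, step=0.05, iters=200):
    """Gradient-descent sketch of TV denoising for multiplicative Gamma noise.

    Works in the log domain w = log(u): the Gamma fidelity becomes
    lam * sum(w + f * exp(-w)), and the regularizer is a smoothed TV of w.
    """
    w = np.log(np.maximum(f, 1e-6))
    for _ in range(iters):
        # forward differences (replicated boundary)
        wx = np.diff(w, axis=1, append=w[:, -1:])
        wy = np.diff(w, axis=0, append=w[-1:, :])
        mag = np.sqrt(wx**2 + wy**2 + eps**2)
        px, py = wx / mag, wy / mag
        # backward-difference divergence = negative gradient of smoothed TV
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        grad = -div + lam * (1.0 - f * np.exp(-w))
        w -= step * grad
    return np.exp(w)
```

With a spatially adaptive regularization parameter (the SARP idea), `lam` would be an array updated per local window rather than a scalar.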
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form D(Ku) + αR(u) → min_u, where the functional D is a data fidelity term, depending on some input data f and measuring the deviation of Ku from f, and R is a regularization functional. Moreover, K is an (often linear) forward operator modeling the dependence of the data on an underlying image u, and α is a positive regularization parameter. While D is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals like the total variation and generalizations thereof, or ℓ1-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as computational studies comparing different methods and illustrating their success in applications.

Comment: 60 pages, 33 figures
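The two-term structure of these problems — a smooth fidelity plus a nonsmooth regularizer — is exactly what proximal splitting exploits: a gradient step on the smooth part, then the proximal map of the nonsmooth part, optionally with Nesterov extrapolation. A minimal sketch for the common instance min_u ½‖Ku − f‖² + α‖u‖₁ (the function name and parameters are ours, not the survey's):

```python
import numpy as np

def fista_l1(K, f, alpha, iters=100):
    """FISTA sketch for min_u 0.5*||K u - f||^2 + alpha*||u||_1."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the smooth gradient
    u = np.zeros(K.shape[1])
    y, t = u.copy(), 1.0
    for _ in range(iters):
        g = K.T @ (K @ y - f)              # gradient of the fidelity at the extrapolated point
        z = y - g / L
        # proximal map of (alpha/L)*||.||_1: componentwise soft-thresholding
        u_next = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = u_next + ((t - 1.0) / t_next) * (u_next - u)   # Nesterov extrapolation
        u, t = u_next, t_next
    return u
```

The same two-step pattern underlies ADMM and primal-dual splittings; only the way the two terms are decoupled changes.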
Multiplicative Noise Removal: Nonlocal Low-Rank Model and Its Proximal Alternating Reweighted Minimization Algorithm
The goal of this paper is to develop a novel numerical method for efficient
multiplicative noise removal. The nonlocal self-similarity of natural images
implies that the matrices formed by their nonlocal similar patches are
low-rank. By exploiting this low-rank prior with application to multiplicative
noise removal, we propose a nonlocal low-rank model for this task and develop a
proximal alternating reweighted minimization (PARM) algorithm to solve the
optimization problem resulting from the model. Specifically, we utilize a
generalized nonconvex surrogate of the rank function to regularize the patch
matrices and develop a new nonlocal low-rank model, which is a nonconvex
nonsmooth optimization problem having a patchwise data fidelity and a
generalized nonlocal low-rank regularization term. To solve this optimization
problem, we propose the PARM algorithm, which has a proximal alternating scheme
with a reweighted approximation of its subproblem. A theoretical analysis of
the proposed PARM algorithm is conducted to guarantee its global convergence to
a critical point. Numerical experiments demonstrate that the proposed method
for multiplicative noise removal significantly outperforms existing methods
such as the benchmark SAR-BM3D method in terms of the visual quality of the
denoised images, and the PSNR (the peak-signal-to-noise ratio) and SSIM (the
structural similarity index measure) values
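The reweighting idea can be illustrated on a single patch matrix: singular values are shrunk with weights inversely proportional to their current estimates, so dominant structure is penalized less than noise. This is a hedged sketch of the reweighted low-rank step, not the paper's exact PARM update; the function name and the weight formula lam/(σ + eps) are assumptions:

```python
import numpy as np

def reweighted_svt(Y, lam, iters=3, eps=1e-3):
    """Reweighted singular value thresholding of a patch matrix Y.

    Approximates a nonconvex rank surrogate by iteratively refining the
    weights lam / (sigma + eps): large singular values (structure) get a
    small penalty, small ones (noise) a large penalty.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_est = s.copy()                          # initial singular value estimates
    for _ in range(iters):
        w = lam / (s_est + eps)               # smaller weight for larger sigma
        s_est = np.maximum(s - w, 0.0)        # weighted shrinkage of Y's spectrum
    return (U * s_est) @ Vt                   # low-rank reconstruction
```

In the full nonlocal model, Y would be the matrix of vectorized similar patches gathered for each reference patch.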
Accelerated algorithms for linearly constrained convex minimization
Thesis (Ph.D.) -- Seoul National University, Department of Mathematical Sciences, February 2014.

Linearly constrained convex minimization is used as a model for a variety of image processing problems. This dissertation introduces fast algorithms for solving such linearly constrained convex minimization problems. The proposed methods are all based on the extrapolation technique used in the accelerated proximal gradient method developed by Nesterov. Broadly, we propose two kinds of algorithms. The first is an accelerated Bregman method; applied to compressed sensing problems, it is confirmed to be faster than the original Bregman method. The second extends the accelerated augmented Lagrangian method. The augmented Lagrangian method involves an inner problem that in general cannot be solved exactly; we therefore give conditions under which the accelerated augmented Lagrangian method retains the convergence of the exact case even when the inner problem is solved only inexactly, to a suitable accuracy. We also develop analogous results for the accelerated alternating direction method of multipliers.

Abstract
1 Introduction
2 Previous Methods
2.1 Mathematical Preliminary
2.2 The algorithms for solving linearly constrained convex minimization
2.2.1 Augmented Lagrangian Method
2.2.2 Bregman Methods
2.2.3 Alternating direction method of multipliers
2.3 Accelerated algorithms for the unconstrained convex minimization problem
2.3.1 Fast inexact iterative shrinkage thresholding algorithm
2.3.2 Inexact accelerated proximal point method
3 Proposed Algorithms
3.1 Proposed Algorithm 1: Accelerated Bregman method
3.1.1 Equivalence to the accelerated augmented Lagrangian method
3.1.2 Complexity of the accelerated Bregman method
3.2 Proposed Algorithm 2: I-AALM
3.3 Proposed Algorithm 3: I-AADMM
3.4 Numerical Results
3.4.1 Comparison of the Bregman method with the accelerated Bregman method
3.4.2 Numerical results of the inexact accelerated augmented Lagrangian method using various subproblem solvers
3.4.3 Comparison of the inexact accelerated augmented Lagrangian method with other methods
3.4.4 Inexact accelerated alternating direction method of multipliers for multiplicative noise removal
4 Conclusion
Abstract (in Korean)
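The common ingredient of the proposed methods — Nesterov-type extrapolation inside an augmented Lagrangian loop — can be sketched on a toy problem whose inner minimization is exactly solvable. This is a generic illustration under our own assumptions (function name, toy objective ½‖u‖², parameters), not the thesis's I-AALM:

```python
import numpy as np

def accelerated_alm(A, b, rho=1.0, iters=300):
    """Augmented Lagrangian method with Nesterov-type extrapolation on the
    multiplier, for min 0.5*||u||^2 subject to A u = b."""
    m, n = A.shape
    y = np.zeros(m)                       # current multiplier
    y_hat = y.copy()                      # extrapolated multiplier
    t = 1.0
    H = np.eye(n) + rho * (A.T @ A)       # Hessian of the inner problem
    u = np.zeros(n)
    for _ in range(iters):
        # exact inner minimization of the augmented Lagrangian at y_hat
        u = np.linalg.solve(H, rho * (A.T @ b) - A.T @ y_hat)
        y_new = y_hat + rho * (A @ u - b)                     # multiplier ascent
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_hat = y_new + ((t - 1.0) / t_next) * (y_new - y)    # extrapolation
        y, t = y_new, t_next
    return u
```

The thesis's point is that this inner `solve` may be replaced by an inexact solver, provided its accuracy satisfies suitable conditions.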