A universal accelerated primal-dual method for convex optimization problems
This work presents a universal accelerated first-order primal-dual method for
affinely constrained convex optimization problems. It can handle both Lipschitz
and Hölder continuous gradients and does not need to know the smoothness level
of the objective function. In the line search, it uses dynamically decreasing
parameters and produces an approximate Lipschitz constant of moderate magnitude.
In addition, based on a suitable discrete Lyapunov function and tight decay
estimates for some differential/difference inequalities, a universal optimal
mixed-type convergence rate is established. Numerical tests are provided
to confirm the efficiency of the proposed method.
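The two line-search ingredients highlighted in the abstract, dynamically decreasing trial parameters and a running Lipschitz estimate of moderate magnitude, can be illustrated with a generic accelerated gradient sketch. This is not the paper's method; `accel_gradient_ls`, the shrink/grow factors, and the test problem are all illustrative assumptions:

```python
import numpy as np

def accel_gradient_ls(f, grad, x0, iters=500, L0=10.0, shrink=0.9, grow=2.0):
    # Generic accelerated gradient method with backtracking line search.
    # The Lipschitz estimate L is shrunk between iterations and grown
    # inside backtracking, so the working estimate stays moderate.
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    L = L0
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # Nesterov extrapolation
        g = grad(y)
        L *= shrink                                # dynamically decreasing trial value
        while True:
            x_new = y - g / L
            d = x_new - y
            # sufficient-decrease test against the quadratic upper model
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= grow                              # estimate was too small: grow it
        x_prev, x = x, x_new
    return x, L

# Toy problem: a smooth strongly convex quadratic with minimizer 0.
Q = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x_hat, L_hat = accel_gradient_ls(f, grad, np.array([3.0, -2.0]))
```

Because the trial estimate is shrunk each iteration, the accepted step size tracks the local curvature along the current direction rather than a global worst-case constant.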
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
We propose a new first-order primal-dual optimization framework for a convex
optimization template with broad applications. Our algorithms feature optimal
convergence guarantees under a variety of common structural assumptions on the
problem template. Our analysis relies on a novel combination of three classic
ideas applied to the primal-dual gap function: smoothing, acceleration, and
homotopy. The algorithms arising from the new approach achieve the best known
convergence rates, in particular when the template consists of only nonsmooth
functions. We also outline a restart strategy for the acceleration that
significantly enhances practical performance. We demonstrate relations with the
augmented Lagrangian method and show how to exploit strongly convex objectives
with rigorous convergence rate guarantees. We provide numerical evidence on two
examples and illustrate that the new methods can outperform the
state-of-the-art, including the Chambolle-Pock and alternating direction method
of multipliers algorithms.

Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech.
Report, Oct. 2015 (last update Sept. 2016).
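The three classic ideas named in the abstract can be seen on a toy problem. A minimal sketch, assuming a Huber smoothing of an l1 term and plain FISTA-style extrapolation with a shrinking smoothness parameter; the paper itself smooths the primal-dual gap function, and `smoothed_accel` and the stage schedule are illustrative assumptions:

```python
import numpy as np

def huber_grad(x, mu):
    # Gradient of the Huber smoothing of |.| with smoothness parameter mu.
    return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

def smoothed_accel(c, stages=(1.0, 0.1, 0.01, 0.001), iters=200):
    # Minimize ||x||_1 + 0.5 * ||x - c||^2 by (i) smoothing the l1 term,
    # (ii) running accelerated gradient descent on the smoothed objective,
    # (iii) shrinking mu between stages (homotopy), warm-starting each stage.
    x = np.zeros_like(c)
    for mu in stages:
        L = 1.0 / mu + 1.0                 # Lipschitz constant of the smoothed gradient
        y, x_prev = x.copy(), x.copy()
        for k in range(1, iters + 1):
            g = huber_grad(y, mu) + (y - c)
            x = y - g / L
            y = x + (k - 1) / (k + 2) * (x - x_prev)   # acceleration
            x_prev = x
    return x

# The exact minimizer is soft-thresholding of c at level 1: [1.0, 0.0, -0.5].
c = np.array([2.0, 0.3, -1.5])
x_hat = smoothed_accel(c)
```

The homotopy matters: solving directly with the smallest `mu` would give a badly conditioned problem (L ≈ 1000), whereas each warm-started stage only needs to correct a small residual.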
Accelerated algorithms for linearly constrained convex minimization
Thesis (Ph.D.) -- Seoul National University Graduate School, Dept. of
Mathematical Sciences, Feb. 2014. Advisor: Myungjoo Kang.
Linearly constrained convex optimization is used as a model for a variety of
image processing problems. This thesis introduces fast algorithms for solving
such linearly constrained convex optimization problems. The proposed methods
are all based on the acceleration technique that Nesterov developed for the
proximal gradient descent method. Broadly, we propose two algorithms. The
first is an accelerated Bregman method; applying it to compressed sensing
problems, we confirm that the accelerated method is faster than the original
Bregman method. The second extends the accelerated augmented Lagrangian
method. The augmented Lagrangian method involves an inner problem whose exact
solution generally cannot be computed, so we present conditions under which
the accelerated augmented Lagrangian method retains the convergence rate of
the exact version even when the inner problem is solved only inexactly to a
suitable accuracy. We develop similar results for the accelerated alternating
direction method of multipliers.

Abstract i
1 Introduction 1
2 Previous Methods 5
  2.1 Mathematical Preliminary 5
  2.2 Algorithms for solving the linearly constrained convex minimization 8
    2.2.1 Augmented Lagrangian Method 8
    2.2.2 Bregman Methods 9
    2.2.3 Alternating direction method of multipliers 13
  2.3 Accelerated algorithms for the unconstrained convex minimization problem 15
    2.3.1 Fast inexact iterative shrinkage-thresholding algorithm 16
    2.3.2 Inexact accelerated proximal point method 19
3 Proposed Algorithms 23
  3.1 Proposed Algorithm 1: Accelerated Bregman method 23
    3.1.1 Equivalence to the accelerated augmented Lagrangian method 24
    3.1.2 Complexity of the accelerated Bregman method 27
  3.2 Proposed Algorithm 2: I-AALM 35
  3.3 Proposed Algorithm 3: I-AADMM 43
  3.4 Numerical Results 54
    3.4.1 Comparison of the Bregman method with the accelerated Bregman method 54
    3.4.2 Numerical results of the inexact accelerated augmented Lagrangian method using various subproblem solvers 60
    3.4.3 Comparison of the inexact accelerated augmented Lagrangian method with other methods 63
    3.4.4 Inexact accelerated alternating direction method of multipliers for multiplicative noise removal 69
4 Conclusion 86
Abstract (in Korean) 94
- …
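The accelerated augmented Lagrangian idea from the thesis record above can be sketched on a problem whose inner minimization is solvable exactly (the thesis's contribution concerns the inexact case; `accel_alm`, the extrapolation rule, and the toy problem are illustrative assumptions):

```python
import numpy as np

def accel_alm(c, A, b, rho=1.0, iters=200):
    # Sketch of an accelerated augmented Lagrangian method for
    #   min 0.5 * ||x - c||^2  s.t.  A x = b.
    # The inner problem is a strongly convex quadratic, so it is solved
    # exactly in closed form; Nesterov-style extrapolation is applied
    # to the multiplier sequence.
    m, n = A.shape
    lam = np.zeros(m)
    lam_prev = np.zeros(m)
    H = np.eye(n) + rho * (A.T @ A)        # Hessian of the augmented Lagrangian in x
    for k in range(1, iters + 1):
        lam_hat = lam + (k - 1) / (k + 2) * (lam - lam_prev)   # extrapolated multiplier
        # Exact inner minimization of the augmented Lagrangian at lam_hat.
        x = np.linalg.solve(H, c - A.T @ lam_hat + rho * (A.T @ b))
        lam_prev = lam
        lam = lam_hat + rho * (A @ x - b)                      # multiplier ascent step
    return x, lam

# Toy problem: project c = (2, 0) onto {x1 + x2 = 1}.
# KKT gives x* = (1.5, -0.5) and multiplier lam* = 0.5.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 0.0])
x_hat, lam_hat = accel_alm(c, A, b)
```

In the thesis's setting the inner `argmin` has no closed form and is only solved approximately; its I-AALM result gives accuracy conditions on that inner solve under which the accelerated rate survives.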