229 research outputs found

    A universal accelerated primal-dual method for convex optimization problems

    This work presents a universal accelerated first-order primal-dual method for affinely constrained convex optimization problems. It can handle both Lipschitz and Hölder continuous gradients and does not need to know the smoothness level of the objective function. In the line search part, it uses dynamically decreasing parameters and produces approximate Lipschitz constants of moderate magnitude. In addition, based on a suitable discrete Lyapunov function and tight decay estimates for some differential/difference inequalities, a universal optimal mixed-type convergence rate is established. Numerical tests are provided to confirm the efficiency of the proposed method.
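    The line-search idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's exact rule: the function name and the shrink/grow factors are our own choices. It starts each iteration from a slightly decreased previous Lipschitz estimate (so the working constant can drift back down to moderate magnitude) and grows it until the usual descent inequality holds; since the test only checks the inequality at the trial point, the same loop adapts to both Lipschitz and Hölder continuous gradients.

```python
import numpy as np

def universal_backtracking_step(f, grad_f, x, L_prev, shrink=0.9, grow=2.0):
    """Hedged sketch of a 'universal' backtracking step (hypothetical
    names, not the paper's exact parameter rule)."""
    g = grad_f(x)
    L = shrink * L_prev                  # dynamically decreasing initial guess
    while True:
        y = x - g / L                    # trial gradient step
        # sufficient-decrease test: f(y) <= f(x) - ||g||^2 / (2L)
        if f(y) <= f(x) - (g @ g) / (2.0 * L):
            return y, L                  # accept step, keep local estimate
        L *= grow                        # estimate too optimistic; enlarge it
```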

    A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

    We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our optimization algorithms feature optimal convergence guarantees under a variety of common structure assumptions on the problem template. Our analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The algorithms arising from the new approach achieve the best known convergence rates, in particular when the template consists only of nonsmooth functions. We also outline a restart strategy for the acceleration that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence on two examples and illustrate that the new methods can outperform the state of the art, including the Chambolle-Pock and alternating direction method of multipliers algorithms.
    Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech. Report, Oct. 2015 (last update Sept. 2016).
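    To make the smoothing/acceleration/homotopy combination concrete, here is a minimal sketch on one specific instance, min_x ||Ax - b||_1: the nonsmooth loss is smoothed by a Huber function with parameter mu, Nesterov/FISTA acceleration runs on the smoothed objective, and mu is decreased along the iterations (homotopy). A simple function-value restart is included in the spirit of the restart strategy mentioned above. This is an illustration of the three ideas, not the authors' algorithm; the schedule mu_k = mu0/k and all names are our own assumptions.

```python
import numpy as np

def smoothed_accelerated_l1(A, b, iters=500, mu0=1.0, restart=True):
    """Sketch: smoothing + acceleration + homotopy for min ||Ax - b||_1."""
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    normA2 = np.linalg.norm(A, 2) ** 2          # spectral norm squared
    obj = lambda z: np.abs(A @ z - b).sum()     # true nonsmooth objective
    for k in range(1, iters + 1):
        mu = mu0 / k                            # homotopy: mu_k -> 0
        L = normA2 / mu                         # Lipschitz const. of smoothed grad
        r = A @ y - b
        grad = A.T @ np.clip(r / mu, -1, 1)     # gradient of Huber-smoothed loss
        x_new = y - grad / L                    # gradient step at extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov extrapolation
        if restart and obj(x_new) > obj(x):     # function-value restart
            y, t_new = x_new.copy(), 1.0
        x, t = x_new, t_new
    return x
```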

    Accelerated algorithms for linearly constrained convex minimization

    Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Mathematical Sciences, February 2014. Advisor: Myungjoo Kang.
    Linearly constrained mathematical optimization serves as a model for a variety of image processing problems. This thesis presents fast algorithms for solving such linearly constrained optimization problems. The proposed methods are all based on the extrapolation technique used in Nesterov's accelerated proximal gradient method. Broadly, two kinds of algorithms are proposed. The first is an accelerated Bregman method; applied to compressed sensing problems, it is confirmed to be faster than the original Bregman method. The second extends the accelerated augmented Lagrangian method: the augmented Lagrangian method involves an inner problem that in general cannot be solved exactly, so we give conditions under which the accelerated augmented Lagrangian method, with the inner problem solved inexactly to a suitable tolerance, retains the same convergence as when the inner problem is solved exactly. A similar result is developed for an accelerated alternating direction method of multipliers.
    Contents: 1 Introduction. 2 Previous Methods (2.1 Mathematical Preliminaries; 2.2 Algorithms for linearly constrained convex minimization: 2.2.1 Augmented Lagrangian Method, 2.2.2 Bregman Methods, 2.2.3 Alternating direction method of multipliers; 2.3 Accelerated algorithms for unconstrained convex minimization: 2.3.1 Fast inexact iterative shrinkage thresholding algorithm, 2.3.2 Inexact accelerated proximal point method). 3 Proposed Algorithms (3.1 Accelerated Bregman method: 3.1.1 Equivalence to the accelerated augmented Lagrangian method, 3.1.2 Complexity of the accelerated Bregman method; 3.2 I-AALM; 3.3 I-AADMM; 3.4 Numerical Results: 3.4.1 Comparison of the Bregman method with the accelerated Bregman method, 3.4.2 Inexact accelerated augmented Lagrangian method with various subproblem solvers, 3.4.3 Comparison of the inexact accelerated augmented Lagrangian method with other methods, 3.4.4 Inexact accelerated alternating direction method of multipliers for multiplicative noise removal). 4 Conclusion.
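    The inexactness idea in the thesis admits a short sketch. Below is a minimal inexact accelerated augmented Lagrangian loop for min f(x) s.t. Bx = c; it is an illustration, not the thesis's exact I-AALM. The `inner_solver(lam, rho, tol)` interface is hypothetical and is assumed to return an approximate minimizer of the augmented Lagrangian f(x) + <lam, Bx - c> + (rho/2)||Bx - c||^2 to accuracy tol. The multiplier is updated with Nesterov-style extrapolation, and the inner tolerance is tightened fast enough (here O(1/k^3), one summable choice) that the accelerated rate can survive the inexact inner solves, which is the kind of condition the thesis makes precise.

```python
import numpy as np

def inexact_accel_alm(inner_solver, B, c, iters=100, rho=1.0):
    """Sketch of an inexact accelerated augmented Lagrangian method."""
    lam = np.zeros(c.shape[0]); lam_hat = lam.copy(); t = 1.0
    for k in range(1, iters + 1):
        tol = 1.0 / k**3                     # summable inexactness schedule
        x = inner_solver(lam_hat, rho, tol)  # approximate inner minimization
        lam_new = lam_hat + rho * (B @ x - c)          # dual ascent step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        lam_hat = lam_new + ((t - 1) / t_new) * (lam_new - lam)  # extrapolate
        lam, t = lam_new, t_new
    return x, lam
```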
    • โ€ฆ