23 research outputs found

    Accelerated Linearized Bregman Method

    In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. The acceleration is based on the fact that the linearized Bregman (LB) algorithm is equivalent to gradient descent applied to a certain dual formulation. We show that the LB method requires $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, and that the ALB algorithm reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems demonstrate that the ALB method can be significantly faster than the LB method.
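    As a rough illustration of the LB iteration the abstract refers to (a minimal sketch, not the authors' code: the `shrink` helper, unit step size, and synthetic test problem are our own choices), the method alternates a dual gradient step with a soft-thresholding step:

    ```python
    import numpy as np

    def shrink(v, mu):
        """Soft-thresholding (shrinkage) operator."""
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu, iters=20000):
        """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b.

        Equivalent to gradient descent on a smooth dual problem; mu is the
        regularization parameter of the implicit augmented model
        mu*||x||_1 + 0.5*||x||_2^2.  A unit step size is safe when ||A||_2 <= 1.
        """
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            v += A.T @ (b - A @ x)   # dual gradient step on the residual
            x = shrink(v, mu)        # primal update via soft-thresholding
        return x

    # Small compressed-sensing demo with synthetic data.
    rng = np.random.default_rng(0)
    m, n, k = 60, 120, 5
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, 2)        # normalize the spectral norm to 1
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x0
    x_rec = linearized_bregman(A, b, mu=10 * np.abs(x0).max())
    print(np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))  # relative recovery error
    ```

    The choice `mu = 10 * max|x0|` follows the recovery threshold discussed in the "Augmented L1" abstract later in this listing.
    
    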

    A primal-dual flow for affine constrained convex optimization

    We introduce a novel primal-dual flow for the affine constrained convex optimization problem. As a modification of the standard saddle-point system, our primal-dual flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. A class of primal-dual methods for the original optimization problem is then obtained from numerical discretizations of the continuous flow, and nonergodic convergence rates are established via a unified discrete Lyapunov function. Among these algorithms we recover the (linearized) augmented Lagrangian method and the quadratic penalty method with a continuation technique. We also propose new methods whose inner problem is either a linear symmetric positive definite system or a nonlinear equation that can be solved efficiently via the semi-smooth Newton method. In particular, numerical tests on linearly constrained $\ell_1$-$\ell_2$ minimization show that our method outperforms the accelerated linearized Bregman method.
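    One of the discretizations mentioned above recovers the augmented Lagrangian method. A minimal sketch for a simple quadratic objective (our own toy setup, with the SPD inner system the abstract alludes to solved exactly):

    ```python
    import numpy as np

    def augmented_lagrangian(A, b, c, rho=1.0, iters=300):
        """Augmented Lagrangian method for min 0.5*||x - c||^2  s.t.  Ax = b.

        The inner problem here is a linear symmetric positive definite
        system, mirroring the 'special inner problem' structure in the
        abstract; lam is the dual (multiplier) variable.
        """
        m, n = A.shape
        lam = np.zeros(m)
        H = np.eye(n) + rho * A.T @ A              # SPD system matrix
        for _ in range(iters):
            x = np.linalg.solve(H, c - A.T @ lam + rho * A.T @ b)
            lam += rho * (A @ x - b)               # dual ascent step
        return x, lam

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 8))
    b = rng.standard_normal(3)
    c = rng.standard_normal(8)
    x, lam = augmented_lagrangian(A, b, c)
    print(np.linalg.norm(A @ x - b))               # primal feasibility residual
    ```

    For a strongly convex quadratic the multipliers converge linearly, so the feasibility residual is driven to machine-level accuracy after a few hundred iterations.
    
    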

    Gradient methods for convex minimization: better rates under weaker conditions

    The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly accepted ones. We relax the usual gradient Lipschitz-continuity and strong convexity conditions to ones that hold only over certain line segments. Specifically, we establish complexities $O(\frac{R}{\epsilon})$ and $O(\sqrt{\frac{R}{\epsilon}})$ for the ordinary and accelerated gradient methods, respectively, assuming that $\nabla f$ is Lipschitz continuous with constant $R$ over the line segment joining $x$ and $x-\frac{1}{R}\nabla f(x)$ for each $x\in \mathrm{dom}\, f$. We then improve these to $O(\frac{R}{\nu}\log(\frac{1}{\epsilon}))$ and $O(\sqrt{\frac{R}{\nu}}\log(\frac{1}{\epsilon}))$ for functions $f$ that also satisfy the secant inequality $\langle \nabla f(x), x-x^*\rangle \ge \nu\|x-x^*\|^2$ for each $x\in \mathrm{dom}\, f$ and its projection $x^*$ onto the minimizer set of $f$. The secant condition is also shown to be necessary for the geometric decay of the solution error. Not only are the relaxed conditions met by more functions; the restrictions also give smaller $R$ and larger $\nu$ than their unrestricted counterparts, and thus lead to better complexity bounds. We apply these results to sparse optimization and demonstrate a faster algorithm.
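    For intuition about the secant inequality above: a strongly convex quadratic $f(x)=\frac{1}{2}x^TQx$ satisfies it with $\nu = \lambda_{\min}(Q)$, since $\langle Qx, x\rangle \ge \lambda_{\min}\|x\|^2$. A quick numerical check (our own construction, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    M = rng.standard_normal((n, n))
    Q = M.T @ M + np.eye(n)          # symmetric positive definite Hessian
    x_star = np.zeros(n)             # minimizer of f(x) = 0.5 * x^T Q x
    nu = np.linalg.eigvalsh(Q).min() # secant constant for this quadratic

    # Verify <grad f(x), x - x*> >= nu * ||x - x*||^2 on random points.
    for _ in range(100):
        x = rng.standard_normal(n)
        lhs = (Q @ x) @ (x - x_star)            # grad f(x) = Q x
        rhs = nu * np.linalg.norm(x - x_star) ** 2
        assert lhs >= rhs - 1e-9
    print("secant inequality verified on 100 random points")
    ```

    The point of the paper is that $\nu$ restricted this way can exceed the global strong-convexity modulus, giving a smaller $R/\nu$ ratio and hence a better rate.
    
    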

    Accelerated algorithms for linearly constrained convex minimization

    Ph.D. dissertation -- Seoul National University Graduate School: Department of Mathematical Sciences, February 2014. Myungjoo Kang.

    Linearly constrained optimization is used as a model for a variety of image processing problems. This dissertation introduces fast algorithms for solving such linearly constrained optimization problems. The proposed methods are all based on the extrapolation technique used in the accelerated proximal gradient method developed by Nesterov. Broadly, two algorithms are proposed. The first is an accelerated Bregman method; applied to compressed sensing problems, it is confirmed to be faster than the original Bregman method. The second extends the accelerated augmented Lagrangian method: the augmented Lagrangian method involves an inner problem that generally cannot be solved exactly, so conditions are presented under which the accelerated augmented Lagrangian method retains the same convergence as the exact version even when the inner problem is solved only inexactly, to a suitable tolerance. Similar results are also developed for an accelerated alternating direction method of multipliers.

    Contents: 1 Introduction. 2 Previous Methods (mathematical preliminaries; algorithms for linearly constrained convex minimization: augmented Lagrangian method, Bregman methods, alternating direction method of multipliers; accelerating algorithms for unconstrained convex minimization: fast inexact iterative shrinkage thresholding algorithm, inexact accelerated proximal point method). 3 Proposed Algorithms (Algorithm 1: accelerated Bregman method, its equivalence to the accelerated augmented Lagrangian method, and its complexity; Algorithm 2: I-AALM; Algorithm 3: I-AADMM; numerical results, including a comparison of the Bregman and accelerated Bregman methods, results for the inexact accelerated augmented Lagrangian method with various subproblem solvers, comparisons with other methods, and inexact accelerated ADMM for multiplicative noise removal). 4 Conclusion.
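    The Nesterov extrapolation step that underlies all of the thesis's accelerated methods can be sketched in its best-known form, the accelerated proximal gradient (FISTA) iteration for a lasso problem (a hedged illustration; the toy problem and parameter choices are our own):

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding, the proximal operator of t*||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista(A, b, lam, iters=500):
        """Accelerated proximal gradient for min 0.5*||Ax-b||^2 + lam*||x||_1.

        The extrapolation (momentum) step below is the acceleration
        technique that the thesis transfers to Bregman and augmented
        Lagrangian methods.
        """
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = y = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(iters):
            x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # proximal gradient step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + (t - 1) / t_new * (x_new - x)          # Nesterov extrapolation
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(3)
    A = rng.standard_normal((30, 60))
    b = rng.standard_normal(30)
    x = fista(A, b, lam=0.5)
    obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.5 * np.abs(x).sum()
    print(obj)   # final lasso objective value
    ```

    The extrapolation improves the sublinear rate from $O(1/k)$ to $O(1/k^2)$, which is the same mechanism behind the accelerated Bregman and I-AALM methods above.
    
    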

    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective in the context of sparse optimization: in particular, the minimization of $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that these models can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$, under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ almost whenever $\alpha\ge 10\|x^0\|_\infty$. The same relation also holds between minimizing $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha\ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
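    The reason the augmented model admits a linearly convergent first-order method is that its Lagrange dual is smooth: working out the per-coordinate minimization gives the dual objective $g(y) = b^Ty - \frac{\alpha}{2}\|\mathrm{shrink}(A^Ty,1)\|_2^2$ with gradient $\nabla g(y) = b - \alpha A\,\mathrm{shrink}(A^Ty,1)$. A finite-difference sanity check of this derivation (our own toy data and function names, not the paper's code):

    ```python
    import numpy as np

    def shrink(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def dual_obj(y, A, b, alpha):
        """Dual of min ||x||_1 + 1/(2*alpha)*||x||_2^2  s.t.  Ax = b.

        The strongly convex primal makes this dual differentiable, which is
        what gives linearized Bregman its gradient-descent interpretation.
        """
        return b @ y - 0.5 * alpha * np.linalg.norm(shrink(A.T @ y, 1.0)) ** 2

    def dual_grad(y, A, b, alpha):
        return b - alpha * A @ shrink(A.T @ y, 1.0)

    # Central-difference check of the gradient formula on toy data.
    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 12))
    b = rng.standard_normal(5)
    alpha, y = 2.0, rng.standard_normal(5)
    g = dual_grad(y, A, b, alpha)
    eps = 1e-6
    g_fd = np.array([(dual_obj(y + eps * e, A, b, alpha)
                      - dual_obj(y - eps * e, A, b, alpha)) / (2 * eps)
                     for e in np.eye(5)])
    print(np.abs(g - g_fd).max())   # agreement up to finite-difference error
    ```
    
    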