Accelerated Linearized Bregman Method
In this paper, we propose and analyze an accelerated linearized Bregman (ALB)
method for solving the basis pursuit and related sparse optimization problems.
This accelerated algorithm is based on the fact that the linearized Bregman
(LB) algorithm is equivalent to a gradient descent method applied to a certain
dual formulation. We show that the LB method requires $O(1/\epsilon)$
iterations to obtain an $\epsilon$-optimal solution and the ALB algorithm
reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring
almost the same computational effort on each iteration. Numerical results on
compressed sensing and matrix completion problems are presented that
demonstrate that the ALB method can be significantly faster than the LB method.
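For concreteness, the following is a minimal numerical sketch, not the paper's exact algorithm or parameter choices, of a linearized Bregman iteration for basis pursuit, $\min \|x\|_1$ subject to $Ax = b$, together with a Nesterov-style extrapolated variant of the same dual gradient step. The shrinkage parameter mu, the step size, and the test data below are illustrative assumptions.

# Hedged sketch: linearized Bregman (LB) and an accelerated variant (ALB-like).
import numpy as np

def shrink(v, mu):
    # soft-thresholding operator
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def lb(A, b, mu=5.0, iters=500):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size for the dual gradient ascent
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(v, mu)                      # primal variable recovered from dual
        v = v + tau * (A.T @ (b - A @ x))      # gradient step on the dual formulation
    return shrink(v, mu)

def alb(A, b, mu=5.0, iters=500):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    v = v_old = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v_bar = v + ((t - 1.0) / t_next) * (v - v_old)   # Nesterov-style extrapolation
        x = shrink(v_bar, mu)
        v_old, v = v, v_bar + tau * (A.T @ (b - A @ x))
        t = t_next
    return shrink(v, mu)

# Tiny compressed-sensing example: sparse x0, Gaussian A.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x0 = np.zeros(200); x0[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x0
print("LB error: ", np.linalg.norm(lb(A, b) - x0))
print("ALB error:", np.linalg.norm(alb(A, b) - x0))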
A primal-dual flow for affine constrained convex optimization
We introduce a novel primal-dual flow for the affine constrained convex
optimization problem. As a modification of the standard saddle-point system,
our primal-dual flow is proved to possess an exponential decay property, in
terms of a tailored Lyapunov function. Then a class of primal-dual methods for
the original optimization problem are obtained from numerical discretizations
of the continuous flow, and with a unified discrete Lyapunov function,
nonergodic convergence rates are established. Among those algorithms, we can
recover the (linearized) augmented Lagrangian method and the quadratic penalty
method with a continuation technique. New methods are also obtained whose
inner problem, either a linear symmetric positive definite system or a
nonlinear equation, may be solved efficiently via the semi-smooth Newton
method. In particular, numerical tests on linearly constrained $\ell_1$
minimization show that our method outperforms the accelerated linearized
Bregman method.
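For reference, the unmodified saddle-point dynamics that the flow above modifies can be written as follows for $\min_x f(x)$ subject to $Ax = b$; the paper's specific modification and its tailored Lyapunov function are not reproduced here.

$$\dot{x}(t) = -\nabla_x L(x,\lambda) = -\nabla f(x) - A^{\top}\lambda,\qquad \dot{\lambda}(t) = \nabla_{\lambda} L(x,\lambda) = Ax - b,\qquad L(x,\lambda) = f(x) + \langle \lambda,\, Ax - b\rangle.$$

Discretizing such a flow in time, explicitly, implicitly, or in a mixed fashion, yields the family of primal-dual iterations referred to above.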
Recommended from our members
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis, and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of Compressed Sensing, Matrix Completion (MC), and Robust Principal Component Analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data that have become available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size.
This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
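As an illustration of the tensor extension mentioned above, a commonly used convex surrogate replaces the matrix nuclear norm by a sum of nuclear norms of the mode-$n$ unfoldings, with a quadratic term making the program strongly convex. The following sketch indicates the type of model meant and is not necessarily the thesis's exact formulation:

$$\min_{\mathcal{X}} \ \sum_{n=1}^{N} \lambda_n \,\big\|\mathcal{X}_{(n)}\big\|_* + \frac{\mu}{2}\,\|\mathcal{X}\|_F^2 \quad \text{subject to} \quad \mathcal{A}(\mathcal{X}) = b,$$

where $\mathcal{X}_{(n)}$ is the mode-$n$ unfolding of the tensor $\mathcal{X}$, $\mathcal{A}$ is the linear observation operator, and $\mu > 0$ provides the strong convexity.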
Gradient methods for convex minimization: better rates under weaker conditions
The convergence behavior of gradient methods for minimizing convex
differentiable functions is one of the core questions in convex optimization.
This paper shows that their well-known complexities can be achieved under
conditions weaker than the commonly accepted ones. We relax the common gradient
Lipschitz-continuity condition and strong convexity condition to ones that hold
only over certain line segments. Specifically, we establish complexities
$O(R/\epsilon)$ and $O(\sqrt{R/\epsilon})$ for the ordinary and
accelerated gradient methods, respectively, assuming that $\nabla f$ is
Lipschitz continuous with constant $R$ over the line segment joining $x$ and
$x - \nabla f(x)/R$ for each $x \in \mathrm{dom}\, f$. Then we improve them to
$O((R/\nu)\log(1/\epsilon))$ and $O(\sqrt{R/\nu}\,\log(1/\epsilon))$
for functions $f$ that also
satisfy the secant inequality
$\langle \nabla f(x),\, x - x_{\mathrm{proj}}\rangle \ge \nu \|x - x_{\mathrm{proj}}\|^2$
for each $x \in \mathrm{dom}\, f$ and its projection $x_{\mathrm{proj}}$ to the minimizer set of $f$.
The secant condition is also shown to be necessary for the geometric decay of
solution error. Not only are the relaxed conditions met by more functions, the
restrictions give smaller $R$ and larger $\nu$ than they are without the
restrictions and thus lead to better complexity bounds. We apply these results
to sparse optimization and demonstrate a faster algorithm.
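To see why the secant inequality is a genuine relaxation of strong convexity, note the following short argument, using only standard facts. If $f$ is $\nu$-strongly convex with minimizer $x^\star$, monotonicity of $\nabla f$ gives

$$\langle \nabla f(x) - \nabla f(x^\star),\, x - x^\star \rangle \ \ge\ \nu\,\|x - x^\star\|^2,$$

and $\nabla f(x^\star) = 0$ reduces this to the secant inequality $\langle \nabla f(x),\, x - x^\star\rangle \ge \nu\|x - x^\star\|^2$. The converse fails: for example, $f(x) = \max(|x| - 1, 0)^2$ on $\mathbb{R}$ satisfies the secant inequality with $\nu = 2$ relative to its minimizer set $[-1, 1]$ but is not strongly convex.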
Accelerated algorithms for linearly constrained convex minimization
Ph.D. dissertation, Graduate School of Seoul National University, Department of Mathematical Sciences, February 2014. Myungjoo Kang.
Linearly constrained mathematical optimization is used as a model for a variety of image processing problems. This thesis presents fast algorithms for solving such linearly constrained optimization problems. The proposed methods are all based on the extrapolation technique used in the accelerated proximal gradient method developed by Nesterov. Broadly speaking, two algorithms are proposed. The first is an accelerated Bregman method; applied to compressed sensing problems, it is shown to be faster than the original Bregman method. The second extends the accelerated augmented Lagrangian method. The augmented Lagrangian method involves an inner problem that in general cannot be solved exactly, so we present conditions under which the accelerated augmented Lagrangian method, with its inner problem solved inexactly to a suitable accuracy, retains the same convergence as when the inner problem is solved exactly. We develop a similar result for the accelerated alternating direction method of multipliers.
Abstract
1 Introduction
2 Previous Methods
2.1 Mathematical Preliminary
2.2 The algorithms for solving the linearly constrained convex minimization
2.2.1 Augmented Lagrangian Method
2.2.2 Bregman Methods
2.2.3 Alternating direction method of multipliers
2.3 The accelerating algorithms for unconstrained convex minimization problem
2.3.1 Fast inexact iterative shrinkage thresholding algorithm
2.3.2 Inexact accelerated proximal point method
3 Proposed Algorithms
3.1 Proposed Algorithm 1: Accelerated Bregman method
3.1.1 Equivalence to the accelerated augmented Lagrangian method
3.1.2 Complexity of the accelerated Bregman method
3.2 Proposed Algorithm 2: I-AALM
3.3 Proposed Algorithm 3: I-AADMM
3.4 Numerical Results
3.4.1 Comparison to Bregman method with accelerated Bregman method
3.4.2 Numerical results of inexact accelerated augmented Lagrangian method using various subproblem solvers
3.4.3 Comparison to the inexact accelerated augmented Lagrangian method with other methods
3.4.4 Inexact accelerated alternating direction method of multipliers for Multiplicative Noise Removal
4 Conclusion
Abstract (in Korean)
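The following is a schematic sketch of an accelerated augmented Lagrangian method with inexact inner solves, in the spirit of the I-AALM summarized above, applied to the linearly constrained $\ell_1$ problem $\min \|x\|_1$ subject to $Ax = b$. A fixed, small number of inner ISTA steps stands in for the thesis's precise inexactness criteria, which are not reproduced here; all parameters and data are illustrative.

# Schematic inexact accelerated augmented Lagrangian sketch (not the thesis's exact scheme).
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_aalm(A, b, beta=1.0, outer=200, inner=10):
    m, n = A.shape
    L = beta * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of inner smooth part
    x = np.zeros(n)
    lam = lam_old = np.zeros(m)
    lam_hat = np.zeros(m)
    t = 1.0
    for _ in range(outer):
        # Inexact inner solve: a few ISTA steps on
        #   min_x ||x||_1 + (beta/2) || Ax - b + lam_hat/beta ||^2
        for _ in range(inner):
            grad = beta * (A.T @ (A @ x - b + lam_hat / beta))
            x = shrink(x - grad / L, 1.0 / L)
        # Multiplier update followed by Nesterov-style extrapolation
        lam_old, lam = lam, lam_hat + beta * (A @ x - b)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        lam_hat = lam + ((t - 1.0) / t_next) * (lam - lam_old)
        t = t_next
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 150))
x0 = np.zeros(150); x0[rng.choice(150, 6, replace=False)] = rng.standard_normal(6)
b = A @ x0
x = inexact_aalm(A, b)
print("constraint residual:", np.linalg.norm(A @ x - b))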
Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
This paper studies the long-existing idea of adding a nice smooth function to
"smooth" a non-differentiable objective function in the context of sparse
optimization, in particular, the minimization of
$\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the
minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and
$\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$,
respectively. We show that they can efficiently recover sparse vectors and
low-rank matrices. In particular, they enjoy exact and stable recovery
guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under
conditions on the sensing operator such as its null-space property,
restricted isometry property, spherical section property, or RIPless property.
To recover a (nearly) sparse vector $x^0$, minimizing
$\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing
$\|x\|_1$ almost whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation also
holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$
for recovering a (nearly) low-rank matrix $X^0$, if $\alpha \ge 10\|X^0\|_2$.
Furthermore, we show that the linearized Bregman algorithm for
minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global
linear convergence as long as a nonzero solution exists, and we give an
explicit rate of convergence. The convergence property does not require a
sparse solution or any properties on $A$. To our knowledge, this is the best
known global convergence result for first-order sparse optimization algorithms.
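A small numerical check of the exact-regularization statement above can be set up as follows, assuming the cvxpy modeling package is available; problem sizes and random data are arbitrary illustrative choices. With $\alpha \ge 10\|x^0\|_\infty$, the augmented model and plain $\ell_1$ minimization should return (nearly) the same solution.

# Hedged sketch: compare L1 minimization with its augmented (L1 + squared L2) variant.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 40, 120, 5
A = rng.standard_normal((m, n))
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0
alpha = 10 * np.abs(x0).max()          # threshold suggested by the recovery guarantee

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b]).solve()
x_l1 = x.value.copy()                  # plain basis-pursuit solution

cp.Problem(cp.Minimize(cp.norm(x, 1) + cp.sum_squares(x) / (2 * alpha)),
           [A @ x == b]).solve()
x_aug = x.value                        # augmented-model solution

print("||x_l1 - x_aug|| =", np.linalg.norm(x_l1 - x_aug))
print("||x_l1 - x0||    =", np.linalg.norm(x_l1 - x0))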