15 research outputs found

    A First-order Augmented Lagrangian Method for Compressed Sensing

    We propose a first-order augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution by inexactly solving a sequence of L1-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm in which each update reduces to "shrinkage" or constrained "shrinkage". We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are eps-feasible and eps-optimal for all eps > 0 within O(log(1/eps)) FAL iterations. Moreover, FAL requires at most O(1/eps) matrix-vector multiplications of the form Ax or A^T y to compute an eps-feasible, eps-optimal solution. We show that FAL can easily be extended to solve the basis pursuit denoising problem when there is a non-trivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art algorithms for both noisy and noiseless compressed sensing problems. A striking property observed in the numerical experiments with randomly generated noiseless instances is that, for moderately small error tolerances, FAL always correctly identifies the support of the target signal without any thresholding or post-processing.
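    The "shrinkage" step referred to above is soft-thresholding, the proximal map of the L1 norm. Below is a minimal NumPy sketch of one L1-regularized least-squares subproblem solved by a proximal-gradient loop of this kind. The plain ISTA-style iteration, the fixed iteration count, and the function names are illustrative stand-ins; FAL's infinite-memory variant and its parameter and multiplier updates are not reproduced here.

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding ("shrinkage"): proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_ls_subproblem(A, b, lam, x0, n_iter=200):
    """Inexactly minimize lam*||x||_1 + 0.5*||Ax - b||^2 by proximal gradient.
    Each update is a gradient step on the smooth term followed by shrinkage,
    costing one product with A and one with A^T per iteration."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = shrink(x - grad / L, lam / L)
    return x
```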

    OSGA: A fast subgradient algorithm with optimal complexity

    This paper presents an algorithm for approximately minimizing a convex function over a simple, not necessarily bounded, convex domain, assuming only that function values and subgradients are available. No global information about the objective function is needed apart from a strong convexity parameter, which can be set to zero if only convexity is known. The worst-case number of iterations needed to achieve a given accuracy is independent of the dimension (which may be infinite) and, apart from a constant factor, best possible under a variety of smoothness assumptions on the objective function.
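    The oracle model assumed above is a black box that returns only a function value and one subgradient per query. The sketch below illustrates that interface on a hypothetical piecewise-linear objective, driven by a plain normalized subgradient method; it is not the OSGA update itself, whose step-size control and optimal complexity guarantees are what the paper contributes.

```python
import numpy as np

# Hypothetical nonsmooth convex objective: f(x) = max_i (a_i^T x + b_i).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def oracle(x):
    """First-order oracle: returns f(x) and one subgradient of f at x."""
    vals = A @ x + b
    i = int(np.argmax(vals))
    return vals[i], A[i]

# Plain subgradient method, shown only to illustrate the oracle interface.
x = np.zeros(5)
for k in range(1, 501):
    fx, g = oracle(x)
    x = x - (1.0 / np.sqrt(k)) * g / (np.linalg.norm(g) + 1e-12)
```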

    Iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian method based on the classical Lagrangian function and a full Lagrange multiplier update

    This paper establishes the iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian (IAIPAL) method for solving linearly-constrained smooth nonconvex composite optimization problems that is based on the classical augmented Lagrangian (AL) function. More specifically, each IAIPAL iteration consists of inexactly solving a proximal AL subproblem by an accelerated composite gradient (ACG) method followed by a classical Lagrange multiplier update. Under the assumption that the domain of the composite function is bounded and the problem has a Slater point, it is shown that IAIPAL generates an approximate stationary solution in O(ρ^{-3}) ACG iterations (up to a logarithmic term), where ρ > 0 is the tolerance for both stationarity and feasibility. Moreover, the above bound is derived without assuming that the initial point is feasible. Finally, numerical results are presented to demonstrate the strong practical performance of IAIPAL.
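    As a rough illustration of the iteration pattern described above (inexact proximal AL subproblem solve followed by a full multiplier update), here is a minimal sketch for a smooth objective with linear equality constraints. Plain gradient descent stands in for the ACG inner solver, the nonsmooth composite term is omitted, and all parameter choices (c, rho, step) are hypothetical rather than those analyzed in the paper.

```python
import numpy as np

def prox_al_sketch(grad_f, A, b, x0, lam0, c=10.0, rho=1.0,
                   n_outer=50, n_inner=200, step=1e-2):
    """Sketch of a proximal augmented Lagrangian loop for: min f(x) s.t. Ax = b."""
    x, lam = x0.copy(), lam0.copy()
    for _ in range(n_outer):
        x_prev = x.copy()
        for _ in range(n_inner):
            # Gradient of the proximal AL:
            #   f(x) + lam^T (Ax - b) + (c/2)||Ax - b||^2 + (rho/2)||x - x_prev||^2
            g = grad_f(x) + A.T @ (lam + c * (A @ x - b)) + rho * (x - x_prev)
            x = x - step * g
        lam = lam + c * (A @ x - b)   # full (classical) Lagrange multiplier update
    return x, lam
```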

    A Computational Framework for Multivariate Convex Regression and Its Variants

    We study the nonparametric least squares estimator (LSE) of a multivariate convex regression function. The LSE, given as the solution to a quadratic program with O(n²) linear constraints (n being the sample size), is difficult to compute for large problems. Exploiting problem-specific structure, we propose a scalable algorithmic framework based on the augmented Lagrangian method to compute the LSE. We develop a novel approach to obtain smooth convex approximations to the fitted (piecewise affine) convex LSE and provide formal bounds on the quality of approximation. When the number of samples is not too large compared to the dimension of the predictor, we propose a regularization scheme, Lipschitz convex regression, in which we constrain the norm of the subgradients, and we study the rates of convergence of the obtained LSE. Our algorithmic framework is simple and flexible and can be easily adapted to handle variants: estimation of a nondecreasing/nonincreasing convex/concave (with or without a Lipschitz bound) function. We perform numerical studies illustrating the scalability of the proposed algorithm: on some instances our proposal leads to more than a 10,000-fold improvement in runtime when compared to off-the-shelf interior point solvers for problems with n = 500.
    Keywords: Augmented Lagrangian method; Lipschitz convex regression; Nonparametric least squares estimator; Scalable quadratic programming; Smooth convex regression.
    Funding: United States. Office of Naval Research (Grant N00014-15-1-2342).
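    To make the O(n²)-constraint structure concrete, the following sketch writes the convex-regression LSE as a quadratic program in CVXPY for a small synthetic data set: fitted values theta_i and subgradients xi_i must satisfy the pairwise supporting-hyperplane inequalities. It uses a generic off-the-shelf solver, not the scalable augmented Lagrangian framework the paper develops; the data and problem size are illustrative.

```python
import cvxpy as cp
import numpy as np

# Small synthetic problem (n kept small since a generic solver is used).
n, d = 50, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(n)   # noisy convex truth

theta = cp.Variable(n)        # fitted values
xi = cp.Variable((n, d))      # subgradients at each design point
# Supporting-hyperplane constraints: theta_j >= theta_i + xi_i^T (x_j - x_i)
# for all i != j, giving O(n^2) linear constraints.
constraints = [theta[j] >= theta[i] + (X[j] - X[i]) @ xi[i]
               for i in range(n) for j in range(n) if i != j]
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
prob.solve()
```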