    A variational approach to stable principal component pursuit

    We introduce a new convex formulation for stable principal component pursuit (SPCP) to decompose noisy signals into low-rank and sparse representations. For numerical solutions of our SPCP formulation, we first develop a convex variational framework and then accelerate it with quasi-Newton methods. We show, via synthetic and real data experiments, that our approach offers advantages over the classical SPCP formulations in scalability and practical parameter selection. Comment: 10 pages, 5 figures.
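
    The abstract does not state the formulation explicitly, so as a rough point of reference, here is a minimal proximal-gradient sketch for a standard penalized SPCP surrogate, min over (L, S) of ||L||_* + lam*||S||_1 + (mu/2)*||Y - L - S||_F^2. This is not the authors' variational formulation or their quasi-Newton solver, and the function names and parameter values are illustrative.

        import numpy as np

        def soft_threshold(x, tau):
            """Entrywise soft-thresholding: the prox of tau*||.||_1."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def svd_threshold(x, tau):
            """Singular-value soft-thresholding: the prox of tau*||.||_*."""
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            return u @ np.diag(soft_threshold(s, tau)) @ vt

        def spcp_prox_grad(y, lam=0.1, mu=10.0, iters=200):
            """Proximal gradient for min ||L||_* + lam*||S||_1 + (mu/2)*||Y-L-S||_F^2."""
            l, s = np.zeros_like(y), np.zeros_like(y)
            step = 1.0 / (2.0 * mu)           # 2*mu bounds the gradient's Lipschitz constant
            for _ in range(iters):
                r = mu * (l + s - y)          # gradient of the coupling term in both blocks
                l = svd_threshold(l - step * r, step)
                s = soft_threshold(s - step * r, step * lam)
            return l, s

    On synthetic data one would form y as a low-rank matrix plus a sparse matrix plus noise, and check that the returned pair recovers the two components.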

    A penalty method for PDE-constrained optimization in inverse problems

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal, then, is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the non-linearity of the problem and is less sensitive to the initial iterate.
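
    As a sketch of the general idea only (the operator, data, and penalty weight below are toy assumptions, not the paper's setup), note that for a linear forward problem A(m)u = q the quadratic-penalty objective is linear least squares in the state u for fixed m, so u can be eliminated in closed form and the reduced objective handed to any unconstrained solver:

        import numpy as np
        from scipy.optimize import minimize

        # Toy "PDE": A(m) = K + diag(m), with K a 1-D Dirichlet Laplacian.
        n = 30
        K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        P = np.eye(n)[::3]                            # observe every third entry
        q = np.ones(n)
        m_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
        d = P @ np.linalg.solve(K + np.diag(m_true), q)
        rho = 1e2                                     # penalty parameter

        def phi(m):
            """Reduced penalty objective: eliminate the state u in closed form."""
            A = K + np.diag(m)
            # u minimizes ||P u - d||^2 + rho * ||A u - q||^2 (normal equations).
            u = np.linalg.solve(P.T @ P + rho * A.T @ A, P.T @ d + rho * A.T @ q)
            return np.sum((P @ u - d) ** 2) + rho * np.sum((A @ u - q) ** 2)

        m_est = minimize(phi, np.ones(n), method="L-BFGS-B").x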

    Multiplier-continuation algorithms for constrained optimization

    Several path-following algorithms are described that combine three smooth penalty functions (the quadratic penalty for equality constraints, and the quadratic loss and log barrier for inequality constraints), their modern counterparts (augmented Lagrangian or multiplier methods), sequential quadratic programming, and predictor-corrector continuation. In the first phase of this methodology, one minimizes the unconstrained or linearly constrained penalty function or augmented Lagrangian. A homotopy path generated from these functions is then followed to optimality using efficient predictor-corrector continuation methods. The continuation steps are asymptotic to those taken by sequential quadratic programming, which can be used in the final steps. Numerical test results show the method to be efficient, robust, and a competitive alternative to sequential quadratic programming.
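
    A stripped-down sketch of the quadratic-penalty strand of this methodology (omitting the log barrier, the multiplier updates, and the SQP endgame): follow the penalty homotopy as the parameter mu decreases, warm-starting each corrector solve from the previous point. The test problem and schedule are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
        f = lambda x: x[0] ** 2 + x[1] ** 2
        c = lambda x: x[0] + x[1] - 1.0

        def penalty_path(x0, mus=(1.0, 1e-1, 1e-2, 1e-3, 1e-4)):
            """Follow the quadratic-penalty homotopy x(mu) as mu -> 0,
            warm-starting each solve (a crude corrector without a predictor step)."""
            x = np.asarray(x0, dtype=float)
            for mu in mus:
                obj = lambda x, mu=mu: f(x) + c(x) ** 2 / (2.0 * mu)
                x = minimize(obj, x, method="BFGS").x
            return x

        print(penalty_path([0.0, 0.0]))   # approaches the solution [0.5, 0.5]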

    Lagrange optimality system for a class of nonsmooth convex optimization

    In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with the problems, and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms. We apply a linear Newton method to the Lagrange optimality system to obtain a novel algorithm applicable to a variety of nonsmooth convex optimization problems arising in practical applications. Under suitable conditions, we prove the nonsingularity of the Newton system and the local convergence of the algorithm. Comment: 19 pages.
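
    The paper's optimality system is not reproduced in the abstract; as an analogous, much simpler illustration of a linear (semismooth) Newton method applied to a nonsmooth optimality condition, the sketch below treats l1-regularized least squares. The fixed-point equation, the step size, and the warm start are assumptions for the example, not the authors' construction.

        import numpy as np

        def soft(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def newton_l1(A, b, alpha, iters=50):
            """Linear Newton on F(x) = x - soft(x - t*A'(Ax - b), t*alpha) = 0,
            whose roots minimize (1/2)||Ax - b||^2 + alpha*||x||_1."""
            n = A.shape[1]
            t = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(n)
            for _ in range(5):                 # proximal-gradient warm start
                x = soft(x - t * A.T @ (A @ x - b), t * alpha)
            for _ in range(iters):
                z = x - t * A.T @ (A @ x - b)
                F = x - soft(z, t * alpha)
                if np.linalg.norm(F) < 1e-10:
                    break
                d = (np.abs(z) > t * alpha).astype(float)   # generalized derivative of soft
                J = np.eye(n) - d[:, None] * (np.eye(n) - t * A.T @ A)
                x = x - np.linalg.solve(J, F)  # assumes J nonsingular, as in a local analysis
            return x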

    Efficient SDP Inference for Fully-connected CRFs Based on Low-rank Decomposition

    Conditional Random Fields (CRFs) have been widely used in a variety of computer vision tasks. Conventional CRFs typically define edges between neighboring image pixels, resulting in a sparse graph on which efficient inference can be performed. However, these CRFs fail to model long-range contextual relationships. Fully-connected CRFs have thus been proposed. While there are efficient approximate inference methods for such CRFs, they are usually sensitive to initialization and make strong assumptions. In this work, we develop an efficient, yet general, algorithm for inference on fully-connected CRFs. The algorithm is based on a scalable SDP algorithm and a low-rank approximation of the similarity/kernel matrix. The core of the proposed algorithm is a tailored quasi-Newton method that takes advantage of the low-rank matrix approximation when solving the specialized SDP dual problem. Experiments demonstrate that our method can be applied to fully-connected CRFs that could not be solved previously, such as pixel-level image co-segmentation. Comment: 15 pages. A conference version of this work appears in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 201
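
    The paper's tailored quasi-Newton SDP solver is beyond a short sketch, but its low-rank ingredient can be illustrated with a standard Nystrom factorization of a Gaussian similarity matrix; the kernel choice, landmark sampling, and names below are assumptions, not the authors' construction.

        import numpy as np

        def nystrom_gaussian(X, k, gamma=1.0, seed=0):
            """Rank-k Nystrom factor V with K ~= V @ V.T (up to an orthogonal factor)
            for the Gaussian kernel K_ij = exp(-gamma * ||x_i - x_j||^2). Storing the
            n-by-k factor instead of the dense n-by-n kernel is what makes
            fully-connected models tractable."""
            rng = np.random.default_rng(seed)
            idx = rng.choice(X.shape[0], size=k, replace=False)   # landmark points
            d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)
            C = np.exp(-gamma * d2)              # n-by-k cross-kernel block
            W = C[idx]                           # k-by-k landmark block
            s, U = np.linalg.eigh(W)
            s = np.maximum(s, 1e-12)             # guard tiny or negative eigenvalues
            return (C @ U) / np.sqrt(s)          # V = C @ W^{-1/2}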

    A second derivative SQP method: theoretical issues

    Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent-constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
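
    As a rough illustration of the convexification idea only (not the paper's descent-constraint strategy, and without any globalization such as a line search or trust region), one can shift the exact Hessian's spectrum so that each equality-constrained QP subproblem is convex and solvable through its KKT system:

        import numpy as np

        def convexified_sqp(x, grad, hess, c, jac, iters=20):
            """SQP for min f(x) s.t. c(x) = 0, where each QP Hessian is made
            convex by an eigenvalue shift before solving the KKT system."""
            for _ in range(iters):
                g, J, H = grad(x), jac(x), hess(x)
                w = np.linalg.eigvalsh(H)[0]             # smallest eigenvalue
                if w < 1e-6:
                    H = H + (1e-6 - w) * np.eye(len(x))  # convexify the QP
                n, m = len(x), J.shape[0]
                KKT = np.block([[H, J.T], [J, np.zeros((m, m))]])
                sol = np.linalg.solve(KKT, -np.concatenate([g, c(x)]))
                x = x + sol[:n]                          # step; sol[n:] is the multiplier
            return x

        # Example with an indefinite Hessian: min x0^2 - x1^2 s.t. x1 = 0.
        x = convexified_sqp(
            np.array([1.0, 1.0]),
            grad=lambda x: np.array([2 * x[0], -2 * x[1]]),
            hess=lambda x: np.diag([2.0, -2.0]),
            c=lambda x: np.array([x[1]]),
            jac=lambda x: np.array([[0.0, 1.0]]),
        )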