
    A Primal-Dual Augmented Lagrangian

    Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
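As a concrete illustration, the joint minimization over primal and dual variables can be sketched on a toy equality-constrained problem. The sketch below uses the Gill-Robinson form of the primal-dual augmented Lagrangian (with unit regularization weight); the toy problem, the penalty parameter, and the outer update loop are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: x[0] + x[1] - 1.0          # single equality constraint

mu = 0.1          # penalty parameter
lam_e = 0.0       # multiplier estimate, refreshed in the outer loop

def M(z):
    """Primal-dual augmented Lagrangian (Gill-Robinson form, nu = 1),
    minimized jointly in the primal x and the dual lambda."""
    x, lam = z[:2], z[2]
    return (f(x) - c(x) * lam_e
            + c(x)**2 / (2 * mu)
            + (c(x) + mu * (lam - lam_e))**2 / (2 * mu))

z = np.zeros(3)                          # (x0, x1, lambda)
for _ in range(10):                      # outer multiplier updates
    z = minimize(M, z).x                 # joint minimization in (x, lambda)
    lam_e = z[2]                         # accept the dual iterate

print(z)  # -> approximately [0.5, 0.5, 1.0]
```

Stationarity of the last penalty term in the dual variable reproduces the conventional multiplier update, so the dual iterate can be monitored during the subproblem solve, as the abstract describes.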

    A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Algorithm for Nonlinear Programming

    This thesis treats a new numerical solution method for large-scale nonlinear optimization problems. Nonlinear programs occur in a wide range of engineering and academic applications, such as discretized optimal control processes and parameter identification of physical systems. The most efficient and robust solution approaches for this problem class have been shown to be sequential quadratic programming and primal-dual interior-point methods. The proposed algorithm combines a variant of the latter with a special penalty function to increase its robustness, exploiting the automatic regularization of the nonlinear constraints induced by the penalty term. In detail, a modified barrier function and a primal-dual augmented Lagrangian approach with an exact ℓ2-penalty are used. Both share the property that, for certain Lagrangian multiplier estimates, the barrier and penalty parameters do not have to converge to zero or diverge, respectively. This improves the conditioning of the internal linear systems of equations near the optimal solution, handles rank deficiency of the constraint derivatives at all non-feasible iterates, and helps identify infeasible problem formulations. Although the resulting merit function is non-smooth, a certain step direction yields guaranteed descent. The algorithm includes an adaptive update strategy for the barrier and penalty parameters as well as for the Lagrangian multiplier estimates, based on a sensitivity analysis. Global convergence to a first-order optimal solution, a certificate of infeasibility, or a Fritz-John point is proven and is maintained by combining the merit function with a filter or a piecewise linear penalty function. Unlike the majority of filter methods, no separate feasibility restoration phase is required. For a fixed barrier parameter, the method has a quadratic order of convergence.
Furthermore, a sensitivity-based iterative refinement strategy is developed to approximate the optimal solution of a parameter-dependent nonlinear program under parameter changes. It exploits special sensitivity derivative approximations and converges locally, with a linear order of convergence, to a feasible point that also satisfies the perturbed complementarity condition of the modified barrier method. Active-set changes from active to inactive can thereby be handled. Due to a certain update of the Lagrangian multiplier estimate, the refinement is suitable for warmstarting the penalty-interior-point approach. A special focus of the thesis is the development of an algorithm with excellent practical performance. Details on an implementation of the proposed primal-dual penalty-interior-point algorithm in the nonlinear programming solver WORHP and a numerical study based on the CUTEst test collection are provided. The efficiency and robustness of the algorithm are further compared to state-of-the-art nonlinear programming solvers, in particular the interior-point solvers IPOPT and KNITRO as well as the sequential quadratic programming solvers SNOPT and WORHP.
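For intuition, the barrier subproblem underlying interior-point methods can be sketched on a one-dimensional example. Note the thesis uses a modified barrier whose parameter need not vanish; the classical log-barrier below, with a barrier parameter driven to zero, is meant only to show the subproblem structure, and the problem and schedule are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy problem: min x^2  subject to  x >= 1  (solution x* = 1)
mu, x = 1.0, 2.0
for _ in range(10):
    # unconstrained barrier subproblem for the current barrier parameter mu
    res = minimize_scalar(lambda t: t**2 - mu * np.log(t - 1.0),
                          bounds=(1.0 + 1e-12, 10.0), method="bounded")
    x = res.x
    mu /= 10.0            # classical drive-to-zero update
print(x)  # -> close to 1.0
```

As mu shrinks, the barrier minimizer approaches the constrained solution from the interior; the conditioning issues this causes for small mu are precisely what the modified barrier and penalty approach of the thesis is designed to avoid.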

    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance, and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization and applications in polynomial optimization. This is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
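A common building block in this setting is the Euclidean projection onto the cone itself; for the semidefinite cone it reduces to an eigenvalue decomposition. A minimal sketch (the function name and the 2x2 example are ours):

```python
import numpy as np

def proj_psd(A):
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    diagonalize and clip the negative eigenvalues to zero."""
    w, V = np.linalg.eigh((A + A.T) / 2)   # symmetrize, then decompose
    return (V * np.maximum(w, 0)) @ V.T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                 # eigenvalues 3 and -1
P = proj_psd(A)
print(np.linalg.eigvalsh(P))               # negative eigenvalue clipped to 0
```

Projecting onto the intersection of the cone with an affine subspace then typically alternates this step with the linear projection onto the subspace, for instance via Dykstra's algorithm, which is one of the routines such regularization methods call repeatedly.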

    Multi-task multi-modality SVM for early COVID-19 Diagnosis using chest CT data

    Since January 2020, Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website. Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre, including this research content, immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database, with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active.

    Full Waveform Inversion and Lagrange Multipliers

    Full-waveform inversion (FWI) is an effective method for imaging subsurface properties using sparsely recorded data. It involves solving a wave propagation problem to estimate model parameters that accurately reproduce the data. Recent trends in FWI have led to the development of extended methodologies, among which source extension methods leveraging reconstructed wavefields to solve penalty or augmented Lagrangian (AL) formulations have emerged as robust algorithms, even for inaccurate initial models. Despite their demonstrated robustness, challenges remain, such as the lack of a clear physical interpretation, difficulty of comparison, and reliance on difficult-to-compute least-squares (LS) wavefields. This paper is divided into two parts. In the first, a novel formulation of these methods is explored within a unified Lagrangian framework. This perspective permits the introduction of alternative algorithms that employ LS multipliers instead of wavefields. These multiplier-oriented variants appear as regularizations of the standard FWI, are adaptable to the time domain, offer tangible physical interpretations, and foster enhanced convergence efficiency. The second part of the paper delves into the underlying mechanisms of these techniques by solving the FWI equations with iterative linearization and inverse scattering methods. The paper provides insight into the role and significance of Lagrange multipliers in enhancing the linearization of the FWI equations. It explains how the different methods estimate the multipliers, or approximate them to increase computational efficiency. Additionally, it presents a new physical understanding of the Lagrange multiplier used in the AL method, highlighting its importance for algorithm performance compared to penalty methods.
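To make the penalty (extended-source) subproblem concrete: for a fixed model, the reconstructed wavefield is the least-squares solution that balances fitting the observed data against satisfying the wave equation. The sketch below replaces the wave operator with a small random stand-in matrix; the operator, the sizes, and the penalty weight are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical tiny "wave equation" A u = b and sampling operator P
rng = np.random.default_rng(0)
n, nobs = 6, 3
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in PDE operator
b = rng.standard_normal(n)                          # source term
P = np.eye(n)[:nobs]                                # samples first 3 entries
d = P @ np.linalg.solve(A, b)                       # consistent synthetic data

lam = 1e2                                           # penalty weight
# LS (extended) wavefield: argmin_u ||P u - d||^2 + lam ||A u - b||^2
K = np.vstack([P, np.sqrt(lam) * A])
r = np.concatenate([d, np.sqrt(lam) * b])
u_ls, *_ = np.linalg.lstsq(K, r, rcond=None)
print(np.linalg.norm(P @ u_ls - d))                 # small data residual
```

It is this LS wavefield that the paper describes as difficult to compute at scale, motivating the multiplier-oriented variants that avoid forming it explicitly.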