17 research outputs found

    An a posteriori error control framework for adaptive precision optimization using discontinuous Galerkin finite element method

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005. Includes bibliographical references (leaves 169-178). Introduction: Aerodynamic design optimization has seen significant development over the past decade. Adjoint-based shape design for elliptic systems was first proposed by Pironneau and applied to transonic flow by Jameson. A review of the aerodynamic shape optimization literature and a large list of references is given in. Over the years much technology has been developed, allowing engineers to contemplate applying optimization methods to a wide variety of problems. In the context of structured grids, adjoint-based applications include multipoint, multi-objective airfoil design using the compressible Navier-Stokes equations and 3D multipoint design of aircraft configurations using the inviscid Euler equations. There has also been significant effort in applying adjoint methods in the unstructured grid setting. In this context, Newman et al. and Elliot and Peraire were among the first to develop discrete adjoint approaches for the inviscid Euler equations. by James Ching-Chi. Ph.D.
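    The discrete adjoint approach mentioned above can be illustrated on a scalar model. The sketch below assumes a hypothetical state equation R(u, x) = 0 and objective J(u, x); none of it is taken from the thesis, it only shows the adjoint bookkeeping that makes the gradient cost independent of the number of design variables.

```python
# Minimal discrete-adjoint sketch on a hypothetical scalar model (not from the
# thesis): state equation R(u, x) = u - x**2 = 0, objective J(u) = u**2,
# so the reduced objective is J(x) = x**4 with analytic gradient 4*x**3.

def solve_state(x):
    return x ** 2                     # u satisfying R(u, x) = 0

def adjoint_gradient(x):
    u = solve_state(x)
    dR_du, dR_dx = 1.0, -2.0 * x      # partial derivatives of R
    dJ_du, dJ_dx = 2.0 * u, 0.0       # partial derivatives of J
    lam = -dJ_du / dR_du              # adjoint solve: dR_du^T * lam = -dJ_du
    return dJ_dx + lam * dR_dx        # total derivative dJ/dx

# agrees with the analytic gradient 4*x**3
x = 1.5
assert abs(adjoint_gradient(x) - 4.0 * x ** 3) < 1e-12
```

    In a PDE-constrained setting the scalars become vectors and the adjoint solve becomes a single linear system per objective, which is what makes adjoint methods attractive for shape design with many design variables.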

    A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Algorithm for Nonlinear Programming

    This thesis presents a new numerical solution method for large-scale nonlinear optimization problems. Nonlinear programs occur in a wide range of engineering and academic applications, such as discretized optimal control processes and parameter identification of physical systems. The most efficient and robust solution approaches for this problem class have been shown to be sequential quadratic programming and primal-dual interior-point methods. The proposed algorithm combines a variant of the latter with a special penalty function to increase its robustness, through an automatic regularization of the nonlinear constraints caused by the penalty term. In detail, a modified barrier function and a primal-dual augmented Lagrangian approach with an exact l2-penalty are used. Both share the property that, for certain Lagrangian multiplier estimates, the barrier and penalty parameters do not have to converge to zero or diverge, respectively. This improves the conditioning of the internal linear equation systems near the optimal solution, handles rank-deficiency of the constraint derivatives at all non-feasible iterates, and helps identify infeasible problem formulations. Although the resulting merit function is non-smooth, a certain step direction is guaranteed to be a descent direction. The algorithm includes an adaptive update strategy for the barrier and penalty parameters as well as for the Lagrangian multiplier estimates, based on a sensitivity analysis. Global convergence to a first-order optimal solution, a certificate of infeasibility, or a Fritz-John point is proven, and is maintained by combining the merit function with a filter or a piecewise linear penalty function. Unlike the majority of filter methods, no separate feasibility restoration phase is required. For a fixed barrier parameter, the method has a quadratic order of convergence.
Furthermore, a sensitivity-based iterative refinement strategy is developed to approximate the optimal solution of a parameter-dependent nonlinear program under parameter changes. It exploits special sensitivity derivative approximations and converges locally, with a linear convergence order, to a feasible point that also satisfies the perturbed complementarity condition of the modified barrier method. Active-set changes from active to inactive can thereby be handled. Due to a certain update of the Lagrangian multiplier estimate, the refinement is suitable for warmstarting the penalty-interior-point approach. A special focus of the thesis is the development of an algorithm with excellent performance in practice. Details on an implementation of the proposed primal-dual penalty-interior-point algorithm in the nonlinear programming solver WORHP and a numerical study based on the CUTEst test collection are provided. The efficiency and robustness of the algorithm are further compared to state-of-the-art nonlinear programming solvers, in particular the interior-point solvers IPOPT and KNITRO as well as the sequential quadratic programming solvers SNOPT and WORHP.
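    The key property claimed for the modified barrier, that the barrier parameter need not be driven to zero, can be seen on a one-variable toy problem. The sketch below assumes Polyak's classical modified-barrier form (the thesis's variant and the WORHP implementation differ in detail); the problem, names, and constants are illustrative.

```python
# Toy problem: min (x+1)^2  s.t.  x >= 0, with solution x* = 0, lam* = 2.
# Modified barrier: F(x) = (x+1)^2 - mu*lam*log(1 + x/mu)  (Polyak's form).

def modified_barrier_grad(x, lam, mu):
    # d/dx of (x+1)^2 - mu*lam*log(1 + x/mu)
    return 2.0 * (x + 1.0) - mu * lam / (mu + x)

def minimize_barrier(lam, mu, lo=-0.05, hi=1.0):
    # the gradient is strictly increasing for x > -mu; bisect for its root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if modified_barrier_grad(mid, lam, mu) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With the exact multiplier lam* = 2, the minimizer is x* = 0 for a FIXED
# mu > 0, illustrating why the barrier parameter need not converge to zero.
assert abs(minimize_barrier(lam=2.0, mu=0.1)) < 1e-8
```

    A classical log barrier would need mu -> 0 to recover x* on the boundary; here the exact multiplier estimate does that work instead, which is what keeps the internal linear systems well-conditioned near the solution.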

    Hybrid Filter Methods for Nonlinear Optimization

    Globalization strategies used by algorithms for nonlinear constrained optimization must balance the often conflicting goals of reducing the objective function and satisfying the constraints. Merit functions and filters are two popular such strategies, each with its strengths and weaknesses. In particular, traditional filter methods require a restoration phase that is designed to reduce infeasibility while ignoring the objective function; for this reason, there is often a significant decrease in performance when restoration is triggered. In Chapter 3, we present a new filter method that addresses this main weakness of traditional filter methods. Specifically, we present a hybrid filter method that avoids a traditional restoration phase and instead employs a penalty mode built upon the l-1 penalty function; the penalty mode is entered when an iterate decreases both the penalty function and the constraint violation. Moreover, the algorithm uses the same search direction computation procedure in every iteration and uses local feasibility estimates that emerge during this procedure to define a new, improved, and adaptive margin (envelope) of the filter. Since we use the penalty function (a combination of the objective function and constraint violation) to define the search direction, our algorithm never ignores the objective function, a property not shared by traditional filter methods. Our algorithm thus draws upon the strengths of both filter and penalty methods to form a novel hybrid approach that is robust and efficient. In particular, under common assumptions, we prove global convergence of our algorithm. In Chapter 4, we present a nonmonotonic variant of the algorithm of Chapter 3. For this version of our method, we prove that it generates iterates that converge to a first-order solution from an arbitrary starting point, with a superlinear rate of convergence.
We also present numerical results that validate the efficiency of our method. Finally, in Chapter 5, we present a numerical study on the application of a recently developed bound-constrained quadratic optimization algorithm to the dual formulation of sparse, large-scale, strictly convex quadratic problems. Such problems are of particular interest since they arise as subproblems in every iteration of our new filter methods.
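    The acceptance test that filter methods build on can be sketched as follows, assuming the standard Fletcher-Leyffer filter with a fixed margin gamma rather than the adaptive envelope developed in Chapter 3; all names and numbers are illustrative.

```python
# A filter is a list of (objective f, constraint violation h) pairs; a trial
# point is acceptable if it sufficiently improves f or h against every entry.
GAMMA = 1e-3   # fixed margin (the chapter's envelope is adaptive instead)

def acceptable(point, filt, gamma=GAMMA):
    f, h = point
    return all(f <= fj - gamma * hj or h <= (1.0 - gamma) * hj
               for fj, hj in filt)

def add_to_filter(point, filt):
    f, h = point
    # drop entries dominated by the new pair, then append it
    filt = [(fj, hj) for fj, hj in filt if not (f <= fj and h <= hj)]
    filt.append(point)
    return filt

filt = add_to_filter((10.0, 1.0), [])
assert acceptable((9.0, 1.0), filt)        # sufficient objective decrease
assert acceptable((10.5, 0.5), filt)       # sufficient infeasibility decrease
assert not acceptable((10.0, 1.0), filt)   # a filter entry rejects itself
```

    The "or" in the test is exactly the source of the weakness discussed above: a point can be accepted on infeasibility decrease alone, with the objective ignored, which is what the hybrid penalty mode is designed to avoid.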

    Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization

    This paper proposes and justifies two globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of variational analysis and generalized differentiation. Both methods are coderivative-based and employ generalized Hessians (coderivatives of subgradient mappings) associated with objective functions that are either of class C^{1,1} or are represented in the form of convex composite optimization, where one of the terms may be extended-real-valued. The proposed globally convergent algorithms are of two types. The first one extends the damped Newton method and requires positive-definiteness of the generalized Hessians for its well-posedness and efficient performance, while the other algorithm is of the regularized Newton type, being well-defined when the generalized Hessians are merely positive-semidefinite. The obtained convergence rates for both methods are at least linear, but become superlinear under the semismooth* property of subgradient mappings. Problems of convex composite optimization are investigated with and without the strong convexity assumption on the smooth parts of the objective functions by implementing the machinery of forward-backward envelopes. Numerical experiments are conducted for Lasso problems and for box-constrained quadratic programs, with performance comparisons of the new algorithms against other first-order and second-order methods that are highly recognized in nonsmooth optimization.
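    The shape of a regularized Newton step can be sketched in one dimension. The toy function below is C^{1,1}, with a generalized Hessian equal to the interval [2, 4] at the origin; this illustrates only the step itself, not the paper's coderivative-based algorithm.

```python
# Toy C^{1,1} objective: f'(x) = 2x for x >= 0 and 4x for x < 0; f' is
# Lipschitz but not differentiable at 0, the unique minimizer.
def fprime(x):
    return 2.0 * x if x >= 0 else 4.0 * x

def fsecond(x):
    return 2.0 if x >= 0 else 4.0     # one element of the generalized Hessian

def regularized_newton(x, c=1.0, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        g = fprime(x)
        if abs(g) <= tol:
            break
        mu = c * abs(g)               # regularization tied to the gradient norm
        x -= g / (fsecond(x) + mu)    # well-defined even for a psd Hessian
    return x

assert abs(regularized_newton(3.0)) < 1e-10
assert abs(regularized_newton(-5.0)) < 1e-10
```

    Because mu shrinks with the gradient, the regularization vanishes near the solution and the iteration recovers Newton-like local speed while staying well-posed when the (generalized) Hessian is only positive-semidefinite.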

    Levenberg-Marquardt Algorithms for Nonlinear Equations, Multi-objective Optimization, and Complementarity Problems

    The Levenberg-Marquardt algorithm is a classical method for solving nonlinear systems of equations that arise in various applications in engineering and economics. Recently, Levenberg-Marquardt methods have turned out to be a valuable principle for obtaining fast convergence to a solution of the nonlinear system if the classical nonsingularity assumption is replaced by a weaker error bound condition. In this way, problems with nonisolated solutions can also be treated successfully. Such problems increasingly arise in engineering applications and in mathematical programming. In this thesis we use Levenberg-Marquardt algorithms to deal with nonlinear equations, multi-objective optimization, and complementarity problems. We develop new algorithms for solving these problems and investigate their convergence properties. For sufficiently smooth nonlinear equations we provide convergence results for inexact Levenberg-Marquardt-type algorithms. In particular, a sharp bound on the maximal level of inexactness that is sufficient for a quadratic (or superlinear) rate of convergence is derived. Moreover, the theory developed is used to show quadratic convergence of a robust projected Levenberg-Marquardt algorithm. The use of Levenberg-Marquardt-type algorithms for unconstrained multi-objective optimization problems is investigated in detail. In particular, two globally and locally quadratically convergent algorithms for these problems are developed. Moreover, assumptions under which the error bound condition for a Pareto-critical system is fulfilled are derived. We also treat nonsmooth equations arising from reformulating complementarity problems by means of NCP functions. For these reformulations, we show that existing smoothness conditions are not satisfied at degenerate solutions. Moreover, we derive new results for positively homogeneous functions.
The latter results are used to show that appropriate weaker smoothness conditions (enabling a local Q-quadratic rate of convergence) hold for certain reformulations.
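    The basic Levenberg-Marquardt iteration with a residual-based regularization parameter, the choice that underlies fast convergence under an error bound condition, can be sketched for a scalar equation. The example below is illustrative only and is not one of the algorithms developed in the thesis.

```python
# Levenberg-Marquardt sketch for a scalar equation F(x) = 0, using the
# regularization parameter mu_k = |F(x_k)|^2 (illustrative toy example).
import math

def levenberg_marquardt(F, J, x, iters=50):
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        mu = Fx * Fx                        # LM parameter ~ squared residual
        x += -(Jx * Fx) / (Jx * Jx + mu)    # solve (J^T J + mu I) d = -J^T F
    return x

root = levenberg_marquardt(lambda x: math.exp(x) - 1.0,
                           lambda x: math.exp(x), x=1.0)
assert abs(root) < 1e-10
```

    The mu term keeps the step defined even when J is singular, which is what allows convergence to nonisolated solutions where the classical Newton nonsingularity assumption fails.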

    Globalizing Stabilized Sequential Quadratic Programming Method by Smooth Primal-Dual Exact Penalty Function

    An iteration of the stabilized sequential quadratic programming method consists of solving a certain quadratic program in the primal-dual space, regularized in the dual variables. The advantage over classical sequential quadratic programming is that no constraint qualifications are required for fast local convergence (i.e., the problem can be degenerate). In particular, for equality-constrained problems, the superlinear rate of convergence is guaranteed under the sole assumption that the primal-dual starting point is close enough to a stationary point and a noncritical Lagrange multiplier (the latter condition being weaker than the second-order sufficient optimality condition). However, unlike for the usual sequential quadratic programming method, designing natural globally convergent algorithms based on the stabilized version has proved quite a challenge and, currently, there are very few proposals in this direction. For equality-constrained problems, we suggest using for this task a linesearch on the smooth two-parameter exact penalty function, which is the sum of the Lagrangian and squared penalizations of the violation of the constraints and of the violation of Lagrangian stationarity with respect to the primal variables. Reasonable global convergence properties are established. Moreover, we show that the globalized algorithm preserves the superlinear rate of the stabilized sequential quadratic programming method under the weak conditions mentioned above. We also present some numerical experiments on a set of degenerate test problems. © 2016, Springer Science+Business Media New York
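    The structure of this penalty function can be sketched on a toy equality-constrained problem. The sketch below assumes min x1^2 + x2^2 subject to x1 + x2 = 1, whose KKT pair is x* = (0.5, 0.5), lam* = -1; the penalty coefficients c1 and c2 are illustrative choices, not values from the paper.

```python
# Smooth primal-dual exact penalty for min x1^2 + x2^2  s.t.  x1 + x2 = 1:
# phi(x, lam) = L(x, lam) + (c1/2)*h(x)^2 + (c2/2)*||grad_x L(x, lam)||^2,
# where L is the Lagrangian and h the equality constraint residual.
def phi(x1, x2, lam, c1=10.0, c2=10.0):
    h = x1 + x2 - 1.0
    L = x1**2 + x2**2 + lam * h
    gL1, gL2 = 2.0 * x1 + lam, 2.0 * x2 + lam      # grad_x L
    return L + 0.5 * c1 * h**2 + 0.5 * c2 * (gL1**2 + gL2**2)

# At the KKT pair both penalty terms vanish and phi equals the optimal value
# f* = 0.5; perturbing either the primal or the dual variables increases phi.
assert abs(phi(0.5, 0.5, -1.0) - 0.5) < 1e-12
assert phi(0.6, 0.5, -1.0) > 0.5
assert phi(0.5, 0.5, -0.9) > 0.5
```

    Because phi is smooth in (x, lam) jointly, an ordinary linesearch on it can globalize the stabilized SQP step without the nonsmooth machinery that l1-type exact penalties require.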

    Multi-Period Natural Gas Market Modeling - Applications, Stochastic Extensions and Solution Approaches

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in its combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand, and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices, and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, covering nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power.
The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.
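    The Benders (L-shaped) idea, approximating the expected recourse value by optimality cuts built from subproblem solutions, can be sketched on a two-stage toy problem with a scalar first stage. Everything below is illustrative; the dissertation's stochastic MCPs are vastly larger and are equilibrium problems rather than simple minimizations.

```python
# L-shaped (Benders) sketch for a two-stage toy problem:
# min_x  x + E_s[ 3*max(d_s - x, 0) ],  x in [0, 10],  d in {2, 6} equally
# likely; the true optimum is x* = 6. All data here are made up.
SCEN = [(0.5, 2.0), (0.5, 6.0)]               # (probability, demand)
Q_COEF = 3.0

def recourse_cut(x):
    """Expected recourse value and an aggregate optimality cut at x."""
    val = sum(p * Q_COEF * max(d - x, 0.0) for p, d in SCEN)
    grad = sum(p * (-Q_COEF if x < d else 0.0) for p, d in SCEN)
    return val, (val - grad * x, grad)        # cut: theta >= a + b*x

def solve_master(cuts, lo=0.0, hi=10.0):
    """Piecewise-linear master: min x + max(0, max_j a_j + b_j*x)."""
    lines = [(0.0, 0.0)] + cuts               # theta >= 0 plus all cuts
    xs = {lo, hi}
    for i, (a1, b1) in enumerate(lines):
        for a2, b2 in lines[i + 1:]:
            if b1 != b2:                      # breakpoint of two cut lines
                xb = (a2 - a1) / (b1 - b2)
                if lo <= xb <= hi:
                    xs.add(xb)
    theta = lambda x: max(a + b * x for a, b in lines)
    return min(xs, key=lambda x: x + theta(x))

cuts, x = [], 0.0
for _ in range(20):                           # Benders iterations
    val, cut = recourse_cut(x)
    cuts.append(cut)
    x = solve_master(cuts)

assert abs(x - 6.0) < 1e-9                    # converges to the true optimum
```

    Each iteration solves the small master with the cuts collected so far, then prices the first-stage decision against every scenario; the numerical issues reported above typically arise because the MCP subproblems do not yield cuts as cleanly as this convex toy does.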
