
    Bibliographie der Veröffentlichungen von Angehörigen der Technischen Hochschule Ilmenau: aus d. Jahr 1987

    Overview: Bibliography of publications by members of the Technische Hochschule Ilmenau

    Solving Constrained Piecewise Linear Optimization Problems by Exploiting the Abs-linear Approach

    This thesis presents an algorithm for solving finite-dimensional optimization problems with a piecewise linear objective function and piecewise linear constraints.
For this purpose, it is assumed that the functions are given in the so-called Abs-Linear Form, a matrix-vector representation. Using this form, the domain can be decomposed into polyhedra so that the nonsmoothness of the piecewise linear functions can coincide with the edges of the polyhedra. For the class of abs-linear functions, necessary and sufficient optimality conditions that can be verified in polynomial time are proven for both the unconstrained and the constrained case. For unconstrained piecewise linear optimization problems, Andrea Walther and Andreas Griewank already presented a solution algorithm, the Active Signature Method (ASM), in 2019. Building on this method and combining it with an active set strategy for handling inequality constraints yields a new algorithm, the Constrained Active Signature Method (CASM), for constrained problems. Both algorithms explicitly exploit the piecewise linear structure of the functions by working with the Abs-Linear Form. The analysis of both algorithms includes proofs of finite convergence to local minima of the respective problems as well as the efficient solution of the saddle point systems arising in each iteration. The numerical performance of CASM is illustrated by several examples. The test problems cover academic problems, including bi-level and linear complementarity problems, as well as application problems from gas network optimization and inventory problems.
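To make the Abs-Linear Form concrete, the following is a minimal sketch (not the thesis's implementation) assuming the standard representation from the abs-linear literature: switching variables z = c + Zx + L|z| with L strictly lower triangular, and function value f(x) = d + aᵀx + bᵀ|z|. All variable names and the toy instance below are illustrative choices.

```python
import numpy as np

def eval_abs_linear(x, c, Z, L, d, a, b):
    """Evaluate a piecewise linear function given in abs-linear form.

    The switching variables satisfy z = c + Z x + L |z|; since L is
    strictly lower triangular, z can be computed one component at a time.
    Returns f(x) = d + a @ x + b @ |z| and the signature sign(z), which
    identifies the polyhedron of the domain decomposition containing x.
    """
    s = len(c)
    z = np.zeros(s)
    for i in range(s):
        z[i] = c[i] + Z[i] @ x + L[i, :i] @ np.abs(z[:i])
    return d + a @ x + b @ np.abs(z), np.sign(z)

# Toy instance: f(x) = |x1| + |x2 - |x1||, encoded with two switching
# variables z1 = x1 and z2 = x2 - |z1| (nested absolute value via L).
c = np.array([0.0, 0.0])
Z = np.eye(2)
L = np.array([[0.0, 0.0],
              [-1.0, 0.0]])
a = np.zeros(2)
b = np.array([1.0, 1.0])
f, sigma = eval_abs_linear(np.array([3.0, -1.0]), c, Z, L, 0.0, a, b)
# f = |3| + |-1 - 3| = 7; signature (1, -1) marks the active polyhedron.
```

The signature vector is what ASM/CASM update from iteration to iteration: changing one sign corresponds to crossing one kink of the decomposition.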

    Optimality conditions for abs-normal NLPs

    Structured nonsmoothness is widely present in practical optimization problems. A particularly attractive class of nonsmooth problems, both from a theoretical and from an algorithmic perspective, are nonsmooth NLPs with equality and inequality constraints in abs-normal form, so-called abs-normal NLPs. In this thesis optimality conditions for this particular class are obtained. To this end, the theory of Andreas Griewank and Andrea Walther for unconstrained optimization problems in abs-normal form is first extended. In particular, similar necessary and sufficient conditions of first and second order are obtained that are directly based on classical Karush-Kuhn-Tucker (KKT) theory for smooth NLPs. Then, it is shown that the class of abs-normal NLPs is equivalent to the class of Mathematical Programs with Equilibrium Constraints (MPECs). Hence, the regularity assumption LIKQ introduced for the abs-normal NLP turns out to be equivalent to MPEC-LICQ. Moreover, stationarity concepts and optimality conditions under these regularity assumptions of linear independence type are equivalent up to technical assumptions. Next, well-established constraint qualifications of Mangasarian-Fromovitz, Abadie, and Guignard type for MPECs are used to define corresponding concepts for abs-normal NLPs. It is then shown that kink qualifications and MPEC constraint qualifications of Mangasarian-Fromovitz and Abadie type, respectively, are equivalent. As it remains open whether this holds for Guignard type kink and constraint qualifications, branch formulations for abs-normal NLPs and MPECs are introduced; for these, equivalence of Abadie's and Guignard's constraint qualifications holds for all branch problems. Throughout, a reformulation of inequalities with absolute value slacks is considered. It preserves constraint qualifications of linear independence and Abadie type, but not of Mangasarian-Fromovitz type.
For Guignard type this remains an open question, but ACQ and GCQ are preserved when passing to the branch problems. Further, M-stationarity and B-stationarity concepts for abs-normal NLPs are introduced and the corresponding first order optimality conditions are proven using the corresponding concepts for MPECs. Moreover, a reformulation is given that extends the optimality conditions for abs-normal NLPs to problems with an additional nonsmooth objective function, and the preservation of regularity assumptions is considered. Using this, it is shown that the unconstrained abs-normal NLP always satisfies constraint qualifications of Abadie and thus Guignard type. Hence, in this special case every local minimizer satisfies the M-stationarity and B-stationarity conditions for abs-normal NLPs.
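The equivalence between abs-normal NLPs and MPECs can be illustrated by the standard variable splitting of an absolute value; the notation below (f, c, z, u, v) is chosen here for illustration and is not taken from the thesis. Splitting the switching variable z into its positive and negative parts turns each kink into a complementarity constraint:

```latex
% Abs-normal problem with switching variable z:
%   \min_x f(x, |z|)  \quad \text{s.t.} \quad z = c(x, |z|)
% Split z = u - v and |z| = u + v with u, v \ge 0, u^\top v = 0:
\begin{align*}
  \min_{x,\,u,\,v}\; & f(x,\, u + v) \\
  \text{s.t.}\;      & u - v = c(x,\, u + v), \\
                     & u \ge 0, \quad v \ge 0, \quad u^{\top} v = 0.
\end{align*}
```

Under this correspondence, linear independence of the active kink gradients (LIKQ) on the abs-normal side matches MPEC-LICQ on the MPEC side, which is the equivalence the abstract refers to.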

    Local Convergence of Newton-type Methods for Nonsmooth Constrained Equations and Applications

    In this thesis we consider constrained systems of equations. The focus is on local Newton-type methods for the solution of constrained systems which converge locally quadratically under mild assumptions implying neither local uniqueness of solutions nor differentiability of the equation function at solutions. The first aim of this thesis is to improve existing local convergence results of the constrained Levenberg-Marquardt method. To this end, we describe a general Newton-type algorithm. Then we prove local quadratic convergence of this general algorithm under the same four assumptions which were recently used for the local convergence analysis of the LP-Newton method. Afterwards, we show that, besides the LP-Newton method, the constrained Levenberg-Marquardt method can be regarded as a special realization of the general Newton-type algorithm and therefore enjoys the same local convergence properties. Thus, local quadratic convergence of a nonsmooth constrained Levenberg-Marquardt method is proved without requiring conditions implying the local uniqueness of solutions. As already mentioned, we use four assumptions for the local convergence analysis of the general Newton-type algorithm. The second aim of this thesis is a detailed discussion of these convergence assumptions for the case that the equation function of the constrained system is piecewise continuously differentiable. Some of the convergence assumptions seem quite technical and difficult to check. Therefore, we look for sufficient conditions which are still mild but which seem to be more familiar. We will particularly prove that the whole set of the convergence assumptions holds if some set of local error bound conditions is satisfied and in addition the feasible set of the constrained system excludes those zeros of the selection functions which are not zeros of the equation function itself, at least in a sufficiently small neighborhood of some fixed solution. 
We apply our results to constrained systems arising from complementarity systems, i.e., systems of equations and inequalities which contain complementarity constraints. Our new conditions are discussed for a suitable reformulation of the complementarity system as a constrained system of equations by means of the minimum function. In particular, it turns out that the whole set of the convergence assumptions is actually implied by some set of local error bound conditions. In addition, we provide a new constant rank condition implying the whole set of the convergence assumptions. In particular, we provide adapted formulations of our new conditions for special classes of complementarity systems. We consider Karush-Kuhn-Tucker (KKT) systems arising from optimization problems, variational inequalities, or generalized Nash equilibrium problems (GNEPs), and Fritz-John (FJ) systems arising from GNEPs. Thus, we obtain for each problem class conditions which guarantee local quadratic convergence of the general Newton-type algorithm and its special realizations to a solution of the particular problem. Moreover, we prove for FJ systems of GNEPs that generically some full row rank condition is satisfied at any solution of the FJ system of a GNEP. The latter condition implies the whole set of the convergence assumptions if the functions which characterize the GNEP are sufficiently smooth. Finally, we describe an idea for a possible globalization of our Newton-type methods, at least for the case that the constrained system arises from a certain smooth reformulation of the KKT system of a GNEP. More precisely, a hybrid method is presented whose local part is the LP-Newton method. The hybrid method turns out to be, under appropriate conditions, both globally and locally quadratically convergent.
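The minimum-function reformulation mentioned above can be sketched on a toy KKT system; the following is an illustrative plain semismooth Newton iteration, not the constrained Levenberg-Marquardt or LP-Newton methods analyzed in the thesis, and the problem instance is an assumption made here for demonstration.

```python
import numpy as np

def semismooth_newton(x0, mu0, tol=1e-10, max_iter=50):
    """Nonsmooth Newton iteration on F(x, mu) = (x - mu, min(mu, x - 1)) = 0,
    the min-function reformulation of the KKT system of the toy problem
        min 0.5 * x**2   s.t.   x >= 1,
    whose complementarity condition mu >= 0, x - 1 >= 0, mu*(x - 1) = 0
    is equivalent to min(mu, x - 1) = 0.
    """
    w = np.array([x0, mu0], dtype=float)
    for _ in range(max_iter):
        x, mu = w
        F = np.array([x - mu, min(mu, x - 1.0)])
        if np.linalg.norm(F) < tol:
            break
        # Pick one element of the B-subdifferential of F: the gradient of
        # the locally active branch of the min function.
        if mu <= x - 1.0:
            J2 = np.array([0.0, 1.0])   # derivative of the branch mu
        else:
            J2 = np.array([1.0, 0.0])   # derivative of the branch (x - 1)
        J = np.vstack([np.array([1.0, -1.0]), J2])
        w = w - np.linalg.solve(J, F)
    return w

w = semismooth_newton(3.0, 0.0)
# Converges to the KKT point x = 1, mu = 1 of the toy problem.
```

Because F is piecewise linear here, the iteration terminates after finitely many branch switches; the thesis's methods address the much harder setting where such systems are nonsmooth and solutions need not be locally unique.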