27 research outputs found

    Non-Smooth Optimization by Abs-Linearization in Reflexive Function Spaces

    Non-smooth optimization problems in reflexive Banach spaces arise in many applications. Frequently, all non-differentiabilities involved are assumed to be given by Lipschitz-continuous operators such as abs, min and max. For example, such problems can be optimal control problems with possibly non-smooth objective functionals constrained by partial differential equations (PDEs) which may themselves contain non-smooth terms. Their efficient and robust solution requires numerical simulations combined with specific optimization algorithms. Locally Lipschitz-continuous non-smooth non-linearities, described by appropriate Nemytzkii operators that arise directly in the problem formulation, play an essential role in the study of the underlying optimization problems. In this dissertation, two specific solution methods and algorithms for such non-smooth optimization problems in reflexive Banach spaces are proposed and discussed. The first solution method, SALMIN, minimizes non-smooth operators in reflexive Banach spaces by means of successive quadratic overestimation. The second solution method, SCALi, is a novel structure-exploiting optimization approach for optimization problems with non-smooth elliptic PDE constraints. The central feature of these methods is the appropriate handling of non-differentiabilities. Special focus lies on the underlying structure of the problem stemming from the non-smoothness and how it can be effectively exploited to solve the optimization problem in an appropriate and efficient way.
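    A minimal, finite-dimensional sketch may help fix the idea behind successive quadratic overestimation: around the current iterate, a local model plus a quadratic term (q/2)||d||^2 is minimized, and q is increased until the model overestimates the objective at the trial point. The Python sketch below is a generic majorize-minimize loop under that assumption, not the SALMIN algorithm itself; in particular it uses a plain subgradient model instead of abs-linearization, and all names, constants and tolerances are illustrative.

        import numpy as np

        def overestimation_minimize(f, subgrad, x0, q=1.0, tol=1e-8, max_iter=500):
            # Minimize the model f(x) + <g, d> + (q/2)||d||^2 in d and increase q
            # until the model overestimates f at the trial point x + d.
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                g = subgrad(x)                      # a subgradient at x (assumed available)
                while True:
                    d = -g / q                      # minimizer of the quadratic model
                    model = f(x) + g @ d + 0.5 * q * (d @ d)
                    if f(x + d) <= model + 1e-12:   # overestimation achieved
                        break
                    q *= 2.0                        # stiffen the quadratic term and retry
                if np.linalg.norm(d) < tol:
                    return x
                x = x + d
                q = max(q / 2.0, 1e-12)             # cautiously relax q for the next step
            return x

        # Toy example: minimize |x1| + (x2 - 1)^2; the minimizer is (0, 1).
        f = lambda x: abs(x[0]) + (x[1] - 1.0) ** 2
        subgrad = lambda x: np.array([np.sign(x[0]), 2.0 * (x[1] - 1.0)])
        print(overestimation_minimize(f, subgrad, x0=[2.0, -1.0]))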

    Duality theory in mathematical programming and optimal control


    Nondifferentiable Optimization: Motivations and Applications

    IIASA has been involved in research on nondifferentiable optimization since 1976. The Institute's research in this field has been very productive, leading to many important theoretical, algorithmic and applied results. Nondifferentiable optimization has now become a recognized and rapidly developing branch of mathematical programming. To continue this tradition and to review developments in this field, IIASA held this Workshop in Sopron (Hungary) in September 1984. This volume contains selected papers presented at the Workshop. It is divided into four sections dealing with the following topics: (I) Concepts in Nonsmooth Analysis; (II) Multicriteria Optimization and Control Theory; (III) Algorithms and Optimization Methods; (IV) Stochastic Programming and Applications.

    Generating structured non-smooth priors and associated primal-dual methods

    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
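    To make the lower-level problem and the primal-dual machinery referred to above concrete, the sketch below runs a standard Chambolle-Pock style iteration for the ROF model min_u 0.5*||u - f||^2 + TV-type term with weight lam. It is a generic textbook scheme, not the chapter's bilevel algorithm; lam may be a scalar or a pointwise array standing in for a spatially varying regularization parameter, and all names are illustrative assumptions.

        import numpy as np

        def tv_denoise_primal_dual(f, lam, n_iter=300, tau=0.25, sigma=0.25):
            u = f.copy()
            u_bar = u.copy()
            p = np.zeros((2,) + f.shape)            # dual variable

            def grad(v):                            # forward differences, Neumann boundary
                gx = np.zeros_like(v); gy = np.zeros_like(v)
                gx[:-1, :] = v[1:, :] - v[:-1, :]
                gy[:, :-1] = v[:, 1:] - v[:, :-1]
                return np.stack([gx, gy])

            def div(q):                             # negative adjoint of grad
                qx, qy = q
                dx = np.zeros_like(qx); dy = np.zeros_like(qy)
                dx[0, :] = qx[0, :]; dx[1:-1, :] = qx[1:-1, :] - qx[:-2, :]; dx[-1, :] = -qx[-2, :]
                dy[:, 0] = qy[:, 0]; dy[:, 1:-1] = qy[:, 1:-1] - qy[:, :-2]; dy[:, -1] = -qy[:, -2]
                return dx + dy

            for _ in range(n_iter):
                p = p + sigma * grad(u_bar)
                p = p / np.maximum(1.0, np.sqrt(p[0] ** 2 + p[1] ** 2) / lam)  # project onto {|p| <= lam}
                u_old = u
                u = (u + tau * div(p) + tau * f) / (1.0 + tau)                 # prox of the data term
                u_bar = 2 * u - u_old                                          # over-relaxation
            return u

        # Usage on a noisy synthetic image (lam could equally be a 2-D array of weights).
        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        denoised = tv_denoise_primal_dual(noisy, lam=0.1)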

    A Nonsmooth Nonconvex Descent Algorithm

    In many applications, nonsmooth nonconvex energy functions which are Lipschitz continuous appear quite naturally. Contact mechanics with friction is a classic example. A second example is the 1-Laplace operator and its eigenfunctions. In this work we give an algorithm such that, for every locally Lipschitz continuous function f, every accumulation point of a sequence produced by this algorithm is a critical point of f in the sense of Clarke. Here f is defined on a reflexive Banach space X such that X and its dual space X' are strictly convex and Clarkson's inequalities hold. (For example, Sobolev spaces and every closed subspace equipped with the Sobolev norm satisfy these assumptions for p > 1.) This algorithm is designed primarily to solve variational problems or their high-dimensional discretizations, but can be applied to a variety of locally Lipschitz functions. In elastic contact mechanics the strain energy is often smooth and nonconvex on a suitable domain, while the contact and friction energies are nonsmooth and supported on a subspace of substantially smaller dimension than the strain energy, since all points in the interior of the bodies affect only the strain energy. For such elastic contact problems we suggest a specialization of our algorithm which treats the smooth part with Newton-like methods. In the case that the gradient of the entire energy function is semismooth close to the minimizer, we can even prove superlinear convergence of this specialization of our algorithm. We test the algorithm and its specialization on several benchmark problems. Moreover, we apply the algorithm to the 1-Laplace minimization problem restricted to finite-dimensional subspaces of piecewise affine, continuous functions. The algorithm developed here uses ideas of the bundle trust region method by Schramm and a new generalization of the concept of gradients on a set. The basic idea behind these gradients on sets is that we want to find a stable descent direction, i.e. a descent direction on an entire neighborhood of an iteration point. In this way we avoid oscillations of the gradients and very small descent steps (in the smooth as well as in the nonsmooth case). It turns out that the norm-smallest element of the gradient on a set provides a stable descent direction. To our knowledge, the algorithm presented here is the first that can treat locally Lipschitz continuous functions in this generality; in particular, nonsmooth nonconvex functions on such large finite-dimensional Banach spaces have not been studied so far. We show that the algorithm is very robust and often faster than common algorithms. Furthermore, we see that with this algorithm it is possible, for the first time, to reliably compute the first eigenfunctions of the 1-Laplace operator up to discretization errors.
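    The "norm-smallest element of a gradient on a set" idea can be illustrated, in finite dimensions, by a gradient-sampling-style step: collect (sub)gradients on a neighborhood of the iterate, take the minimum-norm element of their convex hull as a stable descent direction, and backtrack along it. The Python sketch below is only a rough stand-in for the dissertation's construction; the sampling radius, tolerances and function names are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def min_norm_convex_combination(grads):
            # Smallest-norm element of the convex hull of the rows of `grads`,
            # computed as a small quadratic program over the simplex.
            m = grads.shape[0]
            obj = lambda lam: 0.5 * np.dot(lam @ grads, lam @ grads)
            cons = ({"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0},)
            res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m, constraints=cons)
            return res.x @ grads

        def stable_descent_step(f, subgrad, x, radius=1e-2, n_samples=8, seed=0):
            # Sample subgradients on a ball around x and step along the negative of
            # the norm-smallest convex combination, with a simple backtracking search.
            rng = np.random.default_rng(seed)
            pts = x + radius * rng.standard_normal((n_samples, x.size))
            g = min_norm_convex_combination(np.array([subgrad(p) for p in pts] + [subgrad(x)]))
            t, d = 1.0, -g
            while f(x + t * d) > f(x) - 1e-4 * t * np.dot(g, g) and t > 1e-12:
                t *= 0.5
            return x + t * d if t > 1e-12 else x

        # One step on the nonsmooth function f(x) = |x1| + 2*|x2|.
        f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
        subgrad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
        print(stable_descent_step(f, subgrad, np.array([1.0, -1.0])))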

    Nonsmooth dynamic optimization of systems with varying structure

    Thesis (Ph.D.) by Mehmet Yunt, Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Includes bibliographical references (p. 357-365). In this thesis, an open-loop numerical dynamic optimization method for a class of dynamic systems is developed. The structure of the governing equations of the systems under consideration changes depending on the values of the states, parameters and controls. Therefore, these systems are called systems with varying structure. Such systems occur frequently in models of electric and hydraulic circuits, chemical processes, biological networks and machinery. As a result, the determination of parameters and controls resulting in the optimal performance of these systems has been an important research topic. Unlike dynamic optimization problems where the structure of the underlying system is constant, the dynamic optimization of systems with varying structure requires the determination of the optimal evolution of the system structure in time in addition to optimal parameters and controls. The underlying varying structure results in nonsmooth and discontinuous optimization problems. The nonsmooth single shooting method introduced in this thesis uses concepts from nonsmooth analysis and nonsmooth optimization to solve dynamic optimization problems involving systems with varying structure whose dynamics can be described by locally Lipschitz continuous ordinary or differential-algebraic equations. The method converts the infinite-dimensional dynamic optimization problem into a nonlinear program by parameterizing the controls. Unlike the state of the art, the method does not enumerate possible structures explicitly in the optimization and it does not depend on the discretization of the dynamics. Instead, it uses a special integration algorithm to compute state trajectories and derivative information. As a result, the method produces more accurate solutions to problems where the underlying dynamics is highly nonlinear and/or stiff, for less effort than the state of the art. The thesis develops substitutes for the gradient and the Jacobian of a function in case these quantities do not exist. These substitutes are set-valued maps, and elements of these maps need to be computed for optimization purposes. Differential equations are derived whose solutions furnish the necessary elements. These differential equations have discontinuities in time. A numerical method for their solution is proposed based on state event location algorithms that detect these discontinuities. Necessary conditions of optimality for nonlinear programs are derived using these substitutes, and it is shown that nonsmooth optimization methods called bundle methods can be used to obtain solutions satisfying these necessary conditions. Case studies compare the method to the state of the art and investigate its complexity empirically.
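    For orientation, a bare-bones single-shooting setup with a piecewise-constant control parameterization can look like the following. It is a generic illustration with hypothetical dynamics, a derivative-free optimizer instead of bundle methods, and no event location or nonsmooth derivative computation, so it is not the nonsmooth single shooting method of the thesis; it only shows how control parameterization turns the dynamic optimization problem into a finite-dimensional nonlinear program.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        # Hypothetical system with varying structure: the right-hand side switches
        # between two regimes depending on the sign of x - 0.5.
        def rhs(t, x, u):
            return -x + u if x[0] >= 0.5 else -0.2 * x + u

        def shoot(u_pieces, t_grid, x0=np.array([1.0])):
            # Integrate the dynamics interval by interval for piecewise-constant controls.
            x = x0
            for u, (t0, t1) in zip(u_pieces, zip(t_grid[:-1], t_grid[1:])):
                sol = solve_ivp(rhs, (t0, t1), x, args=(u,), rtol=1e-8, atol=1e-10)
                x = sol.y[:, -1]
            return x

        def objective(u_pieces, t_grid):
            # Track x(T) = 0.2 and add a small control penalty.
            xT = shoot(u_pieces, t_grid)
            return (xT[0] - 0.2) ** 2 + 1e-3 * np.sum(np.asarray(u_pieces) ** 2)

        t_grid = np.linspace(0.0, 2.0, 6)       # five control intervals on [0, 2]
        res = minimize(objective, np.zeros(5), args=(t_grid,), method="Nelder-Mead")
        print(res.x, objective(res.x, t_grid))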

    Morceaux Choisis en Optimisation Continue et sur les Systèmes non Lisses

    This Master-level course starts with the presentation of the optimality conditions of an optimization problem, described in a rather abstract manner so that they can be useful for dealing with a large variety of problems. Next, the course describes and analyzes various advanced algorithms to solve optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian methods, interior point methods) and shows how they can be used to solve a few classical optimization problems (linear optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various tools from convex and nonsmooth analysis will be presented. Everything is conceptualized in finite dimension. The goal of the lectures is therefore to consolidate basic knowledge in optimization, on both theoretical and algorithmic aspects.
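    As a small concrete illustration of the proximal machinery mentioned in the outline, the proximal operator of t*||.||_1 is the soft-thresholding map; the snippet below is a generic example, not material taken from the course.

        import numpy as np

        # Soft-thresholding: prox of t*||.||_1, a basic building block of proximal methods.
        def prox_l1(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        print(prox_l1(np.array([-2.0, 0.3, 1.5]), t=0.5))   # -> [-1.5  0.   1. ]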

    Numerical Solution of Optimal Control Problems with Explicit and Implicit Switches

    This dissertation deals with the efficient numerical solution of switched optimal control problems whose dynamics may simultaneously be affected by both explicit and implicit switches. A framework is developed for this purpose, in which both problem classes are uniformly converted into a mixed-integer optimal control problem with combinatorial constraints. Recent research results relate this problem class to a continuous optimal control problem with vanishing constraints, which in turn represents a considerable subclass of optimal control problems with equilibrium constraints. In this thesis, this connection forms the foundation for a numerical treatment. We employ numerical algorithms that are based on a direct collocation approach and require, in particular, a highly accurate determination of the switching structure of the original problem. Because the switching structure is in general a priori unknown, our approach aims to identify it successively. During this process, a sequence of nonlinear programs, which are derived by applying discretization schemes to optimal control problems, is solved approximately. After each iteration, the discretization grid is updated according to the currently estimated switching structure. Besides a precise determination of the switching structure, it is of central importance to estimate the global error that occurs when optimal control problems are solved numerically. Again, we focus on certain direct collocation discretization schemes and analyze the error contributions of individual discretization intervals. For this purpose, we exploit a relationship between discrete adjoints and the Lagrange multipliers associated with the nonlinear programs that arise from the collocation transcription process. This relationship can be derived with the help of a functional analytic framework and by interrelating collocation methods and Petrov-Galerkin finite element methods. In analogy to the dual-weighted residual methodology for Galerkin methods, which is well known in the partial differential equation community, we then derive goal-oriented global error estimators. Based on these error estimators, we present mesh refinement strategies that allow for an equilibration and an efficient reduction of the global error. In doing so, we note that the grid adaption processes with respect to switching structure detection and global error reduction are compatible with each other. This allows us to distill an iterative solution framework. Usually, individual state and control components have the same polynomial degree if they originate from a collocation discretization scheme. Due to the special role that some control components have in the proposed solution framework, it is desirable to allow varying polynomial degrees. This results in implementation problems, which can be solved by means of clever structure exploitation techniques and a suitable permutation of variables and equations. The resulting algorithm was developed in parallel to this work and implemented in a software package. The presented methods are implemented and evaluated on the basis of several benchmark problems, and their applicability and efficiency are demonstrated. With regard to a future embedding of the described methods in an online optimal control context and the associated real-time requirements, an extension of the well-known multi-level iteration schemes is proposed. This approach is based on the trapezoidal rule and, compared to a full evaluation of the involved Jacobians, significantly reduces the computational cost in the case of sparse data matrices.
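    To make the direct collocation and trapezoidal-rule ingredients concrete, the sketch below transcribes a toy optimal control problem (minimize the integral of u^2 subject to x' = u, x(0) = 0, x(1) = 1) into a nonlinear program with trapezoidal defect constraints. It is a deliberately simple illustration with no switches, vanishing constraints or error estimation, and all names are assumptions rather than the dissertation's implementation.

        import numpy as np
        from scipy.optimize import minimize

        N = 20
        h = 1.0 / N

        def unpack(z):
            return z[:N + 1], z[N + 1:]          # states x_0..x_N, controls u_0..u_N

        def objective(z):
            _, u = unpack(z)
            return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))      # trapezoidal quadrature

        def defects(z):
            x, u = unpack(z)
            return x[1:] - x[:-1] - 0.5 * h * (u[1:] + u[:-1])       # trapezoidal collocation

        cons = (
            {"type": "eq", "fun": defects},
            {"type": "eq", "fun": lambda z: unpack(z)[0][0]},        # x(0) = 0
            {"type": "eq", "fun": lambda z: unpack(z)[0][-1] - 1.0}, # x(1) = 1
        )
        res = minimize(objective, np.zeros(2 * (N + 1)), constraints=cons, method="SLSQP")
        x_opt, u_opt = unpack(res.x)
        print(u_opt[:5])                          # the analytic optimal control is u = 1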