41 research outputs found

    Non-Smooth Optimization by Abs-Linearization in Reflexive Function Spaces

    Non-smooth optimization problems in reflexive Banach spaces arise in many applications. Frequently, all non-differentiabilities involved are assumed to be given by Lipschitz-continuous operators such as abs, min, and max. Such problems include, for example, optimal control problems with possibly non-smooth objective functionals, constrained by partial differential equations (PDEs) that may themselves contain non-smooth terms. Solving them efficiently and robustly requires numerical simulation combined with specific optimization algorithms. Locally Lipschitz-continuous non-smooth non-linearities, described by appropriate Nemytzkii operators arising directly in the problem formulation, play an essential role in the study of the underlying optimization problems. In this dissertation, two specific solution methods and algorithms for such non-smooth optimization problems in reflexive Banach spaces are proposed and discussed. The first, SALMIN, minimizes non-smooth operators in reflexive Banach spaces by means of successive quadratic overestimation. The second, SCALi, is a novel structure-exploiting optimization approach for optimization problems with non-smooth elliptic PDE constraints. The central feature of both methods is the appropriate handling of non-differentiabilities. Special focus lies on the structure of the problem stemming from the non-smoothness, and on how it can be effectively exploited to solve the optimization problem in an appropriate and efficient way.
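    Since min(a, b) = (a + b - |a - b|)/2 and max(a, b) = (a + b + |a - b|)/2, handling the abs operator covers all three non-smooth building blocks named above. The thesis works in reflexive Banach spaces; as a hedged illustration only, the following one-dimensional Python toy shows the abs-linearization idea such structure-exploiting methods build on: smooth parts are linearized at a reference point while the abs switch is kept exact. The objective f and all names are our own choices, not code or notation from the thesis.

        import numpy as np

        def f(x):
            # toy non-smooth objective: smooth part sin(x) plus one abs switch
            return np.sin(x) + np.abs(x**3 - 1.0)

        def abs_linear_model(x0, dx):
            # Abs-linearization keeps abs() exact and linearizes everything smooth
            # at the reference point x0: sin(x) -> sin(x0) + cos(x0)*dx, and the
            # argument of abs, z(x) = x**3 - 1, -> z(x0) + 3*x0**2*dx.
            z0 = x0**3 - 1.0
            dz = 3.0 * x0**2 * dx
            return np.sin(x0) + np.cos(x0) * dx + np.abs(z0 + dz)

        x0 = 0.9
        for dx in (0.0, 0.05, 0.2):
            print(f"dx={dx:5.2f}  f={f(x0 + dx):.5f}  model={abs_linear_model(x0, dx):.5f}")

    The model reproduces f exactly at dx = 0 and, unlike a plain first-order Taylor model, remains valid across the kink at x = 1; keeping the switching structure explicit is what makes it exploitable by the optimizer.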

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Acta Cybernetica: Volume 25, Number 1.

    Global optimization at work

    In many research situations where mathematical models are used, researchers try to find parameter values for which a given performance criterion is optimal. If the parameters can be varied continuously, this in general defines a so-called Nonlinear Programming problem. Methods for Nonlinear Programming usually yield local optima: solutions that are the best with respect to values in their neighbourhood, but not necessarily the best over the whole admissible, feasible set of parameter values (the two notions are stated compactly below). For mathematicians this raises the research question: how to find the best, global optimum in situations where several local optima exist? This is the field of Global Optimization (GLOP). Literature on the field, including books and a dedicated journal, has appeared over the last decades. The main focus has been on the mathematical side: given assumptions on the structure of the problems to be solved, specific global optimization methods and their properties are derived. Cooperation between mathematicians and researchers who encountered global optimization problems in practice (in this book called 'the modeller' or 'the potential user') has led to the application of GLOP algorithms to practical optimization problems, some of which can be found in this book.

    This book starts from the question: given a potential user with an arbitrary global optimization problem, what route can be taken through the GLOP forest to find solutions to the problem? From this first question we proceed by raising new ones. In Chapter 1 we outline the target group of users we have in mind: agricultural and environmental engineers, designers, and OR workers in agricultural science. These groups are neither clearly defined nor mutually exclusive, but they have in common that mathematical modelling is used and that there is knowledge of linear programming and possibly of combinatorial optimization.

    In general, when modellers are confronted with optimization aspects, the first approach is to develop heuristics or to look for standard nonlinear programming codes to generate solutions to the optimization problem. During the search for solutions, multiple local optima may appear. From there we distinguish two major tracks the potential user can take to solve the problem: the deterministic track, discussed in Chapters 2, 3 and 4, and the stochastic track, discussed in Chapters 5 and 6. The two approaches pursue different goals. The deterministic track aims to approximate (find) the global optimum with certainty in a finite number of steps. The stochastic track contains stochastic elements and aims to approach the optimum in a probabilistic sense as effort grows to infinity. Both tracks are investigated in this book from the viewpoint of a potential user, corresponding to the way of thinking in Popperian science. The final results are new challenging problems and questions for further research.
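    Before moving on, the two notions of optimality just mentioned can be stated compactly. This is the standard definition, written for a feasible set X and objective f in notation of our own choosing:

        \[
        x^\ast \text{ is a global optimum:} \quad f(x^\ast) \le f(x) \;\; \forall x \in X; \qquad
        x^\ast \text{ is a local optimum:} \quad \exists\, \varepsilon > 0 : \; f(x^\ast) \le f(x) \;\; \forall x \in X \text{ with } \|x - x^\ast\| < \varepsilon.
        \]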
    A side question along the way is: how can the user influence the search process, given knowledge of the underlying problem and the information that becomes available during the search?

    The deterministic approach. When one starts down the deterministic track for a given problem, one runs into the requirement that determines the major difference in applicability of the two approaches: deterministic methods require explicit mathematical expressions of the functions to be optimized. In many practical situations, also discussed in this book, such expressions are not available and deterministic methods cannot be applied. The operations in deterministic methods are based on concepts such as Branch-and-Bound and cutting, which require bounding of functions and parameters based on so-called mathematical structures. In Chapter 2 we describe these structures and distinguish between those which can be derived directly from the expressions, such as quadratic, bilinear and fractional functions, and structures which require analysis of the expressions, such as concave and Lipschitz-continuous functions. Examples are given of optimization problems revealing their structure. Moreover, we show that symmetry in the model formulation may cause models to have more than one extremum.

    In Chapter 3 the relationship between GLOP and Integer Programming (IP) is highlighted, for several reasons. Practical GLOP problems can sometimes be approximated by IP variants and solved by standard Mixed Integer Linear Programming (MILP) techniques. The algorithms of GLOP and IP can be classified in similar ways. The transformability of GLOP problems into IP problems, and vice versa, shows that difficult problems in one class do not become easier to solve in the other. Finally, the analysis of problems that is common in Global Optimization can be used to better understand the complexity of some IP problems.

    In Chapter 4 we analyze the use of deterministic methods, demonstrating the application of the Branch-and-Bound concept (a minimal sketch follows below). From the point of view of the potential user: analysis of the expressions is required to find useful mathematical structures (Chapter 2), although interval arithmetic techniques can also be applied directly to the expressions; the elegance of the techniques lies in the guarantee that the optimum, once discovered and verified, is certainly the global one; and the methods are hard to implement, so thorough use should be made of special data structures to keep the necessary information in memory. Two cases are elaborated. The quadratic product design problem illustrates how the level of Decision Support Systems can be reached for low-dimensional problems, i.e. where the number of variables (components or ingredients) is less than 10. The other case, the nutrient problem, shows how analysis of the problem yields many useful properties that help to cut away large areas of the feasible space where the optimum cannot be situated. However, it also demonstrates the so-called Curse of Dimensionality: in a realistic setting the problem has so many variables that it is impossible to traverse the complete Branch-and-Bound tree.
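    The Lipschitz structure mentioned in Chapter 2 admits a particularly compact Branch-and-Bound scheme, and it illustrates the bounding-plus-pruning mechanism discussed above. The sketch below is our own Python toy, not code from the book; the objective and the constant L are assumptions chosen for the demo.

        import heapq, math

        def lipschitz_bnb(f, a, b, L, tol=1e-6):
            # Branch-and-Bound for min f on [a, b], assuming f is L-Lipschitz.
            # On [lo, hi] with midpoint m, f cannot dip below f(m) - L*(hi - lo)/2;
            # this lower bound drives both the search order and the pruning.
            m = (a + b) / 2
            best_x, best_f = m, f(m)
            heap = [(best_f - L * (b - a) / 2, a, b)]
            while heap:
                bound, lo, hi = heapq.heappop(heap)
                if bound > best_f - tol:          # nothing left can beat the incumbent
                    break
                mid = (lo + hi) / 2
                for u, v in ((lo, mid), (mid, hi)):
                    c = (u + v) / 2
                    fc = f(c)
                    if fc < best_f:
                        best_x, best_f = c, fc    # new incumbent
                    lb = fc - L * (v - u) / 2
                    if lb < best_f - tol:         # keep only boxes that may hold better points
                        heapq.heappush(heap, (lb, u, v))
            return best_x, best_f

        # toy run: f(x) = sin(3x) + 0.5x on [-2, 2]; |f'(x)| <= 3.5, so L = 3.5 is valid
        print(lipschitz_bnb(lambda x: math.sin(3 * x) + 0.5 * x, -2.0, 2.0, L=3.5))

    The guarantee is exactly the one stressed above: when the loop ends, best_f is within tol of the global minimum. The price is also visible: the number of boxes can grow quickly with the dimension, which is the Curse of Dimensionality just noted.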
    It is therefore good to keep the use of deterministic methods in perspective: no global optimization method can guarantee to find and verify the global optimum for every practical situation within a human lifetime.

    The stochastic approach. In practice, the stochastic approach is followed for many optimization problems by combining the generation of random points with standard nonlinear optimization algorithms (a minimal sketch of this scheme appears at the end of this abstract). From the point of view of the potential user: the methods require no mathematical structure of the problem and are therefore more generally applicable; they are relatively easy to implement; the user is never completely certain that the global optimum has been reached; and the optimum is approximated in a probabilistic sense as effort increases to infinity. In Chapter 5 much attention is paid to what happens when a user wants to spend a limited (not infinite) amount of time on the search for the optimum, preferably less than a human lifetime: what to do when the time for solving the problem is finite? We first examine the information that becomes available during the search and the instruments with which the user can influence the search. It appears that, besides the classical instruments also available in traditional nonlinear programming, the main instrument is the trade-off between global (random) search and local search (looking for a local optimum). This leads to a new question: is there a best way to rule the choice between global and local search, given the information that becomes available? Mathematical analysis with extreme cases leads to the comfortable conclusion that a best method of choosing between global and local search, and thus a best global optimization method, does not exist. This holds for cases where no information on the function to be optimized is available beyond what becomes available during the search, called in the literature the black-box case. The conclusion also shows that mathematical analysis with extreme cases is a powerful tool for 'falsifying', in the Popperian sense, so-called magic algorithms: algorithms claimed in scientific journals to be very promising because they perform well on some test cases. This leads to the conclusion that magic algorithms which will solve all of your problems do not exist.

    Several side questions derived from the main problem are investigated in this book. In Chapter 6 we place the optimization problem in the context of parameter estimation. One practical question is raised by the phenomenon that every local search leads to a new local optimum. We know from parameter estimation that this is a symptom of so-called non-identifiable systems, where the minimum is attained on a lower-dimensional surface or curve. Some (non-magic) heuristics to overcome this problem are discussed. Two further side questions derive from the general remark: "I am not interested in the best (GLOP) solution, but in good points". The first is that of Robust Solutions, introduced in Chapter 4; the other is called Uniform Covering, concerning the generation of points that are nearly as good as the optimum, discussed in Chapter 6. Robust solutions are discussed in the context of product design, where robustness is defined as a measure of the error one can make from the solution such that the solution (product) is still acceptable. Looking for the most robust product means looking for the point that is as far away as possible from the boundaries of the feasible (acceptable) area. For the solution procedures we consider how the problem appears in practice, where boundaries are given by linear and quadratic surfaces representing properties of the product. For linear boundaries, finding the most robust solution is an LP problem and thus rather easy; for quadratic properties, the development of specific algorithms is required. The question of Uniform Covering concerns the desire to have a set of "suboptimal" points, i.e. points with a low function value (below a given upper level); such points form a so-called level set. To generate "low" points one could run a local search many times. However, we do not want the points concentrated in one compartment or sub-area of the level set; we want them spread equally, uniformly, over the region. This is a very difficult problem, for which several approaches are tested and analyzed in Chapter 6. The analysis teaches us that it is unlikely that stochastic methods will be proposed which solve problems in an expected calculation time that is polynomial in the number of variables of the problem.

    Final result. Whether an arbitrary problem of a user can be solved by GLOP requires analysis; many optimization problems can be solved satisfactorily. Besides the selection of algorithms, the user has various instruments to steer the process: for stochastic methods, mainly the trade-off between local and global search; for deterministic methods, setting bounds and influencing the selection rule in Branch-and-Bound. We hope this book offers a tool and a guide to solution procedures, as well as an introduction to the further literature on Global Optimization.
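    To make the stochastic track concrete, here is the minimal multistart scheme referred to above: random starting points combined with a standard local NLP solver. This is a generic sketch under our own naming, not code from the book; scipy's general-purpose minimize stands in for whatever local solver the user has at hand.

        import numpy as np
        from scipy.optimize import minimize

        def multistart(f, bounds, n_starts=50, seed=0):
            # Stochastic track in its simplest form: draw random starting points,
            # run a local solver from each, keep the best local optimum found.
            # No finite-time guarantee of global optimality, only convergence in
            # a probabilistic sense as n_starts grows.
            rng = np.random.default_rng(seed)
            lo = np.array([lb for lb, ub in bounds])
            hi = np.array([ub for lb, ub in bounds])
            best = None
            for _ in range(n_starts):
                x0 = rng.uniform(lo, hi)                # global (random) phase
                res = minimize(f, x0, bounds=bounds)    # local search phase
                if best is None or res.fun < best.fun:
                    best = res
            return best

        # toy run: a multimodal objective on [-5, 5]^2
        obj = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)
        print(multistart(obj, [(-5.0, 5.0), (-5.0, 5.0)]).x)

    The n_starts parameter is precisely the global-versus-local trade-off discussed in Chapter 5: more starts raise the chance of hitting the basin of the global optimum, at the cost of spending budget on additional local searches.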

    Inégalités de Kurdyka-Lojasiewicz et convexité : algorithmes et applications

    This thesis focuses on first-order descent methods for minimization problems and comprises three parts. In the first part, we give an overview of local and global error bounds, and we lay the first bricks of a unified theory by showing the central role of the Lojasiewicz gradient inequality and relating it to error bounds. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide new tools to compute the complexity of first-order descent methods in convex minimization. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence. This result inaugurates a simple methodology: derive an error bound, compute the KL desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Lastly, we extend the extragradient method to minimize the sum of two functions, the first being smooth and the second convex. Under the Kurdyka-Lojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of this problem and has finite length. When both functions are convex, we obtain the O(1/k) convergence rate classical for the gradient method. Furthermore, we show that our complexity result from the second part applies to this method. Considering the extragradient method is also the occasion to describe exact line search for proximal decomposition methods. We provide details for the implementation of this scheme for the ℓ1-regularized least squares problem and give numerical results which suggest that combining non-accelerated methods with exact line search can be a competitive choice.
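    For reference, the KL property the thesis relies on is usually stated as follows (standard form, notation ours): around a critical point x̄ there exist η > 0, a neighbourhood U, and a concave desingularizing function φ with φ(0) = 0 and φ' > 0 such that

        \[
        \varphi'\bigl(f(x) - f(\bar{x})\bigr)\, \operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1
        \qquad \text{for all } x \in U \text{ with } f(\bar{x}) < f(x) < f(\bar{x}) + \eta.
        \]

    The extragradient scheme for the composite problem can be sketched in a few lines of Python. This is one common form of the method, applied to the ℓ1-regularized least squares problem the abstract mentions, with a fixed 1/L step instead of the thesis's exact line search; all names are our own.

        import numpy as np

        def soft_threshold(v, t):
            # proximal operator of t * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def extragradient_lasso(A, b, lam, n_iter=500):
            # Extragradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1, with f the
            # smooth least-squares term and g = lam*||.||_1 convex: take a proximal
            # step at x to get a trial point y, then redo the step using the
            # gradient evaluated at y.
            step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz const. of grad f
            grad = lambda x: A.T @ (A @ x - b)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                y = soft_threshold(x - step * grad(x), step * lam)   # trial point
                x = soft_threshold(x - step * grad(y), step * lam)   # corrected update
            return x

        # toy run on a sparse recovery instance
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[:5] = 1.0
        b = A @ x_true + 0.01 * rng.standard_normal(40)
        print(extragradient_lasso(A, b, lam=0.1)[:8])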

    New Advancements in Pure and Applied Mathematics via Fractals and Fractional Calculus

    This reprint explores new developments in both pure and applied mathematics that arise from fractional behaviour. It covers a range of ongoing activities in fractional calculus, offering alternative viewpoints, workable solutions, new derivatives, and methods for solving real-world problems. It is impossible to deny that fractional behaviour exists in nature: any phenomenon with a pulse, rhythm, or pattern appears to be a fractal. The 17 papers published in this volume lend credence to that claim. A variety of topics illustrate the use of fractional calculus in a range of disciplines and offer sufficient coverage to pique every reader's interest.
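    As one concrete pointer to what "new derivatives" means in this context, the classical Caputo fractional derivative, a standard operator in this literature, reads as follows (stated for reference; the volume's papers introduce further variants):

        \[
        {}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau,
        \qquad n-1 < \alpha < n, \; n \in \mathbb{N}.
        \]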