
    A Binarisation Heuristic for Non-Convex Quadratic Programming with Box Constraints

    Non-convex quadratic programming with box constraints is a fundamental problem in the global optimization literature, being one of the simplest NP-hard nonlinear programs. We present a new heuristic for this problem which yields solutions of excellent quality in reasonable computing times. The heuristic consists of four phases: binarisation, convexification, branch-and-bound, and local optimisation. Some very encouraging computational results are given.
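As a toy illustration of the binarisation idea only (not the paper's four-phase heuristic): when the quadratic objective is concave, its minimum over the box [0,1]^n is attained at a vertex, so restricting the variables to binary values is exact in that special case. The sketch below, with hypothetical helper names, simply enumerates those binary corners.

```python
from itertools import product

def qp_value(Q, c, x):
    """Evaluate f(x) = x^T Q x + c^T x for list-of-lists Q and list c."""
    n = len(x)
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    lin = sum(c[i] * x[i] for i in range(n))
    return quad + lin

def binarised_minimum(Q, c):
    """Enumerate the binary corners of [0,1]^n.

    For a concave objective the box-constrained minimum lies at a vertex,
    so this tiny 0-1 enumeration is exact in that case; it is only an
    illustration of binarisation, not the paper's heuristic."""
    n = len(c)
    return min((qp_value(Q, c, x), x) for x in product((0.0, 1.0), repeat=n))

# Concave toy example: f(x) = -x1^2 - x2^2 + x1 - x2 over [0,1]^2.
Q = [[-1.0, 0.0], [0.0, -1.0]]
c = [1.0, -1.0]
best_val, best_x = binarised_minimum(Q, c)   # minimum is -2.0 at (0, 1)
```

For general (indefinite) objectives the vertex enumeration is only a bound, which is why the paper follows binarisation with convexification and branch-and-bound.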

    Adaptive exact penalty DC algorithms for nonsmooth DC optimization problems with equality and inequality constraints

    We propose and study two DC (difference of convex functions) algorithms based on exact penalty functions for solving nonsmooth DC optimization problems with nonsmooth DC equality and inequality constraints. Both methods employ adaptive penalty updating strategies to improve their performance. The first method is based on exact penalty functions with an individual penalty parameter for each constraint (i.e. a multidimensional penalty parameter) and utilizes a primal-dual approach to penalty updates. The second method is based on the so-called steering exact penalty methodology and relies on solving auxiliary convex subproblems to determine a suitable value of the penalty parameter. We present a detailed convergence analysis of both methods and give several simple numerical examples highlighting the peculiarities of the two penalty updating strategies studied in this paper.
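For readers unfamiliar with DC algorithms, here is a minimal unconstrained DCA iteration on a toy function; the paper's actual contribution, the adaptive exact-penalty handling of constraints, is not sketched here, and the function names are illustrative.

```python
import math

def dca(x0, iters=60):
    """DCA for f(x) = x**4 - 2*x**2, split as g(x) = x**4 (convex)
    minus h(x) = 2*x**2 (convex).

    Each iteration linearises h at x_k (gradient 4*x_k) and minimises the
    convex surrogate g(x) - 4*x_k*x; the stationarity condition
    4*x**3 = 4*x_k gives the closed-form update x_{k+1} = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)
    return x

x_star = dca(0.5)   # converges to the minimiser x = 1 of f
```

Note that DCA only finds a critical point of the DC function: starting at x0 = -0.5 it converges to the other minimiser x = -1, and x0 = 0 is itself a critical point.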

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO'99, 1999), Hanmer Springs (Let's GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO'05, 2005), Mykonos (AGO'07, 2007), Skukuza (SAGO'08, 2008), Toulouse (TOGO'10, 2010), Natal (NAGO'12, 2012) and Málaga (MAGO'14, 2014), with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW'16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre, and the third to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW'16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members; we thank you all. This proceedings book is intended to present an overview of the topics addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Visualizing data as objects by DC (difference of convex) optimization

    In this paper we address the problem of visualizing, in a bounded region, a set of individuals as convex objects, where each individual has a dissimilarity measure and a statistical value attached. This problem, which extends standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the Difference of Convex Algorithm (DCA) in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets.
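As a sketch of the kind of layout problem involved, the following minimizes the raw stress of standard Multidimensional Scaling by plain gradient descent. This is a toy illustration of the baseline the paper extends, not the authors' DC decomposition or DCA scheme, and all names are hypothetical.

```python
import math
import random

def stress(X, D):
    """Raw MDS stress: sum over pairs of (||x_i - x_j|| - d_ij)^2."""
    n = len(X)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += (math.dist(X[i], X[j]) - D[i][j]) ** 2
    return s

def mds_gradient_step(X, D, lr=0.05):
    """One gradient-descent step on the stress function (2-D layout)."""
    n = len(X)
    G = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist = math.dist(X[i], X[j]) or 1e-12   # guard division by zero
            coef = 2.0 * (dist - D[i][j]) / dist
            for k in range(2):
                G[i][k] += coef * (X[i][k] - X[j][k])
    return [[X[i][k] - lr * G[i][k] for k in range(2)] for i in range(n)]

random.seed(0)
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]              # toy dissimilarities
X = [[random.random(), random.random()] for _ in range(3)]
s0 = stress(X, D)                                   # stress of the random start
for _ in range(200):
    X = mds_gradient_step(X, D)                     # stress(X, D) decreases
```

Because stress is nonconvex, gradient descent only reaches a local optimum, which is exactly why the paper's global DC formulation matters.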

    Global optimization at work

    In many research situations where mathematical models are used, researchers try to find parameter values such that a given performance criterion is at an optimum. If the parameters can be varied in a continuous way, this in general defines a so-called Nonlinear Programming Problem. Methods for Nonlinear Programming usually result in local optima. A local optimum is a solution (a set of parameter values) which is the best with respect to values in its neighbourhood, but not necessarily the best over the whole admissible, feasible set of all possible solutions.

    For mathematicians this raises the research question: how to find the best, global optimum in situations where several local optima exist? This is the field of Global Optimization (GLOP). A literature on the field, including books and a dedicated journal, has appeared over the last decades. The main focus has been on the mathematical side: given assumptions on the structure of the problems to be solved, specific global optimization methods and their properties are derived. Cooperation between mathematicians and researchers (called 'the modeller' or 'the potential user' in this book), who recognized global optimization problems in practical problems, has led to the application of GLOP algorithms to practical optimization problems; some of those can be found in this book.

    In this book we started with the question: given a potential user with an arbitrary global optimization problem, what route can be taken through the GLOP forest to find solutions of the problem? From this first question we proceed by raising new questions. In Chapter 1 we outline the target group of users we have in mind, i.e. agricultural and environmental engineers, designers and OR workers in agricultural science.
    These groups are not clearly defined, nor mutually exclusive, but have in common that mathematical modelling is used and there is knowledge of linear programming and possibly of combinatorial optimization.

    In general, when modellers are confronted with optimization aspects, the first approach is to develop heuristics or to look for standard nonlinear programming codes to generate solutions of the optimization problem. During the search for solutions, multiple local optima may appear. We distinguish two major tracks for the path the potential user can take from there to solve the problem. One track, called the deterministic track, is discussed in Chapters 2, 3 and 4. The other, the stochastic track, is discussed in Chapters 5 and 6. The two approaches aim at different goals. The deterministic track aims at approximating (finding) the global optimum with certainty in a finite number of steps. The stochastic track contains some stochastic elements and aims at approaching the optimum in a probabilistic sense as effort grows to infinity.

    Both tracks are investigated in this book from the viewpoint of a potential user, corresponding to the way of thinking in Popperian science. The final results are new challenging problems and questions for further research. A side question along the way is: how can the user influence the search process, given the knowledge of the underlying problem and the information that becomes available during the search?

    The deterministic approach. When one starts looking into the deterministic track for a given problem, one runs into the requirement which determines a major difference in applicability of the two approaches: deterministic methods require the availability of explicit mathematical expressions of the functions to be optimized. In many practical situations, also discussed in this book, these expressions are not available and deterministic methods cannot be applied.
    The operations in deterministic methods are based on concepts such as Branch-and-Bound and Cutting, which require bounding of functions and parameters based on so-called mathematical structures. In Chapter 2 we describe these structures and distinguish between those which can be derived directly from the expressions, such as quadratic, bilinear and fractional functions, and other structures which require analysis of the expressions, such as concave and Lipschitz continuous functions. Examples are given of optimization problems revealing their structure. Moreover, we show that symmetry in the model formulation may cause models to have more than one extreme.

    In Chapter 3 the relationship between GLOP and Integer Programming (IP) is highlighted, for several reasons. Sometimes practical GLOP problems can be approximated by IP variants and solved by standard Mixed Integer Linear Programming (MILP) techniques. The algorithms of GLOP and IP can be classified in similar ways. The transformability of GLOP problems to IP problems and vice versa shows that difficult problems in one class will not become easier to solve in the other. Analysis of problems, which is common in Global Optimization, can be used to better understand the complexity of some IP problems.

    In Chapter 4 we analyze the use of deterministic methods, demonstrating the application of the Branch-and-Bound concept. The following can be stated from the point of view of the potential user. Analysis of the expressions is required to find useful mathematical structures (Chapter 2); note that interval arithmetic techniques can also be applied directly to the expressions. The elegance of the techniques is the guarantee that we are certain about the global optimality of the optimum once it has been discovered and verified. The methods are hard to implement, and thorough use should be made of special data structures to store the necessary information in memory. Two cases are elaborated.
    The quadratic product design problem illustrates how the level of Decision Support Systems can be reached for low-dimensional problems, i.e. where the number of variables (components or ingredients) is less than 10. The other case, the nutrient problem, shows how analysis of the problem yields many useful properties which help to cut away large areas of the feasible space where the optimum cannot be situated. However, it also demonstrates the so-called Curse of Dimensionality: in a realistic situation the problem has so many variables that it is impossible to traverse the complete Branch-and-Bound tree. It is therefore good to see the use of deterministic methods in perspective: no global optimization method can guarantee to find and verify the global optimum for every practical situation within a human's lifetime.

    The stochastic approach. The stochastic approach is followed in practice for many optimization problems by combining the generation of random points with standard nonlinear optimization algorithms. The following can be said from the point of view of the potential user. The methods require no mathematical structure of the problem and are therefore more generally applicable. The methods are relatively easy to implement. The user is never completely certain that the global optimum has been reached; the optimum is only approximated in a probabilistic sense as effort increases to infinity.

    In Chapter 5 much attention is paid to the question of what happens when a user wants to spend a limited (not infinite) amount of time on the search for the optimum, preferably less than a human's lifetime: what to do when the time for solving the problem is finite? First we looked at the information which becomes available during the search and the instruments with which the user can influence the search.
    It appeared that, besides the classical instruments also available in traditional nonlinear programming, the main instrument is to influence the trade-off between global (random) search and local search (looking for a local optimum). This led to a new question: is there a best way to rule the choice between global and local search, given the information which becomes available? Mathematical analysis with extreme cases led to the comfortable conclusion that a best method of choosing between global and local search, and thus a best global optimization method, does not exist. This holds for cases where no information on the function to be optimized is available beyond what becomes available during the search, called in the literature the black-box case. The conclusion again shows that mathematical analysis with extreme cases is a powerful tool to 'falsify', in the Popperian way of thinking, so-called magic algorithms: algorithms said in scientific journals to be very promising because they perform well on some test cases. This leads to the conclusion that magic algorithms which are going to solve all of your problems do not exist.

    Several side questions derived from the main problem are investigated in this book. In Chapter 6 we place the optimization problem in the context of parameter estimation. One practical question is raised by the phenomenon that every local search leads to a new local optimum. We know from parameter estimation that this is a symptom of so-called non-identifiable systems: the minimum is attained on a lower-dimensional surface or curve.
    Some (non-magic) heuristics are discussed to overcome this problem.

    Two further side questions of users derive from the general remark: "I am not interested in the best (GLOP) solution, but in good points". The first question is that of Robust Solutions, introduced in Chapter 4; the other is called Uniform Covering, concerning the generation of points which are nearly as good as the optimum, discussed in Chapter 6.

    Robust solutions are discussed in the context of product design. Robustness is defined as a measure of the error one can make from the solution such that the solution (product) is still acceptable. Looking for the most robust product means looking for the point which is as far away as possible from the boundaries of the feasible (acceptable) area. For the solution procedures we looked at how the problem appears in practice, where the boundaries are given by linear and quadratic surfaces, i.e. properties of the product. For linear boundaries, finding the most robust solution is an LP problem and thus rather easy; for quadratic properties the development of specific algorithms is required.

    The question of Uniform Covering concerns the desire to have a set of "suboptimal" points, i.e. points with low function value (given an upper level of the function value); such points lie in a so-called level set. To generate "low" points one could run a local search many times. However, we do not want the points to be concentrated in one compartment or sub-area of the level set; we want them to be spread equally, uniformly, over the region. This is a very difficult problem, for which we test and analyze several approaches in Chapter 6. The analysis taught us that it is unlikely that stochastic methods will be proposed which solve such problems in an expected calculation time that is polynomial in the number of variables of the problem.

    Final result. Whether an arbitrary problem of a user can be solved by GLOP requires analysis.
    There are many optimization problems which can be solved satisfactorily. Besides the selection of algorithms, the user has various instruments to steer the process: for stochastic methods this mainly concerns the trade-off between local and global search; for deterministic methods it includes setting bounds and influencing the selection rule in Branch-and-Bound. We hope with this book to have given a tool and a guide to solution procedures. Moreover, it is an introduction to the further literature on the subject of Global Optimization.
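The trade-off between global (random) search and local refinement discussed above can be sketched as a basic multistart procedure. This is a toy sketch with an illustrative objective and hypothetical names, not an algorithm from the book: uniform sampling supplies the global component, a crude derivative-free descent supplies the local one.

```python
import random

def f(x):
    """Toy multimodal objective: two wells near x = -1 and x = +1,
    tilted by the 0.1*x term so the well near x = -1 is the global one."""
    return (x * x - 1.0) ** 2 + 0.1 * x

def local_search(x, step=0.25, tol=1e-8):
    """Crude derivative-free descent: move while a step improves f,
    otherwise halve the step until it is negligible."""
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5
    return x

def multistart(n_starts=20, seed=1):
    """Global random sampling over [-2, 2] plus local refinement."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x = local_search(rng.uniform(-2.0, 2.0))
        if best is None or f(x) < f(best):
            best = x
    return best

x_best = multistart()   # lands in the global well near x = -1
```

Increasing n_starts shifts effort toward the global component; a larger local-search budget shifts it toward refinement. As the book stresses, no fixed rule for this trade-off is best for every black-box problem.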

    Nonconvex and mixed integer multiobjective optimization with an application to decision uncertainty

    Multiobjective optimization problems commonly arise in different fields such as economics or engineering. In general, when dealing with several conflicting objective functions, there is an infinite number of optimal solutions which cannot usually be determined analytically. This thesis presents new branch-and-bound-based approaches for computing the globally optimal solutions of multiobjective optimization problems of various types. New algorithms are proposed for smooth multiobjective nonconvex optimization problems with convex constraints as well as for multiobjective mixed-integer convex optimization problems. Both algorithms guarantee a certain accuracy of the computed solutions and belong to the first deterministic algorithms within their class of optimization problems. Additionally, a new approach to compute a covering of the optimal solution set of multiobjective optimization problems with decision uncertainty is presented. The three new algorithms are tested numerically, and the results are evaluated in this thesis. The branch-and-bound-based algorithms work with box partitions and use selection rules, discarding tests and termination criteria. The discarding tests are the most important aspect, as they decide whether a box can be discarded because it cannot contain any optimal solution. We present discarding tests which combine techniques from global single-objective optimization with outer approximation techniques from multiobjective convex optimization and with the concept of local upper bounds from multiobjective combinatorial optimization. The new discarding tests aim to find appropriate lower bounds of subsets of the image set in order to compare them numerically with known upper bounds.
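A minimal sketch of a box-discarding test in the spirit described above, assuming two toy quadratic objectives and simple interval lower bounds; the helper names and the test itself are illustrative, not the thesis's actual criteria. A box is discarded when an already-found objective vector dominates the box's componentwise lower bounds, so no point inside the box can be efficient.

```python
def sq_lower(lo, hi):
    """Lower bound of t**2 over the interval [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0
    return min(lo * lo, hi * hi)

def box_lower_bounds(box):
    """Componentwise (ideal-point) lower bounds over the box
    [x1_lo, x1_hi] x [x2_lo, x2_hi] for the two toy objectives
    f1 = x1^2 + x2^2 and f2 = (x1 - 1)^2 + x2^2."""
    (x1_lo, x1_hi), (x2_lo, x2_hi) = box
    lb1 = sq_lower(x1_lo, x1_hi) + sq_lower(x2_lo, x2_hi)
    lb2 = sq_lower(x1_lo - 1.0, x1_hi - 1.0) + sq_lower(x2_lo, x2_hi)
    return (lb1, lb2)

def can_discard(box, upper_bounds):
    """Discarding test: the box cannot contain an efficient point if some
    known objective vector u satisfies u_i <= lb_i in every component."""
    lb = box_lower_bounds(box)
    return any(all(u[i] <= lb[i] for i in range(2)) for u in upper_bounds)

# u = f(0.5, 0.0) = (0.25, 0.25) is a feasible objective vector.
upper = [(0.25, 0.25)]
far_box = ((2.0, 3.0), (2.0, 3.0))    # lower bounds (8.0, 5.0): dominated, discard
near_box = ((0.0, 1.0), (0.0, 1.0))   # lower bounds (0.0, 0.0): must be kept
```

Boxes that survive the test are split and re-examined, which is the branch-and-bound loop the abstract describes; the thesis sharpens exactly this step with outer approximations and local upper bounds.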