
    MathOptInterface: a data structure for mathematical optimization problems

    We introduce MathOptInterface, an abstract data structure for representing mathematical optimization problems based on combining pre-defined functions and sets. MathOptInterface is significantly more general than existing data structures in the literature, encompassing, for example, a spectrum of problem classes from integer programming with indicator constraints to bilinear semidefinite programming. We also outline an automated rewriting system between equivalent formulations of a constraint. MathOptInterface has been implemented in practice, forming the foundation of a recent rewrite of JuMP, an open-source algebraic modeling language in the Julia language. The regularity of the MathOptInterface representation leads naturally to a general file format for mathematical optimization we call MathOptFormat. In addition, the automated rewriting system provides modeling power to users while making it easy to connect new solvers to JuMP.
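    As an illustration of the function-in-set idea described above, the following is a minimal Python sketch of such a data structure. The class names (ScalarAffineFunction, VariableReference, LessThan, ZeroOne, Model) are hypothetical stand-ins chosen for this sketch; they are not the actual MathOptInterface API, which is implemented in Julia.

    from dataclasses import dataclass, field

    @dataclass
    class ScalarAffineFunction:          # f(x) = sum(coefficient_i * x_i) + constant
        terms: dict                      # variable name -> coefficient
        constant: float = 0.0

    @dataclass
    class VariableReference:             # a single decision variable
        name: str

    @dataclass
    class LessThan:                      # the set (-inf, upper]
        upper: float

    @dataclass
    class ZeroOne:                       # the set {0, 1}
        pass

    @dataclass
    class Model:                         # a problem is a collection of (function, set) pairs
        constraints: list = field(default_factory=list)

        def add_constraint(self, function, set_):
            self.constraints.append((function, set_))

    # "x + 2y <= 10" and "y is binary", both expressed uniformly as function-in-set pairs:
    model = Model()
    model.add_constraint(ScalarAffineFunction({"x": 1.0, "y": 2.0}), LessThan(10.0))
    model.add_constraint(VariableReference("y"), ZeroOne())
    print(model.constraints)

    Representing every constraint as a function paired with a set is what makes an automated rewriting system between equivalent formulations possible, since a rewrite only has to map one (function, set) pair to another.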

    Mixed integer programming on transputers

    Mixed Integer Programming (MIP) problems occur in many industries, and their practical solution can be challenging in terms of both time and effort. Although faster computer hardware has allowed the solution of more MIP problems in reasonable times, there will come a point when the hardware cannot be speeded up any more. One way of improving the solution times of MIP problems without further speeding up the hardware is to improve the effectiveness of the solution algorithm used. The advent of accessible parallel processing technology and techniques provides the opportunity to exploit any parallelism within MIP solving algorithms in order to accelerate the solution of MIP problems. Many of the MIP problem solving algorithms in the literature contain a degree of exploitable parallelism. Several algorithms were considered as candidates for parallelisation within the constraints imposed by the currently available parallel hardware and techniques. A parallel Branch and Bound algorithm was designed for, and implemented on, an array of transputers hosted by a PC. The parallel algorithm was designed to operate as a process farm, with a master passing work to various slave processors. A message-passing harness was developed to allow full control of the slaves and the work sent to them. The effects of using various node selection techniques were studied, and a default node selection strategy was decided upon for the parallel algorithm. The parallel algorithm was also designed to take full advantage of the structure of MIP problems formulated using global entities such as general integers and special ordered sets. The presence of parallel processors makes practicable the idea of performing more than two branches on an unsatisfied global entity. Experiments were carried out using multiway branching strategies, and a default branching strategy was decided upon for appropriate types of MIP problem.
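    The process-farm structure described above can be illustrated with a small, hypothetical Python sketch: a master keeps the node pool of a Branch and Bound search for a 0-1 knapsack problem and farms node evaluation out to worker threads. The knapsack data, the bounding rule and the use of threads are assumptions made for the sketch; in the thesis the slaves are transputer processors and the branching acts on global entities such as general integers and special ordered sets.

    from concurrent.futures import ThreadPoolExecutor

    VALUES   = [60, 100, 120, 75]
    WEIGHTS  = [10, 20, 30, 15]
    CAPACITY = 50

    def bound(level, value, weight):
        """Optimistic bound: complete the solution fractionally (LP relaxation)."""
        b, w = value, weight
        for v, wt in zip(VALUES[level:], WEIGHTS[level:]):
            if w + wt <= CAPACITY:
                b, w = b + v, w + wt
            else:
                b += v * (CAPACITY - w) / wt
                break
        return b

    def expand(node):
        """Slave task: evaluate one node and return its feasible children."""
        level, value, weight = node
        children = []
        if level < len(VALUES):
            for take in (1, 0):                          # binary branching on item `level`
                w = weight + WEIGHTS[level] * take
                if w <= CAPACITY:
                    children.append((level + 1, value + VALUES[level] * take, w))
        return value, children

    best = 0
    pool = [(0, 0, 0)]                                   # master's node pool: (level, value, weight)
    with ThreadPoolExecutor(max_workers=4) as farm:
        while pool:
            batch, pool = pool, []
            for value, children in farm.map(expand, batch):   # master farms out the batch
                best = max(best, value)
                # keep only children whose optimistic bound can still beat the incumbent
                pool.extend(c for c in children if bound(*c) > best)

    print("best knapsack value:", best)                  # 235 for this data (items 1, 2 and 4)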

    Experiments in reduction techniques for linear and integer programming

    This study consisted of evaluating the relative performance of a selection of the most promising size-reduction techniques. Experiments and comparisons were made among these techniques on a series of test problems to determine their relative efficiency, efficiency versus time, and so on. Three main new methods were developed by modifying and extending previous ones. These methods were also tested, and their results are compared with those of the earlier methods.
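    As a generic illustration of what a size-reduction technique does, the sketch below applies a textbook presolve step to a system of inequalities A x <= b: rows with a single nonzero coefficient are folded into variable bounds and dropped. This is a standard reduction chosen for illustration only; the data are made up, and the three new methods developed in the study are not reproduced here.

    import numpy as np

    # constraint system A @ x <= b
    A = np.array([[2.0, 0.0, 0.0],     # singleton row: 2*x0 <= 8  ->  x0 <= 4
                  [1.0, 1.0, 1.0],
                  [0.0, 0.0, 3.0]])    # singleton row: 3*x2 <= 6  ->  x2 <= 2
    b = np.array([8.0, 10.0, 6.0])
    upper = np.full(3, np.inf)         # variable upper bounds to be tightened

    keep = []
    for i, row in enumerate(A):
        nonzero = np.flatnonzero(row)
        if len(nonzero) == 1 and row[nonzero[0]] > 0:      # singleton row, positive coefficient
            j = nonzero[0]
            upper[j] = min(upper[j], b[i] / row[j])        # fold the row into a bound ...
        else:
            keep.append(i)                                 # ... and drop it from the matrix

    A_reduced, b_reduced = A[keep], b[keep]
    print("rows:", A.shape[0], "->", A_reduced.shape[0], "| tightened upper bounds:", upper)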

    Solving Two-Level Optimization Problems with Applications to Robust Design and Energy Markets

    This dissertation provides efficient techniques to solve two-level optimization problems. Three specific types of problems are considered. The first is robust optimization, which has direct applications to engineering design. Traditionally, robust optimization problems have been solved using an inner-outer structure, which can be computationally expensive. This dissertation provides a method to decompose and solve this two-level structure using a modified Benders decomposition. This gradient-based technique is applicable to robust optimization problems with quasiconvex constraints and provides approximate solutions to problems with nonlinear constraints. The second type of two-level problem considered comprises mathematical and equilibrium programs with equilibrium constraints. Their two-level structure is simplified using Schur's decomposition and reformulation schemes for absolute value functions. The resulting formulations are applicable to game theory problems in operations research and economics. The third type of two-level problem studied is the discretely-constrained mixed linear complementarity problem. These problems are first formulated as a two-level mathematical program with equilibrium constraints and then solved using the aforementioned technique for mathematical and equilibrium programs with equilibrium constraints. The techniques for all three problems help simplify the two-level structure into a single level, which yields numerical and application insights. The computational effort for solving these problems is greatly reduced using the techniques in this dissertation. Finally, a host of numerical examples are presented to verify the approaches. Diverse applications to economics, operations research, and engineering design motivate the relevance of the novel methods developed in this dissertation.
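    The inner-outer structure mentioned above can be made concrete with a small cutting-plane sketch for a robust linear program: an outer (master) problem optimizes against the scenarios generated so far, and an inner (adversary) problem searches for a violated realization of the uncertain data. The problem data and the box uncertainty set are invented for the sketch, and the loop is a generic scenario-generation scheme rather than the modified Benders decomposition developed in the dissertation.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([-1.0, -1.0])                               # outer objective: maximize x1 + x2
    a_lo, a_hi = np.array([1.0, 1.0]), np.array([2.0, 3.0])  # box uncertainty on the row a
    b = 10.0                                                 # robust constraint: a @ x <= b for all a

    cuts = [a_lo.copy()]                                     # start the master with the nominal row
    for _ in range(20):
        # Outer problem: optimize subject to every scenario generated so far.
        res = linprog(c, A_ub=np.array(cuts), b_ub=[b] * len(cuts), bounds=[(0, 10)] * 2)
        x = res.x
        # Inner problem: the worst-case row over a box is attained componentwise.
        a_worst = np.where(x >= 0, a_hi, a_lo)
        if a_worst @ x <= b + 1e-9:                          # no violated scenario: robust optimum
            break
        cuts.append(a_worst)                                 # add the violating scenario as a cut

    print("robust solution:", x, "objective value:", -(c @ x))

    In this toy instance the loop stops after adding a single cut; in general the inner problem is itself an optimization problem, which is where the two-level structure and the decomposition techniques studied in the dissertation come in.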

    On modelling planning under uncertainty in manufacturing

    We present a modelling framework for two-stage and multi-stage mixed 0-1 problems under uncertainty for strategic Supply Chain Management, tactical production planning, and operations assignment and scheduling. A scenario-tree-based scheme is used to represent the uncertainty. We present the Deterministic Equivalent Model of the stochastic mixed 0-1 programs with complete recourse that we study. The constraints are modelled by compact and splitting-variable representations via scenarios.
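    The splitting-variable representation can be illustrated with a toy two-stage example (the data, costs and variable names are invented for this sketch): each scenario receives its own copy of the first-stage 0-1 variable, and nonanticipativity rows tie the copies together in the Deterministic Equivalent Model. The sketch uses scipy.optimize.milp to solve the resulting mixed 0-1 program.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    demands, prob = [2.0, 4.0, 6.0], 1.0 / 3.0         # three equally likely scenarios
    S, nvar = len(demands), 3                          # per-scenario variables: x_s, y_s, u_s
    c = np.tile(prob * np.array([5.0, 1.0, 10.0]), S)  # E[ 5 x + 1 y + 10 unmet demand ]

    rows, lb, ub = [], [], []
    for s, d in enumerate(demands):
        x, y, u = 3 * s, 3 * s + 1, 3 * s + 2
        r = np.zeros(S * nvar); r[y], r[x] = 1.0, -10.0          # capacity: y_s <= 10 x_s
        rows.append(r); lb.append(-np.inf); ub.append(0.0)
        r = np.zeros(S * nvar); r[y], r[u] = 1.0, 1.0            # demand: y_s + u_s >= d_s
        rows.append(r); lb.append(d); ub.append(np.inf)
    for s in range(S - 1):                                       # nonanticipativity: x_s = x_{s+1}
        r = np.zeros(S * nvar); r[3 * s], r[3 * (s + 1)] = 1.0, -1.0
        rows.append(r); lb.append(0.0); ub.append(0.0)

    integrality = np.tile([1, 0, 0], S)                          # x_s is 0-1, y_s and u_s continuous
    bounds = Bounds(np.zeros(S * nvar), np.tile([1.0, np.inf, np.inf], S))
    res = milp(c, constraints=LinearConstraint(np.array(rows), lb, ub),
               integrality=integrality, bounds=bounds)
    print("expected cost:", res.fun)

    A compact representation would instead keep a single shared first-stage variable for all scenarios, which removes the nonanticipativity rows but hides the scenario-separable block structure that the splitting-variable form exposes.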

    On Vibration Analysis and Reduction for Damped Linear Systems


    Global optimization at work

    In many research situations where mathematical models are used, researchers try to find parameter values such that a given performance criterion is at an optimum. If the parameters can be varied in a continuous way, this in general defines a so-called Nonlinear Programming Problem. Methods for Nonlinear Programming usually result in local optima. A local optimum is a solution (a set of parameter values) which is the best with respect to values in its neighbourhood, not necessarily the best over the total admissible, feasible set of all possible parameter values (solutions). For mathematicians this raises the research question of how to find the best, global optimum in situations where several local optima exist: the field of Global Optimization (GLOP). Literature on the field, including books and a dedicated journal, has appeared during the last decades. The main focus has been on the mathematical side: given assumptions on the structure of the problems to be solved, specific global optimization methods and their properties are derived. Cooperation between mathematicians and researchers (in this book called 'the modeller' or 'the potential user') who recognized global optimization problems in practical problems has led to the application of GLOP algorithms to practical optimization problems; some of these can be found in this book.

In this book we started with the question: given a potential user with an arbitrary global optimization problem, what route can be taken in the GLOP forest to find solutions of the problem? From this first question we proceed by raising new questions. In Chapter 1 we outline the target group of users we have in mind, i.e. agricultural and environmental engineers, designers and OR workers in agricultural science. These groups are not clearly defined, nor mutually exclusive, but have in common that mathematical modelling is used and that there is knowledge of linear programming and possibly of combinatorial optimization.

In general, when modellers are confronted with optimization aspects, the first approach is to develop heuristics or to look for standard nonlinear programming codes to generate solutions of the optimization problem. During the search for solutions, multiple local optima may appear. We distinguished two major tracks for the path to be taken from there by the potential user to solve the problem. One track is called the deterministic track and is discussed in Chapters 2, 3 and 4. The other track is called the stochastic track and is discussed in Chapters 5 and 6. The two approaches are intended to reach different goals: the deterministic track aims at approximating (finding) the global optimum with certainty in a finite number of steps, whereas the stochastic track contains stochastic elements and aims at approaching the optimum in a probabilistic sense as effort grows to infinity. Both tracks are investigated in this book from the viewpoint of a potential user, corresponding to the way of thinking in Popperian science. The final results are new, challenging problems and questions for further research.
A side question along the way is how the user can influence the search process, given knowledge of the underlying problem and the information that becomes available during the search.

The deterministic approach

When one starts looking into the deterministic track for a given problem, one runs into the requirement which determines a major difference in applicability of the two approaches: deterministic methods require the availability of explicit mathematical expressions of the functions to be optimized. In many practical situations, also discussed in this book, these expressions are not available and deterministic methods cannot be applied. The operations in deterministic methods are based on concepts such as Branch-and-Bound and Cutting, which require bounding of functions and parameters based on so-called mathematical structures.

In Chapter 2 we describe these structures and distinguish between those which can be derived directly from the expressions, such as quadratic, bilinear and fractional functions, and other structures which require analysis of the expressions, such as concave and Lipschitz continuous functions. Examples are given of optimization problems revealing their structure. Moreover, we show that symmetry in the model formulation may cause models to have more than one extreme.

In Chapter 3 the relationship between GLOP and Integer Programming (IP) is highlighted, for several reasons:
- Sometimes practical GLOP problems can be approximated by IP variants and solved by standard Mixed Integer Linear Programming (MILP) techniques.
- The algorithms of GLOP and IP can be classified in a similar way.
- The transformability of GLOP problems to IP problems, and vice versa, shows that difficult problems in one class will not become easier to solve in the other.
- Analysis of problems, which is common in Global Optimization, can be used to better understand the complexity of some IP problems.

In Chapter 4 we analyze the use of deterministic methods, demonstrating the application of the Branch-and-Bound concept (a minimal sketch follows below). The following can be stated from the point of view of the potential user:
- Analysis of the expressions is required to find useful mathematical structures (Chapter 2); interval arithmetic techniques can also be applied directly to the expressions.
- The elegance of the techniques lies in the guarantee that the optimum, once discovered and verified, is certainly the global optimum.
- The methods are hard to implement; thorough use should be made of special data structures to store the necessary information in memory.

Two cases are elaborated. The quadratic product design problem illustrates how the level of Decision Support Systems can be reached for low-dimensional problems, i.e. when the number of variables (components or ingredients) is less than 10. The other case, the nutrient problem, shows how analysis of the problem yields many useful properties which help to cut away large areas of the feasible space where the optimum cannot be situated. However, it also demonstrates the so-called Curse of Dimensionality: in a realistic situation the problem has so many variables that it is impossible to traverse the complete Branch-and-Bound tree.
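To make the Branch-and-Bound concept concrete, the following is a minimal deterministic sketch in Python using a Lipschitz bound, one of the mathematical structures mentioned for Chapter 2. The test function, its Lipschitz constant and the tolerance are assumptions made for the sketch; it illustrates the idea only and is not the book's implementation.

    import heapq, math

    f = lambda x: math.sin(3 * x) + 0.5 * x          # objective on [-3, 3]
    L = 3.5                                          # |f'(x)| = |3 cos(3x) + 0.5| <= 3.5
    eps = 1e-4                                       # required accuracy

    def lower_bound(a, b):
        """A valid lower bound of f on [a, b] derived from the Lipschitz constant."""
        m = 0.5 * (a + b)
        return f(m) - L * (b - a) / 2, m

    a, b = -3.0, 3.0
    lb, m = lower_bound(a, b)
    best_x, best_f = m, f(m)                         # incumbent solution
    heap = [(lb, a, b)]                              # node pool ordered by lower bound

    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_f - eps:                       # no remaining node can improve: verified
            break
        m = 0.5 * (a + b)
        for lo, hi in ((a, m), (m, b)):              # branch: split the interval
            node_lb, mid = lower_bound(lo, hi)
            if f(mid) < best_f:                      # update the incumbent
                best_x, best_f = mid, f(mid)
            if node_lb < best_f - eps:               # bound: keep only promising nodes
                heapq.heappush(heap, (node_lb, lo, hi))

    print("global minimum within eps: f(%.4f) = %.4f" % (best_x, best_f))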
It is therefore good to keep the use of deterministic methods in perspective: no global optimization method can guarantee to find and verify the global optimum for every practical situation within a human's lifetime.

The stochastic approach

The stochastic approach is followed in practice for many optimization problems by combining the generation of random points with standard nonlinear optimization algorithms (a minimal sketch of such a multistart scheme is given at the end of this summary). The following can be said from the point of view of the potential user:
- The methods require no mathematical structure of the problem and are therefore more generally applicable.
- The methods are relatively easy to implement.
- The user is never completely certain that the global optimum has been reached.
- The optimum is approximated in a probabilistic sense as effort increases to infinity.

In Chapter 5 much attention is paid to what happens when a user wants to spend a limited (not infinite) amount of time on the search for the optimum, preferably less than a human's lifetime: what to do when the time for solving the problem is finite? First we looked at the information which becomes available during the search and the instruments with which the user can influence the search. It appeared that, besides classical instruments which are also available in traditional nonlinear programming, the main instrument is to influence the trade-off between global (random) search and local search (looking for a local optimum). This led to a new question: is there a best way to rule the choice between global and local search, given the information which becomes available? Mathematical analysis with extreme cases led to the comfortable conclusion that a best method of choosing between global and local search, and thus a best global optimization method, does not exist. This is valid for cases where no further information on the function to be optimized is available beyond what becomes available during the search, called in the literature the black-box case. The conclusion again shows that mathematical analysis with extreme cases is a powerful tool to demonstrate that so-called magic algorithms (algorithms which are said in scientific journals to be very promising because they perform well on some test cases) can be analyzed and 'falsified' in the way of Popperian thinking. This leads to the conclusion that magic algorithms which are going to solve all of your problems do not exist.

Several side questions derived from the main problem are investigated in this book. In Chapter 6 we place the optimization problem in the context of parameter estimation. One practical question is raised by the phenomenon that every local search leads to a new local optimum. We know from parameter estimation that this is a symptom of so-called non-identifiable systems, where the minimum is obtained on a lower-dimensional surface or curve. Some (non-magic) heuristics are discussed to overcome this problem.

There are two side questions of users derived from the general remark: "I am not interested in the best (GLOP) solution, but in good points". The first concerns Robust Solutions, introduced in Chapter 4; the other is called Uniform Covering and concerns the generation of points which are nearly as good as the optimum, discussed in Chapter 6. Robust solutions are discussed in the context of product design. Robustness is defined as a measure of the error one can make from the solution such that the solution (product) is still acceptable.
Looking for the most robust product means looking for the point which is as far away as possible from the boundaries of the feasible (acceptable) area. For the solution procedures, we looked at how the problem appears in practice, where boundaries are given by linear and quadratic surfaces (properties of the product):
- For linear boundaries, finding the most robust solution is an LP problem and thus rather easy (a small sketch is given at the end of this summary).
- For quadratic properties the development of specific algorithms is required.

The question of Uniform Covering concerns the desire to have a set of "suboptimal" points, i.e. points with low function value (given an upper level of the function value); such points lie in a so-called level set. To generate "low" points, one could run a local search many times. However, we do not want the points to be concentrated in one of the compartments or one sub-area of the level set; we want them to be spread equally, uniformly over the region. This is a very difficult problem, for which we test and analyze several approaches in Chapter 6. The analysis taught us that it is unlikely that stochastic methods will be proposed which solve such problems in an expected calculation time that is polynomial in the number of variables of the problem.

Final result

Whether an arbitrary problem of a user can be solved by GLOP requires analysis. There are many optimization problems which can be solved satisfactorily. Besides the selection of algorithms, the user has various instruments to steer the process. For stochastic methods it mainly concerns the trade-off between local and global search; for deterministic methods it includes setting bounds and influencing the selection rule in Branch-and-Bound. We hope with this book to have given a tool and guidance towards solution procedures. Moreover, it is an introduction to further literature on the subject of Global Optimization.
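As referred to above, the basic stochastic (multistart) scheme combines random global points with a standard local optimizer. The following minimal Python sketch illustrates it on an invented multimodal test function; the function, the number of starts and the search box are assumptions made for the sketch, and the book's methods and test problems are not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):                                        # a multimodal test function (several local optima)
        return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

    rng = np.random.default_rng(0)
    best_x, best_f = None, np.inf
    for _ in range(30):                              # global phase: random starting points
        x0 = rng.uniform(-3, 3, size=2)
        res = minimize(f, x0)                        # local phase: descend to a nearby local optimum
        if res.fun < best_f:                         # keep the best local optimum found so far
            best_x, best_f = res.x, res.fun

    # There is no certainty that best_x is the global optimum; the probability of
    # having found it only tends to one as the number of random starts grows.
    print("best local optimum found:", best_x, best_f)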
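For the robust-solution case with linear boundaries, the statement that finding the most robust solution is an LP problem can be illustrated as follows: the point farthest from all boundaries of a polyhedron a_i @ x <= b_i is its Chebyshev centre, obtained by maximizing a radius r subject to a_i @ x + ||a_i|| r <= b_i. The boundaries below are invented for the sketch.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[ 1.0,  1.0],      # x + y <= 4
                  [-1.0,  0.0],      # x >= 0
                  [ 0.0, -1.0]])     # y >= 0
    b = np.array([4.0, 0.0, 0.0])
    norms = np.linalg.norm(A, axis=1)

    # Decision vector (x, y, r): maximize r, i.e. minimize -r.
    c = np.array([0.0, 0.0, -1.0])
    A_ub = np.hstack([A, norms[:, None]])
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None), (None, None), (0, None)])
    x_robust, radius = res.x[:2], res.x[2]
    print("most robust point:", x_robust, "distance to the nearest boundary:", radius)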