
    An augmented Lagrangian fish swarm based method for global optimization

    This paper presents an augmented Lagrangian methodology with a stochastic population-based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of simple bound global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results on a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms. Funded by Fundação para a Ciência e a Tecnologia (FCT).
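    The outer loop of such a method can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the fish swarm subsolver is replaced by plain random search over the bounds, and the toy problem, penalty parameters, and update rules are assumptions chosen for brevity.

```python
import random

def augmented_lagrangian(f, g, bounds, mu=10.0, lam=0.0, outer=30):
    """Augmented-Lagrangian outer loop for min f(x) s.t. g(x) <= 0, x in bounds.
    The bound-constrained subproblem is "solved" here by plain random search,
    standing in for the paper's fish swarm algorithm."""
    def L(x):
        # augmented Lagrangian for a single inequality constraint
        return f(x) + (mu / 2.0) * max(0.0, g(x) + lam / mu) ** 2
    x_best = [(lo + hi) / 2.0 for lo, hi in bounds]
    for _ in range(outer):
        for _ in range(200):  # approximate subproblem solve within the bounds
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            if L(x) < L(x_best):
                x_best = x
        lam = max(0.0, lam + mu * g(x_best))  # multiplier update
        if g(x_best) > 1e-3:                  # tighten penalty if still infeasible
            mu *= 2.0
    return x_best

random.seed(0)
# toy problem: minimize x^2 + y^2 subject to x + y >= 1, i.e. 1 - x - y <= 0
sol = augmented_lagrangian(lambda v: v[0] ** 2 + v[1] ** 2,
                           lambda v: 1.0 - v[0] - v[1],
                           bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

    The solution approaches (0.5, 0.5) as the multiplier estimate converges; any convergent bound-constrained global solver could replace the inner random search.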

    Constrained Global Optimization by Smoothing

    This paper proposes a novel technique called "successive stochastic smoothing" that optimizes nonsmooth and discontinuous functions while considering various constraints. Our methodology enables local and global optimization, making it a powerful tool for many applications. First, a constrained problem is reduced to an unconstrained one by the exact nonsmooth penalty function method, which does not assume the existence of the objective function outside the feasible area and does not require the selection of the penalty coefficient. This reduction is exact in the case of minimization of a lower semicontinuous function under convex constraints. Then the resulting objective function is sequentially smoothed by the kernel method, starting from relatively strong smoothing and with a gradually vanishing degree of smoothing. Finite-difference stochastic gradient descent with trajectory averaging minimizes each smoothed function locally. Finite differences over stochastic directions sampled from the kernel estimate the stochastic gradients of the smoothed functions. We investigate the convergence rate of this stochastic finite-difference method on convex optimization problems. The "successive smoothing" algorithm uses the results of previous optimization runs to select the starting point for optimizing a consecutive, less smoothed function. Smoothing provides the "successive smoothing" method with some global properties. We illustrate the performance of the "successive stochastic smoothing" method on constrained test problems from the literature. Comment: 17 pages, 1 table
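    The core idea can be sketched in a few lines. This is a simplified illustration under stated assumptions: a Gaussian kernel, a fixed three-level smoothing schedule, no trajectory averaging, and an illustrative step-size rule; the paper's exact penalty reduction and convergence-rate machinery are omitted.

```python
import random

def smoothed_descent(f, x0, sigmas=(1.0, 0.3, 0.1), steps=300, lr=0.05):
    """Successive-smoothing sketch: minimize kernel-smoothed versions of f with
    a gradually vanishing degree of smoothing, warm-starting each stage from
    the previous result. Gradients of the smoothed function are estimated by
    finite differences over random Gaussian directions."""
    x = list(x0)
    for sigma in sigmas:  # relatively strong smoothing -> weak smoothing
        for _ in range(steps):
            u = [random.gauss(0.0, 1.0) for _ in x]   # direction sampled from the kernel
            fp = f([xi + sigma * ui for xi, ui in zip(x, u)])
            fm = f([xi - sigma * ui for xi, ui in zip(x, u)])
            d = (fp - fm) / (2.0 * sigma)             # directional derivative estimate
            # step size scaled with the smoothing level (an assumption for stability)
            x = [xi - lr * sigma * d * ui for xi, ui in zip(x, u)]
    return x

random.seed(1)
# nonsmooth test function |x| + |y|, minimized at the origin
sol = smoothed_descent(lambda v: abs(v[0]) + abs(v[1]), [2.0, -1.5])
```

    Each stage starts from the previous stage's result, which is what gives the method its global flavor: strong smoothing washes out local minima early on.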

    Stability and Performance Verification of Optimization-based Controllers

    This paper presents a method to verify closed-loop properties of optimization-based controllers for deterministic and stochastic constrained polynomial discrete-time dynamical systems. The closed-loop properties amenable to the proposed technique include global and local stability, performance with respect to a given cost function (both in a deterministic and stochastic setting) and the L2 gain. The method applies to a wide range of practical control problems: For instance, a dynamical controller (e.g., a PID) plus input saturation, model predictive control with state estimation, inexact model and soft constraints, or a general optimization-based controller where the underlying problem is solved with a fixed number of iterations of a first-order method are all amenable to the proposed approach. The approach is based on the observation that the control input generated by an optimization-based controller satisfies the associated Karush-Kuhn-Tucker (KKT) conditions which, provided all data is polynomial, are a system of polynomial equalities and inequalities. The closed-loop properties can then be analyzed using sum-of-squares (SOS) programming.

    Global optimization method for design problems

    Numerical techniques are increasingly used in structural design optimization. Typical structural optimization problems may have many locally minimum configurations; for that reason, the application of a global method, which may escape from locally minimum points, remains essential. In this paper, a new hybrid simulated annealing algorithm for global optimization with constraints is proposed. We have developed a new algorithm called the Adaptive Simulated Annealing Penalty Simultaneous Perturbation Stochastic Approximation (ASAPSPSA) algorithm, which uses Adaptive Simulated Annealing (ASA); ASA is a series of modifications to the traditional simulated annealing algorithm intended to find the global minimum of an objective function. In addition, the stochastic method Simultaneous Perturbation Stochastic Approximation (SPSA) for solving unconstrained optimization problems is used to refine the solution. We also propose Penalty SPSA (PSPSA) for solving constrained optimization problems; the constraints are handled using exterior-point penalty functions. The hybridization of ASA and PSPSA provides a powerful hybrid heuristic optimization method; the proposed method is applicable to any problem where the topology of the structure is not fixed, and it is simple and capable of handling problems subject to any number of nonlinear constraints. Extensive tests of ASAPSPSA as a global optimization method are presented; its performance as a viable optimization method is demonstrated by applying it first to a series of benchmark functions with 2 to 50 dimensions and then to structural design problems to demonstrate its applicability and efficiency.
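    The combination of annealing with an exterior-point penalty can be sketched as follows. This is an illustrative simplification, not ASAPSPSA itself: plain geometric-cooling annealing stands in for ASA, a Gaussian random move replaces SPSA's perturbation gradient, and the toy problem and parameters are assumptions.

```python
import random, math

def annealing_penalty(f, g, x0, T0=1.0, cooling=0.995, steps=2000, rho=100.0):
    """Simulated-annealing sketch with an exterior-point penalty for the
    constraint g(x) <= 0, in the spirit of the ASA + penalty-SPSA hybrid."""
    def merit(x):
        # exterior penalty: active only when the constraint is violated
        return f(x) + rho * max(0.0, g(x)) ** 2
    x, T = list(x0), T0
    best = list(x)
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.3) for xi in x]
        delta = merit(cand) - merit(x)
        # Metropolis acceptance: always take improvements, sometimes take uphill moves
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = cand
            if merit(x) < merit(best):
                best = list(x)
        T *= cooling  # geometric cooling schedule
    return best

random.seed(2)
# minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2; optimum near (0.5, 1.5)
sol = annealing_penalty(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                        lambda v: v[0] + v[1] - 2.0, [0.0, 0.0])
```

    Because the penalty is exterior, iterates may be slightly infeasible; increasing rho tightens the final constraint violation at the cost of a harder landscape.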

    Constrained Optimization in Random Simulation: Efficient Global Optimization and Karush-Kuhn-Tucker Conditions

    We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation; i.e., our method minimizes the goal output subject to one or more output constraints and input constraints. Our method is indeed novel, as it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO), which is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves non-constrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model, treating the simulation as a black box. Though there are many variants of EGO - for these non-constrained deterministic problems and for variants of these problems - none of these EGO variants uses the KKT conditions, even though these conditions are well-known (first-order necessary) optimality conditions in white-box problems. Because the simulation is random, we apply stochastic Kriging. Furthermore, we allow for variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. Moreover, our algorithm can take advantage of parallel computing. We numerically compare the performance of our algorithm and the popular proprietary OptQuest algorithm in two familiar examples (namely, a mathematical toy example and a practical inventory system with a service-level constraint); we conclude that our algorithm is more efficient (requires fewer expensive simulation runs) and effective (gives better estimates of the true global optimum).
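    At EGO's core is the expected-improvement acquisition function, which scores a candidate input from the Kriging model's predictive mean and standard deviation. The sketch below shows only this standard formula; the paper's contributions (KKT conditions, stochastic Kriging, the allocation rule) are not reproduced here.

```python
import math

def expected_improvement(mu, sd, f_min):
    """Expected improvement used by EGO (minimization): how much a candidate
    with Kriging predictive mean `mu` and standard deviation `sd` is expected
    to improve on the best observed value `f_min`."""
    if sd <= 0.0:
        return max(0.0, f_min - mu)       # no predictive uncertainty
    z = (f_min - mu) / sd
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_min - mu) * Phi + sd * phi

# A candidate predicted at the incumbent value still has positive EI
# proportional to its uncertainty, which is what drives exploration.
ei_uncertain = expected_improvement(1.0, 1.0, 1.0)
ei_bad = expected_improvement(2.0, 1.0, 1.0)
```

    EGO then samples the input maximizing this acquisition, updates the metamodel with the new simulation output, and repeats.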

    A reliable hybrid solver for nonconvex optimization

    Nonconvex and highly multimodal optimization problems represent a challenge for both stochastic and deterministic global optimization methods. The former (metaheuristics) usually achieve satisfactory solutions but cannot guarantee global optimality, while the latter (generally based on a spatial branch and bound scheme [1], an exhaustive and non-uniform partitioning method) may struggle to converge toward a global minimum within reasonable time. The partitioning process is exponential in the number of variables, which prevents the resolution of large instances. The performance of the solvers deteriorates dramatically when using reliable techniques, namely techniques that cope with rounding errors. In this paper, we present a fully reliable hybrid algorithm named Charibde (Cooperative Hybrid Algorithm using Reliable Interval-Based methods and Differential Evolution) [2] that reconciles stochastic and deterministic techniques. An Evolutionary Algorithm (EA) cooperates with interval-based techniques to accelerate convergence toward the global minimum and prove the optimality of the solution with user-defined precision. Charibde may be used to solve continuous, nonconvex, constrained or bound-constrained problems involving factorable functions.
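    The deterministic half of such a hybrid is a branch-and-bound over boxes, pruning any box whose rigorous lower bound exceeds the best known value. The bare-bones 1-D sketch below assumes a user-supplied bound function and ignores outward rounding, constraints, and the EA cooperation that Charibde adds; the example function is illustrative.

```python
def interval_bb(f_lb, f, lo, hi, tol=1e-4):
    """Bare-bones interval branch and bound: f_lb(a, b) must return a valid
    lower bound of f over [a, b]. Boxes whose bound exceeds the incumbent are
    discarded; the rest are bisected until narrower than tol."""
    best = min(f(lo), f(hi), f((lo + hi) / 2.0))   # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_lb(a, b) > best:           # prune: no global minimum in this box
            continue
        m = (a + b) / 2.0
        best = min(best, f(m))          # try to improve the incumbent
        if b - a > tol:
            boxes += [(a, m), (m, b)]   # bisect and keep searching
    return best

# f(x) = x^2 - x on [-1, 2]; a valid interval lower bound over [a, b] is
# (minimum of x^2 on the box) minus (maximum of x on the box)
f = lambda x: x * x - x
f_lb = lambda a, b: (0.0 if a <= 0.0 <= b else min(a * a, b * b)) - b
m = interval_bb(f_lb, f, -1.0, 2.0)
```

    In a reliable solver the bound comes from interval arithmetic with outward rounding, so the pruning (and hence the optimality certificate) holds despite floating-point error; a cooperating EA supplies good incumbents early, which makes pruning far more effective.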

    Design of optimal spacecraft-asteroid formations through a hybrid global optimization approach

    Purpose – The purpose of this paper is to present a methodology and experimental results on using global optimization algorithms to determine the optimal orbit, based on the mission requirements, for a set of spacecraft flying in formation with an asteroid. Design/methodology/approach – A behavioral-based hybrid global optimization approach is used to first characterize the solution space and find families of orbits that are a fixed distance away from the asteroid. The same optimization approach is then used to find the set of Pareto optimal solutions that minimize both the distance from the asteroid and the variation of the Sun-spacecraft-asteroid angle. Two sample missions to asteroids, representing constrained single and multi-objective problems, were selected to test the applicability of using an in-house hybrid stochastic-deterministic global optimization algorithm (Evolutionary Programming and Interval Computation (EPIC)) to find optimal orbits for a spacecraft flying in formation with an asteroid. The Near Earth Asteroid 99942 Apophis (2004 MN4) is used as the case study due to a fly-by of Earth in 2029 leading to two potential impacts in 2036 or 2037. Two black-box optimization problems that model the orbital dynamics of the spacecraft were developed. Findings – It was found for the two missions under test that the optimized orbits fall into various distinct families, which can be used to design multi-spacecraft missions with similar orbital characteristics. Research limitations/implications – The global optimization software, EPIC, was very effective at finding sets of orbits which met the required mission objectives and constraints for a formation of spacecraft in proximity of an asteroid. The hybridization of the stochastic search with the deterministic domain decomposition can greatly improve the intrinsic stochastic nature of the multi-agent search process without the excessive computational cost of a full grid search.
    The stability of the discovered families of formation orbits is subject to the gravity perturbation of the asteroid and to the solar pressure; their control, therefore, requires further investigation. Originality/value – This paper contributes to both the field of space mission design for close-proximity orbits and to the field of global optimization. In particular, it suggests a common formulation for single- and multi-objective problems and a robust and effective hybrid search method based on behaviorism. This approach provides an effective way to identify families of optimal formation orbits.

    Hybrid optimization coupling electromagnetism and descent search for engineering problems

    In this paper, we present a new stochastic hybrid technique for constrained global optimization. It is a combination of the electromagnetism-like (EM) mechanism with an approximate descent search, which is a derivative-free procedure with a high ability of producing a descent direction. Since the original EM algorithm is specifically designed for solving bound constrained problems, the approach herein adopted for handling the constraints of the problem relies on a simple heuristic based on feasibility and dominance rules. The hybrid EM method is tested on four well-known engineering design problems and the numerical results demonstrate the effectiveness of the proposed approach.
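    The attraction-repulsion move at the heart of the EM mechanism can be sketched as follows. This is a simplified variant, not the paper's method: the charge formula, step rule, and fixed best point are assumptions, and the descent search, constraint handling, and bound clamping are omitted.

```python
import random, math

def em_step(points, f):
    """One attraction-repulsion move of an electromagnetism-like (EM) search:
    each point is attracted by better points and repelled by worse ones, with
    force decaying with squared distance. The current best point is held fixed,
    since in EM it is refined only by local search (omitted here)."""
    vals = [f(p) for p in points]
    f_best, f_worst = min(vals), max(vals)
    span = (f_worst - f_best) or 1.0
    charges = [math.exp(-(v - f_best) / span) for v in vals]  # better value => bigger charge
    new_pts = []
    for i, p in enumerate(points):
        if vals[i] == f_best:              # keep the incumbent best in place
            new_pts.append(list(p))
            continue
        force = [0.0] * len(p)
        for j, q in enumerate(points):
            if i == j:
                continue
            d2 = sum((qk - pk) ** 2 for pk, qk in zip(p, q)) or 1e-12
            sign = 1.0 if vals[j] < vals[i] else -1.0  # attract to better, repel from worse
            for k in range(len(p)):
                force[k] += sign * charges[j] * (q[k] - p[k]) / d2
        norm = math.sqrt(sum(c * c for c in force)) or 1.0
        step = random.random() * 0.1       # random step along the normalized force
        new_pts.append([pk + step * c / norm for pk, c in zip(p, force)])
    return new_pts

random.seed(3)
obj = lambda v: v[0] ** 2 + v[1] ** 2
pts = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(10)]
f0 = min(obj(p) for p in pts)
for _ in range(100):
    pts = em_step(pts, obj)
f1 = min(obj(p) for p in pts)
```

    Because the incumbent never moves, the best objective value in the population is non-increasing; the local search that both papers attach to EM is what actually refines it.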

    A modified electromagnetism-like algorithm based on a pattern search method

    The Electromagnetism-like (EM) algorithm, developed by Birbil and Fang [2], is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards optimality. A typical EM algorithm for solving continuous bound constrained optimization problems performs a local search in order to gather information about each point in the population. Here, we propose a new local search procedure based on the original pattern search method of Hooke and Jeeves, which is simple to implement and does not require any derivative information. The proposed method is applied to different test problems from the literature and compared with the original EM algorithm.
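    The Hooke and Jeeves method referenced above alternates exploratory coordinate moves with a pattern move along the direction of recent success. The sketch below shows the classic standalone method with illustrative defaults, not the paper's EM-embedded variant.

```python
def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Classic Hooke-Jeeves pattern search: exploratory moves along each
    coordinate, plus a pattern move in the direction of success; the step
    is halved whenever no coordinate move improves. Derivative-free."""
    def explore(base, s):
        x = list(base)
        for k in range(len(x)):
            for d in (s, -s):            # try +s then -s on coordinate k
                trial = list(x)
                trial[k] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x
    x = list(x0)
    while step > tol:
        y = explore(x, step)
        if f(y) < f(x):
            # pattern move: jump further along the successful direction
            z = [2 * yi - xi for xi, yi in zip(x, y)]
            w = explore(z, step)
            x = w if f(w) < f(y) else y
        else:
            step *= shrink               # no improvement: refine the mesh
    return x

# quadratic test function with minimum at (1, -2)
sol = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

    Inside an EM algorithm, a procedure like this would replace the random line search applied to the best point of the population.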