A hybrid multiagent approach for global trajectory optimization
In this paper we consider a global optimization method for space trajectory design problems. The method, which aims at finding not only the global minimizer but a whole set of low-lying local minimizers (corresponding to a set of different design options), is based on a domain decomposition technique in which each subdomain is evaluated through a procedure based on the evolution of a population of agents. The method is applied to two space trajectory design problems and compared with existing deterministic and stochastic global optimization methods.
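A minimal sketch of the domain-decomposition idea (a toy 1-D version, not the paper's method: the subdomain layout, population size, and mutation rule here are illustrative assumptions):

```python
import random

def decompose_and_search(f, lo, hi, n_sub=4, pop=20, gens=60, seed=0):
    """Split [lo, hi] into n_sub subdomains and evolve a small population
    of agents inside each one, returning one low-lying minimizer per
    subdomain rather than a single global answer."""
    rng = random.Random(seed)
    width = (hi - lo) / n_sub
    minimizers = []
    for k in range(n_sub):
        a, b = lo + k * width, lo + (k + 1) * width
        agents = [rng.uniform(a, b) for _ in range(pop)]
        for _ in range(gens):
            agents.sort(key=f)                     # best agents first
            half = pop // 2
            # replace the worst half with mutated copies of the best half,
            # clamped to the subdomain
            agents[half:] = [
                min(b, max(a, x + rng.gauss(0, 0.1 * width)))
                for x in agents[:half]
            ]
        minimizers.append(min(agents, key=f))
    return minimizers
```

Each subdomain contributes its own candidate, so distinct local minimizers (different design options) are retained instead of being discarded in favor of the single best point.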
A stochastic smoothing method for nonsmooth global optimization
The problem of globally optimizing nonconvex nonsmooth functions under constraints is relevant to many engineering applications, in particular to the training of nonconvex nonsmooth neural networks. The paper presents the results of testing the stochastic smoothing method for global optimization of a multiextremal function over a convex feasible subset of Euclidean space. Preliminarily, the objective function is extended outside the admissible region so that its global minimum does not change and the function becomes coercive.
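As a hedged illustration of the smoothing idea (a toy 1-D Gaussian-smoothing sketch, not the authors' algorithm; the schedule of smoothing parameters and the random-search inner loop are assumptions):

```python
import random

def smoothed(f, x, sigma, n=200, rng=random):
    """Monte Carlo estimate of the Gaussian-smoothed surrogate
    f_sigma(x) = E[f(x + sigma * u)], u ~ N(0, 1)."""
    return sum(f(x + sigma * rng.gauss(0, 1)) for _ in range(n)) / n

def smooth_minimize(f, x0, sigmas=(2.0, 1.0, 0.5, 0.1), iters=200, seed=1):
    """Minimize the smoothed surrogate by simple random search while
    shrinking sigma, so the surrogate gradually approaches the original
    multiextremal objective."""
    rng = random.Random(seed)
    x = x0
    for sigma in sigmas:
        best = smoothed(f, x, sigma, rng=rng)
        for _ in range(iters):
            cand = x + sigma * rng.gauss(0, 1)
            val = smoothed(f, cand, sigma, rng=rng)
            if val < best:              # keep only improving moves
                x, best = cand, val
    return x
```

Large sigma washes out the spurious local minima; the shrinking schedule then lets the search settle into the basin of the global minimizer of the original function.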
Almost Global Stochastic Stability
We develop a method to prove almost global stability of stochastic differential equations, in the sense that almost every initial point (with respect to the Lebesgue measure) is asymptotically attracted to the origin with unit probability. The method can be viewed as a dual to Lyapunov's second method for stochastic differential equations and extends the deterministic result in [A. Rantzer, Syst. Contr. Lett., 42 (2001), pp. 161--168]. The result can also be used in certain cases to find stabilizing controllers for stochastic nonlinear systems using convex optimization. The main technical tool is the theory of stochastic flows of diffeomorphisms.
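For context, the deterministic result being extended (Rantzer's dual Lyapunov criterion, paraphrased here from the cited reference) says: for $\dot{x} = f(x)$ with $f \in C^1$ and $f(0) = 0$, if there is a density function $\rho > 0$ almost everywhere, with $\rho f / |x|$ integrable away from the origin, satisfying

```latex
\[
  \nabla \cdot (\rho f)(x) > 0 \quad \text{for almost every } x \neq 0,
\]
```

then almost every trajectory converges to the origin. The abstract above develops an analogous condition for stochastic differential equations.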
A Branch and Bound Method for Stochastic Global Optimization
A stochastic version of the branch and bound method is proposed for solving stochastic global optimization problems. The method, instead of deterministic bounds, uses stochastic upper and lower estimates of the optimal value of subproblems to guide the partitioning process. Almost sure convergence of the method is proved and random accuracy estimates are derived. Methods for constructing random bounds for stochastic global optimization problems are discussed. The theoretical considerations are illustrated with an example of a facility location problem.
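A toy 1-D sketch of branch and bound with stochastic bounds (the sampling-based estimate and the box-splitting rule are illustrative assumptions, not the constructions from the paper):

```python
import heapq
import random

def stochastic_bnb(f, lo, hi, n_samples=30, tol=1e-2, max_iter=200, seed=0):
    """Branch and bound guided by stochastic estimates: each box's 'bound'
    is the best of a few random samples (a random estimate rather than a
    proven deterministic bound), and the most promising box is split first."""
    rng = random.Random(seed)

    def sample(a, b):
        xs = [rng.uniform(a, b) for _ in range(n_samples)]
        x = min(xs, key=f)
        return f(x), x

    est, best_x = sample(lo, hi)
    best = est
    heap = [(est, lo, hi)]                 # boxes ordered by their estimate
    for _ in range(max_iter):
        bound, a, b = heapq.heappop(heap)
        if b - a < tol:                    # most promising box already tiny
            heapq.heappush(heap, (bound, a, b))
            break
        mid = (a + b) / 2
        for c, d in ((a, mid), (mid, b)):  # partition step
            e, x = sample(c, d)
            if e < best:
                best, best_x = e, x
            heapq.heappush(heap, (e, c, d))
    return best_x, best
```

Because the bounds are random, a box is never pruned outright here; it simply sinks in the priority queue, which mirrors the almost-sure (rather than deterministic) convergence guarantee described in the abstract.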
Continuous extremal optimization for Lennard-Jones Clusters
In this paper, we explore a general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems. The method, called continuous extremal optimization (CEO), can be considered an extension of extremal optimization (EO) and consists of two components, one responsible for global search and the other for local search. With only one adjustable parameter, the CEO's performance proves competitive with more elaborate stochastic optimization procedures. We demonstrate it on a well-known continuous optimization problem: the Lennard-Jones cluster optimization problem.
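A hedged toy of the extremal-optimization selection rule on a separable test function (not the authors' Lennard-Jones implementation; the test function, bounds, and power-law parameter tau are illustrative assumptions):

```python
import random

def ceo_sketch(f_comp, n, lo, hi, tau=1.5, iters=4000, seed=0):
    """Extremal-optimization loop: rank coordinates by their local cost
    (worst first), pick one with a power-law bias controlled by the single
    adjustable parameter tau, and resample it; track the best total cost."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    best = list(x)
    # rank k (1 = worst component) is chosen with weight k**(-tau)
    weights = [k ** -tau for k in range(1, n + 1)]
    for _ in range(iters):
        order = sorted(range(n), key=lambda i: f_comp(x[i]), reverse=True)
        i = rng.choices(order, weights=weights)[0]
        x[i] = rng.uniform(lo, hi)           # "global search" move
        if sum(map(f_comp, x)) < sum(map(f_comp, best)):
            best = list(x)
    return best
```

In the paper this extremal move is paired with a local-search component; here the resampling alone illustrates how persistently attacking the worst component drives the configuration toward low total cost.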
Global optimization method for design problems
Numerical techniques are increasingly used in structural design optimization. Typical structural optimization problems may have many locally minimum configurations, so a global method that can escape from locally minimum points remains essential. In this paper, a new hybrid simulated annealing algorithm for global optimization with constraints is proposed. We have developed a new algorithm, called the Adaptive Simulated Annealing Penalty Simultaneous Perturbation Stochastic Approximation algorithm (ASAPSPSA), that uses the Adaptive Simulated Annealing algorithm (ASA); ASA is a series of modifications of the traditional simulated annealing algorithm that yields the global minimum of an objective function. In addition, the stochastic method Simultaneous Perturbation Stochastic Approximation (SPSA) for solving unconstrained optimization problems is used to refine the solution. We also propose Penalty SPSA (PSPSA) for solving constrained optimization problems, in which the constraints are handled using exterior point penalty functions. The hybridization of ASA and PSPSA provides a powerful hybrid heuristic optimization method. The proposed method is applicable to any problem where the topology of the structure is not fixed; it is simple and capable of handling problems subject to any number of nonlinear constraints. Extensive tests of ASAPSPSA as a global optimization method are presented; its performance as a viable optimization method is demonstrated first on a series of benchmark functions with 2 to 50 dimensions and then in structural design, to demonstrate its applicability and efficiency.
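The penalty-SPSA component can be sketched as follows (a minimal illustration using the standard SPSA gain-sequence exponents; the constants a, c, r and the test problem are assumptions, and the ASA outer layer is omitted):

```python
import random

def pspsa(f, cons, x0, iters=500, a=0.1, c=0.1, r=10.0, seed=0):
    """Penalty SPSA sketch: apply SPSA to the exterior-penalty objective
    F(x) = f(x) + r * sum(max(0, g(x))^2) over constraints g(x) <= 0,
    using simultaneous +/-1 perturbations (two evaluations per step)."""
    rng = random.Random(seed)
    n = len(x0)

    def F(x):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in cons)

    x = list(x0)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        xp = [xi + ck * d for xi, d in zip(x, delta)]
        xm = [xi - ck * d for xi, d in zip(x, delta)]
        diff = F(xp) - F(xm)           # one difference estimates all partials
        x = [xi - ak * diff / (2 * ck * d) for xi, d in zip(x, delta)]
    return x
```

Only two objective evaluations are needed per iteration regardless of dimension, which is the attraction of SPSA; the exterior penalty lets iterates leave the feasible region and be pushed back as r grows relative to the objective.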