Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
Constrained optimization of high-dimensional numerical problems plays an
important role in many scientific and industrial applications. Function
evaluations in many industrial applications are severely limited and no
analytical information about objective function and constraint functions is
available. For such expensive black-box optimization tasks, the constrained
optimization algorithm COBRA was proposed, making use of RBF surrogate modeling
for both the objective and the constraint functions. COBRA has shown remarkable
success in solving reliably complex benchmark problems in less than 500
function evaluations. Unfortunately, COBRA requires careful adjustment of
parameters in order to do so.
In this work we present a new self-adjusting algorithm, SACOBRA, which is
based on COBRA and capable of achieving high-quality results with very few
function evaluations and no parameter tuning. It is shown with the help of
performance profiles on a set of benchmark problems (G-problems, MOPTA08) that
SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter
setting. We analyze the importance of the several new elements in SACOBRA and
find that each of them contributes to the overall optimization performance. We
discuss the reasons behind this and thereby gain a better understanding of
high-quality RBF surrogate modeling.
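The RBF surrogate modeling at the heart of COBRA and SACOBRA can be illustrated with a minimal sketch: fit one radial-basis-function interpolant to sampled objective values and one to sampled constraint values, then query the cheap surrogates instead of the expensive black box. The toy functions `f` and `g`, the Gaussian kernel, and the sample sizes below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf_fit(X, y, width=1.0):
    """Solve A w = y where A_ij = exp(-(||x_i - x_j|| / width)^2)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(r / width) ** 2), y)

def rbf_predict(X, w, x_new, width=1.0):
    """Evaluate the surrogate s(x) = sum_j w_j * exp(-(||x - x_j|| / width)^2)."""
    r = np.linalg.norm(x_new[None, :] - X, axis=-1)
    return float(np.exp(-(r / width) ** 2) @ w)

# Toy expensive black box: objective f, one constraint g (feasible iff g(x) <= 0).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
g = lambda x: x[0] + x[1] - 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(30, 2))      # 30 sampled design points
wf = rbf_fit(X, np.array([f(x) for x in X]))  # surrogate of the objective
wg = rbf_fit(X, np.array([g(x) for x in X]))  # surrogate of the constraint

x = np.array([0.5, 0.2])
print(rbf_predict(X, wf, x), f(x))  # surrogate vs. true objective at a new point
```

A COBRA-style optimizer would then minimize the surrogate objective subject to the surrogate constraints, spend a true function evaluation only at the proposed point, and refit the surrogates with the new sample.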
Multiobjective optimization of electromagnetic structures based on self-organizing migration
This thesis describes a novel stochastic multi-objective optimization algorithm called MOSOMA (Multi-Objective Self-Organizing Migrating Algorithm). It is shown that MOSOMA is able to solve various types of multi-objective optimization problems (with any number of objectives, unconstrained or constrained, with a continuous or discrete decision space). The efficiency of MOSOMA is compared with other commonly used optimization techniques on a large suite of test problems. A new procedure for computing the spread metric for problems with more than two objectives, based on finding a minimum spanning tree, is proposed. Recommended values of the parameters controlling the run of MOSOMA are derived from a sensitivity analysis. The ability of MOSOMA to solve real-life problems from electromagnetics is shown in a few examples (Yagi-Uda antenna and dielectric filter design, adaptive beam forming in the time domain…).
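The abstract mentions a spread metric computed from a minimum spanning tree of the approximated Pareto front. The thesis's exact formula is not given here, so the sketch below uses one plausible formulation as an assumption: the coefficient of variation of the MST edge lengths (0 for a perfectly uniform front), with the MST built by Prim's algorithm over pairwise distances.

```python
import numpy as np

def mst_edge_lengths(points):
    """Prim's algorithm: return the edge lengths of the minimum spanning tree."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = d[0].copy()        # cheapest connection of each node to the tree
    edges = []
    for _ in range(n - 1):
        cand = np.where(visited, np.inf, best)
        j = int(np.argmin(cand))          # nearest node not yet in the tree
        edges.append(cand[j])
        visited[j] = True
        best = np.minimum(best, d[j])     # relax connections through j
    return np.array(edges)

def spread(front):
    """Hypothetical spread metric: coefficient of variation of MST edge
    lengths; 0 when the front points are perfectly evenly spaced."""
    e = mst_edge_lengths(np.asarray(front, dtype=float))
    return float(e.std() / e.mean())

# Four evenly spaced points on a 2-objective front: spread is 0.
print(spread([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]))
```

Unlike the classic two-objective spread formula, which relies on sorting the front along one objective, the MST generalizes to any number of objectives, which matches the motivation stated in the abstract.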
Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations
The large number of possible configurations of modern software-based systems,
combined with the large number of possible environmental situations of such
systems, prohibits enumerating all adaptation options at design time and
necessitates planning at run time to dynamically identify an appropriate
configuration for a situation. While numerous planning techniques exist, they
typically assume a detailed state-based model of the system and that the
situations that warrant adaptations are known. Both of these assumptions can be
violated in complex, real-world systems. As a result, adaptation planning must
rely on simple models that capture what can be changed (input parameters) and
observed in the system and environment (output and context parameters). We
therefore propose planning as optimization: the use of optimization strategies
to discover optimal system configurations at runtime for each distinct
situation that is also dynamically identified at runtime. We apply our approach
to CrowdNav, an open-source traffic routing system with the characteristics of
a real-world system. We identify situations via clustering and conduct an
empirical study that compares Bayesian optimization and two types of
evolutionary optimization (NSGA-II and novelty search) in CrowdNav.
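Identifying situations "via clustering" can be sketched with plain k-means over observed context parameters. The abstract does not name the clustering method or CrowdNav's actual context features, so both the algorithm choice and the (car density, trip length) features below are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; each cluster of context observations is one 'situation'."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each observation to its nearest center
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)   # recenter
    return centers, labels

# Hypothetical context observations: (car density, average trip length).
rng = np.random.default_rng(1)
low  = rng.normal([100.0, 2.0], [30.0, 0.5], size=(20, 2))  # light traffic
high = rng.normal([600.0, 6.0], [30.0, 0.5], size=(20, 2))  # heavy traffic
X = np.vstack([low, high])

centers, labels = kmeans(X, k=2)
print(centers)   # one center per discovered situation
```

Once situations are identified this way, the proposed approach runs an optimization (Bayesian or evolutionary) per situation to find that situation's best input-parameter configuration.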
Is One Hyperparameter Optimizer Enough?
Hyperparameter tuning is the black art of automatically finding a good
combination of control parameters for a data miner. Although such tuning is
widely applied in empirical Software Engineering, there has been little
discussion of which hyperparameter tuner is best for software analytics. To
address this gap in the literature, this paper applied a range of
hyperparameter optimizers (grid search, random search, differential evolution,
and Bayesian optimization) to the defect prediction problem. Surprisingly, no
hyperparameter optimizer was observed to be 'best' and, for one of the two
evaluation measures studied here (F-measure), hyperparameter optimization was
no better than using default configurations in 50% of the cases.
We conclude that hyperparameter optimization is more nuanced than previously
believed. While such optimization can certainly lead to large improvements in
the performance of classifiers used in software analytics, it remains to be
seen which specific optimizers should be applied to a new dataset.
Comment: 7 pages, 2 columns, accepted for SWAN1
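Two of the four tuners compared in the paper, grid search and random search, can be sketched in a few lines. Here `toy_score` is a made-up stand-in for a defect predictor's validation score, not data from the paper, and differential evolution and Bayesian optimization are omitted for brevity.

```python
import itertools
import random

def toy_score(depth, lr):
    """Hypothetical validation score of a learner (higher is better);
    peaks at depth=5, lr=0.1. Purely illustrative."""
    return -(depth - 5) ** 2 - 100.0 * (lr - 0.1) ** 2

def grid_search():
    """Exhaustively evaluate a fixed grid of (depth, lr) candidates."""
    grid = itertools.product(range(1, 11), [0.01, 0.05, 0.1, 0.2])
    return max(grid, key=lambda p: toy_score(*p))

def random_search(n=40, seed=1):
    """Evaluate n randomly sampled (depth, lr) candidates."""
    rng = random.Random(seed)
    cand = [(rng.randint(1, 10), rng.uniform(0.01, 0.2)) for _ in range(n)]
    return max(cand, key=lambda p: toy_score(*p))

print(grid_search())    # (5, 0.1)
print(random_search())
```

On a smooth toy objective like this, both tuners land near the optimum; the paper's point is that on real defect-prediction data neither (nor the two more sophisticated tuners) dominates across datasets and evaluation measures.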