5,576 research outputs found

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Get PDF
    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. In many industrial applications, function evaluations are severely limited and no analytical information about the objective and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, making use of RBF surrogate modeling for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in less than 500 function evaluations. Unfortunately, COBRA requires careful adjustment of parameters in order to do so. In this work we present a new self-adjusting algorithm, SACOBRA, which is based on COBRA and capable of achieving high-quality results with very few function evaluations and no parameter tuning. It is shown with the help of performance profiles on a set of benchmark problems (G-problems, MOPTA08) that SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter setting. We analyze the importance of the several new elements of SACOBRA and find that each of them contributes to the overall optimization performance. We discuss the reasons behind this and in this way gain a better understanding of high-quality RBF surrogate modeling.
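    The surrogate-assisted loop the abstract refers to can be illustrated with a short sketch: fit RBF surrogates to the objective and constraints from the evaluated points, optimize on the cheap surrogates, then evaluate the true functions at the proposed point. This is only a minimal sketch in the spirit of COBRA/SACOBRA, not the authors' implementation; the toy objective, constraint, bounds, and evaluation budget are illustrative assumptions.

```python
# Minimal sketch of RBF-surrogate-assisted constrained optimization in the
# spirit of COBRA/SACOBRA (illustrative only; toy problem and settings assumed).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize, NonlinearConstraint

def expensive_objective(x):      # stand-in for a costly black-box objective
    return np.sum((x - 0.3) ** 2)

def expensive_constraint(x):     # stand-in constraint, feasible when <= 0
    return 1.0 - np.sum(x)

rng = np.random.default_rng(0)
dim, budget = 2, 60
X = rng.uniform(-2, 2, size=(8, dim))                    # initial design
F = np.array([expensive_objective(x) for x in X])
G = np.array([expensive_constraint(x) for x in X])

for _ in range(budget - len(X)):
    # surrogates of the objective and the constraint (slight smoothing for stability)
    f_surr = RBFInterpolator(X, F, kernel="cubic", smoothing=1e-9)
    g_surr = RBFInterpolator(X, G, kernel="cubic", smoothing=1e-9)
    x0 = X[np.argmin(np.where(G <= 0, F, np.inf))]       # best feasible point so far
    con = NonlinearConstraint(lambda x: g_surr(x[None])[0], -np.inf, 0.0)
    res = minimize(lambda x: f_surr(x[None])[0], x0,
                   bounds=[(-2, 2)] * dim, constraints=[con], method="SLSQP")
    x_new = res.x
    X = np.vstack([X, x_new])                             # evaluate the true functions
    F = np.append(F, expensive_objective(x_new))
    G = np.append(G, expensive_constraint(x_new))

best = X[np.argmin(np.where(G <= 0, F, np.inf))]
print("best feasible point:", best)
```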

    Multiobjective optimization of electromagnetic structures based on self-organizing migration

    Get PDF
    This thesis describes a novel stochastic multi-objective optimization algorithm called MOSOMA (Multi-Objective Self-Organizing Migrating Algorithm). It is shown that MOSOMA is able to solve various types of multi-objective optimization problems (with any number of objectives, unconstrained or constrained, with a continuous or discrete decision space). The efficiency of MOSOMA is compared with other commonly used optimization techniques on a large suite of test problems. A new procedure for computing the spread metric for problems with more than two objectives, based on finding a minimum spanning tree, is proposed. Recommended values of the parameters controlling the run of MOSOMA are derived from a sensitivity analysis. The ability of MOSOMA to solve real-life problems from electromagnetics is shown in a few examples (Yagi-Uda antenna and dielectric filter design, adaptive beam forming in the time domain…).
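    The MST-based spread computation mentioned above can be sketched as follows. The abstract does not give the exact normalization used by MOSOMA, so the formula below (mean absolute deviation of the MST edge lengths, normalized by the mean edge length) is an assumption, and the random three-objective front is a toy example.

```python
# Sketch of a spread (diversity) indicator for a non-dominated set with more
# than two objectives, based on a minimum spanning tree of the objective vectors.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_spread(front):
    """front: (n, m) array of objective vectors of the non-dominated set."""
    dists = squareform(pdist(front))                  # pairwise Euclidean distances
    mst = minimum_spanning_tree(dists).toarray()      # n-1 MST edges as a dense matrix
    edges = mst[mst > 0]                              # lengths of the MST edges
    mean_edge = edges.mean()
    # 0 means perfectly even spacing of solutions along the front (assumed normalization)
    return np.sum(np.abs(edges - mean_edge)) / (len(edges) * mean_edge)

front = np.random.default_rng(1).random((20, 3))      # toy 3-objective front
print("MST-based spread:", mst_spread(front))
```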

    Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations

    Full text link
    The large number of possible configurations of modern software-based systems, combined with the large number of possible environmental situations of such systems, prohibits enumerating all adaptation options at design time and necessitates planning at run time to dynamically identify an appropriate configuration for a situation. While numerous planning techniques exist, they typically assume a detailed state-based model of the system and that the situations that warrant adaptations are known. Both of these assumptions can be violated in complex, real-world systems. As a result, adaptation planning must rely on simple models that capture what can be changed (input parameters) and what can be observed in the system and environment (output and context parameters). We therefore propose planning as optimization: the use of optimization strategies to discover optimal system configurations at runtime for each distinct situation, where the situations themselves are also identified dynamically at runtime. We apply our approach to CrowdNav, an open-source traffic routing system with the characteristics of a real-world system. We identify situations via clustering and conduct an empirical study that compares Bayesian optimization and two types of evolutionary optimization (NSGA-II and novelty search) in CrowdNav.
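    A minimal sketch of the overall idea follows: cluster observed context parameters into situations, then search the input parameters separately per situation. A plain random search stands in for the Bayesian and evolutionary optimizers the study compares, and the cost function, parameter ranges, and cluster count are illustrative assumptions rather than details of CrowdNav.

```python
# Sketch of "planning as optimization": identify situations by clustering
# context parameters, then tune input parameters per situation at runtime.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def observed_cost(inputs, context):
    """Toy stand-in for the measured system output (e.g. average trip overhead)."""
    return np.sum((inputs - context[:2]) ** 2) + 0.1 * rng.normal()

contexts = rng.random((200, 3))                      # observed context parameters
situations = KMeans(n_clusters=4, n_init=10, random_state=0).fit(contexts)

best_config = {}
for s in range(4):
    center = situations.cluster_centers_[s]          # representative context of situation s
    best, best_cost = None, np.inf
    for _ in range(50):                              # per-situation random search
        candidate = rng.uniform(0, 1, size=2)        # input parameters to tune
        cost = observed_cost(candidate, center)
        if cost < best_cost:
            best, best_cost = candidate, cost
    best_config[s] = best

print(best_config)                                   # one configuration per situation
```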

    Is One Hyperparameter Optimizer Enough?

    Full text link
    Hyperparameter tuning is the black art of automatically finding a good combination of control parameters for a data miner. While hyperparameter tuning is widely applied in empirical Software Engineering, there has not been much discussion on which hyperparameter tuner is best for software analytics. To address this gap in the literature, this paper applied a range of hyperparameter optimizers (grid search, random search, differential evolution, and Bayesian optimization) to the defect prediction problem. Surprisingly, no hyperparameter optimizer was observed to be "best" and, for one of the two evaluation measures studied here (F-measure), hyperparameter optimization was no better than using default configurations in 50% of the cases. We conclude that hyperparameter optimization is more nuanced than previously believed. While such optimization can certainly lead to large improvements in the performance of classifiers used in software analytics, it remains to be seen which specific optimizers should be applied to a new dataset. Comment: 7 pages, 2 columns, accepted for SWAN1
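    For illustration, a sketch of this kind of comparison is shown below using two of the optimizers named in the abstract (grid search and random search) against default settings. The synthetic data, the learner, and the parameter grid are assumptions for the sketch, not the paper's experimental setup.

```python
# Sketch: tune a defect predictor with grid search and random search,
# then compare against the classifier's default configuration (F-measure).
import numpy as np
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, cross_val_score

# imbalanced synthetic data, loosely mimicking defect prediction class ratios
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

default_f1 = cross_val_score(RandomForestClassifier(random_state=0),
                             X, y, cv=5, scoring="f1").mean()

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [50, 100, 200], "max_depth": [3, 6, None]},
                    cv=5, scoring="f1").fit(X, y)

rand = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                          {"n_estimators": randint(50, 300),
                           "max_depth": randint(2, 12)},
                          n_iter=9, cv=5, scoring="f1", random_state=0).fit(X, y)

print(f"default F1:       {default_f1:.3f}")
print(f"grid search F1:   {grid.best_score_:.3f}")
print(f"random search F1: {rand.best_score_:.3f}")
```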