
    A benchmark generator for dynamic multi-objective optimization problems

    Many real-world optimization problems not only have multiple objectives that conflict with each other but also change over time. Such problems are dynamic multi-objective optimization problems (DMOPs), and the corresponding field, dynamic multi-objective optimization (DMO), has gained growing attention in recent years. However, one main issue in the field of DMO is that there is no standard test suite for determining whether an algorithm is capable of solving DMOPs. This paper presents a new benchmark generator for DMOPs that can produce several complicated characteristics, including a mixed Pareto-optimal front (convexity-concavity), strong dependencies between variables, and a mixed type of change, all of which are rarely tested in the literature. Experiments are conducted to compare the performance of five state-of-the-art DMO algorithms on several typical test functions derived from the proposed generator, giving a better understanding of the strengths and weaknesses of these algorithms on DMOPs.
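    A minimal sketch of the kind of time-varying test function such a generator produces, using the classic FDA1 benchmark (Farina, Deb and Amato, 2004) rather than the generator proposed in this paper; the variable names and the change frequency/severity parameters below are illustrative assumptions.

        # FDA1: a classic dynamic multi-objective benchmark, shown only to
        # illustrate what a DMOP test function looks like; it is NOT the
        # generator proposed in the paper above.
        import numpy as np

        def fda1(x, tau, n_t=10, tau_t=5):
            """x[0] in [0, 1], x[1:] in [-1, 1]; tau is the generation counter.
            n_t controls severity of change, tau_t the frequency of change."""
            t = (1.0 / n_t) * np.floor(tau / tau_t)  # discretized time step
            G = np.sin(0.5 * np.pi * t)              # moving optimum of x[1:]
            f1 = x[0]
            g = 1.0 + np.sum((x[1:] - G) ** 2)       # distance to the moving POS
            f2 = g * (1.0 - np.sqrt(f1 / g))
            return f1, f2

        # The same solution evaluated before and after an environment change:
        x = np.array([0.5, 0.0, 0.0, 0.0])
        print(fda1(x, tau=0))   # t = 0.0, G = 0: x[1:] lies on the optimum
        print(fda1(x, tau=25))  # t = 0.5, G ~ 0.71: same x is now far from it

    In FDA1 the Pareto-optimal front stays fixed while the Pareto-optimal set drifts over time; the generator in this paper additionally mixes convex and concave front segments, variable dependencies, and mixed change types.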

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. In many industrial applications, function evaluations are severely limited and no analytical information about the objective function or the constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, which uses RBF surrogate models for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in fewer than 500 function evaluations. Unfortunately, it requires careful adjustment of its parameters to do so. In this work we present SACOBRA, a new self-adjusting algorithm based on COBRA that is capable of achieving high-quality results with very few function evaluations and no parameter tuning. Using performance profiles on a set of benchmark problems (G-problems, MOPTA08), we show that SACOBRA consistently outperforms any COBRA variant with a fixed parameter setting. We analyze the importance of the several new elements in SACOBRA and find that each of them contributes to the overall optimization performance. We discuss the reasons behind these results and thereby gain a better understanding of high-quality RBF surrogate modeling.
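    A rough sketch of the surrogate-assisted loop that COBRA-style methods build on: fit RBF surrogates to the evaluated objective and constraint values, then optimize the cheap surrogate subject to the surrogate constraint to choose the next expensive evaluation. This is a generic illustration in Python/SciPy, not the authors' COBRA/SACOBRA implementation; the toy problem, bounds, smoothing value, and iteration budget are all assumptions, and SACOBRA's defining feature, the online self-adjustment of parameters, is omitted.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import NonlinearConstraint, minimize

        rng = np.random.default_rng(0)

        # Stand-in "expensive" black-box problem: minimize f s.t. g(x) <= 0.
        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
        g = lambda x: x[0] + x[1] - 0.2

        # Initial design: a handful of already-evaluated points.
        X = rng.uniform(-2.0, 2.0, size=(12, 2))
        F = np.array([f(x) for x in X])
        G = np.array([g(x) for x in X])

        for _ in range(10):  # a few surrogate-guided iterations
            # Small smoothing guards against near-duplicate sample points.
            f_hat = RBFInterpolator(X, F, kernel="cubic", smoothing=1e-9)
            g_hat = RBFInterpolator(X, G, kernel="cubic", smoothing=1e-9)
            con = NonlinearConstraint(lambda x: g_hat(x[None])[0], -np.inf, 0.0)
            x0 = X[np.argmin(F + 1e6 * np.maximum(G, 0.0))]  # penalized best
            res = minimize(lambda x: f_hat(x[None])[0], x0,
                           bounds=[(-2.0, 2.0)] * 2, constraints=[con])
            # Only one real (expensive) evaluation per iteration.
            x_new = res.x
            X = np.vstack([X, x_new])
            F = np.append(F, f(x_new))
            G = np.append(G, g(x_new))

        best = np.argmin(np.where(G <= 0.0, F, np.inf))
        print("best feasible point:", X[best], "f =", F[best])

    The point of this construction is that each iteration spends many cheap surrogate evaluations but only one true function evaluation, which is what makes budgets below 500 evaluations attainable.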

    COCO: Performance Assessment

    We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, as applied within the COCO benchmarking platform. The assessment is based on runtimes, measured as the number of objective function evaluations needed to reach one or several quality-indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of target values, runlength-based targets, and the aggregation of results by means of simulated restarts, averages, and empirical distribution functions.
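    A small sketch of the central idea: record the best-so-far quality after each evaluation, define runtime as the number of evaluations needed to reach a target, and aggregate the (run, target) pairs into an empirical cumulative distribution function. This is an illustration only; the actual platform (https://github.com/numbbo/coco) additionally uses simulated restarts for runs that never reach a target, for which the np.inf runtimes below are merely a stand-in.

        import numpy as np

        def runtimes_to_targets(history, targets):
            """history: best-so-far f-value after each evaluation (minimization).
            Returns the 1-indexed evaluation count at which each target was
            first reached, or np.inf if the run never reached it."""
            h = np.asarray(history)
            rts = []
            for t in targets:
                hit = np.nonzero(h <= t)[0]
                rts.append(hit[0] + 1 if hit.size else np.inf)
            return np.array(rts, dtype=float)

        def ecdf(runtimes, budgets):
            """Fraction of (run, target) pairs solved within each budget."""
            rt = np.asarray(runtimes, dtype=float).ravel()
            return np.array([(rt <= b).mean() for b in budgets])

        # Two toy runs on a minimization problem, three targets:
        run1 = [10.0, 3.0, 3.0, 0.9, 0.09, 0.009]
        run2 = [8.0, 5.0, 1.2, 1.1, 0.5, 0.4]
        targets = [1.0, 0.1, 0.01]
        rts = [runtimes_to_targets(r, targets) for r in (run1, run2)]
        # Approximately [0.0, 0.167, 0.667, 0.667]: the share of the six
        # (run, target) pairs solved within each evaluation budget.
        print(ecdf(rts, budgets=[2, 4, 6, 8]))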