
    On the Robustness of Median Sampling in Noisy Evolutionary Optimization

    In real-world optimization tasks, the objective (i.e., fitness) function evaluation is often disturbed by noise due to a wide range of uncertainties. Evolutionary algorithms (EAs) have been widely applied to tackle noisy optimization, where reducing the negative effect of noise is a crucial issue. One popular strategy to cope with noise is sampling, which evaluates the fitness multiple times and uses the sample average to approximate the true fitness. In this paper, we introduce median sampling as a noise handling strategy into EAs, which uses the median of the multiple evaluations to approximate the true fitness instead of the mean. We theoretically show that median sampling can reduce the expected running time of EAs from exponential to polynomial by considering the (1+1)-EA on OneMax under the commonly used one-bit noise. We also compare mean sampling with median sampling by considering two specific noise models, suggesting that when the 2-quantile of the noisy fitness increases with the true fitness, median sampling can be a better choice. The results provide us with some guidance to employ median sampling efficiently in practice.
    Comment: 19 pages. arXiv admin note: text overlap with arXiv:1810.05045, arXiv:1711.0095
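
    A minimal sketch of the setting described, assuming a (1+1)-EA on OneMax under one-bit noise; the sample size m, noise probability, and acceptance rule are illustrative choices, not values taken from the paper:

```python
import random
import statistics

def onemax(x):
    # True fitness: number of ones in the bit string.
    return sum(x)

def noisy_eval(x, p_noise=0.5):
    # One-bit noise: with probability p_noise, flip one uniformly chosen bit
    # before evaluation (the solution itself is left unchanged).
    y = list(x)
    if random.random() < p_noise:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return onemax(y)

def median_fitness(x, m=5):
    # Median sampling: evaluate m times and use the sample median.
    return statistics.median(noisy_eval(x) for _ in range(m))

def one_plus_one_ea(n=30, max_evals=20000, m=5):
    # (1+1)-EA with standard bit mutation and median sampling.
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fit = median_fitness(parent, m)
    evals = m
    while evals < max_evals:
        child = [1 - b if random.random() < 1.0 / n else b for b in parent]
        child_fit = median_fitness(child, m)
        evals += m
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent

if __name__ == "__main__":
    best = one_plus_one_ea()
    print("ones found:", sum(best))
```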

    Racing Multi-Objective Selection Probabilities

    In the context of noisy multi-objective optimization, dealing with uncertainties requires the decision maker to define preferences about how to handle them, i.e., which statistic (e.g., mean, median) should be used to evaluate the quality of the solutions and to define the corresponding Pareto set. Approximating these statistics requires repeated sampling of the population, drastically increasing the overall computational cost. To tackle this issue, this paper proposes to directly estimate the probability of each individual being selected, using Hoeffding races to dynamically assign the estimation budget during the selection step. The proposed racing approach is validated against static-budget approaches with NSGA-II on noisy versions of the ZDT benchmark functions.
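
    The paper races selection probabilities in a multi-objective setting; as a simpler illustration of the underlying Hoeffding-race idea only, the sketch below races two noisy candidates and stops sampling once a Hoeffding bound separates their empirical means. The value range, confidence level, and toy objective are assumptions for the example:

```python
import math
import random

def hoeffding_radius(n, delta, value_range=1.0):
    # Hoeffding bound: with probability >= 1 - delta, the empirical mean of n
    # i.i.d. samples lying in an interval of width value_range is within this
    # radius of the true mean. value_range is a rough bound assumed here.
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def race(noisy_f, a, b, delta=0.05, max_samples=200):
    # Sample both candidates until their confidence intervals no longer
    # overlap (the decision is statistically safe) or the budget runs out.
    sum_a = sum_b = 0.0
    n = 0
    while n < max_samples:
        sum_a += noisy_f(a)
        sum_b += noisy_f(b)
        n += 1
        radius = hoeffding_radius(n, delta)
        if abs(sum_a - sum_b) / n > 2 * radius:
            break
    return ("a" if sum_a > sum_b else "b"), n

if __name__ == "__main__":
    target = 0.3
    # Bounded noisy objective (higher is better): closeness to target + noise.
    noisy = lambda x: -abs(x - target) + random.uniform(-0.2, 0.2)
    winner, used = race(noisy, 0.25, 0.6)
    print("winner:", winner, "samples per candidate:", used)
```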

    A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses

    In many technical fields, single-objective optimization procedures in continuous domains involve expensive numerical simulations. In this context, an improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide fast convergence, high solution accuracy and robust performance over a wide range of problems. It implements enhancements of the ABC structure and hybridizations with interpolation strategies. The latter are inspired by the quadratic trust-region approach for local investigation and by an efficient global optimizer for separable problems. Each modification and their combined effects are studied with appropriate metrics on a numerical benchmark, which is also used for comparing AsBeC with some effective ABC variants and other derivative-free algorithms. In addition, the presented algorithm is validated on two recent benchmarks adopted for competitions at international conferences. Results show remarkable competitiveness and robustness for AsBeC.
    Comment: 19 pages, 4 figures, Springer Swarm Intelligence
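
    One ingredient named here, quadratic interpolation for local investigation, can be illustrated in isolation. The sketch below fits a parabola through three evaluated points along one coordinate and proposes its vertex as the next candidate; it is only this interpolation step, not the AsBeC algorithm itself, and the toy objective is an assumption:

```python
import numpy as np

def quadratic_vertex(x, f):
    # Fit f(x) ~ a*x^2 + b*x + c through three points and return the vertex
    # -b / (2a): a cheap local proposal for the minimizer in one coordinate.
    a, b, c = np.polyfit(x, f, 2)
    if abs(a) < 1e-12:
        return None  # degenerate (nearly linear) fit, no useful vertex
    return -b / (2.0 * a)

if __name__ == "__main__":
    # Toy objective: the fitted vertex should land near the true minimizer 1.7.
    objective = lambda x: (x - 1.7) ** 2 + 0.5
    xs = np.array([0.0, 1.0, 3.0])
    print("proposed minimizer:", quadratic_vertex(xs, objective(xs)))
```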

    Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution Strategy

    This paper presents a novel mechanism to adapt surrogate-assisted population-based algorithms. This mechanism is applied to ACM-ES, a recently proposed surrogate-assisted variant of CMA-ES. The resulting algorithm, saACM-ES, adjusts online the lifelength of the current surrogate model (the number of CMA-ES generations before learning a new surrogate) and the surrogate hyper-parameters. Both heuristics significantly improve the quality of the surrogate model, yielding a significant speed-up of saACM-ES compared to the ACM-ES and CMA-ES baselines. The empirical validation of saACM-ES on the BBOB-2012 noiseless testbed demonstrates the efficiency and the scalability w.r.t. the problem dimension and the population size of the proposed approach, which reaches new best results on some of the benchmark problems.
    Comment: Genetic and Evolutionary Computation Conference (GECCO 2012) (2012)
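
    A schematic of the online lifelength adjustment, using rank correlation between surrogate predictions and true evaluations as the quality signal; the thresholds and step sizes below are assumptions for illustration and not the actual saACM-ES update rule:

```python
import math
from scipy.stats import kendalltau

def adapt_lifelength(current_life, surrogate_values, true_values,
                     life_min=1, life_max=20, tau_good=0.7, tau_bad=0.3):
    # Schematic lifelength control: if the surrogate ranks test points like
    # the true objective (high Kendall tau), trust it for more generations;
    # if the ranking is poor, relearn the surrogate sooner.
    tau, _ = kendalltau(surrogate_values, true_values)
    if math.isnan(tau):
        return life_min
    if tau >= tau_good:
        return min(life_max, current_life + 1)
    if tau <= tau_bad:
        return max(life_min, current_life - 2)
    return current_life

if __name__ == "__main__":
    surrogate = [0.9, 1.4, 2.1, 3.0, 4.2]
    true_vals = [1.0, 1.5, 2.0, 3.1, 4.0]   # same ordering -> tau = 1
    print(adapt_lifelength(5, surrogate, true_vals))  # -> 6
```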

    Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning

    We present a developmental framework based on a long-term memory and reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This architecture allows a robot to autonomously optimize hyper-parameters that need to be tuned for any action and/or vision module, treated as a black box. The learning can take advantage of past experiences (stored in the episodic and procedural memories) in order to warm-start the exploration using a set of hyper-parameters previously optimized for objects similar to the new, unknown one (stored in a semantic memory). As an example, the system has been used to optimize 9 continuous hyper-parameters of a professional software package (Kamido), both in simulation and with a real robot (an industrial Fanuc robotic arm), for a total of 13 different objects. The robot is able to find a good object-specific optimization in 68 (simulation) or 40 (real) trials. In simulation, we demonstrate the benefit of transfer learning based on visual similarity, as opposed to amnesic learning (i.e., learning from scratch every time). Moreover, with the real robot, we show that the method consistently outperforms manual optimization by an expert, achieving more than 88% success with less than 2 hours of training time.
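
    A hedged sketch of the warm-start idea only: the initial design of a simple Gaussian-process loop is seeded with hyper-parameters stored for a similar past object (the visual-similarity lookup itself is omitted). The memory contents, toy objective, and mean-only candidate selection are placeholders, not the Kamido setup or the paper's acquisition strategy:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def warm_start_bo(objective, memory, similar_to, dim=2, n_init=3, n_iter=15,
                  n_cand=200, rng=np.random.default_rng(0)):
    # Warm start: begin from hyper-parameters stored for the most similar past
    # object, then run a plain GP loop that picks the candidate with the best
    # predicted mean (no acquisition function, for brevity).
    X = [np.asarray(memory[similar_to], dtype=float)]          # warm-start point
    X += [rng.uniform(0, 1, dim) for _ in range(n_init)]       # a few random points
    y = [objective(x) for x in X]
    gp = GaussianProcessRegressor()
    for _ in range(n_iter):
        gp.fit(np.vstack(X), np.asarray(y))
        cand = rng.uniform(0, 1, (n_cand, dim))
        mu = gp.predict(cand)
        x_next = cand[int(np.argmin(mu))]
        X.append(x_next)
        y.append(objective(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]

if __name__ == "__main__":
    # Toy objective over [0, 1]^2; the "memory" holds a good point found for a
    # hypothetical similar object, which warm-starts the search.
    obj = lambda x: float(np.sum((x - 0.3) ** 2))
    memory = {"mug_small": [0.35, 0.28]}
    x_best, y_best = warm_start_bo(obj, memory, "mug_small")
    print(x_best, y_best)
```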

    Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization

    We present a new method, called Analysis-of-marginal-Tail-Means (ATM), for effective robust optimization of discrete black-box problems. ATM has important applications to many real-world engineering problems (e.g., manufacturing optimization, product design, molecular engineering), where the objective to optimize is black-box and expensive, and the design space is inherently discrete. One weakness of existing methods is that they are not robust: these methods perform well under certain assumptions, but yield poor results when such assumptions (which are difficult to verify in black-box problems) are violated. ATM addresses this via the use of marginal tail means for optimization, which combines rank-based and model-based methods. The trade-off between rank- and model-based optimization is tuned by first identifying important main effects and interactions, then finding a good compromise which best exploits additive structure. By adaptively tuning this trade-off from data, ATM provides improved robust optimization over existing methods, particularly in problems with (i) a large number of factors, (ii) unordered factors, or (iii) experimental noise. We demonstrate the effectiveness of ATM in simulations and in two real-world engineering problems: the first on robust parameter design of a circular piston, and the second on product family design of a thermistor network.
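
    The marginal tail-mean statistic itself can be sketched directly: for each factor and level, average the best fraction of observed responses at that level, then pick the level with the best tail mean. The tail fraction and the toy design below are assumptions, and the full ATM procedure additionally blends this rank-based view with a fitted model:

```python
import numpy as np

def marginal_tail_means(designs, responses, n_levels, alpha=0.25):
    # For each factor and level, average the lowest alpha-fraction of responses
    # observed at that level (minimization), then pick the level with the
    # smallest tail mean. Illustrates the statistic only, not the ATM method.
    designs = np.asarray(designs)                 # (n_runs, n_factors), integer levels
    responses = np.asarray(responses, dtype=float)
    best_levels = []
    for j in range(designs.shape[1]):
        tail_means = []
        for level in range(n_levels):
            vals = np.sort(responses[designs[:, j] == level])
            k = max(1, int(np.ceil(alpha * len(vals)))) if len(vals) else 0
            tail_means.append(vals[:k].mean() if k else np.inf)
        best_levels.append(int(np.argmin(tail_means)))
    return best_levels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    designs = rng.integers(0, 3, size=(60, 4))     # 4 factors, 3 levels each
    target = np.array([2, 0, 1, 2])
    responses = (designs != target).sum(axis=1) + rng.normal(0, 0.3, size=60)
    print(marginal_tail_means(designs, responses, n_levels=3))  # ~ [2, 0, 1, 2]
```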

    PID control system analysis and design

    With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to "get the best out of PID", and "the search is on to find the next key technology or methodology for PID tuning". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized, simulation-based approach is presented, together with illustrative design results for first-order, higher-order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.
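
    For reference, a textbook discrete PID controller driving a first-order plant; the gains and plant below are illustrative assumptions, not results of any tuning method from the article:

```python
class PID:
    # Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # First-order plant dy/dt = (u - y) / tau, driven toward setpoint 1.0.
    pid = PID(kp=2.0, ki=4.0, kd=0.05, dt=0.01)
    tau, dt, y = 0.5, 0.01, 0.0
    for _ in range(500):
        u = pid.update(1.0, y)
        y += dt * (u - y) / tau
    print("final output:", round(y, 3))   # close to 1.0
```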

    Efficient dynamic resampling for dominance-based multiobjective evolutionary optimization

    Multi-objective optimization problems are often subject to the presence of objectives that require expensive resampling for their computation. This is the case for many robustness metrics, which are frequently used as an additional objective that accounts for the reliability of specific sections of the solution space. Typical robustness measurements use resampling, but the number of samples that constitutes a precise dispersion measure has a potentially large impact on the computational cost of an algorithm. This article proposes the integration of dominance-based statistical testing methods into the selection mechanism of evolutionary multi-objective genetic algorithms, with the aim of reducing the number of fitness evaluations. The approach is integrated into two well-known algorithms, NSGA-II and SPEA2, and its performance is tested on five classical benchmark functions. The experimental results show a significant reduction in the number of fitness evaluations while, at the same time, maintaining the quality of the solutions.
    The authors acknowledge financial support granted by the Spanish Ministry of Economy and Competitiveness under grant ENE2014-56126-C2-2-R.
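
    A minimal sketch of a dominance decision backed by statistical testing: the resampled objective values of two solutions are compared per objective with a one-sided Mann-Whitney U test, and dominance is declared only when every comparison is significant. The specific test, significance level, and toy data are assumptions; the paper's integration into NSGA-II/SPEA2 selection is not shown:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def significantly_dominates(samples_a, samples_b, alpha=0.05):
    # samples_*: arrays of shape (n_samples, n_objectives), minimization.
    # Declare "A dominates B" only if, for every objective, a one-sided
    # Mann-Whitney U test says A's samples are significantly smaller.
    samples_a = np.asarray(samples_a)
    samples_b = np.asarray(samples_b)
    for j in range(samples_a.shape[1]):
        _, p = mannwhitneyu(samples_a[:, j], samples_b[:, j],
                            alternative="less")
        if p > alpha:
            return False
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal([1.0, 2.0], 0.2, size=(15, 2))   # better on both objectives
    b = rng.normal([1.5, 2.6], 0.2, size=(15, 2))
    print(significantly_dominates(a, b))            # True with high probability
```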