    Reduced computation for evolutionary optimization in noisy environment

    ABSTRACT: The application of Evolutionary Algorithms (EAs) to real-world optimization problems often involves expensive fitness function evaluation. Naturally, this has a crippling effect on the performance of any population-based search technique such as an EA. Estimating the fitness of individuals instead of actually evaluating them is a workable approach to dealing with this situation. INTRODUCTION: Many real-world optimization problems involve very expensive function evaluation, making it impractical to apply a population-based search technique such as an EA in such problem domains. In these problems, the run-time of a single function evaluation can range from a fraction of a second to hours of supercomputer time. A suitable alternative is to use approximation instead of actual function evaluation to substantially reduce the computation time [8, 10, 11]. The use of approximate models to speed up optimization dates all the way back to the sixties. The DAFHEA (dynamic approximate fitness based hybrid evolutionary algorithm) framework proposed in our earlier research…
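
    The DAFHEA framework itself is not detailed in the excerpt above, but the core idea it builds on (substituting a cheap estimate for most expensive fitness calls) can be sketched in a few lines. The Python sketch below is a hypothetical illustration only: the inverse-distance-weighted surrogate, the archive, the constants and the toy objective are all assumptions, not part of DAFHEA.

    # Minimal sketch of surrogate-assisted evolutionary optimization (not the
    # actual DAFHEA implementation): expensive evaluations are cached and a
    # cheap inverse-distance-weighted estimator stands in for most fitness
    # calls. All names and parameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_fitness(x):
        # Stand-in for a costly simulation; here just the sphere function.
        return float(np.sum(x ** 2))

    archive_X, archive_y = [], []          # exactly evaluated points and values

    def estimated_fitness(x, k=5):
        # Cheap surrogate: inverse-distance-weighted mean of the k nearest
        # archived exact evaluations.
        X = np.asarray(archive_X)
        d = np.linalg.norm(X - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-12)
        return float(np.dot(w, np.asarray(archive_y)[idx]) / w.sum())

    dim, pop_size, true_eval_fraction = 5, 20, 0.25
    pop = rng.uniform(-5, 5, size=(pop_size, dim))

    for gen in range(30):
        fitness = np.empty(pop_size)
        for i, ind in enumerate(pop):
            # Evaluate a fraction of individuals exactly; estimate the rest
            # once the archive is large enough to support the surrogate.
            if len(archive_X) < 10 or rng.random() < true_eval_fraction:
                f = expensive_fitness(ind)
                archive_X.append(ind.copy())
                archive_y.append(f)
            else:
                f = estimated_fitness(ind)
            fitness[i] = f
        # Truncation selection: keep the best half, mutate it to refill.
        best = pop[np.argsort(fitness)[: pop_size // 2]]
        pop = np.vstack([best, best + rng.normal(0, 0.3, size=best.shape)])

    print("exact fitness of incumbent best:", expensive_fitness(pop[0]))

    In a full framework the surrogate would also be periodically validated and retrained against fresh exact evaluations; the sketch omits that control loop for brevity.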

    Uncertainty And Evolutionary Optimization: A Novel Approach

    Evolutionary algorithms (EA) have been widely accepted as efficient solvers for complex real-world optimization problems, including engineering optimization. However, real-world optimization problems often involve uncertain environments, including noisy and/or dynamic environments, which pose major challenges to EA-based optimization. The presence of noise interferes with the evaluation and the selection process of an EA and thus adversely affects its performance. In addition, because noise poses challenges to the evaluation of the fitness function, the fitness may need to be estimated instead of evaluated directly. Several existing approaches attempt to address this problem, such as the introduction of diversity (hypermutation, random immigrants, special operators) or the incorporation of memory of the past (diploidy, case-based memory); however, these approaches fail to address the problem adequately. In this paper we propose a Distributed Population Switching Evolutionary Algorithm (DPSEA) method that addresses optimization of functions with noisy fitness using a distributed population switching architecture to simulate a distributed self-adaptive memory of the solution space. Local regression is used in the pseudo-populations to estimate the fitness. Successful application to benchmark test problems demonstrates the proposed method's superior performance in terms of both robustness and accuracy. Comment: In Proceedings of the 9th IEEE Conference on Industrial Electronics and Applications (ICIEA 2014), IEEE Press, pp. 988-983, 2014
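
    The distributed population-switching architecture of DPSEA is not reproduced here, but the abstract's key ingredient, estimating a noisy fitness value by local regression over remembered samples, can be illustrated. The sketch below is an assumption-laden toy: the kernel-weighted linear regression, the memory layout and the noise model are illustrative choices, not the paper's implementation.

    # Hypothetical sketch of one ingredient of the approach described above:
    # denoising a noisy fitness signal by kernel-weighted local linear
    # regression over a memory of past samples. Names and constants are
    # assumptions; the population-switching machinery is omitted.
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_fitness(x, sigma=0.5):
        # True objective (sphere) corrupted by additive Gaussian noise.
        return float(np.sum(x ** 2) + rng.normal(0.0, sigma))

    def local_regression_estimate(x, X, y, bandwidth=1.0):
        # Estimate f(x) by weighted least squares on a local linear model,
        # with Gaussian kernel weights centred at the query point x.
        X, y = np.asarray(X), np.asarray(y)
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        A = np.hstack([np.ones((len(X), 1)), X - x])   # local linear model at x
        sw = np.sqrt(w)                                 # standard WLS weighting
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        return float(beta[0])                           # intercept = estimate at x

    # Memory of noisy evaluations around a point of interest.
    X_mem = rng.uniform(-2, 2, size=(60, 3))
    y_mem = [noisy_fitness(x) for x in X_mem]

    query = np.zeros(3)
    print("single noisy sample :", noisy_fitness(query))
    print("smoothed estimate   :", local_regression_estimate(query, X_mem, y_mem))
    print("true value          :", float(np.sum(query ** 2)))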

    A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses

    In many technical fields, single-objective optimization procedures in continuous domains involve expensive numerical simulations. In this context, an improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide fast convergence, high solution accuracy and robust performance over a wide range of problems. It implements enhancements of the ABC structure and hybridizations with interpolation strategies. The latter are inspired by the quadratic trust-region approach for local investigation and by an efficient global optimizer for separable problems. Each modification and their combined effects are studied with appropriate metrics on a numerical benchmark, which is also used to compare AsBeC with some effective ABC variants and other derivative-free algorithms. In addition, the presented algorithm is validated on two recent benchmarks adopted for competitions at international conferences. Results show remarkable competitiveness and robustness for AsBeC. Comment: 19 pages, 4 figures, Springer Swarm Intelligence
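
    The quadratic-interpolation hybridization mentioned above can be illustrated with a small, self-contained sketch. This is not the AsBeC algorithm: the sampling scheme, trust-radius handling and acceptance rule below are simplified assumptions showing only how a locally fitted quadratic model can propose a candidate point.

    # Hypothetical sketch of a quadratic-interpolation local step (not AsBeC
    # itself): fit a quadratic model to nearby samples of the objective and
    # propose its minimiser, clipped to a trust radius around the incumbent.
    import numpy as np

    rng = np.random.default_rng(2)

    def objective(x):
        # Cheap stand-in objective (2D Rosenbrock).
        return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

    def quadratic_proposal(center, radius=0.5, n_samples=15):
        # Sample around the incumbent, fit f ~ c + g.d + 0.5 d'Hd by least
        # squares, and return the model minimiser clipped to the trust box.
        dim = len(center)
        pts = center + rng.uniform(-radius, radius, size=(n_samples, dim))
        f = np.array([objective(p) for p in pts])
        d = pts - center
        quad = np.column_stack([d[:, i] * d[:, j]
                                for i in range(dim) for j in range(i, dim)])
        Phi = np.column_stack([np.ones(n_samples), d, quad])
        coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
        # Rebuild gradient g and Hessian H of the fitted quadratic at center.
        g = coef[1:1 + dim]
        H = np.zeros((dim, dim))
        k = 1 + dim
        for i in range(dim):
            for j in range(i, dim):
                H[i, j] = H[j, i] = coef[k] if i != j else 2 * coef[k]
                k += 1
        step = np.linalg.lstsq(H, -g, rcond=None)[0]    # Newton-like step
        step = np.clip(step, -radius, radius)            # stay in trust box
        return center + step

    best = np.array([-1.0, 1.0])
    for _ in range(20):
        cand = quadratic_proposal(best)
        if objective(cand) < objective(best):            # greedy acceptance
            best = cand
    print("best point:", best, "value:", objective(best))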

    A new sequential covering strategy for inducing classification rules with ant colony algorithms

    Ant colony optimization (ACO) algorithms have been successfully applied to discover a list of classification rules. In general, these algorithms follow a sequential covering strategy, where a single rule is discovered at each iteration of the algorithm in order to build a list of rules. The sequential covering strategy has the drawback of not coping with the problem of rule interaction, i.e., the outcome of a rule affects the rules that can be discovered subsequently, since the search space is modified by the removal of examples covered by previous rules. This paper proposes a new sequential covering strategy for ACO classification algorithms to mitigate the problem of rule interaction, where the order of the rules is implicitly encoded as pheromone values and the search is guided by the quality of a candidate list of rules. Our experiments using 18 publicly available data sets show that the predictive accuracy obtained by a new ACO classification algorithm implementing the proposed sequential covering strategy is statistically significantly higher than that of state-of-the-art rule induction classification algorithms.
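
    For readers unfamiliar with the baseline being improved, the sketch below shows a plain sequential covering loop in Python and where the rule-interaction problem arises: every accepted rule removes the examples it covers, so later rules are searched for on a shrunken data set. The toy data, rule representation and scoring are illustrative assumptions; the paper's pheromone-based list search is not reproduced.

    # Minimal sketch of the standard sequential covering loop that the paper
    # aims to improve (not the proposed ACO strategy). Toy data, the rule
    # representation and the scoring function are illustrative assumptions.
    from dataclasses import dataclass

    # Toy data set: each example is (attribute dict, class label).
    DATA = [
        ({"outlook": "sunny",  "windy": False}, "play"),
        ({"outlook": "sunny",  "windy": True},  "stay"),
        ({"outlook": "rain",   "windy": True},  "stay"),
        ({"outlook": "rain",   "windy": False}, "play"),
        ({"outlook": "cloudy", "windy": False}, "play"),
    ]

    @dataclass
    class Rule:
        conditions: dict      # attribute -> required value
        label: str

        def covers(self, example):
            return all(example[0].get(a) == v for a, v in self.conditions.items())

    def best_rule(examples):
        # Exhaustively try single-condition rules; keep the highest-scoring one
        # (accuracy weighted by coverage).
        best, best_score = None, -1.0
        for attr in examples[0][0]:
            values = {ex[0][attr] for ex in examples}
            labels = {ex[1] for ex in examples}
            for val in values:
                for lab in labels:
                    r = Rule({attr: val}, lab)
                    covered = [ex for ex in examples if r.covers(ex)]
                    if not covered:
                        continue
                    acc = sum(ex[1] == lab for ex in covered) / len(covered)
                    score = acc * len(covered)
                    if score > best_score:
                        best, best_score = r, score
        return best

    def sequential_covering(examples):
        rules, remaining = [], list(examples)
        while remaining:
            rule = best_rule(remaining)
            rules.append(rule)
            # Removing covered examples changes the search space for later
            # rules -- the rule-interaction problem discussed above.
            remaining = [ex for ex in remaining if not rule.covers(ex)]
        return rules

    for r in sequential_covering(DATA):
        print("IF", r.conditions, "THEN", r.label)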