
    On the Easiest and Hardest Fitness Functions

    The hardness of fitness functions is an important research topic in the field of evolutionary computation. In theory, such a study can help in understanding the ability of evolutionary algorithms. In practice, it may provide guidelines for the design of benchmarks. The aim of this paper is to answer the following research questions: given a fitness function class, which functions are the easiest with respect to an evolutionary algorithm? Which are the hardest? How are these functions constructed? The paper provides theoretical answers to these questions. The easiest and hardest fitness functions are constructed for an elitist (1+1) evolutionary algorithm maximising a class of fitness functions with the same optima. It is demonstrated that unimodal functions are the easiest and deceptive functions are the hardest in terms of the time-fitness landscape. The paper also reveals that the easiest fitness function for one algorithm may become the hardest for another algorithm, and vice versa.
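    As a concrete illustration of this contrast (not the paper's general construction), the sketch below runs an elitist (1+1) EA with standard bit mutation on OneMax, a unimodal function, and on Trap, a deceptive function; the problem size, iteration budget and the use of these two particular benchmarks are assumptions made for the example.

```python
import random

def one_max(x):
    # Unimodal: fitness is the number of ones; the all-ones string is the optimum.
    return sum(x)

def trap(x):
    # Deceptive: every non-optimal point guides the search away from the all-ones optimum.
    n, ones = len(x), sum(x)
    return n + 1 if ones == n else n - ones

def one_plus_one_ea(fitness, n, max_iters=200_000, seed=0):
    # Elitist (1+1) EA with standard bit mutation at rate 1/n.
    rng = random.Random(seed)
    target = fitness([1] * n)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for t in range(1, max_iters + 1):
        if fx == target:
            return t                      # generations until the optimum was found
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = fitness(y)
        if fy >= fx:                      # never accept a strictly worse offspring
            x, fx = y, fy
    return None                           # budget exhausted without reaching the optimum

n = 20
print("OneMax:", one_plus_one_ea(one_max, n))   # typically succeeds quickly
print("Trap:  ", one_plus_one_ea(trap, n))      # typically fails within the budget
```

    With this budget the unimodal run typically finishes within a few hundred generations, while the deceptive run gets stuck on the all-zeros local optimum and returns None.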

    Placement and Quantitating of FACTS Devices in a Power System Including the Wind Unit to Enhance System Parameters

    One of the main concerns of network operators is the enhancement of system parameters, and a range of different means to this end have been proposed. The use of renewable energies such as wind, however, heightens the debate over the sustainability and condition of power system parameters. In this study, the condition of these parameters is examined by placing FACTS (Flexible Alternating Current Transmission System) devices in a 24-bus power system that includes a wind farm. Research data comprising wind information and the yearly consumption load are classified using the K-means clustering algorithm; the objective function is then formed from the parameters selected for optimization. This function is optimized with the honey-bee mating optimization (HBMO) algorithm, yielding suitable locations and sizes for the FACTS devices. The results show that the examined parameters improve when FACTS devices are used.
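    The data-reduction step can be sketched as follows, assuming hourly wind-speed and load samples over one year and scikit-learn's KMeans; the synthetic data, the number of clusters and the column layout are invented for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical yearly data: one (wind speed [m/s], load [MW]) sample per hour.
rng = np.random.default_rng(0)
hours = 8760
wind = rng.gamma(shape=2.0, scale=4.0, size=hours)                                # placeholder wind speeds
load = 150 + 60 * np.sin(np.linspace(0, 2 * np.pi, hours)) + rng.normal(0, 10, hours)
samples = np.column_stack([wind, load])

# Reduce the year of data to a handful of representative operating points,
# each weighted by the share of hours it covers.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(samples)
centers = km.cluster_centers_
weights = np.bincount(km.labels_, minlength=k) / hours

for (w, p), frac in zip(centers, weights):
    print(f"wind {w:5.1f} m/s, load {p:6.1f} MW, share of year {frac:5.2%}")
```

    Each cluster centre then serves as one operating scenario for the subsequent placement and sizing optimization.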

    A Fuzzy Rule-Based System to Predict Energy Consumption of Genetic Programming Algorithms

    In recent years, energy awareness has become one of the most interesting areas in our environmentally conscious society. Algorithm designers have been part of this, particularly when dealing with networked devices and, especially, handheld ones. Although studies in this area have increased, not many of them have focused on Evolutionary Algorithms. To the best of our knowledge, few attempts have been made to model their energy consumption while considering different execution devices. In this work, we propose a fuzzy rule-based system to predict the energy consumption of a kind of Evolutionary Algorithm, Genetic Programming, given the device on which it will be executed, its main parameters, and a measure of the difficulty of the problem addressed. Experimental results show that the proposed model can predict energy consumption with very low error values. We acknowledge support from the Spanish Ministry of Economy and Competitiveness under projects TIN2014-56494-C4-[1,2,3]-P and TIN2017-85727-C4-[2,4]-P, from the Regional Government of Extremadura, Department of Commerce and Economy, through the European Regional Development Fund ("a way to build Europe") under project IB16035, and from Junta de Extremadura FEDER, projects GR15068 and GR15130.
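    A minimal sketch of what such a predictor might look like, assuming a Sugeno-style rule base with two inputs (problem difficulty and population size) and singleton energy consequents; the membership functions, rules and numerical values are invented for illustration and do not reproduce the authors' model.

```python
def grade_down(x, a, b):
    # Membership "low": 1 below a, linear fall to 0 at b.
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def grade_up(x, a, b):
    # Membership "high": complement of the "low" grade.
    return 1.0 - grade_down(x, a, b)

def predict_energy(difficulty, pop_size):
    """Toy Sugeno-style inference: two inputs, three rules, weighted average of singletons."""
    diff_low, diff_high = grade_down(difficulty, 0.2, 0.7), grade_up(difficulty, 0.2, 0.7)
    pop_small, pop_large = grade_down(pop_size, 200, 800), grade_up(pop_size, 200, 800)

    # Hypothetical rule base; consequents are energy singletons in joules (made-up scale).
    rules = [
        (min(diff_low, pop_small), 50.0),    # easy problem, small population -> low energy
        (min(diff_high, pop_small), 120.0),  # hard problem, small population -> medium energy
        (min(diff_high, pop_large), 300.0),  # hard problem, large population -> high energy
    ]
    weight_sum = sum(w for w, _ in rules)
    return sum(w * e for w, e in rules) / weight_sum if weight_sum else 0.0

print(predict_energy(difficulty=0.8, pop_size=600))
```

    In the paper's setting, additional inputs such as the execution device and the main Genetic Programming parameters would enter the rule base in the same way.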

    Hardest Monotone Functions for Evolutionary Algorithms

    The study of hardest and easiest fitness landscapes is an active area of research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting (1,λ)-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than n/2, where n denotes the dimension of the search space. We show, using a combinatorial argument, that for the (1+1)-EA with any mutation rate p ∈ [0,1], SDBV is drift-minimizing among the class of dynamic monotone functions. Our construction provides the first explicit example of an instance of the partially-ordered evolutionary algorithm (PO-EA) model with parameterized pessimism introduced by Colin, Doerr and Férey, building on work of Jansen. We further show that the (1+1)-EA optimizes SDBV in Θ(n^{3/2}) generations. Our simulations demonstrate matching runtimes for both the static and the self-adjusting (1,λ)- and (1+λ)-EA. We further show, using an example of fixed dimension, that drift-minimization does not equal maximal runtime.
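    The sketch below conveys the flavour of dynamic BinVal functions: every generation assigns the binary weights 2^0, ..., 2^(n-1) to the bit positions under a fresh permutation, and the elitist (1+1) EA compares parent and offspring under the current weights. For simplicity the permutation is drawn uniformly at random here; ADBV and SDBV choose it adversarially, so this is an illustration of the setting rather than the paper's construction.

```python
import random

def binval(bits, perm):
    # BinVal under a permutation: position perm[i] carries weight 2**i.
    return sum((1 << i) for i, pos in enumerate(perm) if bits[pos])

def one_plus_one_dynamic_binval(n, p=None, seed=0, max_iters=10**6):
    """(1+1) EA on Dynamic BinVal: a fresh weight permutation is drawn every generation."""
    rng = random.Random(seed)
    p = p if p is not None else 1.0 / n
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        if all(x):
            return t
        perm = list(range(n))
        rng.shuffle(perm)                        # this generation's weight assignment
        y = [b ^ (rng.random() < p) for b in x]
        if binval(y, perm) >= binval(x, perm):   # elitist acceptance w.r.t. current weights
            x = y
    return None

print(one_plus_one_dynamic_binval(n=50))
```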

    How to Escape Local Optima in Black Box Optimisation: When Non-elitism Outperforms Elitism

    Escaping local optima is one of the major obstacles to function optimisation. Using the metaphor of a fitness landscape, local optima correspond to hills separated by fitness valleys that have to be overcome. We define a class of fitness valleys of tunable difficulty by considering their length, i.e. the length of the Hamming path between the two optima, and their depth, i.e. the drop in fitness. For this function class we present a runtime comparison between stochastic search algorithms using different search strategies. The (1+1) EA is a simple and well-studied evolutionary algorithm that has to jump across the valley to a point of higher fitness because it does not accept worsening moves (elitism). In contrast, the Metropolis algorithm and the Strong Selection Weak Mutation (SSWM) algorithm, a famous process in population genetics, are both able to cross the fitness valley by accepting worsening moves. We show that the runtime of the (1+1) EA depends critically on the length of the valley while the runtimes of the non-elitist algorithms depend crucially on the depth of the valley. Moreover, we show that both SSWM and Metropolis can also efficiently optimise a rugged function consisting of consecutive valleys.
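    The three acceptance rules can be contrasted directly. The sketch below assumes the standard Metropolis rule with temperature T and the fixation probability commonly associated with SSWM, (1 − e^(−2βΔf)) / (1 − e^(−2NβΔf)) with limit 1/N at Δf = 0; the parameter values are illustrative.

```python
import math
import random

def accept_elitist(delta_f, rng):
    # (1+1) EA: never accept a strictly worse offspring.
    return delta_f >= 0

def accept_metropolis(delta_f, rng, temperature=1.0):
    # Metropolis: always accept improvements, accept worsenings with prob exp(delta_f / T).
    return delta_f >= 0 or rng.random() < math.exp(delta_f / temperature)

def accept_sswm(delta_f, rng, beta=1.0, N=10):
    # SSWM: accept with the fixation probability (1 - e^{-2*beta*df}) / (1 - e^{-2*N*beta*df});
    # for df = 0 this is taken as the limit 1/N.
    if delta_f == 0:
        prob = 1.0 / N
    else:
        prob = (1 - math.exp(-2 * beta * delta_f)) / (1 - math.exp(-2 * N * beta * delta_f))
    return rng.random() < prob

rng = random.Random(1)
for df in (+1, 0, -1, -3):
    print(df, accept_elitist(df, rng), accept_metropolis(df, rng), accept_sswm(df, rng))
```

    The elitist rule rejects every negative Δf outright, which is why its runtime is governed by the valley length rather than its depth.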

    Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms

    The fitness level method is a popular tool for analyzing the computation time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and to estimate lower and upper bounds on the computation time using transition probabilities between fitness levels. However, the lower bound generated by this method is often not tight. To improve the lower bound, this paper rigorously studies an open question about the fitness level method: what are the tightest lower and upper time bounds that can be constructed based on fitness levels? To answer this question, drift analysis with fitness levels is developed, and the tightest bound problem is formulated as a constrained multi-objective optimization problem subject to fitness level constraints. The tightest metric bounds from fitness levels are constructed and proven for the first time. The metric bounds are then converted into linear bounds, of which existing linear bounds are special cases. This paper establishes a general framework that can cover various linear bounds, from trivial to best coefficients. It is generic and promising, as it can be used not only to recover the same bounds as existing ones, but also to derive tighter bounds, especially on fitness landscapes where shortcuts exist. This is demonstrated in a case study of the (1+1) EA maximizing the TwoPath function.
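    For orientation, the sketch below computes the classical fitness-level upper bound for the (1+1) EA on OneMax, with levels indexed by the number of one-bits; it shows the method the paper starts from, not the tighter drift-based bounds the paper derives.

```python
import math

def fitness_level_upper_bound(n):
    """Classic fitness-level upper bound for the (1+1) EA on OneMax.

    Level i = search points with exactly i ones. From level i, flipping one of the
    n - i zero-bits and nothing else improves the fitness, so the probability of
    leaving level i is at least s_i = (n - i) * (1/n) * (1 - 1/n)**(n - 1).
    The method bounds the expected runtime by the sum over i of 1 / s_i."""
    bound = 0.0
    for i in range(n):                      # levels 0 .. n-1 (level n is optimal)
        s_i = (n - i) * (1.0 / n) * (1.0 - 1.0 / n) ** (n - 1)
        bound += 1.0 / s_i
    return bound

for n in (10, 100, 1000):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(n, round(fitness_level_upper_bound(n)), "≈ e·n·H_n =", round(math.e * n * harmonic))
```

    The paper's contribution is, roughly, to replace such level-wise sums by drift-based formulations that also yield tight lower bounds, including on landscapes with shortcuts.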

    On Easiest Functions for Mutation Operators in Bio-Inspired Optimisation

    Understanding which function classes are easy and which are hard for a given algorithm is a fundamental question for the analysis and design of bio-inspired search heuristics. A natural starting point is to consider the easiest and hardest functions for an algorithm. For the (1+1) EA using standard bit mutation (SBM) it is well known that OneMax is an easiest function with a unique optimum while Trap is a hardest. In this paper we extend the analysis of easiest function classes to the contiguous somatic hypermutation (CHM) operator used in artificial immune systems. We define a function MinBlocks and prove that it is an easiest function for the (1+1) EA using CHM, presenting both a runtime and a fixed-budget analysis. Since MinBlocks is, up to a factor of 2, a hardest function for standard bit mutations, we consider the effects of combining both operators into a hybrid algorithm. We rigorously prove that, by combining the advantages of k operators, several hybrid algorithmic schemes have optimal asymptotic performance on the easiest functions for each individual operator. In particular, the hybrid algorithms using CHM and SBM have optimal asymptotic performance on both OneMax and MinBlocks. We then investigate easiest functions for hybrid schemes and show that an easiest function for a hybrid algorithm is not just a trivial weighted combination of the respective easiest functions for each operator.
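    A rough sketch of one such hybrid scheme: each generation applies either standard bit mutation or a common variant of contiguous somatic hypermutation, chosen uniformly at random, within an elitist (1+1) loop on OneMax. The uniform operator choice and the CHM variant (flip a contiguous block of random start and length) are assumptions made for illustration; the paper's schemes and its MinBlocks function are defined precisely there.

```python
import random

def one_max(bits):
    # OneMax: an easiest function for the (1+1) EA with standard bit mutation.
    return sum(bits)

def sbm(bits, rng):
    # Standard bit mutation: flip each bit independently with probability 1/n.
    n = len(bits)
    return [b ^ (rng.random() < 1.0 / n) for b in bits]

def chm(bits, rng):
    # Contiguous somatic hypermutation (one common variant): pick a random start
    # and length, then invert that contiguous block.
    n = len(bits)
    start, length = rng.randrange(n), rng.randint(1, n)
    child = bits[:]
    for i in range(start, min(start + length, n)):
        child[i] ^= 1
    return child

def hybrid_one_plus_one(fitness, n, operators, seed=0, max_iters=10**6):
    # Elitist (1+1) scheme: each generation applies one operator chosen uniformly at random.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        if fitness(x) == n:
            return t
        y = rng.choice(operators)(x, rng)
        if fitness(y) >= fitness(x):
            x = y
    return None

print(hybrid_one_plus_one(one_max, 64, operators=[sbm, chm]))
```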