280 research outputs found

    Comma Selection Outperforms Plus Selection on OneMax with Randomly Planted Optima

    It is an ongoing debate whether and how comma selection in evolutionary algorithms helps to escape local optima. We propose a new benchmark function to investigate the benefits of comma selection: OneMax with randomly planted local optima, generated by frozen noise. We show that comma selection (the $(1,\lambda)$ EA) is faster than plus selection (the $(1+\lambda)$ EA) on this benchmark, in a fixed-target scenario, and for offspring population sizes $\lambda$ for which both algorithms behave differently. For certain parameters, the $(1,\lambda)$ EA finds the target in $\Theta(n \ln n)$ evaluations, with high probability (w.h.p.), while the $(1+\lambda)$ EA w.h.p. requires almost $\Theta((n \ln n)^2)$ evaluations. We further show that the advantage of comma selection is not arbitrarily large: w.h.p. comma selection outperforms plus selection at most by a factor of $O(n \ln n)$ for most reasonable parameter choices. We develop novel methods for analysing frozen noise and give powerful and general fixed-target results with tail bounds that are of independent interest.
    Comment: An extended abstract will be published at GECCO 202
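
    For readers unfamiliar with the two selection schemes being compared, here is a minimal Python sketch of the $(1,\lambda)$ EA (comma selection) and the $(1+\lambda)$ EA (plus selection) on plain OneMax. It does not implement the paper's frozen-noise planted-optima benchmark; the evaluation budget and function names are illustrative assumptions.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def mutate(x):
    """Standard bit mutation: flip each bit independently with probability 1/n."""
    n = len(x)
    return [b ^ (random.random() < 1.0 / n) for b in x]

def one_comma_lambda_ea(n, lam, f=onemax, max_evals=10**6):
    """(1,lambda) EA: the parent is always replaced by the best offspring,
    even when that offspring is worse (non-elitist comma selection)."""
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while f(x) < n and evals < max_evals:
        offspring = [mutate(x) for _ in range(lam)]
        evals += lam
        x = max(offspring, key=f)  # comma: the parent is discarded
    return evals

def one_plus_lambda_ea(n, lam, f=onemax, max_evals=10**6):
    """(1+lambda) EA: the parent survives unless some offspring is at least
    as good (elitist plus selection)."""
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while f(x) < n and evals < max_evals:
        offspring = [mutate(x) for _ in range(lam)]
        evals += lam
        best = max(offspring, key=f)
        if f(best) >= f(x):  # plus: the parent is kept on losses
            x = best
    return evals
```

    On plain OneMax the two variants behave almost identically once $\lambda$ is logarithmic in $n$ (compare the next entry); the frozen-noise benchmark is what separates them.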

    When the plus strategy performs better than the comma strategy - and when not

    Occasionally there have been long debates on whether to use elitist selection or not. In the present paper the simple $(1,\lambda)$ EA and $(1+\lambda)$ EA operating on $\{0,1\}^n$ are compared by means of a rigorous runtime analysis. It turns out that only values of $\lambda$ that are logarithmic in $n$ are interesting. An illustrative function is presented for which newly developed proof methods show that the $(1,\lambda)$ EA, where $\lambda$ is logarithmic in $n$, outperforms the $(1+\lambda)$ EA for any $\lambda$. For smaller offspring populations the $(1,\lambda)$ EA is inefficient on every function with a unique optimum, whereas for larger $\lambda$ the two randomized search heuristics behave almost equivalently.

    When move acceptance selection hyper-heuristics outperform Metropolis and elitist evolutionary algorithms and when not

    Selection hyper-heuristics (HHs) are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection HHs choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise standard unimodal benchmark functions from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper, we extend our understanding of the performance of HHs to the domain of multimodal optimisation by considering a Move Acceptance HH (MAHH) from the literature that can switch between elitist and non-elitist heuristics during the run. In essence, MAHH is a non-elitist search heuristic that differs from other search heuristics in the source of non-elitism. We first identify the range of parameters that allow MAHH to hillclimb efficiently and prove that it can optimise the standard hillclimbing benchmark function OneMax in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where MAHH outperforms elitist evolutionary algorithms and the well-known Metropolis non-elitist algorithm by quickly escaping local optima, and ones where it does not. Since MAHH is essentially a non-elitist random local search heuristic, the paper is of independent interest to researchers in the fields of artificial intelligence and randomised search heuristics.
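
    The sketch below illustrates the Move Acceptance Hyper-Heuristic idea described above: a random local search that, at each step, picks at random between a non-elitist all-moves acceptance operator and an elitist only-improving one. The single-bit-flip neighbourhood, the parameter value in the example, and the stopping rule are illustrative assumptions, not necessarily the paper's exact setup.

```python
import random

def mahh(f, n, p, max_iters=10**6):
    """Move Acceptance Hyper-Heuristic (sketch): at each step choose the
    ALLMOVES operator (accept any neighbour) with probability p, and the
    ONLYIMPROVING operator (accept strict improvements) otherwise."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        y = x[:]
        y[random.randrange(n)] ^= 1          # flip one uniformly random bit
        fy = f(y)
        if random.random() < p or fy > fx:   # ALLMOVES vs ONLYIMPROVING
            x, fx = y, fy
        if fx == n:                          # assumes a OneMax-style optimum of n
            break
    return x, fx

# Example: hill-climbing OneMax with a small acceptance probability p.
best, value = mahh(lambda z: sum(z), n=100, p=0.01)
```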

    Analyzing Social Network Structures in the Iterated Prisoner's Dilemma with Choice and Refusal

    The Iterated Prisoner's Dilemma with Choice and Refusal (IPD/CR) is an extension of the Iterated Prisoner's Dilemma with evolution that allows players to choose and to refuse their game partners. From individual behaviors, behavioral population structures emerge. In this report, we examine one particular IPD/CR environment and document the social network methods used to identify population behaviors found within this complex adaptive system. In contrast to the standard homogeneous population of nice cooperators, we have also found metastable populations of mixed strategies within this environment. In particular, the social networks of interesting populations and their evolution are examined.
    Comment: 37 pages, uuencoded gzip'd Postscript (1.1Mb when gunzip'd), also available via WWW at http://www.cs.wisc.edu/~smucker/ipd-cr/ipd-cr.htm
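
    As a hedged illustration of the game underlying IPD/CR, the sketch below shows one Prisoner's Dilemma round and a choice/refusal rule in which a player only offers or accepts a game whose expected payoff clears a refusal threshold. The payoff values and the threshold are illustrative, not the ones used in this report.

```python
# Payoff to the row player for one Prisoner's Dilemma round, with the
# conventional ordering T > R > P > S (here 5 > 3 > 1 > 0).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_round(move_a, move_b):
    """Return both players' payoffs for one round."""
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

def offers_game(expected, partner, threshold=1.6):
    """Choice/refusal sketch: offer (or accept) a game only if the running
    expected payoff against that partner is at least the refusal threshold;
    unknown partners default to the threshold, so they are not refused."""
    return expected.get(partner, threshold) >= threshold

# One round: mutual cooperation pays (3, 3); unilateral defection pays (5, 0).
print(play_round('C', 'C'), play_round('D', 'C'))
```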

    Application of Genetic Algorithm to The Job Assignment Problem with Dynamics Constraints

    Assigning a job to an individual without first evaluating the minimum cost of the work and identifying the right person for the task often results in delays or non-performance. The assignment problem entails assigning a specific person or resource to a specific task or job; the optimal result assigns one person to one job. The most common method for solving the assignment problem is the Hungarian method. In this paper, a Genetic Algorithm is applied to solve assignment problems and attain an optimal solution. The core task is the "N men – N jobs" problem, where the total cost of the assignments is minimised by allocating exactly one job to each individual. To solve this problem, a Genetic Algorithm (GA) with Partially Matched Crossover (PMX) is utilised as a specialised encoding scheme. The GA was evaluated alongside the Hungarian method, and the results clearly showed that it performed better than the Hungarian method.
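
    A minimal sketch of how a permutation-encoded GA with PMX can attack the N-men/N-jobs problem is given below; the population size, rates, truncation selection, and the example cost matrix are illustrative assumptions rather than the paper's settings.

```python
import random

def assignment_cost(perm, C):
    """Total cost when person i performs job perm[i]."""
    return sum(C[i][j] for i, j in enumerate(perm))

def pmx(p1, p2):
    """Partially Matched Crossover on permutations: copy a section from p1,
    fill the rest from p2, resolving duplicates via the p1<->p2 mapping."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    for i in list(range(a)) + list(range(b, n)):
        gene = p2[i]
        while gene in child[a:b]:            # conflict: follow the mapping
            gene = p2[p1.index(gene)]
        child[i] = gene
    return child

def ga_assignment(C, pop_size=50, gens=200, mut_rate=0.2):
    """Permutation GA: truncation selection from the fitter half, PMX
    crossover, swap mutation, and elitism (the two best survive)."""
    n = len(C)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: assignment_cost(p, C))
        new_pop = pop[:2]
        while len(new_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            child = pmx(p1, p2)
            if random.random() < mut_rate:   # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=lambda p: assignment_cost(p, C))

# Example: a 4-person / 4-job cost matrix (illustrative numbers).
C = [[9, 2, 7, 8], [6, 4, 3, 7], [5, 8, 1, 8], [7, 6, 9, 4]]
print(ga_assignment(C))   # likely [1, 0, 2, 3], i.e. total cost 2+6+1+4 = 13
```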

    Derivative-Free Optimization

    In many engineering applications it is common to find optimization problems where the cost function and/or constraints require complex simulations. Though it is often, but not always, theoretically possible in these cases to extract derivative information efficiently, the associated implementation procedures are typically non-trivial and time-consuming (e.g., adjoint-based methodologies). Derivative-free (non-invasive, black-box) optimization has lately received considerable attention within the optimization community, including the establishment of solid mathematical foundations for many of the methods considered in practice. In this chapter we describe some of the most conspicuous derivative-free optimization techniques. Our depiction concentrates first on local optimization, such as pattern-search techniques and other methods based on interpolation/approximation. Then we survey a number of global search methodologies, and finally give guidelines on constraint-handling approaches.
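
    As a concrete instance of the local pattern-search techniques surveyed in the chapter, here is a minimal compass-search sketch; the step-halving schedule, tolerance, and polling order are illustrative choices, not a prescription from the text.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iters=10**4):
    """Compass (pattern) search: poll the 2d axis directions around x and
    move to the first improving point; if no direction improves, halve the
    step. Derivative-free: only values of f are ever used."""
    x, fx, d = list(x0), f(x0), len(x0)
    for _ in range(max_iters):
        if step < tol:
            break
        improved = False
        for i in range(d):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5           # refine the mesh when polling fails
    return x, fx

# Example: minimise a smooth 2-D quadratic without derivatives.
xmin, fmin = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```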