
    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood, also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We consider the (1+(λ,λ)) GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size λ can achieve and is asymptotically optimal also among all adaptive parameter choices.
    Comment: This is the full version of a paper that is to appear at GECCO 201
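
    As a concrete illustration of this kind of self-adjustment, the following Python sketch applies a one-fifth-style success rule to the offspring population size λ of a simplified (1+(λ,λ)) GA on OneMax: λ is decreased after an improving iteration and increased otherwise. The update factor F = 1.5, the rounding of λ, and the loop structure are illustrative assumptions, not the paper's exact algorithm or proof setting.

        import random

        def onemax(x):
            return sum(x)

        def self_adjusting_ga(n, F=1.5, max_evals=200000):
            # Simplified (1+(lambda,lambda)) GA with a one-fifth-style success rule
            # adapting lambda (illustrative sketch, not the authors' implementation).
            x = [random.randint(0, 1) for _ in range(n)]
            fx = onemax(x)
            lam, evals = 1.0, 0
            while fx < n and evals < max_evals:
                k = max(1, round(lam))
                p, c = k / n, 1.0 / k                             # mutation rate, crossover bias
                ell = sum(random.random() < p for _ in range(n))  # ell ~ Bin(n, p)
                # Mutation phase: k offspring, each flipping the same ell bit positions count.
                best_y, best_fy = None, -1
                for _ in range(k):
                    y = x[:]
                    for i in random.sample(range(n), ell):
                        y[i] = 1 - y[i]
                    fy = onemax(y)
                    evals += 1
                    if fy > best_fy:
                        best_y, best_fy = y, fy
                # Crossover phase: k biased crossovers between the parent and the best mutant.
                best_z, best_fz = x[:], fx
                for _ in range(k):
                    z = [best_y[i] if random.random() < c else x[i] for i in range(n)]
                    fz = onemax(z)
                    evals += 1
                    if fz > best_fz:
                        best_z, best_fz = z, fz
                # One-fifth-style success rule: shrink lambda on success, grow it on failure.
                if best_fz > fx:
                    x, fx = best_z, best_fz
                    lam = max(1.0, lam / F)
                else:
                    x, fx = best_z, best_fz   # accepting an equally good offspring is harmless
                    lam = min(float(n), lam * F ** 0.25)
            return evals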

    Players with limited memory

    This paper studies a model of memory. The model takes into account that memory capacity is limited and imperfect. We study how agents with such memory limitations, who have very little information about their choice environment, play games. We introduce the notion of a Limited Memory Equilibrium (LME) and show that play converges to an LME in every generic normal form game. Our characterization of the set of LMEs suggests that players with limited memory do (weakly) better in games than in decision problems. We also show that agents can do quite well even with severely limited memory, although severe limitations tend to make them behave cautiously.

    Analysis of Noisy Evolutionary Optimization When Sampling Fails

    In noisy evolutionary optimization, sampling is a common strategy to deal with noise. Under the sampling strategy, the fitness of a solution is evaluated multiple times independently (the number of evaluations is called the sample size), and its true fitness is then approximated by the average of these evaluations. Previous studies on sampling are mainly empirical. In this paper, we first investigate the effect of the sample size from a theoretical perspective. By analyzing the (1+1)-EA on the noisy LeadingOnes problem, we show that as the sample size increases, the running time can be reduced from exponential to polynomial, but then increases back to exponential. This suggests that a proper sample size is crucial in practice. Then, we investigate what strategies can work when sampling with any fixed sample size fails. By two illustrative examples, we prove that using parent or offspring populations can be better. Finally, we construct an artificial noisy example to show that when neither sampling nor populations are effective, adaptive sampling (i.e., sampling with an adaptive sample size) can work. This, for the first time, provides theoretical support for the use of adaptive sampling.
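
    A minimal sketch of the sampling strategy described above, assuming a hypothetical additive Gaussian noise model (the paper's noise model may differ): each selection decision of a (1+1)-EA on LeadingOnes compares the averages of m independent noisy evaluations.

        import random

        def leading_ones(x):
            # Number of leading 1-bits.
            count = 0
            for bit in x:
                if bit != 1:
                    break
                count += 1
            return count

        def noisy_fitness(x, sigma=1.0):
            # Illustrative additive Gaussian noise, chosen only for this sketch.
            return leading_ones(x) + random.gauss(0.0, sigma)

        def sampled_fitness(x, m, sigma=1.0):
            # The sampling strategy: average of m independent noisy evaluations.
            return sum(noisy_fitness(x, sigma) for _ in range(m)) / m

        def one_plus_one_ea(n, m, sigma=1.0, max_iters=100000):
            # (1+1)-EA with standard bit mutation; selection compares sampled fitness.
            x = [random.randint(0, 1) for _ in range(n)]
            for _ in range(max_iters):
                y = [1 - b if random.random() < 1.0 / n else b for b in x]
                if sampled_fitness(y, m, sigma) >= sampled_fitness(x, m, sigma):
                    x = y
                if leading_ones(x) == n:   # true fitness used only to stop the loop
                    break
            return x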

    A framework for deadlock detection in core ABS

    We present a framework for statically detecting deadlocks in a concurrent object-oriented language with asynchronous method calls and cooperative scheduling of method activations. Since this language features recursion and dynamic resource creation, deadlock detection is extremely complex, and state-of-the-art solutions either give imprecise answers or do not scale. In order to improve precision and scalability we propose a modular framework that allows several techniques to be combined. The basic component of the framework is a front-end inference algorithm that extracts abstract behavioural descriptions of methods, called contracts, which retain resource dependency information. This component is integrated with a number of possible different back-ends that analyse contracts and derive deadlock information. As a proof of concept, we discuss two such back-ends: (i) an evaluator that computes a fixpoint semantics and (ii) an evaluator using abstract model checking.
    Comment: Software and Systems Modeling, Springer Verlag, 201
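
    The sketch below illustrates only the general principle behind such analyses, under the strong simplification that contracts have already been reduced to pairwise wait-for dependencies between tasks: a deadlock is reported when the dependency relation contains a cycle. The helper has_deadlock and the pair representation are hypothetical and far coarser than the contracts and back-ends actually used by the framework.

        from collections import defaultdict

        def has_deadlock(dependencies):
            # dependencies: iterable of (waiter, holder) pairs, a crude stand-in
            # for the resource-dependency information carried by contracts.
            graph = defaultdict(set)
            for waiter, holder in dependencies:
                graph[waiter].add(holder)

            WHITE, GREY, BLACK = 0, 1, 2
            colour = defaultdict(int)

            def visit(node):
                colour[node] = GREY
                for succ in graph[node]:
                    if colour[succ] == GREY:            # back edge: cyclic wait
                        return True
                    if colour[succ] == WHITE and visit(succ):
                        return True
                colour[node] = BLACK
                return False

            return any(colour[n] == WHITE and visit(n) for n in list(graph))

        # A circular wait a -> b -> a is flagged, a simple chain is not.
        print(has_deadlock([("a", "b"), ("b", "a")]))   # True
        print(has_deadlock([("a", "b"), ("b", "c")]))   # False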