
    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    Full text link
    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood, also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the (1+(λ,λ)) GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size λ can achieve and is asymptotically optimal also among all adaptive parameter choices.
    Comment: This is the full version of a paper that is to appear at GECCO 201
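    The self-adjustment scheme described above can be sketched as follows. This is a simplified illustration applied to a plain (1+λ)-style hill climber on OneMax rather than the actual (1+(λ,λ)) GA; the update factor F = 1.5 and the growth exponent 1/4 are illustrative choices, not constants taken from the paper.

```python
import random

def one_max(x):
    """OneMax benchmark: the number of one-bits in the string."""
    return sum(x)

def self_adjusting_ea(n, F=1.5, max_gens=100_000):
    """Illustrative (1+lam)-style EA whose offspring population size lam
    follows the one-fifth success rule: shrink lam after a successful
    generation, grow it slowly after a failure, so that roughly one
    generation in five succeeds.  A simplified stand-in for the
    (1+(lam,lam)) GA analysed in the paper, not its actual pseudocode."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    lam = 1.0
    gens = 0
    while fx < n and gens < max_gens:
        gens += 1
        best, fbest = None, fx
        for _ in range(max(1, round(lam))):
            # standard bit mutation with rate 1/n
            y = [b ^ 1 if random.random() < 1 / n else b for b in x]
            fy = one_max(y)
            if fy > fbest:
                best, fbest = y, fy
        if best is not None:              # success: decrease lam
            x, fx = best, fbest
            lam = max(1.0, lam / F)
        else:                             # failure: increase lam slowly
            lam = min(n, lam * F ** 0.25)
    return fx
```

    The asymmetric update (divide by F on success, multiply by F^(1/4) on failure) is what makes the success rate settle near one fifth: at equilibrium, one success balances four failures.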

    New Directions in Cloud Programming

    Full text link
    Nearly twenty years after the launch of AWS, it remains difficult for most developers to harness the enormous potential of the cloud. In this paper we lay out an agenda for a new generation of cloud programming research aimed at bringing research ideas to programmers in an evolutionary fashion. Key to our approach is a separation of distributed programs into a PACT of four facets: Program semantics, Availability, Consistency and Targets of optimization. We propose to migrate developers gradually to PACT programming by lifting familiar code into our more declarative level of abstraction. We then propose a multi-stage compiler that emits human-readable code at each stage that can be hand-tuned by developers seeking more control. Our agenda raises numerous research challenges across multiple areas including language design, query optimization, transactions, distributed consistency, compilers and program synthesis.

    Fast Mutation in Crossover-based Algorithms

    Full text link
    The heavy-tailed mutation operator proposed in Doerr, Le, Makhmara, and Nguyen (GECCO 2017), called fast mutation to agree with the previously used language, has so far been proven advantageous only in mutation-based algorithms. There, it can relieve the algorithm designer from finding the optimal mutation rate and nevertheless obtain a performance close to the one that the optimal mutation rate gives. In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. For the (1+(λ,λ)) genetic algorithm optimizing the OneMax benchmark function, we show that with a heavy-tailed mutation rate a linear runtime can be achieved. This is asymptotically faster than what can be obtained with any static mutation rate, and is asymptotically equivalent to the runtime of the self-adjusting parameter choice of the (1+(λ,λ)) genetic algorithm. This result is complemented by an empirical study which shows the effectiveness of fast mutation also on random satisfiable Max-3SAT instances.
    Comment: This is a version of the same paper presented at GECCO 2020, completed with the proofs which were missing because of the page limit
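    The heavy-tailed mutation operator can be sketched as follows: a mutation strength α is drawn from a power-law distribution and each bit is then flipped with probability α/n. The exponent β = 1.5 and the support {1, …, n/2} are illustrative choices commonly seen in the fast-mutation literature; the paper's exact distribution may differ.

```python
import random

def sample_heavy_tailed(n, beta=1.5):
    """Sample a mutation strength alpha in {1, ..., n//2} from a
    power-law distribution P(alpha) proportional to alpha^(-beta).
    beta and the support are illustrative parameters, not the
    paper's exact constants."""
    support = range(1, n // 2 + 1)
    weights = [a ** (-beta) for a in support]
    return random.choices(support, weights=weights)[0]

def fast_mutate(x, beta=1.5):
    """Standard bit mutation with a heavy-tailed rate alpha/n:
    occasionally alpha is large, so many bits flip at once."""
    n = len(x)
    alpha = sample_heavy_tailed(n, beta)
    return [b ^ 1 if random.random() < alpha / n else b for b in x]
```

    The heavy tail is the point: most rounds behave like rate ≈ 1/n, but large strengths are still sampled with polynomial (not exponentially small) probability, which is what spares the designer from tuning a single rate.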

    Nonparametric Identification and Estimation of Multi-Unit, Sequential, Oral, Ascending-Price Auctions With Asymmetric Bidders

    Get PDF
    Within the independent private-values paradigm, we derive the data-generating process of the winning bid for the last unit sold at multi-unit sequential English auctions when bidder valuations are draws from different distributions; i.e., in the presence of asymmetries. When the identity of the winner as well as the number of units won by each bidder in previous stages of the auction are observed, we demonstrate nonparametric identification and then propose two estimation strategies, one based on the empirical distribution function of winning bids for the last unit sold and the other based on approximation methods using orthogonal polynomials. We apply our methods to daily data from fish auctions held in Grenå, Denmark. For single-unit supply, we use our estimates to compare the revenues a seller could expect to earn were a Dutch auction employed instead.
    Keywords: asymmetric, multi-unit, sequential, oral, ascending-price fish auctions; Dutch auctions; nonparametric identification and estimation
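    The first estimation strategy rests on the empirical distribution function of observed winning bids, which can be sketched as follows; this is only the generic nonparametric first step, not the paper's full estimator.

```python
import bisect

def ecdf(samples):
    """Empirical distribution function of observed winning bids:
    F_hat(b) is the fraction of sample bids less than or equal to b.
    Returns a callable step function."""
    xs = sorted(samples)
    n = len(xs)
    def F(b):
        # number of sorted samples <= b, divided by the sample size
        return bisect.bisect_right(xs, b) / n
    return F
```

    By the Glivenko-Cantelli theorem this step function converges uniformly to the true distribution of winning bids, which is what makes it a natural nonparametric building block.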

    Pseudo-NK: an Enhanced Model of Complexity

    Get PDF
    This paper is based on the acknowledgment that NK models are an extremely useful tool for representing and studying the complexity stemming from interactions among components of a system. For this reason NK models have been applied in many domains, such as Organizational Sciences and Economics, as a simple and powerful tool for the representation of complexity. However, the paper suggests that NK suffers from unnecessary limitations and difficulties due to its peculiar implementation, originally devised for biological phenomena. We suggest that it is possible to devise alternative implementations of NK that, though maintaining the core aspects of the NK model, remove its major limitations to applications in new domains. The paper proposes one such model, called the pseudo-NK (pNK) model, which we describe and test. The proposed model appears to be able to replicate most, if not all, of the properties of standard NK models, but also to offer wider possibilities. Namely, pNK uses real-valued (instead of binary) dimensions forming the landscape and allows for gradual levels of interaction among components (instead of presence-absence). These extensions make it possible to maintain the original NK approach (and therefore compatibility with former results) and extend the application to further domains, where the limitations posed by NK are more striking.
    Keywords: NK model; simulation models; complexity; interactions
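    For context, a minimal version of the standard binary NK model that pNK generalizes can be sketched as follows; the neighbourhood construction and random lookup tables are one common implementation, not the paper's own code.

```python
import itertools
import random

def make_nk_landscape(N, K, seed=0):
    """Standard binary NK model: each of the N loci contributes a
    fitness component that depends on its own state and the states of
    K other randomly chosen loci; each contribution is an i.i.d.
    uniform lookup.  Returns a fitness function over length-N
    0/1 tuples."""
    rng = random.Random(seed)
    # locus i interacts with itself plus K distinct other loci
    neighbours = [[i] + rng.sample([j for j in range(N) if j != i], K)
                  for i in range(N)]
    # one random contribution per (locus, sub-configuration) pair
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]
    def fitness(x):
        return sum(tables[i][tuple(x[j] for j in neighbours[i])]
                   for i in range(N)) / N
    return fitness
```

    Raising K densifies the interactions and makes the landscape more rugged; pNK's contribution, per the abstract, is to replace the binary loci with real-valued dimensions and the all-or-nothing interactions with gradual ones.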

    IRaPPA: information retrieval based integration of biophysical models for protein assembly selection

    Get PDF
    Motivation: In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the design of drugs which prevent or mimic the interaction.
    Results: Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native appearing in the top 10 solutions for up to 50% of complexes benchmarked, and up to 70% in the top 100.
    Availability and Implementation: IRaPPA has been implemented in the SwarmDock server (http://bmm.crick.ac.uk/~SwarmDock/), the pyDock server (http://life.bsc.es/pid/pydockrescoring/) and the ZDOCK server (http://zdock.umassmed.edu/), with code available on request.
    Contact: [email protected]
    Supplementary information: Supplementary data are available at Bioinformatics online.
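    One of the electoral-voting methods commonly adapted for this kind of rank aggregation is the Borda count, sketched below on hypothetical pose identifiers; this is a generic illustration of combining several rankings of the same docked poses, not IRaPPA's actual scoring scheme.

```python
def borda_aggregate(rankings):
    """Aggregate several rankings of the same candidate poses with a
    Borda count: each input ranking awards (n - position) points to
    each candidate, and candidates are re-ranked by total points.
    `rankings` is a list of lists, each a permutation of the same
    candidates, best first."""
    n = len(rankings[0])
    scores = {cand: 0 for cand in rankings[0]}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] += n - pos
    # highest total score first
    return sorted(scores, key=scores.get, reverse=True)
```

    The appeal for pose selection is that each biophysical property only has to get the relative order roughly right; the aggregation rewards poses that rank consistently well across properties.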

    When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument

    Full text link
    Hillclimbing is an essential part of any optimization algorithm. An important benchmark for hillclimbing algorithms on pseudo-Boolean functions f: {0,1}^n → ℝ are (strictly) monotone functions, on which a surprising number of hillclimbers fail to be efficient. For example, the (1+1) Evolutionary Algorithm is a standard hillclimber which flips each bit independently with probability c/n in each round. Perhaps surprisingly, this algorithm shows a phase transition: it optimizes any monotone pseudo-Boolean function in quasilinear time if c < 1, but there are monotone functions for which the algorithm needs exponential time if c > 2.2. So far it was unclear whether the threshold is at c = 1. In this paper we show how Moser's entropy compression argument can be adapted to this situation; that is, we show that a long runtime would allow us to encode the random steps of the algorithm with fewer bits than their entropy. Thus there exists a c_0 > 1 such that for all 0 < c ≤ c_0 the (1+1) Evolutionary Algorithm with rate c/n finds the optimum in O(n log² n) steps in expectation.
    Comment: 14 pages, no figures
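    The (1+1) Evolutionary Algorithm described above can be sketched as follows; the stopping test (the all-ones string, which is the optimum of every strictly monotone function) and the iteration cap are illustrative choices for the sketch.

```python
import random

def one_plus_one_ea(f, n, c=1.0, max_iters=1_000_000):
    """(1+1) Evolutionary Algorithm with mutation rate c/n: flip each
    bit independently with probability c/n and accept the offspring if
    it is at least as good.  For strictly monotone f the optimum is
    the all-ones string, which we use as the stopping test."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        if sum(x) == n:          # reached the all-ones optimum
            break
        y = [b ^ 1 if random.random() < c / n else b for b in x]
        if f(y) >= f(x):         # elitist acceptance
            x = y
    return x
```

    The phase transition in the abstract concerns exactly the constant c in this one-line mutation step: below the threshold the quasilinear runtime holds for every monotone f, above c > 2.2 some monotone functions force exponential time.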