
    Implications of long tails in the distribution of mutant effects

    Long-tailed distributions possess an infinite variance, yet a finite sample that is drawn from such a distribution has a finite variance. In this work we consider a model of a population subject to mutation, selection and drift. We investigate the implications of a long-tailed distribution of mutant allelic effects on the distribution of genotypic effects in a model with a continuum of allelic effects. While the analysis is confined to asexual populations, it does also have implications for sexual populations. We obtain analytical results for a selectively neutral population as well as one subject to selection. We supplement these analytical results with numerical simulations, to take into account genetic drift. We find that a long-tailed distribution of mutant effects may affect both the equilibrium and the evolutionary adaptive behaviour of a population.
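The abstract's opening observation — a distribution with infinite theoretical variance still yields a finite variance in any finite sample — is easy to demonstrate numerically. The sketch below draws from a Pareto distribution with tail index α = 1.5 (variance is infinite for α ≤ 2) via inverse-CDF sampling; the sampler, seed, and parameter values are illustrative, not taken from the paper.

```python
import random

def pareto_sample(alpha, n, seed=0):
    """Draw n samples from a Pareto(alpha) distribution (scale 1) by
    inverse-CDF sampling. For alpha <= 2 the theoretical variance is
    infinite, yet any finite sample has a finite empirical variance."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def sample_variance(xs):
    """Unbiased sample variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

xs = pareto_sample(alpha=1.5, n=10_000)
v = sample_variance(xs)
print(v)  # finite, but grows erratically as n increases
```

Re-running with larger n shows the instability the abstract alludes to: the sample variance stays finite but does not converge, being dominated by the few largest draws.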

    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the (1,λ) evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n) when λ is at least C ln n for some constant C > 0. For all values of λ ≥ C ln n, this performance is asymptotically best possible among all λ-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
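A minimal sketch of the kind of self-adaptive (1,λ) scheme described: the mutation rate is carried by the individual, locally mutated (here halved or doubled with equal probability) before each offspring is created, and inherited from the winning offspring. All constants, clamps, and names below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def self_adaptive_one_lambda_ea(n=50, lam=20, seed=1, max_evals=200_000):
    """Sketch of a self-adaptive (1,lambda) EA on OneMax. Each offspring
    first halves or doubles the parent's mutation rate (the 'local' rate
    mutation), then flips each bit independently with that rate. The best
    offspring, together with its rate, replaces the parent (comma
    selection). Returns (best fitness, fitness evaluations used)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    rate = 2.0 / n                           # illustrative initial rate
    evals = 0
    while sum(parent) < n and evals < max_evals:
        best, best_fit, best_rate = None, -1, rate
        for _ in range(lam):
            r = rate / 2 if rng.random() < 0.5 else rate * 2
            r = min(max(r, 1.0 / n), 0.25)   # keep the rate in a sane range
            child = [b ^ (rng.random() < r) for b in parent]
            evals += 1
            fit = sum(child)
            if fit > best_fit:
                best, best_fit, best_rate = child, fit, r
        parent, rate = best, best_rate
    return sum(parent), evals
```

With comma selection the fitness can temporarily decrease, but the endogenous rate tracks a useful value and the optimum is reached reliably for these sizes.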

    Modeling of Inertial Rate Sensor Errors Using Autoregressive and Moving Average (ARMA) Models

    In this chapter, the drift of a low-cost micro-electro-mechanical systems (MEMS) gyroscope is modeled by a time series model, namely the autoregressive moving average (ARMA) model. The optimal ARMA(2, 1) model is identified using the minimum value of the Akaike information criterion (AIC). In addition, an ARMA-model-based Sage-Husa adaptive fading Kalman filter algorithm (SHAFKF) is proposed for minimizing the drift and random noise of the MEMS gyroscope signal. The suggested algorithm operates in two stages: (i) an adaptive transitive factor (a1) is introduced into the predicted state error covariance for adaptation; (ii) the measurement noise covariance matrix is updated by another transitive factor (a2). The proposed algorithm is applied to MEMS gyroscope signals to reduce the drift and random noise in a static condition at room temperature. Allan variance (AV) analysis is used to identify and quantify the random noise sources of the MEMS gyro signal, and the performance of the suggested algorithm is analyzed using AV for the static signal. The experimental results demonstrate that the proposed algorithm performs better than CKF and a single-transitive-factor-based adaptive SHFKF algorithm for reducing drift and random noise in the static condition.
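The building blocks of this pipeline — an ARMA(2, 1) description of gyro drift noise, evaluated with Allan variance — can be sketched as follows. The AR/MA coefficients and noise level are made-up placeholders, not values fitted to a real MEMS sensor, and the Allan variance here is the simple non-overlapping estimator.

```python
import random

def simulate_arma21(n, phi=(0.6, -0.2), theta=0.3, sigma=0.01, seed=0):
    """Generate an ARMA(2,1) sequence as a stand-in for MEMS gyro drift:
        x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t + theta*e_{t-1},
    with e_t ~ N(0, sigma^2). Coefficients are illustrative only."""
    rng = random.Random(seed)
    x, e_prev = [0.0, 0.0], 0.0
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x.append(phi[0] * x[-1] + phi[1] * x[-2] + e + theta * e_prev)
        e_prev = e
    return x[2:]

def allan_variance(x, m):
    """Non-overlapping Allan variance for cluster size m:
    average the signal in blocks of m samples, then take half the mean
    squared difference of successive block averages."""
    means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return sum(diffs) / (2 * len(diffs))
```

Plotting `allan_variance(x, m)` against the cluster time for a range of m is what produces the AV curve used in the chapter to separate noise sources; identifying the actual ARMA order by AIC would be done with a fitting library rather than this hand-rolled simulator.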

    Evolution of Cooperation in Public Goods Games with Stochastic Opting-Out

    This paper investigates the evolution of strategic play where players drawn from a finite well-mixed population are offered the opportunity to play in a public goods game. All players accept the offer. However, due to the possibility of unforeseen circumstances, each player has a fixed probability of being unable to participate in the game, unlike similar models which assume voluntary participation. We first study how prescribed stochastic opting-out affects cooperation in finite populations. In the model, cooperation is favored by natural selection over both neutral drift and defection if the return on investment exceeds a threshold value defined solely by the population size, game size, and a player's probability of opting out. Ultimately, increasing the probability that each player is unable to fulfill her promise of participating in the public goods game facilitates natural selection of cooperators. We also use adaptive dynamics to study the coevolution of cooperation and opting-out behavior. Given rare mutations minutely different from the original population, an analysis based on adaptive dynamics suggests that over time the population will tend towards complete defection and non-participation, from where participating cooperators will stand a chance to emerge by neutral drift. Nevertheless, increasing the probability of non-participation decreases the rate at which the population tends towards defection when participating. Our work sheds light on how stochastic opting-out emerges in the first place and its role in the evolution of cooperation. Comment: 30 pages, 4 figures. This is one of the student project papers arising from the Mathematics REU program at Dartmouth, Summer 2017. See https://math.dartmouth.edu/~reu/ for more info. Comments are always welcome.
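One round of the payoff structure described — a public goods game in which each invited player independently fails to participate with some fixed probability — might be sketched like this. Function and parameter names are illustrative assumptions; the paper's precise payoff conventions (e.g. what happens when too few players show up) may differ.

```python
import random

def public_goods_round(strategies, p_out, r, c=1.0, rng=None):
    """One public-goods round with stochastic opting-out. Each invited
    player independently fails to participate with probability p_out.
    Cooperators ('C') among the participants invest c; the pot is
    multiplied by r and split equally among all participants. Returns
    one payoff per invited player (0 for non-participants)."""
    rng = rng or random.Random(0)
    present = [rng.random() >= p_out for _ in strategies]
    participants = [i for i, ok in enumerate(present) if ok]
    payoffs = [0.0] * len(strategies)
    if len(participants) < 2:
        return payoffs                      # assume no game with < 2 players
    pot = sum(c for i in participants if strategies[i] == "C")
    share = r * pot / len(participants)
    for i in participants:
        payoffs[i] = share - (c if strategies[i] == "C" else 0.0)
    return payoffs

# With p_out = 0 everyone participates: two cooperators, two defectors,
# multiplier r = 2 gives each participant a share of 2*2/4 = 1.
print(public_goods_round(["C", "C", "D", "D"], p_out=0.0, r=2.0))
```

Averaging such rounds over many draws of the participation coin flips is what turns this into the expected-payoff comparison (cooperation vs. defection vs. neutral drift) that the paper analyzes exactly.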

    Performance Envelopes of Adaptive Ensemble Data Stream Classifiers

    This dissertation documents a study of the performance characteristics of algorithms designed to mitigate the effects of concept drift on online machine learning. Several supervised binary classifiers were evaluated on their performance when applied to an input data stream with a non-stationary class distribution. The selected classifiers included ensembles that combine the contributions of their member algorithms to improve overall performance. These ensembles adapt to changing class definitions, known as “concept drift,” often present in real-world situations, by adjusting the relative contributions of their members. Three stream classification algorithms and three adaptive ensemble algorithms were compared to determine the capabilities of each in terms of accuracy and throughput. For each run of the experiment, the percentage of correct classifications was measured using prequential analysis, a well-established methodology in the evaluation of streaming classifiers. Throughput was measured in classifications performed per second as timed by the CPU clock. Two main experimental variables were manipulated to investigate and compare the range of accuracy and throughput exhibited by each algorithm under various conditions. The number of attributes in the instances to be classified and the speed at which the definitions of labeled data drifted were varied across six total combinations of drift-speed and dimensionality. The implications of the results are used to recommend improved methods for working with stream-based data sources. The typical approach to counteract concept drift is to update the classification models with new data. In the stream paradigm, classifiers are continuously exposed to new data that may serve as representative examples of the current situation. However, updating the ensemble classifier in order to maintain or improve accuracy can be computationally costly and will negatively impact throughput.
In a real-time system, this could lead to an unacceptable slow-down. The results of this research showed that, among several algorithms for reducing the effect of concept drift, adaptive decision trees maintained the highest accuracy without slowing down with respect to the no-drift condition. Adaptive ensemble techniques were also able to maintain reasonable accuracy in the presence of drift without much change in throughput. However, the overall throughput of the adaptive methods is low and may be unacceptable for extremely time-sensitive applications. The performance visualization methodology utilized in this study gives a clear and intuitive visual summary that allows system designers to evaluate candidate algorithms with respect to their performance needs.
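Prequential ("test-then-train") evaluation, the accuracy methodology used in the study, is simple to state in code: each arriving labeled instance first tests the current model and is then used to train it, so every example serves as an out-of-sample test exactly once. The model interface (predict/learn) and the toy majority-class learner below are illustrative assumptions, not a specific library's API.

```python
def prequential_accuracy(stream, model):
    """Prequential evaluation: for each (x, y) in the stream, test the
    current model on x, record whether it was right, then train on (x, y).
    `model` is assumed to expose predict(x) and learn(x, y)."""
    correct = total = 0
    for x, y in stream:
        if model.predict(x) == y:
            correct += 1
        total += 1
        model.learn(x, y)
    return correct / total if total else 0.0

class MajorityClass:
    """Toy learner that always predicts the most frequent label seen."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

acc = prequential_accuracy([(0, 1), (1, 1), (2, 1), (3, 0), (4, 1)],
                           MajorityClass())
print(acc)  # → 0.6
```

A drifting stream is handled by the same loop; accuracy computed this way dips whenever the class definitions shift and recovers as the model adapts, which is exactly the behavior the dissertation's drift-speed experiments measure.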