
    Nonparametric Bayesian multiple testing for longitudinal performance stratification

    This paper describes a framework for flexible multiple hypothesis testing of autoregressive time series. The modeling approach is Bayesian, though a blend of frequentist and Bayesian reasoning is used to evaluate procedures. Nonparametric characterizations of both the null and alternative hypotheses are shown to be the key robustification step necessary to ensure reasonable Type-I error performance. The methodology is applied to part of a large database containing up to 50 years of corporate performance statistics on 24,157 publicly traded American companies, where the primary goal of the analysis is to flag companies whose historical performance is significantly different from that expected due to chance. Comment: Published at http://dx.doi.org/10.1214/09-AOAS252 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
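    The paper's actual machinery is nonparametric Bayesian, but the screening task it describes can be illustrated with a much simpler frequentist analogue: test each series for a mean that differs from chance, with an AR(1) correction for autocorrelation, and control the false discovery rate across series with Benjamini-Hochberg. The data, the AR(1) z-test, and the FDR level below are hypothetical stand-ins, not the paper's procedure.

```python
# Hypothetical sketch: flag autocorrelated series whose long-run mean
# differs from zero ("chance"), with multiplicity control via
# Benjamini-Hochberg. Not the paper's nonparametric Bayesian method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ar1_mean_test(y):
    """p-value for H0: the long-run mean of an AR(1) series is zero."""
    n = len(y)
    yc = y - y.mean()
    rho = (yc[1:] @ yc[:-1]) / (yc @ yc)        # lag-1 autocorrelation
    rho = np.clip(rho, -0.99, 0.99)
    # Large-sample variance of the sample mean under AR(1) dependence.
    var_mean = yc.var(ddof=1) / n * (1 + rho) / (1 - rho)
    z = y.mean() / np.sqrt(var_mean)
    return 2 * stats.norm.sf(abs(z))

# 500 hypothetical "companies": 25 genuine signals, 475 pure noise.
series = [0.3 * (i < 25) + rng.standard_normal(200) for i in range(500)]
pvals = np.array([ar1_mean_test(y) for y in series])

# Benjamini-Hochberg step-up procedure at FDR level 0.10.
alpha, m = 0.10, len(pvals)
order = np.argsort(pvals)
passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(f"flagged {k} of {m} series at FDR {alpha}")
```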

    Stereotype Threat, Self-Affirmation, and Women's Statistics Performance

    Stereotype threat (fear of confirming a negative group stereotype, which in turn can inhibit academic performance) has been implicated in the gender gap observed in the field of mathematics. Even though stereotype threat depresses women's performance, much research has reported interventions that ameliorate its negative effects. The current study investigated stereotype threat specifically in statistics, an unexplored area in the research literature, and the alleviating effects of self-affirmation. Participants in three conditions (control, stereotype threat, stereotype threat + affirmation) completed a statistics test. In both stereotype threat conditions participants were given a verbal prime to induce stereotype threat, but only the stereotype threat + affirmation condition was given the affirmation task. The predictions that stereotype threat would depress women's statistics performance and that self-affirmation would minimize stereotype threat were not supported. How a performance expectation relates to a successful stereotype threat prime is discussed, as are study limitations and directions for future research.
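    To make the three-condition design concrete, here is a minimal sketch of the kind of between-groups comparison it implies: a one-way ANOVA on test scores across the control, stereotype threat, and threat + affirmation groups. The scores are fabricated purely for illustration; they are not the study's data.

```python
# Hypothetical one-way ANOVA across the study's three conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(75, 10, 30)        # fabricated test scores
threat = rng.normal(75, 10, 30)         # drawn from one distribution,
threat_affirm = rng.normal(75, 10, 30)  # mimicking the reported null effect

f_stat, p_value = stats.f_oneway(control, threat, threat_affirm)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # nonsignificant, as expected
```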

    Order Statistics Based List Decoding Techniques for Linear Binary Block Codes

    Order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the complex Gaussian elimination. The probability of the test error patterns in the decoding list is derived. The trade-off between bit error rate performance and decoding complexity of the proposed decoding algorithms is studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both bit error rate performance and decoding complexity. Comment: 17 pages, 2 tables, 6 figures; submitted to IEEE Transactions on Information Theory.
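    For context on the baseline that the proposed schemes generalize, here is a hedged sketch of order-0 order statistics decoding for a binary linear block code: rank positions by reliability, row-reduce the generator matrix over GF(2) so the most reliable independent positions carry the information bits, hard-decide those positions, and re-encode. The segmentation and test-error-pattern machinery of the paper is omitted, and the (7,4) Hamming code with BPSK over AWGN is an illustrative assumption.

```python
# Minimal order-0 OSD sketch (no test error patterns, no segmentation).
import numpy as np

def gf2_row_reduce(G, col_order):
    """Row-reduce G over GF(2), pivoting on columns in the given order.
    Returns the reduced matrix and the chosen pivot (basis) columns."""
    G = G.copy()
    k = G.shape[0]
    pivots, row = [], 0
    for col in col_order:
        if row == k:
            break
        hits = np.nonzero(G[row:, col])[0]
        if hits.size == 0:
            continue                        # dependent column, skip it
        G[[row, row + hits[0]]] = G[[row + hits[0], row]]
        for r in range(k):
            if r != row and G[r, col]:
                G[r] ^= G[row]              # eliminate over GF(2)
        pivots.append(col)
        row += 1
    return G, pivots

def osd0_decode(G, llr):
    """Hard-decide the k most reliable independent positions, re-encode."""
    order = np.argsort(-np.abs(llr))        # most reliable positions first
    Gr, basis = gf2_row_reduce(G, order)
    hard = (llr < 0).astype(np.uint8)       # bitwise hard decisions
    return (hard[basis] @ Gr) % 2           # candidate codeword

# Example: (7,4) Hamming code, one noisy BPSK observation.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]], dtype=np.uint8)
rng = np.random.default_rng(2)
cw = (np.array([1, 0, 1, 1], dtype=np.uint8) @ G) % 2
llr = (1.0 - 2.0 * cw) * 4 + rng.normal(0, 1, 7)   # BPSK LLRs + noise
print("sent:   ", cw)
print("decoded:", osd0_decode(G, llr))
```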

    Statistics-based adaptive non-uniform crossover for genetic algorithms

    Copyright © 2002 University of Birmingham. Through its population, a genetic algorithm (GA) implicitly maintains statistics about the search space. These implicit statistics can be used explicitly to enhance the GA's performance. Inspired by this idea, a statistics-based adaptive non-uniform crossover, called SANUX, has been proposed. SANUX uses the allele statistics at each locus to adaptively calculate the swapping probability of that locus for crossover. A simple triangular function has been used to calculate the swapping probability. In this paper two different functions, the trapezoid and exponential functions, are investigated for SANUX instead of the triangular function. The experimental results show that both functions further improve the performance of SANUX across a typical set of GA test problems.
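    As a concrete illustration of the mechanism described above, the sketch below measures the frequency of ones at each locus over the population and maps it through a triangular function to a per-locus swapping probability. The population model, the parameters, and the orientation of the triangular mapping chosen here (swapping most where a locus is still undecided) are assumptions for illustration, not the paper's exact settings.

```python
# Illustrative SANUX-style crossover: per-locus swap probabilities are
# derived from population allele statistics via a triangular function.
# The mapping's orientation is an assumption (see lead-in above).
import numpy as np

def triangular(freq_ones, p_max=0.5):
    """Swap probability peaks at an undecided locus (freq ~ 0.5) and
    falls toward 0 as the population converges at that locus."""
    return p_max * (1.0 - 2.0 * np.abs(freq_ones - 0.5))

def sanux_crossover(parent_a, parent_b, population, rng):
    freq_ones = population.mean(axis=0)        # allele statistics per locus
    p_swap = triangular(freq_ones)             # adaptive, non-uniform
    mask = rng.random(parent_a.size) < p_swap  # loci to exchange
    child_a = np.where(mask, parent_b, parent_a)
    child_b = np.where(mask, parent_a, parent_b)
    return child_a, child_b

rng = np.random.default_rng(3)
pop = rng.integers(0, 2, size=(50, 20))        # 50 binary chromosomes
c1, c2 = sanux_crossover(pop[0], pop[1], pop, rng)
print(c1)
print(c2)
```

    Substituting a trapezoid or exponential shape for `triangular` corresponds to the two variants the paper investigates.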

    Reconstruction of photon statistics using low-performance photon counters

    The output of a photodetector consists of a current pulse whose charge has the statistical distribution of the actual photon numbers convolved with a Bernoulli distribution. Photodetectors are characterized by a nonunit quantum efficiency, i.e. not all photons lead to a charge, and by a finite resolution, i.e. different numbers of detected photons lead to discriminable values of the charge only up to a maximum value. We present a detailed comparison, based on Monte Carlo simulated experiments and real data, among the performances of detectors with different upper limits of counting capability. In our scheme the inversion of the Bernoulli convolution is performed by maximum-likelihood methods assisted by measurements taken at different quantum efficiencies. We show that detectors that are only able to discriminate between zero, one, and more than one detected photons are generally enough to provide a reliable reconstruction of the photon statistics for single-peaked distributions, while detectors with higher resolution limits do not lead to further improvements. In addition, we demonstrate that, for semiclassical states, even on/off detectors are enough to provide a good reconstruction. Finally, we show that a reliable reconstruction of multi-peaked distributions requires either higher quantum efficiency or better capability in discriminating high numbers of detected photons. Comment: 8 pages, 3 figures.
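    The inversion described above can be illustrated with a standard expectation-maximization (EM) iteration for maximum-likelihood inversion of the Bernoulli convolution, pooling click statistics recorded at several quantum efficiencies; here the detector resolves only 0, 1, and "2 or more" clicks. This is an illustrative sketch under those assumptions, not the authors' code.

```python
# EM/maximum-likelihood inversion of the Bernoulli convolution for a
# resolution-limited counter, using data at several efficiencies.
import math
import numpy as np
from scipy.special import comb

N = 30                                   # photon-number cutoff
M = 2                                    # detector resolves 0, 1, ">= 2"

def bernoulli_matrix(eta):
    """B[m, n] = P(m clicks | n photons) at efficiency eta; the last
    row lumps together all outcomes with m >= M (finite resolution)."""
    n = np.arange(N + 1)
    B = np.array([comb(n, m) * eta**m * (1 - eta)**(n - m)
                  for m in range(M)])
    return np.vstack([B, 1.0 - B.sum(axis=0)])

def em_reconstruct(freqs, etas, iters=2000):
    """Reconstruct photon statistics from click frequencies measured at
    several quantum efficiencies via a multiplicative EM iteration."""
    p = np.full(N + 1, 1.0 / (N + 1))    # flat starting distribution
    Bs = [bernoulli_matrix(eta) for eta in etas]
    for _ in range(iters):
        update = np.zeros_like(p)
        for f, B in zip(freqs, Bs):
            q = np.maximum(B @ p, 1e-12)  # predicted click statistics
            update += (f / q) @ B
        p *= update / len(etas)           # preserves normalization
    return p

# Synthetic single-peaked test: Poissonian statistics with mean 2.
true = np.array([math.exp(-2) * 2.0**n / math.factorial(n)
                 for n in range(N + 1)])
true /= true.sum()                        # renormalize after truncation
etas = [0.3, 0.5, 0.7]
freqs = [bernoulli_matrix(eta) @ true for eta in etas]   # ideal data
est = em_reconstruct(freqs, etas)
print("reconstructed mean photon number:", (np.arange(N + 1) * est).sum())
```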

    The Foundation Performance Dashboard: Vital Statistics for Social Impact

    Boards and foundation leadership rarely have a clear, consistent, and comprehensive picture of their foundation's performance. Interestingly, this situation persists despite the fact that nearly 60% of foundation CEOs would like more board involvement in reviewing the foundation's philanthropic mission and effectiveness. Our experience suggests that this conundrum results from a fundamental uncertainty: foundations are unsure how to bridge the chasm between the readily available, concise metrics for investment performance and the much more complex, expensive, and subjective data from internal operations and program evaluations.

    Growth or decline in the Church of England during the Decade of Evangelism: did the churchmanship of the bishop matter?

    The Decade of Evangelism occupied the attention of the Church of England throughout the 1990s. The present study employs the statistics routinely published by the Church of England to assess two matters: the extent to which these statistics suggest that the 43 individual dioceses finished the decade in a stronger or weaker position than they had entered it, and the extent to which, according to these statistics, the performance of dioceses led by bishops shaped in the Evangelical tradition differed from the performance of dioceses led by bishops shaped in the Catholic tradition. The data demonstrated that the majority of dioceses were performing less effectively at the end of the decade than at the beginning, in terms of a range of membership statistics, and that the rate of decline varied considerably from one diocese to another. The only exception to the trend was the diocese of London, which experienced some growth. The data also demonstrated that little depended on the churchmanship of the diocesan bishop in shaping diocesan outcomes on the performance indicators employed in the study.

    New HOS-based parameter estimation methods for speech recognition in noisy environments

    The problem of speech recognition in noisy environments is addressed. Often, a recognition system is used in a noisy environment and there is no possibility of training it with noisy samples. Classical speech analysis techniques are based on second-order statistics, and their performance decreases dramatically when noise is present in the signal under analysis. New methods based on higher-order statistics (HOS) are applied in a recognition system and compared against the autocorrelation method. Cumulant-based methods show better performance than autocorrelation-based methods for low SNR.
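    A hedged sketch of why HOS help here: third-order cumulants of Gaussian noise vanish, so AR parameters estimated from a third-order cumulant slice are asymptotically insensitive to additive Gaussian noise, whereas the classical autocorrelation (Yule-Walker) estimate is biased by it. The AR(2) model, skewed innovations, and lag choices below are illustrative assumptions, not the paper's exact estimators.

```python
# Compare autocorrelation-based and cumulant-based AR estimation in
# additive Gaussian noise. Illustrative sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(4)
p = 2
a_true = np.array([-1.5, 0.7])       # x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + w_t

# AR(2) driven by skewed (non-Gaussian) innovations, observed in noise.
n = 200_000
w = rng.exponential(1.0, n) - 1.0    # zero mean, nonzero third cumulant
x = np.zeros(n)
for t in range(2, n):
    x[t] = -a_true[0] * x[t-1] - a_true[1] * x[t-2] + w[t]
y = x + rng.normal(0.0, x.std(), n)  # additive Gaussian noise, ~0 dB SNR
y -= y.mean()

def autocorr_ar(y, p):
    """Classical Yule-Walker (autocorrelation method) AR estimate."""
    r = np.array([np.mean(y[:len(y)-k] * y[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, -r[1:])

def cumulant_ar(y, p, M=10):
    """AR estimate from the diagonal third-order cumulant slice
    d(tau) = E[y_t * y_{t+tau}^2], via the recursion
    sum_{i=0..p} a_i d(i - m) = 0 for m = 1..M, with a_0 = 1."""
    def d(tau):
        if tau >= 0:
            return np.mean(y[:len(y)-tau] * y[tau:] ** 2)
        return np.mean(y[-tau:] * y[:len(y)+tau] ** 2)
    A = np.array([[d(i - m) for i in range(1, p + 1)]
                  for m in range(1, M + 1)])
    b = -np.array([d(-m) for m in range(1, M + 1)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

print("true coefficients:", a_true)
print("autocorrelation:  ", autocorr_ar(y, p))  # biased by the noise
print("cumulant-based:   ", cumulant_ar(y, p))  # typically much closer
```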