
    Does increasing the sample size always increase the accuracy of a consistent estimator?

    Birnbaum (1948) introduced the notion of peakedness about \theta of a random variable T, defined by P(|T - \theta| < \epsilon), \epsilon > 0. What seems not to be well known is that, for a consistent estimator T_n of \theta, the peakedness does not necessarily converge to 1 monotonically in n. This article recalls some known results on how the peakedness of the sample mean behaves as a function of n, and presents new results concerning the peakedness of the median and the interquartile range.
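    As an illustration (not part of the article), the peakedness P(|T_n - \theta| < \epsilon) of an estimator can be approximated by Monte Carlo simulation; the function and parameter names below are our own.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def peakedness(estimator, sampler, theta, n, eps, reps=20000):
        """Monte Carlo estimate of the peakedness P(|T_n - theta| < eps)."""
        hits = 0
        for _ in range(reps):
            t = estimator(sampler(n))       # compute T_n on one sample of size n
            hits += abs(t - theta) < eps
        return hits / reps

    # Sample mean of N(0, 1) data: here peakedness equals 2*Phi(eps*sqrt(n)) - 1,
    # which happens to be monotone in n; the article's point is that this
    # monotonicity can fail for other consistent estimators.
    for n in (5, 10, 50):
        print(n, peakedness(np.mean, lambda m: rng.normal(0.0, 1.0, m), 0.0, n, eps=0.5))
    ```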

    Some generalized subset selection procedures

    In this paper some generalizations of Gupta's subset selection procedure are discussed. Assume k (\geq 2) populations are given and that the associated random variables have distributions with unknown location parameters \theta_i, i = 1, ..., k. The ordered parameters are denoted by \theta_[1] \leq ... \leq \theta_[k]. On the basis of independent samples from these populations, Gupta (1965) selects a subset, as small as possible, that contains, with probability at least P*, the best population, i.e. the one with the largest location parameter, \theta_[k]. The two generalizations discussed in this paper are those of van der Laan (1991, 1992a, b) and of van der Laan and van Eeden (1993). Each is designed to give a smaller expected subset size, ES, than Gupta's procedure, whose ES is large when \theta_[k] is close to the other \theta_i's. The procedure of van der Laan (1992a) selects, with probability at least P*, an \epsilon-best population, i.e. one whose location parameter is at least \theta_[k] - \epsilon (with \epsilon \geq 0). Some efficiency results for normal populations, comparing van der Laan's procedure with Gupta's, are presented. The procedure of van der Laan and van Eeden (1993) uses a loss function and upper-bounds either the expected loss or the expected subset size, or both. The loss is taken as zero when the subset contains an \epsilon-best population and as an increasing function of \theta_[k] - \epsilon - max{\theta_i | i-th population in the subset} otherwise. Some properties of this procedure, for the case of two normal populations, are presented.
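    A minimal sketch (our illustration, not from the paper) of the shape of a Gupta-style rule: with a common sample size and known common variance, population i is retained exactly when its sample mean is within a constant d of the largest sample mean, where d is calibrated in advance so that the P* guarantee holds. The function name and data below are hypothetical, and the calibration of d is not shown.

    ```python
    def gupta_subset(sample_means, d):
        """Gupta-style rule: keep population i iff its sample mean is
        within d of the largest sample mean (d calibrated for P*)."""
        m = max(sample_means)
        return [i for i, x in enumerate(sample_means) if x >= m - d]

    # Hypothetical sample means for k = 3 populations.
    means = [1.2, 2.9, 3.1]
    print(gupta_subset(means, d=0.5))  # -> [1, 2]: populations 1 and 2 are retained
    ```

    Note how a larger d (needed when P* is high or \theta_[k] is close to the other \theta_i's) enlarges the selected subset, which is the expected-subset-size drawback that the generalizations above aim to reduce.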

    Subset selection for the best of two populations: tables of the expected subset size

    Assume two independent populations are given. The associated independent random variables have normal distributions with unknown expectations \theta_1 and \theta_2, respectively, and known common variance \sigma^2. The goal of Gupta's subset selection for two populations is to select a non-empty subset that contains the best population, in the sense of largest expectation, with confidence level P* (½ < P* < 1). In van der Laan and van Eeden (1992) a generalized selection goal was introduced and investigated. This report gives extended tables with values of the expected subset size.

    On probability distributions arising from points on a lattice
