366 research outputs found

    Minimum variance unbiased estimation based on bootstrap iterations

    Practical computation of the minimum variance unbiased estimator (MVUE) is often a difficult, if not impossible, task, even though general statistical theory assures its existence under regularity conditions. We propose a new approach, based on infinitely many iterations of bootstrap bias correction, to calculating the MVUE approximately. A numerical example is given to illustrate the effectiveness of our new approach.
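
    A minimal sketch, in Python, of the kind of computation involved: one round of bootstrap bias correction applied to a deliberately biased estimator (the plug-in variance), followed by a nested round to suggest how the correction can be iterated. The estimator, resample counts and all names are illustrative; this is not the authors' algorithm for approximating the MVUE.

import numpy as np

rng = np.random.default_rng(0)

def plugin_var(x):
    # Biased plug-in variance estimate (divides by n rather than n - 1).
    return np.mean((x - np.mean(x)) ** 2)

def bias_correct(x, estimator, n_boot=2000):
    # One bootstrap bias-correction step:
    # corrected = 2 * estimate - mean of the bootstrap replicates.
    est = estimator(x)
    boot = [estimator(rng.choice(x, size=x.size, replace=True))
            for _ in range(n_boot)]
    return 2.0 * est - np.mean(boot)

x = rng.normal(size=30)
once = bias_correct(x, plugin_var)
# Correcting the already-corrected estimator gives the second iterate; the
# paper studies the limit of infinitely many such iterations.
twice = bias_correct(x, lambda z: bias_correct(z, plugin_var, n_boot=200),
                     n_boot=200)
print(once, twice)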

    On a new class of data depths for measuring representativeness

    Theme: Big Data and Statistical Computing. Session SS1R1 - Data Depth.
    Data depth provides a natural means to rank multivariate vectors with respect to an underlying multivariate distribution. The conventional notion of a depth function emphasizes a centre-outward ordering of data points. While useful for certain statistical applications, such emphasis has rendered most classical data depths insensitive to some distributional features, such as multimodality, of concern to other statistical applications. To get around the problem we introduce a new notion of data depth which seeks to rank data points according to their representativeness, rather than centrality, with respect to an underlying distribution of interest. We propose a general device for defining such depth functions, based essentially on a choice of goodness-of-fit test statistic. Our device calls for a new interpretation of depth more akin to the concept of density than location. It copes particularly well with multivariate data exhibiting multimodality. In addition to providing depth values for individual data points, the new class of depth functions derived from goodness-of-fit tests also extends naturally to provide depth values for subsets of data points, a concept new to the data-depth literature. Applications of the new depth functions are demonstrated with both simulated and real data.
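
    The abstract does not spell out the construction, so the Python toy below only illustrates the density-like interpretation of depth on a bimodal sample, using a kernel density estimate as a stand-in for a representativeness ordering and a Mahalanobis-type depth as the centre-outward benchmark; neither is the paper's goodness-of-fit-based device, and all names are illustrative.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Bimodal 2-d sample: a centre-outward depth ranks the empty region between
# the two modes as deepest, while a density-type ordering does not.
x = np.vstack([rng.normal(loc=-3.0, size=(200, 2)),
               rng.normal(loc=3.0, size=(200, 2))])

# Density-type stand-in for a "representativeness" depth (illustrative only).
kde = gaussian_kde(x.T)
print("density-type depth, midpoint:", kde(np.array([[0.0, 0.0]]).T)[0])
print("density-type depth, a mode:  ", kde(np.array([[-3.0, -3.0]]).T)[0])

# Classical centre-outward (Mahalanobis) depth for comparison.
centre, cov_inv = x.mean(axis=0), np.linalg.inv(np.cov(x.T))
def mahalanobis_depth(pt):
    d2 = (pt - centre) @ cov_inv @ (pt - centre)
    return 1.0 / (1.0 + d2)
print("Mahalanobis depth, midpoint:", mahalanobis_depth(np.array([0.0, 0.0])))
print("Mahalanobis depth, a mode:  ", mahalanobis_depth(np.array([-3.0, -3.0])))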

    Nonparametric confidence intervals based on extreme bootstrap percentiles

    Monte Carlo approximation of standard bootstrap confidence intervals relies on the drawing of a large number, B say, of bootstrap resamples. Conventional choice of B is often made on the order of 1,000. While this choice may prove to be more than sufficient for some cases, it may be far from adequate for others. A new approach is suggested to construct confidence intervals based on extreme bootstrap percentiles and an adaptive choice of B. It economizes on the computational effort in a problem-specific fashion, yielding stable confidence intervals of satisfactory coverage accuracy.
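
    A rough Python sketch of the idea, assuming that "extreme bootstrap percentiles" refers to near-extreme order statistics of the bootstrap replicates and that B is grown in batches until the interval endpoints stabilise; the batch size, cap, tolerance and stopping rule are naive illustrative choices, not the adaptive rule proposed in the paper.

import numpy as np

rng = np.random.default_rng(2)

def adaptive_extreme_percentile_ci(x, stat, batch=200, max_b=20000, tol=1e-3):
    # Grow the set of bootstrap replicates in batches and use near-extreme
    # percentiles of the replicates as interval endpoints, stopping once the
    # endpoints change by less than tol between batches.
    reps, prev = [], None
    while len(reps) < max_b:
        reps.extend(stat(rng.choice(x, size=x.size, replace=True))
                    for _ in range(batch))
        lo, hi = np.quantile(reps, [0.005, 0.995])
        if prev is not None and abs(lo - prev[0]) < tol and abs(hi - prev[1]) < tol:
            break
        prev = (lo, hi)
    return lo, hi, len(reps)

x = rng.exponential(size=50)
print(adaptive_extreme_percentile_ci(x, np.mean))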

    Bootstrap methods for lasso-type estimators under a moving-parameter framework

    The Conference program's website is located at http://isnpstat.org/firstISNPS/index.php?option=com_content&view=article&id=4&Itemid=3
    We study the distributions of Lasso-type regression estimators in a moving-parameter asymptotic framework, and consider various bootstrap methods for estimating them accordingly. We show, in particular, that the distribution functions of Lasso-type estimators, including even those possessing the oracle properties such as the adaptive Lasso and the SCAD, cannot be consistently estimated by the bootstraps uniformly over the space of the regression parameters, especially when some of the regre...
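
    For concreteness, the Python snippet below implements a plain residual bootstrap for the Lasso with scikit-learn, i.e. one of the bootstrap schemes whose uniform (in)consistency over the parameter space is at issue; the design, penalty level and resample count are arbitrary illustrative choices, and the snippet is not the paper's proposed methodology.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, 0.0, 1.0, 0.0])   # some coefficients exactly zero
y = X @ beta + rng.normal(size=n)

fit = Lasso(alpha=0.1).fit(X, y)
resid = y - fit.predict(X)
resid -= resid.mean()

# Residual bootstrap of the Lasso estimator.
boot = np.empty((500, p))
for b in range(500):
    y_star = fit.predict(X) + rng.choice(resid, size=n, replace=True)
    boot[b] = Lasso(alpha=0.1).fit(X, y_star).coef_
print(boot.std(axis=0))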

    Depth functions as measures of representativeness

    Data depth provides a natural means to rank multivariate vectors with respect to an underlying multivariate distribution. Most existing depth functions emphasize a centre-outward ordering of data points, which may not provide a useful geometric representation of certain distributional features, such as multimodality, of concern to some statistical applications. Such inadequacy motivates us to develop a device for ranking data points according to their “representativeness” rather than “centrality” with respect to an underlying distribution of interest. Derived essentially from a choice of goodness-of-fit test statistic, our device calls for a new interpretation of “depth” more akin to the concept of density than location. It copes particularly well with multivariate data exhibiting multimodality. In addition to providing depth values for individual data points, depth functions derived from goodness-of-fit tests also extend naturally to provide depth values for subsets of data points, a concept new to the data-depth literature.
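
    The extension of depth from single points to subsets of points can be illustrated, very loosely, by scoring a candidate subset with a goodness-of-fit statistic against a reference sample, as in the Python toy below; the two-sample Kolmogorov-Smirnov statistic is just a one-dimensional stand-in, not the construction used in the paper.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
reference = rng.normal(size=1000)           # reference sample
subset_a = rng.normal(size=30)              # representative subset
subset_b = rng.normal(loc=2.5, size=30)     # unrepresentative subset

def subset_depth(subset, reference):
    # Smaller goodness-of-fit discrepancy -> larger "depth" of the subset.
    return 1.0 - ks_2samp(subset, reference).statistic

print(subset_depth(subset_a, reference), subset_depth(subset_b, reference))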

    Hybrid resampling methods for confidence intervals: comment


    Distribution of likelihood-based p-values under a local alternative hypothesis

    We consider inference on a scalar parameter of interest in the presence of a nuisance parameter, using a likelihood-based statistic which is asymptotically normally distributed under the null hypothesis. Higher-order expansions are used to compare the repeated sampling distribution, under a general contiguous alternative hypothesis, of p-values calculated from the asymptotic normal approximation to the null sampling distribution of the statistic with the distribution of p-values calculated by bootstrap approximations. The results of comparisons in terms of power of different testing procedures under an alternative hypothesis are closely related to differences under the null hypothesis, specifically the extent to which testing procedures are conservative or liberal under the null. Empirical examples are given which demonstrate that higher-order asymptotic effects may be seen clearly in small-sample contexts.
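
    A small Python illustration of the comparison being made, using a toy exponential-mean example rather than the paper's general likelihood setting: the p-value from the asymptotic normal approximation to a Wald-type statistic is set against a parametric-bootstrap p-value computed under the null; the model, statistic and sample size are illustrative.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, mu0 = 15, 1.0                          # small sample; null value of the mean

def wald_stat(x, mu0):
    # Studentised statistic for the mean of an exponential sample.
    return np.sqrt(x.size) * (x.mean() - mu0) / x.std(ddof=1)

x = rng.exponential(scale=1.3, size=n)    # data generated under an alternative
t_obs = wald_stat(x, mu0)

p_asymptotic = 2 * norm.sf(abs(t_obs))    # normal approximation to the null law

# Parametric bootstrap under the null (exponential with mean mu0).
t_boot = np.array([wald_stat(rng.exponential(scale=mu0, size=n), mu0)
                   for _ in range(5000)])
p_bootstrap = np.mean(np.abs(t_boot) >= abs(t_obs))
print(p_asymptotic, p_bootstrap)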

    Ratewise efficient estimation of regression coefficients based on Lp procedures

    We consider the problem of estimation of regression coefficients under general classes of error densities without assuming classical regularity conditions. Optimal orders of convergence rates of regression-equivariant estimators are established and shown to be attained in general by Lp estimators based on judicious choices of p. We develop a procedure for choosing p adaptively to yield Lp estimators that converge at approximately optimal rates. The procedure consists of a special algorithm to automatically select the correct mode of Lp estimation and the m out of n bootstrap to consistently estimate the log mean squared error of the Lp estimator. Our proposed adaptive Lp estimator is compared with other adaptive and non-adaptive Lp estimators in a simulation study, which confirms the superiority of our procedure.
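
    A rough Python sketch of two of the ingredients: Lp regression fits computed by numerical minimisation over a small grid of p, and an m-out-of-n bootstrap measure of the estimator's variability used to pick p. The grid, the choice of m, the optimiser and the omission of any rate rescaling are illustrative simplifications; the paper's mode-selection algorithm and log-mean-squared-error estimation are not reproduced here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, m = 200, 60                         # m-out-of-n resample size (illustrative)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=2, size=n)   # heavy-tailed errors

def lp_fit(X, y, p):
    # Lp regression: minimise sum |y - Xb|^p numerically from a least-squares start.
    obj = lambda b: np.sum(np.abs(y - X @ b) ** p)
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]
    return minimize(obj, b0, method="Nelder-Mead").x

def m_out_of_n_spread(X, y, p, n_boot=100):
    # m-out-of-n bootstrap measure of the spread of the Lp estimator around
    # the full-sample fit (no rescaling to sample size n is attempted here).
    full = lp_fit(X, y, p)
    errs = [np.sum((lp_fit(X[idx], y[idx], p) - full) ** 2)
            for idx in (rng.choice(n, size=m, replace=True) for _ in range(n_boot))]
    return np.mean(errs)

grid = [1.0, 1.5, 2.0]
scores = {p: m_out_of_n_spread(X, y, p) for p in grid}
p_hat = min(scores, key=scores.get)
print(p_hat, lp_fit(X, y, p_hat))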

    Block bootstrap optimality and empirical block selection for sample quantiles with dependent data

    We establish a general theory of optimality for block bootstrap distribution estimation for sample quantiles under mild strong mixing conditions. In contrast to existing results, we study the block bootstrap for varying numbers of blocks. This corresponds to a hybrid between the subsampling bootstrap and the moving block bootstrap, in which the number of blocks is between 1 and the ratio of sample size to block length. The hybrid block bootstrap is shown to give theoretical benefits, and startling improvements in accuracy in distribution estimation in important practical settings. The conclusion that bootstrap samples should be of smaller size than the original sample has significant implications for computational efficiency and scalability of bootstrap methodologies with dependent data. Our main theorem determines the optimal number of blocks and block length to achieve the best possible convergence rate for the block bootstrap distribution estimator for sample quantiles. We propose an intuitive method for empirical selection of the optimal number and length of blocks, and demonstrate its value in a nontrivial example.
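
    A Python sketch of the hybrid scheme for the sample median, in which the number of blocks and the block length are chosen separately, so the bootstrap sample size (number of blocks times block length) can be smaller than the original sample size; the AR(1) data, block length and number of blocks are illustrative, and the paper's optimal and empirically selected choices are not reproduced.

import numpy as np

rng = np.random.default_rng(7)

def hybrid_block_bootstrap_median(x, block_len, n_blocks, n_boot=2000):
    # Resample n_blocks overlapping blocks of length block_len, giving a
    # bootstrap sample of size n_blocks * block_len (possibly < len(x)).
    starts = np.arange(x.size - block_len + 1)
    meds = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        meds[b] = np.median(np.concatenate([x[s:s + block_len] for s in chosen]))
    return meds

# Dependent data from an AR(1)-type recursion (illustrative).
n = 500
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + eps[t]

# 20 blocks of length 10: bootstrap samples of size 200, smaller than n = 500.
meds = hybrid_block_bootstrap_median(x, block_len=10, n_blocks=20)
print(np.std(meds - np.median(x)))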

    Asymptotic iterated bootstrap confidence intervals

    An iterated bootstrap confidence interval requires an additive correction to be made to the nominal coverage level of an uncorrected interval. Such correction is usually performed using a computationally intensive Monte Carlo simulation involving two nested levels of bootstrap sampling. Asymptotic expansions of the required correction and the iterated interval endpoints are used to provide two new computationally efficient methods for constructing an approximation to the iterated bootstrap confidence interval. The first asymptotic interval replaces the need for a second level of bootstrap sampling with a series of preliminary analytic calculations, which are readily automated, and from which an approximation to the coverage correction is easily obtained. The second interval directly approximates the endpoints of the iterated interval and yields, for the first time, the possibility of constructing an approximation to an iterated bootstrap confidence interval which does not require any resampling. The theoretical properties of the two intervals are considered. The computation required for their construction is detailed and has been coded in a fully automatic user-friendly Fortran program which may be obtained by anonymous ftp. A simulation study which illustrates their effectiveness on three examples is presented.
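
    For contrast with the analytic approximations described above, the Python sketch below shows the conventional two-level Monte Carlo calibration that they are designed to avoid: the inner bootstrap estimates the coverage of a nominal percentile interval, and the nominal level is then adjusted additively; the interval type, resample counts and adjustment rule are illustrative choices, not the paper's Fortran algorithm.

import numpy as np

rng = np.random.default_rng(8)

def percentile_ci(x, stat, level, n_boot):
    # Ordinary bootstrap percentile interval at the given nominal level.
    reps = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    alpha = 1.0 - level
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

def iterated_percentile_ci(x, stat, level=0.90, outer=500, inner=200):
    # Inner level: estimate the actual coverage of the nominal interval.
    theta_hat = stat(x)
    hits = 0
    for _ in range(outer):
        x_star = rng.choice(x, size=x.size, replace=True)
        lo, hi = percentile_ci(x_star, stat, level, inner)
        hits += (lo <= theta_hat <= hi)
    # Additive correction to the nominal level, clipped to stay usable.
    corrected = float(np.clip(level + (level - hits / outer), 0.5, 0.999))
    return percentile_ci(x, stat, corrected, 2000)

x = rng.exponential(size=25)
print(iterated_percentile_ci(x, np.mean))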