Resampling Methods for Sample Surveys
Application of the bootstrap in sample survey settings presents considerable practical and conceptual difficulties and various potential solutions have recently been proffered in the statistical literature. This paper provides a critical review of these methods along with a new method which does not seem to have appeared in the literature although it is closely related to a method based on a plug-in rule for without replacement sampling proposed by Booth, Butler and Hall (1994). Use of the various methods to construct bootstrap percentile-t confidence intervals is discussed from the point of view of first-order asymptotic accuracy, correcting several errors and omissions in the literature. Some open questions concerning second-order asymptotics are also answered, and results are provided from a small simulation study supporting the major points of the paper. Two of the methods discussed can be justified as plug-in rules, and thus have a fairly straightforward motivation. In compariso..
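For readers unfamiliar with the percentile-t construction the paper builds on, a minimal sketch follows. It illustrates only the generic percentile-t interval for a finite-population mean, using a naive with-replacement resample and a finite-population correction in the standard error; the function name and correction are illustrative assumptions, not any of the specific survey bootstrap schemes reviewed in the paper.

```python
import numpy as np

def percentile_t_ci(sample, pop_size, alpha=0.05, n_boot=2000, rng=None):
    """Illustrative bootstrap percentile-t CI for a finite-population mean.

    Resamples with replacement and applies a finite-population correction
    to the standard error; the survey bootstrap methods reviewed in the
    paper instead modify the resampling scheme itself.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(sample, dtype=float)
    n = y.size
    fpc = 1.0 - n / pop_size                      # finite-population correction
    theta_hat = y.mean()
    se_hat = np.sqrt(fpc * y.var(ddof=1) / n)

    t_stats = np.empty(n_boot)
    for b in range(n_boot):
        yb = rng.choice(y, size=n, replace=True)  # naive uniform resample
        se_b = np.sqrt(fpc * yb.var(ddof=1) / n)
        t_stats[b] = (yb.mean() - theta_hat) / se_b

    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    # percentile-t interval: studentised quantiles re-anchored at theta_hat
    return theta_hat - hi * se_hat, theta_hat - lo * se_hat
```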
Density Estimation Under Constraints
We suggest a general method for tackling problems of density estimation under constraints. It is in effect a particular form of the weighted bootstrap, in which resampling weights are chosen so as to minimise distance from the empirical or uniform bootstrap distribution subject to the constraints being satisfied. A number of constraints are treated as examples. They include conditions on moments, quantiles and entropy, the latter as a device for imposing qualitative conditions such as those of unimodality or "interestingness." For example, without altering the data or the amount of smoothing we may construct a density estimator that enjoys the same mean, median and quartiles as the data. Different measures of distance give rise to slightly different results. KEYWORDS. Biased bootstrap, Cressie-Read distance, curve estimation, empirical likelihood, entropy, kernel methods, mode, smoothing, weighted bootstrap. SHORT TITLE. Constrained density estimation. AMS SUBJECT CLASSIFICATION. ..
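As a rough illustration of the weighted-bootstrap idea described here, the sketch below constructs a weighted kernel density estimate whose weights are exponentially tilted (the Kullback-Leibler member of the Cressie-Read family) so that the estimator's mean matches a prescribed target. The function names, the single-moment constraint and the Gaussian kernel are assumptions made for the example, not the paper's full procedure.

```python
import numpy as np
from scipy.optimize import brentq

def tilted_weights(x, target_mean):
    """Weights w_i proportional to exp(lambda * z_i), with lambda chosen so
    the weighted mean equals target_mean.  This minimises Kullback-Leibler
    distance from the uniform weights under that one moment constraint."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()                 # standardise for numerical stability

    def gap(lam):
        return np.average(x, weights=np.exp(lam * z)) - target_mean

    lam = brentq(gap, -30.0, 30.0)               # assumes the target is attainable
    w = np.exp(lam * z)
    return w / w.sum()

def weighted_kde(x, w, h, grid):
    """Weighted Gaussian kernel density estimate evaluated on `grid`."""
    x, w, grid = map(np.asarray, (x, w, grid))
    u = (grid[:, None] - x[None, :]) / h
    return (w[None, :] * np.exp(-0.5 * u**2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))

# Example: same data and bandwidth, but the estimator's mean is pushed to 0.5.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
w = tilted_weights(x, target_mean=0.5)
f_hat = weighted_kde(x, w, h=0.4, grid=np.linspace(-4, 4, 400))
```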
Applications of Intentionally Biased Bootstrap Methods
A class of weighted-bootstrap techniques, called biased-bootstrap methods, is proposed. It is motivated by the need to adjust more conventional, uniform-bootstrap methods in a surgical way, so as to alter some of their features while leaving others unchanged. Depending on the nature of the adjustment, the biased bootstrap can be used to reduce bias, or reduce variance, or render some characteristic equal to a predetermined quantity. More specifically, applications of bootstrap methods include hypothesis testing, variance stabilisation, both density estimation and nonparametric regression under constraints, 'robustification' of general statistical procedures, sensitivity analysis, generalised method of moments, shrinkage, and many more. 1991 Mathematics Subject Classification: Primary 62G09, Secondary 62G05 Keywords and Phrases: Bias reduction, empirical likelihood, hypothesis testing, local-linear smoothing, nonparametric curve estimation, variance stabilisation, weighted bootstrap..
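One of the applications listed, hypothesis testing, can be sketched as follows: tilt the empirical distribution so that it satisfies the null hypothesis, then resample from the tilted distribution rather than the uniform one. The sketch below is a hedged illustration of that general idea for a one-sample mean test; the function names and the two-sided p-value convention are assumptions for the example, not the procedure given in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def null_tilted_weights(x, mu0):
    """Minimum-KL reweighting of the sample so the weighted mean equals the
    null value mu0 (exponential tilting, one instance of the biased bootstrap)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()                 # standardise for numerical stability

    def gap(lam):
        return np.average(x, weights=np.exp(lam * z)) - mu0

    lam = brentq(gap, -30.0, 30.0)               # assumes mu0 lies well inside the data range
    w = np.exp(lam * z)
    return w / w.sum()

def biased_bootstrap_pvalue(x, mu0, n_boot=5000, rng=None):
    """Two-sided test of H0: E[X] = mu0.  Resample from the tilted empirical
    distribution, which satisfies H0 by construction, and locate the observed
    mean within the resampled means."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    w = null_tilted_weights(x, mu0)
    boot = np.array([rng.choice(x, size=x.size, replace=True, p=w).mean()
                     for _ in range(n_boot)])
    obs = x.mean()
    return 2.0 * min((boot >= obs).mean(), (boot <= obs).mean())
```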
The mean resultant length of the spherically projected normal distribution
We derive a closed-form expression for the mean resultant length of the d-dimensional projected normal distribution and provide graphical comparisons of the projected normal and Fisher-von Mises distributions in three and four dimensions. Keywords: Directional data, Spherical data, Circular data, Angular normal, Offset normal, Displaced normal, von Mises distribution, Fisher distribution, Regression.
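The quantity in question, the mean resultant length, is the norm of the mean direction of the spherically projected variable. The sketch below is not the paper's closed-form expression; it is only a Monte Carlo estimate of the same quantity for X ~ N(mu, Sigma), with the function name and example parameters chosen for illustration.

```python
import numpy as np

def mean_resultant_length_mc(mu, sigma, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the mean resultant length of the projected
    normal distribution: draw X ~ N(mu, Sigma), project onto the unit sphere,
    and take the norm of the average direction.  (The paper derives this
    quantity in closed form; this is only a numerical check.)"""
    rng = np.random.default_rng(rng)
    x = rng.multivariate_normal(np.asarray(mu, float), np.asarray(sigma, float),
                                size=n_samples)
    u = x / np.linalg.norm(x, axis=1, keepdims=True)   # spherical projection
    return np.linalg.norm(u.mean(axis=0))

# Example: 3-dimensional projected normal with identity covariance.
print(mean_resultant_length_mc([2.0, 0.0, 0.0], np.eye(3)))
```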
Biased Bootstrap Methods for Reducing the Effects of Contamination
Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to "tilt" away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimises a measure of the new distribution's dispersion. This theoretical proposal has a simple empirical version, which results in each data value being assigned a weight according to an assessment of its influence on dispersion. Importantly, distance can be measured directly in terms of the likely level of contamination, without reference to an empirical measure of scale. This makes the procedure particularly attractive for use in multivariate problems. It has a number of forms, depending on the definitions taken for dispersion and for distance between distributions. Examples of dispersion measures include variance, and generalis..
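A hedged sketch of the empirical version described here: assign each observation a weight, and choose the weights to minimise a dispersion measure (here, the weighted variance) subject to staying within a given distance of the uniform weights (here, a chi-squared distance bound `rho` playing the role of the tilting amount). The function name, the particular dispersion and distance measures, and the optimiser are assumptions for the example; the paper allows other choices.

```python
import numpy as np
from scipy.optimize import minimize

def tilted_dispersion_weights(x, rho):
    """Weights forming a probability vector that minimise the weighted
    variance of `x`, subject to a chi-squared distance from the uniform
    weights of at most `rho`.  Observations that inflate dispersion
    (e.g. heavy-tailed contaminants) are downweighted."""
    x = np.asarray(x, dtype=float)
    n = x.size
    w0 = np.full(n, 1.0 / n)

    def weighted_var(w):
        m = w @ x
        return w @ (x - m) ** 2

    constraints = [
        {"type": "eq",   "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: rho - n * ((w - 1.0 / n) ** 2).sum()},
    ]
    res = minimize(weighted_var, w0, bounds=[(0.0, 1.0)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

# Heavy-tailed contamination: the outlying values should receive reduced weight.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(size=95), rng.standard_cauchy(size=5) * 10])
w = tilted_dispersion_weights(x, rho=0.5)
```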