The bootstrap - A review
The bootstrap, extensively studied during the last decade, has become a powerful tool in different areas of statistical inference. In this work, we present the main ideas of bootstrap methodology in several contexts, citing the most relevant contributions and illustrating some interesting aspects with examples and simulation studies.
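The core idea sketched in this abstract can be illustrated with a minimal example: resample the data with replacement many times, recompute the statistic on each resample, and use the spread of those replicates as an estimate of the sampling variability. The sample sizes, distribution, and replicate count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)  # a skewed sample, for illustration

def bootstrap_se(sample, stat, n_boot=2000, rng=rng):
    """Bootstrap standard error: resample with replacement, recompute the statistic."""
    n = len(sample)
    reps = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

se_boot = bootstrap_se(data, np.mean)
se_formula = data.std(ddof=1) / np.sqrt(len(data))  # analytic SE of the mean
print(f"bootstrap SE: {se_boot:.3f}, formula SE: {se_formula:.3f}")
```

For the sample mean the bootstrap agrees closely with the textbook formula; its appeal is that the same recipe applies to statistics with no closed-form standard error.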
Bias in parametric estimation: reduction and useful side-effects
The bias of an estimator is defined as the difference between its expected value and the parameter to be estimated, where the expectation is taken with respect to the model. Loosely speaking, small bias reflects the desire that if an experiment were repeated indefinitely, the average of the resulting estimates would be close to the parameter value being estimated. The current paper reviews the still-expanding repository of methods that have been developed to reduce bias in the estimation of parametric models. The review provides a unifying framework in which all of those methods are seen as attempts to approximate the solution of a simple estimating equation. Of particular focus is the maximum likelihood estimator, which, despite being asymptotically unbiased under the usual regularity conditions, has finite-sample bias that can result in significant loss of performance of standard inferential procedures. An informal comparison of the methods reveals some useful practical side-effects in the estimation of popular models, including: i) shrinkage of the estimators in binomial and multinomial regression models that guarantees finiteness even in cases of data separation where the maximum likelihood estimator is infinite, and ii) inferential benefits for models that require the estimation of dispersion or precision parameters.
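The finite-sample bias of the maximum likelihood estimator mentioned in this abstract is easy to demonstrate with a classic case: the MLE of a normal variance divides by n and is biased downward by sigma^2/n. The simulation below, a sketch with assumed sample sizes and parameter values not drawn from the paper, checks this by Monte Carlo and applies the standard n/(n-1) correction.

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0
n, n_sims = 10, 20000

# Monte Carlo estimate of E[variance MLE]: the MLE divides by n (ddof=0)
mle_vars = np.array([rng.normal(0.0, np.sqrt(true_var), n).var(ddof=0)
                     for _ in range(n_sims)])
bias_mc = mle_vars.mean() - true_var        # theory: -true_var / n = -0.4
corrected_mean = (mle_vars * n / (n - 1)).mean()  # unbiased rescaling

print(f"Monte Carlo bias of MLE: {bias_mc:.3f}")
print(f"mean of corrected estimator: {corrected_mean:.3f}")
```

Rescaling by n/(n-1) is the simplest example of the bias corrections the review surveys; the methods it covers handle models where no such closed-form fix exists.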
Small sample inference for probabilistic index models
Probabilistic index models may be used to generate classical and new rank tests, with the additional advantage of supplementing them with interpretable effect size measures. The popularity of rank tests for small sample inference makes probabilistic index models natural candidates for small sample studies as well. However, at present, inference for such models relies on asymptotic theory that can deliver poor approximations of the sampling distribution if the sample size is rather small. A bias-reduced version of the bootstrap and an adjusted jackknife empirical likelihood are explored. It is shown that their application leads to drastic improvements in small sample inference for probabilistic index models, justifying the use of such models for reliable and informative statistical inference in small sample studies.
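The adjusted jackknife empirical likelihood used in this paper builds on the basic jackknife, whose leave-one-out idea can be sketched in a few lines. This is only an illustration of the plain jackknife bias correction, not of the paper's method; the data and statistic are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=15)

def jackknife_bias(sample, stat):
    """Jackknife bias estimate: (n-1) * (mean of leave-one-out stats - full-sample stat)."""
    n = len(sample)
    theta_full = stat(sample)
    loo = np.array([stat(np.delete(sample, i)) for i in range(n)])
    return (n - 1) * (loo.mean() - theta_full)

biased_var = lambda s: s.var(ddof=0)   # plug-in variance, biased by -sigma^2/n
bias_hat = jackknife_bias(x, biased_var)
corrected = biased_var(x) - bias_hat   # for the variance, this recovers the unbiased estimator
```

For the plug-in variance the jackknife correction reproduces the unbiased sample variance exactly, which makes it a convenient sanity check for the resampling machinery.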