Minimum variance unbiased estimation based on bootstrap iterations
Practical computation of the minimum variance unbiased estimator (MVUE) is often a difficult, if not impossible, task, even though general statistical theory assures its existence under regularity conditions. We propose a new approach, based on infinitely many iterations of bootstrap bias correction, to calculating the MVUE approximately. A numerical example is given to illustrate the effectiveness of our new approach.
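A minimal Python sketch of the underlying mechanism: a single round of bootstrap bias correction applied to a deliberately biased variance estimator. The paper's proposal iterates this correction (conceptually, infinitely many nested rounds), which the sketch does not attempt; the function names, sample size, and replication count are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(x):
    # Deliberately biased estimator: the MLE of the variance (divides by n).
    return np.mean((x - x.mean()) ** 2)

def bootstrap_bias_correct(x, estimate_fn, n_boot=2000):
    """One round of bootstrap bias correction:
    theta_1 = 2 * theta_hat - mean of the bootstrap replicates."""
    theta_hat = estimate_fn(x)
    boot = np.array([estimate_fn(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return 2.0 * theta_hat - boot.mean()

x = rng.normal(loc=0.0, scale=2.0, size=30)   # true variance = 4
print("biased MLE        :", estimator(x))
print("bias-corrected    :", bootstrap_bias_correct(x, estimator))
print("unbiased reference:", np.var(x, ddof=1))
```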
Application of the Iterated Weighted Least-Squares Fit to counting experiments
Least-squares fits are an important tool in many data analysis applications.
In this paper, we review theoretical results, which are relevant for their
application to data from counting experiments. Using a simple example, we
illustrate the well-known fact that commonly used variants of the least-squares
fit applied to Poisson-distributed data produce biased estimates. The bias can
be overcome with an iterated weighted least-squares method, which produces
results identical to the maximum-likelihood method. For linear models, the
iterated weighted least-squares method converges faster than the equivalent
maximum-likelihood method, and does not require problem-specific starting
values, which may be a practical advantage. The equivalence of both methods
also holds for binomially distributed data. We further show that the unbinned
maximum-likelihood method can be derived as a limiting case of the iterated
least-squares fit when the bin width goes to zero, which demonstrates a deep
connection between the two methods.
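A minimal Python sketch of the iterated weighted least-squares idea for a linear Poisson model with an identity link, assuming the weights 1/mu are recomputed from the previous iteration's fit; at the fixed point the weighted normal equations coincide with the Poisson maximum-likelihood equations, which is the equivalence the abstract describes. The simulated design, starting values, and fixed iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated counting experiment: expected counts linear in x (identity link).
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(x), x])   # design matrix
beta_true = np.array([5.0, 10.0])
n = rng.poisson(X @ beta_true)              # observed Poisson counts

def iwls_poisson(X, n, n_iter=20):
    """Iterated weighted least squares for a linear Poisson model.
    The weights 1/mu are recomputed from the previous fit; at the fixed
    point the normal equations coincide with the Poisson ML equations."""
    mu = np.maximum(n.astype(float), 1.0)   # crude starting values
    for _ in range(n_iter):
        W = np.diag(1.0 / mu)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ n)
        mu = np.maximum(X @ beta, 1e-9)     # keep expected counts positive
    return beta

print("true beta:", beta_true)
print("IWLS fit :", iwls_poisson(X, n))
```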
Bias in parametric estimation: reduction and useful side-effects
The bias of an estimator is defined as the difference between its expected value
and the parameter to be estimated, where the expectation is taken with respect to
the model. Loosely speaking, small bias reflects the desire that if an
experiment is repeated indefinitely then the average of all the resultant
estimates will be close to the parameter value that is estimated. The current
paper is a review of the still-expanding repository of methods that have been
developed to reduce bias in the estimation of parametric models. The review
provides a unifying framework where all those methods are seen as attempts to
approximate the solution of a simple estimating equation. Of particular focus
is the maximum likelihood estimator, which despite being asymptotically
unbiased under the usual regularity conditions, has finite-sample bias that can
result in significant loss of performance of standard inferential procedures.
An informal comparison of the methods is made, revealing some useful practical
side-effects in the estimation of popular models, including: i)
shrinkage of the estimators in binomial and multinomial regression models that
guarantees finiteness even in cases of data separation where the maximum
likelihood estimator is infinite, and ii) inferential benefits for models that
require the estimation of dispersion or precision parameters.
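As a concrete illustration of the finite-sample bias of the maximum likelihood estimator and of one classical reduction method of the kind such a review surveys, here is a small Python sketch using the first-order jackknife; the exponential-rate example and all names are illustrative, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def mle_rate(x):
    # MLE of an exponential rate; biased in finite samples:
    # E[1 / xbar] = n / (n - 1) * lambda.
    return 1.0 / x.mean()

def jackknife_correct(x, estimate_fn):
    """First-order jackknife bias reduction:
    theta_jack = n * theta_hat - (n - 1) * mean of leave-one-out estimates."""
    n = x.size
    loo = np.array([estimate_fn(np.delete(x, i)) for i in range(n)])
    return n * estimate_fn(x) - (n - 1) * loo.mean()

lam = 2.0
reps = [(mle_rate(s), jackknife_correct(s, mle_rate))
        for s in (rng.exponential(1 / lam, size=10) for _ in range(5000))]
mle, jack = np.mean(reps, axis=0)
print(f"true rate {lam}: mean MLE = {mle:.3f}, mean jackknife = {jack:.3f}")
```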
Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap
This paper investigates the use of bootstrap-based bias correction of
semi-parametric estimators of the long memory parameter in fractionally
integrated processes. The re-sampling method involves the application of the
sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate
of the long memory parameter. Theoretical justification for using the bootstrap
techniques to bias adjust log-periodogram and semi-parametric local Whittle
estimators of the memory parameter is provided. Simulation evidence comparing
the performance of the bootstrap bias correction with analytical bias
correction techniques is also presented. The bootstrap method is shown to
produce notable bias reductions, in particular when applied to an estimator for
which analytical adjustments have already been used. The empirical coverage of
confidence intervals based on the bias-adjusted estimators is very close to the
nominal level for reasonably large sample sizes, more so than for the comparable
analytically adjusted estimators. The precision of inferences (as measured by
interval length) is also greater when the bootstrap, rather than an analytical
adjustment, is used for bias correction.
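A simplified Python sketch of the pipeline described above: estimate the memory parameter d by log-periodogram (GPH) regression, pre-filter the data by (1 - L)^{d_hat}, fit an autoregressive sieve to the filtered series, resample its residuals to build pseudo-series, re-integrate, and bias correct. The bandwidth, AR order, and replication count are illustrative assumptions; the paper's actual procedure and its analytical comparisons are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

def gph(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = x.size
    m = m or int(n ** 0.5)                      # illustrative bandwidth
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -2 * np.log(2 * np.sin(lam / 2))
    reg -= reg.mean()
    return reg @ (np.log(I) - np.log(I).mean()) / (reg @ reg)

def frac_diff_weights(d, n):
    # Coefficients of the fractional filter (1 - L)^d via the usual recursion.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def apply_filter(w, x):
    # Truncated convolution: y_t = sum_{k <= t} w_k x_{t-k}.
    return np.array([w[:t + 1] @ x[t::-1] for t in range(x.size)])

def prefiltered_sieve_bc(x, p=4, B=99):
    """Bootstrap bias correction of the GPH estimate: pre-filter by
    (1-L)^d_hat, fit an AR(p) sieve, resample its residuals, re-integrate,
    and re-estimate d on each pseudo-series."""
    n = x.size
    d_hat = gph(x)
    u = apply_filter(frac_diff_weights(d_hat, n), x)    # ~ short memory
    Y = u[p:]
    Z = np.column_stack([u[p - k:n - k] for k in range(1, p + 1)])
    phi = np.linalg.lstsq(Z, Y, rcond=None)[0]          # AR(p) sieve
    res = Y - Z @ phi
    res -= res.mean()
    inv = frac_diff_weights(-d_hat, n)
    d_star = np.empty(B)
    for b in range(B):
        e = rng.choice(res, size=n + p, replace=True)
        ub = np.zeros(n + p)
        for t in range(p, n + p):
            ub[t] = phi @ ub[t - p:t][::-1] + e[t]
        d_star[b] = gph(apply_filter(inv, ub[p:]))
    return 2 * d_hat - d_star.mean(), d_hat

d_true, n = 0.3, 512
x = apply_filter(frac_diff_weights(-d_true, n), rng.normal(size=n))
d_bc, d_hat = prefiltered_sieve_bc(x)
print(f"GPH: {d_hat:.3f}, bias-corrected: {d_bc:.3f} (true d = {d_true})")
```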
The Multistep Beveridge-Nelson Decomposition
The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification, and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
Keywords: Trend and Cycle; Forecasting; Filtering.
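For contrast with the multistep variant the paper proposes, a minimal Python sketch of the standard Beveridge-Nelson decomposition with an AR(1) model for the first differences, where the eventual forecast function is obtained by iterating one-step predictions via the chain rule; the simulated series and the AR(1) choice are illustrative assumptions. A multistep version would instead fit separate direct regressions at each forecast horizon, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated I(1) series: drift plus AR(1) first differences.
n, mu, phi_true = 400, 0.2, 0.5
dy = np.empty(n)
dy[0] = mu
for t in range(1, n):
    dy[t] = mu + phi_true * (dy[t - 1] - mu) + rng.normal(scale=0.5)
y = np.cumsum(dy)

# Standard (iterated) BN decomposition with an AR(1) for the differences:
# trend_t = y_t + sum_{j>=1} E_t[dy_{t+j} - mu] = y_t + phi/(1-phi)*(dy_t - mu)
d = np.diff(y)
d0, d1 = d[1:] - d.mean(), d[:-1] - d.mean()
phi = (d1 @ d0) / (d1 @ d1)                 # least-squares AR(1) coefficient
trend = y[1:] + phi / (1 - phi) * (d - d.mean())
cycle = y[1:] - trend                       # transitory component
print(f"estimated phi = {phi:.3f}; cycle std = {cycle.std():.3f}")
```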