The distribution of model averaging estimators and an impossibility result regarding its estimation
The finite-sample as well as the asymptotic distribution of Leung and
Barron's (2006) model averaging estimator are derived in the context of a
linear regression model. An impossibility result regarding the estimation of
the finite-sample distribution of the model averaging estimator is obtained.
Comment: Published at http://dx.doi.org/10.1214/074921706000000987 in the IMS
Lecture Notes Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org).
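A minimal illustrative Python sketch of an exponential-weights model averaging estimator in a linear regression, to make the object concrete; the weight formula, penalty, and candidate models are hypothetical choices and are not claimed to be Leung and Barron's (2006) proposal.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma2 = 100, 1.0
    X = rng.normal(size=(n, 3))
    beta = np.array([1.0, 0.5, 0.0])
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

    # Candidate models: nested subsets of the regressors.
    models = [(0,), (0, 1), (0, 1, 2)]

    estimates, log_weights = [], []
    for cols in models:
        Xm = X[:, list(cols)]
        bm = np.linalg.lstsq(Xm, y, rcond=None)[0]
        rss = float(np.sum((y - Xm @ bm) ** 2))
        # Illustrative exponential weight: penalized fit with an AIC-type penalty
        # (hypothetical; not necessarily the weights of Leung and Barron, 2006).
        log_weights.append(-0.5 * (rss / sigma2 + 2 * len(cols)))
        b_full = np.zeros(X.shape[1])
        b_full[list(cols)] = bm
        estimates.append(b_full)

    log_weights = np.array(log_weights)
    weights = np.exp(log_weights - log_weights.max())
    weights /= weights.sum()
    beta_ma = weights @ np.array(estimates)   # the model averaging estimate
    print("model weights:", np.round(weights, 3))
    print("model averaging estimate:", np.round(beta_ma, 3))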
On the Order of Magnitude of Sums of Negative Powers of Integrated Processes
The asymptotic behavior of expressions of the form
$a_{n}^{-1}\sum_{t=1}^{n}f(x_{t})$, where $x_{t}$ is an integrated process,
$a_{n}$ is a sequence of norming constants, and $f$ is a measurable function,
has been the subject of a number of articles in recent years. We mention
Borodin and Ibragimov (1995), Park and Phillips (1999), de Jong (2004),
Jeganathan (2004), P\"{o}tscher (2004), de Jong and Whang (2005), Berkes and
Horvath (2006), and Christopeit (2009), which study weak convergence results
for such expressions under various conditions on $x_{t}$ and the function $f$.
Of course, these results also provide information on the order of magnitude of
$a_{n}^{-1}\sum_{t=1}^{n}f(x_{t})$. However, to the best of our knowledge no
result is available for the case where $f$ is non-integrable with respect to
Lebesgue measure in a neighborhood of a given point, say $x=0$. In this paper
we are interested in bounds on the order of magnitude of
$\sum_{t=1}^{n}|x_{t}|^{-\alpha}$ for $\alpha\geq 1$, a case where the implied
function $f(x)=|x|^{-\alpha}$ is not integrable in any neighborhood of zero.
More generally, we shall also obtain bounds on the order of magnitude of
$\sum_{t=1}^{n}(|x_{t}|+c_{t})^{-\alpha}$, where $c_{t}$ are random variables
satisfying certain conditions.
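A minimal illustrative Python sketch that simulates a Gaussian random walk and evaluates the sum of negative powers $\sum_{t=1}^{n}|x_{t}|^{-\alpha}$ for $\alpha=1$; it only exhibits the object whose order of magnitude the paper bounds, with $\sqrt{n}$ used merely as a reference scale rather than a claimed rate.

    import numpy as np

    rng = np.random.default_rng(1)

    def neg_power_sum(n, alpha):
        # x_t: integrated process, here a random walk with i.i.d. N(0,1) increments.
        x = np.cumsum(rng.normal(size=n))
        return np.sum(np.abs(x) ** (-alpha))

    # Median of the sum across replications for growing n; sqrt(n) is only a
    # reference scale here, the paper derives the actual order-of-magnitude bounds.
    for n in (10**3, 10**4, 10**5):
        sums = [neg_power_sum(n, alpha=1.0) for _ in range(200)]
        print(n, round(np.median(sums), 2), round(np.median(sums) / np.sqrt(n), 3))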
Lower Risk Bounds and Properties of Confidence Sets for Ill-Posed Estimation Problems with Applications to Spectral Density and Persistence Estimation, Unit Roots, and Estimation of Long Memory Parameters
Important estimation problems in econometrics, like estimating the value of a spectral density at frequency zero, which appears in the econometrics literature in the guises of heteroskedasticity and autocorrelation consistent variance estimation and long run variance estimation, are shown to be "ill-posed" estimation problems. A prototypical result obtained in the paper is that the minimax risk for estimating the value of the spectral density at frequency zero is infinite regardless of sample size, and that confidence sets are close to being uninformative. In this result the maximum risk is over commonly used specifications for the set of feasible data generating processes. The consequences for inference on unit roots and cointegration are discussed. Similar results for persistence estimation and estimation of the long memory parameter are given. All these results are obtained as special cases of a more general theory developed for abstract estimation problems, which also readily allows for the treatment of other ill-posed estimation problems such as, e.g., nonparametric regression or density estimation.
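A minimal illustrative Python sketch of a Bartlett-kernel (Newey-West) long-run variance estimator, i.e. an estimate of $2\pi$ times the spectral density at frequency zero, applied to simulated AR(1) errors; it only makes the estimand concrete and does not reproduce the paper's ill-posedness results. The bandwidth rule is a common rule of thumb, used here purely for illustration.

    import numpy as np

    def long_run_variance(u, bandwidth):
        # Bartlett-kernel (Newey-West) estimate of the long-run variance of u,
        # i.e. 2*pi times its spectral density at frequency zero.
        u = np.asarray(u, dtype=float) - np.mean(u)
        n = len(u)
        lrv = np.dot(u, u) / n
        for j in range(1, bandwidth + 1):
            gamma_j = np.dot(u[j:], u[:-j]) / n
            lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma_j
        return lrv

    rng = np.random.default_rng(2)
    rho, sigma, n = 0.7, 1.0, 500
    e = rng.normal(scale=sigma, size=n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]

    print(long_run_variance(u, bandwidth=int(4 * (n / 100) ** (2 / 9))))
    print(sigma ** 2 / (1 - rho) ** 2)   # long-run variance of this AR(1) process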
Nonlinear Functions and Convergence to Brownian Motion: Beyond the Continuous Mapping Theorem
Weak convergence results for sample averages of nonlinear functions of (discrete-time) stochastic processes satisfying a functional central limit theorem (e.g., integrated processes) are given. These results substantially extend recent work by Park and Phillips (1999) and de Jong (2001), in that a much wider class of functions is covered. For example, some of the results hold for the class of all locally integrable functions, thus avoiding any of the various regularity conditions imposed on the functions in Park and Phillips (1999) or de Jong (2001).
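A minimal illustrative Python sketch: for a simple bounded nonlinear function $f$ it compares the normalized sum $n^{-1}\sum_{t=1}^{n}f(x_{t}/\sqrt{n})$ for a random walk $x_{t}$ with Monte Carlo draws of the Brownian functional $\int_{0}^{1}f(W(r))\,dr$; the choice of $f$ and of the increment distribution is purely illustrative and does not correspond to the paper's general conditions.

    import numpy as np

    rng = np.random.default_rng(3)
    f = lambda x: np.exp(-np.abs(x))   # a simple bounded nonlinear function

    def statistic(n):
        # (1/n) * sum_t f(x_t / sqrt(n)) for a random walk with Rademacher steps.
        x = np.cumsum(rng.choice([-1.0, 1.0], size=n))
        return np.mean(f(x / np.sqrt(n)))

    def brownian_functional(m):
        # Monte Carlo draw of int_0^1 f(W(r)) dr on a discretized Brownian path.
        w = np.cumsum(rng.normal(size=m)) / np.sqrt(m)
        return np.mean(f(w))

    a = [statistic(4000) for _ in range(500)]
    b = [brownian_functional(4000) for _ in range(500)]
    print(np.mean(a), np.std(a))   # pre-limit statistic
    print(np.mean(b), np.std(b))   # limiting functional; distributions should be close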
On the Power of Invariant Tests for Hypotheses on a Covariance Matrix
The behavior of the power function of autocorrelation tests such as the
Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial
regression models has been intensively studied in the literature. When the
correlation becomes strong, Kr\"amer (1985) (for the Durbin-Watson test) and
Kr\"amer (2005) (for the Cliff-Ord test) have shown that the power can be very
low, in fact can converge to zero, under certain circumstances. Motivated by
these results, Martellosio (2010) set out to build a general theory that would
explain these findings. Unfortunately, Martellosio (2010) does not achieve this
goal, as a substantial portion of his results and proofs suffer from serious
flaws. The present paper now builds a theory as envisioned in Martellosio
(2010) in a fairly general framework, covering general invariant tests of a
hypothesis on the disturbance covariance matrix in a linear regression model.
The general results are then specialized to testing for spatial correlation and
to autocorrelation testing in time series regression models. We also
characterize the situation where the null and the alternative hypothesis are
indistinguishable by invariant tests.
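A minimal illustrative Python sketch simulating rejection frequencies of the Durbin-Watson test for increasingly strong AR(1) autocorrelation in a regression on an intercept and a trend; whether the power rises towards one or collapses depends on the design, which is the phenomenon studied in the paper, and the design and sample size here are hypothetical examples.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 50
    X = np.column_stack([np.ones(n), np.arange(1, n + 1)])   # intercept + trend
    H = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)         # residual-maker matrix

    def dw(u):
        # Durbin-Watson statistic from OLS residuals; it does not depend on the
        # regression coefficients or the error scale, so simulating errors suffices.
        e = H @ u
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    def ar1(rho, size):
        e = rng.normal(size=size)
        u = np.zeros(size)
        u[0] = e[0] / np.sqrt(1 - rho ** 2)   # stationary start
        for t in range(1, size):
            u[t] = rho * u[t - 1] + e[t]
        return u

    reps = 2000
    # Simulated 5% critical value under i.i.d. normal errors for this design.
    crit = np.quantile([dw(rng.normal(size=n)) for _ in range(reps)], 0.05)
    for rho in (0.5, 0.9, 0.99, 0.999):
        power = np.mean([dw(ar1(rho, n)) < crit for _ in range(reps)])
        print(rho, round(power, 3))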
Efficient Simulation-Based Minimum Distance Estimation and Indirect Inference
Given a random sample from a parametric model, we show how indirect inference
estimators based on appropriate nonparametric density estimators (i.e.,
simulation-based minimum distance estimators) can be constructed that, under
mild assumptions, are asymptotically normal with variance-covariance matrix
equal to the Cram\'er-Rao bound.
Comment: Minor revision, some references and remarks added.
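A minimal illustrative Python sketch of a simulation-based minimum distance estimator in which a kernel density estimate of the data is matched by a kernel density estimate of model simulations; the Gaussian location-scale model, the L2 distance on a grid, and the tuning choices are hypothetical and are not the estimator analyzed in the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)

    # "Observed" data from a hypothetical true model, here N(1.0, 2.0^2).
    data = rng.normal(loc=1.0, scale=2.0, size=500)

    grid = np.linspace(data.min() - 1.0, data.max() + 1.0, 200)
    target = gaussian_kde(data)(grid)    # auxiliary density estimate of the data

    sim_noise = rng.normal(size=2000)    # common random numbers across parameter values

    def objective(theta):
        mu, log_sigma = theta
        sims = mu + np.exp(log_sigma) * sim_noise   # simulate from the parametric model
        fitted = gaussian_kde(sims)(grid)           # density estimate of the simulations
        return np.mean((fitted - target) ** 2)      # L2-type distance on the grid

    res = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    print("estimated mean:", res.x[0], "estimated std:", np.exp(res.x[1]))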
Confidence Sets Based on Penalized Maximum Likelihood Estimators in Gaussian Regression
Confidence intervals based on penalized maximum likelihood estimators such as
the LASSO, adaptive LASSO, and hard-thresholding are analyzed. In the
known-variance case, the finite-sample coverage properties of such intervals
are determined and it is shown that symmetric intervals are the shortest. The
length of the shortest intervals based on the hard-thresholding estimator is
larger than the length of the shortest interval based on the adaptive LASSO,
which is larger than the length of the shortest interval based on the LASSO,
which in turn is larger than the standard interval based on the maximum
likelihood estimator. In the case where the penalized estimators are tuned to
possess the `sparsity property', the intervals based on these estimators are
larger than the standard interval by an order of magnitude. Furthermore, a
simple asymptotic confidence interval construction in the `sparse' case, that
also applies to the smoothly clipped absolute deviation estimator, is
discussed. The results for the known-variance case are shown to carry over to
the unknown-variance case in an appropriate asymptotic sense.
Comment: Second revision: new title, some comments added, proofs moved to the
appendix.
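A minimal illustrative Python sketch in the Gaussian location model with known variance: it evaluates the coverage of a naive interval of standard length centered at the soft-thresholding (LASSO-type) and hard-thresholding estimators; the tuning choice is hypothetical and the construction is not the one analyzed in the paper, it only makes the estimators and the coverage issue concrete.

    import numpy as np

    rng = np.random.default_rng(6)
    n, sigma = 50, 1.0
    se = sigma / np.sqrt(n)
    lam = se * np.sqrt(2 * np.log(n))   # an illustrative tuning choice
    z = 1.96                            # approximate 97.5% standard normal quantile

    def soft(x, lam):
        # soft-thresholding (LASSO-type) estimator in the Gaussian location model
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def hard(x, lam):
        # hard-thresholding estimator
        return np.where(np.abs(x) > lam, x, 0.0)

    reps = 20000
    for theta in (0.0, 0.5 * lam, lam, 2 * lam, 5 * lam):
        ybar = theta + se * rng.normal(size=reps)
        for name, est in (("soft", soft(ybar, lam)), ("hard", hard(ybar, lam))):
            cover = np.mean(np.abs(est - theta) <= z * se)   # naive +/- z*se interval
            print(f"theta={theta:6.3f}  {name}  coverage={cover:.3f}")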
Non-Parametric Maximum Likelihood Density Estimation and Simulation-Based Minimum Distance Estimators
Indirect inference estimators (i.e., simulation-based minimum distance
estimators) in a parametric model that are based on auxiliary non-parametric
maximum likelihood density estimators are shown to be asymptotically normal. If
the parametric model is correctly specified, it is furthermore shown that the
asymptotic variance-covariance matrix equals the inverse of the
Fisher-information matrix. These results are based on uniform-in-parameters
convergence rates and a uniform-in-parameters Donsker-type theorem for
non-parametric maximum likelihood density estimators.
Comment: Minor corrections, some discussion added, some material removed.
Can one estimate the conditional distribution of post-model-selection estimators?
We consider the problem of estimating the conditional distribution of a
post-model-selection estimator where the conditioning is on the selected model.
The notion of a post-model-selection estimator here refers to the combined
procedure resulting from first selecting a model (e.g., by a model selection
criterion such as AIC or by a hypothesis testing procedure) and then estimating
the parameters in the selected model (e.g., by least-squares or maximum
likelihood), all based on the same data set. We show that it is impossible to
estimate this distribution with reasonable accuracy even asymptotically. In
particular, we show that no estimator for this distribution can be uniformly
consistent (not even locally). This follows as a corollary to (local) minimax
lower bounds on the performance of estimators for this distribution. Similar
impossibility results are also obtained for the conditional distribution of
linear functions (e.g., predictors) of the post-model-selection estimator.
Comment: Published at http://dx.doi.org/10.1214/009053606000000821 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
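A minimal illustrative Python sketch of a post-model-selection estimator in a two-regressor model: a t-test decides whether the second regressor is dropped, the coefficient of interest is then (re-)estimated in the selected model, and the simulation tabulates its distribution conditional on the selected model; the design and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(7)
    n, reps = 100, 5000
    beta1, beta2 = 1.0, 0.2            # beta2 is "small", so selection matters
    rho = 0.7                          # correlation between the two regressors

    est_restricted, est_unrestricted = [], []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
        X = np.column_stack([x1, x2])
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        t2 = b[1] / np.sqrt(s2 * XtX_inv[1, 1])    # t-statistic for beta2 = 0
        if np.abs(t2) > 1.96:
            est_unrestricted.append(b[0])          # keep x2, full-model estimate
        else:
            est_restricted.append((x1 @ y) / (x1 @ x1))   # drop x2, re-estimate beta1

    print("P(model with x2 selected):", len(est_unrestricted) / reps)
    print("conditional mean | x2 dropped:", np.mean(est_restricted))
    print("conditional mean | x2 kept:  ", np.mean(est_unrestricted))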