Empirical likelihood confidence intervals for the mean of a long-range dependent process
This paper considers blockwise empirical likelihood for real-valued linear time processes which may exhibit either short- or long-range dependence. Empirical likelihood approaches intended for weakly dependent time series can fail in the presence of strong dependence. However, a modified blockwise method is proposed for confidence interval estimation of the process mean, which is valid for various dependence structures including long-range dependence. The finite-sample performance of the method is evaluated through a simulation study and compared to other confidence interval procedures involving subsampling or normal approximations.
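To illustrate the basic blockwise mechanism (not the paper's long-range-dependent modification), here is a minimal sketch: form block means, then invert Owen's empirical likelihood ratio for the mean of the blocks. The function names and the chi-square(1) calibration below are the standard weak-dependence choices, used here only as assumptions; the paper's point is that this calibration must be adjusted under strong dependence.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def block_means(x, b):
    """Means of non-overlapping blocks of length b."""
    n = (len(x) // b) * b
    return x[:n].reshape(-1, b).mean(axis=1)

def neg2_log_elr(z):
    """-2 log empirical likelihood ratio that centered data z have mean 0."""
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                          # 0 lies outside the convex hull
    f = lambda lam: np.sum(z / (1.0 + lam * z))
    eps = 1e-10
    lam = brentq(f, (eps - 1.0) / z.max(), (eps - 1.0) / z.min())
    return 2.0 * np.sum(np.log1p(lam * z))

def blockwise_el_ci(x, b, level=0.95):
    """Blockwise EL interval for the mean under chi-square(1) calibration."""
    B = block_means(np.asarray(x, float), b)
    crit = chi2.ppf(level, df=1)
    grid = np.linspace(B.min(), B.max(), 500)
    ok = [m for m in grid if neg2_log_elr(B - m) <= crit]
    return (min(ok), max(ok)) if ok else (np.nan, np.nan)
```

Under long-range dependence the chi-square(1) limit no longer holds in general, which is precisely the calibration problem the modified method addresses.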
On optimal block resampling for Gaussian-subordinated long-range dependent processes
Block-based resampling estimators have been intensively investigated for
weakly dependent time processes, which has helped to inform implementation
(e.g., best block sizes). However, little is known about resampling performance
and block sizes under strong or long-range dependence. To establish guideposts
in block selection, we consider a broad class of strongly dependent time
processes, formed by a transformation of a stationary long-memory Gaussian
series, and examine block-based resampling estimators for the variance of the
prototypical sample mean; extensions to general statistical functionals are
also considered. Unlike weak dependence, the properties of resampling
estimators under strong dependence are shown to depend intricately on the
nature of non-linearity in the time series (beyond Hermite ranks) in addition
to the long-memory coefficient and block size. Additionally, the intuition has
often been that optimal block sizes should be larger under strong dependence
(say $O(n^{1/2})$ for a sample size $n$) than the optimal order $O(n^{1/3})$
known under weak dependence. This intuition turns out to be largely incorrect,
though a block order of $O(n^{1/2})$ may be reasonable (and even optimal) in many
cases, owing to non-linearity in a long-memory time series. While optimal block
sizes are more complex under long-range dependence compared to short-range, we
provide a consistent data-driven rule for block selection, and numerical
studies illustrate that the guides for block selection perform well in other
block-based problems with long-memory time series, such as distribution
estimation and strategies for testing Hermite rank.
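As a rough illustration of the block-size question, the following hedged sketch estimates the variance of the scaled sample mean by the moving-block bootstrap with a tunable block length b; comparing b of order $n^{1/3}$ against $n^{1/2}$ mimics the weak- versus strong-dependence prescriptions discussed above. The function name is illustrative, not the paper's.

```python
import numpy as np

def mbb_variance_of_mean(x, b, n_boot=2000, seed=None):
    """Moving-block bootstrap estimate of Var(sqrt(n) * mean(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    k = max(n // b, 1)                          # blocks per pseudo-series
    starts = rng.integers(0, n - b + 1, size=(n_boot, k))
    idx = starts[..., None] + np.arange(b)      # (n_boot, k, b) index array
    boot_means = x[idx].reshape(n_boot, -1).mean(axis=1)
    m = k * b                                   # pseudo-series length
    return m * boot_means.var()

# e.g., compare candidate block orders on a series of length n:
# mbb_variance_of_mean(x, b=int(len(x) ** (1 / 3)))   # weak-dependence order
# mbb_variance_of_mean(x, b=int(len(x) ** 0.5))       # strong-dependence order
```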
A comparison of block and semi-parametric bootstrap methods for variance estimation in spatial statistics
Efron (1979) introduced the bootstrap method for independent data, but it cannot be applied directly to spatial data because of their dependence. For data that are correlated across spatial locations, the moving block bootstrap is commonly used to estimate the precision of estimators. The precision of moving block bootstrap estimators depends on the block size, which is difficult to select; moreover, the moving block bootstrap tends to underestimate variances. In this paper, we first use the semi-parametric bootstrap, which exploits an estimate of the spatial correlation structure, to estimate the precision of estimators in spatial data analysis. We then compare the semi-parametric bootstrap with the moving block bootstrap for variance estimation in a simulation study. Finally, we apply the semi-parametric bootstrap to the coal-ash data.
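A hedged sketch of the semi-parametric idea under an assumed exponential covariance model (the covariance family, the parameter names sill and range_par, and the choice of statistic, the field mean, are illustrative assumptions, not the paper's specification): whiten the field with the fitted covariance, resample the whitened residuals i.i.d., and re-color.

```python
import numpy as np

def semiparam_boot_var(z, coords, sill, range_par, n_boot=500, seed=None):
    """Semi-parametric bootstrap variance of the spatial mean.

    Assumes an exponential covariance C(h) = sill * exp(-h / range_par),
    with sill and range_par estimated beforehand (e.g., by a variogram fit);
    coords is an (n, 2) array of observation locations.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z, float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    L = np.linalg.cholesky(sill * np.exp(-h / range_par)
                           + 1e-8 * np.eye(len(z)))
    resid = np.linalg.solve(L, z - z.mean())    # approximately uncorrelated
    stats = np.empty(n_boot)
    for i in range(n_boot):
        e = rng.choice(resid, size=len(resid), replace=True)
        stats[i] = (z.mean() + L @ e).mean()    # statistic on pseudo-field
    return stats.var()
```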
Goodness of fit tests for a class of Markov random field models
This paper develops goodness of fit statistics that can be used to formally
assess Markov random field models for spatial data, when the model
distributions are discrete or continuous and potentially parametric. Test
statistics are formed from generalized spatial residuals which are collected
over groups of nonneighboring spatial observations, called concliques. Under a
hypothesized Markov model structure, spatial residuals within each conclique
are shown to be independent and identically distributed as uniform variables.
The information from a series of concliques can then be pooled into goodness of
fit statistics. Under some conditions, large sample distributions of these
statistics are explicitly derived for testing both simple and composite
hypotheses, where the latter involves additional parametric estimation steps.
The distributional results are verified through simulation, and a data example
illustrates the method for model assessment. Published in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/11-AOS948.
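A minimal sketch of the pooling step, assuming the generalized spatial residuals have already been computed and grouped by conclique (the helper name and the Kolmogorov-Smirnov pooling rule are illustrative choices; the paper derives formal large-sample statistics):

```python
import numpy as np
from scipy.stats import kstest

def conclique_gof_stat(residuals_by_conclique):
    """Max of per-conclique KS distances to Uniform(0,1).

    residuals_by_conclique: list of arrays of generalized spatial
    residuals, one array per conclique. Under the hypothesized Markov
    random field model each array is i.i.d. Uniform(0,1), so large
    values of the pooled statistic signal lack of fit.
    """
    d = [kstest(np.asarray(r), "uniform").statistic
         for r in residuals_by_conclique]
    return max(d)
```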