
    Adaptive density estimation under dependence

    Assume that $(X_t)_{t\in\mathbb{Z}}$ is a real-valued time series admitting a common marginal density $f$ with respect to Lebesgue measure. Donoho et al. (1996) propose a near-minimax method based on wavelet thresholding to estimate $f$ on a compact set in an independent and identically distributed setting. The aim of the present work is to extend these results to general weakly dependent contexts. Weak dependence assumptions are expressed as decreasing bounds on covariance terms and are detailed for different examples. The threshold levels in the estimators $\widehat f_n$ depend on the weak dependence properties of the sequence $(X_t)_{t\in\mathbb{Z}}$ through a constant. If these properties are unknown, we propose cross-validation procedures to obtain new estimators. These procedures are illustrated via simulations of dynamical systems and non-causal infinite moving averages. We also discuss the efficiency of our estimators with respect to the decrease of the covariance bounds.
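    To fix ideas, a thresholded wavelet density estimate can be assembled from empirical coefficients. The sketch below is illustrative, not the paper's construction: it uses Haar wavelets on [0, 1] and the i.i.d.-style threshold level sqrt(log n / n); under weak dependence the constant in front of this level would depend on the covariance bounds (unknown here, so it is simply set to 1).

```python
import numpy as np

def haar_phi(x):
    # Haar father wavelet (scaling function): indicator of [0, 1)
    return ((x >= 0) & (x < 1)).astype(float)

def haar_psi(x):
    # Haar mother wavelet on [0, 1)
    return haar_phi(2 * x) - haar_phi(2 * x - 1)

def wavelet_density(sample, j0=2, j1=5, grid=None, threshold=None):
    """Hard-thresholded Haar wavelet density estimate on [0, 1].

    Empirical coefficients are sample means of the basis functions;
    detail coefficients below `threshold` are set to zero.
    """
    n = len(sample)
    if grid is None:
        grid = np.linspace(0, 1, 256, endpoint=False)
    if threshold is None:
        # i.i.d.-style level; under weak dependence the multiplicative
        # constant would depend on the covariance decay (assumed 1 here)
        threshold = np.sqrt(np.log(n) / n)
    est = np.zeros_like(grid)
    # coarse approximation at resolution j0
    for k in range(2 ** j0):
        phi_jk = lambda x: 2 ** (j0 / 2) * haar_phi(2 ** j0 * x - k)
        est += phi_jk(sample).mean() * phi_jk(grid)
    # hard-thresholded details for j0 <= j < j1
    for j in range(j0, j1):
        for k in range(2 ** j):
            psi_jk = lambda x: 2 ** (j / 2) * haar_psi(2 ** j * x - k)
            beta = psi_jk(sample).mean()
            if abs(beta) > threshold:
                est += beta * psi_jk(grid)
    return grid, est

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=2000)   # a smooth density supported on [0, 1]
grid, f_hat = wavelet_density(x)
print(f_hat.mean())             # grid mean = integral over [0, 1]; close to 1
```

    Hard thresholding keeps only the empirical detail coefficients that exceed the noise level, which is what yields near-minimax adaptivity over smoothness classes in the i.i.d. case.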

    Aggregation of predictors for nonstationary sub-linear processes and online adaptive forecasting of time varying autoregressive processes

    In this work, we study the problem of aggregating a finite number of predictors for nonstationary sub-linear processes. We provide oracle inequalities relying essentially on three ingredients: (1) a uniform bound on the $\ell^1$ norm of the time-varying sub-linear coefficients, (2) a Lipschitz assumption on the predictors, and (3) moment conditions on the noise appearing in the linear representation. Two kinds of aggregation are considered, giving rise to different moment conditions on the noise and to more or less sharp oracle inequalities. We apply this approach to derive an adaptive predictor for locally stationary time-varying autoregressive (TVAR) processes. It is obtained by aggregating a finite number of well-chosen predictors, each of them enjoying an optimal minimax convergence rate under specific smoothness conditions on the TVAR coefficients. We show that the aggregated predictor achieves a minimax rate while adapting to the unknown smoothness. To prove this result, a lower bound is established for the minimax rate of the prediction risk for the TVAR process. Numerical experiments complete this study. An important feature of this approach is that the aggregated predictor can be computed recursively and is thus applicable in an online prediction context. Comment: Published at http://dx.doi.org/10.1214/15-AOS1345 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
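    As a hedged illustration of the recursive, online character of such aggregation (not the paper's exact scheme or constants), a convex exponentially weighted aggregation of predictors can be sketched as follows; the learning rate `eta` and the toy experts are illustrative assumptions.

```python
import numpy as np

def ewa_forecaster(expert_preds, y, eta=1.0):
    """Online exponentially weighted aggregation of K predictors.

    expert_preds: (T, K) array; expert_preds[t, k] is expert k's forecast
    of y[t]. Weights are updated recursively from past squared losses,
    so the aggregated predictor runs online.
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)                  # log-weights, uniform at start
    agg = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())  # stabilized softmax of log-weights
        w /= w.sum()
        agg[t] = w @ expert_preds[t]     # convex aggregation of forecasts
        log_w -= eta * (expert_preds[t] - y[t]) ** 2  # EWA weight update
    return agg

# toy check: one accurate expert, one useless one
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 10, 200))
experts = np.column_stack([y + 0.01 * rng.standard_normal(200),  # good
                           rng.standard_normal(200)])            # bad
agg = ewa_forecaster(experts, y, eta=2.0)
print(np.mean((agg - y) ** 2))  # small: weight concentrates on the good expert
```

    Because only the cumulative losses enter the weights, each forecast costs O(K) work per time step, which is the online/recursive property emphasized in the abstract.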

    Semiparametric stationarity and fractional unit roots tests based on data-driven multidimensional increment ratio statistics

    In this paper, we show that the central limit theorem (CLT) satisfied by the data-driven Multidimensional Increment Ratio (MIR) estimator of the memory parameter $d$, established in Bardet and Dola (2012) for $d \in (-0.5, 0.5)$, can be extended to a semiparametric class of Gaussian fractionally integrated processes with memory parameter $d \in (-0.5, 1.25)$. Since the asymptotic variance of this CLT can be estimated, data-driven MIR tests can be built for the two cases of stationarity and non-stationarity: two tests are constructed distinguishing the hypotheses $d < 0.5$ and $d \ge 0.5$, as well as a fractional unit roots test distinguishing the case $d = 1$ from the case $d < 1$. Simulations on numerous kinds of short-memory, long-memory and non-stationary processes show both the high accuracy and the robustness of this MIR estimator compared to the usual semiparametric estimators. They also attest to the reasonable efficiency of the MIR tests compared to other usual stationarity tests or fractional unit roots tests. Keywords: Gaussian fractionally integrated processes; semiparametric estimators of the memory parameter; test of long memory; stationarity test; fractional unit roots test. Comment: arXiv admin note: substantial text overlap with arXiv:1207.245
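    To fix ideas, a basic one-scale increment ratio statistic can be computed as below. This is only a sketch of the ingredient underlying the MIR approach, not Bardet and Dola's data-driven multidimensional procedure: estimating $d$ would additionally require inverting the (monotone) map $d \mapsto E[\mathrm{IR}]$, which is omitted here, and the order and scaling choices are illustrative assumptions.

```python
import numpy as np

def increment_ratio(y, order=2):
    """Increment Ratio (IR) statistic of a series, using order-th differences.

    For each k it compares |sum of two consecutive increments| with the sum
    of their absolute values, then averages. Smoother paths (larger memory
    parameter d) make consecutive increments agree in sign more often, so
    the statistic increases with d -- the monotonicity the MIR estimator
    exploits (inversion of E[IR] as a function of d omitted in this sketch).
    """
    d = np.diff(y, n=order)
    num = np.abs(d[:-1] + d[1:])
    den = np.abs(d[:-1]) + np.abs(d[1:])
    mask = den > 0                      # skip degenerate zero denominators
    return np.mean(num[mask] / den[mask])

rng = np.random.default_rng(2)
white = rng.standard_normal(10_000)     # short memory (d = 0)
walk = np.cumsum(white)                 # unit root (d = 1)
print(increment_ratio(white), increment_ratio(walk))  # walk's value is larger
```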

    The importance of scale in spatially varying coefficient modeling

    While spatially varying coefficient (SVC) models have attracted considerable attention in applied science, they have been criticized as being unstable. The objective of this study is to show that capturing the "spatial scale" of each data relationship is crucially important to make SVC modeling more stable and, in doing so, adds flexibility. Here, the analytical properties of six SVC models are summarized in terms of their characterization of scale. Models are examined through a series of Monte Carlo simulation experiments to assess the extent to which spatial scale influences model stability and the accuracy of their SVC estimates. The following models are studied: (i) geographically weighted regression (GWR) with a fixed distance bandwidth, (ii) GWR with an adaptive distance bandwidth (GWRa), (iii) flexible bandwidth GWR (FB-GWR) with fixed distance bandwidths, (iv) FB-GWR with adaptive distance bandwidths (FB-GWRa), (v) eigenvector spatial filtering (ESF), and (vi) random effects ESF (RE-ESF). Results reveal that the SVC models designed to capture scale dependencies in local relationships (FB-GWR, FB-GWRa and RE-ESF) most accurately estimate the simulated SVCs, with RE-ESF being the most computationally efficient. Conversely, GWR and ESF, where SVC estimates are naively assumed to operate at the same spatial scale for each relationship, perform poorly. Results also confirm that the adaptive bandwidth GWR models (GWRa and FB-GWRa) are superior to their fixed bandwidth counterparts (GWR and FB-GWR).
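    A minimal sketch of the simplest model in this list, fixed-bandwidth GWR with a Gaussian kernel, illustrates the single-scale restriction that the flexible-bandwidth variants relax; the bandwidth value and toy data below are illustrative assumptions, not the study's experimental design.

```python
import numpy as np

def gwr_fixed(coords, X, y, bandwidth):
    """Fixed-bandwidth geographically weighted regression (GWR) sketch.

    At each location, a weighted least-squares fit is computed with
    Gaussian kernel weights; every coefficient is thereby assumed to vary
    at the single spatial scale set by `bandwidth`.
    """
    n, p = X.shape
    betas = np.empty((n, p))
    for i in range(n):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))         # Gaussian kernel weights
        Xw = X * w[:, None]                            # W X
        betas[i] = np.linalg.solve(Xw.T @ X, Xw.T @ y) # (X'WX)^-1 X'W y
    return betas

# toy surface: constant intercept, slope drifting with the x-coordinate
rng = np.random.default_rng(3)
coords = rng.uniform(0, 1, size=(400, 2))
X = np.column_stack([np.ones(400), rng.standard_normal(400)])
true_slope = 1.0 + coords[:, 0]                        # spatially varying coefficient
y = 0.5 * X[:, 0] + X[:, 1] * true_slope + 0.1 * rng.standard_normal(400)
betas = gwr_fixed(coords, X, y, bandwidth=0.2)
print(np.corrcoef(betas[:, 1], true_slope)[0, 1])      # local slopes track the drift
```

    Because one `bandwidth` governs all coefficients, a relationship operating at a different spatial scale is over- or under-smoothed, which is the instability the flexible-bandwidth and RE-ESF models address.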