
    Indirect Inference for Locally Stationary Models

    We propose the use of indirect inference estimation to conduct inference in complex locally stationary models. We develop a local indirect inference algorithm and establish the asymptotic properties of the proposed estimator. Due to the nonparametric nature of locally stationary models, the resulting indirect inference estimator exhibits nonparametric rates of convergence. We validate our methodology with simulation studies of a locally stationary moving average model and of a new locally stationary multiplicative stochastic volatility model. Using this indirect inference methodology and the new locally stationary volatility model, we obtain evidence of nonlinear, time-varying volatility trends in monthly returns on several Fama-French portfolios.
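As an illustration of the indirect inference principle this abstract builds on, the following is a minimal sketch for a toy, fully stationary MA(1) model with the lag-1 autocorrelation as the auxiliary statistic: simulate the structural model, fit the cheap auxiliary statistic to both real and simulated data, and pick the structural parameter whose simulated statistic matches the observed one. The MA(1) model, the auxiliary statistic, and the grid search are our illustrative choices, not the local algorithm of the paper.

```python
import numpy as np

def simulate_ma1(theta, n, rng):
    # MA(1) model: y_t = e_t + theta * e_{t-1}, with e_t iid N(0, 1)
    e = rng.standard_normal(n + 1)
    return e[1:] + theta * e[:-1]

def auxiliary_stat(y):
    # Easy-to-fit auxiliary model: the lag-1 sample autocorrelation
    y = y - y.mean()
    return (y[1:] @ y[:-1]) / (y @ y)

def indirect_inference(y_obs, n_sim=20, seed=12345):
    # Match the auxiliary statistic of the data to the average auxiliary
    # statistic of simulated paths; re-seeding per theta gives common
    # random numbers, which makes the criterion smooth in theta.
    beta_obs = auxiliary_stat(y_obs)
    n = len(y_obs)

    def loss(theta):
        rng = np.random.default_rng(seed)
        sims = [auxiliary_stat(simulate_ma1(theta, n, rng))
                for _ in range(n_sim)]
        return (beta_obs - np.mean(sims)) ** 2

    grid = np.linspace(-0.9, 0.9, 181)
    return min(grid, key=loss)

rng = np.random.default_rng(1)
y_obs = simulate_ma1(0.5, 5000, rng)
theta_hat = indirect_inference(y_obs)
```

Because the lag-1 autocorrelation of an MA(1) process is monotone in theta on (-1, 1), the matching step identifies the parameter in this toy setting.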

    Intermittent process analysis with scattering moments

    Scattering moments provide nonparametric models of random processes with stationary increments. They are expected values of random variables computed with a nonexpansive operator, obtained by iteratively applying wavelet transforms and modulus nonlinearities, which preserves the variance. First- and second-order scattering moments are shown to characterize intermittency and self-similarity properties of multiscale processes. Scattering moments of Poisson processes, fractional Brownian motions, L\'{e}vy processes and multifractal random walks are shown to have characteristic decay. The Generalized Method of Simulated Moments is applied to scattering moments to estimate data-generating models. Numerical applications are shown on financial time series and on energy dissipation of turbulent flows. Published in the Annals of Statistics (http://www.imstat.org/aos/), DOI: http://dx.doi.org/10.1214/14-AOS1276, by the Institute of Mathematical Statistics (http://www.imstat.org).
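A minimal sketch of first-order scattering-style moments, using a plain Haar wavelet as a stand-in for the authors' wavelet filters: compute wavelet coefficients at dyadic scales, take the expected modulus at each scale, and read off self-similarity from how those moments grow across scales. For Brownian motion the log-moments grow linearly in the scale index with slope equal to the Hurst index 1/2. All function names here are ours, and this omits the second-order moments that capture intermittency.

```python
import numpy as np

def haar_coeffs(x, j):
    # Haar wavelet coefficients at scale 2^j: differences of adjacent
    # block averages of width 2^j (computed at all overlapping positions)
    w = 2 ** j
    s = np.concatenate(([0.0], np.cumsum(x)))
    avg = (s[w:] - s[:-w]) / w
    return avg[w:] - avg[:-w]

def first_order_scattering(x, max_j=6):
    # S1(j) = E[ |W_j x| ]: expected modulus of the wavelet transform
    return np.array([np.abs(haar_coeffs(x, j)).mean()
                     for j in range(1, max_j + 1)])

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(2 ** 16))   # discrete Brownian motion
S1 = first_order_scattering(bm)
# Self-similarity readout: log2 S1(j) is roughly linear in j, with slope
# close to the Hurst index (1/2 for Brownian motion)
slope = np.polyfit(np.arange(1, 7), np.log2(S1), 1)[0]
```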

    Large-sample study of the kernel density estimators under multiplicative censoring

    The multiplicative censoring model introduced in Vardi [Biometrika 76 (1989) 751--761] is an incomplete data problem whereby two independent samples from the lifetime distribution $G$, $\mathcal{X}_m=(X_1,\ldots,X_m)$ and $\mathcal{Z}_n=(Z_1,\ldots,Z_n)$, are observed subject to a form of coarsening. Specifically, the sample $\mathcal{X}_m$ is fully observed while $\mathcal{Y}_n=(Y_1,\ldots,Y_n)$ is observed instead of $\mathcal{Z}_n$, where $Y_i=U_iZ_i$ and $(U_1,\ldots,U_n)$ is an independent sample from the standard uniform distribution. Vardi showed that this model unifies several important statistical problems, such as the deconvolution of an exponential random variable, estimation under a decreasing density constraint, and an estimation problem in renewal processes. In this paper, we establish the large-sample properties of kernel density estimators under the multiplicative censoring model. We first construct a strong approximation for the process $\sqrt{k}(\hat{G}-G)$, where $\hat{G}$ is a solution of the nonparametric score equation based on $(\mathcal{X}_m,\mathcal{Y}_n)$ and $k=m+n$ is the total sample size. Using this strong approximation and a result on the global modulus of continuity, we establish conditions for the strong uniform consistency of kernel density estimators. We also use this strong approximation to study the weak convergence and integrated squared error properties of these estimators. We conclude by extending our results to the setting of length-biased sampling. Published in the Annals of Statistics (http://www.imstat.org/aos/), DOI: http://dx.doi.org/10.1214/11-AOS954, by the Institute of Mathematical Statistics (http://www.imstat.org).
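To illustrate the coarsening mechanism, here is a hedged sketch that generates data under the multiplicative censoring model and recovers the lifetime density via the naive inversion $g(y) = -y\,f_Y'(y)$, which follows from the density relation $f_Y(y)=\int_y^\infty g(z)/z\,dz$ for $Y=UZ$, using a kernel estimate of the derivative. This plug-in inversion is illustrative only; it is not the score-equation estimator studied in the paper, and its names and bandwidth are our choices.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kde_deriv(y_sample, y, h):
    # Derivative of a Gaussian kernel density estimate, evaluated at y
    u = (y[:, None] - y_sample[None, :]) / h
    return np.mean(-u * gaussian_kernel(u), axis=1) / h ** 2

def density_from_censored(y_sample, grid, h):
    # f_Y(y) = int_y^inf g(z)/z dz  implies  g(y) = -y * f_Y'(y),
    # so plug in a kernel estimate of f_Y' and truncate at zero
    return np.maximum(-grid * kde_deriv(y_sample, grid, h), 0.0)

rng = np.random.default_rng(0)
z = rng.exponential(1.0, size=50000)  # hidden lifetimes Z ~ Exp(1), g(z) = e^{-z}
y = rng.uniform(size=50000) * z       # observed coarsened sample Y_i = U_i * Z_i
grid = np.linspace(0.5, 2.0, 4)
g_hat = density_from_censored(y, grid, h=0.1)
```

For the Exp(1) example, the recovered values should be close to $e^{-y}$ on the grid.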

    Parameter tuning in pointwise adaptation using a propagation approach

    This paper discusses the problem of adaptive estimation of a univariate object, such as the value of a regression function at a given point or a linear functional in a linear inverse problem. We consider an adaptive procedure originating from Lepski [Theory Probab. Appl. 35 (1990) 454--466] that selects in a data-driven way one estimate out of a given class of estimates ordered by their variability. A serious problem with using this and similar procedures is the choice of tuning parameters such as thresholds. Numerical results show that the theoretically recommended proposals appear to be too conservative and lead to a strong oversmoothing effect. A careful choice of the parameters of the procedure is extremely important for achieving reasonable estimation quality. The main contribution of this paper is a new approach to choosing the parameters of the procedure by prescribing the behavior of the resulting estimate in a simple parametric situation. We establish a non-asymptotic "oracle" bound, which shows that the estimation risk is, up to a logarithmic multiplier, equal to the risk of the "oracle" estimate that is optimally selected from the given family. A numerical study demonstrates good performance of the resulting procedure in a number of simulated examples. Published in the Annals of Statistics (http://www.imstat.org/aos/), DOI: http://dx.doi.org/10.1214/08-AOS607, by the Institute of Mathematical Statistics (http://www.imstat.org).
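The following is a toy sketch of a Lepski-type selection rule at a single point: compute estimates over an increasing grid of bandwidths and accept the largest bandwidth whose estimate agrees with every smaller-bandwidth estimate up to a threshold times the combined noise levels. The threshold z is exactly the tuning parameter whose choice the paper addresses; here the Nadaraya-Watson estimator, the value z = 2, and the variance proxy are our illustrative choices, not the paper's propagation-calibrated procedure.

```python
import numpy as np

def lepski_select(x, y, x0, bandwidths, sigma, z=2.0):
    # Nadaraya-Watson estimates of E[y | x = x0] for an increasing grid of
    # bandwidths, with a rough noise-level proxy for each estimate
    ests, stds = [], []
    for h in bandwidths:
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        ests.append(np.sum(w * y) / np.sum(w))
        stds.append(sigma * np.sqrt(np.sum(w ** 2)) / np.sum(w))
    # Lepski-type rule: starting from the largest bandwidth (smallest
    # variance), accept it only if it agrees with every smaller-bandwidth
    # estimate up to z times the combined standard deviations
    for k in range(len(bandwidths) - 1, -1, -1):
        if all(abs(ests[k] - ests[j]) <= z * (stds[j] + stds[k])
               for j in range(k)):
            return bandwidths[k], ests[k]

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)
sigma = 0.3
y = np.sin(3 * x) + sigma * rng.standard_normal(2000)
bandwidths = np.array([0.02, 0.05, 0.1, 0.2, 0.5])
h_sel, est = lepski_select(x, y, 0.0, bandwidths, sigma)
```

A larger z makes the rule more conservative (larger bandwidths accepted, more smoothing), which is the oversmoothing effect the abstract describes for theoretically recommended thresholds.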

    Adaptation to lowest density regions with application to support recovery

    A scheme for locally adaptive bandwidth selection is proposed which sensitively shrinks the bandwidth of a kernel estimator at lowest density regions, such as the support boundary, which are unknown to the statistician. In the case of a H\"{o}lder continuous density, this locally minimax-optimal bandwidth is shown to be smaller than the usual rate, even in the case of homogeneous smoothness. A new type of risk bound with respect to a density-dependent standardized loss of this estimator is established. This bound is fully nonasymptotic and allows one to deduce convergence rates at lowest density regions that can be substantially faster than $n^{-1/2}$. It is complemented by a weighted minimax lower bound which splits into two regimes depending on the value of the density. The new estimator adapts to the second regime, and it is shown that simultaneous adaptation to the fastest regime is not possible in principle as long as the H\"{o}lder exponent is unknown. Consequences for plug-in rules for support recovery are worked out in detail. In contrast to those based on classical density estimators, the plug-in rules based on the new construction are minimax-optimal, up to a logarithmic factor. Published in the Annals of Statistics (http://www.imstat.org/aos/), DOI: http://dx.doi.org/10.1214/15-AOS1366, by the Institute of Mathematical Statistics (http://www.imstat.org).
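A hedged sketch of a plug-in support-recovery rule in the spirit of this abstract: a pilot kernel density estimate, a heuristic bandwidth shrinkage where the pilot density is low (near the unknown support boundary), and a thresholding step that keeps points where the resulting estimate is large enough. The shrinkage exponent, the clipping, and the threshold tau are all illustrative stand-ins, not the minimax-optimal construction of the paper.

```python
import numpy as np

def kde(sample, x, h):
    # Gaussian kernel density estimate of the sample, evaluated at x
    u = (x[:, None] - sample[None, :]) / h
    return np.mean(np.exp(-0.5 * u ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

def plugin_support(sample, grid, h, tau):
    # Plug-in support recovery: keep grid points where a kernel density
    # estimate with a locally shrunk bandwidth exceeds the threshold tau
    f_pilot = kde(sample, grid, h)
    # Heuristic: shrink the bandwidth where the pilot density is low
    # (an illustrative stand-in for the paper's locally adaptive choice)
    h_loc = h * np.clip(f_pilot / f_pilot.max(), 0.2, 1.0) ** 0.5
    f_hat = np.array([kde(sample, np.array([g]), hl)[0]
                      for g, hl in zip(grid, h_loc)])
    return f_hat > tau

rng = np.random.default_rng(0)
sample = rng.uniform(-1.0, 1.0, 5000)   # true support is [-1, 1]
grid = np.linspace(-2.0, 2.0, 81)
inside = plugin_support(sample, grid, h=0.1, tau=0.25)
```

Shrinking the bandwidth near the boundary reduces the spill-over of mass outside the true support, which is why the thresholded set tracks [-1, 1] more tightly than a fixed-bandwidth plug-in rule would.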