64 research outputs found

    Nonparametric regression for locally stationary random fields under stochastic sampling design

    In this study, we develop an asymptotic theory of nonparametric regression for locally stationary random fields (LSRFs) $\{\mathbf{X}_{\mathbf{s}, A_n} : \mathbf{s} \in R_n\}$ in $\mathbb{R}^p$ observed at irregularly spaced locations in $R_n = [0, A_n]^d \subset \mathbb{R}^d$. We first derive the uniform convergence rate of general kernel estimators, followed by the asymptotic normality of an estimator for the mean function of the model. Moreover, we consider additive models to avoid the curse of dimensionality arising from the dependence of the convergence rate of estimators on the number of covariates. Subsequently, we derive the uniform convergence rate and joint asymptotic normality of the estimators for the additive functions. We also introduce approximately $m_n$-dependent RFs to provide examples of LSRFs. We find that these RFs include a wide class of Lévy-driven moving average RFs. (Comment: 50 pages)
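
    As a rough illustration of the kind of estimator studied here, the following is a minimal sketch (not the authors' code) of a kernel-weighted local mean over irregularly spaced sites, with locations rescaled by $A_n$ as in the locally stationary setup; the bandwidth, the product Epanechnikov kernel, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch: kernel-weighted local mean at a rescaled location s0 / A_n,
# for observations taken at irregularly spaced sites in [0, A_n]^d.
import numpy as np

def epanechnikov(t):
    """Product Epanechnikov kernel on R^d, applied coordinate-wise."""
    w = 0.75 * (1.0 - t**2) * (np.abs(t) <= 1.0)
    return np.prod(w, axis=-1)

def kernel_mean_estimate(sites, y, s0, A_n, h):
    """Locally weighted mean at the rescaled location s0 / A_n.

    sites : (n, d) sampling locations in [0, A_n]^d
    y     : (n,)   responses X_{s, A_n}
    h     : bandwidth on the rescaled [0, 1]^d scale
    """
    u = sites / A_n                       # rescale locations to [0, 1]^d
    u0 = np.asarray(s0) / A_n
    w = epanechnikov((u - u0) / h)
    return np.sum(w * y) / np.sum(w)

# toy usage with synthetic data on [0, 50]^2
rng = np.random.default_rng(0)
A_n, n = 50.0, 2000
sites = rng.uniform(0.0, A_n, size=(n, 2))
mean_fn = lambda u: np.sin(2 * np.pi * u[:, 0]) + u[:, 1]   # smooth mean in rescaled coords
y = mean_fn(sites / A_n) + rng.normal(scale=0.3, size=n)
print(kernel_mean_estimate(sites, y, s0=(25.0, 25.0), A_n=A_n, h=0.15))  # roughly 0.5
```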

    On the estimation of locally stationary functional time series

    This paper develops an asymptotic theory for estimating the time-varying characteristics of locally stationary functional time series. We introduce a kernel-based method to estimate the time-varying covariance operator and the time-varying mean function of a locally stationary functional time series. Subsequently, we derive the convergence rates of the kernel estimator of the covariance operator and of the associated eigenvalues and eigenfunctions. We also establish a central limit theorem for the kernel-based locally weighted sample mean. As applications of our results, we discuss the prediction of locally stationary functional time series and methods for testing the equality of time-varying mean functions in two functional samples. (Comment: 37 pages)
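
    For intuition, here is a minimal sketch, under illustrative assumptions (curves observed on a common grid, Epanechnikov weights, a hand-picked bandwidth), of a kernel-weighted local sample mean and covariance for functional data indexed by rescaled time; it is not the paper's estimator verbatim.

```python
# Minimal sketch: locally weighted mean curve and covariance matrix at
# rescaled time u for functional data X_1, ..., X_n on a common grid.
import numpy as np

def local_mean_cov(X, u, h):
    """X : (n, m) array; row t is curve X_t on an m-point grid. Returns (mu, cov)."""
    n, m = X.shape
    t = (np.arange(1, n + 1) / n - u) / h
    w = np.maximum(0.75 * (1 - t**2), 0.0)      # Epanechnikov weights
    w = w / w.sum()
    mu = w @ X                                   # local mean function
    Xc = X - mu
    cov = (w[:, None] * Xc).T @ Xc               # local covariance (m x m)
    return mu, cov

# toy usage: slowly varying amplitude around a fixed shape
rng = np.random.default_rng(1)
n, m = 500, 40
grid = np.linspace(0, 1, m)
u_all = np.arange(1, n + 1) / n
X = np.sin(2 * np.pi * grid)[None, :] * (1 + u_all)[:, None] \
    + rng.normal(scale=0.2, size=(n, m))
mu, cov = local_mean_cov(X, u=0.5, h=0.1)
eigvals = np.linalg.eigvalsh(cov)[::-1]          # local eigenvalues, largest first
print(mu.shape, np.round(eigvals[:3], 3))
```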

    Subsampling inference for nonparametric extremal conditional quantiles

    This paper proposes a subsampling inference method for extreme conditional quantiles based on a self-normalized version of a local estimator of conditional quantiles, such as the local linear quantile regression estimator. The proposed method circumvents the difficulty of estimating nuisance parameters in the limiting distribution of the local estimator. A simulation study and an empirical example illustrate the usefulness of our subsampling inference for investigating extremal phenomena.
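
    To illustrate the generic subsampling idea (not the paper's self-normalized statistic), the sketch below builds a subsampling confidence interval around a kernel-weighted conditional quantile estimate; the estimator, the assumed $\sqrt{bh}$ scaling of the root, the subsample size, and the toy data are all illustrative assumptions.

```python
# Minimal sketch: subsampling confidence interval for a local conditional quantile.
import numpy as np

def local_quantile(x, y, x0, tau, h):
    """Kernel-weighted empirical tau-quantile of y given x near x0."""
    w = np.maximum(1.0 - ((x - x0) / h) ** 2, 0.0)      # Epanechnikov weights
    keep = w > 0
    order = np.argsort(y[keep])
    cw = np.cumsum(w[keep][order]) / np.sum(w[keep])
    return y[keep][order][np.searchsorted(cw, tau)]

def subsampling_ci(x, y, x0, tau, h, b, n_sub=500, level=0.9, seed=0):
    """Symmetric subsampling CI, assuming a root of the form sqrt(b*h)*(est_b - est_n)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    full = local_quantile(x, y, x0, tau, h)
    roots = []
    for _ in range(n_sub):
        idx = rng.choice(n, size=b, replace=False)
        roots.append(np.sqrt(b * h) * (local_quantile(x[idx], y[idx], x0, tau, h) - full))
    q = np.quantile(np.abs(roots), level)
    return full - q / np.sqrt(n * h), full + q / np.sqrt(n * h)

# toy usage: heavy-tailed noise, 95% conditional quantile at x0 = 0
rng = np.random.default_rng(2)
n = 5000
x = rng.uniform(-1, 1, n)
y = 1 + x + rng.standard_t(df=4, size=n)
print(subsampling_ci(x, y, x0=0.0, tau=0.95, h=0.2, b=400))
```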

    Local polynomial regression for spatial data on $\mathbb{R}^d$

    This paper develops a general asymptotic theory of local polynomial (LP) regression for spatial data observed at irregularly spaced locations in a sampling region $R_n \subset \mathbb{R}^d$. We adopt a stochastic sampling design that can generate irregularly spaced sampling sites in a flexible manner, covering both pure increasing domain and mixed increasing domain frameworks. We first introduce a nonparametric regression model for spatial data defined on $\mathbb{R}^d$ and then establish the asymptotic normality of LP estimators of general order $p \geq 1$. We also propose methods for constructing confidence intervals and establish uniform convergence rates of LP estimators. Our dependence structure conditions on the underlying processes cover a wide class of random fields, such as Lévy-driven continuous autoregressive moving average random fields. As an application of our main results, we discuss a two-sample testing problem for mean functions and their partial derivatives. (Comment: 45 pages)
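
    As a concrete illustration of a local polynomial fit of order $p = 1$, the following sketch computes a local linear estimate and its gradient at a spatial location by weighted least squares; the kernel, bandwidth, and data are illustrative choices, not the paper's recommendations.

```python
# Minimal sketch: local linear (order p = 1) regression at a location s0 in R^d.
import numpy as np

def local_linear(sites, y, s0, h):
    """Local linear fit at s0.

    sites : (n, d) irregularly spaced sampling locations
    y     : (n,)   responses
    h     : bandwidth
    Returns (mu_hat, grad_hat): estimated mean and gradient at s0.
    """
    z = (sites - np.asarray(s0)) / h
    w = np.prod(np.maximum(0.75 * (1 - z**2), 0.0), axis=1)   # product Epanechnikov
    X = np.hstack([np.ones((len(y), 1)), sites - np.asarray(s0)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)  # weighted LS
    return beta[0], beta[1:]

# toy usage on [0, 10]^2
rng = np.random.default_rng(3)
sites = rng.uniform(0, 10, size=(3000, 2))
y = np.sin(sites[:, 0]) + 0.5 * sites[:, 1] + rng.normal(scale=0.2, size=3000)
mu, grad = local_linear(sites, y, s0=(5.0, 5.0), h=0.8)
print(mu, grad)   # compare with sin(5) + 2.5 ~ 1.54 and gradient (cos(5), 0.5)
```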

    On the uniform convergence of deconvolution estimators from repeated measurements

    This paper studies the uniform convergence rates of Li and Vuong's (1998, Journal of Multivariate Analysis 65, 139-165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31-46) for the classical measurement error model, in which repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for both the error-free variable and the measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491-533), specialized to the measurement error model, our assumptions do not require the existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than those derived in these papers under such weaker conditions.
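
    For readers unfamiliar with the construction, here is a minimal sketch of a Kotlarski-type deconvolution estimator in the spirit of Li and Vuong (1998), with a hard spectral cutoff standing in for the regularization of Comte and Kappus (2015); the grids, cutoff, and toy distributions are assumptions for illustration, not the papers' tuning rules.

```python
# Minimal sketch: estimate the density of X from two repeated measurements
# Y1 = X + e1, Y2 = X + e2 (errors independent of X, mean zero).
import numpy as np

def estimate_cf(y1, y2, t_pos):
    """phi_X(t), t >= 0, via Kotlarski's identity:
    phi_X(t) = exp( int_0^t E[i Y2 e^{i u Y1}] / E[e^{i u Y1}] du )."""
    psi = np.array([np.mean(1j * y2 * np.exp(1j * u * y1)) / np.mean(np.exp(1j * u * y1))
                    for u in t_pos])
    dt = np.diff(t_pos)
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (psi[1:] + psi[:-1]) * dt)))
    return np.exp(integral)

def deconv_density(y1, y2, x_grid, cutoff=3.0, n_t=400):
    """Density of X by Fourier inversion, truncated at |t| <= cutoff (crude regularization)."""
    t_pos = np.linspace(0.0, cutoff, n_t)
    phi = estimate_cf(y1, y2, t_pos)
    w = np.full(n_t, t_pos[1] - t_pos[0])        # trapezoid weights on [0, cutoff]
    w[0] *= 0.5
    w[-1] *= 0.5
    # f(x) = (1/pi) * Re( int_0^cutoff e^{-itx} phi_X(t) dt ), using conjugate symmetry
    integrand = np.real(np.exp(-1j * np.outer(x_grid, t_pos)) * phi)
    return integrand @ w / np.pi

# toy usage: X ~ N(0, 1) with Laplace measurement errors
rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)
y1 = x + rng.laplace(scale=0.5, size=n)
y2 = x + rng.laplace(scale=0.5, size=n)
grid = np.linspace(-2.0, 2.0, 5)
print(np.round(deconv_density(y1, y2, grid), 3))   # compare with the N(0, 1) density
```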

    Hierarchical Regression Discontinuity Design: Pursuing Subgroup Treatment Effects

    Regression discontinuity design (RDD) is widely adopted for causal inference when treatment is determined by a continuous running variable. While treatment effect heterogeneity across subgroups is of interest in many applications, RDD typically suffers from small subgroup-wise sample sizes, which makes the estimation results highly unstable. To solve this issue, we introduce hierarchical RDD (HRDD), a hierarchical Bayes approach for pursuing treatment effect heterogeneity in RDD. A key feature of HRDD is to employ a pseudo-model based on a loss function to estimate subgroup-level treatment effect parameters under RDD, and to assign a hierarchical prior distribution that "borrows strength" from other subgroups. The posterior computation can be carried out easily by a simple Gibbs sampler. We demonstrate the proposed HRDD through simulation and real data analysis, and show that HRDD provides much more stable point and interval estimates than separately applying the standard RDD method to each subgroup. (Comment: 21 pages)
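
    To illustrate the "borrow strength" idea (not the paper's exact pseudo-model), the sketch below runs a Gibbs sampler for a simple normal-normal hierarchical model on subgroup-level RDD point estimates and their standard errors; the priors and toy numbers are illustrative assumptions.

```python
# Minimal sketch: partial pooling of subgroup RDD estimates tau_hat_g with
# standard errors se_g via the hierarchy
#   tau_hat_g ~ N(theta_g, se_g^2),  theta_g ~ N(mu, sigma^2),
# flat prior on mu, inverse-gamma prior on sigma^2, sampled by Gibbs.
import numpy as np

def gibbs_hierarchical(tau_hat, se, n_iter=4000, a0=1.0, b0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    G = len(tau_hat)
    mu, sigma2 = tau_hat.mean(), tau_hat.var() + 1e-6
    draws = np.empty((n_iter, G))
    for it in range(n_iter):
        # theta_g | rest: precision-weighted combination of tau_hat_g and mu
        prec = 1.0 / se**2 + 1.0 / sigma2
        mean = (tau_hat / se**2 + mu / sigma2) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # mu | rest (flat prior)
        mu = rng.normal(theta.mean(), np.sqrt(sigma2 / G))
        # sigma2 | rest (conjugate inverse-gamma update)
        a = a0 + G / 2.0
        b = b0 + 0.5 * np.sum((theta - mu) ** 2)
        sigma2 = 1.0 / rng.gamma(a, 1.0 / b)
        draws[it] = theta
    return draws

# toy usage: five subgroups with noisy separate-RDD estimates
tau_hat = np.array([0.8, 1.4, 0.2, 1.1, 0.9])
se = np.array([0.5, 0.6, 0.7, 0.4, 0.5])
post = gibbs_hierarchical(tau_hat, se)[1000:]      # drop burn-in
print(post.mean(axis=0), np.quantile(post, [0.05, 0.95], axis=0))
```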