    Optimal Uniform Convergence Rates for Sieve Nonparametric Instrumental Variables Regression

    We study the problem of nonparametric regression when the regressor is endogenous, an important nonparametric instrumental variables (NPIV) regression problem in econometrics and a difficult ill-posed inverse problem with unknown operator in statistics. We first establish a general upper bound on the sup-norm (uniform) convergence rate of a sieve estimator, allowing for endogenous regressors and weakly dependent data. This result leads to the optimal sup-norm convergence rates for spline and wavelet least squares regression estimators under weakly dependent data and heavy-tailed error terms. The upper bound also yields sup-norm convergence rates for sieve NPIV estimators under i.i.d. data: the rates coincide with the known optimal $L^2$-norm rates for severely ill-posed problems, and are a power of $\log(n)$ slower than the optimal $L^2$-norm rates for mildly ill-posed problems. We then establish the minimax risk lower bound in sup-norm loss, which coincides with our upper bounds on sup-norm rates for the spline and wavelet sieve NPIV estimators. This sup-norm rate optimality provides another justification for the wide application of sieve NPIV estimators. Useful results on weakly dependent random matrices are also provided.
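
    A minimal numpy sketch of the sieve NPIV idea summarised above, assuming i.i.d. data and using polynomial bases as a stand-in for the spline and wavelet sieves the paper analyses; the function name and all constants are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def sieve_npiv(y, x, w, J=5, K=7):
        # Sieve NPIV estimate of h in Y = h(X) + U with E[U | W] = 0.
        # Polynomial bases stand in for spline/wavelet sieves:
        # J terms approximate h, K >= J terms span the instrument space.
        Psi = np.vander(x, J, increasing=True)      # sieve basis for h(X)
        B = np.vander(w, K, increasing=True)        # instrument basis
        P = B @ np.linalg.pinv(B.T @ B) @ B.T       # projection onto span(B)
        c, *_ = np.linalg.lstsq(P @ Psi, P @ y, rcond=None)
        return lambda t: np.vander(np.atleast_1d(t), J, increasing=True) @ c

    # Simulated endogenous design: X and the error share the shock V.
    rng = np.random.default_rng(0)
    w = rng.uniform(-1.0, 1.0, 2000)
    v = rng.normal(size=2000)
    x = np.tanh(w + v)                  # endogenous regressor
    y = np.sin(np.pi * x) + 0.5 * v     # error 0.5*V satisfies E[U | W] = 0
    h_hat = sieve_npiv(y, x, w)
    ```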

    On rate optimality for ill-posed inverse problems in econometrics

    In this paper, we clarify the relations between the existing sets of regularity conditions for convergence rates of nonparametric indirect regression (NPIR) and nonparametric instrumental variables (NPIV) regression models. We establish minimax risk lower bounds in mean integrated squared error loss for the NPIR and NPIV models under two basic regularity conditions that allow for both mildly ill-posed and severely ill-posed cases. We show that both a simple projection estimator for the NPIR model and a sieve minimum distance estimator for the NPIV model can achieve the minimax risk lower bounds, and are rate-optimal uniformly over a large class of structure functions, allowing for mildly ill-posed and severely ill-posed cases.
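
    For orientation, with structure functions of smoothness $p$ and an operator whose singular values decay polynomially of degree $a$ (mildly ill-posed) or exponentially (severely ill-posed), the benchmark minimax rates in mean integrated squared error take the familiar forms below; this is the standard textbook statement of the two regimes, not the paper's exact theorem.

    ```latex
    \inf_{\hat h}\ \sup_{h \in \mathcal{H}(p)}
      \mathbb{E}\,\lVert \hat h - h \rVert_{L^2}^2 \;\asymp\;
      \begin{cases}
        n^{-2p/(2p + 2a + 1)} & \text{mildly ill-posed,}\\[2pt]
        (\log n)^{-2p/a}      & \text{severely ill-posed.}
      \end{cases}
    ```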

    Spectral calibration of exponential Lévy models

    We investigate the problem of calibrating an exponential Lévy model based on market prices of vanilla options. We show that this inverse problem is in general severely ill-posed and we derive exact minimax rates of convergence. The estimation procedure we propose is based on the explicit inversion of the option price formula in the spectral domain and a cut-off scheme for high frequencies as regularisation.
    Keywords: European option, jump diffusion, minimax rates, severely ill-posed, nonlinear inverse problem, spectral cut-off
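
    A minimal sketch of the spectral cut-off step only, assuming we already hold noisy characteristic-function values phi(u) ~ exp(T * psi(u)) on a frequency grid; the paper obtains these by explicitly inverting the vanilla option price formula, which is omitted here, and the names and cut-off rule are illustrative.

    ```python
    import numpy as np

    def cutoff_levy_exponent(phi_noisy, u, T, U_max):
        # Recover the Levy exponent psi from noisy characteristic-function
        # values phi_noisy(u) ~ exp(T * psi(u)): invert explicitly, keeping
        # only frequencies |u| <= U_max (spectral cut-off regularisation).
        psi = np.full(u.shape, np.nan, dtype=complex)
        keep = np.abs(u) <= U_max
        # Principal branch of the complex log; branch unwinding is ignored
        # in this sketch.
        psi[keep] = np.log(phi_noisy[keep].astype(complex)) / T
        return psi
    ```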

    Nonlinear estimation for linear inverse problems with error in the operator

    We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performance depends on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. Published at http://dx.doi.org/10.1214/009053607000000721 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
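
    A minimal sketch of the two ingredients combined above, assuming the noisy Galerkin matrix K_hat of the operator and noisy data coefficients g_hat in a wavelet basis are given; the universal-type threshold used here is an illustrative stand-in for the paper's calibrated choice.

    ```python
    import numpy as np

    def galerkin_wavelet_threshold(K_hat, g_hat, n, sigma=1.0):
        # Step 1 (Galerkin regularisation): solve the finite-dimensional
        # projected system K_hat @ c = g_hat on the retained wavelet levels.
        c = np.linalg.solve(K_hat, g_hat)
        # Step 2 (wavelet thresholding): keep only coefficients exceeding
        # a universal-type level, exploiting sparsity in the wavelet basis.
        thresh = sigma * np.sqrt(2.0 * np.log(n) / n)
        return np.where(np.abs(c) > thresh, c, 0.0)
    ```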

    Early stopping for statistical inverse problems via truncated SVD estimation

    We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension $D$. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level $\widehat m \in \{1, \ldots, D\}$ based only on the knowledge of the first $\widehat m$ singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
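
    A minimal sketch of the residual-based stopping rule, assuming a design matrix A and data y from the model y = A f + noise; for clarity the full SVD is computed up front, whereas the point of early stopping is that only the first m_hat triplets would ever be computed sequentially, and the threshold kappa is taken as an input here.

    ```python
    import numpy as np

    def early_stopped_svd(A, y, kappa):
        # Truncated-SVD estimator f_m built up one singular triplet at a
        # time, stopping at the first m with squared residual <= kappa.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        f = np.zeros(A.shape[1])
        for m in range(len(s)):
            f = f + (U[:, m] @ y / s[m]) * Vt[m]   # add m-th SVD component
            if np.sum((y - A @ f) ** 2) <= kappa:  # residual-based rule
                return f, m + 1                    # m_hat triplets used
        return f, len(s)
    ```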

    Stable soft extrapolation of entire functions

    Full text link
    Soft extrapolation refers to the problem of recovering a function from its samples, multiplied by a fast-decaying window and perturbed by additive noise, over an interval which is potentially larger than the essential support of the window. A core theoretical question is to provide bounds on the possible amount of extrapolation, depending on the sample perturbation level and the function prior. In this paper we consider soft extrapolation of entire functions of finite order and type (containing the class of bandlimited functions as a special case), multiplied by a super-exponentially decaying window (such as a Gaussian). We consider a weighted least-squares polynomial approximation with a judiciously chosen number of terms and a number of samples which scales linearly with the degree of approximation. It is shown that this simple procedure provides stable recovery with an extrapolation factor which scales logarithmically with the perturbation level and is inversely proportional to the characteristic lengthscale of the function. The pointwise extrapolation error exhibits a Hölder-type continuity with an exponent derived from weighted potential theory, which changes from 1 near the available samples to 0 when the extrapolation distance reaches the characteristic smoothness length scale of the function. The algorithm is asymptotically minimax, in the sense that there is essentially no better algorithm yielding meaningfully lower error over the same smoothness class. When viewed in the dual domain, the above problem corresponds to (stable) simultaneous deconvolution and super-resolution for objects of small space/time extent. Our results then show that the amount of achievable super-resolution is inversely proportional to the object size, and can therefore be significant for small objects.
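
    A minimal sketch of the weighted least-squares polynomial step, assuming samples y_i = f(x_i) * exp(-(x_i/ell)**2) + noise of size eps on a grid x; the degree rule deg ~ log(1/eps) mirrors the logarithmic extrapolation factor described above, with illustrative constants rather than the paper's calibrated ones.

    ```python
    import numpy as np

    def soft_extrapolate(x, y, ell, eps, x_eval):
        # Fit a polynomial p ~ f by least squares against the *windowed*
        # samples y_i = f(x_i) * exp(-(x_i/ell)**2) + noise, then evaluate
        # p at x_eval, possibly outside the window's essential support.
        deg = max(1, int(np.log(1.0 / eps)))       # degree ~ log(1/eps)
        win = np.exp(-(x / ell) ** 2)              # Gaussian window values
        V = np.vander(x, deg + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(V * win[:, None], y, rcond=None)
        return np.vander(np.atleast_1d(x_eval), deg + 1, increasing=True) @ coef
    ```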