
    CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration

    In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization, we develop an approach that re-fits the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. We then provide an approach that has a "twicing" flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. We illustrate the benefits of our method on numerical simulations for image restoration tasks.
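
    A minimal numerical sketch of the twicing-style correction described above, using soft-thresholding as a stand-in $\ell_1$-type denoiser whose Jacobian w.r.t. the observation is diagonal; the 1-D toy signal, tuning value and function names are illustrative assumptions, not the paper's CLEAR algorithm.

        import numpy as np

        def soft_threshold(y, lam):
            # prox of lam*||.||_1: a simple l1-type denoiser standing in for the original estimator
            return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

        def twicing_refit(y, lam):
            # re-fit by adding back the residual mapped through the estimator's
            # Jacobian w.r.t. the observed signal; for soft-thresholding this
            # Jacobian is diag(1{|y_i| > lam})
            x_hat = soft_threshold(y, lam)
            jac_diag = (np.abs(y) > lam).astype(float)
            return x_hat + jac_diag * (y - x_hat)

        rng = np.random.default_rng(0)
        x_true = np.zeros(200)
        x_true[::20] = 5.0                                   # sparse toy signal
        y = x_true + 0.5 * rng.standard_normal(200)          # noisy observation
        print("soft-threshold error:", np.linalg.norm(soft_threshold(y, 1.0) - x_true))
        print("re-fitted error:     ", np.linalg.norm(twicing_refit(y, 1.0) - x_true))

    On such a sparse toy signal the re-fit typically restores the amplitude that soft-thresholding shrinks away while keeping the same support, which is the flavor of bias the abstract describes removing.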

    Appraisal Using Generalized Additive Models

    Many of the results from real estate empirical studies depend upon using a correct functional form for their validity. Unfortunately, common parametric statistical tools cannot easily control for the possibility of misspecification. Recently, semiparametric estimators such as generalized additive models (GAMs) have emerged that can automatically control for additive (in price) or multiplicative (in ln(price)) nonlinear relations among the independent and dependent variables. As the paper shows, GAMs can empirically outperform naive parametric and polynomial models in ex-sample (out-of-sample) predictive behavior. Moreover, GAMs have well-developed statistical properties and can suggest useful transformations in parametric settings.
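
    As a hedged illustration of the additive-in-price specification (not the paper's data or exact model), a GAM with one smooth per covariate can be fit with the pygam package; the housing covariates below are simulated.

        import numpy as np
        from pygam import LinearGAM, s   # requires the pygam package

        rng = np.random.default_rng(1)
        n = 500
        sqft = rng.uniform(800, 4000, n)
        age = rng.uniform(0, 80, n)
        lot = rng.uniform(0.1, 2.0, n)
        # additive-in-price toy model: price = f1(sqft) + f2(age) + f3(lot) + noise
        price = 60 * np.sqrt(sqft) + 40000 * np.exp(-age / 30) + 30000 * lot \
                + rng.normal(0, 5000, n)

        X = np.column_stack([sqft, age, lot])
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, price)    # one smooth term per covariate
        gam.summary()                                        # prints effective dof, smoothing, etc.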

    Convex and non-convex regularization methods for spatial point processes intensity estimation

    This paper deals with feature selection procedures for spatial point process intensity estimation. We consider regularized versions of estimating equations based on the Campbell theorem, derived from two classical functions: the Poisson likelihood and the logistic regression likelihood. We provide general conditions on the spatial point processes and on the penalty functions which ensure consistency, sparsity and asymptotic normality. We discuss the numerical implementation and assess finite sample properties in a simulation study. Finally, an application to tropical forestry datasets illustrates the use of the proposed methods.
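
    A crude sketch of the penalized logistic-regression route to feature selection: observed points are contrasted against dummy background points and an $\ell_1$ penalty zeroes out irrelevant covariates. The quadrature weights and offsets of the full estimating equation are omitted and the covariates are simulated, so this only conveys the flavor of the approach, not the paper's estimators.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n_obs, n_dummy, p = 300, 3000, 20
        # observed points "prefer" the first two covariates; dummy points are background
        X_obs = rng.normal(size=(n_obs, p)) + np.r_[1.0, 0.5, np.zeros(p - 2)]
        X_dum = rng.normal(size=(n_dummy, p))
        X = np.vstack([X_obs, X_dum])
        y = np.r_[np.ones(n_obs), np.zeros(n_dummy)]

        # l1-penalized logistic regression: nonzero coefficients = selected covariates
        fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        print("selected covariates:", np.flatnonzero(fit.coef_))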

    Local Adaptive Grouped Regularization and its Oracle Properties for Varying Coefficient Regression

    Varying coefficient regression is a flexible technique for modeling data where the coefficients are functions of some effect-modifying parameter, often time or location in a certain domain. While there are a number of methods for variable selection in a varying coefficient regression model, the existing methods are mostly for global selection, which includes or excludes each covariate over the entire domain. Presented here is a new local adaptive grouped regularization (LAGR) method for local variable selection in spatially varying coefficient linear and generalized linear regression. LAGR selects the covariates that are associated with the response at any point in space, and simultaneously estimates the coefficients of those covariates by tailoring the adaptive group Lasso toward a local regression model with locally linear coefficient estimates. Oracle properties of the proposed method are established under local linear regression and local generalized linear regression. The finite sample properties of LAGR are assessed in a simulation study and, for illustration, the Boston housing price data set is analyzed.
    Comment: 30 pages, one technical appendix, two figures
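
    The sketch below is a deliberately simplified stand-in: it shows only the local kernel weighting and adaptive rescaling ingredients at a single target location, not the grouped penalty on a coefficient and its local slope that LAGR actually uses; the data, bandwidth and tuning values are invented.

        # Simplified sketch, not LAGR itself: local adaptive l1 selection at one location s0.
        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        rng = np.random.default_rng(3)
        n, p = 400, 8
        s = rng.uniform(0, 1, n)                              # effect-modifying variable (e.g. location)
        X = rng.normal(size=(n, p))
        y = (2 * s) * X[:, 0] + 1.5 * (s > 0.5) * X[:, 1] + rng.normal(0, 0.5, n)   # varying coefficients

        s0, h = 0.25, 0.15
        w = np.exp(-0.5 * ((s - s0) / h) ** 2)                # Gaussian kernel weights around s0
        Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)      # fold weights into a least-squares problem

        pilot = LinearRegression().fit(Xw, yw).coef_          # pilot fit gives adaptive weights
        scale = np.abs(pilot) + 1e-8
        fit = Lasso(alpha=0.05).fit(Xw * scale, yw)           # adaptive lasso via column rescaling
        print("local coefficients at s0:", np.round(fit.coef_ * scale, 2))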

    Efficient estimation of Banach parameters in semiparametric models

    Consider a semiparametric model with a Euclidean parameter and an infinite-dimensional parameter, to be called a Banach parameter. Assume: (a) There exists an efficient estimator of the Euclidean parameter. (b) When the value of the Euclidean parameter is known, there exists an estimator of the Banach parameter, which depends on this value and is efficient within this restricted model. Substituting the efficient estimator of the Euclidean parameter for the value of this parameter in the estimator of the Banach parameter, one obtains an efficient estimator of the Banach parameter for the full semiparametric model with the Euclidean parameter unknown. This hereditary property of efficiency completes estimation in semiparametric models in which the Euclidean parameter has been estimated efficiently. Typically, estimation of both the Euclidean and the Banach parameter is necessary in order to describe the random phenomenon under study to a sufficient extent. Since efficient estimators are asymptotically linear, the above substitution method is a particular case of substituting asymptotically linear estimators of a Euclidean parameter into estimators that are asymptotically linear themselves and that depend on this Euclidean parameter. This more general substitution case is studied for its own sake as well, and a hereditary property for asymptotic linearity is proved.
    Comment: Published at http://dx.doi.org/10.1214/009053604000000913 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
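
    In notation chosen here for illustration (not the paper's own), an estimator $\hat\theta_n$ of the Euclidean parameter $\theta$ is asymptotically linear with influence function $\tilde\ell$ if

        \hat\theta_n = \theta + \frac{1}{n}\sum_{i=1}^n \tilde\ell(X_i) + o_P(n^{-1/2}).

    If $\hat\nu_n(\theta)$ denotes the estimator of the Banach parameter available when $\theta$ is known, the substitution estimator described above is the plug-in $\hat\nu_n(\hat\theta_n)$, and the hereditary results state that asymptotic linearity (and, under assumptions (a) and (b), efficiency) carries over to this plug-in.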

    Inference in Additively Separable Models With a High-Dimensional Set of Conditioning Variables

    This paper studies nonparametric series estimation and inference for the effect of a single variable of interest x on an outcome y in the presence of potentially high-dimensional conditioning variables z. The context is an additively separable model $E[y \mid x, z] = g_0(x) + h_0(z)$. The model is high-dimensional in the sense that the series of approximating functions for $h_0(z)$ can have more terms than the sample size, thereby allowing z to have potentially very many measured characteristics. The model is required to be approximately sparse: $h_0(z)$ can be approximated using only a small subset of series terms whose identities are unknown. This paper proposes an estimation and inference method for $g_0(x)$ called Post-Nonparametric Double Selection, which is a generalization of Post-Double Selection. Standard rates of convergence and asymptotic normality for the estimator are shown to hold uniformly over a large class of sparse data generating processes. A simulation study illustrates finite sample estimation properties of the proposed estimator and coverage properties of the corresponding confidence intervals. Finally, an empirical application to college admissions policy demonstrates the practical implementation of the proposed method.
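
    For orientation, the double-selection template that the proposal generalizes can be sketched in a simplified setting where $g_0(x)$ is approximated by a short polynomial basis: lasso each basis term of x on z, lasso y on z, then refit y on the x-basis plus the union of selected controls by least squares. The code is a hedged illustration under an invented data-generating process, not the paper's Post-Nonparametric Double Selection procedure.

        import numpy as np
        from sklearn.linear_model import LassoCV, LinearRegression

        rng = np.random.default_rng(4)
        n, pz = 500, 200
        Z = rng.normal(size=(n, pz))
        x = Z[:, 0] + 0.5 * Z[:, 1] + rng.normal(size=n)               # x depends on a few z's
        y = np.sin(x) + Z[:, 0] - 0.5 * Z[:, 2] + rng.normal(size=n)   # g0(x)=sin(x), sparse h0(z)

        B = np.column_stack([x, x ** 2, x ** 3])          # small polynomial basis for g0(x)
        selected = set()
        for target in [y] + [B[:, j] for j in range(B.shape[1])]:
            fit = LassoCV(cv=5).fit(Z, target)            # lasso selection step
            selected |= set(np.flatnonzero(fit.coef_))
        S = sorted(selected)

        post = LinearRegression().fit(np.column_stack([B, Z[:, S]]), y)   # post-selection refit
        print("selected controls:", S)
        print("basis coefficients for g0(x):", np.round(post.coef_[:B.shape[1]], 2))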