
    Wavelet Estimators in Nonparametric Regression: A Comparative Simulation Study

    Wavelet analysis has been found to be a powerful tool for the nonparametric estimation of spatially-variable objects. We discuss in detail wavelet methods in nonparametric regression, where the data are modelled as observations of a signal contaminated with additive Gaussian noise, and provide an extensive review of the vast literature on wavelet shrinkage and wavelet thresholding estimators developed to denoise such data. These estimators arise from a wide range of classical and empirical Bayes methods that treat either individual wavelet coefficients or blocks of them. We compare various estimators in an extensive simulation study over a variety of sample sizes, test functions, signal-to-noise ratios and wavelet filters. Because no single criterion can adequately summarise the behaviour of an estimator, we use several criteria to measure performance in finite-sample situations. Insight into the performance of these estimators is obtained from graphical outputs and numerical tables. To give some guidance on how these estimators should be used to analyse real data sets, a detailed step-by-step illustration of a wavelet denoising analysis of electrical consumption data is provided. Matlab code is supplied so that all figures and tables in this paper can be reproduced.
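The shrinkage idea behind the estimators surveyed above can be sketched in a few lines of NumPy. This is not the paper's Matlab code: it is a minimal illustration using the Haar wavelet, soft thresholding at the universal (VisuShrink) level sigma * sqrt(2 log n), and a median-absolute-deviation noise estimate from the finest-scale coefficients; the test signal is an invented piecewise-constant example.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition of a length-2^J signal.
    Returns the coarsest scaling coefficient and the detail
    coefficients, finest scale first."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)  # scaling (approximation)
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # wavelet (detail)
        coeffs.append(d)
        a = s
    return a, coeffs

def haar_idwt(a, coeffs):
    """Inverse of haar_dwt (Haar is orthogonal, so this is exact)."""
    for d in reversed(coeffs):
        s = np.empty(2 * len(d))
        s[0::2] = (a + d) / np.sqrt(2)
        s[1::2] = (a - d) / np.sqrt(2)
        a = s
    return a

def denoise(y, sigma=None):
    """VisuShrink-style denoising: soft-threshold every detail
    coefficient at the universal level sigma * sqrt(2 log n)."""
    n = len(y)
    a, coeffs = haar_dwt(y)
    if sigma is None:
        # robust noise estimate: MAD of the finest-scale details
        sigma = np.median(np.abs(coeffs[0])) / 0.6745
    lam = sigma * np.sqrt(2 * np.log(n))
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in coeffs]
    return haar_idwt(a, coeffs)

# a blocky test signal plus Gaussian noise
rng = np.random.default_rng(0)
f = np.repeat([0.0, 4.0, -2.0, 1.0], 64)  # n = 256 = 2^8
y = f + rng.normal(0, 0.5, size=f.size)
fhat = denoise(y, sigma=0.5)
# the denoised signal should be closer to the truth than the raw data
print(np.mean((fhat - f) ** 2), np.mean((y - f) ** 2))
```

The universal threshold kills the detail coefficients that are pure noise while the few large coefficients carrying the jumps survive, which is exactly the sparsity argument these papers exploit.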

    Wavelet methods in statistics: Some recent developments and their applications

    The development of wavelet theory has in recent years spawned applications in signal processing, in fast algorithms for integral transforms, and in image and function representation methods. This last application has stimulated interest in applying wavelets to statistics and to the analysis of experimental data, with many successes in the efficient analysis, processing, and compression of noisy signals and images. This is a selective review article that attempts to synthesize some recent work on "nonlinear" wavelet methods in nonparametric curve estimation and their role in a variety of applications. After a short introduction to wavelet theory, we discuss in detail several wavelet shrinkage and wavelet thresholding estimators, scattered in the literature and developed, under more or less standard settings, for density estimation from i.i.d. observations or for denoising data modeled as observations of a signal with additive noise. Most of these methods fit into the general concept of regularization with appropriately chosen penalty functions. A narrow range of applications in major areas of statistics is also discussed, such as partially linear regression models and functional index models. The usefulness of all these methods is illustrated by means of simulations and practical examples. (Published in Statistics Surveys (http://www.i-journals.org/ss/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/07-SS014.)

    Penalized wavelet monotone regression

    In this paper we focus on nonparametric estimation of a constrained regression function using penalized wavelet regression techniques. This results in a convex optimization problem under linear constraints. Necessary and sufficient conditions for the existence of a unique solution are discussed. The estimator is easily obtained via the dual formulation of the optimization problem. In particular, we investigate a penalized wavelet monotone regression estimator. We establish the rate of convergence of this estimator and illustrate its finite-sample performance via a simulation study. We also compare its performance with that of a recently proposed constrained estimator. An illustration on real data is given.
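The monotonicity constraint above is a set of linear inequalities, fit[i] <= fit[i+1]. A much simpler cousin of the paper's penalized dual method (not the authors' estimator) is unpenalized isotonic regression, the least-squares fit under that constraint, computable by the classic pool-adjacent-violators algorithm:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares fit of y under the
    nondecreasing constraint. Each stack entry is [block mean, block size];
    adjacent blocks that violate monotonicity are merged into their
    weighted mean."""
    stack = []
    for v in y:
        stack.append([float(v), 1])
        while len(stack) > 1 and stack[-2][0] > stack[-1][0]:
            m2, n2 = stack.pop()
            m1, n1 = stack.pop()
            stack.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fit = []
    for m, n in stack:
        fit.extend([m] * n)
    return fit

print(pava([1, 3, 2, 4]))  # the 3 and 2 are pooled into 2.5
```

The penalized wavelet version solves a harder problem (sparsity penalty plus the same inequality constraints), but this sketch shows what the feasible set looks like.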

    Wavelet regression with long memory infinite moving average errors

    For more than a decade there has been great interest in wavelets and wavelet-based methods. Among the most successful applications of wavelets is nonparametric statistical estimation, following the pioneering work of Donoho and Johnstone (1994, 1995) and Donoho et al. (1995). In this thesis, we consider wavelet-based estimators of the mean regression function with long memory infinite moving average errors, and investigate the rates of convergence of estimators based on thresholding of empirical wavelet coefficients. We show that these estimators achieve nearly optimal minimax convergence rates, up to a logarithmic term, over a large class of non-smooth functions that involve many jump discontinuities, where the number of discontinuities may grow polynomially fast with the sample size. Therefore, in the presence of long memory moving average noise, wavelet estimators still achieve nearly optimal convergence rates, demonstrating explicitly the extraordinary local adaptability of this method in handling discontinuities. We illustrate the theory with numerical examples. A key technical result in our development is the establishment of Bernstein-type exponential inequalities for infinite weighted sums of i.i.d. random variables under certain cumulant or moment assumptions. These large and moderate deviation inequalities may be of independent interest.

    Wavelet penalized likelihood estimation in generalized functional models

    The paper deals with generalized functional regression. The aim is to estimate the influence of covariates on observations drawn from an exponential family distribution. The link considered has a semiparametric expression: while we are interested in the functional influence of some covariates, we allow others to be modelled linearly. We thus consider a generalized partially linear regression model with unknown regression coefficients and an unknown nonparametric function. We present a maximum penalized likelihood procedure that estimates the components of the model by introducing penalty-based wavelet estimators. Asymptotic rates for the estimates of both the parametric and the nonparametric part of the model are given, and quasi-minimax optimality is obtained under the usual conditions in the literature. We establish in particular that the LASSO penalty leads to an adaptive estimation with respect to the regularity of the estimated function. An algorithm based on backfitting and Fisher scoring is also proposed for the implementation. Simulations are used to illustrate the finite-sample behaviour, including a comparison with kernel- and spline-based methods.
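The link between the LASSO penalty and wavelet thresholding mentioned above is concrete: for a single coefficient d, minimizing 0.5*(d - t)**2 + lam*abs(t) has the closed-form soft-thresholding solution sign(d) * max(|d| - lam, 0). A quick numerical check, with invented values rather than anything from the paper:

```python
import numpy as np

def soft_threshold(d, lam):
    """Closed-form minimizer of 0.5*(d - t)**2 + lam*|t|,
    i.e. the proximal operator of the l1 (LASSO) penalty."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

# compare against brute-force minimization on a fine grid
d, lam = 1.3, 0.5
grid = np.linspace(-3, 3, 600001)  # step 1e-5
brute = grid[np.argmin(0.5 * (d - grid) ** 2 + lam * np.abs(grid))]
print(soft_threshold(d, lam), brute)  # both should be about 0.8
```

This is why an l1-penalized likelihood applied coefficient-by-coefficient in the wavelet domain behaves like a (soft) thresholding estimator, and hence adapts to the regularity of the unknown function.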

    On the block thresholding wavelet estimators with censored data

    We consider block thresholding wavelet-based density estimators with randomly right-censored data and investigate their asymptotic convergence rates. Unlike in the complete-data case, the empirical wavelet coefficients are constructed through the Kaplan–Meier estimators of the distribution functions in the censored-data case. On the basis of a result of Stute [W. Stute, The central limit theorem under random censorship, Ann. Statist. 23 (1995) 422–439] that approximates the Kaplan–Meier integrals as averages of i.i.d. random variables with a certain rate in probability, we show that these empirical wavelet coefficients can be approximated by averages of i.i.d. random variables with a certain error rate in L2. We can therefore show that these estimators, based on block thresholding of empirical wavelet coefficients, achieve optimal convergence rates over a large range of Besov function classes B^s_{p,q} with s > 1/p, p ≥ 2, q ≥ 1, and nearly optimal convergence rates when 1 ≤ p < 2. We also show that these estimators achieve optimal convergence rates over a large class of functions that involve many irregularities of a wide variety of types, including chirp and Doppler functions, and jump discontinuities. Therefore, in the presence of random censoring, wavelet estimators still provide extensive adaptivity to many irregularities of large function classes. The performance of the estimators is tested via a modest simulation study.
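Block thresholding treats neighbouring empirical coefficients jointly rather than one at a time, so a strong local signal protects its whole block. A minimal sketch of a James–Stein-type block shrinkage rule follows; the constant lam ≈ 4.505 (the root of lam - log(lam) = 3, used by Cai's BlockJS rule) is an assumption here, and the censored-data estimator in the paper uses Kaplan–Meier-based coefficients rather than raw ones.

```python
import numpy as np

def block_shrink(d, sigma, L, lam=4.50524):
    """James-Stein-type block shrinkage: split d into blocks of
    length L and multiply each block by
    max(0, 1 - lam*L*sigma^2 / ||block||^2),
    so weak (noise-only) blocks are zeroed and strong blocks
    are shrunk only slightly."""
    out = np.asarray(d, dtype=float).copy()
    for start in range(0, len(out) - L + 1, L):
        blk = out[start:start + L]
        s2 = np.sum(blk ** 2)
        shrink = max(0.0, 1.0 - lam * L * sigma ** 2 / s2) if s2 > 0 else 0.0
        out[start:start + L] = shrink * blk
    return out

# one noise-only block and one strong-signal block (invented values)
d = np.array([0.5, 0.3, -0.4, 0.2, 10.0, 9.0, -8.0, 7.0])
out = block_shrink(d, sigma=1.0, L=4)
print(out)  # first block zeroed, second block barely shrunk
```

Pooling the L squared coefficients gives a more stable decision than coefficient-wise thresholding, which is the source of the improved adaptivity the abstract describes.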

    Wavelets in Statistics

    In this paper we give the main uses of wavelets in statistics, with emphasis on time series analysis. We include the fundamental work on nonparametric regression, which motivated the development of techniques used in the estimation of the spectral density of stationary processes and of the evolutionary spectrum of locally stationary processes.

    Reassessing the Paradigms of Statistical Model-Building

    Statistical model-building is the science of constructing models from data and from information about the data-generation process, with the aim of analysing those data and drawing inference from that analysis. Many statistical tasks are undertaken during this analysis; they include classification, forecasting, prediction and testing. Model-building has assumed substantial importance, as new technologies enable data on highly complex phenomena to be gathered in very large quantities. This creates a demand for more complex models, and requires the model-building process itself to be adaptive. The word “paradigm” refers to philosophies, frameworks and methodologies for developing and interpreting statistical models, in the context of data, and applying them for inference. In order to solve contemporary statistical problems it is often necessary to combine techniques from previously separate paradigms. The workshop addressed model-building paradigms that are at the frontiers of modern statistical research. It tried to create synergies by delineating the connections and collisions among different paradigms. It also endeavoured to shape the future evolution of paradigms.