
    Asymptotic Normality for Deconvolution Estimators of Multivariate Densities of Stationary Processes

    We consider the estimation of the multivariate probability density functions of stationary random processes from noisy observations. The asymptotic normality of kernel-type deconvolution estimators is established for various classes of mixing processes. Classes of noise characteristic functions with both algebraic and exponential decay are studied.
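For intuition, a kernel-type deconvolution estimator divides the empirical characteristic function of the noisy observations by the known noise characteristic function before inverting the Fourier transform. Below is a minimal numerical sketch for i.i.d. data; the Laplace noise law, the sinc kernel, and all parameter choices are illustrative assumptions, not the mixing-process setting of the paper.

```python
import numpy as np

def deconv_kde(z, x_grid, h, sigma_eps):
    """Fourier-type deconvolution kernel density estimator (illustrative sketch).

    Assumes additive Laplace(0, sigma_eps) noise, whose characteristic
    function 1 / (1 + sigma_eps^2 t^2) is ordinary smooth, and the sinc
    kernel with phi_K(t) = 1 on [-1, 1].
    """
    t = np.linspace(-1.0, 1.0, 401)          # support of phi_K
    dt = t[1] - t[0]
    # Empirical characteristic function of the noisy data, evaluated at t/h.
    phi_Z = np.exp(1j * np.outer(t / h, z)).mean(axis=1)
    phi_eps = 1.0 / (1.0 + sigma_eps**2 * (t / h) ** 2)
    ratio = phi_Z / phi_eps                  # divide out the noise
    # Inverse Fourier transform: f_hat(x) = (2 pi h)^{-1} * int e^{-itx/h} ratio dt,
    # approximated by a Riemann sum on the t grid.
    x_grid = np.asarray(x_grid, dtype=float)
    est = np.real(np.exp(-1j * np.outer(x_grid, t) / h) @ ratio) * dt
    return est / (2.0 * np.pi * h)
```

The division by `phi_eps` is what distinguishes this from an ordinary kernel density estimator; because the Laplace characteristic function decays only algebraically, the amplification of the empirical characteristic function stays moderate.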

    Nonparametric volatility density estimation for discrete time models

    We consider discrete time models for asset prices with a stationary volatility process. We aim to estimate the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed to estimate the volatility density. Expansions of the bias and bounds on the variance are derived.
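The log-squared transformation underlying this estimator can be made concrete: in a model X_t = sigma_t * eps_t with standard Gaussian eps_t, taking log(X_t^2) yields log(sigma_t^2) plus a noise term with the fully known log(chi^2_1) distribution, which is exactly a deconvolution setup. A minimal simulation sketch follows; the Gaussian log-volatility law and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Model: X_t = sigma_t * eps_t with eps_t ~ N(0, 1) independent of sigma_t.
# Then log(X_t^2) = log(sigma_t^2) + log(eps_t^2), so the observed
# log-squared returns are the target log-volatility plus additive noise
# with the known log(chi^2_1) distribution: a deconvolution model.

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical stationary log-volatility (illustrative choice).
log_sigma2 = rng.normal(-1.0, 0.5, size=n)
x = np.exp(0.5 * log_sigma2) * rng.normal(size=n)   # observed process
y = np.log(x**2)                                    # deconvolution data

# The noise law is fully known: E[log(chi^2_1)] = psi(1/2) + log 2 ~ -1.2704
# and Var[log(chi^2_1)] = pi^2 / 2, so its characteristic function can be
# divided out by a Fourier-type deconvolution estimator applied to y.
```

From here, any standard deconvolution density estimator applied to `y` with the log(chi^2_1) noise characteristic function recovers the density of `log_sigma2`.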

    Nonparametric methods for volatility density estimation

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on discretely sampled continuous time processes and discrete time models will be discussed. The key insight for the analysis is a transformation of the volatility density estimation problem to a deconvolution model for which standard methods exist. Three types of nonparametric density estimators are reviewed: the Fourier-type deconvolution kernel density estimator, a wavelet deconvolution density estimator and a penalized projection estimator. The performance of these estimators will be compared. Key words: stochastic volatility models, deconvolution, density estimation, kernel estimator, wavelets, minimum contrast estimation, mixing

    Identifiability and consistent estimation of nonparametric translation hidden Markov models with general state space

    This paper considers hidden Markov models where the observations are given as the sum of a latent state which lies in a general state space and some independent noise with unknown distribution. It is shown that these fully nonparametric translation models are identifiable with respect to both the distribution of the latent variables and the distribution of the noise, essentially under a light-tail assumption on the latent variables. Two nonparametric estimation methods are proposed and we prove that the corresponding estimators are consistent for the weak convergence topology. These results are illustrated with numerical experiments.

    Nonparametric deconvolution problem for dependent sequences

    We consider the nonparametric estimation of the density function of weakly and strongly dependent processes with noisy observations. We show that in the ordinary smooth case the optimal bandwidth choice can be influenced by long range dependence, as opposed to the standard case, when no noise is present. In particular, if the dependence is moderate, the bandwidth, the rates of mean-square convergence and the central limit theorem are the same as in the i.i.d. case. If the dependence is strong enough, then the bandwidth choice is influenced by the strength of dependence, which differs from the non-noisy case. The central limit theorem is also influenced by the strength of dependence. On the other hand, if the density is supersmooth, then long range dependence has no effect at all on the optimal bandwidth choice. Published in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: 10.1214/07-EJS154.

    Blind Single Channel Deconvolution using Nonstationary Signal Processing


    Adaptive density deconvolution with dependent inputs

    In the convolution model $Z_i = X_i + \epsilon_i$, we give a model selection procedure to estimate the density of the unobserved variables $(X_i)_{1 \leq i \leq n}$, when the sequence $(X_i)_{i \geq 1}$ is strictly stationary but not necessarily independent. This procedure depends on whether the density of $\epsilon_i$ is supersmooth or ordinary smooth. The rates of convergence of the penalized contrast estimators are the same as in the independent framework, and are minimax over most classes of regularity on $\mathbb{R}$. Our results apply to mixing sequences, but also to many other dependent sequences. When the errors are supersmooth, the condition on the dependence coefficients is the minimal condition of that type ensuring that the sequence $(X_i)_{i \geq 1}$ is not a long-memory process.

    Nonparametric regression for dependent data in the errors-in-variables problem

    Get PDF
    We consider the nonparametric estimation of regression functions for dependent data. Supposing that the covariates are observed with additive errors, we employ nonparametric deconvolution kernel techniques to estimate the regression functions. We investigate how the strength of time dependence affects the asymptotic properties of the local constant and local linear estimators. We treat both short-range dependent and long-range dependent linear processes in a unified way and demonstrate that the long-range dependence (LRD) of the covariates affects the asymptotic properties of the nonparametric estimators as well as the LRD of the regression errors does.
    Key words: local polynomial regression, errors-in-variables, deconvolution, ordinary smooth case, supersmooth case, linear processes, long-range dependence
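A local constant errors-in-variables estimator of this kind replaces the usual kernel weights by deconvolution kernel weights that correct for the measurement error in the covariate. Below is a minimal sketch for i.i.d. data; the Laplace measurement error, the sinc kernel, and all constants are illustrative assumptions, and the paper's linear-process and LRD settings are not reproduced.

```python
import numpy as np

def deconv_kernel(u, h, b):
    """Deconvolution kernel L(u) = (2 pi)^{-1} * int_{-1}^{1} e^{-itu} / phi_delta(t/h) dt
    for the sinc kernel (phi_K = 1 on [-1, 1]) and Laplace(0, b) measurement
    error, whose characteristic function is phi_delta(s) = 1 / (1 + b^2 s^2)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    t = np.linspace(-1.0, 1.0, 401)
    dt = t[1] - t[0]
    inv_phi = 1.0 + b**2 * (t / h) ** 2      # 1 / phi_delta(t/h), exact for Laplace
    vals = np.real(np.exp(-1j * np.outer(u, t)) @ inv_phi.astype(complex)) * dt
    return vals / (2.0 * np.pi)

def local_constant_eiv(x, w, y, h, b):
    """Local constant (Nadaraya-Watson) regression estimate at the point x
    when the covariate is observed as w = u + delta, delta ~ Laplace(0, b)."""
    weights = deconv_kernel((x - w) / h, h, b)
    return float(np.sum(weights * y) / np.sum(weights))
```

A local linear version would solve a weighted least-squares problem with the same deconvolution weights; the sketch above keeps only the local constant case for brevity.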