Least squares type estimation of the transition density of a particular hidden Markov chain
In this paper, we study the following hidden Markov chain model: Y_i = X_i + ε_i, with (X_i) a real-valued stationary
Markov chain and (ε_i) a noise having a known
distribution and independent of the sequence (X_i). We present an estimator
of the transition density obtained by minimization of an original contrast that
takes advantage of the regressive aspect of the problem. It is selected among a
collection of projection estimators with a model selection method. The
The L2-risk and its rate of convergence are evaluated for ordinary smooth noise
and some simulations illustrate the method. We obtain uniform risk bounds over
classes of Besov balls. In addition our estimation procedure requires no prior
knowledge of the regularity of the true transition. Finally, our estimator
avoids the drawbacks of quotient estimators.
Comment: Published at http://dx.doi.org/10.1214/07-EJS111 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
Rates of convergence for nonparametric deconvolution
This Note presents original rates of convergence for the deconvolution
problem. We assume that both the estimated density and the noise density are
supersmooth, and we compute the risk for two kinds of estimators.
Adaptive estimation of the transition density of a Markov chain
In this paper a new estimator for the transition density of a
homogeneous Markov chain is considered. We introduce an original contrast
derived from the regression framework and we use a model selection method to
estimate the transition density under mild conditions. The resulting estimator is adaptive, with
an optimal rate of convergence over a large range of anisotropic Besov spaces.
Some simulations are also presented.
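To make the regression-type contrast concrete, here is a minimal sketch for the simplest projection family, piecewise-constant functions on a product grid. Over that family the contrast minimizer reduces to empirical transition frequencies divided by cell widths. The function name and the histogram basis are illustrative assumptions, not the collection of models studied in the paper:

```python
import numpy as np

def histogram_transition_density(chain, edges):
    """Minimize the regression-type least-squares contrast
        gamma_n(t) = (1/n) sum_i [ integral of t(X_i, y)^2 dy - 2 t(X_i, X_{i+1}) ]
    over functions that are constant on each cell A_j x A_k of the grid.

    On the cell A_j x A_k the minimizer is N_{jk} / (N_j * |A_k|): the
    empirical transition frequency divided by the width of the arrival bin.
    """
    chain = np.asarray(chain, dtype=float)
    m = len(edges) - 1
    # bin indices of the (X_i, X_{i+1}) pairs; endpoints clipped into the grid
    jx = np.clip(np.searchsorted(edges, chain[:-1], side="right") - 1, 0, m - 1)
    jy = np.clip(np.searchsorted(edges, chain[1:], side="right") - 1, 0, m - 1)
    counts = np.zeros((m, m))
    np.add.at(counts, (jx, jy), 1.0)            # N_{jk}
    row = counts.sum(axis=1, keepdims=True)     # N_j
    widths = np.diff(edges)[None, :]            # |A_k|
    with np.errstate(invalid="ignore", divide="ignore"):
        dens = np.where(row > 0, counts / (row * widths), 0.0)
    return dens   # dens[j, k] estimates the transition density on A_j x A_k
```

By construction, each nonempty row of the output integrates to one in the arrival variable, as a transition density should.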
Minimal penalty for Goldenshluger-Lepski method
This paper is concerned with adaptive nonparametric estimation using the
Goldenshluger-Lepski selection method. This estimator selection method is based
on pairwise comparisons between estimators with respect to some loss function.
The method also involves a penalty term that typically needs to be large enough
in order that the method works (in the sense that one can prove some oracle
type inequality for the selected estimator). In the case of density estimation
with kernel estimators and a quadratic loss, we show that the procedure fails
if the penalty term is chosen smaller than some critical value for the penalty:
the minimal penalty. More precisely we show that the quadratic risk of the
selected estimator explodes when the penalty is below this critical value while
it stays under control when the penalty is above this critical value. This kind
of phase transition phenomenon for penalty calibration has already been
observed and proved for penalized model selection methods in various contexts
but appears here for the first time for the Goldenshluger-Lepski pairwise
comparison method. Some simulations illustrate the theoretical results and lead
to some hints on how to use the theory to calibrate the method in practice.
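The pairwise-comparison scheme described above can be sketched for one-dimensional Gaussian kernel density estimation. Everything below (the function name, the grid-based L2 distance, the exact form of the variance term V(h) and the factor 2 in the criterion) is an illustrative assumption, not the construction analyzed in the paper:

```python
import numpy as np

def gl_bandwidth(sample, bandwidths, penalty_factor=1.0, grid_size=512):
    """Toy Goldenshluger-Lepski bandwidth selection for a 1-D Gaussian KDE.

    For a pair (h, h'), the doubly smoothed estimator f_{h,h'} is the KDE
    with bandwidth sqrt(h^2 + h'^2) (convolution of two Gaussian kernels).
    The bias proxy A(h) compares f_{h,h'} with f_{h'} over the family; the
    variance proxy V(h) scales like ||K||^2 / (n h).  We select the
    bandwidth minimizing A(h) + 2 V(h).
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    x = np.linspace(sample.min() - 1.0, sample.max() + 1.0, grid_size)
    dx = x[1] - x[0]

    def kde(h):
        # Gaussian kernel density estimate evaluated on the grid
        z = (x[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    f = {h: kde(h) for h in bandwidths}
    norm_K2 = 1.0 / (2.0 * np.sqrt(np.pi))     # ||K||_2^2 for the Gaussian kernel
    V = {h: penalty_factor * norm_K2 / (n * h) for h in bandwidths}

    def A(h):
        vals = []
        for hp in bandwidths:
            f_hhp = kde(np.hypot(h, hp))       # doubly smoothed estimator
            dist2 = np.sum((f_hhp - f[hp]) ** 2) * dx
            vals.append(max(dist2 - V[hp], 0.0))
        return max(vals)

    return min(bandwidths, key=lambda h: A(h) + 2.0 * V[h])
```

The minimal-penalty phenomenon of the paper corresponds to shrinking `penalty_factor`: below a critical value, the criterion systematically favors the smallest bandwidths and the risk of the selected estimator blows up.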
Estimator selection: a new method with applications to kernel density estimation
Estimator selection has become a crucial issue in nonparametric estimation.
Two widely used methods are penalized empirical risk minimization (such as
penalized log-likelihood estimation) and pairwise comparison (such as Lepski's
method). Our aim in this paper is twofold. First we explain some general ideas
about the calibration issue of estimator selection methods. We review some
known results, putting the emphasis on the concept of minimal penalty which is
helpful for designing data-driven selection criteria. Second, we present a new
method for bandwidth selection within the framework of kernel density
estimation, which is in some sense intermediate between the two main methods
mentioned above. We provide some theoretical results which lead to a fully
data-driven selection strategy.
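A comparison-to-overfitting selection rule of this intermediate type can be sketched for a one-dimensional Gaussian kernel density estimator: each candidate is compared to the most overfitting (smallest-bandwidth) estimator, and the comparison is penalized using kernel norms. The constants, closed-form Gaussian inner products, and function names below are illustrative assumptions, not the exact procedure of the paper:

```python
import numpy as np

def pco_bandwidth(sample, bandwidths, lam=1.0, grid_size=512):
    """Sketch of a penalized comparison-to-overfitting rule for 1-D Gaussian KDE.

    Criterion:  crit(h) = ||f_h - f_{h_min}||^2
                          + lam * ||K_h||^2 / n - ||K_h - K_{h_min}||^2 / n,
    where, for the Gaussian kernel, ||K_h||^2 = 1 / (2 h sqrt(pi)) and
    <K_h, K_{h'}> = 1 / sqrt(2 pi (h^2 + h'^2)).
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    x = np.linspace(sample.min() - 1.0, sample.max() + 1.0, grid_size)
    dx = x[1] - x[0]

    def kde(h):
        z = (x[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    def k2(h):                       # ||K_h||^2 for the Gaussian kernel
        return 1.0 / (2.0 * h * np.sqrt(np.pi))

    def cross(h, hp):                # <K_h, K_{h'}> for Gaussian kernels
        return 1.0 / np.sqrt(2.0 * np.pi * (h**2 + hp**2))

    h_min = min(bandwidths)
    f_min = kde(h_min)               # the overfitting reference estimator

    def crit(h):
        bias2 = np.sum((kde(h) - f_min) ** 2) * dx
        pen = (lam * k2(h) - (k2(h) + k2(h_min) - 2.0 * cross(h, h_min))) / n
        return bias2 + pen

    return min(bandwidths, key=crit)
```

Subtracting ||K_h - K_{h_min}||^2 / n removes the part of the comparison that only reflects the variance of the reference estimator, which is what keeps the rule from always selecting h_min.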
Numerical performance of Penalized Comparison to Overfitting for multivariate kernel density estimation
Kernel density estimation is a well-known method involving a smoothing
parameter (the bandwidth) that needs to be tuned by the user. Although this
method has been widely used, bandwidth selection remains a challenging issue
in terms of balancing algorithmic performance and statistical relevance. The
purpose of this paper is to compare a recently developed bandwidth selection
method for kernel density estimation to those which are commonly used by now
(at least those implemented in standard R packages). This new method is
called Penalized Comparison to Overfitting (PCO). It has been proposed by some
of the authors of this paper in a previous work devoted to its statistical
relevance from a purely theoretical perspective. It is compared here to other
usual bandwidth selection methods for univariate and also multivariate kernel
density estimation on the basis of intensive simulation studies. In particular,
cross-validation and plug-in criteria are numerically investigated and compared
to PCO. The take-home message is that PCO can outperform the classical methods
without additional algorithmic cost.
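As a point of comparison, the classical least-squares (unbiased) cross-validation criterion that such studies typically benchmark against has a closed form for the Gaussian kernel. The sketch below is a generic textbook implementation under that assumption, not code from the paper:

```python
import numpy as np

def lscv_bandwidth(sample, bandwidths):
    """Least-squares cross-validation for a 1-D Gaussian KDE.

    LSCV(h) = ||f_h||^2 - (2/n) sum_i f_h^{(-i)}(X_i), where both terms are
    explicit for the Gaussian kernel:
      ||f_h||^2        = (1/n^2) sum_{i,j} phi_{sqrt(2) h}(X_i - X_j)
      f_h^{(-i)}(X_i)  = (1/(n-1)) sum_{j != i} phi_h(X_i - X_j)
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    d = sample[:, None] - sample[None, :]      # pairwise differences

    def phi(u, s):
        # centered Gaussian density with standard deviation s
        return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    def lscv(h):
        term1 = phi(d, np.sqrt(2.0) * h).sum() / n**2
        off_diag = phi(d, h).sum() - n * phi(0.0, h)   # drop the diagonal
        term2 = 2.0 * off_diag / (n * (n - 1))
        return term1 - term2

    return min(bandwidths, key=lscv)
```

The O(n^2) pairwise-difference matrix is the practical bottleneck of this criterion, which is one reason algorithmic cost matters in the comparisons reported above.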
Adaptive pointwise estimation for pure jump Lévy processes
This paper is concerned with adaptive kernel estimation of the Lévy density
N(x) for bounded-variation pure-jump Lévy processes. The sample path is
observed at n discrete instants in the "high frequency" context (Δ =
Δ(n) tends to zero while nΔ tends to infinity). We construct a
collection of kernel estimators of the function g(x)=xN(x) and propose a method
of local adaptive selection of the bandwidth. We provide an oracle inequality
and a rate of convergence for the quadratic pointwise risk. This rate is proved
to be the optimal minimax rate. We give examples and simulation results for
processes fitting in our framework. We also consider the case of irregular
sampling.
Adaptive estimation of the dynamics of a discrete time stochastic volatility model
This paper is concerned with a particular hidden Markov model: a discrete time stochastic volatility model in which the hidden state and the observations are driven by independent sequences of i.i.d. noise, with known observation-noise distribution. Our aim is to estimate the drift and volatility functions of the hidden dynamics when only the observations are available. We propose to estimate two auxiliary functions and study the integrated mean square error of projection estimators of these functions on automatically selected projection spaces. By a ratio strategy, estimators of the drift and volatility functions are then deduced. The mean square risk of the resulting estimators is studied and their rates are discussed. Lastly, simulation experiments are provided: the constants in the penalty functions defining the estimators are calibrated and the quality of the estimators is checked on several examples.
Minimax estimation of the conditional cumulative distribution function under random censorship
Consider an i.i.d. sample (X_i, Y_i), 1 ≤ i ≤ n, of observations and denote by F(y|x) the conditional cumulative distribution function of Y_i given X_i = x. We provide a data-driven nonparametric strategy to estimate F. We prove that, in terms of the integrated mean square risk on a compact set, our estimator achieves a squared-bias/variance compromise. We deduce from this an upper bound for the rate of convergence of the estimator, in a context of anisotropic function classes. A lower bound for this rate is also proved, which implies the optimality of our estimator. Our procedure can then be adapted to positive censored random variables Y_i, i.e. when only Z_i = min(Y_i, C_i) and δ_i = 1{Y_i ≤ C_i} are observed, for an i.i.d. censoring sequence (C_i) independent of (X_i, Y_i). Simulation experiments illustrate the method.