7 research outputs found
Nonparametric estimation by convex programming
The problem we concentrate on is as follows: given (1) a convex compact parameter set, an affine mapping, and a parametric family of probability densities, and (2) i.i.d. observations of a random variable distributed with the density indexed by the image of some (unknown) parameter under the affine mapping, estimate the value of a given linear form at that parameter. For several families, with no additional assumptions on the parameter set and the mapping, we develop computationally efficient estimation routines which are minimax optimal within an absolute constant factor. We then apply these routines to recovering the parameter itself in the Euclidean norm.

Comment: Published at http://dx.doi.org/10.1214/08-AOS654 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
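As a toy illustration of the problem setup only (not the paper's minimax-optimal convex-programming estimator), one can take a Gaussian observation model and a naive plug-in estimate of the linear form; the map, parameter, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: unknown parameter x, affine (here linear) map A,
# Gaussian observations centered at A @ x -- an assumption for illustration.
x_true = np.array([0.3, -0.2])
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, -0.3]])
mu = A @ x_true
obs = mu + 0.1 * rng.standard_normal((1000, 3))  # N i.i.d. observations

# Naive plug-in: estimate the mean, invert the map by least squares,
# then evaluate the linear form g^T x. This is NOT the paper's routine.
mu_hat = obs.mean(axis=0)
x_hat, *_ = np.linalg.lstsq(A, mu_hat, rcond=None)
g = np.array([1.0, 2.0])
est = g @ x_hat
```

The point of the paper is that for several observation families one can do provably better than such plug-in heuristics, with minimax optimality up to an absolute constant, via convex programming.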
Nonparametric estimation of composite functions
We study the problem of nonparametric estimation of a multivariate function
that can be represented as a composition of two unknown smooth functions. We
suppose that the two functions belong to known smoothness classes. We obtain
the full description of minimax rates of estimation of the composite function
in terms of the two smoothness indices, and propose rate-optimal estimators
for the sup-norm loss. For the construction of such estimators, we first prove
an approximation result for composite functions that may be of independent
interest, and then a result on adaptation to the local structure.
Interestingly, the construction of rate-optimal estimators for composite
functions (with given, fixed smoothness) needs adaptation, but not in the
traditional sense: it is now adaptation to the local structure. We prove that
composition models generate only two types of local structures: the local
single-index model and the local model with roughness isolated to a single
dimension (i.e., a model containing elements of both additive and single-index
structure). We also find the zones of smoothness indices where no local
structure is generated, as well as the zones where composition modeling leads
to faster rates compared to the classical nonparametric rates that depend only
on the overall smoothness of the function.

Comment: Published at http://dx.doi.org/10.1214/08-AOS611 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
Oracle inequalities
Adaptive Denoising of Signals with Local Shift-Invariant Structure
We discuss the problem of adaptive discrete-time signal denoising in the situation where the signal to be recovered admits a "linear oracle": an unknown linear estimate that takes the form of convolution of observations with a time-invariant filter. It was shown by Juditsky and Nemirovski (2009) that when the norm of the oracle filter is small enough, such an oracle can be "mimicked" by an efficiently computable adaptive estimate of the same structure with an observation-driven filter. The filter in question was obtained as a solution to an optimization problem in which a norm of the Discrete Fourier Transform (DFT) of the estimation residual is minimized under a constraint on a norm of the filter DFT.

In this paper, we discuss a new family of adaptive estimates which rely upon minimizing a different norm of the estimation residual. We show that such estimators possess better statistical properties than those based on the earlier fit criterion; in particular, under the assumption of approximate shift-invariance we prove oracle inequalities for their loss and improved bounds for pointwise losses. We also study the relationship of the approximate shift-invariance assumption with the signal simplicity introduced by Juditsky and Nemirovski (2009), and discuss the application of the proposed approach to harmonic oscillation denoising.
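A minimal sketch of the "linear oracle" idea from the abstract: denoising by convolution of the observations with a time-invariant filter. Here a fixed moving-average filter stands in for the unknown oracle; the signal, noise level, and filter width are invented for illustration, and the paper's adaptive, observation-driven filter is not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a smooth signal observed in additive Gaussian noise.
n = 400
t = np.arange(n)
x = np.sin(2 * np.pi * t / 50)            # signal to be recovered
y = x + 0.5 * rng.standard_normal(n)      # noisy observations

# Linear estimate of oracle form: convolution with a time-invariant filter.
# A simple moving average plays the role of the (unknown) oracle filter.
w = 9
phi = np.ones(w) / w
xhat = np.convolve(y, phi, mode="same")

mse_raw = np.mean((y - x) ** 2)           # error of the raw observations
mse_den = np.mean((xhat - x) ** 2)        # error of the filtered estimate
```

For a slowly varying signal the filtered estimate has much smaller mean-squared error than the raw observations; the papers study how to mimic the best such filter adaptively, without knowing it in advance.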