Nonparametric estimation of multivariate convex-transformed densities
We study estimation of multivariate densities of the form $p(x) = h(g(x))$
for $x \in \mathbb{R}^d$, for a fixed monotone function $h$ and an unknown
convex function $g$. The canonical example is $h(y) = e^{-y}$ for
$y \in \mathbb{R}$; in this case, the resulting class of densities
$\mathcal{P}(e^{-y}) = \{p = \exp(-g) : g \text{ is convex}\}$ is well known
as the class of log-concave densities. Other functions $h$ allow for classes
of densities with heavier tails than the log-concave class. We first
investigate when the maximum likelihood estimator exists for the class
$\mathcal{P}(h)$ for various choices of monotone transformations $h$,
including decreasing and increasing functions $h$. The resulting models for
increasing transformations extend the classes of log-convex densities studied
previously in the econometrics literature, corresponding to $h(y) = e^{y}$.
We then establish consistency of the maximum likelihood estimator for fairly
general functions $h$, including the log-concave class $\mathcal{P}(e^{-y})$
and many others. In a final section, we provide asymptotic minimax lower
bounds for the estimation of $p(x_0)$ and its vector of derivatives at a
fixed point $x_0$ under natural smoothness hypotheses on $h$ and $g$. The
proofs rely heavily on results from convex analysis.

Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/)
at http://dx.doi.org/10.1214/10-AOS840 by the Institute of Mathematical
Statistics (http://www.imstat.org).
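The effect of the transformation $h$ on tail weight can be illustrated numerically. The sketch below pairs the convex function $g(x) = |x|$ with the canonical choice $h(y) = e^{-y}$ and with a hypothetical polynomially decaying $h(y) = (1+y)^{-3}$; the power-type choice is an illustrative assumption, not one of the paper's examples, and the densities are left unnormalized.

```python
import math

# Convex-transformed densities p(x) = h(g(x)) with convex g.
# g(x) = |x| is convex; the power-type h below is an illustrative
# assumption (unnormalized densities), not taken from the paper.

def g(x):
    """A simple convex function."""
    return abs(x)

def h_exp(y):
    """h(y) = e^{-y}: yields a log-concave (here Laplace-shaped) density."""
    return math.exp(-y)

def h_power(y):
    """Hypothetical decreasing h with polynomial decay: heavier tails."""
    return (1.0 + y) ** -3

def p_exp(x):
    return h_exp(g(x))

def p_power(x):
    return h_power(g(x))

# Far in the tail, the polynomial transform dominates the exponential one:
for x in (1.0, 5.0, 10.0):
    print(x, p_exp(x), p_power(x))
```

At the origin both unnormalized densities equal 1, while for large $|x|$ the polynomial transform is orders of magnitude larger, which is the sense in which such classes have heavier tails than the log-concave class.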
A Kiefer--Wolfowitz theorem for convex densities
Kiefer and Wolfowitz [Z. Wahrsch. Verw. Gebiete 34 (1976) 73--85] showed that
if $F$ is a strictly curved concave distribution function (corresponding to a
strictly monotone density $f$), then the Maximum Likelihood Estimator
$\widehat{F}_n$, which is, in fact, the least concave majorant of the
empirical distribution function $\mathbb{F}_n$, differs from the empirical
distribution function in the uniform norm by no more than a constant times
$n^{-2/3}\log n$ almost surely. We review their result and give an updated
version of their proof. We prove a comparable theorem for the class of
distribution functions $F$ with convex decreasing densities $f$, but with the
maximum likelihood estimator $\widehat{F}_n$ of $F$ replaced by the least
squares estimator $\widetilde{F}_n$: if $X_1, \ldots, X_n$ are sampled from a
distribution function $F$ with strictly convex density $f$, then the least
squares estimator $\widetilde{F}_n$ of $F$ and the empirical distribution
function $\mathbb{F}_n$ differ in the uniform norm by no more than a constant
times $n^{-3/5}(\log n)^{3/5}$ almost surely. The proofs rely on bounds on
the interpolation error for complete spline interpolation due to Hall
[J. Approximation Theory 1 (1968) 209--218] and Hall and Meyer
[J. Approximation Theory 16 (1976) 105--122], building on earlier work by
Birkhoff and de Boor [J. Math. Mech. 13 (1964) 827--835]. These results,
which are crucial for the developments here, are all nicely summarized and
exposited in de Boor [A Practical Guide to Splines (2001) Springer, New
York].

Comment: Published in the IMS Lecture Notes--Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) at
http://dx.doi.org/10.1214/074921707000000256 by the Institute of Mathematical
Statistics (http://www.imstat.org).
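The least concave majorant at the heart of the Kiefer--Wolfowitz result can be computed directly: it is the upper convex hull of the points $(X_{(i)}, i/n)$. A minimal sketch, assuming an Exp(1) sample so that the true distribution function is concave on $[0, \infty)$:

```python
import random
import bisect

def lcm_of_ecdf(xs):
    """Least concave majorant of the empirical CDF of a sorted sample xs:
    the upper convex hull of (0, 0) and the points (x_(i), i/n)."""
    n = len(xs)
    pts = [(0.0, 0.0)] + [(x, (i + 1) / n) for i, x in enumerate(xs)]
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the last turn is counterclockwise (not concave)
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def eval_piecewise(hull, x):
    """Evaluate the piecewise-linear majorant at x."""
    xs = [p[0] for p in hull]
    j = bisect.bisect_right(xs, x) - 1
    if j >= len(hull) - 1:
        return hull[-1][1]
    (x0, y0), (x1, y1) = hull[j], hull[j + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

random.seed(1)
n = 200
sample = sorted(random.expovariate(1.0) for _ in range(n))  # concave true CDF
hull = lcm_of_ecdf(sample)
# sup-norm distance between majorant and empirical CDF, checked at the
# left limits of the jump points, where the gap is largest
dist = max(eval_piecewise(hull, x) - i / n for i, x in enumerate(sample))
print("sup-norm distance:", dist)
```

For samples of this size the computed distance is small, in line with the almost-sure rate in the theorem, though of course a single sample only illustrates rather than verifies the result.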
On Convex Least Squares Estimation when the Truth is Linear
We prove that the convex least squares estimator (LSE) attains a $n^{-1/2}$
pointwise rate of convergence in any region where the truth is linear. In
addition, the asymptotic distribution can be characterized by a modified
invelope process. Analogous results hold when one uses the derivative of the
convex LSE to perform derivative estimation. These asymptotic results
facilitate a new consistent testing procedure of linearity against a convex
alternative. Moreover, we show that the convex LSE adapts to the optimal rate
at the boundary points of the region where the truth is linear, up to a
log-log factor. These conclusions are valid in the context of both density
estimation and regression function estimation.

Comment: 35 pages, 5 figures
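The convex LSE in the regression setting minimizes the residual sum of squares over sequences with nonnegative second differences. The sketch below computes it by Dykstra's cyclic projection onto the corresponding halfspaces; this algorithm is an assumption chosen for illustration, not the method analyzed in the paper. With a linear truth plus noise, the fit stays close to the ordinary linear fit:

```python
import numpy as np

def convex_lse(x, y, n_cycles=2000):
    """Least squares projection of y onto the cone of convex sequences on
    the grid x (nonnegative divided second differences), via Dykstra's
    cyclic projections. A minimal sketch, not the paper's algorithm."""
    n = len(y)
    theta = np.asarray(y, dtype=float).copy()
    normals = []
    for i in range(1, n - 1):
        a = np.zeros(n)
        h0, h1 = x[i] - x[i - 1], x[i + 1] - x[i]
        # divided second difference at x_i must be >= 0
        a[i - 1], a[i], a[i + 1] = 1.0 / h0, -(1.0 / h0 + 1.0 / h1), 1.0 / h1
        normals.append(a)
    corrections = [np.zeros(n) for _ in normals]
    for _ in range(n_cycles):
        for k, a in enumerate(normals):
            z = theta + corrections[k]
            viol = min(0.0, a @ z) / (a @ a)
            theta_new = z - viol * a          # project onto {a . theta >= 0}
            corrections[k] = z - theta_new
            theta = theta_new
    return theta

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.05, size=15)  # linear truth plus noise
theta = convex_lse(x, y)
print("fitted values:", np.round(theta, 3))
```

Since linear functions belong to the convex cone, the LSE's residual sum of squares can never exceed that of the best-fitting line, which gives a quick sanity check on the projection.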
Confidence Bands for Distribution Functions: A New Look at the Law of the Iterated Logarithm
We present a general law of the iterated logarithm for stochastic processes
on the open unit interval having subexponential tails in a locally uniform
fashion. It applies to standard Brownian bridge but also to suitably
standardized empirical distribution functions. This leads to new
goodness-of-fit tests and confidence bands which refine the procedures of Berk
and Jones (1979) and Owen (1995). Roughly speaking, the high power and accuracy
of the latter procedures in the tail regions of distributions are essentially
preserved, while gaining considerably in the central region.
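The Berk and Jones (1979) statistic that these bands refine can be sketched in a few lines: it is the supremum over the sample of $n\,K(\mathbb{F}_n(x), x)$ under the uniform null, where $K$ is the Bernoulli Kullback--Leibler divergence. A minimal sketch (the paper's refined weighting is not implemented here):

```python
import math
import random

def K(p, q):
    """Bernoulli Kullback-Leibler divergence K(p, q)."""
    def xlog(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return xlog(p, q) + xlog(1.0 - p, 1.0 - q)

def berk_jones(sample):
    """Berk-Jones supremum statistic for H0: data ~ Uniform(0, 1).

    The empirical CDF is a step function, so the supremum is attained at
    an order statistic, from the left limit or the value at the jump."""
    xs = sorted(sample)
    n = len(xs)
    stat = 0.0
    for i, x in enumerate(xs):
        if 0.0 < x < 1.0:
            for Fn in (i / n, (i + 1) / n):  # left limit and jump value
                stat = max(stat, n * K(Fn, x))
    return stat

random.seed(7)
u = [random.random() for _ in range(100)]
print("Berk-Jones statistic:", berk_jones(u))
```

Inverting such a statistic over candidate values of $F(x)$ at each point is what produces a confidence band, and the divergence $K$ is what keeps the band accurate in the tails, the property the abstract says is preserved.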