186 research outputs found

    Nonparametric estimation of multivariate convex-transformed densities

    We study estimation of multivariate densities $p$ of the form $p(x)=h(g(x))$ for $x\in\mathbb{R}^d$, for a fixed monotone function $h$ and an unknown convex function $g$. The canonical example is $h(y)=e^{-y}$ for $y\in\mathbb{R}$; in this case the resulting class of densities $\mathcal{P}(e^{-y})=\{p=\exp(-g): g \text{ is convex}\}$ is well known as the class of log-concave densities. Other functions $h$ allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator $\hat{p}$ exists for the class $\mathcal{P}(h)$ for various choices of monotone transformations $h$, including both decreasing and increasing functions $h$. The resulting models for increasing transformations $h$ extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to $h(y)=\exp(y)$. We then establish consistency of the maximum likelihood estimator for fairly general functions $h$, including the log-concave class $\mathcal{P}(e^{-y})$ and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of $p$ and its vector of derivatives at a fixed point $x_0$ under natural smoothness hypotheses on $h$ and $g$. The proofs rely heavily on results from convex analysis. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/10-AOS840
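    The defining property of the canonical class above can be checked numerically. The following sketch (an illustration under our own choices of grid and density, not code from the paper) verifies midpoint convexity of $g=-\log p$ for the standard normal density, placing it in the log-concave class $\mathcal{P}(e^{-y})$:

```python
import numpy as np

# Illustration, not the paper's method: for h(y) = e^{-y}, a density p lies
# in P(e^{-y}) exactly when g = -log p is convex.  For the standard normal,
# g(x) = x^2/2 + log(2*pi)/2, which is convex.
def g(x):
    return 0.5 * x**2 + 0.5 * np.log(2.0 * np.pi)

# Midpoint convexity on a grid: g((a+b)/2) <= (g(a)+g(b))/2 for all a, b.
xs = np.linspace(-4.0, 4.0, 41)
mid_ok = all(g((a + b) / 2) <= (g(a) + g(b)) / 2 + 1e-12
             for a in xs for b in xs)
print(mid_ok)  # True: the standard normal is log-concave
```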

    A Kiefer--Wolfowitz theorem for convex densities

    Kiefer and Wolfowitz [Z. Wahrsch. Verw. Gebiete 34 (1976) 73--85] showed that if $F$ is a strictly curved concave distribution function (corresponding to a strictly monotone density $f$), then the maximum likelihood estimator $\hat{F}_n$, which is, in fact, the least concave majorant of the empirical distribution function $\mathbb{F}_n$, differs from the empirical distribution function in the uniform norm by no more than a constant times $(n^{-1}\log n)^{2/3}$ almost surely. We review their result and give an updated version of their proof. We prove a comparable theorem for the class of distribution functions $F$ with convex decreasing densities $f$, but with the maximum likelihood estimator $\hat{F}_n$ of $F$ replaced by the least squares estimator $\widetilde{F}_n$: if $X_1,\dots,X_n$ are sampled from a distribution function $F$ with strictly convex density $f$, then the least squares estimator $\widetilde{F}_n$ of $F$ and the empirical distribution function $\mathbb{F}_n$ differ in the uniform norm by no more than a constant times $(n^{-1}\log n)^{3/5}$ almost surely. The proofs rely on bounds on the interpolation error for complete spline interpolation due to Hall [J. Approximation Theory 1 (1968) 209--218] and Hall and Meyer [J. Approximation Theory 16 (1976) 105--122], building on earlier work by Birkhoff and de Boor [J. Math. Mech. 13 (1964) 827--835]. These results, which are crucial for the developments here, are all nicely summarized and exposited in de Boor [A Practical Guide to Splines (2001) Springer, New York]. Comment: Published in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/074921707000000256
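    The estimator in the first result is concrete enough to sketch: the MLE of $F$ under a monotone decreasing density is the least concave majorant of the ECDF, computable by an upper-hull sweep over the ECDF knots. The sample, seed, and hull routine below are our own illustrative choices, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.exponential(size=n))  # strictly monotone decreasing density

# ECDF knots (x_i, i/n); anchor at (0, 0) since the support starts at 0.
pts = [(0.0, 0.0)] + [(x[i], (i + 1) / n) for i in range(n)]

# Least concave majorant = upper convex hull of the knots
# (monotone-chain sweep that keeps only right turns).
hull = []
for p in pts:
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        # pop the middle point whenever p lies on or above the chord
        if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
            hull.pop()
        else:
            break
    hull.append(p)

hx, hy = (np.array(v) for v in zip(*hull))
ecdf = np.arange(1, n + 1) / n
gap = float(np.max(np.interp(x, hx, hy) - ecdf))  # uniform distance at knots
print(round(gap, 3))
```

    The printed gap can be compared with the $(n^{-1}\log n)^{2/3}$ rate from the theorem.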

    On Convex Least Squares Estimation when the Truth is Linear

    We prove that the convex least squares estimator (LSE) attains an $n^{-1/2}$ pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results lead to a new consistent procedure for testing linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions are valid in the context of both density estimation and regression function estimation. Comment: 35 pages, 5 figures
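    A toy simulation makes the linear-truth regression setting concrete. The grid, noise level, and SLSQP solver below are assumptions for illustration (the paper proves asymptotics and does not prescribe an algorithm); the convex LSE is the least squares projection of the data onto sequences with nonnegative second differences:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 25
x = np.linspace(0.0, 1.0, n)
truth = 2.0 * x + 1.0                      # the truth is linear
y = truth + 0.1 * rng.standard_normal(n)

# Convex LSE on an equally spaced grid: minimize ||y - theta||^2 subject to
# nonnegative second differences (a small QP, solved here with SLSQP).
cons = [{"type": "ineq",
         "fun": lambda t, i=i: t[i + 2] - 2.0 * t[i + 1] + t[i]}
        for i in range(n - 2)]
res = minimize(lambda t: float(np.sum((y - t) ** 2)), y,
               constraints=cons, method="SLSQP")
err = float(np.max(np.abs(res.x - truth)))
print(res.success, round(err, 2))
```

    With a linear truth the fitted values stay close to the line, consistent with the $n^{-1/2}$ pointwise rate described above.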

    Confidence Bands for Distribution Functions: A New Look at the Law of the Iterated Logarithm

    We present a general law of the iterated logarithm for stochastic processes on the open unit interval having subexponential tails in a locally uniform fashion. It applies to the standard Brownian bridge but also to suitably standardized empirical distribution functions. This leads to new goodness-of-fit tests and confidence bands which refine the procedures of Berk and Jones (1979) and Owen (1995). Roughly speaking, the high power and accuracy of the latter procedures in the tail regions of distributions are essentially preserved while gaining considerably in the central region.
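    The role of standardization can be sketched as follows (a hypothetical illustration, not the authors' procedure): dividing the empirical process by $\sqrt{t(1-t)}$, as in Berk--Jones-type statistics, magnifies tail deviations relative to the unweighted Kolmogorov--Smirnov statistic, which is precisely where an iterated-logarithm calibration is needed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
u = np.sort(rng.uniform(size=n))
ecdf = np.arange(1, n + 1) / n

# Kolmogorov-Smirnov weights all deviations equally; dividing by
# sqrt(t(1-t)) inflates deviations near 0 and 1, since the weight
# 1/sqrt(t(1-t)) is at least 2 everywhere on (0, 1).
ks = np.sqrt(n) * np.max(np.abs(ecdf - u))
weighted = np.sqrt(n) * np.max(np.abs(ecdf - u) / np.sqrt(u * (1.0 - u)))
print(ks < weighted)  # True: the weighted statistic dominates
```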