
    LogConcDEAD: An R Package for Maximum Likelihood Estimation of a Multivariate Log-Concave Density

    In this article we introduce the R package LogConcDEAD (Log-concave density estimation in arbitrary dimensions). Its main function is to compute the nonparametric maximum likelihood estimator of a log-concave density. Functions for plotting, evaluating, and sampling from the density estimate are provided. All of the functions in the package are illustrated using simple, reproducible examples with simulated data.
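
    A minimal one-dimensional sketch of the optimization problem behind such estimators may help fix ideas (this is our own illustration in Python, not the package's code or algorithm): the log-concave maximum likelihood estimator $\hat{\varphi} = \log \hat{f}$ maximizes $(1/n)\sum_i \varphi(X_i) - \int e^{\varphi}$ over concave $\varphi$, and in one dimension the maximizer is piecewise linear with knots at (a subset of) the data. So $\varphi$ can be parametrized by its values at the sorted data points, replaced by their least concave majorant, and handed to a generic optimizer standing in for the package's dedicated solver.

```python
# Toy 1-D log-concave MLE (illustrative assumptions throughout; the
# package handles arbitrary dimension with a dedicated solver).
import numpy as np
from scipy.optimize import minimize

def upper_hull(x, y):
    """Indices of the least concave majorant (upper hull) of (x_i, y_i)."""
    keep = [0]
    for i in range(1, len(x)):
        while len(keep) >= 2:
            i0, i1 = keep[-2], keep[-1]
            # drop the middle point if it lies on or below the chord i0 -> i
            if (y[i1] - y[i0]) * (x[i] - x[i0]) <= (y[i] - y[i0]) * (x[i1] - x[i0]):
                keep.pop()
            else:
                break
        keep.append(i)
    return np.array(keep)

def neg_loglik(y, x):
    """-(1/n) sum phi(x_i) + integral exp(phi), with phi the upper hull of y."""
    idx = upper_hull(x, y)
    xs, ys = x[idx], y[idx]
    phi = np.interp(x, xs, ys)                 # hull evaluated at the data
    dx, dy = np.diff(xs), np.diff(ys)
    with np.errstate(divide="ignore", invalid="ignore"):
        seg = dx * (np.exp(ys[1:]) - np.exp(ys[:-1])) / dy   # exact segment integrals
    flat = np.abs(dy) < 1e-12
    seg[flat] = dx[flat] * np.exp(ys[:-1][flat])             # near-flat segments
    return -phi.mean() + seg.sum()

rng = np.random.default_rng(0)
x = np.sort(rng.standard_normal(15))
y0 = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)     # start at the N(0,1) log-density
res = minimize(neg_loglik, y0, args=(x,), method="Nelder-Mead",
               options={"maxiter": 50000, "fatol": 1e-10})
idx = upper_hull(x, res.x)
f_hat = np.exp(np.interp(x, x[idx], res.x[idx]))   # density estimate at the data
```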

    Nonparametric estimation of multivariate convex-transformed densities

    We study estimation of multivariate densities $p$ of the form $p(x) = h(g(x))$ for $x \in \mathbb{R}^d$, for a fixed monotone function $h$ and an unknown convex function $g$. The canonical example is $h(y) = e^{-y}$ for $y \in \mathbb{R}$; in this case, the resulting class of densities $\mathcal{P}(e^{-y}) = \{p = \exp(-g) : g \text{ is convex}\}$ is well known as the class of log-concave densities. Other functions $h$ allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator $\hat{p}$ exists for the class $\mathcal{P}(h)$ for various choices of monotone transformation $h$, including decreasing and increasing functions $h$. The resulting models for increasing transformations $h$ extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to $h(y) = \exp(y)$. We then establish consistency of the maximum likelihood estimator for fairly general functions $h$, including the log-concave class $\mathcal{P}(e^{-y})$ and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of $p$ and its vector of derivatives at a fixed point $x_0$ under natural smoothness hypotheses on $h$ and $g$. The proofs rely heavily on results from convex analysis. Comment: Published at http://dx.doi.org/10.1214/10-AOS840 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
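
    A small concrete example may clarify the class (our illustration, not the paper's): take the same convex $g(x) = 1 + |x|$ on $\mathbb{R}$ and two monotone transforms, the canonical $h(y) = e^{-y}$ and, as one heavier-tailed choice, the decreasing power transform $h(y) = y^{-\beta}$ with $\beta = 2$; the first yields a Laplace-type log-concave density, the second a density with polynomial tails.

```python
# Two members of the convex-transformed family p(x) = h(g(x)), built
# from the same convex g but different monotone h (illustrative choices).
import numpy as np
from scipy.integrate import quad

g = lambda x: 1.0 + np.abs(x)                       # convex on R

Z1, _ = quad(lambda x: np.exp(-g(x)), -np.inf, np.inf)
p_logconcave = lambda x: np.exp(-g(x)) / Z1         # h(y) = exp(-y)

beta = 2.0                                          # our choice; beta > 1 for integrability
Z2, _ = quad(lambda x: g(x) ** (-beta), -np.inf, np.inf)
p_heavy = lambda x: g(x) ** (-beta) / Z2            # h(y) = y**(-beta)

for x in (0.0, 5.0, 20.0):
    print(f"x={x:5.1f}  exp-tail {p_logconcave(x):.3e}  poly-tail {p_heavy(x):.3e}")
# The transform h controls the tails (exponential vs polynomial decay)
# while the convex g supplies the shape.
```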

    Bayesian nonparametric multivariate convex regression

    In many applications, such as economics, operations research and reinforcement learning, one often needs to estimate a multivariate regression function $f$ subject to a convexity constraint. For example, in sequential decision processes the value of a state under optimal subsequent decisions may be known to be convex or concave. We propose a new Bayesian nonparametric multivariate approach based on characterizing the unknown regression function as the max of a random collection of unknown hyperplanes. This specification induces a prior with large support in a Kullback-Leibler sense on the space of convex functions, while also leading to strong posterior consistency. Although we assume that $f$ is defined over $\mathbb{R}^p$, we show that this model has a convergence rate of $\log(n)^{-1} n^{-1/(d+2)}$ under the empirical $L_2$ norm when $f$ actually maps a $d$-dimensional linear subspace to $\mathbb{R}$. We design an efficient reversible jump MCMC algorithm for posterior computation and demonstrate the method through an application to value function approximation.
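
    The structural fact behind the prior is that a pointwise maximum of affine functions is always convex, so random hyperplanes induce random convex functions. A minimal sketch of this representation (the Gaussian draws and the fixed number of hyperplanes are our illustrative assumptions; the paper places a prior on the collection itself, including its size):

```python
# Max-affine function: f(x) = max_k (a_k + b_k . x) is convex for any
# collection of hyperplanes; the draws below are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p, K = 2, 10                      # input dimension, number of hyperplanes (assumed)
a = rng.normal(size=K)            # intercepts
B = rng.normal(size=(K, p))       # slopes, one hyperplane per row

def f(x):
    """Pointwise maximum over the K hyperplanes."""
    return float(np.max(a + B @ np.asarray(x)))

# Midpoint-convexity spot check: f((u+v)/2) <= (f(u) + f(v)) / 2.
u, v = rng.normal(size=p), rng.normal(size=p)
assert f((u + v) / 2) <= 0.5 * (f(u) + f(v)) + 1e-12
```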

    A Bayesian nonparametric approach to log-concave density estimation

    The estimation of a log-concave density on $\mathbb{R}$ is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We also present two computationally more feasible approximations and a more practical empirical Bayes approach, which are illustrated numerically via simulations. Comment: 39 pages, 17 figures. Simulation studies were significantly expanded and one more theorem has been added.
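
    The building block named in the abstract is a Dirichlet process mixture. Below is a standard truncated stick-breaking draw of a DP mixture of Gaussians (truncation level, kernel, base measure and concentration parameter are our illustrative choices); how such a draw is exponentiated and normalized to produce a log-concave prior sample is the paper's construction and is not reproduced here.

```python
# Truncated stick-breaking draw from a DP mixture of Gaussians
# (generic ingredient only; all numerical choices are illustrative).
import numpy as np

rng = np.random.default_rng(2)
alpha, T = 1.0, 50                       # concentration, truncation level
v = rng.beta(1.0, alpha, size=T)         # stick-breaking proportions
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))   # mixture weights
mu = rng.normal(0.0, 2.0, size=T)        # atoms drawn from a N(0, 4) base measure

def mixture_density(x, sigma=0.5):
    """Density of the truncated DP mixture of N(mu_t, sigma^2) components."""
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return comps @ w

print(mixture_density([0.0, 1.0]))
```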

    The MM Alternative to EM

    The EM algorithm is a special case of a more general algorithm called the MM algorithm. Specific MM algorithms often have nothing to do with missing data. The first M step of an MM algorithm creates a surrogate function that is optimized in the second M step. In minimization, MM stands for majorize--minimize; in maximization, it stands for minorize--maximize. This two-step process always drives the objective function in the right direction. Construction of MM algorithms relies on recognizing and manipulating inequalities rather than calculating conditional expectations. This survey walks the reader through the construction of several specific MM algorithms. The potential of the MM algorithm in solving high-dimensional optimization and estimation problems is its most attractive feature. Our applications to random graph models, discriminant analysis and image restoration showcase this ability. Comment: Published at http://dx.doi.org/10.1214/08-STS264 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
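
    A self-contained toy instance of the majorize--minimize recipe (our example, not one of the survey's applications): to minimize $f(x) = \sum_i |x - x_i|$, majorize each $|r|$ at the current residual $c \neq 0$ by the quadratic $r^2/(2|c|) + |c|/2$, which equals $|r|$ at $r = \pm c$ and dominates it elsewhere. The surrogate is then a weighted least-squares problem whose minimizer is a weighted mean, and the MM descent property guarantees the objective never increases.

```python
# MM iteration for a sample median via quadratic majorization of |r|.
import numpy as np

def mm_median(data, iters=100, eps=1e-9):
    x = float(np.mean(data))                 # any starting point works
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - data), eps)   # majorization weights
        x_new = float(np.sum(w * data) / np.sum(w))   # minimize the surrogate
        if abs(x_new - x) < 1e-12:
            break
        x = x_new
    return x

data = np.array([1.0, 2.0, 3.5, 7.0, 100.0])
print(mm_median(data), np.median(data))      # both give 3.5
```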