
    Asymptotics of the discrete log-concave maximum likelihood estimator and related applications

    The assumption of log-concavity is a flexible and appealing nonparametric shape constraint in distribution modelling. In this work, we study the log-concave maximum likelihood estimator (MLE) of a probability mass function (pmf). We show that the MLE is strongly consistent and derive its pointwise asymptotic theory in both the well-specified and misspecified settings. Our asymptotic results are used to calculate confidence intervals for the true log-concave pmf. Both the MLE and the associated confidence intervals can be computed easily with the R package logcondiscr. We illustrate our theoretical results using recent data from the H1N1 pandemic in Ontario, Canada. Comment: 21 pages, 7 figures.
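
    For reference, discrete log-concavity is a simple inequality on consecutive probabilities, and this is the constraint the MLE enforces. The following is a minimal Python sketch that merely checks that constraint (it is not the logcondiscr package, which is the R implementation mentioned above); the function name and tolerance are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def is_log_concave_pmf(pmf, tol=1e-12):
    """Check the discrete log-concavity inequality p(k)^2 >= p(k-1) * p(k+1).

    `pmf` is a 1-D array of probabilities on consecutive support points.
    """
    p = np.asarray(pmf, dtype=float)
    if p.size < 3:
        return True  # nothing to violate with fewer than three support points
    return bool(np.all(p[1:-1] ** 2 >= p[:-2] * p[2:] - tol))

# Example: a truncated Poisson pmf satisfies the inequality.
print(is_log_concave_pmf(poisson.pmf(np.arange(30), mu=4.0)))  # True
```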

    Testing k-monotonicity of a discrete distribution. Application to the estimation of the number of classes in a population

    We develop several goodness-of-fit tests for the k-monotonicity of a discrete density, based on the empirical distribution of the observations. Our tests are non-parametric, easy to implement, and are proved to be asymptotically of the desired level and consistent. We also propose an estimator of the degree of k-monotonicity of the distribution based on these non-parametric goodness-of-fit tests. We apply our work to the estimation of the total number of classes in a population. A large simulation study assesses the performance of our procedures. Comment: 32 pages, 8 figures.
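
    For context, a common definition of discrete k-monotonicity for a pmf p on {0, 1, 2, ...} requires the iterated finite differences to alternate in sign, (-1)^j Δ^j p(i) ≥ 0 for j = 1, ..., k, so 1-monotone means non-increasing and 2-monotone means non-increasing and convex. A minimal Python sketch of that definition (illustrative names and tolerance, not the authors' test statistic) is:

```python
import numpy as np

def is_k_monotone(pmf, k, tol=1e-12):
    """Check (-1)^j * Delta^j p >= 0 for j = 1..k, with Delta p(i) = p(i+1) - p(i)."""
    p = np.asarray(pmf, dtype=float)
    return all(np.all((-1) ** j * np.diff(p, n=j) >= -tol) for j in range(1, k + 1))

# Example: a truncated geometric pmf is k-monotone for every k.
geom = 0.6 * 0.4 ** np.arange(20)
print(is_k_monotone(geom, k=3))  # True
```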

    On Convex Least Squares Estimation when the Truth is Linear

    We prove that the convex least squares estimator (LSE) attains an n^{-1/2} pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when the derivative of the convex LSE is used for derivative estimation. These asymptotic results lead to a new consistent procedure for testing linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions hold in the context of both density estimation and regression function estimation. Comment: 35 pages, 5 figures.
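
    The estimator in question is a quadratic program: minimize the residual sum of squares subject to the fitted values being convex in the design points. Below is a minimal cvxpy sketch of convex least squares regression, assuming distinct design points; it is an illustration of the estimator, not the paper's code.

```python
import cvxpy as cp
import numpy as np

def convex_lse(x, y):
    """Least squares fit with convexity enforced as non-decreasing slopes.

    Assumes the design points in `x` are distinct.
    """
    order = np.argsort(x)
    x, y = np.asarray(x, dtype=float)[order], np.asarray(y, dtype=float)[order]
    theta = cp.Variable(len(x))  # fitted values at the design points
    slopes = cp.multiply(1.0 / np.diff(x), cp.diff(theta))
    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)),
                         [cp.diff(slopes) >= 0])  # convexity: slopes non-decreasing
    problem.solve()
    return x, theta.value

# Example: noisy data from a function that is linear (identically zero) on half of its domain.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 100)
y = np.maximum(x, 0.0) + 0.1 * rng.standard_normal(x.size)
xs, fit = convex_lse(x, y)
```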

    Large Scale Variational Bayesian Inference for Structured Scale Mixture Models

    Natural image statistics exhibit hierarchical dependencies across multiple scales. Representing such prior knowledge in non-factorial latent tree models can substantially boost the performance of image denoising, inpainting, deconvolution, or reconstruction beyond standard factorial "sparse" methodology. We derive a large-scale approximate Bayesian inference algorithm for linear models with non-factorial (latent tree-structured) scale mixture priors. Experimental results on a range of denoising and inpainting problems demonstrate substantially improved performance compared to MAP estimation or to inference with factorial priors. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
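
    As background on the model class (generic notation, not necessarily the paper's), a linear observation model with a Gaussian scale-mixture prior on the coefficients u can be written as below; the factorial "sparse" case takes p(γ) = ∏_i p(γ_i), whereas the structured priors considered here couple the scales γ through a latent tree so that dependencies across image scales are captured.

```latex
% Linear model with a Gaussian scale-mixture prior (illustrative, generic notation)
\begin{aligned}
  y &= X u + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^{2} I),\\
  p(u) &= \int \mathcal{N}\!\bigl(u \mid 0, \operatorname{diag}(\gamma)\bigr)\, p(\gamma)\, \mathrm{d}\gamma .
\end{aligned}
```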

    A probabilistic interpretation of set-membership filtering: application to polynomial systems through polytopic bounding

    Set-membership estimation is usually formulated in the context of set-valued calculus, and no probabilistic calculations are necessary. In this paper, we show that set-membership estimation can be equivalently formulated in a probabilistic setting by employing sets of probability measures. Inference in set-membership estimation is then carried out by computing expectations with respect to the updated set of probability measures P, as in the probabilistic case. In particular, it is shown that inference can be performed by solving a semi-infinite linear programming problem, which is a special case of the truncated moment problem in which only the zeroth-order moment (i.e., the support) is known. By writing the dual of this semi-infinite linear program, we show that, if the nonlinearities in the measurement and process equations are polynomial, and if the bounding sets for the initial state and the process and measurement noises are described by polynomial inequalities, then an approximation of the semi-infinite linear program can be obtained efficiently using the theory of sum-of-squares polynomial optimization. We then derive a smart greedy procedure to compute a polytopic outer approximation of the true membership set, by computing the minimum-volume polytope that outer-bounds the set of all means computed with respect to P.
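
    The duality step the abstract refers to is standard sum-of-squares machinery (notation here is illustrative): when only the support X = {x : h_1(x) ≥ 0, ..., h_m(x) ≥ 0} of the probability measures is known, the upper expectation of a polynomial g is given by a semi-infinite linear program,

```latex
% Upper expectation when only the support (zeroth-order moment) is known
\overline{E}[g] \;=\; \sup_{P:\ \operatorname{supp}(P)\subseteq X} E_{P}\bigl[g(x)\bigr]
            \;=\; \inf\bigl\{\lambda \in \mathbb{R} : \lambda - g(x) \ge 0 \;\; \forall x \in X\bigr\},
```

    and the semi-infinite constraint is relaxed by requiring λ - g = σ_0 + Σ_j σ_j h_j with σ_0, ..., σ_m sum-of-squares polynomials, which turns each such bound into a semidefinite program.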