
    Kullback-Leibler aggregation and misspecified generalized linear models

    In a regression setup with deterministic design, we study the pure aggregation problem and introduce a natural extension from the Gaussian distribution to distributions in the exponential family. While this extension bears strong connections with generalized linear models, it does not require identifiability of the parameter or even that the model on the systematic component is true. It is shown that this problem can be solved by constrained and/or penalized likelihood maximization, and we derive sharp oracle inequalities that hold both in expectation and with high probability. Finally, all the bounds are proved to be optimal in a minimax sense. Comment: Published at http://dx.doi.org/10.1214/11-AOS961 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
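    A rough illustrative sketch of the kind of procedure described above: aggregate a small dictionary of predictors by maximizing a Bernoulli (exponential-family) likelihood over the simplex of convex weights. The dictionary, data, and optimizer are hypothetical choices for illustration, not the paper's estimator.

```python
# A minimal sketch (not the paper's exact estimator): convex aggregation of
# dictionary predictors by constrained likelihood maximization, illustrated
# with a Bernoulli/logistic exponential-family model. Names and data are
# hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, M = 200, 5
F = rng.normal(size=(n, M))                       # dictionary predictions f_1(x_i), ..., f_M(x_i)
y = rng.binomial(1, 1 / (1 + np.exp(-F[:, 0])))   # responses; the model may be misspecified

def neg_log_lik(theta):
    eta = F @ theta                               # systematic component of the aggregate
    return np.sum(np.log1p(np.exp(eta)) - y * eta)  # Bernoulli negative log-likelihood

# maximize the likelihood over the simplex {theta >= 0, sum(theta) = 1}
cons = ({"type": "eq", "fun": lambda t: np.sum(t) - 1},)
res = minimize(neg_log_lik, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0.0, 1.0)] * M, constraints=cons)
print("aggregation weights:", np.round(res.x, 3))
```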

    Linear and convex aggregation of density estimators

    We study the problem of linear and convex aggregation of $M$ estimators of a density with respect to the mean squared risk. We provide procedures for linear and convex aggregation and prove oracle inequalities for their risks. We also obtain lower bounds showing that these procedures are rate optimal in a minimax sense. As an example, we apply the general results to the aggregation of multivariate kernel density estimators with different bandwidths. We show that linear and convex aggregates mimic the kernel oracles in an asymptotically exact sense for a large class of kernels, including the Gaussian, Silverman's, and Pinsker's kernels. We prove that, for Pinsker's kernel, the proposed aggregates are sharp asymptotically minimax simultaneously over a large scale of Sobolev classes of densities. Finally, we provide simulations demonstrating the performance of the convex aggregation procedure. Comment: 22 pages.
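    The sketch below illustrates the flavor of convex aggregation of kernel density estimators: it fits Gaussian KDEs with several bandwidths on half the sample and chooses convex weights minimizing an empirical L2 risk criterion on the held-out half. The bandwidth grid, the grid-based integral, and the sample split are assumptions for illustration, not the paper's procedure.

```python
# A minimal sketch (assumptions, not the paper's procedure): convex aggregation
# of Gaussian kernel density estimators with different bandwidths, choosing the
# weights by minimizing an empirical L2 (mean squared) risk criterion on a
# held-out half of the sample.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = rng.normal(size=400)
train, val = x[:200], x[200:]

bandwidths = [0.1, 0.3, 0.5, 1.0]
kdes = [gaussian_kde(train, bw_method=h) for h in bandwidths]

grid = np.linspace(-5, 5, 1000)
P = np.array([k(grid) for k in kdes])          # densities on a grid, shape (M, 1000)
G = (P @ P.T) * (grid[1] - grid[0])            # Gram matrix of integrals of p_j * p_k
c = np.array([k(val).mean() for k in kdes])    # held-out estimate of integral of p_j * p

def l2_risk(theta):                            # risk criterion up to a constant in p
    return theta @ G @ theta - 2 * theta @ c

cons = ({"type": "eq", "fun": lambda t: t.sum() - 1},)
res = minimize(l2_risk, np.full(len(kdes), 0.25), method="SLSQP",
               bounds=[(0, 1)] * len(kdes), constraints=cons)
print("convex aggregation weights:", np.round(res.x, 3))
```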

    Entropic optimal transport is maximum-likelihood deconvolution

    We give a statistical interpretation of entropic optimal transport by showing that performing maximum-likelihood estimation for Gaussian deconvolution corresponds to calculating a projection with respect to the entropic optimal transport distance. This structural result gives theoretical support for the wide adoption of these tools in the machine learning community.
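    For readers unfamiliar with the computational side, the following is a minimal Sinkhorn sketch for entropic optimal transport between two empirical measures; the samples and the regularization parameter eps (which plays the role of the Gaussian noise variance in the deconvolution interpretation) are hypothetical.

```python
# A minimal, self-contained Sinkhorn sketch for entropic optimal transport
# between two empirical measures; illustrative only, with hypothetical samples
# and a hypothetical regularization parameter eps.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)                  # observed (noisy) sample
y = rng.normal(loc=0.5, size=60)         # candidate sample from the model

eps = 0.5
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost matrix
K = np.exp(-C / eps)                     # Gibbs kernel
a, b = np.full(50, 1 / 50), np.full(60, 1 / 60)

u, v = np.ones(50), np.ones(60)
for _ in range(500):                     # Sinkhorn fixed-point iterations
    u = a / (K @ v)
    v = b / (K.T @ u)

plan = u[:, None] * K * v[None, :]       # entropic transport plan
cost = np.sum(plan * C)
print("entropic OT cost:", round(cost, 4))
```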

    Optimal learning with $Q$-aggregation

    We consider a general supervised learning problem with strongly convex and Lipschitz loss and study the problem of model selection aggregation. In particular, given a finite dictionary of functions (learners) together with a prior, we generalize the results obtained by Dai, Rigollet and Zhang [Ann. Statist. 40 (2012) 1878-1905] for Gaussian regression with squared loss and fixed design to this learning setup. Specifically, we prove that the $Q$-aggregation procedure outputs an estimator that satisfies optimal oracle inequalities both in expectation and with high probability. Our proof techniques somewhat depart from traditional proofs by making most of the standard arguments bear on the Laplace transform of the empirical process to be controlled. Comment: Published at http://dx.doi.org/10.1214/13-AOS1190 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
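    To convey the shape of a Q-aggregation-type criterion, the sketch below minimizes, over the simplex, a convex combination of the loss of the aggregate and the averaged losses of the dictionary elements plus a prior penalty. The constants, penalty form, and tuning parameters are simplifications chosen for illustration and do not reproduce the exact procedure of Dai, Rigollet and Zhang.

```python
# A minimal sketch of a Q-aggregation-type criterion for fixed-design regression
# with squared loss (illustrative; the exact constants, penalty and tuning in
# the cited work differ). The dictionary, prior and data are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, M = 100, 4
F = rng.normal(size=(n, M))                    # dictionary f_1, ..., f_M at the design points
y = F[:, 1] + 0.5 * rng.normal(size=n)         # noisy observations
prior = np.full(M, 1.0 / M)                    # prior weights on the dictionary
nu, beta = 0.5, 0.1

def q_criterion(theta):
    f_theta = F @ theta
    fit_mix = np.mean((y - f_theta) ** 2)                     # loss of the aggregate
    fit_avg = theta @ np.mean((y[:, None] - F) ** 2, axis=0)  # averaged losses of the f_j
    pen = (beta / n) * (theta @ np.log(1.0 / prior))          # prior penalty
    return (1 - nu) * fit_mix + nu * fit_avg + pen

cons = ({"type": "eq", "fun": lambda t: t.sum() - 1},)
res = minimize(q_criterion, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0, 1)] * M, constraints=cons)
print("Q-aggregation weights:", np.round(res.x, 3))
```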

    Optimal rates for plug-in estimators of density level sets

    In the context of density level set estimation, we study the convergence of general plug-in methods under two main assumptions on the density for a given level $\lambda$. More precisely, it is assumed that the density (i) is smooth in a neighborhood of $\lambda$ and (ii) has $\gamma$-exponent at level $\lambda$. Condition (i) ensures that the density can be estimated at a standard nonparametric rate, and condition (ii) is similar to Tsybakov's margin assumption, which is stated for the classification framework. Under these assumptions, we derive optimal rates of convergence for plug-in estimators. Explicit convergence rates are given for plug-in estimators based on kernel density estimators when the underlying measure is the Lebesgue measure. Lower bounds proving optimality of the rates in a minimax sense when the density is Hölder smooth are also provided. Comment: Published at http://dx.doi.org/10.3150/09-BEJ184 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
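    A plug-in level set estimator is conceptually simple: estimate the density, then threshold the estimate at the target level. The sketch below does this with a kernel density estimate on a two-dimensional grid; the data, bandwidth rule, level, and evaluation window are hypothetical.

```python
# A minimal sketch of a plug-in level set estimator: estimate the density with
# a kernel density estimator and threshold it at the level lambda. Data,
# bandwidth and level are hypothetical.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
sample = rng.normal(size=(2, 500))             # 2-d sample, shape (d, n) as gaussian_kde expects
lam = 0.05                                     # target level lambda

kde = gaussian_kde(sample)                     # plug-in density estimate
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])
density = kde(grid).reshape(xx.shape)

level_set = density >= lam                     # plug-in estimate of {x : p(x) >= lambda}
print("estimated level-set area:",
      level_set.mean() * 36)                   # fraction of the [-3, 3]^2 window times its area
```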

    Optimal detection of sparse principal components in high dimension

    We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this test is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our relaxation is also proved to detect sparse principal components at near optimal detection levels, and it performs well on simulated datasets. Moreover, using polynomial time reductions from theoretical computer science, we bring significant evidence that our results cannot be improved, thus revealing an inherent trade-off between statistical and computational performance. Comment: Published at http://dx.doi.org/10.1214/13-AOS1127 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
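    The k-sparse largest eigenvalue statistic underlying the test can be computed by brute force over all size-k supports, as in the sketch below; the combinatorial cost of this enumeration is exactly what motivates convex relaxations. The dimensions and null data are hypothetical, and no calibrated detection threshold is computed here.

```python
# A minimal sketch of the k-sparse largest eigenvalue statistic by brute force
# over all supports of size k (feasible only for small p and k, which is
# precisely why convex relaxations are attractive). Data are illustrative only.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, p, k = 200, 10, 3
X = rng.normal(size=(n, p))                    # null data: isotropic Gaussian
S = X.T @ X / n                                # sample covariance matrix

def sparse_eigenvalue(S, k):
    best = -np.inf
    for support in combinations(range(S.shape[0]), k):
        sub = S[np.ix_(support, support)]      # principal submatrix on the candidate support
        best = max(best, np.linalg.eigvalsh(sub)[-1])
    return best

print("k-sparse largest eigenvalue:", round(sparse_eigenvalue(S, k), 3))
```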

    Uncoupled isotonic regression via minimum Wasserstein deconvolution

    Isotonic regression is a standard problem in shape-constrained estimation where the goal is to estimate an unknown nondecreasing regression function $f$ from independent pairs $(x_i, y_i)$ where $\mathbb{E}[y_i] = f(x_i)$, $i = 1, \ldots, n$. While this problem is well understood both statistically and computationally, much less is known about its uncoupled counterpart, where one is given only the unordered sets $\{x_1, \ldots, x_n\}$ and $\{y_1, \ldots, y_n\}$. In this work, we leverage tools from optimal transport theory to derive minimax rates under weak moment conditions on $y_i$ and to give an efficient algorithm achieving optimal rates. Both upper and lower bounds employ moment-matching arguments that are also pertinent to learning mixtures of distributions and deconvolution. Comment: To appear in Information and Inference: A Journal of the IMA.
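    The noiseless intuition behind the uncoupled problem is captured by one-dimensional optimal transport: sorting both unordered samples and matching them by rank yields a nondecreasing estimate. The sketch below illustrates only this noiseless case with hypothetical data; the noisy case treated in the paper requires the minimum Wasserstein deconvolution machinery.

```python
# A minimal sketch of the noiseless intuition behind uncoupled isotonic
# regression: in one dimension the optimal-transport coupling of the two
# unordered samples is the monotone (sorted) matching, which yields a
# nondecreasing estimate of f. Data here are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = rng.uniform(0, 1, size=n)
f = lambda t: t ** 2                         # unknown nondecreasing regression function
y = rng.permutation(f(x))                    # responses observed without their pairing

x_sorted = np.sort(x)
y_sorted = np.sort(y)                        # monotone matching = 1-d optimal transport
f_hat = dict(zip(x_sorted, y_sorted))        # nondecreasing estimate on the observed design

errors = np.abs(y_sorted - f(x_sorted))
print("max error of the sorted matching:", round(errors.max(), 4))
```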