
    Efficient Density Estimation via Piecewise Polynomial Approximation

    We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let $p$ be an arbitrary distribution over an interval $I$ that is $\tau$-close (in total variation distance) to an unknown probability distribution $q$ defined by an unknown partition of $I$ into $t$ intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of those intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\epsilon)$-close (in total variation distance) to $p$. This sample complexity is essentially optimal; we show that, even for $\tau = 0$, any algorithm that learns an unknown $t$-piecewise degree-$d$ probability distribution over $I$ to accuracy $\epsilon$ must use $\Omega\!\left(\frac{t(d+1)}{\mathrm{poly}(1 + \log(d+1))} \cdot \frac{1}{\epsilon^2}\right)$ samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming. We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions, mixtures of $t$-modal distributions, mixtures of Monotone Hazard Rate distributions, mixtures of Poisson Binomial Distributions, mixtures of Gaussians, and mixtures of $k$-monotone densities. Our general technique yields computationally efficient algorithms for all of these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.
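    To make the setting concrete, here is a minimal sketch of the simplest special case: a piecewise-constant (degree $d = 0$) estimator on a fixed equal-width partition, i.e., an ordinary histogram. It only illustrates the object being learned; the paper's algorithm additionally learns the partition and fits higher-degree polynomial pieces. The function name `histogram_density_estimate` and the Beta example are illustrative choices, not part of the paper.

```python
import numpy as np

def histogram_density_estimate(samples, t, interval=(0.0, 1.0)):
    """Fit a piecewise-constant (degree-0) density on t equal-width pieces.

    Sketch only: a fixed partition and degree-0 pieces, i.e. the simplest
    instance of a t-piecewise degree-d density; not the paper's algorithm.
    """
    lo, hi = interval
    edges = np.linspace(lo, hi, t + 1)
    counts, _ = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    # Normalize so the estimate integrates to 1 over the interval.
    densities = counts / (counts.sum() * widths)
    return edges, densities

# Usage: estimate a density from 10,000 draws of a Beta(2, 5) distribution.
rng = np.random.default_rng(0)
samples = rng.beta(2, 5, size=10_000)
edges, dens = histogram_density_estimate(samples, t=20)
print(dens[:5])
```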

    Sample-Efficient Learning of Mixtures

    We consider PAC learning of probability distributions (a.k.a. density estimation), where we are given an i.i.d. sample generated from an unknown target distribution and want to output a distribution that is close to the target in total variation distance. Let $\mathcal{F}$ be an arbitrary class of probability distributions, and let $\mathcal{F}^k$ denote the class of $k$-mixtures of elements of $\mathcal{F}$. Assuming the existence of a method for learning $\mathcal{F}$ with sample complexity $m_{\mathcal{F}}(\epsilon)$, we provide a method for learning $\mathcal{F}^k$ with sample complexity $O(k \log k \cdot m_{\mathcal{F}}(\epsilon)/\epsilon^2)$. Our mixture learning algorithm has the property that, if the $\mathcal{F}$-learner is proper/agnostic, then the $\mathcal{F}^k$-learner is proper/agnostic as well. This general result enables us to improve the best known sample complexity upper bounds for a variety of important mixture classes. First, we show that the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with $\widetilde{O}(kd/\epsilon^4)$ samples, which is tight in $k$ and $d$ up to logarithmic factors. Second, we show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with sample complexity $\widetilde{O}(kd^2/\epsilon^4)$, which improves the previously known bounds of $\widetilde{O}(k^3 d^2/\epsilon^4)$ and $\widetilde{O}(k^4 d^4/\epsilon^2)$ in their dependence on $k$ and $d$. Finally, we show that the class of mixtures of $k$ log-concave distributions over $\mathbb{R}^d$ is PAC-learnable using $\widetilde{O}(d^{(d+5)/2}\epsilon^{-(d+9)/2} k)$ samples. Comment: a bug from the previous version, which appeared in the AAAI 2018 proceedings, is fixed. 18 pages.
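    A common ingredient in reductions of this flavor is a minimum-distance (Scheffé-style) selection step: given a small list of candidate densities, pick the one whose mass on each pairwise Scheffé set best matches the empirical mass. The sketch below shows that selection step in one dimension; it is a generic illustration under assumed interfaces (`scheffe_select`, `gauss`, a numerical-integration grid), not the exact procedure of this paper.

```python
import numpy as np

def scheffe_select(candidates, samples, grid):
    """Scheffé-style tournament: return the index of the winning candidate density.

    candidates: list of density functions on the real line (sketch assumption).
    samples: held-out draws from the unknown target distribution.
    grid: fine 1-D grid used for numerical integration of the candidates.
    """
    n = len(candidates)
    wins = np.zeros(n, dtype=int)
    dx = grid[1] - grid[0]
    vals = [f(grid) for f in candidates]            # candidate densities on the grid
    for i in range(n):
        for j in range(i + 1, n):
            A = vals[i] > vals[j]                   # Scheffé set A = {x : f_i(x) > f_j(x)}
            mass_i = vals[i][A].sum() * dx          # approx. integral of f_i over A
            mass_j = vals[j][A].sum() * dx          # approx. integral of f_j over A
            emp = np.mean(candidates[i](samples) > candidates[j](samples))  # empirical mass of A
            # The candidate whose mass on A is closer to the empirical mass wins the pair.
            if abs(mass_i - emp) <= abs(mass_j - emp):
                wins[i] += 1
            else:
                wins[j] += 1
    return int(np.argmax(wins))

# Usage: two candidate Gaussian densities, with samples drawn from the first one.
def gauss(mu, sigma):
    return lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=5_000)
grid = np.linspace(-10.0, 10.0, 4001)
print(scheffe_select([gauss(0.0, 1.0), gauss(2.0, 1.0)], samples, grid))  # expected output: 0
```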