
    A Size-Free CLT for Poisson Multinomials and its Applications

    An $(n,k)$-Poisson Multinomial Distribution (PMD) is the distribution of the sum of $n$ independent random vectors supported on the set ${\cal B}_k=\{e_1,\ldots,e_k\}$ of standard basis vectors in $\mathbb{R}^k$. We show that any $(n,k)$-PMD is ${\rm poly}\left(k/\sigma\right)$-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on $n$ from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art and qualitatively matching the running-time dependence on $n$ and $1/\varepsilon$ of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of $(n,k)$-PMDs that are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover-size lower bound is based on an algebraic-geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs, we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$ time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
    Comment: To appear in STOC 2016
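    To make the definition concrete, here is a minimal sketch that samples an $(n,k)$-PMD by summing $n$ independent basis-vector draws and compares it to an integer-rounded Gaussian with the same first two moments. The sampler, the rounding-based discretization, and the crude empirical TV estimate are all illustrative assumptions; they are not the paper's (more careful) discretization or its analysis.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def sample_pmd(probs, size):
    """Sample an (n, k)-PMD: row i of `probs` gives the categorical
    distribution of the i-th random vector over the k basis vectors."""
    n, k = probs.shape
    out = np.zeros((size, k), dtype=int)
    for i in range(n):
        draws = rng.choice(k, size=size, p=probs[i])
        out[np.arange(size), draws] += 1
    return out

n, k, N = 100, 3, 50_000
probs = rng.dirichlet(np.ones(k), size=n)   # a random (n, k)-PMD instance
pmd = sample_pmd(probs, N)

# Gaussian with matching first two moments, rounded to the integer lattice
# (one naive discretization).  Cov(X_i) = diag(p_i) - p_i p_i^T, summed over i.
mean = probs.sum(axis=0)
cov = np.diag(mean) - probs.T @ probs
gauss = np.rint(rng.multivariate_normal(
    mean, cov, size=N, check_valid="ignore")).astype(int)  # cov is singular

def empirical_tv(a, b):
    """Crude (upward-biased) empirical total variation distance."""
    ca, cb = Counter(map(tuple, a)), Counter(map(tuple, b))
    return 0.5 * sum(abs(ca[x] / len(a) - cb[x] / len(b))
                     for x in ca.keys() | cb.keys())

print(f"empirical TV ~ {empirical_tv(pmd, gauss):.3f}")
```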

    Probabilistic Framework for Sensor Management

    A probabilistic sensor management framework is introduced that maximizes the utility of sensor systems with many different sensing modalities by dynamically configuring the sensor system in the most beneficial way. For this purpose, techniques from stochastic control and Bayesian estimation are combined, so that both the long-term effects of possible sensor configurations and the stochastic uncertainties resulting from noisy measurements can be incorporated into sensor management decisions. A toy sketch of this idea follows.
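    The sketch below illustrates the flavor of such a framework with a Kalman filter and one-step (myopic) sensor selection: at each step, the sensor whose predicted posterior covariance has the smallest trace is configured. The dynamics model, the two hypothetical sensing modalities, and the myopic horizon are all assumptions for illustration; the framework described above additionally accounts for long-term effects.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # assumed constant-velocity dynamics
Q = 0.01 * np.eye(2)              # process noise covariance

# Two hypothetical sensing modalities, each given as (H, R).
sensors = {
    "position": (np.array([[1.0, 0.0]]), np.array([[0.5]])),
    "velocity": (np.array([[0.0, 1.0]]), np.array([[0.1]])),
}

def predicted_posterior_cov(P, H, R):
    """One Kalman predict/update step for the covariance only.  The
    posterior covariance does not depend on the measurement value, so
    a sensor configuration can be scored before measuring."""
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return (np.eye(len(P)) - K @ H) @ P_pred

P = np.eye(2)                     # initial state uncertainty
for t in range(5):
    # Myopic decision: configure the sensor minimizing expected uncertainty.
    best = min(sensors, key=lambda s: np.trace(
        predicted_posterior_cov(P, *sensors[s])))
    P = predicted_posterior_cov(P, *sensors[best])
    print(f"t={t}: use {best} sensor, trace(P)={np.trace(P):.3f}")
```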

    Robustly Learning Mixtures of $k$ Arbitrary Gaussians

    We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions. This resolves the main open problem in several previous works on algorithmic robust statistics, which addressed the special cases of robustly estimating (a) a single Gaussian, (b) a mixture of TV-distance-separated Gaussians, and (c) a uniform mixture of two Gaussians. Our main tools are an efficient \emph{partial clustering} algorithm that relies on the sum-of-squares method, and a novel \emph{tensor decomposition} algorithm that allows errors in both the Frobenius-norm and low-rank terms.
    Comment: This version extends the previous one to yield (1) a robust proper learning algorithm with ${\rm poly}(\varepsilon)$ error and (2) an information-theoretic argument proving that the same algorithms in fact also yield parameter recovery guarantees. The updates are included in Sections 7, 8, and 9, and the main result from the previous version (Thm 1.4) is presented and proved in Section
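    To illustrate the corruption model in the simplest special case, (a) above, the sketch below plants an $\varepsilon$-fraction of adversarial points among samples from a single Gaussian and contrasts the sample mean with the coordinate-wise median, a classical robustness baseline. This baseline is not the paper's sum-of-squares algorithm; the data, corruption pattern, and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n, eps = 5, 10_000, 0.1        # dimension, samples, corruption fraction
true_mean = np.zeros(d)
samples = rng.normal(true_mean, 1.0, size=(n, d))

# Adversary replaces an eps-fraction of the samples with far-away points.
samples[: int(eps * n)] = 100.0

for name, estimator in [("mean", np.mean), ("median", np.median)]:
    err = np.linalg.norm(estimator(samples, axis=0) - true_mean)
    print(f"{name:6s}: ||estimate - true mean|| = {err:.2f}")
# The mean is dragged roughly eps * 100 per coordinate off target,
# while the coordinate-wise median is barely affected.
```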