6,950 research outputs found

    A Size-Free CLT for Poisson Multinomials and its Applications

    An (n,k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We show that any (n,k)-PMD is poly(k/σ)-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on n from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running-time dependence on n and 1/ε of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of (n,k)-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover size lower bound is based on an algebraic-geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs, we show that these distributions can be learned from O_k(1/ε^2) samples in poly_k(1/ε) time, removing the quasi-polynomial dependence of the running time on 1/ε from the algorithm of Daskalakis, Kamath, and Tzamos.
    Comment: To appear in STOC 201
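A minimal sketch (not from the paper; all names are illustrative) of what an (n,k)-PMD is: each summand is a standard basis vector of R^k drawn from its own categorical distribution, so the sum has mean Σ_i p_i and covariance Σ_i (diag(p_i) - p_i p_i^T), the two moments the discretized Gaussian matches.

```python
import random

def pmd_sample(prob_vectors, rng):
    """Draw one sample from the (n, k)-PMD defined by n probability
    vectors: the sum of n independent basis-vector-valued draws."""
    k = len(prob_vectors[0])
    total = [0] * k
    for p in prob_vectors:
        # choose the standard basis vector e_j with probability p[j]
        j = rng.choices(range(k), weights=p)[0]
        total[j] += 1
    return total

def pmd_moments(prob_vectors):
    """Exact mean and covariance: sums of the per-summand categorical
    moments (mean p_i, covariance diag(p_i) - p_i p_i^T)."""
    k = len(prob_vectors[0])
    mean = [sum(p[j] for p in prob_vectors) for j in range(k)]
    cov = [[sum((p[a] * (1.0 - p[a]) if a == b else -p[a] * p[b])
                for p in prob_vectors) for b in range(k)] for a in range(k)]
    return mean, cov

rng = random.Random(0)
n, k = 40, 3
# n independent, non-identical probability vectors over the k basis vectors
ps = []
for _ in range(n):
    w = [rng.random() + 0.1 for _ in range(k)]
    s = sum(w)
    ps.append([x / s for x in w])
sample = pmd_sample(ps, rng)
mean, cov = pmd_moments(ps)
```

Since every summand contributes exactly one basis vector, any sample's coordinates sum to n, and each row of the covariance sums to zero.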

    Composite Cyclotomic Fourier Transforms with Reduced Complexities

    Discrete Fourier transforms (DFTs) over finite fields have widespread applications in digital communication and storage systems. Hence, reducing the computational complexity of DFTs is of great significance. Recently proposed cyclotomic fast Fourier transforms (CFFTs) are promising due to their low multiplicative complexities. Unfortunately, there are two issues with CFFTs: (1) they rely on efficient short cyclic convolution algorithms, which have not yet been investigated thoroughly, and (2) they have very high additive complexities when directly implemented. In this paper, we address both issues. One of the main contributions of this paper is efficient bilinear 11-point cyclic convolution algorithms, which allow us to construct CFFTs over GF(2^{11}). The other main contribution is that we propose composite cyclotomic Fourier transforms (CCFTs). In comparison to previously proposed fast Fourier transforms, our CCFTs achieve lower overall complexities for moderate to long lengths, and the improvement grows significantly with the length. To the best of our knowledge, our 2047-point and 4095-point CCFTs are also the first efficient DFTs of such lengths. Finally, our CCFTs are advantageous for hardware implementations due to their regular and modular structure.
    Comment: submitted to IEEE Trans. on Signal Processing
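For illustration, the cyclic convolution primitive that CFFTs rely on can be sketched over the integers (the paper's bilinear algorithms work over GF(2^11); this toy version only shows what an n-point cyclic convolution computes):

```python
def cyclic_convolve(a, b):
    """n-point cyclic convolution: c[k] = sum_i a[i] * b[(k - i) mod n].
    Equivalent to multiplying the polynomials a(x) and b(x) modulo x^n - 1."""
    n = len(a)
    assert len(b) == n
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]
```

Convolving with the shifted impulse [0, 1, 0] cyclically rotates a length-3 sequence, which is a quick sanity check on the index arithmetic.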

    Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

    A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
    Comment: 28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20-22, 1994. Minor revisions made January, 199
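The quantum computer is only needed for the order-finding subroutine; the classical reduction from factoring to order finding can be sketched as follows, with brute-force period finding standing in for the quantum step (function names are illustrative):

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N, found by brute force.
    This is the step a quantum computer performs efficiently."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing: if the order r of a mod N is even and
    a^(r/2) is not -1 mod N, then gcd(a^(r/2) +/- 1, N) is a nontrivial
    factor of N. Returns None when the chosen base a fails."""
    g = gcd(a, N)
    if g != 1:
        return g  # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2:
        return None  # odd order: pick another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: pick another base a
    f = gcd(y - 1, N)
    return f if 1 < f < N else gcd(y + 1, N)
```

For example, with N = 15 and base a = 7 the order is 4, so gcd(7^2 - 1, 15) = 3 splits 15.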

    Fractional Focusing and the Chirp Scaling Algorithm With Real Synthetic Aperture Radar Data

    For synthetic aperture radar (SAR) image-formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target-focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range-Doppler and range-migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image-formation processing is the fractional Fourier transform (FRFT). This transform has recently been introduced to the signal processing community, and it has shown many promising applications in SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared against a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability, thereby increasing the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
    Dissertation/Thesis. M.S. Electrical Engineering, 201
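The peak-to-sidelobe ratio mentioned above measures how sharply a matched filter compresses a linear-FM chirp into a point response; a toy sketch of that compression (the chirp rate and length are assumed values, unrelated to RADARSAT-1 parameters):

```python
import cmath

N = 64
K = 0.01  # assumed toy chirp rate
# linear-FM chirp: phase grows quadratically with time
chirp = [cmath.exp(1j * cmath.pi * K * t * t) for t in range(N)]

def matched_filter(sig, ref):
    """Magnitude of the cyclic cross-correlation of sig with ref at
    every lag; for a matched reference this compresses the chirp."""
    n = len(sig)
    return [abs(sum(sig[t] * ref[(t - lag) % n].conjugate() for t in range(n)))
            for lag in range(n)]

out = matched_filter(chirp, chirp)
peak_lag = max(range(N), key=lambda l: out[l])
psr = out[0] / max(out[1:])  # peak-to-sidelobe ratio of the response
```

The compressed response peaks at zero lag with magnitude N, and the focusing quality is read off as the ratio of that peak to the largest sidelobe.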

    A new weighted NMF algorithm for missing data interpolation and its application to speech enhancement

    In this paper we present a novel weighted NMF (WNMF) algorithm for interpolating missing data. The proposed approach has a computational cost equivalent to that of standard NMF and, additionally, has the flexibility to control the degree of interpolation in the missing-data regions. Existing WNMF methods do not offer this capability and thereby tend to overestimate the values in the masked regions. By constraining the estimates of the missing-data regions, the proposed approach allows for a better trade-off in the interpolation. We further demonstrate the applicability of WNMF and missing-data estimation to the problem of speech enhancement. In this preliminary work, we consider the improvement obtainable by applying the proposed method to ideal binary mask-based gain functions. The instrumental quality metrics (PESQ and SNR) clearly indicate the added benefit of the missing-data interpolation compared to the output of the ideal binary mask. This preliminary work opens up novel possibilities not only in the field of speech enhancement but also, more generally, in the field of missing-data interpolation using NMF.
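The paper's exact update rules are not given here; a generic weighted-NMF sketch with a binary mask M (masked entries excluded from the fit and later interpolated by W @ H), using standard Lee-Seung-style multiplicative updates, might look like:

```python
import numpy as np

def weighted_nmf(V, M, rank, iters=200, seed=0):
    """Weighted NMF: approximately minimize ||M * (V - W @ H)||_F^2
    with W, H >= 0, using mask-weighted multiplicative updates.
    Entries where M == 0 do not influence the fit."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ (M * V)) / (W.T @ (M * WH) + eps)
        WH = W @ H
        W *= ((M * V) @ H.T) / ((M * WH) @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 5))  # exactly rank-2, nonnegative
M = np.ones_like(V)
M[0, 0] = 0.0  # pretend these entries are missing
M[3, 2] = 0.0
W, H = weighted_nmf(V, M, rank=2)
```

After fitting, the masked entries are read off from the reconstruction W @ H; the degree-of-interpolation control described in the abstract would constrain those entries further.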

    Studies in Signal Processing Techniques for Speech Enhancement: A comparative study

    Speech enhancement is essential for suppressing background noise, increasing speech intelligibility, and reducing listening fatigue. Many speech enhancement algorithms exist, ranging from simple methods like spectral subtraction to complex Bayesian magnitude estimators based on the Minimum Mean Square Error (MMSE) criterion and its variants. Research is ongoing, and new algorithms continue to emerge for enhancing speech recorded in noisy environments such as industrial plants, vehicles, and aircraft cockpits. In the aviation industry, speech enhancement plays a vital role in recovering crucial information from a pilot's conversation after an incident or accident by suppressing engine and other cockpit-instrument noise. This work proposes a new approach to speech enhancement that makes use of the harmonic wavelet transform together with Bayesian estimators. The performance indicators, SNR and listening tests, confirm that the newly modified algorithms using the harmonic wavelet transform indeed show better results than currently existing methods. Furthermore, the harmonic wavelet transform is computationally efficient and simple to implement, owing to its inbuilt decimation-interpolation operations, compared to the filter-bank approach to realizing sub-bands.
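As a point of reference, the spectral subtraction baseline mentioned above can be sketched in a few lines: subtract a noise-magnitude estimate from each frequency bin's magnitude, floor the result at zero, and keep the noisy phase (a toy single-frame version, not the harmonic-wavelet method proposed in this work):

```python
import cmath

def dft(x):
    """Naive DFT, adequate for a short illustrative frame."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def spectral_subtract(noisy_frame, noise_mag):
    """Magnitude spectral subtraction: shrink each bin's magnitude by
    the noise estimate (floored at 0) while keeping the noisy phase.
    Returns the enhanced spectrum; an inverse DFT would give the
    enhanced time-domain frame."""
    Y = dft(noisy_frame)
    out = []
    for Yk, Nk in zip(Y, noise_mag):
        mag = max(abs(Yk) - Nk, 0.0)
        out.append(cmath.rect(mag, cmath.phase(Yk)))
    return out

noisy = [1.0, 0.5, -0.3, 0.2]           # assumed toy frame
enhanced = spectral_subtract(noisy, [0.1] * 4)  # assumed flat noise floor
```

Because the gain per bin is max(1 - N_k/|Y_k|, 0), the enhanced magnitudes can never exceed the noisy ones, which is the over-subtraction/musical-noise trade-off that MMSE estimators improve on.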