
    Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform

    For every fixed constant $\alpha > 0$, we design an algorithm for computing the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in \mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in \mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} - H_k(\hat{x})\|_1$, for an absolute constant $c > 0$, where $\hat{x}$ is the transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to $x$ (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$). Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to $\tilde{O}(k \log^3 N)$.
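
    To make the objects in the guarantee concrete, here is a minimal NumPy sketch of the benchmark itself: it computes the full transform naively in O(N log N) time, forms the best k-sparse approximation H_k, and prints the l1 tail that any valid output is measured against. This is not the paper's sublinear-time algorithm (which never computes the full transform); the function names are illustrative.

        import numpy as np

        def walsh_hadamard(x):
            """Naive fast Walsh-Hadamard transform; len(x) must be a power of two."""
            y = np.asarray(x, dtype=float).copy()
            h = 1
            while h < len(y):
                for i in range(0, len(y), 2 * h):
                    a, b = y[i:i + h].copy(), y[i + h:i + 2 * h].copy()
                    y[i:i + h], y[i + h:i + 2 * h] = a + b, a - b
                h *= 2
            return y

        def H_k(v, k):
            """Best k-sparse approximation: keep the k largest-magnitude entries, zero the rest."""
            out = np.zeros_like(v)
            keep = np.argsort(np.abs(v))[-k:]
            out[keep] = v[keep]
            return out

        rng = np.random.default_rng(1)
        N, k = 1024, 8
        x = rng.normal(size=N)
        xhat = walsh_hadamard(x)            # full transform; the paper's algorithm avoids computing this
        tail = np.abs(xhat - H_k(xhat, k)).sum()
        # Any k-sparse output xtilde meeting the guarantee satisfies
        # ||xtilde - xhat||_1 <= c * tail for an absolute constant c.
        print(f"l1 tail ||xhat - H_k(xhat)||_1 = {tail:.3f}")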

    The geometry of quantum learning

    Concept learning provides a natural framework in which to place the problems solved by the quantum algorithms of Bernstein-Vazirani and Grover. By combining the tools used in these algorithms -- quantum fast transforms and amplitude amplification -- with a tool that is novel in this context, a solution method for geometrical optimization problems, we derive a general technique for quantum concept learning. We name this technique "Amplified Impatient Learning" and apply it to construct quantum algorithms solving two new problems, BATTLESHIP and MAJORITY, more efficiently than is possible classically.
    Comment: 20 pages, plain TeX with amssym.tex; related work at http://www.math.uga.edu/~hunziker/ and http://math.ucsd.edu/~dmeyer
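
    For reference, the sketch below is a small NumPy statevector simulation of the Bernstein-Vazirani routine, one of the building blocks named above: a Hadamard transform, a single phase-oracle query, and a second Hadamard transform recover the hidden string. It illustrates that building block only, not the paper's Amplified Impatient Learning technique; the names are illustrative.

        import numpy as np

        def bernstein_vazirani(secret):
            """Statevector simulation: recover s from one query to the phase
            oracle |x> -> (-1)^(s.x) |x>, using two n-qubit Hadamard transforms."""
            n, dim = len(secret), 2 ** len(secret)
            s_int = int(secret, 2)
            # n-fold Hadamard transform of |0...0> gives the uniform superposition.
            state = np.full(dim, 1 / np.sqrt(dim))
            # Phase oracle: flip the sign where the inner product s.x is odd.
            for x in range(dim):
                if bin(s_int & x).count("1") % 2:
                    state[x] *= -1
            # Second Hadamard transform via the fast Walsh-Hadamard butterfly.
            h = 1
            while h < dim:
                for i in range(0, dim, 2 * h):
                    a, b = state[i:i + h].copy(), state[i + h:i + 2 * h].copy()
                    state[i:i + h] = (a + b) / np.sqrt(2)
                    state[i + h:i + 2 * h] = (a - b) / np.sqrt(2)
                h *= 2
            # All amplitude now sits on the basis state |s>.
            return format(int(np.argmax(np.abs(state))), f"0{n}b")

        print(bernstein_vazirani("10110"))   # prints "10110"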

    A Generalised Hadamard Transform

    A Generalised Hadamard Transform for multi-phase or multilevel signals is introduced, which includes the Fourier, Generalised Discrete Fourier, Walsh-Hadamard and Reverse Jacket Transforms. The jacket construction is formalised and shown to admit a tensor product decomposition. Primary matrices under this decomposition are identified. New examples of primary jacket matrices of orders 8 and 12 are presented.
    Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
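
    As a small illustration of the tensor product decomposition mentioned above (an assumed toy example, not the paper's jacket construction or its order-8 and order-12 primary matrices), the sketch below builds the order-8 Walsh-Hadamard matrix as a Kronecker power of the 2x2 primary matrix and checks the generalised-Hadamard property A A* = N I, which the DFT matrix also satisfies.

        import numpy as np
        from functools import reduce

        H2 = np.array([[1, 1], [1, -1]])          # 2x2 primary Hadamard matrix

        def kron_power(A, n):
            """n-fold Kronecker (tensor) product of A with itself."""
            return reduce(np.kron, [A] * n)

        H8 = kron_power(H2, 3)                    # order-8 Walsh-Hadamard matrix
        F4 = np.exp(-2j * np.pi * np.outer(np.arange(4), np.arange(4)) / 4)  # 4-point DFT matrix

        # Both have unit-modulus entries and satisfy A @ A.conj().T == N * I,
        # the property shared by the transforms in the generalised family.
        for A in (H8, F4):
            N = A.shape[0]
            assert np.allclose(A @ A.conj().T, N * np.eye(N))
        print("order-8 WHT and 4-point DFT both satisfy A A* = N I")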