
    Improved Bounds for Universal One-Bit Compressive Sensing

    Unlike compressive sensing, where the measurement outputs are assumed to be real-valued and have infinite precision, in "one-bit compressive sensing" measurements are quantized to one bit: their signs. In this work, we show how to recover the support of sparse high-dimensional vectors in the one-bit compressive sensing framework with an asymptotically near-optimal number of measurements. We also improve the bounds on the number of measurements for approximately recovering vectors from one-bit compressive sensing measurements. Our results are universal, namely the same measurement scheme works simultaneously for all sparse vectors. Our proof of optimality for support recovery is obtained by showing an equivalence between the task of support recovery using one-bit compressive sensing and a well-studied combinatorial object known as union-free families. Comment: 14 pages.
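
    A minimal illustration of the measurement model the abstract refers to: only the signs of linear measurements are observed, and a simple sign-correlation score is used to guess the support. The Gaussian measurement matrix, the correlation estimator, and all dimensions below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Illustrative one-bit compressive sensing setup (not the paper's scheme):
# only the signs of the linear measurements <a_i, x> are retained.
rng = np.random.default_rng(0)
n, k, m = 1000, 5, 300           # ambient dimension, sparsity, number of measurements

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n))  # generic Gaussian measurement matrix (an assumption)
y = np.sign(A @ x)               # one-bit measurements: only the signs are kept

# Simple correlation-based guess: coordinates with large |A^T y| are likely
# to belong to the support. This is a standard heuristic, used here only to
# illustrate the measurement model, not the paper's near-optimal scheme.
scores = np.abs(A.T @ y)
estimated_support = np.argsort(scores)[-k:]

print(sorted(support), sorted(estimated_support))
```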

    A Survey of Quantum Learning Theory

    This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples. Comment: 26 pages LaTeX. v2: many small changes to improve the presentation. This version will appear as Complexity Theory Column in SIGACT News in June 2017. v3: fixed a small ambiguity in the definition of gamma(C) and updated a reference.

    Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations

    Consider the following heuristic for building a decision tree for a function $f : \{0,1\}^n \to \{\pm 1\}$. Place the most influential variable $x_i$ of $f$ at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively; terminate once the tree is an $\varepsilon$-approximation of $f$. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds: $\circ$ Upper bound: For every $f$ with decision tree size $s$ and every $\varepsilon \in (0,\frac{1}{2})$, this heuristic builds a decision tree of size at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$. $\circ$ Lower bound: For every $\varepsilon \in (0,\frac{1}{2})$ and $s \le 2^{\tilde{O}(\sqrt{n})}$, there is an $f$ with decision tree size $s$ such that this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$. We also obtain upper and lower bounds for monotone functions: $s^{O(\sqrt{\log s}/\varepsilon)}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$ respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009). Our upper bounds yield new algorithms for properly learning decision trees under the uniform distribution. We show that these algorithms---which are motivated by widely employed and empirically successful top-down decision tree learning heuristics such as ID3, C4.5, and CART---achieve provable guarantees that compare favorably with those of the current fastest algorithm (Ehrenfeucht and Haussler, 1989). Our lower bounds shed new light on the limitations of these heuristics. Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend it to give the first uniform-distribution proper learning algorithm that achieves polynomial sample and memory complexity, while matching its state-of-the-art quasipolynomial runtime.
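
    A compact sketch of the top-down heuristic the abstract analyzes, with influence estimated empirically from uniform samples rather than computed exactly; the sample-based influence estimator, the stopping rule, and the representation of f as a Python callable are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_influence(f, n, i, restriction, samples=2000):
    """Empirical influence of variable i: fraction of uniformly random inputs
    (consistent with the current restriction) whose label flips with bit i."""
    x = rng.integers(0, 2, size=(samples, n))
    for j, b in restriction.items():
        x[:, j] = b
    x_flipped = x.copy()
    x_flipped[:, i] ^= 1
    return np.mean([f(a) != f(b) for a, b in zip(x, x_flipped)])

def build_tree(f, n, eps, restriction=None, samples=2000):
    """Top-down heuristic: split on the (empirically) most influential free
    variable, recurse on both restrictions, and stop once the restricted
    function is eps-close to a constant on the sample."""
    restriction = restriction or {}
    x = rng.integers(0, 2, size=(samples, n))
    for j, b in restriction.items():
        x[:, j] = b
    labels = np.array([f(a) for a in x])
    majority = 1 if labels.mean() >= 0 else -1
    free = [i for i in range(n) if i not in restriction]
    if np.mean(labels != majority) <= eps or not free:
        return majority                                   # constant leaf
    best = max(free, key=lambda i: estimate_influence(f, n, i, restriction, samples))
    return (best,
            build_tree(f, n, eps, {**restriction, best: 0}, samples),
            build_tree(f, n, eps, {**restriction, best: 1}, samples))

# Toy run: f depends only on x_0 XOR x_1, so the heuristic builds a depth-2 tree.
f = lambda x: 1 if x[0] ^ x[1] else -1
print(build_tree(f, n=4, eps=0.05))
```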

    Quantum network communication -- the butterfly and beyond

    We study the k-pair communication problem for quantum information in networks of quantum channels. We consider the asymptotic rates of high-fidelity quantum communication between specific sender-receiver pairs. Four scenarios of classical communication assistance (none, forward, backward, and two-way) are considered. (i) We obtain outer and inner bounds on the achievable rate regions in the most general directed networks. (ii) For two particular networks (including the butterfly network), routing is proved optimal, and the free assisting classical communication can at best be used to modify the directions of quantum channels in the network. Consequently, the achievable rate regions are given by counting edge-avoiding paths, and precise achievable rate regions in all four assisting scenarios can be obtained. (iii) Optimality of routing can also be proved for classes of networks. The first class consists of directed unassisted networks in which (1) the receivers are information sinks, (2) the maximum distance from senders to receivers is small, and (3) a certain type of 4-cycle is absent, but without further constraints (such as on the number of communicating and intermediate parties). The second class consists of arbitrary backward-assisted networks with 2 sender-receiver pairs. (iv) Beyond the k-pair communication problem, observations are made on quantum multicasting and a static version of network communication related to the entanglement of assistance. Comment: 15 pages, 17 figures. Final version.
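
    To make the "counting edge-avoiding paths" picture concrete, here is a toy model of a crossing butterfly network in which each sender-receiver pair has exactly one unit-capacity path, and both paths need the shared bottleneck edge. The graph layout, node names, and the use of networkx max-flow as a stand-in for path counting are illustrative assumptions, not the paper's formulation.

```python
import networkx as nx

# Toy crossing butterfly: s1 wants to reach t2 and s2 wants to reach t1,
# and the single edge n1 -> n2 is the shared bottleneck.
G = nx.DiGraph()
G.add_edges_from([
    ("s1", "n1"), ("s2", "n1"),   # both senders feed the bottleneck
    ("n1", "n2"),                  # bottleneck edge
    ("n2", "t1"), ("n2", "t2"),   # bottleneck fans out to both receivers
    ("s1", "t1"), ("s2", "t2"),   # direct side edges
])
for u, v in G.edges:
    G[u][v]["capacity"] = 1        # one channel use per edge

# Each pair in isolation can route one unit (one path each), but both paths
# need n1 -> n2, so they cannot be achieved jointly by routing alone -- the
# kind of tension the abstract's rate-region analysis makes precise.
print(nx.maximum_flow_value(G, "s1", "t2"))   # 1
print(nx.maximum_flow_value(G, "s2", "t1"))   # 1
```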

    Deriving confinement via RG decimations

    We present the general framework and building blocks of a recent derivation of the fact that the SU(2) LGT is in a confining phase for all values of the coupling $0 < \beta < \infty$, for space-time dimension $d \leq 4$. The method employs approximate but explicitly computable RG decimations that are shown to constrain the exact partition function and order parameters from above and below, and to flow from the weak to the strong coupling regime without encountering a fixed point. Comment: 7 pages, presented at the XXV International Symposium on Lattice Field Theory, July 30 - August 4 2007, Regensburg, Germany.

    Approximate F_2-Sketching of Valuation Functions

    We study the problem of constructing a linear sketch of minimum dimension that allows approximation of a given real-valued function f : F_2^n -> R with small expected squared error. We develop a general theory of linear sketching for such functions, through which we analyze the sketch dimension for the most commonly studied types of valuation functions: additive, budget-additive, coverage, alpha-Lipschitz submodular, and matroid rank functions. This gives a characterization of how many bits of information have to be stored about the input x so that one can compute f under additive updates to its coordinates. Our results are tight in most cases, and we also give extensions to the distributional version of the problem, where the input x in F_2^n is generated uniformly at random. Using known connections with dynamic streaming algorithms, both upper and lower bounds on dimension obtained in our work extend to the space complexity of algorithms evaluating f(x) under long sequences of additive updates to the input x presented as a stream. Similar results hold for simultaneous communication in a distributed setting.
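
    A minimal sketch of the object the abstract studies: a linear sketch over F_2, i.e. a small set of parities Mx mod 2 stored in place of x, maintained under single-coordinate (additive) updates as in the streaming scenario mentioned above. The random choice of M, the dimensions, and the update routine are illustrative assumptions; the paper's contribution is characterizing how small such a sketch can be while still allowing approximation of f(x).

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 64, 8                         # input length and sketch dimension (illustrative)

M = rng.integers(0, 2, size=(d, n))  # a random linear sketch over F_2 (an assumption)
x = rng.integers(0, 2, size=n)
sketch = (M @ x) % 2                 # the d bits that are actually stored

def flip(sketch, i):
    """Additive update: coordinate i of x flips, and the sketch is updated
    from the stored bits alone, without re-reading x."""
    return (sketch + M[:, i]) % 2

i = 17
x[i] ^= 1
sketch = flip(sketch, i)
assert np.array_equal(sketch, (M @ x) % 2)   # sketch stays consistent with x
```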