
    Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework

    The support recovery problem consists of determining a sparse subset of a set of variables that is relevant in generating a set of observations, and arises in a diverse range of settings such as compressive sensing, subset selection in regression, and group testing. In this paper, we take a unified approach to support recovery problems, considering general probabilistic models relating a sparse data vector to an observation vector. We study the information-theoretic limits of both exact and partial support recovery, taking a novel approach motivated by thresholding techniques in channel coding. We provide general achievability and converse bounds characterizing the trade-off between the error probability and the number of measurements, and we specialize these to the linear, 1-bit, and group testing models. In several cases, our bounds not only provide matching scaling laws in the necessary and sufficient number of measurements, but also sharp thresholds with matching constant factors. Our approach has several advantages over previous approaches: for the achievability part, we obtain sharp thresholds under broader scalings of the sparsity level and other parameters (e.g., signal-to-noise ratio) compared to several previous works, and for the converse part, we not only provide conditions under which the error probability fails to vanish, but also conditions under which it tends to one.
    Comment: Accepted to IEEE Transactions on Information Theory; presented in part at ISIT 2015 and SODA 201
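    As an illustration of the kind of probabilistic observation model the paper unifies, the sketch below generates measurements under the noisy linear and 1-bit models for a sparse data vector and applies a simple correlation-based support estimate. It is a minimal example of the problem setup only, not the paper's achievability scheme; the dimensions, noise level, and estimator are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        p, k, n = 100, 5, 40                          # ambient dimension, sparsity, measurements
        beta = np.zeros(p)
        support = rng.choice(p, size=k, replace=False)
        beta[support] = rng.normal(size=k)            # sparse data vector

        X = rng.normal(size=(n, p)) / np.sqrt(n)      # measurement matrix
        y_linear = X @ beta + 0.1 * rng.normal(size=n)             # linear model
        y_1bit = np.sign(X @ beta + 0.1 * rng.normal(size=n))      # 1-bit model

        # Support recovery: estimate which k coordinates of beta are nonzero.
        # A simple (non-optimal) estimate: the k largest correlations with y.
        est_support = np.argsort(np.abs(X.T @ y_linear))[-k:]
        print(sorted(support), sorted(est_support))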

    Quantum Cryptography Beyond Quantum Key Distribution

    Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications such as quantum money, randomness generation, secure two- and multi-party computation, and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries, including the impossibility of quantum bit commitment, the difficulty of quantum rewinding, and the definition of quantum security models for classical primitives. In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD.
    Comment: 45 pages, over 245 reference

    Strictly contractive quantum channels and physically realizable quantum computers

    We study the robustness of quantum computers under the influence of errors modelled by strictly contractive channels. A channel $T$ is defined to be strictly contractive if, for any pair of density operators $\rho, \sigma$ in its domain, $\| T\rho - T\sigma \|_1 \le k \| \rho - \sigma \|_1$ for some $0 \le k < 1$ (here $\| \cdot \|_1$ denotes the trace norm). In other words, strictly contractive channels render the states of the computer less distinguishable in the sense of quantum detection theory. Starting from the premise that all experimental procedures can be carried out with finite precision, we argue that there exists a physically meaningful connection between strictly contractive channels and errors in physically realizable quantum computers. We show that, in the absence of error correction, the sensitivity of quantum memories and computers to strictly contractive errors grows exponentially with storage time and computation time respectively, and depends only on the constant $k$ and the measurement precision. We prove that strict contractivity rules out the possibility of perfect error correction, and give an argument that approximate error correction, which covers previous work on fault-tolerant quantum computation as a special case, is possible.
    Comment: 14 pages; revtex, amsfonts, amssymb; made some changes (recommended by Phys. Rev. A), updated the reference
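    A standard worked example, not taken from the abstract itself, is the depolarizing channel on a $d$-dimensional system, which satisfies the definition with $k = 1 - p$ whenever the error rate $p$ is nonzero:

        $$ T\rho = (1-p)\,\rho + p\,\frac{I}{d}, \qquad
           T\rho - T\sigma = (1-p)(\rho - \sigma), \qquad
           \| T\rho - T\sigma \|_1 = (1-p)\,\| \rho - \sigma \|_1 . $$

    Since $0 \le 1 - p < 1$ for $p \in (0, 1]$, the channel is strictly contractive, and the contraction constant $k$ plays exactly the role of the noise parameter in the sensitivity bounds described above.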

    Broadcasting on Random Directed Acyclic Graphs

    We study a generalization of the well-known model of broadcasting on trees. Consider a directed acyclic graph (DAG) with a unique source vertex $X$, and suppose all other vertices have indegree $d \geq 2$. Let the vertices at distance $k$ from $X$ be called layer $k$. At layer $0$, $X$ is given a random bit. At layer $k \geq 1$, each vertex receives $d$ bits from its parents in layer $k-1$, which are transmitted along independent binary symmetric channel edges, and combines them using a $d$-ary Boolean processing function. The goal is to reconstruct $X$ with probability of error bounded away from $1/2$ using the values of all vertices at an arbitrarily deep layer. This question is closely related to models of reliable computation and storage, and information flow in biological networks. In this paper, we analyze randomly constructed DAGs, for which we show that broadcasting is only possible if the noise level is below a certain degree- and function-dependent critical threshold. For $d \geq 3$, and random DAGs with layer sizes $\Omega(\log k)$ and majority processing functions, we identify the critical threshold. For $d = 2$, we establish a similar result for NAND processing functions. We also prove a partial converse for odd $d \geq 3$ illustrating that the identified thresholds are impossible to improve by selecting different processing functions if the decoder is restricted to using a single vertex. Finally, for any noise level, we construct explicit DAGs (using expander graphs) with bounded degree and layer sizes $\Theta(\log k)$ admitting reconstruction. In particular, we show that such DAGs can be generated in deterministic quasi-polynomial time or randomized polylogarithmic time in the depth. These results portray a doubly-exponential advantage for storing a bit in DAGs compared to trees, where $d = 1$ but layer sizes must grow exponentially with depth in order to enable broadcasting.
    Comment: 33 pages, double column format. arXiv admin note: text overlap with arXiv:1803.0752
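    The model itself is easy to simulate. The sketch below propagates a source bit through randomly wired layers with BSC($\delta$) edges and majority processing, then decodes by a majority vote over the final layer; it illustrates the broadcasting setup only, not the paper's random-DAG construction or thresholds, and the layer-size constant, depth, and noise level are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        def broadcast_layer(prev_bits, layer_size, d, delta):
            # Each new vertex picks d parents uniformly at random from the previous layer,
            # receives their bits through independent BSC(delta) edges, and takes a majority vote.
            parents = rng.integers(0, len(prev_bits), size=(layer_size, d))
            noise = (rng.random((layer_size, d)) < delta).astype(int)
            received = prev_bits[parents] ^ noise
            return (2 * received.sum(axis=1) > d).astype(int)

        # Source bit at layer 0, then layers of size ~ 10 log k at depth k (d odd, majority rule).
        x = int(rng.integers(0, 2))
        bits = np.array([x])
        d, delta, depth = 3, 0.05, 200
        for k in range(1, depth + 1):
            bits = broadcast_layer(bits, layer_size=max(d, int(10 * np.log(k + 1))), d=d, delta=delta)

        # Decode by a majority vote over the final layer and compare with the source bit.
        x_hat = int(bits.mean() > 0.5)
        print(x, x_hat)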

    The capacity of non-identical adaptive group testing

    We consider the group testing problem in the case where the items are defective independently but with non-constant probability. We introduce and analyse an algorithm to solve this problem by grouping items together appropriately. We give conditions under which the algorithm performs essentially optimally in the sense of information-theoretic capacity. We use concentration of measure results to bound the probability that this algorithm requires many more tests than the expected number. This has applications to the allocation of spectrum to cognitive radios, in the case where a database gives prior information that a particular band will be occupied.
    Comment: To be presented at Allerton 201
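    The abstract does not spell out the grouping rule, so the sketch below is only a generic illustration of the idea: items with non-constant defect probabilities p_i are packed into groups whose expected number of defectives is roughly one, and each positive group is resolved by adaptive binary splitting. The grouping heuristic and all parameters are assumptions for illustration, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        def test(pool, defectives):
            # A group test: positive iff the pool contains at least one defective item.
            return bool(defectives & set(pool))

        def binary_split(pool, defectives, tests):
            # Adaptively locate all defectives in a pool already known to be positive.
            if len(pool) == 1:
                return set(pool)
            mid = len(pool) // 2
            found = set()
            for half in (pool[:mid], pool[mid:]):
                tests[0] += 1
                if test(half, defectives):
                    found |= binary_split(half, defectives, tests)
            return found

        # Items defective independently with non-constant probabilities p_i.
        n = 64
        p = rng.uniform(0.005, 0.05, size=n)
        defectives = {i for i in range(n) if rng.random() < p[i]}

        # Pack items (sorted by p_i) into groups with expected number of defectives about one.
        order = np.argsort(p)[::-1]
        groups, current, mass = [], [], 0.0
        for i in order:
            current.append(int(i))
            mass += p[i]
            if mass >= 1.0:
                groups.append(current)
                current, mass = [], 0.0
        if current:
            groups.append(current)

        # Test each group; resolve positive groups by binary splitting, counting tests used.
        tests, found = [0], set()
        for g in groups:
            tests[0] += 1
            if test(g, defectives):
                found |= binary_split(g, defectives, tests)

        print(found == defectives, tests[0], "tests for", len(defectives), "defectives among", n, "items")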