
    Communicating over adversarial quantum channels using quantum list codes

    We study quantum communication in the presence of adversarial noise. In this setting, communicating with perfect fidelity requires using a quantum code of bounded minimum distance, for which the best known rates are given by the quantum Gilbert-Varshamov (QGV) bound. By asking only for arbitrarily high fidelity and allowing the sender and receiver to use a secret key with length logarithmic in the number of qubits sent, we achieve a dramatic improvement over the QGV rates. In fact, we find protocols that achieve arbitrarily high fidelity at noise levels for which perfect fidelity is impossible. To achieve such communication rates, we introduce fully quantum list codes, which may be of independent interest. Comment: 6 pages. Discussion expanded and more details provided in proofs. Far less unclear than previous version.

    Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument

    A locally decodable code encodes n-bit strings x in m-bit codewords C(x), in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries need exponential length: m=2^{Omega(n)}. Previously this was known only for linear codes (Goldreich et al. 02). Our proof shows that a 2-query LDC can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also show that q quantum queries allow more succinct LDCs than the best known LDCs with q classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval. In particular, we exhibit a quantum 2-server PIR scheme with O(n^{3/10}) qubits of communication, improving upon the O(n^{1/3}) bits of communication of the best known classical 2-server PIR. Comment: 16 pages, LaTeX. 2nd version: title changed, large parts rewritten, some results added or improved.
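
    As an aside (not part of the paper), the classical Hadamard code is the textbook example of a 2-query LDC and already exhibits exponential length m = 2^n: codeword position a holds the parity <x, a>, and x_i can be recovered by XORing the two positions r and r XOR e_i for a random r, so a small fraction of corruptions only slightly biases each trial. A minimal Python sketch (function names and parameter choices are illustrative only):

```python
import random

def hadamard_encode(x_bits):
    """Hadamard code: one parity <x, a> (mod 2) for every a in {0,1}^n, so length 2^n."""
    n = len(x_bits)
    code = []
    for a in range(2 ** n):
        parity = 0
        for j in range(n):
            if (a >> j) & 1:
                parity ^= x_bits[j]
        code.append(parity)
    return code

def decode_bit(codeword, n, i, num_trials=25):
    """2-query local decoder for x_i: query positions r and r XOR e_i, XOR the answers,
    and majority-vote over a few independent trials to tolerate corrupted positions."""
    votes = 0
    for _ in range(num_trials):
        r = random.randrange(2 ** n)
        votes += codeword[r] ^ codeword[r ^ (1 << i)]
    return int(votes > num_trials // 2)

# Toy run: corrupt ~5% of the codeword and recover every bit (with high probability).
random.seed(0)
x = [1, 0, 1, 1, 0, 1, 0, 0]                               # n = 8, codeword length 2^8 = 256
c = hadamard_encode(x)
for pos in random.sample(range(len(c)), k=len(c) // 20):
    c[pos] ^= 1
print([decode_bit(c, len(x), i) for i in range(len(x))])   # expected to equal x
```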

    The benefit of a 1-bit jump-start, and the necessity of stochastic encoding, in jamming channels

    We consider the problem of communicating a message m in the presence of a malicious jamming adversary (Calvin), who can erase an arbitrary set of up to pn bits out of the n transmitted bits (x_1,\ldots,x_n). The capacity of such a channel when Calvin is exactly causal, i.e. Calvin's decision of whether or not to erase bit x_i depends on his observations (x_1,\ldots,x_i), was recently characterized to be 1-2p. In this work we show two (perhaps) surprising phenomena. Firstly, we demonstrate via a novel code construction that if Calvin is delayed by even a single bit, i.e. Calvin's decision of whether or not to erase bit x_i depends only on (x_1,\ldots,x_{i-1}) (and is independent of the "current bit" x_i), then the capacity increases to 1-p when the encoder is allowed to be stochastic. Secondly, we show via a novel jamming strategy for Calvin that, in the single-bit-delay setting, if the encoding is deterministic (i.e. the transmitted codeword is a deterministic function of the message m) then no rate asymptotically larger than 1-2p is possible with vanishing probability of error; hence stochastic encoding (using private randomness at the encoder) is essential to achieve the capacity of 1-p against a one-bit-delayed Calvin. Comment: 21 pages, 4 figures, extended draft of submission to ISIT 201
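
    For intuition only (this is not the paper's code construction or its optimal jamming strategy), the two adversary classes can be modeled as functions with an erasure budget of floor(pn): an exactly causal Calvin may inspect the current bit x_i before deciding to erase it, while a one-bit-delayed Calvin decides from the prefix (x_1,...,x_{i-1}) alone. A toy Python model of the channel, with placeholder greedy strategies:

```python
import math, random

ERASED = None  # channel output symbol for an erased position

def exactly_causal_calvin(x, p):
    """Exactly causal adversary: the decision on bit i may depend on x[0..i] (current bit included).
    Toy strategy: spend the erasure budget on 1-bits it actually sees."""
    budget = math.floor(p * len(x))
    y = []
    for bit in x:
        if budget > 0 and bit == 1:        # uses the current bit
            y.append(ERASED)
            budget -= 1
        else:
            y.append(bit)
    return y

def one_bit_delayed_calvin(x, p):
    """One-bit-delayed adversary: the decision on bit i depends only on x[0..i-1].
    Toy strategy: erase whenever the prefix seen so far contains more 1s than 0s."""
    budget = math.floor(p * len(x))
    y, ones, zeros = [], 0, 0
    for bit in x:
        if budget > 0 and ones > zeros:    # depends only on the prefix, never on `bit`
            y.append(ERASED)
            budget -= 1
        else:
            y.append(bit)
        ones += bit
        zeros += 1 - bit
    return y

# The capacity gap the abstract refers to, for a sample erasure fraction p:
p = 0.2
print("exactly causal:", 1 - 2 * p, "  one-bit delayed + stochastic encoder:", 1 - p)
```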

    Oblivious channels

    Let C = {x_1,...,x_N} \subset {0,1}^n be an [n,N] binary error-correcting code (not necessarily linear). Let e \in {0,1}^n be an error vector. A codeword x in C is said to be "disturbed" by the error e if the closest codeword to x + e is no longer x. Let A_e be the subset of codewords in C that are disturbed by e. In this work we study the size of A_e in random codes C (i.e. codes in which each codeword x_i is chosen uniformly and independently at random from {0,1}^n). Using recent results of Vu [Random Structures and Algorithms 20(3)] on the concentration of non-Lipschitz functions, we show that |A_e| is strongly concentrated for a wide range of values of N and ||e||. We apply this result in the study of communication channels we refer to as "oblivious". Roughly speaking, a channel W(y|x) is said to be oblivious if the error distribution imposed by the channel is independent of the transmitted codeword x. For example, the well-studied Binary Symmetric Channel is an oblivious channel. In this work, we define oblivious and partially oblivious channels and present lower bounds on their capacity. The oblivious channels we define have connections to Arbitrarily Varying Channels with state constraints. Comment: Submitted to the IEEE International Symposium on Information Theory (ISIT) 200
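
    The quantity |A_e| can be estimated directly by simulation: draw a random code, add a fixed error vector e to each codeword, and count the codewords for which nearest-neighbor decoding would now prefer some other codeword. A small Monte Carlo sketch (parameter values are arbitrary and chosen only to run quickly):

```python
import random

def random_code(n, N, rng):
    """A random [n, N] code: N codewords drawn uniformly and independently from {0,1}^n."""
    return [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(N)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def disturbed_count(code, e):
    """|A_e|: codewords x such that some other codeword is strictly closer to x + e than x is."""
    count = 0
    for x in code:
        received = tuple(a ^ b for a, b in zip(x, e))
        best = min(code, key=lambda c: hamming(c, received))
        if hamming(best, received) < hamming(x, received):
            count += 1
    return count

rng = random.Random(1)
n, N, weight = 16, 40, 3
e = tuple(1 if i < weight else 0 for i in range(n))   # an error vector of Hamming weight 3
samples = [disturbed_count(random_code(n, N, rng), e) for _ in range(50)]
print("mean |A_e| =", sum(samples) / len(samples), " min =", min(samples), " max =", max(samples))
```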

    Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes

    Motivated by distributed storage applications, we investigate the degree to which capacity-achieving encodings can be efficiently updated when a single information bit changes, and the degree to which such encodings can be efficiently (i.e., locally) repaired when a single encoded bit is lost. Specifically, we first develop conditions under which optimum error correction and update-efficiency are possible, and establish that the number of encoded bits that must change in response to a change in a single information bit must scale logarithmically in the block length of the code if we are to achieve any nontrivial rate with vanishing probability of error over the binary erasure or binary symmetric channels. Moreover, we show there exist capacity-achieving codes with this scaling. With respect to local repairability, we develop tight upper and lower bounds on the number of remaining encoded bits that are needed to recover a single lost bit of the encoding. In particular, we show that if the code rate is \epsilon less than the capacity, then for optimal codes, the maximum number of codeword symbols required to recover one lost symbol must scale as \log(1/\epsilon). Several variations on---and extensions of---these results are also developed. Comment: Accepted to appear in JSA
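
    The update-efficiency notion above has a simple linear-algebra reading: for a binary linear code with generator matrix G, flipping information bit i changes the codeword by row i of G, so the update cost for bit i is exactly that row's Hamming weight (which, by the result above, must grow logarithmically in the block length for capacity-approaching codes). A toy Python sketch with an arbitrary sparse G that is not capacity-achieving:

```python
import random

def encode(u, G):
    """Encode information bits u with a binary generator matrix G (list of length-n rows) over GF(2)."""
    c = [0] * len(G[0])
    for ui, row in zip(u, G):
        if ui:
            c = [cj ^ rj for cj, rj in zip(c, row)]
    return c

def update_cost(G, i):
    """Codeword bits that change when information bit i flips: the Hamming weight of row i of G."""
    return sum(G[i])

# Toy example: a random sparse generator matrix (illustrates the metric only).
rng = random.Random(0)
k, n, row_weight = 6, 20, 3
G = []
for _ in range(k):
    row = [0] * n
    for pos in rng.sample(range(n), row_weight):
        row[pos] = 1
    G.append(row)

u = [rng.randint(0, 1) for _ in range(k)]
u_flipped = [b ^ (1 if j == 2 else 0) for j, b in enumerate(u)]   # flip information bit 2
changed = sum(a != b for a, b in zip(encode(u, G), encode(u_flipped, G)))
print("bits changed:", changed, "== weight of row 2:", update_cost(G, 2))
```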

    Algorithm-Based Secure and Fault Tolerant Outsourcing of Matrix Computations

    Extended abstract, 7 pages. We study interactive algorithmic schemes for outsourcing matrix computations on untrusted global computing infrastructures such as clouds or volunteer peer-to-peer platforms. In these schemes the client outsources part of the computation with guarantees on both the inputs' secrecy and the output's integrity. For the sake of efficiency, thanks to interaction, the number of operations performed by the client is almost linear in the input/output size, while the number of outsourced operations is of the order of matrix multiplication. Our scheme is based on efficient linear codes (especially the evaluation/interpolation version of Reed-Solomon codes). Confidentiality is ensured by encoding the inputs using a secret generator matrix, while fault tolerance is ensured by combining fast probabilistic verification with the high correction capability of the code. The scheme can tolerate multiple malicious errors and hence provides an efficient solution beyond resilience against soft errors. These schemes also allow one to securely compute the product of a secret matrix with a known public matrix. Under reasonable hypotheses, we further prove the non-existence of such unconditionally secure schemes for general matrices.
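
    The "fast probabilistic verification" ingredient can be illustrated in isolation by Freivalds-style checking: the client accepts a claimed product C = A*B after only O(n^2) work per trial by testing A(Br) = Cr for random 0/1 vectors r, and a wrong C survives each trial with probability at most 1/2. The sketch below deliberately omits the paper's Reed-Solomon encoding and secrecy layer:

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that C is the product A*B (square integer matrices as lists of rows).
    Each trial costs three matrix-vector products, i.e. O(n^2) client work."""
    n = len(A)
    def matvec(M, v):
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False                   # a discrepancy is conclusive proof of an error
    return True                            # a wrong C slips through with probability <= 2**-trials

# Toy usage: an honest worker vs. a worker that tampers with a single entry of the product.
random.seed(0)
n = 30
A = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
B = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
print(freivalds_check(A, B, C))            # True: honest result accepted
C[5][7] += 1                               # a single malicious error
print(freivalds_check(A, B, C))            # almost surely False
```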

    Communication over an Arbitrarily Varying Channel under a State-Myopic Encoder

    Full text link
    We study the problem of communication over a discrete arbitrarily varying channel (AVC) when a noisy version of the state is known non-causally at the encoder. The state is chosen by an adversary which knows the coding scheme. A state-myopic encoder observes this state non-causally, though imperfectly, through a noisy discrete memoryless channel (DMC). We first characterize the capacity of this state-dependent channel when the encoder and decoder share randomness unknown to the adversary, i.e., the randomized coding capacity. Next, we show that when only the encoder is allowed to randomize, the capacity remains unchanged when positive. Interesting and well-known special cases of the state-myopic encoder model are also presented. Comment: 16 pages