
    Interactive Channel Capacity Revisited

    We provide the first capacity-approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any $\epsilon$ fraction of the transmitted symbols. Our coding schemes achieve a communication rate of $1 - O(\sqrt{\epsilon \log \log 1/\epsilon})$ over any adversarial channel. This can be improved to $1 - O(\sqrt{\epsilon})$ for random, oblivious, and computationally bounded channels, or if the parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the $1 - \Omega(\sqrt{H(\epsilon)}) = 1 - \Omega(\sqrt{\epsilon \log 1/\epsilon})$ interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture $1 - \Theta(\sqrt{\epsilon \log \log 1/\epsilon})$ and $1 - \Theta(\sqrt{\epsilon})$ to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors. In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity-approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible.
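
    The scheme described above (run the original protocol as-is and intersperse short hash exchanges, backtracking on a mismatch) can be sketched in toy form. The Python sketch below is only illustrative, not the paper's actual construction: the channel model, block length, hash length, and the stand-in original_protocol_bit are assumptions, and unlike the real scheme the hash exchange itself is treated as noiseless.

```python
# Toy sketch (not the paper's construction) of the hash-and-backtrack idea:
# the original protocol runs as-is, and every BLOCK rounds the parties compare
# short hashes of their transcripts, rewinding the last block on a mismatch.
# EPS, BLOCK, ROUNDS and original_protocol_bit are illustrative assumptions.

import hashlib
import random

random.seed(0)
EPS = 0.05      # fraction of corrupted symbols (toy value)
BLOCK = 8       # protocol rounds between hash exchanges
ROUNDS = 64     # length of the original protocol

def noisy_channel(bit: int) -> int:
    """Flip the transmitted bit with probability EPS."""
    return bit ^ 1 if random.random() < EPS else bit

def short_hash(transcript: list) -> bytes:
    """A short (truncated) hash of a party's transcript."""
    return hashlib.sha256(bytes(transcript)).digest()[:2]

def original_protocol_bit(r: int) -> int:
    """Stand-in for one round of the underlying interactive protocol."""
    return r % 2

def simulate():
    alice, bob = [], []     # each party's view of the transcript
    r = 0
    while r < ROUNDS:
        start = r
        # Run the original protocol unchanged for one block of rounds.
        while r < ROUNDS and r - start < BLOCK:
            bit = original_protocol_bit(r)
            alice.append(bit)
            bob.append(noisy_channel(bit))   # Bob may receive a corrupted bit
            r += 1
        # Short hash exchange (assumed noiseless here); on a mismatch both
        # parties backtrack the last block and redo it.
        if short_hash(alice) != short_hash(bob):
            n = r - start
            del alice[-n:], bob[-n:]
            r = start
    return alice, bob

a, b = simulate()
print("transcripts agree:", a == b)
```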

    End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors

    The problem of coding for networks experiencing worst-case symbol errors is considered. We argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. A new transform metric for errors under the considered model is proposed. Using this metric, we replicate many of the classical results from coding theory. Specifically, we prove new Hamming-type, Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A commensurate lower bound is shown based on Gilbert-Varshamov-type codes for error correction. The GV codes used to attain the lower bound can be non-coherent, that is, they do not require prior knowledge of the network topology. We also propose a computationally efficient concatenation scheme. The rate achieved by our concatenated codes is characterized by a Zyablov-type lower bound. We provide a generalized minimum-distance decoding algorithm which decodes up to half the minimum distance of the concatenated codes. The end-to-end nature of our design enables our codes to be overlaid on the classical distributed random linear network codes [1]. Furthermore, the potentially intensive computation at internal nodes for link-by-link error correction is unnecessary under our design. Comment: Submitted for publication. arXiv admin note: substantial text overlap with arXiv:1108.239
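
    As a point of reference for the Hamming-type and Gilbert-Varshamov-type results mentioned above, the sketch below evaluates the classical asymptotic versions of these bounds for binary codes. This is only the classical Hamming-distance analogue, not the paper's transform-metric bounds; the function names and sample relative distances are illustrative.

```python
# Classical asymptotic bounds on binary code rate at relative distance delta:
#   Hamming (sphere-packing) upper bound:  R <= 1 - H(delta / 2)
#   Gilbert-Varshamov lower bound:         R >= 1 - H(delta)
# where H is the binary entropy function.

from math import log2

def binary_entropy(p: float) -> float:
    """H(p) = -p log2 p - (1-p) log2(1-p), with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def hamming_upper_bound(delta: float) -> float:
    """Asymptotic sphere-packing upper bound on the rate."""
    return 1.0 - binary_entropy(delta / 2)

def gv_lower_bound(delta: float) -> float:
    """Asymptotic Gilbert-Varshamov lower bound on the rate."""
    return max(0.0, 1.0 - binary_entropy(delta))

for delta in (0.05, 0.1, 0.2, 0.3):
    print(f"delta={delta:.2f}  GV >= {gv_lower_bound(delta):.3f}  "
          f"Hamming <= {hamming_upper_bound(delta):.3f}")
```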

    Structured Near-Optimal Channel-Adapted Quantum Error Correction

    We present a class of numerical algorithms which adapt a quantum error correction scheme to a channel model. Given an encoding and a channel model, it was previously shown that the quantum operation that maximizes the average entanglement fidelity may be calculated by a semidefinite program (SDP), which is a convex optimization. While optimal, this recovery operation is computationally difficult for long codes. Furthermore, the optimal recovery operation has no structure beyond the completely positive trace-preserving (CPTP) constraint. We derive methods to generate structured channel-adapted error recovery operations. Specifically, each recovery operation begins with a projective error syndrome measurement. The algorithms to compute the structured recovery operations are more scalable than the SDP and yield recovery operations with an intuitive physical form. Using Lagrange duality, we derive performance bounds to certify near-optimality. Comment: 18 pages, 13 figures. Update: typos corrected in Appendix
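
    The SDP mentioned above (maximizing average entanglement fidelity over CPTP recovery operations) can be written down concretely. The sketch below is a hedged toy version using cvxpy (an assumed dependency, including its partial_trace atom) for a single amplitude-damping qubit with no encoding; it only shows the structure of the optimization, not the paper's structured algorithms.

```python
# Hedged sketch of the fidelity-maximization SDP on a toy example.
# Channel Kraus operators A_k map C^d -> C^n; average entanglement fidelity
# of recovery R composed with the channel is linear in a Choi-type matrix X
# of R:  F = tr(X C)  with  C = (1/d^2) * sum_k vec(A_k) vec(A_k)^dagger.

import numpy as np
import cvxpy as cp

d = 2            # logical (input) dimension
n = 2            # channel output dimension (no encoding in this toy example)
gamma = 0.3      # amplitude-damping parameter (illustrative)

# Kraus operators of the channel to be corrected (n x d matrices).
A = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
     np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

def vec(M):
    """Column-stacking vectorization."""
    return M.flatten(order="F")

C = sum(np.outer(vec(Ak), vec(Ak).conj()) for Ak in A) / d**2

X = cp.Variable((n * d, n * d), hermitian=True)
constraints = [
    X >> 0,                                                  # complete positivity
    cp.partial_trace(X, dims=[d, n], axis=0) == np.eye(n),   # trace preservation
]
problem = cp.Problem(cp.Maximize(cp.real(cp.trace(C @ X))), constraints)
problem.solve()
print("optimal average entanglement fidelity ~", problem.value)
```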

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
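
    One concrete secure-sketch construction for Hamming distance is the code-offset sketch, in which the public value is the input masked by a random codeword and recovery decodes back to that codeword. The toy Python sketch below uses a 3x repetition code as the error-correcting code and SHA-256 in place of a seeded strong extractor; all parameters and helper names are illustrative simplifications, not the paper's full constructions.

```python
# Code-offset secure sketch over Hamming distance, toy version.
# Enrollment publishes s = w XOR c for a random codeword c; later, a noisy
# reading w' is corrected by decoding (w' XOR s) to the nearest codeword.

import hashlib
import secrets

REP = 3  # repetition factor of the toy code (corrects 1 flip per block)

def encode(bits):
    """Toy ECC: repeat every bit REP times."""
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    """Toy ECC decoding: majority vote per block of REP bits."""
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w):
    """SS(w) = w XOR c for a random codeword c; reveals only the offset."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
    return xor(w, c)

def recover(w_noisy, s):
    """Rec(w', s): shift by s, decode to the nearest codeword, shift back."""
    c = encode(decode(xor(w_noisy, s)))
    return xor(c, s)

def extract(w):
    """Stand-in for a strong extractor: hash the recovered string into a key."""
    return hashlib.sha256(bytes(w)).hexdigest()

w = [1, 0, 1, 1, 0, 1, 0, 0, 1]        # enrollment reading (multiple of REP bits)
s = sketch(w)                          # public helper data
w_noisy = w[:]; w_noisy[4] ^= 1        # later reading with one bit flipped
assert recover(w_noisy, s) == w        # exact recovery despite the noise
print(extract(recover(w_noisy, s)) == extract(w))   # same derived key
```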

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts, and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed. Comment: v3, 33 pages; typos corrected and references added
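
    On the practical question of uploading classical data into quantum form, one commonly discussed option is amplitude encoding, where a padded and normalized feature vector supplies the amplitudes of a multi-qubit state. The snippet below only prepares that classical description (the normalized vector); actually preparing the corresponding state on hardware is the costly step such reviews discuss. The helper name is illustrative.

```python
# Amplitude encoding: a length-2^n vector, normalized to unit L2 norm,
# gives the amplitudes of an n-qubit state.

import numpy as np

def amplitude_encode(x):
    """Pad x to the next power of two and normalize to unit L2 norm."""
    dim = 1 << (len(x) - 1).bit_length()      # next power of two
    padded = np.zeros(dim)
    padded[:len(x)] = x
    return padded / np.linalg.norm(padded)

x = np.array([3.0, 1.0, 2.0])                 # classical feature vector
state = amplitude_encode(x)                   # 2-qubit amplitude vector
print(state, np.isclose(np.sum(state**2), 1.0))
```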

    Exponentially convergent data assimilation algorithm for Navier-Stokes equations

    The paper presents a new state estimation algorithm for a bilinear equation representing the Fourier-Galerkin (FG) approximation of the Navier-Stokes (NS) equations on a torus in $\mathbb{R}^2$. This state equation is subject to uncertain but bounded noise in the input (Kolmogorov forcing) and initial conditions, and its output is incomplete and contains bounded noise. The algorithm designs a time-dependent gain such that the estimation error converges to zero exponentially. Sufficient conditions for the existence of such a gain are formulated in the form of algebraic Riccati equations. To demonstrate the results, we apply the proposed algorithm to the reconstruction of a chaotic fluid flow from incomplete and noisy data.
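
    The general idea of a state estimator whose time-dependent gain comes from a Riccati-type recursion can be illustrated on a small linear toy system, as sketched below. This is not the paper's algorithm for the bilinear Fourier-Galerkin system; the dynamics, noise bounds, and observation matrix are made-up toy values.

```python
# Toy predictor-corrector estimator: the gain K_k is recomputed each step from
# a Riccati recursion, so the estimate tracks the true state from incomplete,
# noisy measurements of the first component only.

import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.2], [-0.1, 0.95]])   # toy state dynamics
C = np.array([[1.0, 0.0]])                 # incomplete output: first state only
Q = 1e-3 * np.eye(2)                       # weight for the bounded model noise
R = 1e-2 * np.eye(1)                       # weight for the bounded output noise

x = np.array([1.0, -1.0])                  # true initial state
x_hat = np.zeros(2)                        # estimator starts from zero
P = np.eye(2)                              # Riccati variable (error weight)

for k in range(50):
    # True system with bounded disturbances.
    x = A @ x + rng.uniform(-0.03, 0.03, size=2)
    y = C @ x + rng.uniform(-0.1, 0.1, size=1)

    # Riccati recursion gives the time-dependent gain K.
    P = A @ P @ A.T + Q
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    P = (np.eye(2) - K @ C) @ P

    # Predict, then correct with the measured output.
    x_hat = A @ x_hat
    x_hat = x_hat + K @ (y - C @ x_hat)

print("final estimation error:", np.linalg.norm(x - x_hat))
```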