
    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
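    To make "sub-linear time error-correction" concrete, the sketch below gives the textbook 2-query local decoder for the Hadamard code, which is closely related to the hard-core predicate construction mentioned above. The parameters and code are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch: 2-query local decoding of the Hadamard code (toy parameters).
import random

k = 8                                  # message length; codeword length is 2**k

def all_points(k):
    return [[(j >> b) & 1 for b in range(k)] for j in range(2 ** k)]

def index(x):
    return sum(xi << b for b, xi in enumerate(x))

def encode(m):
    """Hadamard encoding: one parity <m, x> mod 2 for every x in {0,1}^k."""
    return [sum(mi & xi for mi, xi in zip(m, x)) % 2 for x in all_points(k)]

def local_decode(word, i, k):
    """Recover bit i with only 2 queries: <m,x> + <m, x XOR e_i> = m_i (mod 2)."""
    x = [random.randint(0, 1) for _ in range(k)]
    y = x[:]
    y[i] ^= 1                          # flip coordinate i
    return (word[index(x)] + word[index(y)]) % 2

m = [random.randint(0, 1) for _ in range(k)]
c = encode(m)
for pos in random.sample(range(len(c)), len(c) // 20):   # corrupt ~5% of positions
    c[pos] ^= 1
print(m[3], local_decode(c, 3, k))     # equal with high probability
```

    If a delta fraction of the codeword is corrupted, both queries land on uncorrupted positions with probability at least 1 - 2*delta, which is exactly the local-decoding guarantee.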

    Continuously non-malleable codes with split-state refresh

    Non-malleable codes for the split-state model allow a message to be encoded into two parts such that arbitrary independent tampering on each part, followed by decoding of the modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error occurs, the system must self-destruct and stop working, otherwise generic attacks become possible. In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure: whenever a decoding error happens, the two parts of an encoding can be refreshed locally (i.e., without any interaction), which avoids the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, in a model where the memory is refreshed only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among which the fact that, for the first time, it does not rely on the self-destruct feature.
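    The sketch below only illustrates the continuous split-state tampering experiment and the difference between self-destructing and refreshing. The `encode`/`decode`/`refresh` interface and the toy XOR sharing are assumptions for illustration; the sharing is not non-malleable and this is not a construction from the paper.

```python
# Minimal sketch of the continuous split-state tampering experiment (toy placeholders).
import os

def encode(msg: bytes):
    r = os.urandom(len(msg))
    return r, bytes(a ^ b for a, b in zip(r, msg))        # (left, right) shares

def decode(left: bytes, right: bytes):
    if len(left) != len(right):
        return None                                        # decoding error
    return bytes(a ^ b for a, b in zip(left, right))

def refresh(left: bytes, right: bytes):
    # Toy refresh: mask both shares with the same fresh pad so decoding is preserved.
    # In the real scheme each share is refreshed locally, without shared randomness.
    mask = os.urandom(len(left))
    return (bytes(a ^ b for a, b in zip(left, mask)),
            bytes(a ^ b for a, b in zip(right, mask)))

def tampering_experiment(msg, tamper_left, tamper_right, rounds, with_refresh):
    left, right = encode(msg)
    for _ in range(rounds):                                # unbounded (poly) attempts
        out = decode(tamper_left(left), tamper_right(right))
        if out is None:                                    # a decoding error happened
            if not with_refresh:
                return "self-destruct"                     # classic continuous NMC
            left, right = refresh(left, right)             # split-state refresh instead
    return "still alive"

identity = lambda part: part
truncate = lambda part: part[:-1]                          # triggers a decoding error
print(tampering_experiment(b"secret", truncate, identity, 3, with_refresh=False))  # self-destruct
print(tampering_experiment(b"secret", truncate, identity, 3, with_refresh=True))   # still alive
```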

    A Framework for Efficient Adaptively Secure Composable Oblivious Transfer in the ROM

    Oblivious Transfer (OT) is a fundamental cryptographic protocol with numerous applications, in particular as an essential building block for two-party and multi-party computation. We construct a round-optimal (2-round) universally composable (UC) protocol for oblivious transfer, secure against active adaptive adversaries, from any OW-CPA secure public-key encryption scheme with certain properties in the random oracle model (ROM). In terms of computation, our protocol only requires the generation of a public/secret key pair, two encryption operations, and one decryption operation, apart from a few calls to the random oracle. In terms of communication, our protocol only requires the transfer of one public key, two ciphertexts, and three binary strings of roughly the same size as the message. Next, we show how to instantiate our construction under the low-noise LPN, McEliece, QC-MDPC, LWE, and CDH assumptions. Our instantiations based on the low-noise LPN, McEliece, and QC-MDPC assumptions are the first UC-secure OT protocols based on coding assumptions to achieve: 1) adaptive security, 2) optimal round complexity, 3) low communication and computational complexities. Previous results in this setting only achieved static security and used costly cut-and-choose techniques. Our instantiation based on CDH achieves adaptive security at the small cost of communicating only two more group elements compared to the gap-DH based Simplest OT protocol of Chou and Orlandi (Latincrypt 2015), which only achieves static security in the ROM.
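    For reference, here is a toy sketch of the DH-based Simplest OT pattern of Chou and Orlandi mentioned above, showing the 2-round shape and the role of the random oracle. The 64-bit prime, generator, and one-time-pad "encryption" are insecure toy assumptions, and this is not the paper's PKE-based construction.

```python
# Toy sketch of the Chou-Orlandi "Simplest OT" pattern (insecure toy parameters).
import hashlib, secrets

p = 0xFFFFFFFFFFFFFFC5          # small prime (2**64 - 59); real use needs a proper group
g = 5                           # toy generator choice

def H(*elems):                  # random oracle, instantiated with SHA-256
    return hashlib.sha256(b"".join(e.to_bytes(16, "big") for e in elems)).digest()

def xor(key, msg):
    return bytes(k ^ m for k, m in zip(key, msg))

m0, m1, c = b"left message....", b"right message...", 1   # sender inputs, receiver choice

a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)                                          # round 1: sender -> receiver

b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p    # round 2: receiver -> sender

k0 = H(A, B, pow(B, a, p))                                # sender derives both keys
k1 = H(A, B, pow(B * pow(A, -1, p) % p, a, p))
e0, e1 = xor(k0, m0), xor(k1, m1)                         # encrypted payloads

kc = H(A, B, pow(A, b, p))                                # receiver derives only k_c
print(xor(kc, (e0, e1)[c]))                               # recovers m_c, learns nothing of m_{1-c}
```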

    Ternary Syndrome Decoding with Large Weight

    The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding in large weight. This problem was introduced in the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem, which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than the binary Syndrome Decoding problem, which could have several applications for the design of code-based cryptosystems.
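    The problem itself is easy to state: given a parity-check matrix H, a syndrome s, and a weight w, find an error vector e of weight w with He = s. The brute-force sketch below uses toy dimensions over F_3 (all names and parameters are assumptions; real cryptographic instances are far beyond exhaustive search, which is the point).

```python
# Minimal brute-force sketch of ternary Syndrome Decoding with toy dimensions.
import itertools, random

q, n, r, w = 3, 10, 5, 8                      # field size, length, # checks, target weight
H = [[random.randrange(q) for _ in range(n)] for _ in range(r)]
s = [random.randrange(q) for _ in range(r)]

def syndrome(H, e):
    return [sum(h * x for h, x in zip(row, e)) % q for row in H]

def solve(H, s, w):
    """Exhaustive search: every support of size w, every assignment of nonzeros in {1,2}."""
    for support in itertools.combinations(range(n), w):
        for values in itertools.product((1, 2), repeat=w):
            e = [0] * n
            for i, v in zip(support, values):
                e[i] = v
            if syndrome(H, e) == s:
                return e
    return None                               # this random instance may have no weight-w solution

print(solve(H, s, w))
```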

    Low-degree tests at large distances

    We define tests of Boolean functions which distinguish between linear (or quadratic) polynomials and functions which are very far, in an appropriate sense, from these polynomials. The tests have optimal or nearly optimal trade-offs between soundness and the number of queries. In particular, we show that functions with small Gowers uniformity norms behave "randomly" with respect to hypergraph linearity tests. A central step in our analysis of quadraticity tests is the proof of an inverse theorem for the third Gowers uniformity norm of Boolean functions. The last result also has a coding-theory application: it is possible to efficiently estimate the distance from the second-order Reed-Muller code on inputs lying far beyond its list-decoding radius.
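    For orientation, the classical 3-query BLR linearity test is sketched below with toy parameters. It is the baseline that tests like these generalize; the paper's tests operate at much larger distances and also cover quadratic polynomials, which this sketch does not.

```python
# Minimal sketch of the BLR linearity test over F_2^n (toy parameters).
import random

n = 10

def linearity_test(f, trials=200):
    """Accept iff f(x) XOR f(y) == f(x XOR y) on every sampled pair (3 queries per trial)."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False                       # caught a violated linear relation
    return True

# A linear function <a, x> over F_2 ...
a = random.getrandbits(n)
linear = lambda x: bin(a & x).count("1") % 2
# ... and a uniformly random function, which is far from linear with high probability.
table = [random.getrandbits(1) for _ in range(2 ** n)]
far = lambda x: table[x]

print(linearity_test(linear), linearity_test(far))   # True, False (with high probability)
```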

    Joint Scheduling and ARQ for MU-MIMO Downlink in the Presence of Inter-Cell Interference

    User scheduling and multiuser multi-antenna (MU-MIMO) transmission are at the core of high-rate data-oriented downlink schemes in the next generation of cellular systems (e.g., LTE-Advanced). Scheduling selects groups of users according to their channel vector directions and SINR levels. However, when scheduling is applied independently in each cell, the inter-cell interference (ICI) power at each user receiver is not known in advance, since it changes at each new scheduling slot depending on the scheduling decisions of all interfering base stations. In order to cope with this uncertainty, we consider the joint operation of scheduling, MU-MIMO beamforming, and Automatic Repeat reQuest (ARQ). We develop a game-theoretic framework for this problem and build on stochastic optimization techniques in order to find optimal scheduling and ARQ schemes. Particularizing our framework to the case of "outage service rates", we obtain a scheme based on adaptive variable-rate coding at the physical layer, combined with ARQ at the Logical Link Control layer (ARQ-LLC). Then, we present a novel scheme based on incremental redundancy Hybrid ARQ (HARQ) that is able to achieve a throughput performance arbitrarily close to the "genie-aided service rates", with no need for a genie that provides the ICI power levels non-causally. The novel HARQ scheme is both easier to implement and superior in performance with respect to the conventional combination of adaptive variable-rate coding and ARQ-LLC.
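    A toy Monte Carlo sketch of the underlying intuition follows, under a standard mutual-information accumulation model with made-up parameters (this is not the paper's game-theoretic or stochastic-optimization framework): fixed-rate coding plus plain ARQ wastes every slot whose instantaneous rate falls below the target, whereas incremental-redundancy HARQ keeps accumulating information across retransmissions even though the ICI power is unknown in advance.

```python
# Toy Monte Carlo sketch: plain ARQ vs. incremental-redundancy HARQ under random ICI.
import math, random

R = 2.0                          # target spectral efficiency per codeword (bits/s/Hz), assumed
SLOTS = 100_000

def sinr():
    """Fixed signal power, unit noise, ICI power re-drawn independently each slot."""
    signal = 10.0
    ici = random.expovariate(1.0) * 5.0
    return signal / (1.0 + ici)

def arq_throughput():
    """Fixed-rate coding + ARQ: a slot is useful only if log2(1+SINR) >= R (else outage, retransmit)."""
    ok = sum(math.log2(1 + sinr()) >= R for _ in range(SLOTS))
    return R * ok / SLOTS

def ir_harq_throughput():
    """Incremental redundancy: accumulate mutual information over retransmissions until it reaches R."""
    delivered, acc = 0.0, 0.0
    for _ in range(SLOTS):
        acc += math.log2(1 + sinr())
        if acc >= R:                           # decoder has collected enough redundancy
            delivered += R
            acc = 0.0
    return delivered / SLOTS

print(f"ARQ     : {arq_throughput():.2f} bits/s/Hz")
print(f"IR-HARQ : {ir_harq_throughput():.2f} bits/s/Hz")
```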

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.

    Negligible Cooperation: Contrasting the Maximal- and Average-Error Cases

    In communication networks, cooperative strategies are coding schemes where network nodes work together to improve network performance metrics such as the total rate delivered across the network. This work studies encoder cooperation in the setting of a discrete multiple access channel (MAC) with two encoders and a single decoder. A network node, here called the cooperation facilitator (CF), which is connected to both encoders via rate-limited links, enables the cooperation strategy. Previous work by the authors presents two classes of MACs: (i) one class where the average-error sum-capacity has an infinite derivative in the limit where the CF output link capacities approach zero, and (ii) a second class of MACs where the maximal-error sum-capacity is not continuous at the point where the output link capacities of the CF equal zero. This work contrasts the power of the CF in the maximal- and average-error cases, showing that a constant number of bits communicated over the CF output link can yield a positive gain in the maximal-error sum-capacity, while a far greater number of bits, even numbers that grow sublinearly in the blocklength, can never yield a non-negligible gain in the average-error sum-capacity.

    Scalable Neural Network Decoders for Higher Dimensional Quantum Codes

    Machine learning has the potential to become an important tool in quantum error correction as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is the fact that they can be evaluated by dedicated hardware which is very fast and consumes little power. Machine learning has been previously applied to decode the surface code. However, these approaches are not scalable as the training has to be redone for every system size, which becomes increasingly difficult. In this work the existence of local decoders for higher dimensional codes leads us to use a low-depth convolutional neural network to locally assign a likelihood of error on each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around 7.1% when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give theoretical and numerical analysis to show how a convolutional neural network is different from the 1-nearest neighbor algorithm, which is a baseline machine learning method.
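    A minimal PyTorch sketch of the architectural idea follows, with a 2D lattice standing in for the 4D toric code and made-up channel counts. It illustrates a low-depth, fully convolutional per-qubit likelihood estimator only, not the paper's trained decoder.

```python
# Minimal sketch: low-depth fully convolutional network assigning a per-site error likelihood.
import torch
import torch.nn as nn

class LocalSyndromeDecoder(nn.Module):
    def __init__(self, syndrome_channels=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(                 # low depth, purely local receptive field
            nn.Conv2d(syndrome_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),   # one logit per qubit site
        )

    def forward(self, syndrome):                  # syndrome: (batch, channels, L, L)
        return torch.sigmoid(self.net(syndrome))  # per-site probability of an error

# The same weights apply to any lattice size L, which is what makes the approach scalable.
decoder = LocalSyndromeDecoder()
for L in (8, 16, 32):
    probs = decoder(torch.randn(1, 2, L, L))
    print(L, probs.shape)                         # (1, 1, L, L) error likelihoods
```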