    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
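
    To make the local-decoding notion concrete, below is a minimal Python sketch (not from the survey) of the classic 2-query local decoder for the Hadamard code, the textbook example of a locally decodable code: each message bit is recovered by querying only two codeword positions per trial, so decoding time is sublinear in the codeword length. Function names and parameters are illustrative.

    ```python
    import random

    def hadamard_encode(x):
        """Encode a k-bit message as the 2^k inner products <x, a> mod 2."""
        k = len(x)
        return [sum(xj & ((a >> j) & 1) for j, xj in enumerate(x)) % 2
                for a in range(1 << k)]

    def local_decode_bit(word, k, i, trials=101):
        """Recover message bit x_i with 2 queries per trial.

        For random a, <x, a> XOR <x, a XOR e_i> equals <x, e_i> = x_i, so a
        trial is correct whenever both queried positions are uncorrupted
        (probability > 1/2 when the corruption rate is below 1/4).
        Majority voting over trials amplifies the success probability.
        """
        votes = 0
        for _ in range(trials):
            a = random.randrange(1 << k)
            votes += word[a] ^ word[a ^ (1 << i)]
        return int(votes > trials // 2)

    # Demo: corrupt 10% of the codeword, then recover every message bit.
    x = [1, 0, 1, 1, 0, 1, 0, 0]
    w = hadamard_encode(x)
    for pos in random.sample(range(len(w)), len(w) // 10):
        w[pos] ^= 1
    assert [local_decode_bit(w, len(x), i) for i in range(len(x))] == x
    ```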

    Syndrome decoding of Reed-Muller codes and tensor decomposition over finite fields

    Reed-Muller codes are some of the oldest and most widely studied error-correcting codes, of interest for both their algebraic structure and their many algorithmic properties. A recent beautiful result of Saptharishi, Shpilka and Volk showed that for binary Reed-Muller codes of length n and distance d = O(1), one can correct polylog(n) random errors in poly(n) time (well beyond the worst-case error tolerance of O(1)). In this paper, we consider the problem of 'syndrome decoding' Reed-Muller codes from random errors. More specifically, given the polylog(n)-bit long syndrome vector of a codeword corrupted in polylog(n) random coordinates, we would like to compute the locations of the codeword corruptions. This problem turns out to be equivalent to a basic question about computing tensor decompositions of random low-rank tensors over finite fields. Our main result is that syndrome decoding of Reed-Muller codes (and the equivalent tensor decomposition problem) can be solved efficiently, i.e., in polylog(n) time. We give two algorithms for this problem:
    1. A finite field variant of a classical algorithm for tensor decomposition over the real numbers due to Jennrich. This also gives an alternate proof of the main result of Saptharishi et al.
    2. An implementation of the steps of the Berlekamp-Welch-style decoding algorithm of Saptharishi et al. in sublinear time. The main new ingredient is an algorithm for solving certain kinds of systems of polynomial equations.
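
    As an illustration of the real-field starting point, here is a NumPy sketch of Jennrich's simultaneous-diagonalization algorithm for decomposing a random low-rank third-order tensor. This is a toy over the reals, not the paper's finite-field variant; all dimensions and names are assumptions for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Build a random rank-r tensor T = sum_i a_i (x) b_i (x) c_i over the
    # reals (the paper's contribution is a finite-field analogue).
    n, r = 8, 3
    A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
    T = np.einsum('ir,jr,kr->ijk', A, B, C)

    # Jennrich's trick: contract the third mode against two random vectors.
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    M1 = np.einsum('ijk,k->ij', T, x)   # = A @ diag(C.T @ x) @ B.T
    M2 = np.einsum('ijk,k->ij', T, y)   # = A @ diag(C.T @ y) @ B.T

    # M1 pinv(M2) = A @ diag((C.T@x)/(C.T@y)) @ pinv(A): its eigenvectors
    # with nonzero eigenvalues recover the columns of A up to scaling.
    evals, evecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
    idx = np.argsort(-np.abs(evals))[:r]
    A_hat = np.real(evecs[:, idx])

    # Each recovered column should be parallel to some true column of A.
    cos = np.abs(A_hat.T @ A) / (
        np.linalg.norm(A_hat, axis=0)[:, None] * np.linalg.norm(A, axis=0))
    assert np.all(cos.max(axis=1) > 0.99)
    ```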

    Linear-time list recovery of high-rate expander codes

    We show that expander codes, when properly instantiated, are high-rate list recoverable codes with linear-time list recovery algorithms. List recoverable codes have been useful recently in constructing efficiently list-decodable codes, as well as explicit constructions of matrices for compressive sensing and group testing. Previous list recoverable codes with linear-time decoding algorithms have all had rate at most 1/2; in contrast, our codes can have rate 1 - ε for any ε > 0. We can plug our high-rate codes into a construction of Meir (2014) to obtain linear-time list recoverable codes of arbitrary rates, which approach the optimal trade-off between the number of non-trivial lists provided and the rate of the code. While list recovery is interesting on its own, our primary motivation is applications to list decoding. A slight strengthening of our result would imply linear-time and optimally list-decodable codes for all rates, and our work is a step in the direction of solving this important problem.
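
    For intuition about the list-recovery problem itself (not the expander-code solution), the following brute-force Python toy recovers all codewords of a small Reed-Solomon code consistent with per-coordinate candidate lists. Its enumeration is exponential; the paper's point is achieving this in linear time at rate 1 - ε. All parameters here are illustrative.

    ```python
    from itertools import product

    # Codewords are evaluations of degree < k polynomials over GF(p).
    p, k = 7, 2
    points = list(range(p))                  # evaluation points 0..p-1

    def encode(msg):
        """Evaluate the polynomial with coefficients msg at every point."""
        return tuple(sum(c * pow(a, j, p) for j, c in enumerate(msg)) % p
                     for a in points)

    def list_recover(lists, max_errors):
        """Return every codeword agreeing with the input lists in all but
        at most max_errors coordinates."""
        out = []
        for msg in product(range(p), repeat=k):
            cw = encode(msg)
            if sum(ci not in Li for ci, Li in zip(cw, lists)) <= max_errors:
                out.append(cw)
        return out

    # Each coordinate offers a small list of candidate symbols; recover all
    # codewords consistent with the lists up to one erroneous coordinate.
    target = encode((3, 5))
    lists = [{c, (c + 1) % p} for c in target]   # size-2 lists, all correct
    lists[0] = {0}                               # one coordinate made noisy
    print(list_recover(lists, max_errors=1))     # includes `target`
    ```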

    On the design of an ECOC-compliant genetic algorithm

    Genetic Algorithms (GA) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix, so the search space considered is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators to take into account the theoretical properties of the ECOC framework, which reduces the search space and lets the algorithm converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. An analysis of the results in terms of performance, code length and number of Support Vectors shows that the optimization process finds very efficient codes in terms of the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification results show that the proposal obtains similar or even better performance while using fewer dichotomizers and Support Vectors than state-of-the-art approaches.
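
    As background for the coding step being optimized, here is a minimal Python sketch of standard ECOC decoding with a hand-written coding matrix; the matrix and all names are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Rows of the coding matrix M are class codewords; each column defines
    # one binary dichotomizer trained to separate the +1 from the -1 classes.
    M = np.array([[+1, +1, -1, -1],    # class 0
                  [+1, -1, +1, -1],    # class 1
                  [-1, +1, +1, +1],    # class 2
                  [-1, -1, -1, +1]])   # class 3

    def ecoc_decode(outputs, M):
        """Assign the class whose codeword is nearest in Hamming distance
        to the vector of binary-classifier outputs."""
        dists = np.sum(M != np.sign(outputs), axis=1)
        return int(np.argmin(dists))

    # One dichotomizer fires incorrectly, yet the right class is recovered
    # because the rows of M are well separated; preserving that inter-row
    # distance is the kind of property the genetic operators must respect.
    outputs = np.array([+1, -1, +1, +1])   # class 1's codeword, bit 3 flipped
    print(ecoc_decode(outputs, M))          # -> 1
    ```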

    Decoding Across the Quantum LDPC Code Landscape

    We show that belief propagation combined with ordered statistics post-processing is a general decoder for quantum low density parity check codes constructed from the hypergraph product. To this end, we run numerical simulations of the decoder applied to three families of hypergraph product code: topological codes, fixed-rate random codes and a new class of codes that we call semi-topological codes. Our new code families share properties of both topological and random hypergraph product codes, with a construction that allows for a finely-controlled trade-off between code threshold and stabilizer locality. Our results indicate thresholds across all three families of hypergraph product code, and provide evidence of exponential suppression in the low error regime. For the Toric code, we observe a threshold of 9.9 ± 0.2%. This result improves upon previous quantum decoders based on belief propagation, and approaches the performance of the minimum weight perfect matching algorithm. We expect semi-topological codes to have the same threshold as Toric codes, as they are identical in the bulk, and we present numerical evidence supporting this observation. Comment: The code for the BP+OSD decoder used in this work can be found on GitHub: https://github.com/quantumgizmos/bp_os
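
    The hypergraph product construction underlying all three code families can be written down in a few lines. This NumPy sketch builds the X- and Z-check matrices from two random classical parity-check matrices and verifies the CSS commutation condition; it does not implement the paper's BP+OSD decoder, and the matrix sizes are arbitrary choices for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two classical parity-check matrices (random, for illustration only).
    H1 = rng.integers(0, 2, size=(3, 7))
    H2 = rng.integers(0, 2, size=(4, 8))
    r1, n1 = H1.shape
    r2, n2 = H2.shape

    # X- and Z-type stabilizer check matrices of the hypergraph product code.
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])

    # CSS requirement: X and Z stabilizers commute, i.e. HX @ HZ.T = 0 mod 2;
    # it holds because H1 (x) H2^T appears once in each summand and cancels.
    assert not np.any((HX @ HZ.T) % 2)
    print(HX.shape, HZ.shape)   # checks act on n1*n2 + r1*r2 qubits
    ```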

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result: a 120 × 120 photodiode sensor on a 30 µm × 30 µm pitch with 40 ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables
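
    The group-testing flavor of the readout can be illustrated with a toy Python decoder: a random binary interconnection matrix ORs pixel firings onto a small number of lines, and a naive cover decoder recovers the firing pixels from the OR pattern alone. The parameters and the random matrix are assumptions for the demo, not the paper's optimized matrices or fast decoding algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Each of `tests` TDC lines sees the OR of the pixels wired to it by a
    # binary interconnection matrix M.
    pixels, tests, max_firing = 400, 120, 4
    M = rng.random((tests, pixels)) < 0.2    # sparse random wiring (demo only)

    firing = rng.choice(pixels, size=max_firing, replace=False)
    outcome = M[:, firing].any(axis=1)       # which lines went high

    # Naive cover decoder: declare a pixel firing iff every line it is wired
    # to went high. This is exact whenever M is d-disjunct for d = max_firing;
    # a random M of this size is merely very likely to work, which suffices
    # for a demo (false negatives are impossible by construction).
    declared = np.where(np.all(outcome[:, None] >= M, axis=0))[0]
    print(sorted(firing), declared.tolist())  # almost surely the same set
    ```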