
    Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity

    The work identifies the first lattice decoding solution that achieves, in the general outage-limited MIMO setting and in the high-rate and high-SNR limit, both a vanishing gap to the error performance of the (DMT-optimal) exact solution of preprocessed lattice decoding and a computational complexity that is subexponential in the number of codeword bits. The proposed solution employs lattice reduction (LR)-aided regularized (lattice) sphere decoding together with proper timeout policies. These performance and complexity guarantees hold for most MIMO scenarios, all reasonable fading statistics, all channel dimensions and all full-rate lattice codes. In sharp contrast to this manageable complexity, the complexity of other standard preprocessed lattice decoding solutions is shown here to be extremely high. Specifically, the work is the first to quantify the complexity of these lattice (sphere) decoding solutions and to prove the surprising result that the complexity required to achieve a certain rate-reliability performance is exponential in the lattice dimensionality and in the number of codeword bits, and in fact matches, in common scenarios, the complexity of ML-based solutions. Through this sharp contrast, the work is able, for the first time, to rigorously quantify the pivotal role of lattice reduction as a special complexity-reducing ingredient. Finally, the work analytically refines transceiver DMT analysis, which generally fails to address potentially massive gaps between theory and practice. The adopted vanishing-gap condition instead guarantees that the decoder's error curve is arbitrarily close, given a sufficiently high SNR, to the optimal error curve of exact solutions; this is a much stronger condition than DMT optimality, which only guarantees an error gap that is subpolynomial in SNR and can thus be unbounded and generally unacceptable in practical settings.
    Comment: 16 pages; submitted to IEEE Trans. Inform. Theory
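
    To make the contrast concrete, the two conditions can be written as follows; the notation P_dec(rho) and P_ex(rho) for the error probabilities of the proposed decoder and of the exact preprocessed lattice decoder at SNR rho is introduced here for illustration and is not taken from the paper.

\[
  \text{vanishing gap:}\qquad
  \lim_{\rho\to\infty}\frac{P_{\mathrm{dec}}(\rho)}{P_{\mathrm{ex}}(\rho)} = 1,
\]
\[
  \text{DMT optimality only:}\qquad
  \frac{P_{\mathrm{dec}}(\rho)}{P_{\mathrm{ex}}(\rho)} \le \rho^{\epsilon}
  \quad\text{for every } \epsilon>0 \text{ and sufficiently large } \rho,
\]

    so under DMT optimality alone the ratio is subpolynomial in rho but may still grow without bound, which is exactly the gap between theory and practice the vanishing-gap condition rules out.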

    Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations

    This paper establishes information-theoretic limits on estimating a low-rank matrix over a finite field from random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and that the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse, a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) entries, the number of measurements required is the same as when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the above results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one. Finally, we provide a non-exhaustive procedure to search for the unknown low-rank matrix.
    Comment: Accepted to the IEEE Transactions on Information Theory; presented at the IEEE International Symposium on Information Theory (ISIT) 201
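
    As a minimal sketch of what the minimum-rank decoder computes, the following toy implementation works over GF(2) with tiny matrices and an exhaustive search; the field, sizes and brute-force strategy are chosen here purely for illustration and are not the paper's (non-exhaustive) procedure.

# Toy minimum-rank decoding over GF(2): given measurements y_i = <A_i, X> (mod 2)
# of an unknown n x n matrix X, return the lowest-rank matrix consistent with them.
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2       # eliminate below and above
        rank += 1
    return rank

def min_rank_decode(sensing, y, n):
    """Exhaustive minimum-rank decoder: try all 2^(n*n) candidate matrices."""
    best, best_rank = None, n + 1
    for bits in itertools.product([0, 1], repeat=n * n):
        X = np.array(bits, dtype=np.int64).reshape(n, n)
        # keep X only if it reproduces every measurement
        if all(int((A * X).sum()) % 2 == yi for A, yi in zip(sensing, y)):
            r = gf2_rank(X)
            if r < best_rank:
                best, best_rank = X, r
    return best, best_rank

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 3, 7                                   # tiny instance for illustration
    X_true = (rng.integers(0, 2, (n, 1)) @ rng.integers(0, 2, (1, n))) % 2  # rank <= 1
    sensing = [rng.integers(0, 2, (n, n)) for _ in range(m)]
    y = [int((A * X_true).sum()) % 2 for A in sensing]
    X_hat, r = min_rank_decode(sensing, y, n)
    print("recovered rank:", r, "matches truth:", bool((X_hat == X_true).all()))

    Whether the recovered matrix equals the true one depends on the number of measurements m, which is exactly the quantity whose necessary and sufficient scaling the paper characterizes.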

    Quantum Fourier sampling, Code Equivalence, and the quantum security of the McEliece and Sidelnikov cryptosystems

    The Code Equivalence problem is that of determining whether two given linear codes are equivalent to each other up to a permutation of the coordinates. This problem has a direct reduction to a nonabelian hidden subgroup problem (HSP), suggesting a possible quantum algorithm analogous to Shor's algorithms for factoring or discrete log. However, we recently showed that in many cases of interest, including Goppa codes, solving this case of the HSP requires rich, entangled measurements. Thus, solving these cases of Code Equivalence via Fourier sampling appears to be out of reach of current families of quantum algorithms. Code Equivalence is directly related to the security of McEliece-type cryptosystems in the case where the private code is known to the adversary. However, for many codes the support splitting algorithm of Sendrier provides a classical attack in this case. We revisit the claims of our previous article in the light of these classical attacks, and discuss the particular case of the Sidelnikov cryptosystem, which is based on Reed-Muller codes.
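
    For concreteness, the decision problem itself can be stated as the following toy check over GF(2), which decides equivalence of two small binary codes by brute force over coordinate permutations; this exponential-time sketch is purely illustrative and has nothing to do with the quantum Fourier sampling approach or with Sendrier's support splitting algorithm.

# Toy Code Equivalence check over GF(2): are two linear codes equal up to a
# permutation of coordinates? (Brute force; only feasible for tiny lengths.)
import itertools
import numpy as np

def codewords(G):
    """All codewords of the binary linear code spanned by the rows of G."""
    k, n = G.shape
    msgs = itertools.product([0, 1], repeat=k)
    return {tuple((np.array(m) @ G) % 2) for m in msgs}

def equivalent(G1, G2):
    """True iff the two codes are equal up to a permutation of coordinates."""
    C1, C2 = codewords(G1), codewords(G2)
    if len(C1) != len(C2):
        return False
    n = G1.shape[1]
    for perm in itertools.permutations(range(n)):
        if {tuple(c[i] for i in perm) for c in C1} == C2:
            return True
    return False

if __name__ == "__main__":
    # a [4,2] toy code and a column-permuted copy of it
    G1 = np.array([[1, 0, 1, 1],
                   [0, 1, 0, 1]])
    G2 = G1[:, [2, 0, 3, 1]]        # permute the coordinates
    print(equivalent(G1, G2))       # expected: True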