
    Secure, reliable, and efficient communication over the wiretap channel

    Secure wireless communication between devices is essential for modern communication systems. Physical-layer security over the wiretap channel may provide an additional level of secrecy beyond current cryptographic approaches. Given a sender Alice, a legitimate receiver Bob, and a malicious eavesdropper Eve, the wiretap channel arises when Eve experiences a worse signal-to-noise ratio than Bob. Previous studies of the wiretap channel have tended to make assumptions that ignore the reality of wireless communication. This thesis presents a study of short block-length codes with the aim of both reliability for Bob and confusion for Eve. The standard approach to wiretap coding is shown to be very inefficient for reliability. Quantifying Eve's confusion in terms of entropy remains unsolved in many cases, though it is possible for codes with a moderate-complexity trellis representation. Using error-rate arguments, error-correcting codes with steep performance curves turn out to be desirable for both reliability and confusion. Master's thesis in informatics (INF399, MAMN-INF, MAMN-PRO).
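
    The wiretap setup described above can be made concrete with a small simulation. The following is a minimal Python sketch (not from the thesis) that sends uncoded BPSK over two AWGN channels, giving Eve a lower SNR than Bob, and compares their empirical bit error rates; the SNR values and the helper name ber_bpsk_awgn are illustrative assumptions.

```python
# Minimal sketch (not from the thesis): uncoded BPSK over two AWGN channels,
# where the eavesdropper Eve sees a lower SNR than the legitimate receiver Bob.
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(snr_db: float, n_bits: int = 200_000) -> float:
    """Empirical bit error rate of uncoded BPSK at the given Eb/N0 (dB)."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits               # map 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (snr_db / 10.0)
    noise_std = np.sqrt(1.0 / (2.0 * ebn0))  # unit-energy symbols
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decided = (received < 0).astype(int)     # hard decision
    return float(np.mean(decided != bits))

snr_bob_db, snr_eve_db = 8.0, 2.0            # assumed values: Eve's channel is 6 dB worse
print(f"Bob BER: {ber_bpsk_awgn(snr_bob_db):.2e}")
print(f"Eve BER: {ber_bpsk_awgn(snr_eve_db):.2e}")
```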

    Algebraic Codes For Error Correction In Digital Communication Systems

    C. Shannon presented the theoretical conditions under which error-free communication is possible in the presence of noise. Subsequently, the notion of using error-correcting codes to mitigate the effects of noise in digital transmission was introduced by R. Hamming. Algebraic codes, codes described using powerful tools from algebra, came to the fore early in the search for good error-correcting codes. Many classes of algebraic codes now exist and are known to have the best properties of any known classes of codes. An error-correcting code can be described by three of its most important properties: length, dimension, and minimum distance. Given codes with the same length and dimension, the one with the largest minimum distance will provide better error correction. As a result, the research focuses on finding improved codes with better minimum distances than any known codes. Algebraic geometry codes are obtained from curves. They are a culmination of years of research into algebraic codes and generalise most known algebraic codes. Additionally, they have exceptional distance properties as their lengths become arbitrarily large. Algebraic geometry codes are studied in great detail, with special attention given to their construction and decoding. The practical performance of these codes is evaluated and compared with previously known codes in different communication channels. Furthermore, many new codes with better minimum distance than the best known codes of the same length and dimension are presented from a generalised construction of algebraic geometry codes. Goppa codes are also an important class of algebraic codes. A construction of binary extended Goppa codes is generalised to codes with nonbinary alphabets, and as a result many new codes are found. This construction is shown to be an efficient way to extend another well-known class of algebraic codes, BCH codes. A generic method of shortening codes whilst increasing the minimum distance is generalised. An analysis of this method reveals a close relationship with methods of extending codes. Some new codes from Goppa codes are found by exploiting this relationship. Finally, an extension method for BCH codes is presented and is shown to be as good as a well-known method of extension in certain cases.
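
    The length/dimension/minimum-distance description above can be illustrated with a small sketch (not from the thesis): it brute-forces the minimum distance of the binary [7, 4] Hamming code from a generator matrix and reports the number of errors the code is guaranteed to correct, t = floor((d - 1)/2). The particular generator matrix is an assumed textbook choice.

```python
# Minimal sketch (illustrative, not from the thesis): brute-force minimum distance
# of a small binary linear code.  A larger d means more correctable errors: t = (d-1)//2.
import itertools
import numpy as np

# Generator matrix of the [7, 4] binary Hamming code (one common systematic form).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=int)

k, n = G.shape
min_weight = n
for message in itertools.product([0, 1], repeat=k):
    if not any(message):
        continue                                  # skip the all-zero codeword
    codeword = np.mod(np.array(message) @ G, 2)
    min_weight = min(min_weight, int(codeword.sum()))

print(f"[n, k, d] = [{n}, {k}, {min_weight}]")    # expect d = 3
print(f"guaranteed correctable errors t = {(min_weight - 1) // 2}")
```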

    Parameterized Inapproximability of the Minimum Distance Problem over All Fields and the Shortest Vector Problem in All ℓp Norms

    Funding Information: M. Cheraghchi’s research was partially supported by the National Science Foundation under Grants No. CCF-2006455 and CCF-2107345. V. Guruswami’s research was supported in part by NSF grants CCF-2228287 and CCF-2210823 and a Simons Investigator award. J. Ribeiro’s research was supported by NOVA LINCS (UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia and by the NSF grants CCF-1814603 and CCF-2107347 and the following grants of Vipul Goyal: the NSF award 1916939, DARPA SIEVE program, a gift from Ripple, a DoE NETL award, a JP Morgan Faculty Fellowship, a PNC center for financial services innovation award, and a Cylab seed funding award. We prove that the Minimum Distance Problem (MDP) on linear codes over any fixed finite field, parameterized by the input distance bound, is W[1]-hard to approximate within any constant factor. We also prove analogous results for the parameterized Shortest Vector Problem (SVP) on integer lattices. Specifically, we prove that SVP in the ℓp norm is W[1]-hard to approximate within any constant factor for any fixed p > 1, and W[1]-hard to approximate within a factor approaching 2 for p = 1. (We show hardness under randomized reductions in each case.) These results answer the main questions left open (and explicitly posed) by Bhattacharyya, Bonnet, Egri, Ghoshal, Karthik C. S., Lin, Manurangsi, and Marx (Journal of the ACM, 2021) on the complexity of parameterized MDP and SVP. For MDP, they established similar hardness for binary linear codes and left the case of general fields open. For SVP in ℓp norms with p > 1, they showed inapproximability within some constant factor (depending on p) and left open showing such hardness for arbitrary constant factors. They also left open showing W[1]-hardness even of exact SVP in the ℓ1 norm.
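
    To make the SVP statement concrete, here is a toy sketch (unrelated to the paper's reductions): an exhaustive search for a shortest nonzero lattice vector under several ℓp norms on a tiny two-dimensional integer lattice. The basis matrix and coefficient bound are assumptions chosen so that brute force suffices for this example.

```python
# Minimal sketch (illustrative, not from the paper): exhaustive search for a shortest
# nonzero vector of a small integer lattice under different l_p norms.  The paper is
# about parameterized hardness of approximation; this only illustrates the problem
# statement on a toy instance where a small coefficient bound is enough.
import itertools
import numpy as np

B = np.array([[3, 1],
              [1, 2]])                        # rows are basis vectors (toy lattice)

def shortest_vector(basis: np.ndarray, p: float, bound: int = 10):
    """Search coefficient vectors in [-bound, bound]^n and return the l_p-shortest one."""
    n = basis.shape[0]
    best_vec, best_len = None, float("inf")
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue                          # exclude the zero vector
        v = np.array(coeffs) @ basis
        length = np.linalg.norm(v, ord=p)
        if length < best_len:
            best_vec, best_len = v, length
    return best_vec, best_len

for p in (1, 2, np.inf):
    vec, length = shortest_vector(B, p)
    print(f"p = {p}: shortest vector {vec}, l_p length {length:.3f}")
```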

    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges need to be addressed: the error-correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error-correcting capability, it is a challenge to find decoding algorithms for these coding schemes. Generally, increasing the length of a block code increases its error-correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes yet keep their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm. Research has generally focused on improving bit flipping decoding for Low-Density Parity-Check (LDPC) codes. In this study we develop a new decoding algorithm based on syndrome checking and bit flipping for binary product codes, to address the major challenge of coding systems, i.e., developing codes with a large error-correcting capability yet a low decoding complexity. Simulation results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in bit error rate (BER) and, more significantly, in word error rate (WER) performance. The algorithm offers complexity comparable to that of the conventional algorithm in the Rayleigh fading channel.
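
    The syndrome-checking-plus-bit-flipping idea can be illustrated with a minimal sketch (not the thesis's algorithm): a tiny binary product code built from [4, 3] single-parity-check row and column codes, where a failing row check and a failing column check intersect at the bit to flip. The encode/decode helpers are assumed names used only for illustration.

```python
# Minimal sketch (illustrative, not the thesis algorithm): syndrome-based single-error
# correction for a tiny binary product code built from [4, 3] single-parity-check row
# and column codes.  A failing row check and a failing column check intersect at the
# flipped bit, which is the core idea behind bit flipping on product codes.
import numpy as np

def encode(info: np.ndarray) -> np.ndarray:
    """Append a parity bit to every row and every column of a k x k information array."""
    k = info.shape[0]
    code = np.zeros((k + 1, k + 1), dtype=int)
    code[:k, :k] = info
    code[:k, k] = info.sum(axis=1) % 2         # row parities
    code[k, :] = code[:k, :].sum(axis=0) % 2   # column parities (incl. check on checks)
    return code

def decode(received: np.ndarray) -> np.ndarray:
    """Correct a single flipped bit by intersecting the failing row and column checks."""
    word = received.copy()
    row_syndrome = word.sum(axis=1) % 2        # 1 where a row parity check fails
    col_syndrome = word.sum(axis=0) % 2        # 1 where a column parity check fails
    bad_rows, bad_cols = np.flatnonzero(row_syndrome), np.flatnonzero(col_syndrome)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        word[bad_rows[0], bad_cols[0]] ^= 1    # flip the bit both checks point at
    return word

info = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0]])
sent = encode(info)
received = sent.copy()
received[1, 2] ^= 1                            # inject a single bit error
assert np.array_equal(decode(received), sent)
print("single error corrected")
```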

    Novel Polynomial Basis and Its Application to Reed-Solomon Erasure Codes

    In this paper, we present a new basis of polynomials over finite fields of characteristic two and then apply it to the encoding/decoding of Reed-Solomon erasure codes. The proposed polynomial basis allows h-point polynomial evaluation to be computed in O(h log₂(h)) finite field operations with a small leading constant. Compared with the canonical polynomial basis, the proposed basis improves the arithmetic complexity of addition, multiplication, and the determination of polynomial degree from O(h log₂(h) log₂ log₂(h)) to O(h log₂(h)). Based on this basis, we then develop encoding and erasure decoding algorithms for (n = 2^r, k) Reed-Solomon codes. Thanks to the efficiency of the transform based on the polynomial basis, the encoding can be completed in O(n log₂(k)) finite field operations, and the erasure decoding in O(n log₂(n)) finite field operations. To the best of our knowledge, this is the first approach supporting Reed-Solomon erasure codes over characteristic-2 finite fields while achieving a complexity of O(n log₂(n)) in both additive and multiplicative complexities. As the complexity leading factor is small, the algorithms are advantageous in practical applications.
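
    For contrast with the fast transform described above, the following is a minimal sketch (not the paper's algorithm) of textbook Reed-Solomon erasure coding by naive polynomial evaluation and Lagrange interpolation over the small prime field GF(257). It has the straightforward O(k·n) cost rather than the paper's O(n log₂(n)), and the prime field is an assumption made for simplicity in place of the characteristic-two fields the paper targets.

```python
# Minimal sketch (illustrative, not the paper's algorithm): Reed-Solomon erasure coding
# by naive polynomial evaluation and Lagrange interpolation over the prime field GF(257).
# The paper works over characteristic-two fields with a special basis to reach
# O(n log2 n); this is the O(k * n) textbook construction for comparison.

P = 257  # a small prime field; the paper instead uses GF(2^r)

def encode(message, n):
    """Treat the k message symbols as polynomial coefficients; codeword = evaluations at 0..n-1."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P for x in range(n)]

def decode(points, k):
    """Recover the k coefficients from any k surviving (position, value) pairs by Lagrange interpolation."""
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for j in range(k):
        basis = [1]   # coefficients of the Lagrange basis polynomial L_j(x), built incrementally
        denom = 1
        for m in range(k):
            if m == j:
                continue
            # multiply basis by (x - xs[m])
            basis = [((basis[i - 1] if i > 0 else 0)
                      - xs[m] * (basis[i] if i < len(basis) else 0)) % P
                     for i in range(len(basis) + 1)]
            denom = denom * (xs[j] - xs[m]) % P
        scale = ys[j] * pow(denom, P - 2, P) % P   # divide by denom via Fermat inverse
        for i in range(k):
            coeffs[i] = (coeffs[i] + scale * basis[i]) % P
    return coeffs

k, n = 4, 7
message = [10, 20, 30, 40]
codeword = encode(message, n)
survivors = [(x, codeword[x]) for x in (0, 2, 5, 6)]   # any k of the n symbols survive
assert decode(survivors, k) == message
print("erasures recovered")
```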