566 research outputs found

    A Novel Stochastic Decoding of LDPC Codes with Quantitative Guarantees

    Low-density parity-check (LDPC) codes, a class of capacity-approaching linear codes, are particularly recognized for their efficient decoding scheme. The decoding scheme, known as the sum-product algorithm, is an iterative procedure that passes messages between the variable and check nodes of the factor graph. The sum-product algorithm is fully parallelizable, owing to the fact that all messages can be updated concurrently. However, since it requires an extensive number of highly interconnected wires, a fully parallel implementation of the sum-product algorithm on chip is exceedingly challenging. Stochastic decoding algorithms, which exchange binary messages, are of great interest for mitigating this challenge and have been the focus of extensive research over the past decade. They significantly reduce the required wiring and the computational complexity of the message-passing algorithm. Even though stochastic decoders have been shown to be extremely effective in practice, theoretical understanding of such algorithms remains largely limited. Our main objective in this paper is to address this issue. We first propose a novel algorithm, referred to as Markov-based stochastic decoding. Then, we provide concrete quantitative guarantees on its performance for tree-structured as well as general factor graphs. More specifically, we provide upper bounds on the first and second moments of the error, showing that the proposed algorithm is an asymptotically consistent estimate of the sum-product algorithm. We also validate our theoretical predictions with experimental results, showing that we achieve performance comparable to other practical stochastic decoders.
    Comment: This paper has been submitted to IEEE Transactions on Information Theory on May 24th 201
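
    To make the message-passing structure concrete, here is a minimal Python sketch of the baseline sum-product (belief propagation) decoder over a toy parity-check matrix, assuming BPSK transmission over an AWGN channel. The matrix, noise variance, and received vector are illustrative, and this is the standard sum-product algorithm, not the Markov-based stochastic decoder proposed in the paper.

        import numpy as np

        # Toy parity-check matrix (illustrative, not from the paper).
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 0, 0, 1, 1]])

        def sum_product_decode(llr, H, max_iters=50):
            # Iterative sum-product decoding from channel LLRs.
            m, n = H.shape
            v2c = H * llr  # variable-to-check messages, initialized to LLRs
            for _ in range(max_iters):
                # Check-to-variable update: tanh rule over the other neighbours.
                c2v = np.zeros_like(v2c, dtype=float)
                for i in range(m):
                    idx = np.where(H[i])[0]
                    t = np.tanh(v2c[i, idx] / 2.0)
                    for pos, j in enumerate(idx):
                        prod = np.clip(np.prod(np.delete(t, pos)), -0.999999, 0.999999)
                        c2v[i, j] = 2.0 * np.arctanh(prod)
                # Variable update: total belief minus each check's own message.
                total = llr + c2v.sum(axis=0)
                v2c = H * (total - c2v)
                hard = (total < 0).astype(int)
                if not np.any(H @ hard % 2):  # all parity checks satisfied
                    break
            return hard

        # All-zero codeword sent as BPSK (+1); bit 3 received in error.
        rx = np.array([0.9, 1.1, 0.8, -0.2, 1.0, 0.7])
        llr = 2.0 * rx / 0.5**2  # LLRs for an assumed noise variance of 0.25
        print(sum_product_decode(llr, H))  # -> [0 0 0 0 0 0]

    Every message in each half-iteration depends only on the previous half-iteration, which is the concurrency that makes the algorithm fully parallelizable in hardware.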

    Applied Advanced Error Control Coding for General Purpose Representation and Association Machine Systems

    The General-Purpose Representation and Association Machine (GPRAM) is proposed to focus on computation in terms of variation and flexibility rather than precision and speed. A GPRAM system uses vague representations and has no predefined tasks. Drawing on several important lessons from error control coding, neuroscience, and the human visual system, we investigate several types of error control codes, including Hamming codes and Low-Density Parity-Check (LDPC) codes, and extend them in different directions. In conventional error control codes, only the XOR logic gate is used to connect nodes. Inspired by bio-systems and Turbo codes, we propose and study non-linear codes with expanded operations, such as codes that include AND and OR gates, which raises the problem of prior-probability mismatch. Prior discussions of the critical challenges in designing codes and iterative decoders for non-equiprobable symbols may pave the way for a more comprehensive understanding of bio-signal processing.

    The limitation of the XOR operation in iterative decoding with non-equiprobable symbols is described; it can potentially be resolved by applying a quasi-XOR operation and an intermediate transformation layer. However, codes for non-equiprobable symbols constructed with the former approach do not perform satisfactorily in terms of error-correction capability. Probabilistic messages for the sum-product algorithm using XOR, AND, and OR operations with non-equiprobable symbols are then computed. The primary motivation for constructing these codes is to establish the GPRAM system rather than to perform error control coding per se. The GPRAM system is built by applying a variety of operations over a substantially over-complete basis, and it is capable of continuously achieving better and simpler approximations for complex tasks. Approaches to decoding LDPC codes with non-equiprobable binary symbols are discussed in light of the aforementioned prior-probability mismatch. The traditional Tanner graph must be modified because messages passed from check nodes to information bits differ from those passed to parity check bits; that is, the two directions of message passing are identical in a conventional Tanner graph, whereas the forward and backward messages differ in our case. A method of optimizing the signal constellation to maximize the channel mutual information is also described.

    A simple Image Processing Unit (IPU) structure taking images as input is proposed for the GPRAM system. The IPU consists of a randomly constructed LDPC code, an iterative decoder, a switch, and a scaling and decision device. The quality of the input images is severely degraded to mimic the visual information variability (VIV) experienced in human visual systems. The IPU is capable of (a) reliably recognizing digits in images of extremely poor quality; (b) achieving hyper-acuity performance similar to that of the human visual system; and (c) significantly improving the recognition rate by applying a randomly constructed LDPC code that is not specifically optimized for the tasks.
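
    The prior-probability mismatch mentioned above can be illustrated by propagating bit probabilities through each gate type. The short Python sketch below computes P(output = 1) for XOR, AND, and OR constraint nodes given independent inputs with the stated P(bit = 1); it shows that XOR rapidly loses the bias of non-equiprobable inputs, while AND and OR drive the output probability toward 0 or 1. These are elementary probability identities, not the quasi-XOR operation or the transformation layer from the dissertation.

        # p denotes P(bit = 1) for each independent input bit.
        def p_xor(ps):
            # Pairwise XOR combining: p (+) q = p*(1-q) + q*(1-p).
            out = 0.0
            for p in ps:
                out = out * (1 - p) + p * (1 - out)
            return out

        def p_and(ps):
            # P(AND = 1) is the product of the individual probabilities.
            out = 1.0
            for p in ps:
                out *= p
            return out

        def p_or(ps):
            # P(OR = 1) = 1 - P(all inputs are zero).
            out = 1.0
            for p in ps:
                out *= 1 - p
            return 1 - out

        biased = [0.9] * 4  # strongly non-equiprobable inputs
        print(p_xor(biased))  # 0.2952: the XOR output drifts toward 1/2
        print(p_and(biased))  # 0.6561: AND drives P(1) toward 0 as inputs grow
        print(p_or(biased))   # 0.9999: OR drives P(1) toward 1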

    Video over DSL with LDGM Codes for Interactive Applications

    Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculating the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which rearranges the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
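
    As a minimal sketch, assuming a systematic LDGM construction: each parity bit is the XOR of a few randomly chosen message bits, and burst resilience is improved by permuting the transmission order of the codeword symbols. The code size, degree, and stride below are illustrative, and the stride permutation is only a simple stand-in for the parity-check-matrix column rearrangement described in the paper.

        import random

        def make_ldgm(k, m, degree=3, seed=1):
            # Sparse generator: each parity bit XORs `degree` random message bits.
            rng = random.Random(seed)
            return [rng.sample(range(k), degree) for _ in range(m)]

        def encode(msg, gen):
            # Systematic LDGM encoding: message bits followed by parity bits.
            parity = [sum(msg[j] for j in taps) % 2 for taps in gen]
            return msg + parity

        k, m = 12, 6
        gen = make_ldgm(k, m)
        src = random.Random(2)
        msg = [src.randrange(2) for _ in range(k)]
        codeword = encode(msg, gen)

        # Send symbols in stride-permuted order so a contiguous channel burst
        # lands on codeword positions that are far apart.
        n, stride = len(codeword), 5  # stride chosen coprime with n = 18
        order = [(i * stride) % n for i in range(n)]
        sent = [codeword[i] for i in order]
        print(codeword)
        print(sent)

    Because the generator matrix is sparse, each parity bit touches only a few message bits, which is what keeps the encoding cost low enough for interactive use.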

    Finding Skewed Subcubes Under a Distribution

    Say that we are given samples from a distribution D over an n-dimensional space. We expect or desire D to behave like a product distribution (or a k-wise independent distribution over its marginals for small k). We propose the problem of enumerating/list-decoding all large subcubes where the distribution D deviates markedly from what we expect; we refer to such subcubes as skewed subcubes. Skewed subcubes are certificates of dependencies between small subsets of variables in D. We motivate this problem by showing that it arises naturally in the context of algorithmic fairness and anomaly detection. In this work we focus on the special but important case where the space is the Boolean hypercube and the expected marginals are uniform. We show that the obvious definition of skewed subcubes can lead to intractable list sizes, and propose a better definition of minimal skewed subcubes: subcubes whose skew cannot be attributed to a larger subcube containing them. Our main technical contribution is a list-size bound for this definition and an algorithm to efficiently find all such subcubes. Both the bound and the algorithm rely on Fourier-analytic techniques, especially the powerful hypercontractive inequality. On the lower-bound side, we show that finding skewed subcubes is as hard as the sparse noisy parity problem, so our algorithms cannot be improved substantially without a breakthrough on that problem, which is believed to be intractable. Motivated by this, we study alternate models allowing query access to D, where finding skewed subcubes might be easier.
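
    For intuition, skew can be measured directly from samples: fix k coordinates to chosen values and compare the empirical mass of the resulting subcube with the uniform expectation 2^-k. The brute-force Python sketch below does this for a planted pairwise dependency. It is exponential in the subcube dimension and implements neither the paper's Fourier-analytic algorithm nor its minimality condition; the threshold and sizes are illustrative.

        import itertools
        import numpy as np

        def skewed_subcubes(samples, k_max=2, threshold=0.1):
            # Report subcubes whose empirical mass deviates from 2**-k by at
            # least `threshold` (naive reference search, no minimality check).
            n = samples.shape[1]
            found = []
            for k in range(1, k_max + 1):
                for coords in itertools.combinations(range(n), k):
                    for vals in itertools.product([0, 1], repeat=k):
                        inside = np.all(samples[:, list(coords)] == vals, axis=1)
                        skew = inside.mean() - 2.0 ** -k
                        if abs(skew) >= threshold:
                            found.append((coords, vals, round(skew, 3)))
            return found

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(4000, 6))
        X[:, 3] = X[:, 1]  # planted dependency: coordinate 3 copies coordinate 1
        for cube in skewed_subcubes(X):
            print(cube)  # only subcubes fixing both coordinates 1 and 3 appear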

    Efficient modular arithmetic units for low power cryptographic applications

    The demand for high security in energy-constrained devices such as mobile phones and PDAs is growing rapidly. This leads to the need for efficient designs of cryptographic algorithms which offer data integrity, authentication, non-repudiation, and confidentiality of the encrypted data and communication channels. Public key cryptography is an ideal choice for data integrity, authentication, and non-repudiation, whereas private key cryptography ensures the confidentiality of the transmitted data. The latter has an extremely high encryption speed, but it has certain limitations which make it unsuitable for certain applications. Numerous public key cryptographic algorithms are available in the literature which comprise modular arithmetic modules such as modular addition, multiplication, inversion, and exponentiation. Recently, numerous cryptographic algorithms based on modular arithmetic have been proposed which are scalable, operate on words, and are efficient in various respects. The modular arithmetic modules play a crucial role in the overall performance of a cryptographic processor; hence, better results can be obtained by designing efficient arithmetic modules for modular addition, multiplication, exponentiation, and squaring. This thesis is organized into three papers. The first paper describes the efficient implementation of modular arithmetic units and the application of these modules in the International Data Encryption Algorithm (IDEA). The second paper describes the implementation of the IDEA algorithm using existing techniques and using the proposed efficient modular units. The third paper describes the fault-tolerant design of a modular unit with online self-checking capability --Abstract, page iv
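
    As an example of the arithmetic such units implement, IDEA combines three standard operations on 16-bit words: XOR, addition modulo 2^16, and multiplication modulo 2^16 + 1, where the all-zero word represents 2^16 in the multiplicative group. The Python sketch below follows these textbook definitions; it illustrates the arithmetic only and is not the hardware design from the thesis.

        M16 = 1 << 16  # 2**16

        def idea_mul(a, b):
            # Multiplication modulo 2**16 + 1 on 16-bit words; 0 encodes 2**16.
            a = a or M16
            b = b or M16
            return (a * b) % (M16 + 1) % M16  # the residue 2**16 maps back to 0

        def idea_add(a, b):
            # Addition modulo 2**16.
            return (a + b) % M16

        print(hex(idea_mul(0x0001, 0xFFFF)))  # 0xffff
        print(hex(idea_add(0xFFFF, 0x0002)))  # 0x1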