
    RS + LDPC-Staircase Codes for the Erasure Channel: Standards, Usage and Performance

    Application-Level Forward Erasure Correction (AL-FEC) codes are a key element of telecommunication systems. They are used to recover from packet losses when retransmissions are not feasible and to optimize the large-scale distribution of content. In this paper we introduce Reed-Solomon/LDPC-Staircase codes, two complementary AL-FEC codes that have recently been recognized as superior to Raptor codes in the context of the 3GPP-eMBMS call for technology [1]. After a brief introduction to the codes, we explain how to design high-performance codecs, which is a key aspect when targeting embedded systems with limited CPU/battery capacity. Finally, we present the performance of these codes in terms of erasure correction capability and encoding/decoding speed, taking advantage of the 3GPP-eMBMS results, where they were ranked first.
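    The paper benchmarks production codecs; as a rough illustration of the underlying idea of erasure coding over GF(2^8), here is a minimal non-systematic Reed-Solomon-style sketch in Python: coded packet j is the per-byte evaluation at a distinct point x_j of the polynomial whose coefficients are the source packets, and any k surviving packets are recovered by solving the resulting Vandermonde system. The primitive polynomial 0x11D and all names are illustrative assumptions, not the RS/LDPC-Staircase codecs evaluated in the paper.

        # GF(2^8) log/antilog tables, primitive polynomial x^8+x^4+x^3+x^2+1 (0x11D).
        GF_EXP, GF_LOG = [0] * 510, [0] * 256
        x = 1
        for i in range(255):
            GF_EXP[i], GF_LOG[x] = x, i
            x <<= 1
            if x & 0x100:
                x ^= 0x11D
        for i in range(255, 510):
            GF_EXP[i] = GF_EXP[i - 255]

        def gf_mul(a, b):
            return GF_EXP[GF_LOG[a] + GF_LOG[b]] if a and b else 0

        def gf_pow(a, e):
            return GF_EXP[GF_LOG[a] * e % 255] if e else 1

        def gf_inv(a):
            return GF_EXP[255 - GF_LOG[a]]

        def encode(source, n):
            """Encode k equal-length source packets into n coded packets:
            packet j holds the per-byte evaluations at x_j = j + 1."""
            k, size = len(source), len(source[0])
            coded = []
            for j in range(n):
                coeffs = [gf_pow(j + 1, i) for i in range(k)]
                pkt = [0] * size
                for i in range(k):
                    for b in range(size):
                        pkt[b] ^= gf_mul(coeffs[i], source[i][b])
                coded.append(pkt)
            return coded

        def decode(received, k):
            """Recover the k source packets from any k (index, packet) pairs
            by Gauss-Jordan elimination on the Vandermonde system."""
            A = [[gf_pow(j + 1, i) for i in range(k)] for j, _ in received]
            Y = [list(pkt) for _, pkt in received]
            for col in range(k):
                piv = next(r for r in range(col, k) if A[r][col])
                A[col], A[piv], Y[col], Y[piv] = A[piv], A[col], Y[piv], Y[col]
                inv = gf_inv(A[col][col])
                A[col] = [gf_mul(inv, v) for v in A[col]]
                Y[col] = [gf_mul(inv, v) for v in Y[col]]
                for r in range(k):
                    if r != col and A[r][col]:
                        f = A[r][col]
                        A[r] = [a ^ gf_mul(f, b) for a, b in zip(A[r], A[col])]
                        Y[r] = [y ^ gf_mul(f, b) for y, b in zip(Y[r], Y[col])]
            return Y

        # 4 source packets encoded into 6: any 2 losses are recoverable.
        src = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
        pkts = encode(src, 6)
        assert decode([(j, pkts[j]) for j in (0, 2, 4, 5)], 4) == src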

    Structured Random Linear Codes (SRLC): Bridging the Gap between Block and Convolutional Codes

    Several types of AL-FEC (Application-Level FEC) codes for the packet erasure channel exist. Random Linear Codes (RLC), where redundancy packets consist of random linear combinations of source packets over a certain finite field, are a simple yet efficient coding technique, massively used for instance in Network Coding applications. However, the price to pay is a high encoding and decoding complexity, especially when working over GF(2^8), which seriously limits the number of packets in the encoding window. In contrast, structured block codes have been designed for situations where the set of source packets is known in advance, for instance in file transfer applications. Here the encoding and decoding complexity is kept under control, even for huge block sizes, thanks to the sparse nature of the code and advanced decoding techniques that exploit this sparseness (e.g., Structured Gaussian Elimination). But their design also prevents their use in convolutional use-cases featuring an encoding window that slides over a continuous stream of incoming packets. In this work we try to bridge the gap between these two code classes, bringing some structure to RLC codes in order to enlarge the set of use-cases where they can be used efficiently: in convolutional mode (as with any RLC code), but also in block mode with tiny, medium, or large block sizes. We also demonstrate how to design compact signaling for these codes (for encoder/decoder synchronization), which is an essential practical aspect.
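    As a toy illustration of plain RLC, the unstructured baseline that SRLC starts from, here is a Python sketch over GF(2): a packet is an int used as a bit vector, a repair symbol is the XOR of a random subset of the encoding window, and decoding is incremental Gaussian elimination. All names are hypothetical, and the paper's codes also operate over larger fields such as GF(2^8).

        import random

        def rlc_encode(window, rng=random):
            """One repair symbol: (random GF(2) coefficient bitmask,
            XOR of the selected source packets)."""
            k = len(window)
            coeffs = rng.getrandbits(k) or 1      # avoid the all-zero combination
            payload = 0
            for i in range(k):
                if (coeffs >> i) & 1:
                    payload ^= window[i]
            return coeffs, payload

        def rlc_decode(symbols, k):
            """Recover the k source packets from (coeffs, payload) pairs by
            incremental Gaussian elimination; None if the rank is < k."""
            pivots = {}                            # pivot bit -> reduced row
            for coeffs, payload in symbols:
                while coeffs:                      # forward elimination
                    low = coeffs & -coeffs         # lowest set bit
                    if low not in pivots:
                        pivots[low] = (coeffs, payload)
                        break
                    c, p = pivots[low]
                    coeffs, payload = coeffs ^ c, payload ^ p
            if len(pivots) < k:
                return None
            units = {}                             # back-substitute, high to low
            for low in sorted(pivots, reverse=True):
                c, p = pivots[low]
                for q, (_, up) in units.items():
                    if c & q:
                        c, p = c ^ q, p ^ up
                units[low] = (c, p)
            return [units[1 << i][1] for i in range(k)]

        # 4 source packets, 6 random combinations: decoding succeeds whenever
        # the random coefficient vectors happen to span GF(2)^4.
        src = [0xDEAD, 0xBEEF, 0xC0DE, 0xF00D]
        print(rlc_decode([rlc_encode(src) for _ in range(6)], 4))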

    Subquadratic computation of vector generating polynomials and improvement of the block Wiedemann algorithm

    This paper describes a new algorithm for computing linear generators (vector generating polynomials) for matrix sequences, running in subquadratic time. The algorithm applies in particular to the sequential stage of Coppersmith's block Wiedemann algorithm. Experiments showed that our method can be substituted for the quadratic one proposed by Coppersmith, yielding important speedups even for realistic matrix sizes. The base fields we were interested in were finite fields of large characteristic. As an example, we were able to compute a linear generator for a sequence of 4x4 matrices of length 242 304 defined over GF(2^607 - 1) in less than two days on a single 667 MHz Alpha EV67 CPU.
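    For intuition, the scalar analogue of the quadratic baseline is the classic Berlekamp-Massey algorithm, which finds the minimal linear recurrence of a sequence over a finite field. The Python sketch below, over a prime field GF(p), illustrates only the problem being solved (whose matrix generalization the paper makes subquadratic), not the paper's algorithm.

        def berlekamp_massey(seq, p):
            """Minimal connection polynomial C(x) = 1 + c1*x + ... + cL*x^L
            (coefficient list) with sum_i C[i]*s_{n-i} = 0 over GF(p)."""
            C, B = [1], [1]          # current and previous candidates
            L, m, b = 0, 1, 1        # recurrence length, shift, last discrepancy
            for n, s in enumerate(seq):
                d = (s + sum(C[i] * seq[n - i] for i in range(1, L + 1))) % p
                if d == 0:
                    m += 1
                    continue
                T = C[:]
                coef = d * pow(b, p - 2, p) % p        # d / b mod p
                C = C + [0] * max(0, len(B) + m - len(C))
                for i, bi in enumerate(B):             # C -= (d/b) * x^m * B
                    C[i + m] = (C[i + m] - coef * bi) % p
                if 2 * L <= n:
                    L, B, b, m = n + 1 - L, T, d, 1
                else:
                    m += 1
            return C

        # Fibonacci mod 101 satisfies s_n - s_{n-1} - s_{n-2} = 0:
        fib = [1, 1]
        for _ in range(20):
            fib.append((fib[-1] + fib[-2]) % 101)
        print(berlekamp_massey(fib, 101))   # [1, 100, 100], i.e. 1 - x - x^2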

    Solution of Large Sparse System of Linear Equations over GF(2) on a Multi Node Multi GPU Platform

    We provide an efficient multi-node, multi-GPU implementation of the Block Wiedemann Algorithm (BWA) to find the solution of a large sparse system of linear equations over GF(2). One of the important applications of solving such systems arises in most integer factorization algorithms, such as the Number Field Sieve. In this paper, we describe how hybrid parallelization can be adapted to speed up the most time-consuming stage of the BWA, sequence generation. This stage involves generating a sequence of matrix-matrix products and matrix transpose-matrix products where the matrices are very large, highly sparse, and have entries over GF(2). We describe a GPU-accelerated parallel method for computing these matrix-matrix products using techniques such as row-wise distribution of the first matrix over the multi-node, multi-GPU platform with MPI and CUDA, and word-wise XORing of rows of the second matrix. We also describe the hybrid parallelization of the matrix transpose-matrix product computation, where we divide both matrices row-wise into equal-sized blocks using MPI; after a GPU-accelerated matrix transpose-matrix product generation, we combine the blocks with the MPI_BXOR operation in MPI_Reduce to obtain the result. The performance of the hybrid parallelization of the sequence generation step on a hybrid cluster using multiple GPUs is compared with parallelization on multiple MPI processors alone. We have used this hybrid parallel sequence generation tool for benchmarking an HPC cluster. Detailed timings of the complete solution of the Number Field Sieve matrices of RSA-130, RSA-140, and RSA-170 are also compared, using up to 4 NVIDIA V100 GPUs of a DGX station. We obtained a speedup of 2.8 on 4 V100 GPUs relative to a single GPU.
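    The core kernel is easy to show at toy scale: with each row of the dense operand packed into a machine word (here, a Python int), a sparse GF(2) matrix-matrix product reduces to whole-row XORs. This CPU sketch only illustrates the word-wise XOR idea; the paper distributes the rows over nodes and GPUs with MPI and CUDA, and all names here are illustrative.

        def spmm_gf2(sparse_rows, dense_rows):
            """A*B over GF(2): sparse_rows[i] lists the 1-columns of row i
            of A; dense_rows[j] is row j of B packed into an int."""
            out = []
            for cols in sparse_rows:
                acc = 0
                for j in cols:
                    acc ^= dense_rows[j]     # one whole-row (word-wise) XOR
                out.append(acc)
            return out

        def sequence(sparse_rows, V, steps):
            """The repeated products V, A*V, A^2*V, ... that dominate the
            sequence generation stage of the block Wiedemann algorithm."""
            seq, cur = [], V
            for _ in range(steps):
                seq.append(cur)
                cur = spmm_gf2(sparse_rows, cur)
            return seq

        # Tiny example: a 4x4 sparse matrix and a block of 4 packed rows.
        A = [[0, 2], [1], [0, 3], [2]]           # column indices of the 1s
        V = [0b1010, 0b0110, 0b0001, 0b1100]
        print([[bin(r) for r in blk] for blk in sequence(A, V, 3)])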

    Computation of Discrete Logarithms in GF(2^607)

    We describe in this article how we have been able to extend the record for computations of discrete logarithms in characteristic 2 from the previous record over GF(2^503) to a new mark of GF(2^607), using Coppersmith's algorithm. This has been made possible by several practical improvements to the algorithm. Although the computations were carried out on fairly standard hardware, our opinion is that we are nearing the current limits of the manageable sizes for this algorithm, and that going substantially further will require deeper improvements to the method.
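    For readers unfamiliar with the setting, the problem is the discrete logarithm in the multiplicative group of GF(2^n): elements are polynomials over GF(2) modulo a degree-n irreducible. The toy Python sketch below shows the field arithmetic and a brute-force logarithm in GF(2^7) (modulus x^7 + x + 1); it is only meant to fix notation, since at n = 607 brute force is hopeless and index-calculus methods such as Coppersmith's algorithm are required.

        M_DEG, MODULUS = 7, 0b10000011            # x^7 + x + 1, irreducible

        def gf2m_mul(a, b, deg=M_DEG, mod=MODULUS):
            """Multiply GF(2^deg) elements stored as int bit vectors
            (bit i = coefficient of x^i), reducing as we go."""
            r = 0
            while b:
                if b & 1:
                    r ^= a                        # carry-less add
                b >>= 1
                a <<= 1
                if (a >> deg) & 1:                # reduce x^deg via the modulus
                    a ^= mod
            return r

        def dlog_bruteforce(base, target):
            """Smallest e with base^e = target; feasible only at toy size."""
            x, e = 1, 0
            while x != target:
                x, e = gf2m_mul(x, base), e + 1
            return e

        # The element x generates GF(2^7)*: the group order 127 is prime,
        # so every element other than 1 is a generator.
        g = 0b10
        h = gf2m_mul(gf2m_mul(g, g), g)           # g^3
        print(dlog_bruteforce(g, h))              # 3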

    COHOMOLOGY OF CONGRUENCE SUBGROUPS OF SL4(Z). III

    In two previous papers we computed cohomology groups H^5(Gamma_0(N); C) for a range of levels N, where Gamma_0(N) is the congruence subgroup of SL_4(Z) consisting of all matrices with bottom row congruent to (0, 0, 0, *) mod N. In this note we update this earlier work by carrying it out for prime levels up to N = 211. This requires new methods in sparse matrix reduction, which are the main focus of the paper. Our computations involve matrices with up to 20 million nonzero entries. We also make two conjectures concerning the contributions to H^5(Gamma_0(N); C) for N prime coming from Eisenstein series and Siegel modular forms.
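    As a generic illustration of the kind of sparse elimination involved (not the authors' new methods), here is a small Python sketch of sparse Gaussian elimination over a prime field with a cheap Markowitz-style pivot heuristic that tries to limit fill-in; the representation and names are assumptions for the example.

        def sparse_rank(rows, p):
            """Rank over GF(p) of a sparse matrix given as a list of
            {column: value} dicts, choosing pivots to stay sparse."""
            active = [{c: v % p for c, v in r.items() if v % p} for r in rows]
            active = [r for r in active if r]
            rank = 0
            while active:
                piv = min(active, key=len)        # sparsest remaining row
                active.remove(piv)
                rank += 1
                # Pivot column: the entry hitting the fewest other rows.
                col = min(piv, key=lambda c: sum(c in r for r in active))
                inv = pow(piv[col], p - 2, p)
                nxt = []
                for r in active:
                    if col in r:                  # r -= (r[col]/piv[col]) * piv
                        f = r.pop(col) * inv % p
                        for c, v in piv.items():
                            if c != col:
                                nv = (r.get(c, 0) - f * v) % p
                                if nv:
                                    r[c] = nv
                                elif c in r:
                                    del r[c]
                    if r:
                        nxt.append(r)
                active = nxt
            return rank

        # Row 2 is twice row 0, so the rank is 3, not 4.
        rows = [{0: 1, 2: 3}, {1: 2}, {0: 2, 2: 6}, {0: 1, 1: 1, 2: 1}]
        print(sparse_rank(rows, 101))   # 3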