
    Sequential Gradient Coding For Straggler Mitigation

    In distributed computing, slower nodes (stragglers) usually become a bottleneck. Gradient Coding (GC), introduced by Tandon et al., is an efficient technique that uses principles of error-correcting codes to distribute gradient computation in the presence of stragglers. In this paper, we consider the distributed computation of a sequence of gradients {g(1), g(2), ..., g(J)}, where processing of each gradient g(t) starts in round t and finishes by round (t + T). Here T ≥ 0 denotes a delay parameter. For the GC scheme, coding is only across computing nodes, and this results in a solution where T = 0. On the other hand, having T > 0 allows for designing schemes which exploit the temporal dimension as well. In this work, we propose two schemes that demonstrate improved performance compared to GC. Our first scheme combines GC with selective repetition of previously unfinished tasks and achieves improved straggler mitigation. In our second scheme, which constitutes our main contribution, we apply GC to a subset of the tasks and repetition to the remainder of the tasks. We then multiplex these two classes of tasks across workers and rounds in an adaptive manner, based on past straggler patterns. Using theoretical analysis, we demonstrate that our second scheme achieves a significant reduction in the computational load. In our experiments, we study a practical setting of concurrently training multiple neural networks over an AWS Lambda cluster involving 256 worker nodes, where our framework naturally applies. We demonstrate that the latter scheme can yield a 16% improvement in runtime over the baseline GC scheme, in the presence of naturally occurring, non-simulated stragglers.
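    As background for the T = 0 baseline, here is a minimal sketch of the fractional-repetition variant of gradient coding due to Tandon et al.: workers are split into groups of s + 1, every worker in a group computes the same partition's gradient sum, and any n - s responses suffice to recover the full gradient. The worker counts, scalar "gradients", and function names below are illustrative assumptions, not the paper's implementation.

        # A minimal sketch of fractional-repetition gradient coding (T = 0 case).
        # All names and the scalar "gradients" are illustrative.

        def assign_groups(n_workers, s):
            """Split workers into n/(s+1) groups; each group owns one partition."""
            assert n_workers % (s + 1) == 0
            return {w: w // (s + 1) for w in range(n_workers)}  # worker -> partition

        def aggregate(worker_results, groups, n_workers, s):
            """Recover the full gradient from any n - s worker responses."""
            n_groups = n_workers // (s + 1)
            seen, total = set(), 0.0
            for w, g in worker_results.items():
                p = groups[w]
                if p not in seen:      # one response per partition suffices
                    seen.add(p)
                    total += g
            assert len(seen) == n_groups, "more than s stragglers: cannot decode"
            return total

        # toy run: 6 workers tolerating s = 2 stragglers (workers 1 and 2 here)
        n, s = 6, 2
        groups = assign_groups(n, s)
        partials = {0: 1.5, 1: -0.5}   # true gradient sum of each partition
        alive = {w: partials[groups[w]] for w in [0, 3, 4, 5]}
        print(aggregate(alive, groups, n, s))   # 1.0 = the full gradient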

    Confident decoding with GRAND

    We establish that during the execution of any Guessing Random Additive Noise Decoding (GRAND) algorithm, an interpretable, useful measure of decoding confidence can be evaluated. This measure takes the form of a log-likelihood ratio (LLR) of the hypotheses that, should a decoding be found by a given query, the decoding is correct versus its being incorrect. That LLR can be used as soft output for a range of applications, and we demonstrate its utility by showing that it can be used to confidently discard likely erroneous decodings in favor of returning more readily managed erasures. As an application, we show that this feature can be used to compromise the physical layer security of short-length wiretap codes by accurately and confidently revealing a proportion of a communication when the code rate is above capacity.
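    To make the query process concrete, here is a minimal sketch of basic hard-detection GRAND over a binary symmetric channel: putative noise patterns are tested in decreasing likelihood (increasing Hamming weight) until the corrected word satisfies every parity check. It omits the paper's soft-output LLR computation, and the [7,4] Hamming parity-check matrix is an illustrative choice.

        import itertools
        import numpy as np

        def grand(y, H, max_weight=None):
            """Return (decoded codeword, number of queries) or (None, queries)."""
            n = len(y)
            max_weight = n if max_weight is None else max_weight
            queries = 0
            for w in range(max_weight + 1):          # weight-ordered noise guesses
                for flips in itertools.combinations(range(n), w):
                    e = np.zeros(n, dtype=int)
                    e[list(flips)] = 1
                    queries += 1
                    c = y ^ e
                    if not (H @ c % 2).any():        # all parity checks satisfied
                        return c, queries
            return None, queries                     # abandon: report an erasure

        # toy run with the [7,4] Hamming code's parity-check matrix
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        y = np.zeros(7, dtype=int)
        y[2] ^= 1                                    # one channel bit flip
        print(grand(y, H))                           # recovers the zero codeword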

    Design Of Fountain Codes With Error Control

    This thesis is focused on providing unequal error protection (UEP) to two disjoint sources which communicate with a common destination via a common relay, using distributed LT codes over a binary erasure channel (BEC), and on designing fountain codes with an error control property by integrating LT codes with turbo codes over a binary-input additive white Gaussian noise (BI-AWGN) channel. A simple yet efficient technique for decomposing the robust soliton distribution (RSD) into two entirely different degree distributions is developed and presented in this thesis. These two distributions are used to encode data symbols at the sources, and the encoded symbols from the sources are selectively XORed at the relay, based on a suitable relay operation, before the combined codeword is transmitted to the destination. By doing so, it is shown that UEP can be provided to these sources. The performance of LT codes over the AWGN channel is studied in detail and presented in this thesis, and the results indicate that these codes have weak error correction ability over that channel. However, errors introduced into individual symbols during the transmission of information over noisy channels need correction by error-correcting codes. Since LT codes alone are weak at correcting these errors, they are integrated with turbo codes, which are good error-correcting codes. Therefore, the source data (symbols) are first turbo encoded, then LT encoded, and transmitted over the AWGN channel. When the corrupted encoded symbols are received at the receiver, LT decoding is conducted followed by turbo decoding. The overall performance of the integrated system is studied and presented in this thesis, which suggests that the errors left after LT decoding can be corrected to some extent by the turbo decoder.
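    For reference, here is a minimal sketch of LT encoding driven by the robust soliton distribution that the thesis decomposes: each encoded symbol is the XOR of d source symbols, with d drawn from the RSD. The parameters c and delta and all names are illustrative assumptions, not the thesis's implementation.

        import math
        import random

        def robust_soliton(k, c=0.1, delta=0.5):
            """Return the RSD over degrees 1..k as a list of probabilities."""
            R = c * math.log(k / delta) * math.sqrt(k)
            rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
            tau = [0.0] * k
            pivot = int(round(k / R))
            for d in range(1, min(pivot, k + 1)):
                tau[d - 1] = R / (d * k)
            if 1 <= pivot <= k:
                tau[pivot - 1] = R * math.log(R / delta) / k
            mu = [r + t for r, t in zip(rho, tau)]
            beta = sum(mu)
            return [m / beta for m in mu]

        def lt_encode_symbol(source, dist, rng=random):
            """One encoded symbol: XOR of d uniformly chosen source symbols."""
            k = len(source)
            d = rng.choices(range(1, k + 1), weights=dist)[0]
            neighbours = rng.sample(range(k), d)
            value = 0
            for i in neighbours:
                value ^= source[i]       # one-byte integer symbols here
            return neighbours, value

        # toy run: 16 one-byte source symbols
        source = [random.randrange(256) for _ in range(16)]
        print(lt_encode_symbol(source, robust_soliton(16)))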

    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today's wireless communication systems incorporate Forward Error Correction (FEC) techniques to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges need to be addressed: the error-correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error-correcting capability, finding practical decoding algorithms for them is a challenge. Generally, increasing the length of a block code increases both its error-correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes while keeping their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm, and research has generally focused on improving bit flipping decoding for Low-Density Parity-Check (LDPC) codes. In this study we develop a new decoding algorithm for binary product codes, based on syndrome checking and bit flipping, to address the major challenge of coding systems: developing codes with a large error-correcting capability yet low decoding complexity. Simulation results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in bit error rate (BER) and, more significantly, in word error rate (WER) performance, while offering comparable complexity to the conventional algorithm in the Rayleigh fading channel.
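    As an illustration of the primitive involved, here is a minimal sketch of a generic syndrome-based, Gallager-style bit-flipping pass on a small linear code; the thesis's row/column scheduling for product codes is not reproduced, and the [7,4] Hamming code is an illustrative stand-in.

        import numpy as np

        def bit_flip_decode(y, H, max_iters=20):
            """Repeatedly flip the bits involved in the most unsatisfied checks."""
            c = y.copy()
            for _ in range(max_iters):
                syndrome = H @ c % 2
                if not syndrome.any():
                    return c, True                   # all parity checks satisfied
                counts = syndrome @ H                # unsatisfied checks per bit
                c = np.where(counts == counts.max(), c ^ 1, c)
            return c, False                          # report a decoding failure

        # toy run: single error in the zero codeword of the [7,4] Hamming code
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        y = np.zeros(7, dtype=int)
        y[4] ^= 1
        print(bit_flip_decode(y, H))                 # converges to the zero word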

    Viterbi algorithm in continuous-phase frequency shift keying

    The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing. It is used to detect signals in communication channels with memory, and to decode sequential error-control codes that enhance the performance of digital communication systems. The Viterbi algorithm is also used in speech and character recognition tasks where the speech signals or characters are modeled by hidden Markov models. This project explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition. It also examines the operations and the practical memory requirements needed to implement the Viterbi algorithm in real time. A forward error correction technique known as convolutional coding with Viterbi decoding was explored. In this project, a behavioral model of a basic Viterbi decoder was built and simulated. The convolutional encoder, BPSK modulation and AWGN channel were implemented in MATLAB, and the BER was measured to evaluate the decoding performance. The theory of the Viterbi algorithm is introduced in the context of convolutional coding, its application to Continuous-Phase Frequency Shift Keying (CPFSK) is presented, and its performance is analysed and compared with that of a conventional coherent estimator. The main contribution of this thesis is an RTL-level model of the Viterbi decoder, comprising the branch metric block, the add-compare-select block, the trace-back block, the decoding block and the next-state block. Completing this model provides a deeper understanding of the Viterbi decoding algorithm.
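    For concreteness, here is a minimal sketch of hard-decision Viterbi decoding for the rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); it mirrors the branch-metric and add-compare-select steps in spirit but is far simpler than an RTL model, and all names are illustrative.

        G = [0b111, 0b101]                   # generator taps (7, 5) octal, K = 3
        N_STATES = 4                         # 2^(K-1) encoder states

        def step(state, u):
            """One encoder transition: (next_state, [two output bits])."""
            reg = (u << 2) | state           # newest input bit enters on the left
            out = [bin(reg & g).count("1") % 2 for g in G]
            return (reg >> 1) & 0b11, out

        def viterbi_decode(bits, n_msg):
            """Hard-decision Viterbi over received bits (2 per message bit)."""
            INF = float("inf")
            metric = [0] + [INF] * (N_STATES - 1)    # start in the zero state
            paths = [[] for _ in range(N_STATES)]
            for t in range(n_msg):
                r = bits[2 * t: 2 * t + 2]
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for s in range(N_STATES):
                    if metric[s] == INF:
                        continue
                    for u in (0, 1):                 # add-compare-select
                        ns, out = step(s, u)
                        m = metric[s] + sum(a != b for a, b in zip(out, r))
                        if m < new_metric[ns]:
                            new_metric[ns] = m
                            new_paths[ns] = paths[s] + [u]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]  # best surviving path

        # toy run: encode [1, 0, 1, 1], flip one channel bit, decode
        msg, state, tx = [1, 0, 1, 1], 0, []
        for u in msg:
            state, out = step(state, u)
            tx += out
        tx[3] ^= 1                                   # a single channel error
        print(viterbi_decode(tx, len(msg)))          # recovers [1, 0, 1, 1]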

    Algebraic Codes For Error Correction In Digital Communication Systems

    C. Shannon presented theoretical conditions under which error-free communication is possible in the presence of noise. Subsequently, the notion of using error-correcting codes to mitigate the effects of noise in digital transmission was introduced by R. Hamming. Algebraic codes, described using powerful tools from algebra, came to the fore early in the search for good error-correcting codes, and many classes of algebraic codes now exist with the best properties of any known codes. An error-correcting code can be described by three of its most important properties: length, dimension and minimum distance. Given codes with the same length and dimension, the one with the largest minimum distance will provide better error correction. As a result, this research focuses on finding codes with better minimum distances than any previously known codes. Algebraic geometry codes are obtained from curves; they are a culmination of years of research into algebraic codes and generalise most known algebraic codes. Additionally, they have exceptional distance properties as their lengths become arbitrarily large. Algebraic geometry codes are studied in great detail, with special attention given to their construction and decoding. The practical performance of these codes is evaluated and compared with previously known codes in different communication channels. Furthermore, many new codes with better minimum distances than the best known codes of the same length and dimension are presented, obtained from a generalised construction of algebraic geometry codes. Goppa codes are also an important class of algebraic codes. A construction of binary extended Goppa codes is generalised to codes with nonbinary alphabets, and as a result many new codes are found. This construction is shown to be an efficient way to extend another well-known class of algebraic codes, BCH codes. A generic method of shortening codes whilst increasing the minimum distance is generalised, and an analysis of this method reveals a close relationship with methods of extending codes. Some new codes derived from Goppa codes are found by exploiting this relationship. Finally, an extension method for BCH codes is presented and shown to be as good as a well-known method of extension in certain cases.
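    To make the length/dimension/minimum-distance comparison concrete, here is a brute-force check of the minimum distance of a toy binary [7, 4] code, from which the guaranteed error-correcting radius t = floor((d - 1)/2) follows; real algebraic geometry constructions operate at scales where such enumeration is infeasible, so this is purely illustrative.

        import itertools
        import numpy as np

        def min_distance(G):
            """Minimum Hamming weight over all nonzero codewords (brute force)."""
            k, n = G.shape
            best = n
            for msg in itertools.product([0, 1], repeat=k):
                if any(msg):
                    c = np.array(msg) @ G % 2
                    best = min(best, int(c.sum()))
            return best

        # generator matrix of a [7, 4] Hamming code
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 0, 1, 1],
                      [0, 0, 1, 0, 1, 1, 1],
                      [0, 0, 0, 1, 1, 0, 1]])
        d = min_distance(G)
        print(d, (d - 1) // 2)   # d = 3, so t = 1 error is always correctable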