12 research outputs found

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding of long codes built from simple constituent codes. This work presents further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with a sparse parity-check matrix and a low-complexity decoder. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes, and their extension to non-binary fields is shown to be straightforward. For short block lengths, these algebraic cyclic LDPC codes converge well under iterative decoding. It is also shown that, for some of these codes, maximum-likelihood performance may be achieved by a modified belief-propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes with higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that, by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance and the number of codewords of a given Hamming weight of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed; in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental-correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy communications system, and it is shown that sequences of good error-correction codes suitable for use in such systems may be obtained using Constructions X and XX. Examples are given and their performance is presented in comparison to conventional CRC schemes.
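
    The construction described above begins from the cyclotomic cosets of 2 modulo the code length; the following sketch (illustrative only, and not the full idempotent/Mattson-Solomon construction of the thesis) shows how those cosets can be computed.

    # Minimal sketch: cyclotomic cosets of q modulo an odd length n, the
    # starting point for the idempotent-based cyclic LDPC code constructions.
    def cyclotomic_cosets(n, q=2):
        """Partition {0, 1, ..., n-1} into cyclotomic cosets of q modulo n."""
        remaining = set(range(n))
        cosets = []
        while remaining:
            s = min(remaining)          # smallest unused coset representative
            coset, x = [], s
            while x not in coset:
                coset.append(x)
                x = (x * q) % n         # multiply by q and reduce modulo n
            cosets.append(sorted(coset))
            remaining -= set(coset)
        return cosets

    if __name__ == "__main__":
        # Length 15: {0}, {1,2,4,8}, {3,6,9,12}, {5,10}, {7,11,13,14}
        for c in cyclotomic_cosets(15):
            print(c)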

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    Novel Methods in the Improvement of Turbo Codes and their Decoding

    The performance of turbo codes can often be improved by improving their weight spectra. Methods of producing the weight spectra of turbo codes have been investigated and many refinements were made to the techniques. A much faster method of weight-spectrum evaluation has been developed that allows calculation of weight spectra within a few minutes on a typical desktop PC. Simulation results show that new high-performance turbo codes are produced by the optimisation methods presented. Two further important areas of concern are the code itself and its decoding. Improvements to the code are accomplished through optimisation of the interleaver and the choice of constituent coders. Optimisation of interleavers can also be accomplished automatically using the algorithms described in this work. The addition of a CRC as an outer code proved to offer a substantial improvement in overall code performance. This was achieved without any code-rate loss, as the turbo code is punctured to make way for the CRC remainder. The results show a gain of 0.4 dB compared to the non-CRC (1014,676) turbo code. Another improvement to the decoding performance was achieved through a combination of MAP decoding and ordered-reliability decoding. The simulations show a performance just 0.2 dB from the Shannon limit; the same code without ordered-reliability decoding has a performance curve which is 0.6 dB from the Shannon limit. In situations where the MAP decoder fails to converge, ordered-reliability decoding succeeds in producing a codeword much closer to the received vector, often the correct codeword. Ordered-reliability decoding adds to the computational complexity but lends itself to FPGA implementation. Engineering and Physical Sciences Research Council (EPSRC)
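
    To make the CRC-as-outer-code idea concrete, the sketch below computes a CRC remainder by polynomial division over GF(2); this is the quantity that would occupy the positions freed by puncturing the turbo code. The generator polynomial shown (the CRC-16-CCITT polynomial 0x1021) is an assumption for illustration and is not necessarily the polynomial used in the thesis.

    # Illustrative sketch: CRC remainder computed bit by bit; the polynomial is
    # an assumed example, not the one from the thesis.
    def crc_remainder(bits, poly=0x1021, width=16):
        """Bitwise polynomial division over GF(2); returns the remainder as an integer."""
        top = 1 << (width - 1)
        mask = (1 << width) - 1
        reg = 0
        for b in bits:
            reg ^= (b & 1) << (width - 1)   # align the next message bit with the register MSB
            if reg & top:
                reg = ((reg << 1) ^ poly) & mask
            else:
                reg = (reg << 1) & mask
        return reg

    if __name__ == "__main__":
        info = [1, 0, 1, 1, 0, 0, 1, 0]     # toy information bits
        print(f"CRC remainder: {crc_remainder(info):04x}")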

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes. Ideas of message passing are applied to solve the erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword based on the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach the ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to contain the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel to handle both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes with the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
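
    A minimal sketch of the message-passing idea for erasure recovery is given below (illustrative only, not the matrix-based recovery algorithm of the thesis): on the binary erasure channel, any parity check involving exactly one erased position determines that position, and the process repeats until no further progress is made. The parity-check matrix and received word are toy values.

    # Peeling-style erasure recovery on the binary erasure channel: a parity
    # check with exactly one erased position resolves that position.
    def peel_erasures(H, received):
        """H: list of parity-check rows (0/1); received: bits (0/1) with None marking erasures."""
        y = list(received)
        progress = True
        while progress and any(b is None for b in y):
            progress = False
            for row in H:
                idx = [j for j, h in enumerate(row) if h]      # positions in this check
                erased = [j for j in idx if y[j] is None]
                if len(erased) == 1:                           # exactly one unknown: solve for it
                    j = erased[0]
                    y[j] = sum(y[k] for k in idx if k != j) % 2
                    progress = True
        return y

    if __name__ == "__main__":
        H = [[1, 0, 1, 0, 1, 0, 1],        # (7,4) Hamming code parity checks
             [0, 1, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
        print(peel_erasures(H, [0, 1, 1, None, 0, None, 1]))   # recovers [0, 1, 1, 0, 0, 1, 1]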

    Coded cooperative diversity with low complexity encoding and decoding algorithms.

    One of the main concerns in designing wireless communication systems is to provide sufficiently large data rates while considering the different aspects of implementation complexity, which is often constrained by the limited battery power and signal-processing capability of the devices. Thus, in this thesis, low-complexity encoding and decoding algorithms are investigated for systems with transmission diversity, particularly receiver diversity and cooperative diversity. Design guidelines for such systems are given to achieve a good trade-off between implementation complexity and performance. Order-statistics-based list decoding techniques for linear binary block codes of small to medium block length are investigated to reduce the complexity of coded systems. The original order statistics decoding (OSD) is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of the OSD is further reduced by assuming a partial ordering of the received bits in order to avoid the highly complex Gaussian elimination. The bit error rate performance and decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both the bit error rate performance and the decoding complexity. The complexity of the order-statistics-based list decoding algorithms for linear block codes and binary block turbo codes (BTC) is further reduced by employing highly reliable cyclic redundancy check (CRC) bits. The results show that sending CRC bits for many segments is the most effective technique in reducing the complexity. The coded cooperative diversity is compared with the conventional receiver coded diversity in terms of the pairwise error probability and the overall bit error rate (BER). The expressions for the pairwise error probabilities are obtained analytically and verified by computer simulations. The performance of the cooperative diversity is found to be strongly dependent on the relay location. Using the analytical as well as extensive numerical results, the geographical areas of relay locations are obtained for small to medium signal-to-noise ratio values such that the cooperative coded diversity outperforms the receiver coded diversity. However, for sufficiently large signal-to-noise ratio (SNR) values, or if the path-loss attenuations are not considered, the receiver coded diversity always outperforms the cooperative coded diversity. The obtained results have important implications for the deployment of next-generation cellular systems supporting cooperative as well as receiver diversity.
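
    To make the order statistics decoding step concrete, the sketch below is an order-0 OSD pass (an illustrative baseline, not the segmented or partially ordered variants proposed in the thesis): positions are sorted by reliability, the k most reliable independent positions are found by GF(2) Gaussian elimination on the generator matrix, and the hard decisions on those positions are re-encoded. It assumes the convention that a positive LLR favours bit 0; the generator matrix and LLRs are toy values.

    import numpy as np

    def osd_order0(G, llr):
        """Order-0 OSD: re-encode hard decisions on the most reliable independent positions."""
        k, n = G.shape
        order = np.argsort(-np.abs(llr))          # most reliable positions first
        Gp = G[:, order] % 2
        basis_cols, row = [], 0
        for col in range(n):                      # GF(2) Gaussian elimination in reliability order
            if row == k:
                break
            pivot = next((r for r in range(row, k) if Gp[r, col]), None)
            if pivot is None:
                continue
            Gp[[row, pivot]] = Gp[[pivot, row]]
            for r in range(k):
                if r != row and Gp[r, col]:
                    Gp[r] ^= Gp[row]
            basis_cols.append(col)
            row += 1
        hard = (llr[order] < 0).astype(int)       # hard decisions (positive LLR -> bit 0)
        codeword_sorted = hard[basis_cols] @ Gp % 2
        codeword = np.empty(n, dtype=int)
        codeword[order] = codeword_sorted         # undo the reliability permutation
        return codeword

    if __name__ == "__main__":
        G = np.array([[1, 0, 0, 0, 1, 1, 0],      # (7,4) Hamming generator matrix
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        llr = np.array([2.1, -0.3, 1.7, -2.5, 0.2, 1.1, -0.9])
        print(osd_order0(G, llr))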

    Trellis coded modulation techniques

    The subject of this thesis is an investigation of various trellis coded modulation (TCM) techniques that have potential for outperforming conventional methods. The primary advantage of TCM over modulation schemes employing traditional error-correction coding is the ability to achieve increased power efficiency without the expansion of bandwidth normally introduced by the coding process. Thus, channels that are both power-limited and bandwidth-limited are an ideal application for TCM. In this thesis, four areas of interest are investigated: signal constellation design, multilevel convolutional coding, adaptive TCM and, finally, low-complexity implementation of TCM. An investigation of the effect of signal constellation design on the probability of error has led to the optimisation of constellation angles for a given channel signal-to-noise ratio and a given code. The use of multilevel convolutional codes based on rings of integers and multi-dimensional modulation is presented. The potential benefits of incorporating several modulation schemes within adaptive TCM, requiring only a single decoder, are also investigated. The final area of investigation has been the development of an algorithm for decoding convolutional codes with a low-complexity decoder. The research described in this thesis investigated the use of trellis coded modulation to develop various techniques applicable to digital data transmission systems. Throughout this work, emphasis has been placed on enhancing the performance or reducing the complexity of conventional communication systems by simple modifications to existing structures.
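
    As a small illustration of the constellation-design objective mentioned above (illustrative only; the thesis optimises the angles jointly with a code and a channel SNR), the sketch below evaluates the minimum squared Euclidean distance of a PSK signal set whose point angles are free parameters; the uniform 8-PSK angles are used as an example starting point.

    import numpy as np

    def min_squared_distance(angles):
        """Minimum squared Euclidean distance between distinct unit-energy PSK points."""
        pts = np.exp(1j * np.asarray(angles, dtype=float))
        d2 = np.inf
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d2 = min(d2, abs(pts[i] - pts[j]) ** 2)
        return d2

    if __name__ == "__main__":
        uniform_8psk = [2 * np.pi * k / 8 for k in range(8)]
        print(min_squared_distance(uniform_8psk))   # 4*sin(pi/8)^2, about 0.586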

    On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes

    Gallager proposed and developed low density parity check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity-approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short to medium length irregular LDPC codes. Currently, the short to medium length LDPC codes with the lowest error floors are DE-optimized irregular LDPC codes constructed using progressive edge growth (PEG) algorithm modifications designed to increase the approximate cycle extrinsic message degrees (ACE) in the constructed LDPC code graphs. The aim of the present work is to find efficient means of improving on the error floor performance published in the literature for short to medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short to medium length LDPC code Tanner graphs is proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that results in LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE-optimized/'good' degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which there is no further lowering of the error floor. A minimum local girth (edge skipping) (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) connected in the Tanner graphs of the constructed LDPC codes by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes based on modified DE-optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification is presented. OED rate-½ (n, k) = (512, 256) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, consequent to an improved symbol node degree distribution, rate-½ (n, k) = (1024, 512) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, is shown to result in an unprecedented generalized improvement in the performance of short to medium length irregular LDPC codes under iterative message-passing decoding. Finally, the superiority of the Slepian-Wolf distributed source coding model over other distributed source coding models based on LDPC codes is shown.
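
    As a companion to the girth/ACE discussion above, the following sketch (an assumed baseline, not the efficient algorithm proposed in the thesis) computes the girth of a Tanner graph by breadth-first search from each symbol node; the minimum cycle-length estimate over all symbol nodes equals the graph girth. The parity-check matrix is a toy example.

    from collections import deque

    def tanner_girth(H):
        """Girth of the Tanner graph of H (shortest cycle length, or inf if acyclic)."""
        m, n = len(H), len(H[0])
        adj = [[] for _ in range(n + m)]              # symbol nodes 0..n-1, check nodes n..n+m-1
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    adj[j].append(n + i)
                    adj[n + i].append(j)
        girth = float("inf")
        for v in range(n):                            # BFS from every symbol node
            dist = {v: 0}
            q = deque([(v, -1)])                      # (node, parent in BFS tree)
            while q:
                u, parent = q.popleft()
                for w in adj[u]:
                    if w == parent:
                        continue
                    if w in dist:                     # non-tree edge closes a cycle
                        girth = min(girth, dist[u] + dist[w] + 1)
                    else:
                        dist[w] = dist[u] + 1
                        q.append((w, u))
        return girth

    if __name__ == "__main__":
        H = [[1, 1, 0, 1, 0, 0],
             [0, 1, 1, 0, 1, 0],
             [1, 0, 1, 0, 0, 1],
             [0, 0, 0, 1, 1, 1]]
        print(tanner_girth(H))                        # 6 for this toy matrix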

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in the early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulties in PRT training are how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with the implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
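
    The human-in-the-loop idea described above can be pictured with a small uncertainty-sampling sketch (an assumed formulation for illustration, not the dissertation's system): the clips the current classifier is least confident about are routed to the clinician for labelling and then folded back into the training set. The feature dimensions, model choice and data here are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def most_uncertain(model, unlabelled, batch=5):
        """Indices of the unlabelled samples with the lowest top-class probability."""
        confidence = model.predict_proba(unlabelled).max(axis=1)
        return np.argsort(confidence)[:batch]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X_lab = rng.normal(size=(40, 8))              # placeholder features from labelled probes
        y_lab = rng.integers(0, 2, size=40)           # placeholder fidelity labels
        X_unlab = rng.normal(size=(200, 8))           # features from unlabelled video probes
        model = LogisticRegression().fit(X_lab, y_lab)
        ask = most_uncertain(model, X_unlab)
        print("Clips to send to the clinician for labels:", ask)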