
    Coding Theory and its Applications in Communication systems

    Error control coding has been used extensively in digital communication systems because of its cost-effectiveness in achieving efficient, reliable digital transmission. Coding now plays an important role in the design of modern communication systems. This paper reviews the development of basic coding theory and state-of-the-art coding techniques. The applications of coding to communication systems and future trends are also discussed.

    Automatic-repeat-request error control schemes

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes, are surveyed.
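
    To make the ARQ mechanism concrete, the sketch below shows a stop-and-wait loop in Python with CRC-32 as the error-detecting code; the frame format, channel function and retry limit are illustrative assumptions, not details from the paper.

        import zlib

        MAX_RETRIES = 8  # illustrative retry limit, not from the paper

        def send_with_arq(payload: bytes, channel) -> bool:
            """Stop-and-wait ARQ with CRC-32 error detection: the sender
            appends a checksum and retransmits until the receiver's check
            passes or the retry limit is reached."""
            frame = payload + zlib.crc32(payload).to_bytes(4, "big")
            for _ in range(MAX_RETRIES):
                received = channel(frame)              # may corrupt bits
                data, crc = received[:-4], received[-4:]
                if zlib.crc32(data).to_bytes(4, "big") == crc:
                    return True                        # receiver sends ACK
                # checksum failed: receiver sends NAK, loop retransmits
            return False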

    On Fault Tolerance Methods for Networks-on-Chip

    Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for networks-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved with the design of fault-tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
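
    As an illustration of the kind of data link layer error control coding surveyed here, the following Python sketch implements a Hamming(7,4) single-error-correcting code, which can mask a transient single-bit fault on a link. Real NoC codecs are combinational hardware, and the thesis does not prescribe this particular code.

        def hamming74_encode(d):            # d: list of 4 data bits
            p1 = d[0] ^ d[1] ^ d[3]
            p2 = d[0] ^ d[2] ^ d[3]
            p3 = d[1] ^ d[2] ^ d[3]
            # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
            return [p1, p2, d[0], p3, d[1], d[2], d[3]]

        def hamming74_decode(c):            # c: list of 7 received bits
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4,5,6,7
            syndrome = s1 + 2 * s2 + 4 * s3 # 1-based index of a flipped bit
            if syndrome:
                c[syndrome - 1] ^= 1        # correct a single transient error
            return [c[2], c[4], c[5], c[6]] # extract d1..d4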

    Applications of error-control coding


    Practical packet combining for use with cooperative and non-cooperative ARQ schemes in wireless sensor networks

    Although it is envisaged that advances in technology will follow a "Moore's Law" trend for many years to come, one of the aims of Wireless Sensor Networks (WSNs) is to reduce the size of the nodes as much as possible. The issue of limited resources on current devices may therefore not improve much with future designs. There is a pressing need, therefore, for simple, efficient protocols and algorithms that can maximise the use of available resources in an energy-efficient manner. In this thesis an improved packet combining scheme useful on low-power, resource-constrained sensor networks is developed. The algorithm is applicable in areas where currently only more complex combining approaches are used. These include cooperative communications and hybrid-ARQ schemes, which have been shown to be of major benefit for wireless communications. Using the packet combining scheme developed in this thesis, a reduction of more than 85% in energy costs is possible over previous, similar approaches. Both simulated and practical experiments are developed in which the algorithm is shown to offer up to approximately 2.5 dB reduction in the required Signal-to-Noise Ratio (SNR) for a particular Packet Error Rate (PER). This is a welcome result, as complex schemes, such as maximal-ratio combining, are not implementable on many of the resource-constrained devices under consideration. A motivational side study on the transitional region is also carried out in this thesis. This region has been shown to be somewhat of a problem for WSNs. It is characterised by a variable packet reception rate caused by a combination of fading and manufacturing variances in the radio receivers. Experiments are carried out to determine whether or not a spread-spectrum architecture has any effect on the size of this region, as has been suggested in previous work. It is shown that, for the particular setup tested, the transitional region still has significant extent even when employing a spread-spectrum architecture. This result further motivates the need for the packet combining scheme developed, as it is precisely in zones such as the transitional region that packet combining will be of most benefit.
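
    As a flavour of what low-complexity packet combining can look like, the sketch below applies a bitwise majority vote across an odd number of received copies of the same packet. This generic combiner is an assumption for illustration and is not the thesis's algorithm.

        def majority_combine(copies):
            """Bitwise majority vote over an odd number of equal-length
            received copies of the same packet (bytes objects)."""
            assert len(copies) >= 3 and len(copies) % 2 == 1
            out = bytearray(len(copies[0]))
            for i in range(len(out)):
                for bit in range(8):
                    votes = sum((c[i] >> bit) & 1 for c in copies)
                    if votes > len(copies) // 2:
                        out[i] |= 1 << bit
            return bytes(out)

    With three copies, the vote recovers the original packet whenever no bit position is corrupted in two copies at once.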

    Variable Redundancy Coding for Adaptive Error Control

    This thesis is concerned with variable redundancy (VR) error control coding. VR coding is proposed as one method of providing efficient adaptive error control for time-varying digital data transmission links. The VR technique involves using a set of short, easy-to-implement block codes, rather than the one code of a fixed redundancy system, which is usually inefficient and complex to decode. With a VR system, efficient codes of low error-correcting power are used when channel conditions are good, and very powerful but inefficient codes are used when the channel is noisy. The decoder decides which code is required to cope with current conditions, and communicates this decision to the encoder by means of a feedback link. This thesis presents a theoretical and practical investigation of the VR technique, and aims to show that, when compared with a fixed redundancy system, one or more of the advantages of increased average data throughput, decreased maximum probability of erroneous decoding, and decreased complexity can be realised. This is confirmed by the practical results presented in the thesis, which were obtained from field trials of an experimental VR system operating over the HF radio channel, and from computer simulations. One consequence of the research has been the inception of a study of codes with disjoint code books and mutual Hamming distance (initially considered for combating feedback errors), and this topic is introduced in the thesis.
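
    The decoder-side selection rule at the heart of a VR system can be sketched as follows; the code set, the error-rate thresholds and the step-by-one policy are hypothetical choices for illustration, not the thesis's parameters.

        # Hypothetical code set, ordered from weakest to strongest.
        CODES = ["rate-3/4 code", "rate-1/2 code", "rate-1/4 code"]

        def select_code(observed_error_rate: float, current: int) -> int:
            """Decoder-side rule: the chosen index is sent back to the
            encoder over the feedback link."""
            if observed_error_rate > 0.05 and current < len(CODES) - 1:
                return current + 1   # noisy channel: add redundancy
            if observed_error_rate < 0.01 and current > 0:
                return current - 1   # clean channel: raise throughput
            return current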

    Time diversity solutions to cope with lost packets

    A dissertation submitted to Departamento de Engenharia Electrotécnica of Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engenharia Electrotécnica e de Computadores.

    Modern broadband wireless systems require high throughputs and can also have very high Quality-of-Service (QoS) requirements, namely small error rates and short delays. A high spectral efficiency is needed to meet these requirements. Lost packets, either due to errors or collisions, are usually discarded and need to be retransmitted, leading to performance degradation. An alternative to simple retransmission that can improve both power and spectral efficiency is to combine the signals associated with different transmission attempts. This thesis analyses two time diversity approaches to cope with lost packets that are relatively similar at the physical layer but handle different packet loss causes. The first is a low-complexity Diversity-Combining (DC) Automatic Repeat reQuest (ARQ) scheme employed in a Time Division Multiple Access (TDMA) architecture, adapted for channels dedicated to a single user. The second is a Network-assisted Diversity Multiple Access (NDMA) scheme, which is a multi-packet detection approach able to separate multiple mobile terminals transmitting simultaneously in one slot using temporal diversity. This thesis combines these techniques with Single-Carrier with Frequency-Domain Equalization (SC-FDE) systems, which are widely recognized as the best candidates for the uplink of future broadband wireless systems. It proposes a new NDMA scheme capable of handling more Mobile Terminals (MTs) than the user separation capacity of the receiver. This thesis also proposes a set of analytical tools that can be used to analyse and optimize the use of these two systems. These tools are then employed to compare both approaches in terms of error rate, throughput and delay performance, taking implementation complexity into consideration. Finally, it is shown that both approaches represent viable solutions for future broadband wireless communications, complementing each other.

    Funding: Fundação para a Ciência e Tecnologia - PhD grant (SFRH/BD/41515/2007); CTS multi-annual funding project PEst-OE/EEI/UI0066/2011; IT pluri-annual funding project PEst-OE/EEI/LA0008/2011; U-BOAT project PTDC/EEA-TEL/67066/2006; MPSat project PTDC/EEA-TEL/099074/2008; OPPORTUNISTIC-CR project PTDC/EEA-TEL/115981/200
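
    The physical-layer idea shared by both approaches, combining the signals of several transmission attempts rather than discarding failed ones, can be sketched as follows. This equal-gain average of BPSK soft values is a simplification for illustration and omits the SC-FDE processing used in the thesis.

        import numpy as np

        def dc_arq_combine(attempts):
            """attempts: list of 1-D arrays of noisy BPSK soft values,
            one per transmission of the same packet. Averaging the
            attempts raises the effective SNR before the hard decision."""
            combined = np.mean(np.stack(attempts), axis=0)
            return (combined < 0).astype(int)  # map -1 -> bit 1, +1 -> bit 0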

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. The two methods generate, in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes, suitable for use in incremental redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
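
    As a toy illustration of the exhaustive principle behind minimum Hamming distance computation (the thesis uses far faster multi-threaded revolving-door enumeration for long cyclic and double-circulant codes), the Python sketch below enumerates all messages of a small binary linear code given its generator rows.

        from itertools import product

        def min_distance(G):
            """G: generator rows of a binary linear code packed as ints.
            Returns the minimum weight over all nonzero codewords, which
            equals the minimum Hamming distance for a linear code."""
            best = None
            for msg in product([0, 1], repeat=len(G)):
                cw = 0
                for bit, row in zip(msg, G):
                    if bit:
                        cw ^= row                 # GF(2) sum of chosen rows
                if cw:
                    w = bin(cw).count("1")        # Hamming weight
                    best = w if best is None else min(best, w)
            return best

        # Example: Hamming(7,4) generator rows give minimum distance 3.
        G = [0b1000110, 0b0100101, 0b0010011, 0b0001111]
        print(min_distance(G))                    # -> 3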

    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges need to be addressed: the error correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error correcting capability, it is a challenge finding decoding algorithms for these coding schemes. Generally, increasing the length of a block code increases its error correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes, yet keep their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm. Research has generally focused on improving bit flipping decoding for Low Density Parity Check (LDPC) codes. In this study we develop a new decoding algorithm based on syndrome checking and bit flipping for use with binary product codes, to address the major challenge of coding systems, i.e., developing codes with a large error correcting capability that still have a low decoding complexity. Simulation results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in bit error rate (BER) and, more significantly, in word error rate (WER) performance. The algorithm offers comparable complexity to the conventional algorithm in the Rayleigh fading channel.
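
    To illustrate how syndrome checking and bit flipping interact on a product code, the sketch below decodes a toy product of single-parity-check codes; this is a generic illustration of the technique, not the algorithm proposed in this study.

        import numpy as np

        def bf_decode(R, iterations=5):
            """R: 2-D 0/1 numpy int array holding a received product
            codeword, row/column parity bits included. A bit is flipped
            only when both its row check and its column check fail; this
            naive rule can mis-flip when errors share rows or columns."""
            R = R.copy()
            for _ in range(iterations):
                row_syn = R.sum(axis=1) % 2        # 1 marks a failed row
                col_syn = R.sum(axis=0) % 2        # 1 marks a failed column
                suspects = np.outer(row_syn, col_syn)
                if not suspects.any():
                    break                          # all syndromes clear
                R ^= suspects                      # flip the suspect bits
            return R

    For a single bit error, exactly one row syndrome and one column syndrome fail, their intersection pinpoints the error, and one flip clears all checks.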