Some new results on majority-logic codes for correction of random errors
The main advantages of random error-correcting majority-logic
codes and majority-logic decoding in general are well known and
two-fold. Firstly, they offer a partial solution to a classical
coding theory problem, that of decoder complexity. Secondly, a
majority-logic decoder inherently corrects many more random error
patterns than the minimum distance of the code implies is possible.
The solution to the decoder complexity problem is only a partial one
because there are circumstances under which a majority-logic decoder
is too complex and expensive to implement. [Continues.]
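As a toy illustration of the general idea (not a construction from this work), one-step majority-logic decoding can be sketched for the first-order Reed–Muller code RM(1,3), a classic majority-logic decodable code: each message bit is recovered by a majority vote over several independent check sums on the received word, so a single channel error is always outvoted.

```python
# All 8 evaluation points of F_2^3; v[i] is the i-th coordinate
pts = [tuple(int(b) for b in f"{i:03b}") for i in range(8)]

def encode(m):
    # RM(1,3) codeword: c(v) = m0 + m1*v0 + m2*v1 + m3*v2 over GF(2)
    return [m[0] ^ (m[1] & v[0]) ^ (m[2] & v[1]) ^ (m[3] & v[2]) for v in pts]

def majority(bits):
    # Strict majority vote; no ties occur when at most one error is present
    return int(2 * sum(bits) > len(bits))

def decode(r):
    m = [0, 0, 0, 0]
    # Each first-order coefficient m[i+1] is voted on by 4 independent
    # check sums r(v) ^ r(v + e_i), one per pair of points differing
    # only in coordinate i; a single error corrupts at most one vote
    for i in range(3):
        votes = []
        for idx, v in enumerate(pts):
            if v[i] == 0:
                w = list(v)
                w[i] = 1
                votes.append(r[idx] ^ r[pts.index(tuple(w))])
        m[i + 1] = majority(votes)
    # Peel off the first-order part; what remains is m0 repeated 8 times
    residual = [r[j] ^ (m[1] & pts[j][0]) ^ (m[2] & pts[j][1]) ^ (m[3] & pts[j][2])
                for j in range(8)]
    m[0] = majority(residual)
    return m
```

With minimum distance 4 this code guarantees correction of one error, and the majority votes above recover the message for any single-bit error pattern.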
Bit flipping decoding for binary product codes
Error control coding has been used to mitigate the impact of noise on the wireless channel.
Today, wireless communication systems include Forward Error Correction (FEC) techniques
in their design to help reduce the amount of retransmitted data. When designing a coding scheme,
three challenges need to be addressed: the error-correcting capability of the code, the decoding
complexity of the code, and the delay introduced by the coding scheme. While it is easy to design
coding schemes with a large error-correcting capability, it is a challenge to find decoding
algorithms for these coding schemes. Generally, increasing the length of a block code increases
both its error-correcting capability and its decoding complexity.
Product codes have been identified as a means to increase the block length of simpler codes
while keeping their decoding complexity low. Bit flipping decoding has been identified as a
simple-to-implement decoding algorithm. Research has generally focused on improving bit flipping
decoding for Low Density Parity Check (LDPC) codes. In this study we develop a new decoding
algorithm based on syndrome checking and bit flipping for use with binary product codes, to
address the major challenge of coding systems, i.e., developing codes with a large error-correcting
capability yet a low decoding complexity. Simulation results show that the
proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P.
Elias in BER and, more significantly, in WER performance. The algorithm offers complexity
comparable to the conventional algorithm in the Rayleigh fading channel.
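The syndrome-checking-plus-bit-flipping idea can be sketched generically. The following is a minimal hard-decision, Gallager-style bit-flipping decoder over an arbitrary parity-check matrix H, not the specific algorithm developed in this study; the flip-the-worst-offenders rule and iteration budget are illustrative assumptions.

```python
def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit flipping: repeatedly flip the bits involved in
    the most unsatisfied parity checks until the syndrome is all-zero
    (a valid codeword) or the iteration budget runs out."""
    c = list(r)
    n = len(c)
    for _ in range(max_iters):
        # Syndrome: one bit per parity check; zero means the check holds
        syndrome = [sum(H[i][j] & c[j] for j in range(n)) % 2
                    for i in range(len(H))]
        if not any(syndrome):
            return c, True                 # every check satisfied
        # Count, per bit, how many failing checks it participates in
        fails = [sum(H[i][j] & syndrome[i] for i in range(len(H)))
                 for j in range(n)]
        worst = max(fails)
        for j in range(n):
            if fails[j] == worst:
                c[j] ^= 1                  # flip the worst offenders
    return c, False
```

For example, with the parity-check matrix of the [7,4] Hamming code, a single flipped bit in the all-zero codeword is corrected within a few iterations.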
A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1
An empirical study of the performance of the Viterbi decoders in bursty channels was carried out and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from pure random error to pure bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random error channel, where the Viterbi decoder had one bit fewer decoding errors.
On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes
Edited version embargoed until 22.03.2019
Full version: Access restricted permanently due to 3rd party copyright restrictions. Restriction set on 22.03.2018 by SE, Doctoral College.
Gallager proposed and developed low density parity check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short to medium length irregular LDPC codes. Currently, the short to medium length LDPC codes with the lowest error floor are DE optimized irregular LDPC codes constructed using progressive edge growth (PEG) algorithm modifications which are designed to increase the approximate cycle extrinsic message degrees (ACE) in the LDPC code graphs constructed.
The aim of the present work is to find efficient means of improving on the error floor performance published in the literature for short to medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short to medium length LDPC code Tanner graphs has been proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that results in LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE optimized/'good' degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which there is no further lowering of the error floor. A minimum local girth (edge skipping) (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) connected in the Tanner graphs of LDPC codes constructed by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes based on modified DE optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification is presented. OED rate-½ (n, k) = (512, 256) LDPC codes have been shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, consequent to an improved symbol node degree distribution, rate-½ (n, k) = (1024, 512) LDPC codes have been shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate.
An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, has been shown to result in an unprecedented generalized improvement in the performance of short to medium length irregular LDPC codes under iterative message passing decoding. The superiority of the Slepian–Wolf distributed source coding model over other distributed source coding models based on LDPC codes has been shown.
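For intuition, the girth of a Tanner graph (the length of its shortest cycle, one of the quantities whose distribution the work's algorithm determines) can be found by a plain breadth-first search from every node. This sketch assumes a 0/1 parity-check matrix and makes no efficiency claims; it is far simpler than the efficient algorithm proposed in the thesis.

```python
from collections import deque

def tanner_adj(H):
    """Adjacency list of the Tanner graph of H: variable nodes are
    0..n-1, check nodes are n..n+m-1, with an edge where H[i][j] = 1."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(n + m)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    return adj

def girth(adj):
    """Girth via BFS from every vertex: whenever an edge closes back
    onto an already-visited non-parent vertex, it witnesses a closed
    walk of length dist[u] + dist[w] + 1, which contains a cycle no
    longer than that; the minimum over all starts is the girth."""
    best = float("inf")
    for s in range(len(adj)):
        dist = {s: 0}
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    q.append(w)
                elif w != parent[u]:
                    best = min(best, dist[u] + dist[w] + 1)
    return best
```

A 2x2 all-ones H produces the smallest possible Tanner-graph cycle, a 4-cycle, which is exactly the structure PEG-style constructions try to avoid.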
Algebraic Codes For Error Correction In Digital Communication Systems
Access to the full-text thesis is no longer available at the author's request, due to 3rd party copyright restrictions. Access removed on 29.11.2016 by CS (TIS). Metadata merged with duplicate record (http://hdl.handle.net/10026.1/899) on 20.12.2016 by CS (TIS).
C. Shannon presented theoretical conditions under which communication is possible
error-free in the presence of noise. Subsequently, the notion of using error
correcting codes to mitigate the effects of noise in digital transmission was introduced
by R. Hamming. Algebraic codes, codes described using powerful tools from
algebra, came to the fore early in the search for good error correcting codes. Many
classes of algebraic codes now exist and are known to have the best properties of
any known classes of codes. An error correcting code can be described by three of its
most important properties: length, dimension and minimum distance. Given codes
with the same length and dimension, the one with the largest minimum distance will
provide better error correction. As a result, this research focuses on finding improved
codes with better minimum distances than any known codes.
Algebraic geometry codes are obtained from curves. They are a culmination of years
of research into algebraic codes and generalise most known algebraic codes. Additionally
they have exceptional distance properties as their lengths become arbitrarily
large. Algebraic geometry codes are studied in great detail with special attention
given to their construction and decoding. The practical performance of these codes
is evaluated and compared with previously known codes in different communication
channels. Furthermore, many new codes that have better minimum distances
than the best known codes with the same length and dimension are presented, from
a generalised construction of algebraic geometry codes. Goppa codes are also an
important class of algebraic codes. A construction of binary extended Goppa codes
is generalised to codes with nonbinary alphabets, and as a result many new codes
are found. This construction is shown to be an efficient way to extend another well
known class of algebraic codes, BCH codes. A generic method of shortening codes
whilst increasing the minimum distance is generalised. An analysis of this method
reveals a close relationship with methods of extending codes. Some new codes from
Goppa codes are found by exploiting this relationship. Finally, an extension method
for BCH codes is presented and this method is shown to be as good as a well known
method of extension in certain cases.
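To make the role of minimum distance concrete (a toy illustration, not part of the thesis): for a binary linear code, the minimum distance equals the smallest Hamming weight of a nonzero codeword, and a code of minimum distance d corrects t = (d - 1) // 2 errors. For small codes it can be checked by brute force over a generator matrix.

```python
from itertools import product

def min_distance(G):
    """Brute-force minimum distance of a binary linear code: equal to
    the smallest Hamming weight over all 2^k - 1 nonzero codewords,
    enumerated from the k rows of the generator matrix G."""
    k, n = len(G), len(G[0])
    best = n
    for m in product([0, 1], repeat=k):
        if not any(m):
            continue  # skip the zero codeword
        cw = [sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

# Generator matrix of the [7,4] Hamming code (an illustrative example)
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

d = min_distance(G)   # 3, so the code corrects (d - 1) // 2 = 1 error
```

Two codes of the same length 7 and dimension 4 but different minimum distances would differ exactly in this t, which is why the search for larger minimum distances matters.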
A modified belief-propagation decoder for the parallel decoding of product codes
In this dissertation a modification to the belief-propagation algorithm is presented.
The modification allows for the parallel decoding of product codes. The algorithm
leverages the fact that each component code in the product code can be decoded
independently, because the codewords are encoded by independent and identically
distributed (i.i.d.) processes. The algorithm maximises the parallelisation by decoding
all the component codes in each dimension in parallel. In order to facilitate this
process we developed new additional stages which are added to the belief-propagation
algorithm: the codeword reliability estimation, the belief-aggregation and the exit
test stages. The parallel product code decoder offers a 0.2 dB worsening of the
decoding BER performance when compared to the best serial decoder. However, the
parallel belief-propagation decoder offers a 7.26-times speedup on an eight-core
processor, which is 0.91 of the theoretical maximum of eight for an eight-core
processor.
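The structure being exploited can be sketched as follows. Every row of a product-code array is a codeword of the row component code and every column a codeword of the column code, so each dimension can be decoded concurrently. This is a schematic with a placeholder component decoder; the reliability-estimation, belief-aggregation and exit-test stages of the actual decoder are omitted.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_product_pass(matrix, decode_component):
    """One pass of parallel product-code decoding: decode all rows
    concurrently, then all columns of the updated array concurrently.
    `decode_component` stands in for any component-code decoder that
    maps a received word to a corrected word."""
    with ThreadPoolExecutor() as pool:
        rows = [list(r) for r in pool.map(decode_component, matrix)]
        cols = [list(c) for c in pool.map(decode_component, zip(*rows))]
    return [list(r) for r in zip(*cols)]  # transpose back to row order
```

With an identity "decoder" the pass returns the array unchanged, which verifies the row/column bookkeeping; in practice `decode_component` would be a belief-propagation decoder for the component code.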
Iterative receiver in multiuser relaying systems with fast frequency-hopping modulation
In this thesis, a novel iterative receiver and its improved version are proposed for
relay-assisted multiuser communications, in which multiple users transmit to a destination
with the help of a relay and using fast frequency-hopping modulation. Each
user employs a channel encoder to protect its information and facilitate interference
cancellation at the receiver. The signal received at the relay is either amplified, or
partially decoded with a simple energy detector, before being forwarded to the destination.
Under flat Rayleigh fading channels, the receiver at the destination can
be implemented non-coherently, i.e., it does not require the instantaneous channel
information to demodulate the users’ transmitted signals. The proposed iterative
algorithm at the destination exploits the soft outputs of the channel decoders to
successively extract the maximum likelihood symbols of the users and perform interference
cancellation. The iterative method is successfully applied for both cases of
amplify-and-forward and partial decode-and-forward relaying. The error performance
of the proposed iterative receiver is investigated by computer simulation. Under the
same spectral efficiency, simulation results demonstrate the excellent performance of
the proposed receiver when compared to the performance of decoding without interference
cancellation as well as the performance of the maximum likelihood multiuser
detection previously developed for uncoded transmission. Simulation results also suggest
that a proper selection of channel coding schemes can help to support significantly
more users without consuming extra system resources.
In addition, to further enhance the receiver’s performance in terms of the bit error
rate, an improved version of the iterative receiver is presented. Such an improved receiver
invokes inner-loop iterations between the channel decoders and the demappers
in such a way that the soft outputs of the channel decoders are also used to refine the
outputs of the demappers for every outer-loop iteration. Simulation results indicate
a performance gain of about 2.5 dB by using the two-loop receiver when compared to
the performance of the first proposed receiver.