
    Decoder based on Parallel Genetic Algorithm and Multi-objective Optimization for Low Density Parity Check Codes

    Genetic algorithms are powerful search techniques that have been used successfully to solve problems in many different disciplines. This article introduces a new parallel genetic algorithm for decoding LDPC codes (PGAD). The results show that the proposed algorithm achieves large gains over the Sum-Product decoder, which demonstrates its efficiency. We also show that the fitness function can be improved by multi-objective optimization; to this end, we apply the weighted sum method to PGAD. The resulting version, called MOGAD, gives higher performance than PGAD. Keywords: Parallel Genetic Algorithm decoder, Sum-Product decoder, Fitness Function, LDPC codes, Error correcting codes, Multi-objective optimization, Weighted sum method
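    As an illustration of the weighted-sum idea behind MOGAD, here is a minimal Python sketch of a multi-objective fitness function for a GA-based LDPC decoder. The two objectives used here (soft distance to the received word and the number of unsatisfied parity checks) and the weights `w1`, `w2` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def weighted_sum_fitness(candidate, received, H, w1=0.7, w2=0.3):
    """Weighted-sum fitness for a GA-based LDPC decoder (illustrative sketch).

    candidate : binary numpy array, a candidate codeword from the population
    received  : real numpy array, soft channel outputs (BPSK: 0 -> +1, 1 -> -1)
    H         : binary parity-check matrix of the LDPC code
    w1, w2    : hypothetical weights of the two objectives
    """
    # Objective 1: Euclidean distance between the modulated candidate
    # and the received soft values (smaller is better).
    modulated = 1.0 - 2.0 * candidate          # map {0,1} -> {+1,-1}
    distance = np.linalg.norm(received - modulated)

    # Objective 2: number of unsatisfied parity checks (smaller is better).
    unsatisfied = np.count_nonzero(H @ candidate % 2)

    # Weighted sum of the two objectives; the GA minimises this value.
    return w1 * distance + w2 * unsatisfied
```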

    Decoding of Block Codes by using Genetic Algorithms and Permutations Set

    Recently, genetic algorithms have been successfully used for decoding some classes of error correcting codes. To decode a linear block code C, these genetic algorithms compute a permutation p of the code generator matrix that depends on the received word. Our main contribution in this paper is to choose the permutation p from the automorphism group of C. This choice reduces the complexity of re-encoding in the decoding steps when C is cyclic, and it also allows the proposed genetic decoding algorithm to be generalized to binary nonlinear block codes such as the Kerdock codes. In addition, an efficient stop criterion is proposed that considerably reduces the decoding complexity of our algorithm. Simulation results over the AWGN channel show that the proposed decoder reaches the error correcting performance of its competitors, and a complexity study shows that it is less complex than competing decoders that are also based on genetic algorithms.
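    A minimal sketch of the permutation choice for the cyclic case follows. For a cyclic code of length n, every cyclic shift is an automorphism, so the search can be restricted to the n shifts; the selection rule used here (maximising the total reliability of the first k positions) is a plausible illustration, not necessarily the criterion used in the paper.

```python
def best_cyclic_shift(reliability, k):
    """Choose a permutation from the automorphism group of a cyclic code.

    reliability : list of per-position reliabilities (e.g. |LLR| values)
    k           : code dimension; the first k permuted positions are used
                  for re-encoding, so we want them to be the most trusted
    """
    n = len(reliability)
    # Score each of the n cyclic shifts (all automorphisms of the code)
    # by the total reliability of the first k positions it produces.
    score = lambda s: sum(reliability[(s + i) % n] for i in range(k))
    best = max(range(n), key=score)
    # Return the permutation as an index list: position i of the permuted
    # word takes position (best + i) mod n of the original word.
    return [(best + i) % n for i in range(n)]

# Example: reliabilities taken from the magnitudes of a received word.
perm = best_cyclic_shift([0.2, 1.4, 0.9, 2.1, 0.3, 1.7, 0.5], k=3)
```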

    On the Computing of the Minimum Distance of Linear Block Codes by Heuristic Methods

    The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and its true value is not easy to determine by classical methods; for this reason, the problem has been attacked in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to this hard problem. The first is based on genetic algorithms and yields good results compared to previous work also based on genetic algorithms. The second is a new randomized algorithm which we call the Multiple Impulse Method (MIM): the principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the nearest nonzero codewords found this way will most likely include a codeword whose Hamming weight equals the minimum distance of the code.
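    The MIM principle lends itself to a short sketch. The Python code below assumes a soft-decision decoder `decode` for the code under study is available; the trial count, number of impulses and impulse magnitude are illustrative parameters, not values from the paper.

```python
import numpy as np

def multiple_impulse_method(decode, n, trials=1000, impulses=3,
                            magnitude=1.5, rng=None):
    """Randomized upper bound on the minimum distance (sketch of the MIM idea).

    decode : soft-decision decoder mapping a length-n real vector to a
             binary codeword (assumed available for the code under study)
    n      : code length
    """
    rng = rng or np.random.default_rng()
    best = n  # trivial upper bound on the minimum distance
    for _ in range(trials):
        # Transmit the all-zero codeword over BPSK: all +1 values.
        r = np.ones(n)
        # Perturb a few random positions with negative impulses, pushing
        # the received vector toward neighbouring nonzero codewords.
        pos = rng.choice(n, size=impulses, replace=False)
        r[pos] -= magnitude
        c = decode(r)
        w = int(np.count_nonzero(c))
        if 0 < w < best:
            best = w  # weight of the nearest nonzero codeword found so far
    return best  # an upper bound on the true minimum distance
```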

    Efficiency of two decoders based on hash techniques and syndrome calculation over a Rayleigh channel

    The explosive growth of connected devices demands high quality and reliability in data transmission and storage. Error correction codes (ECCs) contribute to this in ways that are not very apparent to the end user, yet are indispensable and effective at the most basic level of transmission. This paper presents an investigation and performance analysis of two decoders based on hash techniques and syndrome calculation over a Rayleigh channel. The decoders under study have two main features: reduced complexity compared to their competitors, and good error correction performance over an additive white Gaussian noise (AWGN) channel. When applied to decode linear block codes such as Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and quadratic residue (QR) codes over a Rayleigh channel, the experimental comparisons show the decoders' efficiency in terms of bit error rate (BER). For example, the coding gain obtained by the syndrome decoding and hash techniques (SDHT) decoder applied to the BCH(31, 11, 11) code equals 34.5 dB, i.e., a reduction rate of 75% compared to uncoded transmission.
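    Generic syndrome decoding with a hash table can be sketched as follows. This shows only the basic syndrome-to-error-pattern lookup idea; the SDHT decoder in the paper organises its tables and search differently in detail.

```python
import numpy as np
from itertools import combinations

def build_syndrome_table(H, t):
    """Hash table mapping syndromes to error patterns of weight <= t.

    Enumerating patterns is exponential in t, but remains practical for
    short codes such as BCH(31, 11, 11). Iterating weights in increasing
    order guarantees the stored pattern for each syndrome has lowest weight.
    """
    n = H.shape[1]
    table = {}
    for w in range(t + 1):
        for positions in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            s = tuple(H @ e % 2)          # syndrome as a hashable key
            table.setdefault(s, e)        # keep the lowest-weight pattern
    return table

def decode(r_hard, H, table):
    """Correct a hard-decision word by syndrome lookup."""
    s = tuple(H @ r_hard % 2)
    e = table.get(s)
    # XOR out the error pattern if the syndrome is in the table.
    return (r_hard ^ e) if e is not None else r_hard
```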

    Efficient space-frequency block coded pilot-aided channel estimation method for multiple-input-multiple-output orthogonal frequency division multiplexing systems over mobile frequency-selective fading channels

    An iterative pilot-aided channel estimation technique for space-frequency block coded (SFBC) multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed. Traditionally, when channel estimation techniques are used, the SFBC information signals are decoded one block at a time. In the proposed algorithm, multiple blocks of SFBC information signals are decoded simultaneously, so the proposed channel estimation method can significantly reduce the time required to decode information signals compared with similar channel estimation methods in the literature. The method is based on the maximum likelihood approach, which offers linearity and simplicity of implementation. An expression for the pairwise error probability (PEP) is derived based on the estimated channel, and the derived PEP is then used to determine the optimal power allocation for the pilot sequence. The performance of the proposed algorithm is demonstrated in highly frequency-selective channels, for different numbers of pilot symbols and different modulation schemes. The algorithm is also tested under different levels of Doppler shift and for different numbers of transmit and receive antennas. The results show that the proposed scheme minimises the error margin between slow and high speed receivers compared to similar channel estimation methods in the literature.
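    For context, the simplest form of pilot-aided estimation, a least-squares estimate at the pilot subcarriers followed by interpolation for a single antenna, can be sketched as below. The paper's iterative ML estimator for SFBC MIMO-OFDM extends well beyond this baseline.

```python
import numpy as np

def ls_pilot_estimate(y, pilots, pilot_idx, n_sc):
    """Least-squares pilot-aided channel estimate for one OFDM symbol.

    A simplified single-antenna sketch of the general idea only.

    y         : received frequency-domain symbol, length n_sc
    pilots    : known pilot values transmitted at pilot_idx
    pilot_idx : sorted subcarrier indices carrying pilots
    n_sc      : total number of subcarriers
    """
    # LS estimate at the pilot positions: H = Y / X.
    h_p = y[pilot_idx] / pilots
    # Linear interpolation across the remaining subcarriers,
    # done separately on the real and imaginary parts.
    k = np.arange(n_sc)
    h_re = np.interp(k, pilot_idx, h_p.real)
    h_im = np.interp(k, pilot_idx, h_p.imag)
    return h_re + 1j * h_im
```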

    Human Face Recognition Based on Fractal Image Coding

    Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but it remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of ρ and η prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate adaptation to the fast Fourier transform for fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods. As a result, our eye detection subsystem is faster and more accurate.

    The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but this requires accounting for the distance between codes and ensuring the continuity of the code parameters. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes. We only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work is that we investigate the FND in greater detail and use our findings to improve the recognition rate.

    Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes in illumination. These invariances are image dependent and are affected by the fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, the luminance shift factor and the type of range block partitioning. The contrast scaling factor affects the convergence and the eventual convergence rate of the fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate, because under certain conditions better results are achievable with a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning schemes: quad-tree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND-based classifier increases the separation between classes.

    The standard FND is further improved by incorporating localised weights. A local search algorithm is introduced to find the best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion-invariant properties described above. Combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate on the upper part of the face, encompassing the eyes and nose. This design was motivated by the fact that the region around the eyes carries more information for discrimination. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client-specific thresholding; in this case, our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, under some conditions the weighted FND performs better than the standard FND. However, the weighted FND still has its shortcomings on some datasets, where its performance is not much better than the standard FND. To alleviate this problem we introduce a voting scheme that operates on normalised versions of the weighted FND. Although there are no improvements at lower matching ranks with this method, there are significant improvements at larger matching ranks.

    Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those based on statistical learning theory. Among these advantages: new faces can be enrolled without re-training on the whole database; faces can be removed from the database without re-training; there are inherent invariances to face distortions; the method is relatively simple to implement; and it is not model-based, so there are no model parameters that need to be tweaked.
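    The FND itself is compact enough to sketch. The following is a minimal, hypothetical reading of the idea in Python, assuming a toy fractal code format (a list of range/domain block mappings with contrast and luminance parameters); the thesis's actual encoder, partitioning schemes and local weighting are richer than this.

```python
import numpy as np

def apply_fractal_code(img, code, rb=4):
    """Apply one iteration of a (simplified) fractal code to an image.

    code : list of (ri, rj, di, dj, s, o) tuples, one per rb x rb range
           block: range top-left (ri, rj), domain top-left (di, dj) of a
           2*rb x 2*rb domain block, contrast s and luminance shift o.
           The blocks are assumed to tile the whole image.
    """
    out = np.empty_like(img, dtype=float)
    for ri, rj, di, dj, s, o in code:
        d = img[di:di + 2 * rb, dj:dj + 2 * rb]
        # Downsample the domain block by 2x2 averaging, then apply the
        # contrast scaling and luminance shift stored in the code.
        d2 = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))
        out[ri:ri + rb, rj:rj + rb] = s * d2 + o
    return out

def fractal_neighbour_distance(probe, gallery_code, iters=2, rb=4):
    """FND between a probe image and a gallery identity's fractal code.

    Iterate the gallery code on the probe and measure how far the probe
    moves toward the code's attractor; a small distance means the probe
    is close to the attractor, i.e. a better match.
    """
    x = probe.astype(float)
    for _ in range(iters):
        x = apply_fractal_code(x, gallery_code, rb)
    return np.linalg.norm(probe - x)
```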