
    A Method to determine Partial Weight Enumerator for Linear Block Codes

    In this paper we present a fast and efficient method to find the partial weight enumerator (PWE) of binary linear block codes by using the error impulse technique and the Monte Carlo method. This PWE can be used to compute an upper bound on the error probability of the soft-decision maximum likelihood decoder (MLD). As an application of this method, we give partial weight enumerators and analytical performances of the shortened BCH(130,66), BCH(103,47) and BCH(111,55) codes; the first code is obtained by shortening the binary primitive BCH(255,191,17) code, and the other two are obtained by shortening the binary primitive BCH(127,71,19) code. To our knowledge, the weight distributions of these three codes are unknown. Comment: Computer Engineering and Intelligent Systems Vol 3, No.11, 201
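
    As an illustration of how a PWE feeds such a bound, the sketch below evaluates the standard union upper bound on the word-error probability of soft-decision MLD over AWGN, P_e <= sum_w A_w * Q(sqrt(2*w*R*Eb/N0)). The enumerator values in pwe are placeholders, not the enumerators computed in the paper.

```python
# Illustrative sketch: union upper bound on the word-error probability of
# soft-decision MLD from a (partial) weight enumerator. The PWE terms below
# are placeholder values, not the enumerators reported in the paper.
from math import sqrt, erfc

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def union_bound(pwe: dict, n: int, k: int, ebn0_db: float) -> float:
    """Upper bound: P_e <= sum_w A_w * Q(sqrt(2 * w * R * Eb/N0))."""
    rate = k / n
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum(a_w * q_func(sqrt(2.0 * w * rate * ebn0)) for w, a_w in pwe.items())

# Hypothetical partial weight enumerator {weight: multiplicity} for a (130, 66) code.
pwe = {18: 50, 20: 1200, 22: 31000}
for snr in (2.0, 3.0, 4.0):
    print(f"Eb/N0 = {snr} dB  ->  P_e <= {union_bound(pwe, 130, 66, snr):.3e}")
```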

    Decoding of Block Codes by using Genetic Algorithms and Permutations Set

    Recently, genetic algorithms have been successfully used for decoding some classes of error-correcting codes. To decode a linear block code C, these genetic algorithms compute a permutation p of the code generator matrix depending on the received word. Our main contribution in this paper is to choose the permutation p from the automorphism group of C. This choice reduces the complexity of re-encoding in the decoding steps when C is cyclic, and also allows the proposed genetic decoding algorithm to be generalized to binary nonlinear block codes such as the Kerdock codes. In this paper, an efficient stopping criterion is proposed; it considerably reduces the decoding complexity of our algorithm. Simulation results for the proposed decoder over the AWGN channel show that it reaches the error-correcting performance of its competitors. A study of the complexity shows that the proposed decoder is less complex than its competitors that are also based on genetic algorithms.
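
    A minimal sketch of a genetic decoder loop of the general kind described above is given below. The fitness is the correlation between a re-encoded candidate and the received soft values; the GA parameters and the omission of the automorphism-group permutation step are illustrative simplifications, not the paper's algorithm.

```python
# Toy GA decoder sketch (assumptions, not the paper's method). In the paper,
# the received word is first permuted by an element of Aut(C) so that
# re-encoding stays cheap; that step is omitted here for brevity.
import random
import numpy as np

def encode(info_bits: np.ndarray, G: np.ndarray) -> np.ndarray:
    return info_bits @ G % 2

def fitness(info_bits, G, received_soft):
    codeword = encode(info_bits, G)          # bits in {0, 1}
    bpsk = 1 - 2 * codeword                  # map to {+1, -1}
    return float(bpsk @ received_soft)       # correlation: higher is better

def ga_decode(G, received_soft, pop_size=20, generations=50, p_mut=0.03):
    k = G.shape[0]
    pop = [np.random.randint(0, 2, k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -fitness(ind, G, received_soft))
        elite = pop[: pop_size // 2]         # keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, k)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flips = np.random.rand(k) < p_mut            # bit-flip mutation
            child = np.where(flips, 1 - child, child)
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda ind: fitness(ind, G, received_soft))
    return encode(best, G)

# Example with a toy (7,4) generator matrix and a noiseless received word.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
tx = 1 - 2 * encode(np.array([1, 0, 1, 1]), G)   # BPSK of a true codeword
print("decoded:", ga_decode(G, tx.astype(float)))
```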

    A new efficient way based on special stabilizer multiplier permutations to attack the hardness of the minimum weight search problem for large BCH codes

    BCH codes represent an important class of cyclic error-correcting codes; their minimum distances are known only in some cases, and determining them remains an open NP-hard problem in coding theory, especially for large lengths. This paper presents an efficient scheme, ZSSMP (Zimmermann Special Stabilizer Multiplier Permutation), to find the true value of the minimum distance of many large BCH codes. The proposed method consists in searching for a minimum-weight codeword with the Zimmermann algorithm in the subcodes fixed by special stabilizer multiplier permutations. These few subcodes have very small dimensions compared to the dimension of the considered code itself, so the search for a codeword of globally minimum weight is simplified in terms of run-time complexity. ZSSMP is validated on all BCH codes of length 255, for which it gives the exact value of the minimum distance. For BCH codes of length 511, the proposed technique considerably outperforms the well-known powerful scheme of Canteaut and Chabaud used to attack code-based public-key cryptosystems. ZSSMP is very fast and catches the smallest-weight codewords in a few seconds. By exploiting the efficiency and speed of ZSSMP, the true minimum distances, and consequently the error-correcting capabilities, of all 165 BCH codes of length up to 1023 are determined, except for the two cases of the BCH(511,148) and BCH(511,259) codes. The comparison of ZSSMP with other powerful methods proves its quality for attacking the hardness of the minimum weight search problem, at least for the codes studied in this paper.
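
    To make the core idea concrete, the toy sketch below exhaustively searches a very small subcode for its minimum-weight nonzero codeword; G_sub is a placeholder for the generator matrix of a subcode fixed by a stabilizer multiplier permutation, and brute force stands in for the Zimmermann algorithm that ZSSMP actually uses.

```python
# Illustrative sketch only: the minimum weight found in a subcode of C gives
# an upper bound on d(C); the paper's claim is that searching the subcodes
# fixed by special stabilizer multiplier permutations recovers the true value.
from itertools import product
import numpy as np

def min_weight_bruteforce(G_sub: np.ndarray) -> int:
    """Enumerate all 2^k - 1 nonzero messages of the subcode; feasible only
    because the fixed subcodes have very small dimension."""
    k = G_sub.shape[0]
    best = G_sub.shape[1] + 1
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue
        codeword = np.array(msg) @ G_sub % 2
        best = min(best, int(codeword.sum()))
    return best

# Hypothetical 3-dimensional subcode of length 7 (placeholder values).
G_sub = np.array([[1, 0, 0, 1, 1, 0, 1],
                  [0, 1, 0, 1, 0, 1, 1],
                  [0, 0, 1, 0, 1, 1, 1]])
print("minimum weight of the subcode:", min_weight_bruteforce(G_sub))
```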

    Enhancing feature selection with a novel hybrid approach incorporating genetic algorithms and swarm intelligence techniques

    Advances in computing and data storage are leading to rapid growth in large-scale datasets. Using all features increases temporal and spatial complexity and negatively influences performance. Feature selection is a fundamental stage in data preprocessing: it removes redundant and irrelevant features to minimize the number of features and enhance classification accuracy. Numerous optimization algorithms have been employed to handle feature selection (FS) problems, and they outperform conventional FS techniques. However, no single metaheuristic FS method outperforms the other optimization algorithms across many datasets. This motivated our study to combine the advantages of various optimization techniques into a powerful technique that outperforms other methods on many datasets from different domains. In this article, a novel combined method, GASI, is developed using swarm intelligence (SI) based feature selection techniques and genetic algorithms (GA); it uses a multi-objective fitness function to seek the optimal subset of features. To assess the performance of the proposed approach, seven datasets were collected from the UCI repository and used to test the newly established feature selection technique. The experimental results demonstrate that the suggested method, GASI, outperforms many of the powerful SI-based feature selection techniques studied: GASI obtains a better average fitness value and improves classification performance.
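
    The sketch below shows a multi-objective FS fitness of the kind such hybrids typically optimize, trading classification error against the fraction of selected features. The weighting alpha and the k-NN wrapper classifier are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of a wrapper-style, multi-objective FS fitness (illustrative
# assumptions: alpha = 0.99 weighting and a 5-NN classifier, 5-fold CV).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray, alpha: float = 0.99) -> float:
    """Lower is better: alpha * CV error + (1 - alpha) * selected-feature ratio."""
    if mask.sum() == 0:               # empty subsets are invalid
        return float("inf")
    X_sel = X[:, mask.astype(bool)]
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_sel, y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / X.shape[1]

# Example with random data (illustrative only).
rng = np.random.default_rng(0)
X, y = rng.random((100, 10)), rng.integers(0, 2, 100)
mask = rng.integers(0, 2, 10)         # a candidate subset from GA/SI search
print("fitness:", fs_fitness(mask, X, y))
```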

    Automatic processing: Semantic analysis and translation of frozen expressions with Nooj

    The purpose of this article is to define more closely the notion of freezing which, despite numerous publications in the field, remains a vague concept. These reflections are the result of a research project that aims to build an almost exhaustive database of fixed verbal expressions in English. After a review of the essential properties of such expressions, it is indicated why each of these criteria is problematic. One of the major problems lies in the existence of related phenomena: from a semantic point of view, fixed expressions participate in the general phenomenon of polysemy; lexical solidarity brings them closer to collocations; and finally, morphosyntactic fixity is present in many fixed sentences that are conversational routines, and even partly in so-called free syntax.

    Efficiency of two decoders based on hash techniques and syndrome calculation over a Rayleigh channel

    The explosive growth of connected devices demands high quality and reliability in data transmission and storage. Error-correcting codes (ECCs) contribute to this in ways that are not very apparent to the end user, yet are indispensable and effective at the most basic level of transmission. This paper presents an investigation and analysis of the performance of two decoders that are based on hash techniques and syndrome calculation over a Rayleigh channel. The decoders under study share two main features: a reduced complexity compared to their competitors and good error-correcting performance over an additive white Gaussian noise (AWGN) channel. When applied to decode some linear block codes, such as Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and quadratic residue (QR) codes, over a Rayleigh channel, the experimental and comparative results show the efficiency of these decoders in terms of guaranteed performance measured in bit error rate (BER). For example, the coding gain obtained by syndrome decoding and hash techniques (SDHT) applied to the BCH(31, 11, 11) code equals 34.5 dB, i.e., a reduction rate of 75% compared to the case where the exchange is carried out without any coding and decoding process.
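
    The general technique behind such decoders can be sketched as syndrome decoding with a precomputed lookup (hash) table; the tiny Hamming(7,4) code below is only a stand-in for the BCH and QR codes studied in the paper.

```python
# Minimal syndrome-decoding sketch with a hash (dict) table: precompute the
# lowest-weight error pattern for each syndrome, then correct by table lookup.
from itertools import combinations
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of Hamming(7,4)
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def build_syndrome_table(H: np.ndarray, t: int = 1) -> dict:
    """Map each syndrome (as a tuple) to its lowest-weight error pattern."""
    n = H.shape[1]
    table = {tuple(np.zeros(H.shape[0], dtype=int)): np.zeros(n, dtype=int)}
    for w in range(1, t + 1):             # weights in ascending order
        for positions in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            table.setdefault(tuple(H @ e % 2), e)
    return table

def decode(received: np.ndarray, H: np.ndarray, table: dict) -> np.ndarray:
    syndrome = tuple(H @ received % 2)
    error = table.get(syndrome, np.zeros(H.shape[1], dtype=int))
    return (received + error) % 2

table = build_syndrome_table(H)
r = np.array([1, 0, 1, 1, 0, 1, 1])       # hard-decision word with one bit error
print("corrected word:", decode(r, H, table))
```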

    Negation and Speculation in NLP: A Survey, Corpora, Methods, and Applications

    Negation and speculation are universal linguistic phenomena that affect the performance of Natural Language Processing (NLP) applications, such as those for opinion mining and information retrieval, especially in biomedical data. In this article, we review the corpora annotated with negation and speculation in various natural languages and domains. Furthermore, we discuss ongoing research into recent rule-based, supervised, and transfer-learning techniques for the detection of negated and speculative content. Many English corpora for various domains are now annotated with negation and speculation; moreover, the availability of annotated corpora in other languages has started to increase. However, this growth is insufficient to address these important phenomena in languages with limited resources. The use of cross-lingual models and translation from well-known languages are acceptable alternatives. We also highlight the lack of consistent annotation guidelines and the shortcomings of existing techniques, and suggest alternatives that may speed up progress in this research direction. Adding more syntactic features may alleviate the limitations of existing techniques, such as cue ambiguity and the detection of discontinuous scopes. In some NLP applications, the inclusion of a system that is negation- and speculation-aware improves performance, yet this aspect is still not addressed or is not considered an essential step.
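
    As a toy illustration of the rule-based cue-and-scope approach the survey covers (in the spirit of NegEx), the sketch below marks tokens following a cue, up to the next punctuation, as in scope; the cue lists are illustrative and far smaller than any real lexicon.

```python
# Toy cue-and-scope sketch (illustrative assumptions; not a method from the
# survey): a tiny cue lexicon plus a punctuation-bounded scope heuristic.
import re

NEGATION_CUES = {"no", "not", "never", "without", "denies"}
SPECULATION_CUES = {"may", "might", "possibly", "suggests", "suspected"}

def mark_scopes(sentence: str, cues: set) -> list:
    tokens = re.findall(r"\w+|[^\w\s]", sentence.lower())
    in_scope, spans = False, []
    for tok in tokens:
        if tok in cues:
            in_scope = True          # a cue opens a scope
        elif not tok.isalnum():
            in_scope = False         # punctuation closes the scope
        elif in_scope:
            spans.append(tok)        # token falls inside the open scope
    return spans

print(mark_scopes("The patient denies chest pain, but reports nausea.", NEGATION_CUES))
# -> ['chest', 'pain']
```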