
    A Note on Cyclic Codes from APN Functions

    Cyclic codes, as linear block error-correcting codes, play a vital role in coding theory and have wide applications. Ding in \cite{D} constructed a number of classes of cyclic codes from almost perfect nonlinear (APN) functions and planar functions over finite fields and presented ten open problems on cyclic codes from highly nonlinear functions. In this paper, we consider two of these open problems, involving the inverse APN function $f(x)=x^{q^m-2}$ and the Dobbertin APN function $f(x)=x^{2^{4i}+2^{3i}+2^{2i}+2^{i}-1}$. By calculating the linear spans and minimal polynomials of two sequences generated by these two classes of APN functions, the dimensions of the corresponding cyclic codes are determined and lower bounds on their minimum weights are presented. In fact, we present a framework for computing the minimal polynomial and linear span of the sequence $s^{\infty}$ defined by $s_t=Tr((1+\alpha^t)^e)$, where $\alpha$ is a primitive element of $GF(q)$. These techniques can also be applied to other open problems in \cite{D}.
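
    As a rough illustration of the objects in this framework, the sketch below builds the binary sequence $s_t=Tr((1+\alpha^t)^e)$ over the small field $GF(2^4)$ with the inverse-function exponent $e=2^m-2$ and measures its linear span with the Berlekamp-Massey algorithm. The field, primitive polynomial, and exponent are illustrative choices, not the paper's parameters.

        # Minimal sketch in Python, assuming GF(2^4) with the primitive
        # polynomial x^4 + x + 1; not the paper's construction.
        M = 4
        MOD = 0b10011              # x^4 + x + 1
        E = (1 << M) - 2           # e = 2^m - 2, the inverse-function exponent

        def gf_mul(a, b):
            # carry-less multiplication reduced modulo MOD
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & (1 << M):
                    a ^= MOD
            return r

        def gf_pow(a, n):
            r = 1
            while n:
                if n & 1:
                    r = gf_mul(r, a)
                a = gf_mul(a, a)
                n >>= 1
            return r

        def trace(a):
            # absolute trace Tr(a) = a + a^2 + ... + a^(2^(M-1)); value is 0 or 1
            t, x = 0, a
            for _ in range(M):
                t ^= x
                x = gf_mul(x, x)
            return t

        def linear_span(bits):
            # Berlekamp-Massey over GF(2): length of the shortest LFSR
            # that generates the given bit sequence
            c, b, L, m = [1], [1], 0, 1
            for n, bit in enumerate(bits):
                d = bit
                for i in range(1, len(c)):
                    if i <= n:
                        d ^= c[i] & bits[n - i]
                if d:
                    t = c[:]
                    c += [0] * max(0, len(b) + m - len(c))
                    for i, bi in enumerate(b):
                        c[i + m] ^= bi
                    if 2 * L <= n:
                        L, b, m = n + 1 - L, t, 1
                    else:
                        m += 1
                else:
                    m += 1
            return L

        period = (1 << M) - 1      # the period of s^inf divides 2^m - 1
        alpha = 0b10               # the class of x, primitive since MOD is primitive
        seq = [trace(gf_pow(1 ^ gf_pow(alpha, t), E)) for t in range(2 * period)]
        print("linear span:", linear_span(seq))

    The point of the sketch is only the connection the paper exploits: the dimension of the associated cyclic code can be read off from the linear span of $s^{\infty}$, which the paper determines in closed form rather than numerically.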

    The Weight Distributions of Cyclic Codes and Elliptic Curves

    Cyclic codes with two zeros and their dual codes, a practically and theoretically interesting class of linear codes, have been studied for many years. However, the weight distributions of cyclic codes are difficult to determine in general. Using elliptic curves, this paper determines the weight distributions of the dual codes of cyclic codes with two zeros in a few more cases.
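
    For intuition about what is being determined, the brute-force sketch below (an assumed setup, not the paper's elliptic-curve method) tallies the weight distribution of the dual of a binary cyclic code of length $n=2^m-1$ with two zeros, using the trace representation of its codewords, $c_{a,b}=(Tr(a\alpha^t+b\alpha^{3t}))_{t=0}^{n-1}$. The field $GF(2^5)$, the exponent 3, and the sign convention on the zeros are illustrative assumptions.

        # Minimal sketch in Python: enumerate all codewords c_{a,b} and
        # tally Hamming weights; GF(2^5) and the exponent 3 are assumptions.
        from collections import Counter

        M = 5
        MOD = 0b100101             # x^5 + x^2 + 1, primitive for GF(2^5)

        def gf_mul(a, b):
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & (1 << M):
                    a ^= MOD
            return r

        def trace(a):
            t, x = 0, a
            for _ in range(M):
                t ^= x
                x = gf_mul(x, x)
            return t

        n = (1 << M) - 1
        pow_alpha = [1]
        for _ in range(1, n):
            pow_alpha.append(gf_mul(pow_alpha[-1], 0b10))

        dist = Counter()
        for a in range(1 << M):
            for b in range(1 << M):
                # trace values are 0/1, so the sum is the Hamming weight
                w = sum(trace(gf_mul(a, pow_alpha[t]) ^ gf_mul(b, pow_alpha[3 * t % n]))
                        for t in range(n))
                dist[w] += 1
        print(sorted(dist.items()))    # weight -> number of codewords

    Closed-form results of the kind the paper proves replace this exhaustive enumeration: the number of codewords of each weight is expressed through point counts on an associated elliptic curve.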

    A study on mutual information-based feature selection for text categorization

    Feature selection plays an important role in text categorization (TC). Automatic feature selection methods such as document frequency thresholding (DF), information gain (IG), and mutual information (MI) are commonly applied in TC. Many existing experiments show that IG is one of the most effective methods; by contrast, MI has been reported to perform relatively poorly. According to one existing MI method, the mutual information of a category c and a term t can be negative, which conflicts with the definition of MI in information theory, where it is always non-negative. We show that the form of MI used in TC is not derived correctly from information theory: two different MI-based feature selection criteria are both referred to as MI in the TC literature, and one of them should properly be termed "pointwise mutual information" (PMI). In this paper, we clarify the terminological confusion surrounding the notion of "mutual information" in TC and detail an MI method derived correctly from information theory. Experiments on the Reuters-21578 and OHSUMED collections show that the corrected MI method performs comparably to IG and considerably better than PMI.
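
    To make the distinction concrete, the sketch below (with hypothetical counts, not the paper's data) computes both criteria from a 2x2 term-category contingency table: PMI is the log-ratio evaluated only at term presence and can be negative, while the corrected MI is the expectation of that log-ratio over all four presence/membership combinations and is always non-negative.

        # Minimal sketch in Python; the counts are hypothetical.
        from math import log

        def scores(n11, n10, n01, n00):
            """n11: docs in c containing t; n10: in c, without t;
            n01: outside c, containing t; n00: neither. n11 must be > 0."""
            n = n11 + n10 + n01 + n00
            p = [[n00 / n, n01 / n], [n10 / n, n11 / n]]   # p[c][t]
            pt = [p[0][0] + p[1][0], p[0][1] + p[1][1]]    # P(t=0), P(t=1)
            pc = [sum(p[0]), sum(p[1])]                    # P(c=0), P(c=1)
            pmi = log(p[1][1] / (pt[1] * pc[1]))           # pointwise, may be < 0
            mi = sum(p[c][t] * log(p[c][t] / (pt[t] * pc[c]))
                     for c in (0, 1) for t in (0, 1) if p[c][t] > 0)
            return pmi, mi                                 # mi is always >= 0

        # a term slightly under-represented in the category
        pmi, mi = scores(n11=5, n10=95, n01=100, n00=800)
        print(f"PMI = {pmi:.4f} (negative here), MI = {mi:.4f} (non-negative)")

    With these counts the term occurs in 10.5% of all documents but in only 5% of the category's documents, so PMI is negative while MI remains a small non-negative quantity, which is exactly the discrepancy the abstract points out.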

    Exploration on Intelligent Teaching of Probability and Statistics in Universities

    Starting from the concept of intelligent teaching, this paper briefly introduces the meaning of the intelligent classroom and then describes tools for intelligent teaching. Taking Probability and Statistics as an example, it explores a mild form of intelligent teaching with Mosotech as the teaching tool, which can be used before, during, and after class under the intelligent teaching mode. Keywords: intelligent teaching, intelligent teaching tools, probability and statistics. DOI: 10.7176/JEP/12-27-03. Publication date: September 30th, 2021