
    A linear construction for certain Kerdock and Preparata codes

    The Nordstrom-Robinson, Kerdock, and (slightly modified) Preparata codes are shown to be linear over Z_4, the integers mod 4. The Kerdock and Preparata codes are duals over Z_4, and the Nordstrom-Robinson code is self-dual. All these codes are just extended cyclic codes over Z_4. This provides a simple definition for these codes and explains why their Hamming weight distributions are dual to each other. First- and second-order Reed-Muller codes are also linear codes over Z_4, but Hamming codes in general are not, nor is the Golay code. Comment: 5 pages
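    As a quick illustration (not part of the abstract): the standard Gray map 0→00, 1→01, 2→11, 3→10 is what carries Z_4-linear codes such as these to their binary images. A minimal Python sketch, using a toy generator rather than the actual Kerdock or Preparata code:

```python
# Gray map from Z_4 symbols to binary pairs; it sends Lee weight over Z_4
# to Hamming weight over the binary image.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(codeword_z4):
    """Map a Z_4 codeword (iterable of symbols 0..3) to its binary Gray image."""
    return tuple(bit for sym in codeword_z4 for bit in GRAY[sym])

# Toy example: the Z_4-linear code generated by a single (hypothetical) vector,
# together with its binary image -- NOT the actual Kerdock/Preparata codes.
gen = (1, 2, 3, 0)
quaternary_code = [tuple((a * g) % 4 for g in gen) for a in range(4)]
for c in quaternary_code:
    print(c, "->", gray_image(c))
```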

    Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property

    Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard Compressed Sensing paradigm, the m × n measurement matrix A is required to act as a near isometry on the set of all k-sparse signals (Restricted Isometry Property or RIP). Although it is known that certain probabilistic processes generate m × n matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix A has this property, which is crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. We require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in n and only quadratic in m; the focus on expected performance is more typical of mainstream signal processing than the worst-case analysis that prevails in standard Compressed Sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes. Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, the special issue on Compressed Sensing
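    To make one of these families concrete, here is a hedged sketch of a discrete-chirp sensing matrix; the length m = 7 and the coherence check below are illustrative choices, not parameters taken from the paper.

```python
# Sketch: an m x m^2 discrete-chirp sensing matrix.  Column (a, b) has entries
# exp(2*pi*i*(a*t^2 + b*t)/m), t = 0..m-1; the unnormalized columns are closed
# under pointwise multiplication, the group structure mentioned in the abstract.
import numpy as np

def chirp_matrix(m):
    t = np.arange(m)
    cols = [np.exp(2j * np.pi * (a * t**2 + b * t) / m)
            for a in range(m) for b in range(m)]
    return np.stack(cols, axis=1) / np.sqrt(m)   # unit-norm columns

m = 7                          # small prime length, chosen only for illustration
A = chirp_matrix(m)            # shape (7, 49): n = m^2 columns, m measurements
G = np.abs(A.conj().T @ A)     # column coherence
np.fill_diagonal(G, 0.0)
print(A.shape, "max off-diagonal coherence:", round(G.max(), 3),
      "vs 1/sqrt(m) =", round(1 / np.sqrt(m), 3))
```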

    Deep Predictive Coding Neural Network for RF Anomaly Detection in Wireless Networks

    Intrusion detection has become one of the most critical tasks in a wireless network to prevent service outages that can take a long time to fix. The sheer variety of anomalous events necessitates adopting cognitive anomaly detection methods instead of traditional signature-based detection techniques. This paper proposes an anomaly detection methodology for wireless systems that is based on monitoring and analyzing radio frequency (RF) spectrum activities. Our detection technique leverages an existing solution for the video prediction problem and applies it to image sequences generated from monitoring the wireless spectrum. The deep predictive coding network is trained with images corresponding to the normal behavior of the system, and whenever there is an anomaly, its detection is triggered by the deviation between the actual and predicted behavior. For our analysis, we use images generated from the time-frequency spectrograms and spectral correlation functions of the received RF signal. We test our technique on a dataset that contains anomalies such as jamming, chirping of transmitters, spectrum hijacking, and node failure, and evaluate its performance using standard classifier metrics: detection ratio and false alarm rate. Simulation results demonstrate that the proposed methodology effectively detects many unforeseen anomalous events in real time. We discuss the applications, which encompass industrial IoT, autonomous vehicle control, and mission-critical communications services. Comment: 7 pages, 7 figures, Communications Workshop ICC'1
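    A rough sketch of the front end described above (spectrogram frames plus a prediction-error trigger). The paper's predictor is a deep predictive coding network trained on normal traffic; the previous-frame predictor, sample rate, injected jammer, and threshold below are all stand-ins for illustration.

```python
# Sketch: convert received samples into spectrogram frames and flag frames whose
# prediction error is large.  A trivial "predict the previous frame" baseline
# stands in for the deep predictive coding network.
import numpy as np
from scipy.signal import spectrogram

fs = 1e6                                    # hypothetical sample rate
t = np.arange(int(0.1 * fs)) / fs
rx = np.exp(2j * np.pi * 100e3 * t)         # normal behavior: a single carrier
rx[60000:] += 3 * np.exp(2j * np.pi * 250e3 * t[60000:])   # injected "jammer"

f, times, Sxx = spectrogram(rx, fs=fs, nperseg=256, return_onesided=False)
frames = np.log10(np.abs(Sxx) + 1e-12).T    # one row per time slice

threshold = 5.0                             # hypothetical, tuned on normal data
for i in range(1, len(frames)):
    err = np.linalg.norm(frames[i] - frames[i - 1])   # stand-in prediction error
    if err > threshold:
        print(f"anomaly suspected near t = {times[i]:.4f} s (frame error {err:.1f})")
```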

    Decoding of Projective Reed-Muller Codes by Dividing a Projective Space into Affine Spaces

    A projective Reed-Muller (PRM) code, obtained by modifying a (classical) Reed-Muller code with respect to a projective space, is a doubly extended Reed-Solomon code when the dimension of the related projective space is equal to 1. The minimum distance and dual code of a PRM code are known, and some decoding examples have been presented for low-dimensional projective spaces. In this study, we construct a decoding algorithm for all PRM codes by dividing a projective space into a union of affine spaces. In addition, we determine the computational complexity and the number of errors correctable by our algorithm. Finally, we compare the codeword error rate of our algorithm with that of minimum distance decoding. Comment: 17 pages, 4 figures
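    For context, the division used here relies on the standard partition of the projective space PG(m, q) into a disjoint union of affine spaces A^m, A^(m-1), ..., A^0. A small Python sketch of that partition (q = 2, m = 2 chosen only as an example):

```python
# Partition PG(m, q) into affine pieces: each point has a unique representative
# whose first nonzero coordinate is 1, and fixing the position of that leading 1
# gives copies of A^m, A^(m-1), ..., A^0.
from itertools import product

def projective_points(m, q):
    """Return PG(m, q) as a list of affine pieces (lists of coordinate tuples)."""
    pieces = []
    for lead in range(m + 1):                        # position of the leading 1
        piece = [(0,) * lead + (1,) + tail
                 for tail in product(range(q), repeat=m - lead)]
        pieces.append(piece)                         # a copy of A^(m - lead)
    return pieces

q, m = 2, 2
pieces = projective_points(m, q)
print([len(p) for p in pieces], "total =", sum(len(p) for p in pieces))
# [4, 2, 1] total = 7 = (q^(m+1) - 1) / (q - 1)
```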

    Properties and Construction of Polar Codes

    Recently, Arıkan introduced the method of channel polarization, on which one can construct efficient capacity-achieving codes, called polar codes, for any binary discrete memoryless channel. In this thesis, we show that the decoding algorithm of polar codes, called successive cancellation decoding, can be regarded as belief propagation decoding, which has been used for decoding low-density parity-check codes, on a tree graph. On the basis of this observation, we show an efficient construction method for polar codes using density evolution, which has been used to evaluate the error probability of belief propagation decoding on a tree graph. We further show that the channel polarization phenomenon and polar codes can be generalized to non-binary discrete memoryless channels. The asymptotic performance of non-binary polar codes, which use non-binary matrices called the Reed-Solomon matrices, is better than that of the best explicitly known binary polar code. We also find that the Reed-Solomon matrices can be considered a natural generalization of the original binary channel polarization introduced by Arıkan. Comment: Master's thesis. The supervisor is Toshiyuki Tanaka. 24 pages, 3 figures
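    To illustrate the construction step in the simplest case: on a binary erasure channel the density-evolution recursion reduces to the exact scalar updates z → 2z − z² and z → z², and the information set is taken to be the most reliable synthetic channels. A small sketch (the erasure probability, block length, and rate below are arbitrary choices):

```python
# Sketch: polarization of BEC(eps).  Each level splits an erasure parameter z
# into a worse channel (2z - z^2) and a better one (z^2); after n levels we keep
# the k most reliable of the N = 2^n synthetic channels as the information set.
def polarize_bec(eps, n_levels):
    z = [eps]
    for _ in range(n_levels):
        z = [v for x in z for v in (2 * x - x * x, x * x)]
    return z                                   # one erasure value per bit-channel

eps, n_levels, k = 0.5, 4, 8                   # illustrative parameters (N = 16, rate 1/2)
z = polarize_bec(eps, n_levels)
info_set = sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])
print(info_set)
print([round(z[i], 3) for i in info_set])
```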

    List decoding of noisy Reed-Muller-like codes

    First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are two fundamental error-correcting codes which arise in communication as well as in probabilistically-checkable proofs and learning. In this paper, we take the first steps toward extending the quick randomized decoding tools of RM(1) into the realm of quadratic binary and, equivalently, Z_4 codes. Our main algorithmic result is an extension of the RM(1) techniques from the Goldreich-Levin and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and RM(2). That is, given a signal s of length N, we find a list that is a superset of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times the norm of s, in time polynomial in k and log(N). We also give a new and simple formulation of a known Kerdock code as a subcode of the Hankel code. As a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm for finding a sparse Kerdock approximation. That is, for k small compared with 1/sqrt(N) and for epsilon > 0, we find, in time polynomial in (k log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at most the factor (1+epsilon+O(k^2/sqrt(N))) times that of the best such approximation.
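    For orientation, a hedged sketch of the exhaustive N log N Walsh-Hadamard baseline for RM(1): compute the dot product of s with every unit-norm first-order character and keep those above (1/sqrt(k)) times the norm of s. The algorithms in the abstract produce such a list in time polynomial in k and log(N) instead; the planted index, noise level, and k below are made up.

```python
# Brute-force RM(1) list decoding via the fast Walsh-Hadamard transform.
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (returns a new array of length 2^n)."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

n, k = 6, 4                        # hypothetical sizes: N = 64, threshold 1/sqrt(4)
N = 2 ** n
rng = np.random.default_rng(0)
truth = 37                         # hypothetical planted codeword index
s = np.array([(-1) ** bin(truth & x).count("1") for x in range(N)], dtype=float)
s += 0.8 * rng.standard_normal(N)  # additive noise

dots = fwht(s) / np.sqrt(N)        # dot products with unit-norm characters
candidates = [w for w in range(N) if abs(dots[w]) >= np.linalg.norm(s) / np.sqrt(k)]
print(truth in candidates, candidates)
```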