    The weight distribution and randomness of linear codes

    Finding the weight distribution of a block code is a problem of theoretical and practical interest, yet the weight distributions of most block codes are still unknown, apart from a few classes. Here, by using the principle of inclusion and exclusion, an explicit formula is derived that enumerates the complete weight distribution of an (n,k,d) linear code from a partially known weight distribution. This expression is analogous to the Pless power-moment identities, a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. An approximate formula for the weight distribution of most linear (n,k,d) codes is also derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^-(n-k) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties that make them similar to random codes with no structure at all.
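
    As a concrete check of the ratio claim, the sketch below brute-forces the weight distribution A_u of a short binary code and compares it with Q times the number of weight-u words, where Q = q^-(n-k). The [7,4] Hamming code is an illustrative choice, not a code taken from the paper; for so short a code the agreement is only rough, since the claim concerns large u in long codes.

```python
from itertools import product
from math import comb

# Generator matrix of the binary [7,4] Hamming code (an illustrative
# choice, not a code taken from the paper).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n, k, q = 7, 4, 2

# Enumerate all q^k codewords and tally the weight distribution A_u.
A = [0] * (n + 1)
for msg in product(range(q), repeat=k):
    cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
    A[sum(c != 0 for c in cw)] += 1

# Compare A_u with Q times the number of weight-u words in GF(q)^n.
Q = q ** -(n - k)
for u in range(n + 1):
    words_u = comb(n, u) * (q - 1) ** u   # words of weight u in GF(q)^n
    print(u, A[u], Q * words_u)
```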

    A labeling procedure for linear finite-state codes

    A method to define the labels of the state diagram of a linear finite-state code is presented and investigated. This method is particularly suitable for simple hardware implementation since it simplifies the encoder structure. The method can also be applied to the labeling of a state diagram that is not completely connected, to obtain a linear finite-state code with a larger free distance.

    On the decoder error probability of linear codes

    By using coding and combinatorial techniques, an approximate formula for the weight distribution of decodable words of most linear block codes is evaluated. This formula is then used to give an approximate expression for the decoder error probability P_E(u) of linear block codes, given that an error pattern of weight u has occurred. It is shown that P_E(u) approaches the constant Q as u becomes large, where Q is the probability that a completely random error pattern will cause decoder error.
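
    Under bounded-distance decoding of radius t, the constant Q has a simple interpretation: the q^k decoding spheres are disjoint, so a completely random word falls into some wrong sphere with probability close to V_t * q^-(n-k), where V_t is the volume of a radius-t sphere. The Monte Carlo sketch below (a short systematic binary code and all parameters chosen only for illustration) estimates P_E(u) empirically and prints this Q for comparison.

```python
import random
from itertools import product
from math import comb

n, k, q = 10, 4, 2

# A short systematic binary code (illustrative only, not from the
# paper); its minimum distance is computed by brute force below.
G = [
    [1, 0, 0, 0, 1, 1, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1, 0, 0, 1],
]
codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % q
                   for col in zip(*G))
             for msg in product(range(q), repeat=k)]

d = min(sum(x != 0 for x in c) for c in codewords if any(c))
t = (d - 1) // 2   # bounded-distance decoding radius

def decode(r):
    """Return the unique codeword within Hamming distance t of r,
    or None (decoding failure) if r lies in no decoding sphere."""
    for c in codewords:
        if sum(a != b for a, b in zip(r, c)) <= t:
            return c
    return None

# Estimate P_E(u): transmit the all-zero codeword, add a random error
# pattern of weight u, and count decodings to a *wrong* codeword.
random.seed(1)
trials = 5000
for u in range(t + 1, n + 1):
    wrong = 0
    for _ in range(trials):
        pos = set(random.sample(range(n), u))
        r = tuple(1 if i in pos else 0 for i in range(n))
        c = decode(r)
        if c is not None and any(c):
            wrong += 1
    print(f"P_E({u}) ~ {wrong / trials:.4f}")

V = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
print("Q =", V * q ** -(n - k))
```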

    A lower bound for the decoder error probability of the linear MDS code

    A lower bound for the decoder error probability P_E(u) of a linear maximum distance separable (MDS) code is derived by counting the dominant types of decodable words around codewords. The lower bound is shown to be similar in form, and numerically close, to the previously derived upper bound.

    More on the decoder error probability for Reed-Solomon codes

    The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatorial technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. P_E(u) is calculated with the exact formula for the (255,223) Reed-Solomon code used by NASA and for the (31,15) Reed-Solomon code (the JTIDS code), and it is observed to approach Q rapidly as u grows. An upper bound on the expression is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and laws of large numbers come into play.
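
    For reference, the Q named above has a standard closed form for an (n,k) MDS code over GF(q): with t = (n-k)/2 rounded down, Q is approximately q^-(n-k) * sum_{i=0}^{t} C(n,i)(q-1)^i, the fraction of GF(q)^n covered by the q^k disjoint decoding spheres. The sketch below evaluates this sphere-counting expression (not the paper's exact P_E(u) formula) for the two codes mentioned.

```python
from math import comb

def Q(n: int, k: int, q: int) -> float:
    """Probability that a completely random word falls inside one of the
    q^k disjoint radius-t decoding spheres of an (n, k) MDS code over
    GF(q), where t = (d - 1) // 2 and d = n - k + 1."""
    t = (n - k) // 2
    V = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))  # sphere volume
    return V * q ** -(n - k)

# The two codes named in the abstract.
print("(255,223) RS over GF(256):", Q(255, 223, 256))  # NASA code, t = 16
print("(31,15) RS over GF(32):  ", Q(31, 15, 32))      # JTIDS code, t = 8
```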

    An adaptive vector quantization scheme

    Vector quantization is known to be an effective compression scheme for achieving the low bit rates needed to minimize communication channel bandwidth and digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required by vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because of its simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
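
    The abstract does not spell out the algorithm itself, so the following is only a generic sketch of an adaptive vector quantizer built on a sum-of-absolute-differences distance, which indeed needs nothing beyond addition and subtraction; the codebook-update rule and every parameter here are assumptions made for illustration, not the paper's method.

```python
def l1(a, b):
    """Sum of absolute differences: uses only additions and subtractions."""
    return sum(abs(x - y) for x, y in zip(a, b))

def adaptive_vq_encode(vectors, codebook_size=8, threshold=40):
    """Illustrative adaptive VQ (hypothetical update rule): emit the index
    of the nearest codebook entry under L1 distance; if no entry is close
    enough, adapt the codebook by adopting the input vector itself, so the
    codebook tracks the local statistics of the source."""
    codebook, indices = [], []
    for v in vectors:
        best, dist = -1, None
        for i, c in enumerate(codebook):
            d = l1(v, c)
            if dist is None or d < dist:
                best, dist = i, d
        if dist is None or dist > threshold:   # threshold is illustrative
            if len(codebook) < codebook_size:
                codebook.append(list(v))       # grow the codebook
                best = len(codebook) - 1
            else:
                codebook[best] = list(v)       # overwrite nearest entry
        indices.append(best)
    return indices, codebook
```

    In a real system the adapted codebook entries must also reach the decoder (or be derivable from already-decoded data); the sketch shows only the encoder side.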

    A comparison of the fractal and JPEG algorithms

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry-standard algorithm for image compression are compared. In every case examined, the JPEG algorithm was superior to the fractal method at a given compression ratio, according to both a root-mean-square error criterion and a peak signal-to-noise ratio criterion.
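
    The two criteria named here are standard; a minimal sketch of both, assuming 8-bit grayscale images held as NumPy arrays:

```python
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square error between two equally sized images."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    return 20.0 * np.log10(peak / rmse(a, b))
```

    Comparing two codecs then amounts to decoding each compressed image and evaluating these measures against the original at matched compression ratios: lower RMSE and higher PSNR indicate better fidelity.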

    Locally adaptive vector quantization: Data compression with feature preservation

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed, one-pass compression, is fully adaptable to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but at much higher speed, so the algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

    Frame synchronization methods based on channel symbol measurements

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel-symbol sync methods and compared with results for decoded-bit sync methods reported elsewhere. Each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that sync statistics based on decoded bits are superior when the desired operating point requires a miss probability many orders of magnitude higher than the false-alarm probability; this operating point is applicable for very large frame lengths and a minimal frame-to-frame verification strategy. On the other hand, statistics based on channel symbols are superior when the desired operating point has a miss probability only a few orders of magnitude greater than the false-alarm probability, as happens for small frames or when frame-to-frame verifications are required.
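
    As a toy illustration of the channel-symbol approach, the sketch below correlates noisy BPSK symbols against a known marker pattern and takes the frame boundary at the correlation peak; the marker length, frame size, and noise level are arbitrary assumptions, not DSN parameters.

```python
import numpy as np

def soft_sync_metric(symbols: np.ndarray, marker_bits: np.ndarray) -> np.ndarray:
    """Soft correlation of received channel symbols against a known
    marker: at each offset, sum the symbols weighted by the +/-1 marker
    pattern. The true frame boundary appears as a correlation peak."""
    marker = 1.0 - 2.0 * marker_bits           # map bits {0,1} -> {+1,-1}
    L = len(marker)
    return np.array([np.dot(symbols[i:i + L], marker)
                     for i in range(len(symbols) - L + 1)])

# Illustrative use: a 16-bit marker embedded at offset 50 in a stream of
# noisy BPSK symbols (all values below are assumptions).
rng = np.random.default_rng(0)
marker_bits = rng.integers(0, 2, 16)
frame = rng.integers(0, 2, 200)
frame[50:66] = marker_bits
symbols = (1.0 - 2.0 * frame) + rng.normal(0, 0.5, 200)   # noisy BPSK
print(int(np.argmax(soft_sync_metric(symbols, marker_bits))))  # expect ~50
```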

    Performance of concatenated codes using 8-bit and 10-bit Reed-Solomon codes

    The performance improvement of concatenated coding systems using 10-bit instead of 8-bit Reed-Solomon codes is measured by simulation. Three inner convolutional codes are considered: (7,1/2), (15,1/4), and (15,1/6). It is shown that approximately 0.2 dB can be gained at a bit error rate of 10^-6. The loss due to nonideal interleaving is also evaluated. Performance comparisons at very low bit error rates may be relevant for systems using data compression.