45 research outputs found

    Information theory : proceedings of the 1990 IEEE international workshop, Eindhoven, June 10-15, 1990

    Adding RLL Properties to Four CCSDS LDPC Codes Without Increasing Their Redundancy

    This paper presents the construction of Run Length Limited (RLL) Error Control Codes (ECCs) from four Low Density Parity Check (LDPC) codes specified by the Consultative Committee for Space Data Systems (CCSDS). The obtained RLL-ECCs are a practical alternative to the CCSDS codes with pseudo-randomizers. Their advantage is that the maximal runlengths of equal symbols in their codeword sequences are guaranteed, which is not the case with the common pseudo-randomizer approach. Furthermore, no additional redundancy is introduced into the encoded codewords, and the encoding and decoding procedures of the original CCSDS error control codes do not have to be modified in two cases: first, if hard decoding is used and the transmission channel can be modeled as a Binary Symmetric Channel (BSC); second, if soft decoding with coherent Binary Phase Shift Keying (BPSK) modulation is used and the appropriate channel model is an Additive White Gaussian Noise (AWGN) channel.
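    To make the runlength guarantee concrete, the following minimal Python sketch (illustrative, not taken from the paper) computes the quantity the construction bounds: the maximal runlength of equal symbols in a codeword. An RLL code guarantees a hard upper bound on this value for every codeword, whereas a pseudo-randomizer only makes long runs improbable.

```python
from itertools import groupby

def max_runlength(codeword):
    """Length of the longest run of equal consecutive symbols in a sequence."""
    return max(len(list(run)) for _, run in groupby(codeword))

# A hard RLL guarantee means max_runlength(c) <= some fixed bound for
# EVERY codeword c; randomization gives no such worst-case certainty.
assert max_runlength([1, 0, 0, 0, 0, 1, 1]) == 4
```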

    Variable- and fixed-length balanced runlength-limited codes based on a Knuth-like balancing method

    Abstract: A novel Knuth-like balancing method for runlength-limited words is presented, which forms the basis of new variable- and fixed-length balanced runlength-limited codes that improve on the code rate as compared to balanced runlength-limited codes based on Knuth's original balancing procedure as developed by Immink et al. While Knuth's original balancing procedure, as incorporated by Immink et al., requires the inversion of each bit one at a time, our balancing procedure only inverts whole runs one at a time. The advantage of this approach is that the number of possible inversion points, which needs to be encoded by a redundancy-contributing prefix/suffix, is reduced, thereby allowing a better code rate to be achieved. Furthermore, this balancing method also allows for runlength-violating markers which improve, in a number of respects, on the optimal such markers based on Knuth's original balancing method.
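    For background, here is a minimal Python sketch of Knuth's original bit-at-a-time balancing procedure, the baseline the run-at-a-time method above improves on (the sketch is mine, not the authors'). Because inverting one more bit changes the weight by exactly one, a balancing index always exists; the paper's method restricts the candidate inversion points to run boundaries, so fewer positions need to be encoded in the prefix/suffix.

```python
def knuth_balance(word):
    """Knuth's original balancing: invert the first i bits, increasing i
    one bit at a time, until the word contains equally many 0s and 1s.
    Returns the balanced word and the index i, which must be conveyed
    to the decoder via a redundancy-contributing prefix/suffix."""
    n = len(word)
    assert n % 2 == 0, "balancing needs an even-length word"
    for i in range(n + 1):
        candidate = [1 - b for b in word[:i]] + list(word[i:])
        if sum(candidate) == n // 2:
            return candidate, i
    raise AssertionError("unreachable: a balancing index always exists")

print(knuth_balance([1, 1, 1, 1, 0, 1]))  # ([0, 0, 1, 1, 0, 1], 2)
```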

    A general construction of constrained parity-check codes for optical recording

    This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error events or error event combinations of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Examples of several newly designed codes are illustrated. Simulation results with Blu-ray Disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate 2/3 code without parity, at both nominal density and high density.
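    As a small illustration of the NRZI/NRZ distinction the two design approaches hinge on, the Python sketch below (an illustrative toy, not the paper's design procedure) converts an NRZI word, in which a 1 marks a transition, into the corresponding NRZ channel word; a d = 1 constraint in NRZI (at least one 0 between 1s) then appears as a minimum runlength of 2 in NRZ.

```python
def nrzi_to_nrz(nrzi, initial=0):
    """1/(1 xor D) precoder: nrz[k] = nrz[k-1] ^ nrzi[k].
    In NRZI notation a 1 denotes a transition of the recorded waveform."""
    nrz, level = [], initial
    for bit in nrzi:
        level ^= bit          # a 1 flips the current signal level
        nrz.append(level)
    return nrz

# d = 1 word in NRZI (no two adjacent 1s) -> runs of length >= 2 in NRZ.
print(nrzi_to_nrz([1, 0, 0, 1, 0, 1, 0, 0]))  # [1, 1, 1, 0, 0, 1, 1, 1]
```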

    Modulation codes


    Modulation codes for mobile communications

    M.Ing. (Electrical and Electronic Engineering). Please refer to the full text to view the abstract.

    Channel coding techniques for a multiple track digital magnetic recording system

    In magnetic recording, greater areal bit packing densities are achieved by increasing track density, through reducing the space between and the width of the recording tracks, and/or by reducing the wavelength of the recorded information. This leads to a requirement for higher precision tape transport mechanisms and dedicated coding circuitry. A TMS32010 digital signal processor is applied to a standard low-cost, low-precision, multiple-track, compact cassette tape recording system. Advanced signal processing and coding techniques are employed to maximise recording density and to compensate for the mechanical deficiencies of this system. Parallel software encoding/decoding algorithms have been developed for several Run-Length Limited modulation codes. The results for a peak detection system show that the Bi-Phase L code can be reliably employed up to a data rate of 5 kbits/second/track.

    Development of a second system employing a TMS32025 and sampling detection permitted the utilisation of adaptive equalisation to slim the readback pulse. Application of conventional read equalisation techniques, which oppose inter-symbol interference, resulted in a 30% increase in performance. Further investigation shows that greater linear recording densities can be achieved by employing Partial Response signalling and Maximum Likelihood Detection. Partial response signalling schemes use controlled inter-symbol interference to increase recording density at the expense of a multi-level readback waveform, which results in an increased noise penalty; maximum likelihood sequence detection employs soft decisions on the readback waveform to recover this loss. The associated modulation coding techniques required for optimised operation of such a system are discussed.

    Two-dimensional run-length-limited (d, ky) modulation codes provide a further means of increasing storage capacity in multi-track recording systems. For example, the code rate of a single-track run-length-limited code with constraints (1, 3), such as the Miller code, can be increased by over 25% when using a 4-track two-dimensional code with the same d constraint and with the k constraint satisfied across a number of parallel channels. The k constraint along an individual track, kx, can be increased without loss of clock synchronisation, since the clocking information derived from frequent signal transitions can be sub-divided across a number, y, of parallel tracks in terms of a ky constraint. This permits more code words to be generated for a given (d, k) constraint in two dimensions than is possible in one dimension. This coding technique is furthered by the development of a reverse enumeration scheme based on the trellis description of the (d, ky) constraints. The application of a two-dimensional code to a high linear density system employing extended class IV partial response signalling and maximum likelihood detection is proposed. Finally, additional coding constraints to improve spectral response and error performance are discussed.

    Hewlett Packard, Computer Peripherals Division (Bristol)
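    To illustrate how the k constraint can be shared across tracks, here is a hypothetical Python checker (my own reading of the (d, ky) definition above, assuming the NRZI convention in which a 1 marks a transition): each track must respect the d constraint on its own, while the clocking constraint ky only requires that some track, among the y running in parallel, produces a transition often enough.

```python
def satisfies_d(track, d):
    """Per-track d constraint (NRZI): at least d zeros between any two 1s."""
    last = None
    for pos, bit in enumerate(track):
        if bit == 1:
            if last is not None and pos - last - 1 < d:
                return False
            last = pos
    return True

def satisfies_ky(tracks, ky):
    """Joint ky constraint: across all y tracks together, at most ky
    consecutive positions with no transition on any track."""
    gap = 0
    for column in zip(*tracks):
        gap = 0 if any(column) else gap + 1
        if gap > ky:
            return False
    return True

tracks = [[1, 0, 0, 1, 0, 0],
          [0, 1, 0, 0, 1, 0],
          [0, 0, 1, 0, 0, 1],
          [0, 0, 0, 1, 0, 0]]
print(all(satisfies_d(t, 1) for t in tracks))  # True: d = 1 on each track
print(satisfies_ky(tracks, 3))                 # True: clocking shared jointly
```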

    Estimating the Sizes of Binary Error-Correcting Constrained Codes

    In this paper, we study binary constrained codes that are resilient to bit-flip errors and erasures. In our first approach, we compute the sizes of constrained subcodes of linear codes. Since there exist well-known linear codes that achieve vanishing probabilities of error over the binary symmetric channel (which causes bit-flip errors) and the binary erasure channel, constrained subcodes of such linear codes are also resilient to random bit-flip errors and erasures. We employ a simple identity from the Fourier analysis of Boolean functions, which transforms the problem of counting constrained codewords of a linear code into a question about the structure of its dual code. We illustrate the utility of our method in providing explicit values or efficient algorithms for our counting problem by showing that, for different constraints, the Fourier transform of the indicator function of the constraint is computable. Our second approach is to obtain good upper bounds, using an extension of Delsarte's linear program (LP), on the largest sizes of constrained codes that can correct a fixed number of combinatorial errors or erasures. We observe that the numerical values of our LP-based upper bounds beat the generalized sphere-packing bounds of Fazeli, Vardy, and Yaakobi (2015).
    Comment: 51 pages, 2 figures, 9 tables, to be submitted to the IEEE Journal on Selected Areas in Information Theory.
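    The identity in question is a Poisson-summation-type formula: for a linear code C with dual code C⊥, and f the 0-1 indicator of the constraint, sum_{c in C} f(c) = |C| * sum_{s in C⊥} fhat(s), where fhat(s) = 2^(-n) * sum_x f(x) * (-1)^(s·x). The toy Python check below (my own example, not from the paper) verifies this on the [3, 2] even-weight code with a no-two-adjacent-1s constraint.

```python
from itertools import product

n = 3
# C: the [3, 2] even-weight code; its dual is the length-3 repetition code.
C     = [w for w in product([0, 1], repeat=n) if sum(w) % 2 == 0]
Cdual = [(0, 0, 0), (1, 1, 1)]

def f(x):
    """Indicator of the constraint: no two adjacent 1s (a toy RLL rule)."""
    return all(not (a and b) for a, b in zip(x, x[1:]))

def fhat(s):
    """Fourier coefficient fhat(s) = 2^-n * sum_x f(x) * (-1)^(s.x)."""
    return sum(f(x) * (-1) ** sum(si * xi for si, xi in zip(s, x))
               for x in product([0, 1], repeat=n)) / 2 ** n

direct   = sum(f(c) for c in C)                    # brute-force count
via_dual = len(C) * sum(fhat(s) for s in Cdual)    # Poisson summation
print(direct, via_dual)  # both equal 2
```

    The point of the identity is that the right-hand sum runs over the dual code only, so when the dual is small or the constraint's Fourier transform has a closed form, the count is obtained without enumerating C.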