A general construction of constrained parity-check codes for optical recording
This paper proposes a general and systematic code design method for efficiently combining constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code comprises two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code, both designed on the same finite-state machine (FSM). The rates of the designed codes are only a few tenths of a percent below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect the dominant error events or error event combinations of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed for designing the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Several newly designed codes are presented as examples. Simulation results for Blu-ray Disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate-2/3 code without parity, at both nominal density and high density.
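The detection mechanism the abstract refers to can be illustrated in miniature. The sketch below shows syndrome-based detection of an error event with a linear binary parity-check code; the generator polynomial g(x) = x^4 + x + 1, the data word, and the two-bit error event are illustrative assumptions, not the paper's actual construction.

```python
# A minimal sketch (not the paper's construction): syndrome-based detection of
# a dominant error event with a linear binary parity-check code. The generator
# polynomial g(x) = x^4 + x + 1 and the error event are illustrative choices.

def poly_mod2_rem(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are bit lists,
    highest-degree coefficient first."""
    rem = list(dividend)
    for i in range(len(rem) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

def append_parity(data, g):
    """Systematic CRC-style encoding: append deg(g) parity bits so that the
    syndrome (remainder modulo g) of the whole word is zero."""
    rem = poly_mod2_rem(data + [0] * (len(g) - 1), g)
    return data + rem

g = [1, 0, 0, 1, 1]                      # g(x) = x^4 + x + 1 (4 parity bits)
word = append_parity([1, 0, 1, 1, 0, 0, 1, 0], g)
assert poly_mod2_rem(word, g) == [0, 0, 0, 0]

# Emulate a dominant NRZ error event e(x) = x^a (1 + x): two adjacent bits
# flipped, e.g. by a shifted transition. It is detected because g(x) does not
# divide e(x), so the syndrome is nonzero.
word[3] ^= 1
word[4] ^= 1
print(poly_mod2_rem(word, g))            # nonzero syndrome -> event detected
```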
Entropy and power spectrum of asymmetrically DC-constrained binary sequences
The eigenstructure of bidiagonal Hessenberg-Toeplitz matrices is determined. These matrices occur as skeleton matrices of finite-state machines generating certain asymmetrically DC-constrained binary sequences that can be used for simulating pilot tracking tones in digital magnetic recording. The eigenstructure is used to calculate the Shannon upper bound on the entropy of the finite-state machine, as well as the power spectrum of the maxentropic process it generates.
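The entropy bound mentioned here follows a standard recipe: the Shannon upper bound equals log2 of the largest (Perron) eigenvalue of the skeleton matrix. A minimal sketch below applies this recipe to a toy running-digital-sum FSM; the matrix size and structure are assumptions for illustration, not the paper's bidiagonal Hessenberg-Toeplitz matrices.

```python
# A minimal sketch, assuming a toy charge-constrained FSM whose state is the
# running digital sum (RDS) limited to N values; the skeleton matrix below is
# illustrative, not the bidiagonal Hessenberg-Toeplitz matrix of the paper.
import numpy as np

N = 5                                    # number of RDS states (assumed)
A = np.zeros((N, N))                     # skeleton (adjacency) matrix
for s in range(N):
    if s + 1 < N:
        A[s, s + 1] = 1                  # emit +1: RDS increases
    if s - 1 >= 0:
        A[s, s - 1] = 1                  # emit -1: RDS decreases

lam = max(abs(np.linalg.eigvals(A)))     # Perron (largest) eigenvalue
print(f"Shannon upper bound: log2(lambda) = {np.log2(lam):.4f} bit/symbol")
```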
Error propagation assessment of enumerative coding schemes
Enumerative coding is an attractive algorithmic procedure for translating long source words into codewords and vice versa. The use of long codewords makes it possible to approach a code rate arbitrarily close to the Shannon noiseless capacity of the constrained channel. Enumerative encoding is, however, prone to massive error propagation, as a single bit error can ruin an entire decoded word. This contribution evaluates the effects of error propagation in the enumerative coding of runlength-limited sequences.
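A minimal sketch of the mechanism under discussion, using Cover-style enumerative coding of d = 1 runlength-limited words (no two adjacent ones); the block length and source index are arbitrary assumptions. The final lines show the propagation effect: one flipped channel bit shifts the decoded index by the full enumerative weight of its position.

```python
# A minimal sketch of enumerative coding for d = 1 runlength-limited words
# (no two adjacent ones). The parameters below are illustrative.
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, prev):
    """Number of valid length-n suffixes given the previous bit."""
    if n == 0:
        return 1
    total = count(n - 1, 0)              # a 0 is always allowed
    if prev == 0:
        total += count(n - 1, 1)         # a 1 only after a 0
    return total

def encode(index, n):
    """Map index in [0, count(n, 0)) to the index-th valid word."""
    word, prev = [], 0
    for i in range(n):
        zeros = count(n - 1 - i, 0)      # words that continue with a 0
        if prev == 0 and index >= zeros:
            word.append(1)
            index -= zeros
            prev = 1
        else:
            word.append(0)
            prev = 0
    return word

def decode(word):
    """Inverse mapping: sum the enumerative weights of the 1-positions."""
    return sum(count(len(word) - 1 - i, 0) for i, b in enumerate(word) if b)

n = 30
word = encode(123456, n)
assert decode(word) == 123456

# Error propagation: a single flipped channel bit changes the decoded index
# by the weight of that position, ruining the entire decoded source word.
word[3] ^= 1
print(decode(word) - 123456)             # off by count(26, 0) = 317811
```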
Effects of finite-precision arithmetic in enumerative coding
The storage requirements of conventional enumerative coding schemes can be reduced by using floating-point arithmetic instead of the conventional fixed-point operations. The new enumeration scheme incurs a small coding loss; a simple relationship between storage requirements and coding loss is derived.
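A minimal sketch of the idea, assuming the d = 1 runlength-limited counting recursion N(k) = N(k-1) + N(k-2) rather than the paper's exact scheme: each weight is truncated to an m-bit mantissa, so it can be stored as a mantissa plus an exponent, and the resulting per-symbol coding loss is measured against the exact counts. The mantissa size m and block length n are illustrative assumptions.

```python
# A minimal sketch (assumed parameters, not the paper's exact scheme): store
# the enumerative weights of d = 1 runlength-limited words with an m-bit
# mantissa. Truncating after each addition keeps every stored weight <= the
# sum in the recursion, which preserves decodability at a small coding loss.
from math import log2

def truncate(x, m):
    """Keep only the m most significant bits of integer x (round down), so
    the weight fits in an m-bit mantissa plus an exponent."""
    if x < (1 << m):
        return x
    shift = x.bit_length() - m
    return (x >> shift) << shift

def weights(n, m=None):
    """Weights N(k) = N(k-1) + N(k-2) for d = 1 sequences, optionally
    truncated to an m-bit mantissa after every addition."""
    N = [1, 2]                           # N(0) = 1, N(1) = 2
    for _ in range(2, n + 1):
        w = N[-1] + N[-2]
        N.append(truncate(w, m) if m is not None else w)
    return N

n, m = 64, 8                             # illustrative block length / mantissa
exact, approx = weights(n), weights(n, m)
loss = (log2(exact[n]) - log2(approx[n])) / n
print(f"coding loss ~ {loss:.2e} bit/symbol with an {m}-bit mantissa")
```

Increasing m shrinks the loss while growing the stored-weight table linearly in m, which is the storage/loss trade-off the abstract refers to.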