
    An Iteratively Decodable Tensor Product Code with Application to Data Storage

    The error-pattern correcting code (EPCC) can be constructed to provide a syndrome decoding table targeting the dominant error events of an inter-symbol interference channel at the output of the Viterbi detector. For the syndrome table to be manageable in size and the list of possible error events to remain reasonably short, the EPCC codeword length must be short enough. However, the rate of such a short code is too low for hard drive applications. To accommodate the required large redundancy, it is possible to record only a highly compressed function of the parity bits of EPCC's tensor product with a symbol-correcting code. In this paper, we show that the proposed tensor error-pattern correcting code (T-EPCC) is linear-time encodable and also devise a low-complexity soft iterative decoding algorithm for EPCC's tensor product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that T-EPCC-qLDPC achieves nearly the same performance as single-level qLDPC with a 1/2 KB sector at a 50% reduction in decoding complexity. Moreover, 1 KB T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same decoder complexity.
    Comment: Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor Product Code with Application to Data Storage"
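
    The key structural idea, recording only a compressed function of the per-block parity rather than the parity itself, can be illustrated with a toy sketch. This is a minimal illustration and not the paper's actual T-EPCC construction: a (7,4) Hamming parity-check matrix stands in for the short inner code (the paper uses an EPCC), and a single XOR parity symbol stands in for the outer symbol-correcting code (the paper uses a q-ary LDPC code); the helper names such as `inner_syndrome` are hypothetical.

```python
import numpy as np

# Stand-in for the short inner code: parity-check matrix of the (7,4) Hamming code.
# In T-EPCC the inner code would be an error-pattern correcting code (EPCC).
H_INNER = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def inner_syndrome(block):
    """Syndrome of one short block w.r.t. the inner parity-check matrix (GF(2))."""
    return (H_INNER @ block) % 2

def encode_sector(blocks):
    """Toy tensor-product-style encoding: instead of recording every block's
    3-bit syndrome, record only an outer parity symbol computed across the
    syndromes (a plain XOR here; a real system would use RS or q-ary LDPC)."""
    syndromes = np.array([inner_syndrome(b) for b in blocks])   # shape (B, 3)
    outer_parity = np.bitwise_xor.reduce(syndromes, axis=0)     # one 3-bit symbol
    return outer_parity

rng = np.random.default_rng(0)
blocks = rng.integers(0, 2, size=(64, 7), dtype=np.uint8)   # 64 short blocks per sector
parity_symbol = encode_sector(blocks)
print("stored overhead:", parity_symbol.size, "bits instead of", 64 * 3)
```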

    Analysis and Design of Tuned Turbo Codes

    It has been widely observed that there exists a fundamental trade-off between the minimum (Hamming) distance properties and the iterative decoding convergence behavior of turbo-like codes. While capacity-achieving code ensembles are typically asymptotically bad, in the sense that their minimum distance does not grow linearly with block length and they therefore exhibit an error floor at moderate-to-high signal-to-noise ratios, asymptotically good codes usually converge further away from channel capacity. In this paper, we introduce the concept of tuned turbo codes, a family of asymptotically good hybrid concatenated code ensembles in which asymptotic minimum distance growth rates, convergence thresholds, and code rates can be traded off using two tuning parameters, λ and μ. Decreasing λ reduces the asymptotic minimum distance growth rate in exchange for improved iterative decoding convergence behavior, while increasing λ raises the asymptotic minimum distance growth rate at the expense of worse convergence behavior, so the code performance can be tuned to fit the desired application. Decreasing μ achieves a similar tuning behavior for higher-rate code ensembles.
    Comment: Accepted for publication in IEEE Transactions on Information Theory

    Advances in Modeling and Signal Processing for Bit-Patterned Magnetic Recording Channels with Written-In Errors

    In the past, perpendicular magnetic recording on continuous media has served as the storage mechanism for the hard-disk drive (HDD) industry, allowing for growth in areal densities approaching 0.5 Tb/in². Under the current system design, further increases are limited by the superparamagnetic effect, where the medium's thermal energy destabilizes the individual bit domains used for storage. To provide for future growth in magnetic recording for disk drives, a number of technology shifts have been proposed and are currently undergoing considerable research. One promising option involves switching to a discrete medium in the form of individual bit islands, termed bit-patterned magnetic recording (BPMR).

    When switching from a continuous to a discrete medium, the problems encountered become substantial for every aspect of hard-disk drive design. In this dissertation, the complications in modeling and signal processing for bit-patterned magnetic recording are investigated, where the write and read processes along with the channel characteristics present considerable challenges. For a target areal density of 4 Tb/in², the storage process is hindered by media noise, two-dimensional (2D) intersymbol interference (ISI), electronics noise, and written-in errors introduced during the write process. Thus there is a strong possibility that BPMR may prove intractable as a future HDD technology at high areal densities, because the combined negative effects of the many error sources produce an environment where current signal processing techniques cannot accurately recover the stored data. The purpose here is to exploit advanced methods of detection and error correction to show that data can be effectively recovered from a BPMR channel in the presence of multiple error sources at high areal densities.

    First, a practical model for the readback response of an individual island is established that is capable of representing its 2D nature with a Gaussian pulse. Various characteristics of the readback pulse are shown to emerge as it is subjected to the degradation of 2D media noise. The writing of the bits within a track is also investigated, with an emphasis on the write process's ability to inject written-in errors into the data stream, resulting both from a loss of synchronization of the write clock and from the interaction of the local-scale magnetic fields under the influence of the applied write field.

    To facilitate data recovery in the presence of BPMR's major degradations, various detection and error-correction methods are utilized. For single-track equalization of the channel output, noise prediction is incorporated to assist detection with increased levels of media noise. With large detrimental amounts of 2D ISI and media noise present in the channel at high areal densities, a 2D approach known as multi-track detection is investigated, where multiple tracks are sensed by the read heads and then used to extract information on the target track. For BPMR, the output of the detector still possesses the uncorrected written-in errors. Powerful error-correction codes based on finite geometries are employed to help recover the original data stream. Increased error correction is sought by utilizing two-fold EG codes in combination with a form of automorphism decoding known as auto-diversity. Modifications to the parity-check matrices of the error-correction codes are also investigated with the aim of making the belief-propagation decoding algorithms more practical. Under the proposed techniques, it is shown that effective data recovery is possible at an areal density of 4 Tb/in² in the presence of all significant error sources except for insertions and deletions. Data recovery from the BPMR channel with insertions and deletions remains an open problem.
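
    The island readback model described above (a 2D Gaussian pulse per island, with the readback signal formed by 2D intersymbol interference from neighbouring islands plus electronics noise) can be sketched as follows. This is a minimal sketch under assumed parameters: the pulse widths `pw50_x` and `pw50_y`, the 3x3 ISI span, the SNR definition, and the bit pitch are placeholders, not values from the dissertation, and media noise and written-in errors are omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_island_response(span=3, pw50_x=1.0, pw50_y=1.0, bit_pitch=1.0):
    """2D Gaussian approximation of a single island's readback pulse,
    sampled at the island positions of a (span x span) neighbourhood."""
    c = 1.0 / (4.0 * np.log(2.0))            # pulse falls to 0.5 at +/- PW50/2
    k = np.arange(span) - span // 2
    x, y = np.meshgrid(k * bit_pitch, k * bit_pitch)
    return np.exp(-(x**2 / (c * pw50_x**2) + y**2 / (c * pw50_y**2)))

def readback(bits, snr_db=20.0, rng=None):
    """Readback samples: 2D convolution of +/-1 island magnetisations with the
    island response (2D ISI), plus additive white Gaussian electronics noise.
    SNR is defined here, for simplicity, relative to the mean-square pulse value."""
    rng = np.random.default_rng() if rng is None else rng
    h = gaussian_island_response()
    signal = convolve2d(2 * bits - 1, h, mode="same", boundary="fill")
    noise_std = np.sqrt(np.mean(h**2) / (10 ** (snr_db / 10.0)))
    return signal + noise_std * rng.standard_normal(signal.shape)

bits = np.random.default_rng(1).integers(0, 2, size=(8, 64))   # 8 tracks x 64 islands
samples = readback(bits)
```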

    Signal Processing for Bit-Patterned Media Recording

    Ph.D. (Doctor of Philosophy)

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum-likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Directions Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of lengths of more than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, no error floor is observed for LP decoding, which is not the case for BP. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message-passing with a particularly simple schedule.
    Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
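
    A minimal sketch of the ADMM decomposition described above is given below, assuming the standard scaled-dual ADMM form: per-check replica variables are updated by projecting onto the parity polytope, and here that step is replaced by a simple approximate projection (clip to the unit hypercube, then project onto the most violated odd-set facet), not the exact "two-slice" projection from the paper. The parity-check matrix, penalty `rho`, iteration count, and LLR values are illustrative only.

```python
import numpy as np

def project_parity_polytope_approx(v):
    """Approximate Euclidean projection onto the parity polytope
    conv{x in {0,1}^d : sum(x) even}: clip to [0,1]^d, then, if the
    most-violated odd-set facet inequality is broken, project onto it."""
    u = np.clip(v, 0.0, 1.0)
    s = u > 0.5
    if s.sum() % 2 == 0:                     # make the candidate set odd-sized
        i = np.argmin(np.abs(u - 0.5))
        s[i] = ~s[i]
    a = np.where(s, 1.0, -1.0)               # facet: a.u <= |S| - 1
    excess = a @ u - (s.sum() - 1)
    if excess > 0:
        u = np.clip(u - (excess / a.size) * a, 0.0, 1.0)
    return u

def admm_lp_decode(H, llr, rho=1.0, iters=200):
    """ADMM LP decoding sketch: minimize llr^T x subject to every check's
    sub-vector lying in the parity polytope.  H is a binary parity-check matrix."""
    m, n = H.shape
    checks = [np.flatnonzero(H[j]) for j in range(m)]
    deg = H.sum(axis=0).astype(float)                  # variable-node degrees
    z = [np.full(len(c), 0.5) for c in checks]         # per-check replicas
    u = [np.zeros(len(c)) for c in checks]             # scaled dual variables
    x = np.full(n, 0.5)
    for _ in range(iters):
        # x-update: average replica messages, shift by the channel cost, clip to [0,1]
        acc = np.zeros(n)
        for c, zj, uj in zip(checks, z, u):
            acc[c] += zj - uj
        x = np.clip((acc - llr / rho) / np.maximum(deg, 1.0), 0.0, 1.0)
        # z- and u-updates: projection onto the parity polytope, per check
        for j, c in enumerate(checks):
            z[j] = project_parity_polytope_approx(x[c] + u[j])
            u[j] += x[c] - z[j]
    return (x > 0.5).astype(int)

# Toy example: (7,4) Hamming code, all-zero codeword sent over an AWGN channel.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
rng = np.random.default_rng(0)
llr = 2.0 + 0.8 * rng.standard_normal(7)   # LLRs consistent with the all-zero word
print(admm_lp_decode(H, llr))
```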

    ADVANCED SIGNAL PROCESSING FOR MAGNETIC RECORDING ON PERPENDICULARLY MAGNETIZED MEDIA

    In magnetic recording channels (MRCs), the readback signal is corrupted by many kinds of impairments, such as electronic noise, media noise, intersymbol interference (ISI), inter-track interference (ITI), and different types of erasures. The growth in demand for information storage leads to the continuing pursuit of higher recording density, which magnifies the impact of noise contamination and makes the recovery of the user data from magnetic media more challenging. In this dissertation, we develop advanced signal processing techniques to mitigate these impairments in MRCs.

    We focus on magnetic recording on perpendicularly magnetized media, from state-of-the-art continuous media to bit-patterned media, which is a possible choice for the next generation of products. We propose novel techniques for soft-input soft-output channel detection, soft iterative decoding of low-density parity-check (LDPC) codes, and LDPC code designs for MRCs.

    First, we apply the optimal subblock-by-subblock detector (OBBD) to nonbinary LDPC coded perpendicular magnetic recording channels (PMRCs) and derive a symbol-based detector to perform turbo equalization exactly. Second, we propose improved belief-propagation (BP) decoders for both binary and nonbinary LDPC coded PMRCs, which provide significant gains over the standard BP decoder. Third, we introduce novel LDPC code design techniques to construct LDPC codes with fewer short cycles; performance improvement is achieved by applying the new codes to PMRCs. Fourth, we carry out a substantial investigation of Reed-Solomon (RS) plus LDPC coded PMRCs. Finally, we continue our research on bit-patterned magnetic recording (BPMR) channels at extremely high recording densities. A multi-track detection technique is proposed to mitigate the severe ITI in BPMR channels. Multi-track detection with both joint-track and two-dimensional (2D) equalization provides significant performance improvement compared to conventional equalization and detection methods.
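
    One of the code-design points above, constructing LDPC codes with fewer short cycles, can be checked with a simple test on the parity-check matrix: two columns that share more than one row form a length-4 cycle in the Tanner graph. The sketch below is a generic illustration of that check, not the dissertation's construction method, and the example matrix is arbitrary.

```python
import numpy as np

def count_4_cycles(H):
    """Count length-4 cycles in the Tanner graph of a binary parity-check
    matrix: every pair of columns sharing r >= 2 rows contributes C(r, 2)."""
    overlap = H.T @ H                      # overlap[i, j] = rows shared by columns i, j
    np.fill_diagonal(overlap, 0)
    r = overlap[np.triu_indices_from(overlap, k=1)]
    return int(np.sum(r * (r - 1) // 2))

H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 1, 0, 0, 1, 1]])
print("4-cycles:", count_4_cycles(H))      # 0 would mean girth >= 6
```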

    Iterative detection and decoding for separable two-dimensional intersymbol interference


    Multitrack Detection for Magnetic Recording

    This thesis develops advanced signal processing algorithms for magnetic recording to increase areal density. The exploding demand for cloud storage is motivating a push for higher areal densities, with narrower track pitches and shorter bit lengths. The resulting increase in interference and media noise requires improvements in read channel signal processing to keep pace. This thesis proposes the multitrack pattern-dependent noise-prediction algorithm as a solution to the joint maximum-likelihood multitrack detection problem in the face of pattern-dependent autoregressive Gaussian noise. The magnetic recording read channel has numerous parameters that must be carefully tuned for best performance; these include not only the equalizer coefficients but also any parameters inside the detector. This thesis proposes two new tuning strategies: one minimizes the bit-error rate after detection, and the other minimizes the frame-error rate after error-control decoding. Furthermore, this thesis designs a neural network read channel architecture and compares its performance and complexity with those of the traditional signal processing techniques.
    Ph.D.
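
    A minimal sketch of the pattern-dependent noise-prediction idea (conditioning a linear noise predictor on the local bit pattern and fitting its taps by least squares) appears below. It assumes synthetic autoregressive noise whose level depends on the written bits; the pattern length, predictor order, and noise parameters are placeholders rather than values from the thesis, and the multitrack detector integration is omitted.

```python
import numpy as np

def fit_pattern_predictors(bits, noise, pat_len=3, order=2):
    """Fit one linear noise predictor per local bit pattern by least squares.
    Predicts noise[k] from the previous `order` noise samples, conditioned on
    the bit pattern bits[k - pat_len + 1 : k + 1]."""
    samples = {}
    for k in range(max(pat_len - 1, order), len(bits)):
        pat = tuple(bits[k - pat_len + 1:k + 1])
        row = noise[k - order:k][::-1]                    # most recent sample first
        samples.setdefault(pat, ([], []))
        samples[pat][0].append(row)
        samples[pat][1].append(noise[k])
    taps = {}
    for pat, (A, b) in samples.items():
        A, b = np.asarray(A), np.asarray(b)
        taps[pat], *_ = np.linalg.lstsq(A, b, rcond=None)  # per-pattern predictor taps
    return taps

# Synthetic example: AR(1) noise whose drive level depends on the written bit.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=20000)
w = 0.1 * (1.0 + 0.5 * bits) * rng.standard_normal(bits.size)
noise = np.zeros(bits.size)
for k in range(1, bits.size):
    noise[k] = 0.6 * noise[k - 1] + w[k]                   # autoregressive noise
taps = fit_pattern_predictors(bits, noise)
print({p: np.round(t, 3) for p, t in list(taps.items())[:2]})
```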