
    An Optimal Unequal Error Protection LDPC Coded Recording System

    For efficient modulation and error-control coding, the deliberate flipping approach imposes the run-length-limited (RLL) constraint by introducing bit errors before recording. On the read side, a high coding rate limits the capability to correct these RLL-induced bit errors. In this paper, we study low-density parity-check (LDPC) coding for an RLL-constrained recording system based on an Unequal Error Protection (UEP) coding scheme design. The UEP capability of irregular LDPC codes is used to recover the flipped bits. We provide an allocation technique that confines the flipped bits to the bit positions with the most robust correction capability. In addition, we consider the signal labeling design, decreasing the number of nearest neighbors in order to strengthen the robust bits. We also apply density evolution to the proposed system to evaluate code performance, and we utilize the EXIT characteristic to reveal the decoding behavior of the recommended code distribution. Finally, the optimal distribution for the proposed system is obtained by differential evolution.

    Comment: 20 pages, 18 figures
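
    As a minimal illustration of the write-side flipping step (a sketch only; the function below and the choice of maximum run length k are assumptions, not parameters from the paper), a run of identical symbols can be broken by a deliberate bit error that the UEP LDPC decoder must later undo:

    ```python
    def impose_rll_by_flipping(bits, k=4):
        """Impose a maximum-run-length (RLL) constraint by deliberately
        flipping any bit that would extend a run of identical symbols
        beyond k. Returns the constrained sequence and the flip positions,
        which the read side must recover (here, via UEP LDPC decoding)."""
        out = list(bits)
        flipped = []
        run = 1
        for i in range(1, len(out)):
            run = run + 1 if out[i] == out[i - 1] else 1
            if run > k:
                out[i] ^= 1          # deliberate bit error breaks the run
                flipped.append(i)
                run = 1
        return out, flipped

    # A run of ten ones is broken after every k = 4 symbols.
    constrained, flips = impose_rll_by_flipping([1] * 10 + [0, 1, 0], k=4)
    print(constrained, flips)
    ```

    The allocation technique in the paper then arranges the mapping so that such flips land on the bit positions protected by the strongest LDPC variable nodes.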

    Dynamic Data Encoding for Page-Oriented Memories

    This dissertation presents a key portion of the system architecture for a high-performance page-oriented memory. The focus of this research is the development of new dynamic encoding algorithms that provide high data reliability with code density higher than in conventional static modulation schemes. It also presents an intelligent read/write head architecture capable of implementing the most promising of these algorithms in real time.

    Data encoding techniques for page-oriented mass storage devices are typically conservative in order to overcome the destructive effects of inter-symbol interference and noise due to the physical characteristics of the media. Significantly more bits are therefore required in the encoded version of the data than in the original information. This penalty in code density, usually quantified as the code rate, keeps the utilization of the media relatively low, often less than 50% of the capacity of a maximally dense code. This is partially because encoding techniques are static and assume the worst case for the information surrounding the data block being encoded. In the context of page-oriented data transfers, however, it is possible to evaluate the surrounding information for each code block location and thus to apply a custom code set for each code block. Since evaluating each possible code at runtime leads to very high time complexity for encoding and decoding, we also present alternative algorithms that successfully trade time complexity for code density and compete strongly with traditional static modulation schemes. To verify that the encoding algorithms are both efficient and applicable, they were analyzed using a two-photon optical memory model, focusing on the trade-off between complexity and code density. A full enumeration of codes yielded code density as high as 83%, although its time complexity was exponential. A linear-time algorithm achieved code density of just over 54%. Finally, a novel quasi-dynamic encoding algorithm was created, which yielded 76% code density with constant time complexity.
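
    The core idea of dynamic encoding can be sketched in a few lines (a toy model: the one-dimensional no-adjacent-ones rule below stands in for the real two-photon interference model, and all names are illustrative). A static scheme must assume worst-case neighbors, while a dynamic scheme inspects the bits actually written next to the block and so admits more codewords:

    ```python
    from itertools import product

    def admissible(word, left_last_bit):
        """Toy 1D interference rule: no two adjacent 'on' pixels, checked
        across the boundary with the already-written left neighbor too."""
        prev = left_last_bit
        for b in word:
            if b == 1 and prev == 1:
                return False
            prev = b
        return True

    def code_set(n, left_last_bit):
        """All n-bit words admissible next to the given neighbor bit."""
        return [w for w in product((0, 1), repeat=n)
                if admissible(w, left_last_bit)]

    n = 6
    static_set = code_set(n, left_last_bit=1)   # worst case a static scheme must assume
    dynamic_set = code_set(n, left_last_bit=0)  # actual context seen at runtime
    print(len(static_set), len(dynamic_set))    # 13 vs 21 admissible codewords
    ```

    Evaluating the context per block enlarges the usable code set, which is exactly the kind of code-density gain the dissertation quantifies.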

    The Error-Pattern-Correcting Turbo Equalizer

    The error-pattern correcting code (EPCC) is incorporated into the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low-Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low-Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound on the BER floor of the TE-EPCC is developed for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain under various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.

    Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
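
    A small numerical sketch shows why low-Hamming-weight error events dominate (the channel taps below are assumed for illustration; the paper's bound covers a generalized two-tap channel). On a two-tap channel with taps of similar magnitude, an alternating three-bit error event has nearly the same Euclidean weight as a single bit error, so both appear at the BER floor and are natural targets for the EPCC:

    ```python
    import numpy as np

    def squared_euclidean_weight(error_event, taps):
        """Squared Euclidean weight of a bipolar input error event
        (entries +/-2) after convolution with the ISI channel taps."""
        return float(np.sum(np.convolve(error_event, taps) ** 2))

    taps = np.array([1.0, 0.9])   # assumed two-tap ISI channel, h(D) = 1 + 0.9D
    for name, e in [("single error (+2)", [2.0]),
                    ("alternating (+2,-2,+2)", [2.0, -2.0, 2.0])]:
        print(name, squared_euclidean_weight(np.array(e), taps))
    # -> 7.24 and 7.32: Hamming weight 3, yet almost the same Euclidean weight.
    ```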

    On Coding and Detection Techniques for Two-Dimensional Magnetic Recording

    The areal density growth of magnetic recording systems is fast approaching the superparamagnetic limit for conventional magnetic disks, owing to the increasing demand for high data storage capacity. Two-Dimensional Magnetic Recording (TDMR) is a new technology aimed at increasing the areal density of magnetic recording systems beyond the limit of current disk technology while using conventional disk media. However, it relies on advanced coding and signal processing techniques to achieve areal density gains. Current state-of-the-art signal processing for the TDMR channel employs iterative decoding with Low-Density Parity-Check (LDPC) codes, coupled with 2D equalisers and full 2D Maximum Likelihood (ML) detectors. The shortcoming of these algorithms is their computational complexity, especially with regard to the ML detectors, which is exponential in the number of bits involved. Robust low-complexity coding, equalisation and detection algorithms are therefore crucial for successful future deployment of the TDMR scheme. The present work is aimed at finding efficient, low-complexity coding, equalisation, detection and decoding techniques for improving the performance of the TDMR channel and of magnetic recording channels in general.

    A forward error correction (FEC) scheme of two concatenated single-parity-bit systems along track, separated by an interleaver, is presented for a channel with perpendicular magnetic recording (PMR) media. A joint detection-decoding algorithm using a constrained MAP detector for simultaneous detection and decoding of data with the single-parity-bit system is proposed. It is shown that the proposed FEC scheme with the constrained MAP detector/decoder achieves a gain of up to 3 dB over an uncoded MAP decoder for a 1D interference channel. A further gain of 1.5 dB is achieved by concatenating two interleavers with an extra parity bit when the data density along track is high. Using the single-parity-bit code both as a run-length-limited code and as an error correction code is demonstrated to simplify detection complexity and improve system performance.

    A low-complexity 2D detection technique for a TDMR system with Shingled Magnetic Recording (SMR) media is also proposed. The technique concatenates a 2D MAP detector along track with a regular MAP detector across tracks, reducing the complexity order of full 2D detection from exponential to linear. It is shown that this technique can improve track density at limited complexity.

    Two methods of FEC for the TDMR channel using two single-parity-bit systems are discussed: one uses two concatenated single parity bits along track only, separated by a Dithered Relative Prime (DRP) interleaver, and the other uses single parity bits in both directions without the DRP interleaver. Building on this FEC coding, a 2D multi-track MAP joint detector-decoder is proposed for simultaneous detection and decoding of the coded single-parity-bit data. A gain of up to 5 dB is achieved using the FEC scheme with the 2D multi-track MAP joint detector-decoder over an uncoded 2D multi-track MAP detector on the TDMR channel. At high density in both directions, it is shown that FEC coding using two concatenated single parity bits along track separated by a DRP interleaver performs better than using single parity bits in both directions without the DRP interleaver.
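
    A toy version of the along-track concatenation makes the structure concrete (a sketch under simplifying assumptions: a random permutation stands in for the DRP interleaver, and the block sizes are arbitrary):

    ```python
    import random

    def add_row_parity(block):
        """Append one even-parity bit to each along-track row
        (the single-parity-bit code used at both stages)."""
        return [row + [sum(row) % 2] for row in block]

    random.seed(0)
    data = [[random.randint(0, 1) for _ in range(8)] for _ in range(4)]

    inner = add_row_parity(data)                        # first single-parity stage
    flat = [b for row in inner for b in row]
    perm = random.sample(range(len(flat)), len(flat))   # stand-in for the DRP interleaver
    shuffled = [flat[p] for p in perm]
    outer = add_row_parity([shuffled[i:i + 6]           # second single-parity stage
                            for i in range(0, len(shuffled), 6)])

    info = sum(len(r) for r in data)
    coded = sum(len(r) for r in outer)
    print(info, "info bits ->", coded, "coded bits")    # 32 -> 42, rate ~0.76
    ```

    The constrained MAP detector in the thesis then exploits these parity bits during detection itself rather than in a separate decoding pass.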

    EQUALISATION TECHNIQUES FOR MULTI-LEVEL DIGITAL MAGNETIC RECORDING

    A large amount of research has been devoted to signal processing, medium design, head and servo-mechanism design, and coding for conventional longitudinal as well as perpendicular magnetic recording. This work presents further investigation of the signal processing and coding aspects of longitudinal and perpendicular digital magnetic recording. The work presented in this thesis is based upon numerical analysis using various simulation methods, with the simulation models implemented in C/C++. Important results based upon bit-error-rate calculations are documented in this thesis.

    This work presents a newly designed Asymmetric Decoder (AD), modified to take jitter noise into account, and shows that it outperforms classical BCJR decoders when used with Error Correction Codes (ECC). A new method of designing a Generalised Partial Response (GPR) target and its equaliser is discussed and implemented, based on maximising the ratio of the minimum squared Euclidean distance of the PR target to the noise penalty introduced by the Partial Response (PR) filter. The results show that the newly designed GPR targets consistently outperform various previously published GPR targets.

    Two complementary methods of equalisation are discussed: the industry-standard PR equalisation and a novel Soft-Feedback-Equalisation (SFE). The work on SFE, a novelty of this thesis, was motivated by the problems of inter-symbol interference (ISI) and noise colouration in PR equalisation. This work also shows that multi-level SFE-based magnetic recording with MAP/BCJR feedback and ECC has performance similar to high-density binary PR-based magnetic recording with ECC, thus documenting the benefits of multi-level magnetic recording. It is shown that 4-level PR-based magnetic recording with ECC at half the density of binary PR-based recording has similar performance and a packing density higher by a factor of 2. A novel technique of combining SFE and PR equalisation to achieve the best ISI cancellation in an iterative fashion is discussed. A consistent gain of 0.5 dB or more is achieved when this technique is investigated with Maximum Transition Run (MTR) codes. As the length of the PR target increases, the gain achieved using this novel technique consistently increases, reaching up to 1.2 dB for the EEPR4 target at a bit error rate of 10^-5.
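
    The numerator of that GPR design criterion is easy to sketch numerically (illustration only: the code searches short error events exhaustively and omits the noise-penalty denominator of the actual ratio):

    ```python
    import numpy as np
    from itertools import product

    def min_squared_distance(target, max_len=6):
        """Minimum squared Euclidean distance of a PR target over nonzero
        bipolar input error events (entries in {-2, 0, +2}) up to max_len."""
        best = float("inf")
        for length in range(1, max_len + 1):
            for e in product((-2, 0, 2), repeat=length):
                if e[0] == 0:    # canonical events start with a nonzero entry
                    continue
                best = min(best, float(np.sum(np.convolve(e, target) ** 2)))
        return best

    for name, t in [("PR4  (1 - D^2)", [1.0, 0.0, -1.0]),
                    ("EPR4 (1 + D - D^2 - D^3)", [1.0, 1.0, -1.0, -1.0])]:
        print(name, min_squared_distance(np.array(t)))   # 8.0 and 16.0
    ```

    A GPR design procedure of the kind described above would maximise this distance relative to the noise colouration introduced by the equaliser, rather than the distance alone.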

    A Study on DNA Memory Encoding Architecture

    The amount of raw generated data is growing at an exponential rate due to the greatly increasing number of sensors in electronic systems. While the majority of this data is never used, it is often kept for cases such as failure analysis. As such, archival memory storage, where data can be stored at extremely high density at the cost of read latency, is becoming more popular than ever for long-term storage. In biological organisms, Deoxyribonucleic Acid (DNA) is used as a method of storing information in terms of simple building blocks, allowing larger and more complicated structures at a density much higher than can currently be realized in modern memory devices. Given the ability of organisms to store this information in a set of four bases for extremely long periods of time with limited degradation, DNA presents itself as a possible way to store data in a manner similar to binary data. This work investigates the use of DNA strands as a storage regime, where system-level data is translated into an efficient encoding that minimizes base-pair errors both at the local level and at the chain level. An encoding method using a Bose-Chaudhuri-Hocquenghem (BCH) pre-coded Raptor scheme is implemented in conjunction with an 8-to-6 binary-to-base translation, yielding an information density of 1.18 bits/base pair. A Field-Programmable Gate Array (FPGA) is then used in conjunction with a soft-core processor to verify address and key translation abilities, providing strong support that a strand-pool DNA model is reasonable for archival storage.
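
    A sketch of the 8-to-6 translation stage (the homopolymer rule and codebook construction below are assumptions for illustration; the dissertation's exact constraints and its BCH/Raptor layers are omitted):

    ```python
    from itertools import product

    BASES = "ACGT"

    def valid(strand, max_run=2):
        """Reject homopolymer runs longer than max_run, a common
        DNA-storage constraint assumed here for illustration."""
        run = 1
        for a, b in zip(strand, strand[1:]):
            run = run + 1 if a == b else 1
            if run > max_run:
                return False
        return True

    # Map each byte (8 bits) to one of 256 constraint-satisfying 6-base strands.
    codebook = [s for s in ("".join(p) for p in product(BASES, repeat=6))
                if valid(s)][:256]
    assert len(codebook) == 256

    def encode(data: bytes) -> str:
        return "".join(codebook[b] for b in data)

    print(encode(b"DNA"))   # 18 bases for 3 bytes
    ```

    Six bases per byte gives 8/6 ≈ 1.33 raw bits per base; the BCH and Raptor redundancy then brings the net figure down toward the reported 1.18 bits/base pair.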

    Towards industrial internet of things: crankshaft monitoring, traceability and tracking using RFID

    The large number of requirements and opportunities for automatic identification in manufacturing domains such as automotive and electronics has accelerated the demand for item-level tracking using radio-frequency identification (RFID) technology. End-users are interested in implementing automatic identification systems capable of ensuring a full component process history, traceability and tracking, preventing costly downtime to rectify processing defects and product recalls. The research outlined in this paper investigates the feasibility of implementing an RFID system for the manufacturing and assembly of crankshafts. The proposed solution involves attaching bolts with embedded RFID functionality to the crankshafts and fitting a reader antenna to an overhead gantry that spans the production line, reading and writing production data to the tags. The manufacturing, assembly and service data captured through the RFID tags and stored on a local server could further be integrated with higher-level business applications, facilitating seamless integration within the factory.
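
    As a sketch of the kind of record such a system might write to a tag (all field names below are hypothetical, not taken from the paper):

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class CrankshaftTagRecord:
        """Hypothetical layout of the production data written to a bolt's
        embedded RFID tag as it passes the gantry reader; the field names
        are illustrative, not taken from the paper."""
        part_id: str       # item-level identifier for the crankshaft
        station: str       # production-line station that wrote the record
        operation: str     # process step performed at that station
        passed: bool       # outcome, enabling process-history traceability
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = CrankshaftTagRecord("CS-000123", "grinding-03", "journal grind", True)
    print(record)
    ```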