
    Bit-Stuffing Algorithms and Analysis for Run-Length Constrained Channels in Two and Three Dimensions


    Coding and Probabilistic Inference Methods for Data-Dependent Two-Dimensional Channels

    Recent advances in magnetic recording systems, optical recording devices, and flash memory drives necessitate the study of two-dimensional (2-D) coding techniques for reliable storage and retrieval of information. Most channels in such systems introduce errors in response to certain data patterns, and messages containing these patterns are more prone to errors than others. For example, in a single-level cell flash memory channel, inter-cell interference (ICI) is at its maximum when 101 patterns are programmed over adjacent cells in either the horizontal or vertical direction. As another example, in two-dimensional magnetic recording channels, 2-D isolated-bits patterns are shown empirically to be the dominant error event, and during the read-back process inter-symbol interference (ISI) and inter-track interference (ITI) arise when these patterns are recorded on the magnetic medium.

    Shannon, in his seminal work "A Mathematical Theory of Communication," presented two techniques for reliable transmission of messages over noisy channels: error correction coding and constrained coding. In the first method, messages are protected via an error correction code (ECC) from random errors that are independent of the input data. The theory of ECCs is well studied, and efficient code construction methods have been developed for simple binary channels, additive white Gaussian noise (AWGN) channels, and partial response channels. Constrained coding, on the other hand, reduces the likelihood of corruption by removing problematic patterns before transmission over data-dependent channels. Prominent examples include the family of binary one-dimensional (1-D) and 2-D $(d,k)$-run-length-limited (RLL) constraints, which improve resilience to ISI and aid timing recovery and synchronization in bandwidth-limited partial response channels; here d and k denote the minimum and maximum number of admissible zeros between two successive ones in any direction of the array (see the first sketch after this abstract).

    In principle, the ultimate coding approach for such data-dependent channels is to design a set of sufficiently distinct error correction codewords that also satisfy the channel constraints; codewords satisfying both the ECC and the channel constraints would achieve the channel capacity. In practice, however, this is difficult, and we rely on sub-optimal methods such as the forward concatenation method (standard concatenation), the reverse concatenation method (modified concatenation), and combinations of these approaches. In this dissertation, we focus on the reliable transmission of binary messages over data-dependent 2-D communication channels and address the following challenges.

    Design of a Two-Dimensional Magnetic Recording (TDMR) Detector and Decoder: TDMR achieves high areal densities by reducing the size of a bit to the scale of the magnetic grains, resulting in 2-D ISI and very high media noise. It is therefore critical to handle the media noise along with the 2-D ISI detection. In this work, we tune the Generalized Belief Propagation (GBP) algorithm to handle the media noise seen in TDMR, and we provide intuition into the nature of the hard decisions produced by the GBP algorithm.

    Investigation into Harmful Patterns for TDMR Channels: This work uses a Voronoi-based media model to study harmful patterns in multi-track shingled recording systems. Through realistic quasi-micromagnetic simulation studies, we identify 2-D data patterns that contribute to high media noise. We examine the generic Voronoi model and present an analysis of multi-track detection with constrained coded data, showing that 2-D constraints imposed on the input patterns yield an order-of-magnitude improvement in the bit error rate of TDMR systems.

    Understanding Constraint Gain for TDMR Channels: We study the performance gains of constrained codes in TDMR channels using the notion of constraint gain. We consider Voronoi-based TDMR channels with realistic grain, bit, track, and magnetic-head dimensions. Specifically, we investigate the constraint gain of the 2-D no-isolated-bits constraint over Voronoi-based TDMR channels, focusing on schemes that employ the GBP algorithm to obtain information rate estimates.

    Design of Novel Constrained Coding Methods: We present a deliberate bit flipping (DBF) coding scheme for binary 2-D channels in which specific input patterns are the significant cause of errors. The idea is to eliminate the constrained encoder and instead embed the constraint into an error correction codeword, arranged as a 2-D array, by deliberately flipping the bits that violate the constraint (see the second sketch after this abstract). The DBF method relies on the error correction capability of the underlying code, which must correct both the deliberate errors and the channel errors; it is therefore crucial to flip the minimum number of bits so as not to overburden the error correction decoder. We devise a constrained combinatorial formulation that minimizes the number of flipped bits for a given set of harmful patterns, and we use the GBP algorithm to find an approximate solution.

    Devising Reduced-Complexity Probabilistic Inference Methods: We propose a reduced-complexity GBP that propagates messages in the log-likelihood ratio (LLR) domain. The key novelties of the proposed LLR-GBP are: (i) reduced fixed-point precision for messages instead of the computationally complex floating-point format; (ii) operations performed in the logarithm domain, eliminating the need for multiplications and divisions; and (iii) the use of message ratios, which leads to simple hard decision mechanisms.
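    To make the $(d,k)$ definition above concrete, here is a minimal Python sketch (our own illustrative code, not from the dissertation) that checks a 1-D binary sequence against the constraint. Following the abstract's definition, only the runs of zeros between two successive ones are constrained.

```python
def satisfies_rll(bits, d, k):
    """Check a (d, k)-RLL constraint: every run of zeros between two
    successive ones must have length in [d, k]."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for left, right in zip(ones, ones[1:]):
        run = right - left - 1  # zeros strictly between the two ones
        if run < d or run > k:
            return False
    return True

# A (0, 1)-RLL constraint allows at most one zero between successive ones:
print(satisfies_rll([1, 0, 1, 0, 0, 1], d=0, k=1))  # False: run of two zeros
print(satisfies_rll([1, 0, 1, 1, 0, 1], d=0, k=1))  # True
```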
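    The DBF idea also lends itself to a small sketch. The greedy single pass below is an illustrative simplification of our own: the dissertation formulates minimum-flip selection as a constrained combinatorial problem solved approximately with GBP, whereas this code simply flips every 2-D isolated bit (a cell whose value differs from all of its in-bounds neighbors) as it scans the array.

```python
import numpy as np

def flip_isolated_bits(arr):
    """Greedy deliberate-bit-flipping sketch: flip any cell that differs
    from all four in-bounds neighbors (a 2-D isolated bit). Illustrative
    only; it does not minimize the number of flips."""
    out = arr.copy()
    rows, cols = out.shape
    flips = 0
    for r in range(rows):
        for c in range(cols):
            nbrs = [out[rr, cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            if nbrs and all(n != out[r, c] for n in nbrs):
                out[r, c] ^= 1  # deliberate flip; the ECC must correct it later
                flips += 1
    return out, flips

codeword = np.array([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]])
clean, nflips = flip_isolated_bits(codeword)
print(clean, nflips)  # the isolated centre 1 is flipped; nflips == 1
```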

    Coding for Two Dimensional Constrained Fields


    High throughput image compression and decompression on GPUs

    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for a high-bit-rate scenario, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the gains in throughput are shown to be limited. Instead, two independent alterations not compliant with the standard are proposed, which (1) give up the concept of intra-bit-plane truncation points and process each bit plane in a single pass (single-pass mode), and (2) introduce a true raw-coding mode that is fully parallelizable and does not require any context modeling (a short illustrative sketch follows this abstract).

    Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged over a set of test sequences yields competitive compression efficiency. A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode), at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB.

    Furthermore, a parallel algorithm is presented that determines the optimal code-block bit-stream truncation points (given an available bit rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back to host memory to a minimum. A theoretical runtime model is also formulated that allows, based on benchmarking results on one GPU, prediction of the runtime of a kernel on another GPU.

    Lastly, the first ever JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness already in the call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
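    The raw-coding mode described above is easy to picture with a short sketch. The code below is an assumption-laden illustration of our own, not the actual JPEG 2000 or JPEG XS bitstream layout: it emits the magnitude bit planes of a code block most-significant first, with no context modeling, so every sample within a plane can be processed independently, which is the property that maps well onto a GPU.

```python
import numpy as np

def raw_code_block(coeffs, num_planes):
    """Raw-coding sketch: pack each magnitude bit plane of a code block,
    most-significant plane first, with no context modeling. Sign bits are
    omitted for brevity; the layout is illustrative only."""
    mags = np.abs(coeffs).astype(np.uint32)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(num_planes - 1, -1, -1)]
    # Every sample in a plane is independent, so this packing step is
    # trivially parallelizable across samples on a GPU.
    return [np.packbits(p.ravel()) for p in planes]

block = np.array([[3, -1], [0, 2]], dtype=np.int32)
for i, payload in enumerate(raw_code_block(block, num_planes=2)):
    print(f"plane {i}: {payload}")
```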

    Lozenge Tiling Constrained Codes

    While the field of one-dimensional constrained codes is mature, with theoretical as well as practical aspects of code and decoder design being well established, such a theoretical treatment of its two-dimensional (2D) counterpart is still unavailable. Research has been conducted on a few exemplar 2D constraints, e.g., the hard triangle model, run-length-limited constraints on the square lattice, and 2D checkerboard constraints. Beyond these results, 2D constrained systems remain largely uncharacterized mathematically, with only loose bounds on their capacities available. In this paper we present a lozenge constraint on a regular triangular lattice and derive Shannon noiseless capacity bounds. To estimate the capacity of lozenge tiling we make use of the bijection between the counting of lozenge tilings and the counting of boxed plane partitions.
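    The bijection mentioned above makes the counting concrete: lozenge tilings of a hexagon with side lengths a, b, c are in bijection with plane partitions in an a x b x c box, which MacMahon's classical product formula counts exactly. The Python sketch below applies that formula; the per-lozenge capacity normalization at the end (the hexagon holds a*b + b*c + c*a lozenges) is our own illustrative choice, not a bound taken from the paper.

```python
from fractions import Fraction
from math import log2

def boxed_plane_partitions(a, b, c):
    """MacMahon's formula: the number of plane partitions in an a x b x c
    box, which equals the number of lozenge tilings of the hexagon with
    side lengths a, b, c."""
    n = Fraction(1)
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            for k in range(1, c + 1):
                n *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(n)  # the product is always an integer

# Bits per lozenge as a rough capacity estimate (our normalization):
a = b = c = 8
count = boxed_plane_partitions(a, b, c)
print(count, log2(count) / (a * b + b * c + c * a))
```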

    Joint source and channel coding


    Signal constellation and carrier recovery technique for voice-band modems


    Study of information transfer optimization for communication satellites

    This report presents the results of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high-data-rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, a limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions, which excludes nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered based on an additive white Gaussian noise (AWGN) channel model, which is an idealization of an equalized channel. Source coding, with emphasis on video data compression, is reviewed, and the experimental facility used to test promising techniques is fully described.
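    As a small illustration of the idealized AWGN channel model on which the study's modulation analysis rests, the following sketch (illustrative only: uncoded BPSK, not the study's actual modem or coding scheme) compares a Monte Carlo bit error rate against the closed-form expression.

```python
import math
import random

def bpsk_awgn_ber(ebn0_db, nbits=200_000):
    """Monte Carlo bit error rate of uncoded BPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))  # noise std dev for unit-energy bits
    errors = 0
    for _ in range(nbits):
        tx = random.choice((-1.0, 1.0))
        rx = tx + random.gauss(0.0, sigma)
        errors += (rx < 0) != (tx < 0)
    return errors / nbits

# Compare against the closed form BER = 0.5 * erfc(sqrt(Eb/N0)):
for ebn0_db in (0, 4, 8):
    theory = 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))
    print(f"{ebn0_db} dB: simulated {bpsk_awgn_ber(ebn0_db):.5f}, "
          f"theory {theory:.5f}")
```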