262 research outputs found

    Optimal block-type-decodable encoders for constrained systems


    Modulation codes


    An Iteratively Decodable Tensor Product Code with Application to Data Storage

    The error-pattern correcting code (EPCC) can be constructed to provide a syndrome decoding table targeting the dominant error events of an inter-symbol interference channel at the output of the Viterbi detector. For the syndrome table to be manageable and the list of possible error events to remain reasonably short, the EPCC codeword length must be kept short. However, the rate of such a short code is too low for hard-drive applications. To accommodate the required large redundancy, it is possible to record only a highly compressed function of the parity bits of EPCC's tensor product with a symbol-correcting code. In this paper, we show that the proposed tensor error-pattern correcting code (T-EPCC) is linear-time encodable, and we devise a low-complexity soft iterative decoding algorithm for EPCC's tensor product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that T-EPCC-qLDPC achieves nearly the same performance as single-level qLDPC with a 1/2-KB sector at a 50% reduction in decoding complexity. Moreover, 1-KB T-EPCC-qLDPC surpasses the performance of 1/2-KB single-level qLDPC at the same decoder complexity. (Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor Product Code with Application to Data Storage.")
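    As a rough illustration of the tensor-product idea behind such codes, the sketch below uses a toy Wolf-style construction with a single-parity-check inner code and a (7,4) Hamming outer code; this is an assumed example, not the paper's actual T-EPCC. A word is a codeword when its per-block inner syndromes form an outer codeword, so only the outer code's redundancy needs to be recorded.

```python
import numpy as np

# Toy tensor-product code: inner code = length-4 single parity check (1-bit
# syndrome), outer code = (7,4) Hamming over GF(2). A length-28 word is a
# codeword iff its 7 per-block parities form a Hamming codeword, so only
# 3 redundant bits are needed instead of 7.

# Parity submatrix of a systematic (7,4) Hamming code: parities = msg @ P (mod 2).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)

def encode(info_bits):
    """Map 25 info bits to a 28-bit tensor-product codeword (7 blocks of 4)."""
    assert len(info_bits) == 25
    word = np.zeros(28, dtype=int)
    info = iter(info_bits)
    # Blocks 0..3 carry 4 info bits each; blocks 4..6 carry 3 info bits plus
    # one adjustment bit (last position of the block) that sets the block parity.
    for b in range(4):
        word[4 * b:4 * b + 4] = [next(info) for _ in range(4)]
    for b in range(4, 7):
        word[4 * b:4 * b + 3] = [next(info) for _ in range(3)]
    # Outer (Hamming) encoding of the first four block parities fixes the
    # required parities of the last three blocks.
    msg_syndromes = np.array([word[4 * b:4 * b + 4].sum() % 2 for b in range(4)])
    required = (msg_syndromes @ P) % 2
    for i, b in enumerate(range(4, 7)):
        word[4 * b + 3] = (required[i] - word[4 * b:4 * b + 3].sum()) % 2
    return word

def is_codeword(word):
    """Check that the per-block parities form a (7,4) Hamming codeword."""
    s = np.array([word[4 * b:4 * b + 4].sum() % 2 for b in range(7)])
    return np.array_equal((s[:4] @ P) % 2, s[4:])

word = encode(np.random.randint(0, 2, 25))
print(is_codeword(word))  # True
```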

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
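    For concreteness, one classic iterative procedure for compression-system design in this spirit is the generalized Lloyd algorithm sketched below; this is an assumed, generic illustration rather than the representative algorithm the article outlines.

```python
import numpy as np

# Minimal generalized Lloyd (k-means style) codebook design for a fixed-rate
# scalar quantizer: alternate an optimal-encoder step (nearest neighbor) and
# an optimal-decoder step (centroid) to reduce mean squared error.

def lloyd_design(samples, codebook_size, iters=50):
    rng = np.random.default_rng(0)
    codebook = rng.choice(samples, size=codebook_size, replace=False)
    for _ in range(iters):
        # Nearest-neighbor (optimal encoder) step.
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # Centroid (optimal decoder) step.
        for j in range(codebook_size):
            if np.any(idx == j):
                codebook[j] = samples[idx == j].mean()
    return codebook

samples = np.random.default_rng(1).normal(size=10_000)
cb = lloyd_design(samples, codebook_size=8)          # rate = 3 bits/sample
idx = np.argmin(np.abs(samples[:, None] - cb[None, :]), axis=1)
mse = np.mean((samples - cb[idx]) ** 2)
print(f"8-level quantizer MSE on N(0,1) data: {mse:.4f}")
```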

    Locally decodable source coding

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of thesis. Includes bibliographical references (p. 63-65). Source coding is accomplished via the mapping of consecutive source symbols (blocks) into code blocks of fixed or variable length. The fundamental limits of source coding introduce a tradeoff between the rate of compression and the fidelity of the recovery. However, in practical communication systems many issues such as computational complexity, memory capacity, and memory-access requirements must be considered. In conventional source coding, retrieving one coordinate of the source sequence requires accessing all of the encoded coordinates; in other words, querying all of the memory cells is necessary. We study a class of codes for which the decoder is local. We introduce locally decodable source coding (LDSC), in which the decoder need not read all of the encoded coordinates: only a few queries suffice to retrieve a given source coordinate. Both a constant number of queries and a number of queries that scales with the source block length are studied, for both lossless and lossy source coding. We show that with a constant number of queries, the rate of (almost) lossless source coding is one, meaning that no compression is possible. We also show that with a number of queries logarithmic in the block length, one can achieve the Shannon entropy rate. Moreover, we provide achievability bounds on the rate of lossy source coding with both constant and scaling numbers of queries. By Ali Makhdoumi (S.M.).
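    A toy sketch of the logarithmic-query regime (an assumed illustration on a Bernoulli source, not the thesis's construction): split the source into length-b blocks, give each block a fixed-length index into a codebook of the most probable blocks, and recover any single coordinate by reading only the index of its block, roughly b(H + eps) bits.

```python
import numpy as np
from math import ceil, log2
from itertools import product

def design_codebook(p, b, eps=0.05):
    """Keep the most probable Bernoulli(p) blocks; rate ~ H(p) + eps bits/symbol."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)
    size = 2 ** ceil(b * (h + eps))
    blocks = sorted(product([0, 1], repeat=b),
                    key=lambda blk: -(p ** sum(blk)) * ((1 - p) ** (b - sum(blk))))
    kept = blocks[:size]
    return {blk: i for i, blk in enumerate(kept)}, kept

def encode(x, codebook, b):
    """Fixed-length per-block indices; atypical blocks map to index 0 (erasure)."""
    return [codebook.get(tuple(x[i:i + b]), 0) for i in range(0, len(x), b)]

def decode_coordinate(indices, kept, b, i):
    """Recover x[i] by reading only the single index of the block containing i."""
    return kept[indices[i // b]][i % b]

p, b = 0.1, 16
codebook, kept = design_codebook(p, b)
x = (np.random.default_rng(2).random(4 * b) < p).astype(int)
idx = encode(x, codebook, b)
# The two values agree whenever the block of x[5] landed in the codebook,
# which happens with probability approaching one as b grows.
print(decode_coordinate(idx, kept, b, 5), x[5])
```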

    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
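    To make the embedded-description idea concrete, the following is a minimal two-stage successive-refinement sketch (an assumed toy, not the design algorithms introduced in the paper): decoding only the first-stage index yields a coarse reproduction, and the second-stage index refines the residual, so the decoder can stop after either stage.

```python
import numpy as np

def design(samples, k1=4, k2=4, iters=30):
    """Greedy stage-by-stage codebook design, k-means on each stage."""
    def kmeans(x, k):
        cb = np.quantile(x, np.linspace(0.05, 0.95, k))
        for _ in range(iters):
            idx = np.argmin(np.abs(x[:, None] - cb[None, :]), axis=1)
            cb = np.array([x[idx == j].mean() if np.any(idx == j) else cb[j]
                           for j in range(k)])
        return cb
    cb1 = kmeans(samples, k1)
    residual = samples - cb1[np.argmin(np.abs(samples[:, None] - cb1[None, :]), axis=1)]
    cb2 = kmeans(residual, k2)
    return cb1, cb2

def encode(x, cb1, cb2):
    i1 = np.argmin(np.abs(x[:, None] - cb1[None, :]), axis=1)
    i2 = np.argmin(np.abs((x - cb1[i1])[:, None] - cb2[None, :]), axis=1)
    return i1, i2           # embedded description: i1 first, then i2

samples = np.random.default_rng(3).normal(size=20_000)
cb1, cb2 = design(samples)
i1, i2 = encode(samples, cb1, cb2)
coarse, fine = cb1[i1], cb1[i1] + cb2[i2]
print(np.mean((samples - coarse) ** 2), np.mean((samples - fine) ** 2))
```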

    On an HARQ-based Coordinated Multi-point Network using Dynamic Point Selection

    This paper investigates the performance of coordinated multi-point (CoMP) networks in the presence of hybrid automatic repeat request (HARQ) feedback. From an information-theoretic point of view, the throughput and the outage probability of different HARQ protocols are studied for slow-fading channels. The results are compared with those obtained with repetition codes and basic HARQ, or when no channel state information is available at the base stations. The analytical and numerical results demonstrate the efficiency of the CoMP-HARQ techniques under different conditions.
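    A hedged Monte Carlo sketch of the kind of comparison involved (assuming a single link with independent Rayleigh block fading per round, not the paper's CoMP setup): outage probability after M HARQ rounds under repetition coding with Chase combining, where SNRs accumulate, versus incremental redundancy, where mutual information accumulates.

```python
import numpy as np

def outage(snr_db, rate, rounds, trials=200_000, rng=np.random.default_rng(4)):
    """Empirical outage probabilities after `rounds` HARQ transmissions."""
    snr = 10 ** (snr_db / 10)
    # Independent exponential (Rayleigh-power) gains, one per (re)transmission.
    g = rng.exponential(scale=1.0, size=(trials, rounds)) * snr
    # Repetition coding / Chase combining: received SNRs add up.
    out_rep = np.mean(np.log2(1 + g.sum(axis=1)) < rate)
    # Incremental redundancy: accumulated mutual information adds up.
    out_ir = np.mean(np.log2(1 + g).sum(axis=1) < rate)
    return out_rep, out_ir

for m in (1, 2, 3):
    rep, ir = outage(snr_db=5.0, rate=2.0, rounds=m)
    print(f"M={m}: repetition outage {rep:.4f}, IR outage {ir:.4f}")
```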

    IST-2000-30148 I-METRA: D3.1 Design, analysis and selection of suitable algorithms

    This deliverable contains a description of the space-time coding algorithms to be simulated within the I-METRA project. Different families of algorithms have been selected and described in this document with the objective of evaluating their performance. One of the main objectives of the I-METRA project is to influence the current standardisation efforts related to the introduction of Multiple Input Multiple Output (MIMO) configurations into the High Speed Downlink and Uplink Packet Access concepts of UMTS (HSDPA and HSUPA). This required a review of the current specifications for these systems and an analysis of the impact of the potential incorporation of the selected MIMO schemes. (Preprint)
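    As one concrete example of the family of space-time coding schemes such evaluations typically include (an assumed illustration; the deliverable's selected algorithms are not reproduced here), the sketch below simulates the classic Alamouti 2x1 block code with QPSK over quasi-static Rayleigh fading.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs, snr_db = 100_000, 10.0
noise_var = 10 ** (-snr_db / 10)   # nominal per-antenna SNR with unit-energy symbols

# QPSK symbols, sent as Alamouti pairs: (s1, s2) then (-s2*, s1*).
bits = rng.integers(0, 2, size=(n_pairs, 4))
s = ((1 - 2 * bits[:, 0::2]) + 1j * (1 - 2 * bits[:, 1::2])) / np.sqrt(2)
s1, s2 = s[:, 0], s[:, 1]

# Quasi-static Rayleigh gains from the two transmit antennas to one receive antenna.
h1 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) / np.sqrt(2)
h2 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) / np.sqrt(2)
n1 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) * np.sqrt(noise_var / 2)
n2 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) * np.sqrt(noise_var / 2)

# Two received symbol periods, then the standard linear Alamouti combiner.
r1 = h1 * s1 + h2 * s2 + n1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2
y1 = np.conj(h1) * r1 + h2 * np.conj(r2)   # = (|h1|^2 + |h2|^2) s1 + noise
y2 = np.conj(h2) * r1 - h1 * np.conj(r2)   # = (|h1|^2 + |h2|^2) s2 + noise

# Per-component sign detection recovers the QPSK bits (bit 0 -> +1, bit 1 -> -1).
hat = np.column_stack([y1.real < 0, y1.imag < 0, y2.real < 0, y2.imag < 0]).astype(int)
print(f"Alamouti 2x1 QPSK BER at {snr_db} dB: {np.mean(hat != bits):.4f}")
```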