
    Design and Analysis of LT Codes with Decreasing Ripple Size

    In this paper we propose a new design of LT codes, which decreases the amount of necessary overhead in comparison to existing designs. The design focuses on a parameter of the LT decoding process called the ripple size. This parameter was also a key element in the design proposed in the original work by Luby. Specifically, Luby argued that an LT code should provide a constant ripple size during decoding. In this work we show that the ripple size should instead decrease during decoding in order to reduce the necessary overhead. Initially we motivate this claim by analytical results related to the redundancy within an LT code. We then propose a new design procedure, which can provide any desired achievable decreasing ripple size. The new design procedure is evaluated and compared to the current state of the art through simulations. This reveals a significant increase in performance with respect to both average overhead and error probability at any fixed overhead.
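
    The ripple is the set of coded symbols whose remaining degree is one during peeling decoding. As a minimal illustration of how the ripple size evolves (a sketch with an arbitrary degree sampler, illustrative symbol sizes, and an assumed 30% overhead, not the authors' design), the following Python snippet runs a standard LT peeling decoder and records the ripple size at every step:

    ```python
    import random

    def toy_degree(k):
        # Arbitrary illustrative degree sampler (not a tuned distribution).
        return min(k, random.choice([1, 2, 2, 3, 4, 5, 8, 16]))

    def lt_encode(source, n_coded):
        """Return n_coded LT symbols as [neighbor_set, xor_of_those_source_symbols]."""
        k = len(source)
        coded = []
        for _ in range(n_coded):
            nbrs = set(random.sample(range(k), toy_degree(k)))
            val = 0
            for i in nbrs:
                val ^= source[i]
            coded.append([nbrs, val])
        return coded

    def lt_peel(coded, k):
        """Peeling (belief-propagation) decoder; also records the ripple size
        (number of degree-one coded symbols) observed at each decoding step."""
        recovered, ripple_trace = {}, []
        while len(recovered) < k:
            ripple = [c for c in coded if len(c[0]) == 1]
            ripple_trace.append(len(ripple))
            if not ripple:
                break                              # empty ripple: decoder stalls
            nbrs, val = ripple[0]
            (idx,) = tuple(nbrs)
            recovered[idx] = val
            for c in coded:                        # cancel idx in every symbol covering it
                if idx in c[0]:
                    c[0].remove(idx)
                    c[1] ^= val
        return recovered, ripple_trace

    if __name__ == "__main__":
        random.seed(1)
        k = 100
        source = [random.getrandbits(8) for _ in range(k)]
        coded = lt_encode(source, int(1.3 * k))    # 30% overhead, purely illustrative
        rec, trace = lt_peel(coded, k)
        print(f"recovered {len(rec)}/{k}; ripple trace: {trace[:12]} ...")
    ```

    Plotting the recorded trace for different degree distributions makes the constant-versus-decreasing ripple trade-off discussed above directly visible.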

    Doped Fountain Coding for Minimum Delay Data Collection in Circular Networks

    This paper studies decentralized, Fountain and network-coding based strategies for facilitating data collection in circular wireless sensor networks, which rely on the stochastic diversity of data storage. The goal is to allow for reduced-delay collection by a data collector who accesses the network at a random position and random time. Data dissemination is performed by a set of relays which form a circular route to exchange source packets. The storage nodes within the transmission range of the route's relays linearly combine and store overheard relay transmissions using random decentralized strategies. An intelligent data collector first collects a minimum set of coded packets from a subset of storage nodes in its proximity, which might be sufficient for recovering the original packets, and, using a message-passing decoder, attempts to recover all original source packets from this set. Whenever the decoder stalls, the source packet which restarts decoding is polled/doped from its original source node. The random-walk-based analysis of the decoding/doping process furnishes the collection delay analysis with a prediction of the number of required doped packets. The number of doped packets can be surprisingly small when an Ideal Soliton degree distribution is employed and, hence, the doping strategy may have the least collection delay when the density of source nodes is sufficiently large. Furthermore, we demonstrate that network coding makes dissemination more efficient at the expense of a larger collection delay. Not surprisingly, a circular network allows for significantly more tractable strategies (analytically and otherwise) relative to a network modeled as a random geometric graph.
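
    The decode-and-dope loop lends itself to a compact simulation. The sketch below is a simplification, not the paper's protocol: the rule for choosing which symbol to dope is a heuristic assumption. It runs a peeling decoder over the collected packets and, whenever it stalls, reveals one still-unknown source symbol so that peeling can resume, counting the doped packets:

    ```python
    import random

    def decode_with_doping(coded, source):
        """Peeling decoder that 'dopes' (polls from its source node) one source
        symbol whenever decoding stalls.  'coded' holds [neighbor_set, xor_value]
        packets gathered from storage nodes; 'source' stands in for the network
        nodes that answer a doping request.  Returns (recovered, number_doped)."""
        k = len(source)
        recovered, doped = {}, 0

        def reveal(idx, val):
            recovered[idx] = val
            for c in coded:                      # cancel idx wherever it appears
                if idx in c[0]:
                    c[0].remove(idx)
                    c[1] ^= val

        while len(recovered) < k:
            ripple = [c for c in coded if len(c[0]) == 1]
            if ripple:                           # normal message-passing step
                (idx,) = tuple(ripple[0][0])
                reveal(idx, ripple[0][1])
            else:
                # Stall: heuristically dope an unknown symbol taken from a
                # degree-2 packet so that some packet drops to degree one.
                pool = [i for c in coded if len(c[0]) == 2 for i in c[0]]
                if not pool:
                    pool = [i for i in range(k) if i not in recovered]
                idx = random.choice(pool)
                reveal(idx, source[idx])
                doped += 1
        return recovered, doped
    ```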

    Rateless Codes with Progressive Recovery for Layered Multimedia Delivery

    This paper proposes a novel approach, based on unequal error protection (UEP), to enhance rateless codes with progressive recovery for layered multimedia delivery. With a parallel encoding structure, the proposed Progressive Rateless codes (PRC) assign unequal redundancy to each layer in accordance with its importance. Each output symbol contains information from all layers, and thus the stream layers can be recovered progressively at the expected received ratios of output symbols. Furthermore, the dependency between layers is naturally taken into account. The performance of the PRC is evaluated and compared with related UEP approaches. Results show that our PRC approach provides better recovery performance with lower overhead, both theoretically and numerically.
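
    The construction details of the PRC are in the paper; purely to illustrate the two ideas named above (every output symbol touches all layers, and more important layers receive more redundancy), here is a generic sketch in which each output symbol XORs neighbors drawn from every layer, with a per-layer average degree acting as the importance weight. The degree law and parameters are assumptions, not the PRC design:

    ```python
    import random

    def layered_uep_encode(layers, avg_degree, n_out):
        """Generic layered-UEP rateless encoder (an illustration of the idea only,
        not the PRC construction): every output symbol XORs at least one neighbor
        from each layer, and a per-layer average degree acts as the importance
        weight, so more important layers effectively receive more redundancy.

        layers:     list of lists of int symbols (layer 0 = most important layer)
        avg_degree: assumed per-layer mean degree, e.g. [3.0, 2.0, 1.0]
        """
        out = []
        for _ in range(n_out):
            nbrs, val = [], 0
            for l, (layer, w) in enumerate(zip(layers, avg_degree)):
                d = min(len(layer), max(1, round(random.expovariate(1.0 / w))))
                for i in random.sample(range(len(layer)), d):
                    nbrs.append((l, i))          # remember (layer, index) of each neighbor
                    val ^= layer[i]
            out.append((nbrs, val))
        return out

    # Example: three layers with decreasing importance and decreasing redundancy.
    # symbols = layered_uep_encode([base, enh1, enh2], avg_degree=[3.0, 2.0, 1.0], n_out=500)
    ```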

    Fountain Codes under Maximum Likelihood Decoding

    This dissertation focuses on fountain codes under maximum likelihood (ML) decoding. First, LT codes are considered under a practical and widely used ML decoding algorithm known as inactivation decoding. Different analysis techniques are presented to characterize the decoding complexity. Next, an upper bound on the probability of decoding failure of Raptor codes under ML decoding is provided. Then, the distance properties of an ensemble of fixed-rate Raptor codes with linear random outer codes are analyzed. Finally, a novel class of fountain codes is presented, which consists of a parallel concatenation of a block code with a linear random fountain code. Comment: PhD thesis.
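
    On the erasure channel, ML decoding of a fountain code amounts to solving a linear system over GF(2) whose rows are the received symbols' neighbor sets; inactivation decoding organizes that elimination so most of the work is done by cheap peeling. The sketch below is an illustration of this underlying linear-algebra view, not the dissertation's algorithm; it performs the full Gaussian elimination directly:

    ```python
    import numpy as np

    def ml_decode_gf2(A, y):
        """ML (erasure-channel) decoding of a fountain code: solve A x = y over GF(2),
        where row i of A flags the source symbols XORed into received symbol y[i].
        Plain Gaussian elimination for clarity; inactivation decoding reaches the same
        solution while doing most of the work by peeling.  A is an (m, k) 0/1 array,
        y holds one bit per received symbol.  Returns x, or None if decoding fails."""
        A = A.copy().astype(np.uint8)
        y = y.copy().astype(np.uint8)
        m, k = A.shape
        row = 0
        for col in range(k):
            if row >= m:
                return None                       # ran out of equations: rank < k
            pivots = np.nonzero(A[row:, col])[0]
            if pivots.size == 0:
                return None                       # free variable: no unique ML solution
            p = row + pivots[0]
            A[[row, p]] = A[[p, row]]             # move the pivot row into place
            y[[row, p]] = y[[p, row]]
            others = np.nonzero(A[:, col])[0]
            others = others[others != row]
            A[others] ^= A[row]                   # cancel this column everywhere else
            y[others] ^= y[row]
            row += 1
        return y[:k]                              # after full reduction, x[j] = y[j]
    ```

    A return value of None is the ML decoding failure event: the received symbols do not determine every source symbol uniquely.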

    Rotation and Neoclassical Ripple Transport in ITER

    Neoclassical transport in the presence of non-axisymmetric magnetic fields causes a toroidal torque known as neoclassical toroidal viscosity (NTV). The toroidal symmetry of ITER will be broken by the finite number of toroidal field coils and by test blanket modules (TBMs). The addition of ferritic inserts (FIs) will decrease the magnitude of the toroidal field ripple. 3D magnetic equilibria with toroidal field ripple and ferromagnetic structures are calculated for an ITER steady-state scenario using the Variational Moments Equilibrium Code (VMEC). Neoclassical transport quantities in the presence of these error fields are calculated using the Stellarator Fokker-Planck Iterative Neoclassical Conservative Solver (SFINCS). These calculations fully account for E_r, flux surface shaping, multiple species, magnitude of ripple, and collisionality rather than applying approximate analytic NTV formulae. As NTV is a complicated nonlinear function of E_r, we study its behavior over a plausible range of E_r. We estimate the toroidal flow, and hence E_r, using a semi-analytic turbulent intrinsic rotation model and NUBEAM calculations of neutral beam torque. The NTV from the |n| = 18 ripple dominates that from lower-n perturbations of the TBMs. With the inclusion of FIs, the magnitude of NTV torque is reduced by about 75% near the edge. We present comparisons of several models of tangential magnetic drifts, finding appreciable differences only for superbanana-plateau transport at small E_r. We find the scaling of calculated NTV torque with ripple magnitude to indicate that ripple-trapping may be a significant mechanism for NTV in ITER. The computed NTV torque without ferritic components is comparable in magnitude to the NBI and intrinsic turbulent torques and will likely damp rotation, but the NTV torque is significantly reduced by the planned ferritic inserts.

    LT Code Equations

    The LT Code equations describe the process of encoding input symbols into an error-correcting code that requires no feedback from the receiver. The mathematics underlying the development of LT Codes is important. This thesis addresses the issue of improving the understanding of the LT Code equations presented in the original paper. This task is accomplished by filling in the mathematical details where possible, providing graphical results for the equations, and comparing the equations' predictions against randomly generated computer simulation results. This thesis improves the understanding of how the LT Code equations relate to actual results.
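
    The best-known of these equations are Luby's Ideal and Robust Soliton degree distributions, reproduced here as a short Python sketch (standard formulas from the original paper; the rounding convention for the spike position k/R and the values of c and delta are common choices, not the thesis's):

    ```python
    import math

    def ideal_soliton(k):
        """Ideal Soliton distribution rho(1..k): rho(1) = 1/k, rho(d) = 1/(d(d-1))."""
        rho = [0.0] * (k + 1)                 # index 0 unused
        rho[1] = 1.0 / k
        for d in range(2, k + 1):
            rho[d] = 1.0 / (d * (d - 1))
        return rho

    def robust_soliton(k, c=0.1, delta=0.5):
        """Robust Soliton distribution mu(1..k), obtained by adding the spike term
        tau to the Ideal Soliton and renormalizing."""
        rho = ideal_soliton(k)
        R = c * math.log(k / delta) * math.sqrt(k)
        spike = int(round(k / R))             # one common convention for k/R
        tau = [0.0] * (k + 1)
        for d in range(1, spike):
            tau[d] = R / (d * k)
        if 1 <= spike <= k:
            tau[spike] = R * math.log(R / delta) / k
        beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
        return [(rho[d] + tau[d]) / beta for d in range(k + 1)]
    ```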

    Raptor Codes in the Low SNR Regime

    In this paper, we revisit the design of Raptor codes for binary-input additive white Gaussian noise (BIAWGN) channels, where we are interested in very low signal-to-noise ratios (SNRs). A linear programming degree distribution optimization problem is defined for Raptor codes in the low SNR regime through several approximations. We also provide an exact expression for the polynomial representation of the degree distribution with infinite maximum degree in the low SNR regime, which enables us to calculate the exact values of the fractions of output nodes of small degree. A more practical degree distribution design is also proposed for Raptor codes in the low SNR regime, where we include the rate efficiency and the decoding complexity in the optimization problem, and an upper bound on the maximum rate efficiency is derived for given design parameters. Simulation results show that Raptor codes with the designed degree distributions can achieve rate efficiencies larger than 0.95 in the low SNR regime. Comment: Submitted to the IEEE Transactions on Communications. arXiv admin note: text overlap with arXiv:1510.0772
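
    The paper defines its LP through approximations tailored to the low SNR regime; to show the general shape of such a design problem, the sketch below gives the classical erasure-channel analogue (a standard formulation, not the paper's LP): the variables are the degree fractions, a linear belief-propagation condition is imposed on a grid of decoding states, and the average output degree is minimized. The parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def design_lt_degree_lp(max_deg=40, eps=0.2, delta=0.05, n_grid=100):
        """Illustrative LP for an LT/Raptor output degree distribution on the erasure
        channel.  Variables Omega_1..Omega_D; minimize the average degree Omega'(1)
        subject to
          sum_d Omega_d = 1,  Omega_d >= 0,
          Omega'(x) >= -ln(1-x)/(1+eps)  on a grid x in [0, 1-delta]."""
        D = max_deg
        degs = np.arange(1, D + 1)
        xs = np.linspace(0.0, 1.0 - delta, n_grid)

        c = degs.astype(float)                        # objective: average output degree
        # BP condition rewritten as A_ub @ Omega <= b_ub (note the sign flip).
        A_ub = -(degs * xs[:, None] ** (degs - 1))    # -Omega'(x_i) <= ln(1-x_i)/(1+eps)
        b_ub = np.log(1.0 - xs) / (1.0 + eps)
        A_eq = np.ones((1, D))
        b_eq = np.array([1.0])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * D, method="highs")
        return dict(zip(degs, res.x)) if res.success else None
    ```

    The structure (simplex-constrained degree fractions, one linear decoding constraint per grid point, a linear cost) is what makes the design a linear program; the paper replaces the erasure-channel condition used here with constraints derived for the BIAWGN channel at low SNR.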

    Unequal Error Protection Raptor Codes

    We design Unequal Error Protection (UEP) Raptor codes in which the UEP property is provided by the precode part of the Raptor code, which is usually a Low-Density Parity-Check (LDPC) code. Existing UEP Raptor codes apply the UEP property to the Luby transform (LT) part of the Raptor code. That approach lowers the bit erasure rate (BER) of the more important bits (MIB) decoded by the LT part of the decoder at the expense of degrading the BER of the less important bits (LIB); hence the overall BER of the data passed from the LT part to the LDPC part of the decoder is higher than when an Equal Error Protection (EEP) LT code is used. The proposed UEP Raptor code design combines a UEP LDPC code with an EEP LT code, so it has the advantage of passing data blocks with lower BER from the LT part to the LDPC part of the decoder. This advantage translates into improved performance, in terms of required overhead and achieved BER, on both the MIB and the LIB compared to UEP Raptor codes that apply the UEP property to the LT part. We propose two design schemes. The first combines a partially regular LDPC code, which has UEP properties, with an EEP LT code; the second uses two LDPC codes with different code rates in the precode part, such that the MIB are encoded with the lower-rate LDPC code while the LT part remains EEP. Simulations of both designs exhibit improved BER performance on both the MIB and the LIB while requiring smaller overheads. The second design can also provide unequal protection when the MIB comprise more than a fraction 0.4 of the source data, a case in which UEP Raptor codes with UEP LT codes perform poorly.
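
    Schematically, the second scheme can be sketched as follows. This is an illustration of the architecture only: a random systematic parity precode stands in for the two LDPC codes, and the EEP degree law and rates are arbitrary placeholders, not the codes designed in the work.

    ```python
    import random

    def toy_precode(bits, rate):
        """Placeholder systematic precode: appends random parity checks.  It only
        stands in for the LDPC precodes of the two schemes; a lower rate simply
        means more parity symbols, i.e. more protection for that block."""
        n_parity = int(round(len(bits) * (1.0 / rate - 1.0)))
        parity = []
        for _ in range(n_parity):
            taps = random.sample(range(len(bits)), max(1, len(bits) // 2))
            p = 0
            for t in taps:
                p ^= bits[t]
            parity.append(p)
        return bits + parity

    def uep_raptor_encode(mib, lib, rate_mib=0.5, rate_lib=0.9, n_out=400):
        """Second scheme, schematically: precode the MIB at a lower rate and the LIB
        at a higher rate, then apply one equal-error-protection LT code to the
        concatenated intermediate block."""
        intermediate = toy_precode(mib, rate_mib) + toy_precode(lib, rate_lib)
        k = len(intermediate)
        out = []
        for _ in range(n_out):
            d = min(k, random.choice([1, 2, 2, 3, 4, 5, 8, 16]))   # illustrative EEP law
            nbrs = random.sample(range(k), d)
            val = 0
            for i in nbrs:
                val ^= intermediate[i]
            out.append((set(nbrs), val))
        return out
    ```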