
    Evaluation of cross-layer reliability mechanisms for satellite digital multimedia broadcast

    This paper presents a study of reliability mechanisms that may be put to work in the context of Satellite Digital Multimedia Broadcasting (SDMB) to mobile devices such as handheld phones. These mechanisms include error-correcting codes and interleaving at the physical layer, erasure codes at intermediate layers, and error concealment at the video decoder. The evaluation is carried out on a realistic satellite channel, takes into account practical constraints such as the maximum zapping time and user mobility at several speeds, and is performed by simulating different scenarios with complete protocol stacks. The simulations indicate that, under the assumptions taken here, the scenario using highly compressed video protected by erasure codes at intermediate layers seems to be the best solution on this kind of channel.
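
    As a minimal illustration of the intermediate-layer erasure protection mentioned above (not the codes evaluated in the paper, which would typically be stronger, e.g. Reed-Solomon or Raptor-style codes), a single XOR parity packet per block already lets the receiver rebuild one lost packet. The function names and the single-parity construction below are illustrative only.

        # Minimal sketch of packet-level erasure protection at an intermediate layer.
        # One XOR parity packet is added per block of k source packets, so any one
        # lost packet in the block can be rebuilt.

        def xor_packets(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def encode_block(packets):
            """Return the parity packet for a block of equal-length source packets."""
            parity = bytes(len(packets[0]))
            for p in packets:
                parity = xor_packets(parity, p)
            return parity

        def recover(received, parity):
            """Rebuild a block in which at most one source packet was erased (None)."""
            missing = [i for i, p in enumerate(received) if p is None]
            if not missing:
                return list(received)
            if len(missing) > 1:
                raise ValueError("single-parity block cannot repair more than one loss")
            rebuilt = parity
            for i, p in enumerate(received):
                if i != missing[0]:
                    rebuilt = xor_packets(rebuilt, p)
            out = list(received)
            out[missing[0]] = rebuilt
            return out

        # Example: one packet of the block is lost on the channel and recovered.
        block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
        par = encode_block(block)
        assert recover([b"pkt0", None, b"pkt2", b"pkt3"], par)[1] == b"pkt1"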

    Performance analysis of a hybrid ARQ system in half duplex transmission at 2400 BPS

    Hybrid ARQ/FEC protocols have been proposed to provide high data-link integrity while at the same time maintaining a high mean throughput rate. Nevertheless, hybrid ARQ strategies offer many design choices, and none of them can be considered optimal in every case. Three alternative protocol strategies using BCH codes are evaluated, and the HF channel models used for the tests are discussed.
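
    A minimal sketch of the type-I hybrid ARQ building block such strategies share: each block is FEC-encoded, the receiver attempts decoding, and only blocks that fail decoding are retransmitted. The BCH encode/decode functions below are placeholders; the paper's three protocol variants and the HF channel models are not modelled here.

        # Stop-and-wait type-I hybrid ARQ loop for a half-duplex link (sketch).
        import random

        def bch_encode(block: bytes) -> bytes:
            # Placeholder: a real implementation would append BCH redundancy.
            return block

        def bch_decode(block: bytes):
            # Placeholder: return (corrected payload, decode_ok). Here we simply
            # simulate a residual error probability after FEC decoding.
            return block, random.random() > 0.2

        def send_with_arq(payload: bytes, max_retries: int = 8) -> bool:
            """Transmit one block, retransmitting until the FEC decoder accepts it."""
            codeword = bch_encode(payload)
            for _ in range(max_retries):
                received = codeword              # a channel model would corrupt this
                data, ok = bch_decode(received)
                if ok:                           # receiver returns ACK
                    return True
                # receiver returns NAK -> retransmit the same codeword (type-I HARQ)
            return False

        random.seed(1)
        print(send_with_arq(b"hello"))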

    Theoretical performance comparison between reference-based coherent BPSK and BCH coded differential BPSK


    Undetected error probability for data services in a terrestrial DAB single frequency network

    DAB (Digital Audio Broadcasting) is the European successor of FM radio. Besides audio services, other services such as traffic information can be provided. An important parameter for data services is the probability of non-recognized, or undetected, errors in the system. To derive this probability, we propose a bound for the undetected error probability of CRC codes. In addition, results from measurements of a Single Frequency Network (SFN) in Amsterdam were used, where the University of Twente conducted a DAB field trial. The proposed error bound is compared with other error bounds from the literature and the results are validated by simulations. Although the proposed bound is less tight than existing bounds, it requires no additional information about the CRC code, such as the weight distribution. Moreover, the DAB standard was extended last year with an Enhanced Packet Mode (EPM), which provides extra protection for data services. An undetected error probability for this mode is also derived. In a realistic user scenario of 10 million users, an 8 kbit/s EPM sub-channel can be considered a system without any undetected errors (Pud = 6 · 10^-40). In a normal data sub-channel, on the other hand, only 110 packets with undetected errors are received on average each year in the whole system (Pud = 5 · 10^-13).
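
    For context, the quantity being bounded can be written exactly in terms of the code's weight distribution; the bound proposed in the paper (not reproduced here) avoids precisely this dependence. For an (n, k) CRC-protected block with weight distribution A_i, used over a binary symmetric channel with bit error probability \varepsilon:

        P_{ud}(\varepsilon) = \sum_{i=d_{\min}}^{n} A_i \, \varepsilon^{i} (1-\varepsilon)^{\,n-i},
        \qquad
        P_{ud}\!\left(\tfrac{1}{2}\right) = \frac{2^{k}-1}{2^{n}} \approx 2^{-(n-k)} .

    The often-quoted 2^{-(n-k)} figure is the value at \varepsilon = 1/2; it upper-bounds P_{ud}(\varepsilon) for all \varepsilon \le 1/2 only for so-called proper codes, which is why weight-distribution-free bounds such as the one proposed in the abstract above are of interest.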

    Minimum Distortion Variance Concatenated Block Codes for Embedded Source Transmission

    Some state-of-the-art multimedia source encoders produce embedded source bit streams such that, upon reliable reception of only a fraction of the total bit stream, the decoder is able to reconstruct the source up to a basic quality. Reliable reception of later source bits gradually improves the reconstruction quality. Examples include scalable extensions of H.264/AVC and progressive image coders such as JPEG2000. To provide efficient protection for embedded source bit streams, a concatenated block coding scheme using a minimum mean distortion criterion was considered in the past. Although the original design was shown to achieve better mean distortion characteristics than previous studies, its coding structure led to dramatic quality fluctuations. In this paper, a modification of the original design is first presented, and then the second-order statistics of the distortion are taken into account in the optimization. More specifically, an extension scheme is proposed using a minimum distortion variance optimization criterion. This robust system design is tested for an image transmission scenario. Numerical results show that the proposed extension achieves significantly lower variance than the original design, while showing similar mean distortion performance using both convolutional codes and low-density parity-check codes. (In Proc. of the International Conference on Computing, Networking and Communications, ICNC 2014, Hawaii, US.)
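
    A schematic formulation of the two criteria (the notation is generic, not the paper's): let the embedded stream be split across L code blocks with channel-code rate allocation r = (r_1, ..., r_L), and let D(r) denote the end-to-end reconstruction distortion, a random variable over channel realizations. The original concatenated design picks r to minimize the mean, while the extension targets the variance:

        r_{\text{MMD}} = \arg\min_{r} \; \mathbb{E}\left[ D(r) \right],
        \qquad
        r_{\text{MDV}} = \arg\min_{r} \; \operatorname{Var}\left[ D(r) \right]
        \;\; \text{s.t.} \;\; \mathbb{E}\left[ D(r) \right] \le D^{\ast}.

    The mean-distortion constraint D^{\ast} is added here only to make the trade-off explicit; per the abstract, the variance-minimizing design attains similar mean distortion to the original while suppressing the quality fluctuations.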

    Criticality Aware Soft Error Mitigation in the Configuration Memory of SRAM based FPGA

    Efficient, low-complexity error-correcting codes (ECC) are considered an effective technique for mitigating multi-bit upsets (MBU) in the configuration memory (CM) of static random access memory (SRAM) based Field Programmable Gate Array (FPGA) devices. Traditional multi-bit ECCs have large overhead and require complex decoding circuits to correct adjacent multi-bit errors. In this work, we propose a simple multi-bit ECC which uses the Secure Hash Algorithm for error detection and a parity-based two-dimensional Erasure Product Code for error correction. Present error mitigation techniques perform error correction in the CM without considering the criticality or the execution period of the tasks allocated in different portions of the CM. In most cases, error correction is not done at the right instant, which sometimes either suspends normal system operation or wastes hardware resources on less critical tasks. In this paper, we advocate a dynamic priority-based hardware scheduling algorithm which chooses the tasks for error correction based on their area, execution period and criticality. The proposed method has been validated in terms of overhead due to redundant bits, error correction time and system reliability.
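
    A toy sketch of the detect-then-correct idea described above, assuming a SHA digest flags a corrupted frame and two-dimensional parity locates a single upset bit; the actual scheme addresses multi-bit upsets over the FPGA configuration memory and adds the criticality-aware scheduler, neither of which is modelled here. Frame dimensions and helper names are illustrative only.

        import hashlib

        ROWS, COLS = 4, 8   # a tiny "frame" of bits

        def row_parity(bits):  return [sum(r) % 2 for r in bits]
        def col_parity(bits):  return [sum(bits[r][c] for r in range(ROWS)) % 2 for c in range(COLS)]
        def digest(bits):      return hashlib.sha256(bytes(b for r in bits for b in r)).digest()

        def protect(bits):
            """Store parities and a hash alongside the frame at configuration time."""
            return {"rows": row_parity(bits), "cols": col_parity(bits), "hash": digest(bits)}

        def scrub(bits, meta):
            """Check the frame; correct a single upset bit if the hash mismatches."""
            if digest(bits) == meta["hash"]:
                return False                              # nothing to do
            bad_rows = [i for i, p in enumerate(row_parity(bits)) if p != meta["rows"][i]]
            bad_cols = [j for j, p in enumerate(col_parity(bits)) if p != meta["cols"][j]]
            if len(bad_rows) == 1 and len(bad_cols) == 1:
                r, c = bad_rows[0], bad_cols[0]
                bits[r][c] ^= 1                           # flip the located bit
                return True
            raise RuntimeError("upset pattern beyond this toy corrector")

        frame = [[0, 1, 0, 1, 1, 0, 0, 1] for _ in range(ROWS)]
        meta = protect(frame)
        frame[2][5] ^= 1                                  # inject a single-bit upset
        assert scrub(frame, meta) and digest(frame) == meta["hash"]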

    Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays

    RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures, which are caused by the physical proximity of devices or by the age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure-correcting MDS code. These factors include small-write update complexity, full-device update complexity, decoding complexity, and the number of supported devices in the array.
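
    To make the erasure model concrete, the small enumeration below classifies 4-erasure patterns in a toy array as clustered when all failed devices fall within a short window of consecutive positions, a stand-in for physical or age proximity; the paper's formal definition of clustered erasures may differ, and the window width here is an arbitrary choice.

        from itertools import combinations

        def is_clustered(erasures, window=5):
            # "Clustered" here: all erased positions fit inside a window of the given width.
            return max(erasures) - min(erasures) < window

        n = 16                                            # toy number of devices
        patterns = list(combinations(range(n), 4))        # all 4-erasure patterns
        clustered = [p for p in patterns if is_clustered(p)]
        print(f"{len(clustered)} of {len(patterns)} 4-erasure patterns are clustered")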