1,381 research outputs found

    Expanding window fountain codes for unequal error protection

    A novel approach to providing unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed. EWF codes use a windowing technique rather than a weighted (non-uniform) selection of input symbols to achieve the UEP property. The windowing approach introduces additional parameters into the UEP rateless code design, making it more general and flexible than the weighted approach. Furthermore, the windowing approach provides better UEP performance, which is confirmed both theoretically and experimentally. © 2009 IEEE
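
    The abstract describes the windowing idea only at a high level. As a rough Python sketch (not the authors' construction), an EWF-style encoder could first draw an expanding window, where window i covers the i+1 most important symbol classes, and then generate an LT-style output symbol from that window alone; the function name, the two sampling distributions, and the XOR-over-integers payload model below are illustrative assumptions.

```python
import random

def ewf_encode_symbol(classes, window_probs, degree_dist):
    """Generate one Expanding Window Fountain (EWF) output symbol.

    classes      -- lists of input symbols ordered from most to least
                    important; window i spans classes[0:i+1]
    window_probs -- selection probability for each expanding window
    degree_dist  -- dict mapping output degree -> probability (LT-style)
    """
    # Draw an expanding window: window i covers the i+1 most important
    # classes, so the most important class is contained in every window.
    i = random.choices(range(len(classes)), weights=window_probs)[0]
    window = [s for cls in classes[:i + 1] for s in cls]

    # Draw an output degree and XOR that many distinct symbols chosen
    # uniformly at random from the selected window only.
    degrees, probs = zip(*degree_dist.items())
    d = min(random.choices(degrees, weights=probs)[0], len(window))
    coded = 0
    for s in random.sample(window, d):
        coded ^= s
    return coded
```

    Biasing window_probs toward the smaller windows is what gives the most important classes their stronger protection, since those classes are eligible for every output symbol.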

    Adaptive Prioritized Random Linear Coding and Scheduling for Layered Data Delivery From Multiple Servers

    In this paper, we deal with the problem of jointly determining the optimal coding strategy and the scheduling decisions when receivers obtain layered data from multiple servers. The layered data is encoded by means of prioritized random linear coding (PRLC) in order to be resilient to channel loss while respecting the unequal levels of importance in the data, and data blocks are transmitted simultaneously in order to reduce decoding delays and improve the delivery performance. We formulate the optimal coding and scheduling problem in our novel framework with the help of Markov decision processes (MDP), which are effective tools for modeling adaptive streaming systems. Reinforcement learning approaches are then proposed to derive reduced-complexity solutions to the adaptive coding and scheduling problems. The novel reinforcement learning approaches and the MDP solution are examined in an illustrative example of scalable video transmission. Our methods offer large performance gains over competing methods that deliver the data blocks sequentially. The experimental evaluation also shows that our novel algorithms offer continuous playback and guarantee small quality variations, which is not the case for baseline solutions. Finally, our work highlights the advantages of reinforcement learning algorithms in forecasting the temporal evolution of data demands and making optimal coding and scheduling decisions.
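
    As a hedged illustration of the reinforcement-learning side of this line of work, the sketch below shows a generic tabular Q-learning loop that picks one coding/scheduling action per time slot; the env interface, the hashable state encoding, and the hyperparameters are assumptions for exposition and are not taken from the paper, which builds a problem-specific MDP.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning for choosing a coding/scheduling action per slot.

    env is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done); states must be hashable
    (e.g. tuples of receiver buffer levels and playback deadlines).
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy choice over the coding/scheduling actions
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # standard one-step Q-learning update
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```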

    Prioritized Random MAC Optimization via Graph-based Analysis

    Motivated by the analogy between successive interference cancellation and iterative belief propagation on erasure channels, irregular repetition slotted ALOHA (IRSA) strategies have received a lot of attention in the design of medium access control protocols. IRSA schemes have mostly been analyzed in theoretical scenarios with homogeneous sources, where they are shown to substantially improve system performance compared to classical slotted ALOHA protocols. In this work, we consider generic systems where sources in different importance classes compete for a common channel. We propose a new prioritized IRSA algorithm and derive the probability of correctly resolving collisions for data from each source class. We then use our theoretical analysis to formulate a new optimization problem for selecting the transmission strategies of heterogeneous sources. We optimize both the replication probability per class and the source rate per class, in such a way that the overall system utility is maximized. We then propose a heuristic algorithm for the selection of the transmission strategy, which builds on intrinsic characteristics of the iterative decoding methods adopted for recovering from collisions. Experimental results validate the accuracy of the theoretical study and show the gain of well-chosen prioritized transmission strategies for the transmission of data from heterogeneous classes over shared wireless channels.
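
    The collision-resolution step underlying this analysis can be pictured as iterative successive interference cancellation (SIC) on a bipartite user/slot graph. The Python sketch below is a minimal, generic SIC decoder rather than the paper's prioritized optimization; the frame representation is an assumption, and prioritization would enter through how many replicas each importance class transmits.

```python
def irsa_sic_decode(frame, n_slots):
    """Iterative SIC decoding for an IRSA frame.

    frame   -- dict mapping each user to the set of slot indices in which
               it sent a replica (a prioritized scheme would give
               high-importance classes more replicas per frame)
    n_slots -- number of slots in the frame
    Returns the set of users whose transmissions were resolved.
    """
    # slot -> set of users still colliding in that slot
    slots = {s: set() for s in range(n_slots)}
    for user, replicas in frame.items():
        for s in replicas:
            slots[s].add(user)

    resolved, progress = set(), True
    while progress:
        progress = False
        for users in slots.values():
            if len(users) == 1:            # singleton slot: decode this user
                u = users.pop()
                resolved.add(u)
                for s in frame[u]:         # cancel its replicas elsewhere
                    slots[s].discard(u)
                progress = True
    return resolved
```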

    Error and Congestion Resilient Video Streaming over Broadband Wireless

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) data-partitioned videos. A packetization strategy is an effective tool for controlling error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme for doing this is applied to real-time streaming across a broadband wireless link. The advantages of rateless code rate adaptivity are then demonstrated in the paper. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size-dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size-dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
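
    As a loose illustration of packet-size-dependent scheduling under a per-interval byte budget, the sketch below orders data-partitioned packets by importance first and size second before filling the transmission interval; the packet representation, the ordering rule, and the capacity model are assumptions for exposition, not the scheduling regime analyzed in the paper.

```python
def schedule_interval(packets, capacity_bytes):
    """Toy size- and priority-aware scheduler for data-partitioned packets.

    packets        -- list of dicts with 'size' (bytes) and 'priority'
                      (0 = partition A, most important; larger = less so)
    capacity_bytes -- bytes the link/send buffer can accept this interval
    Returns the packets selected for transmission in this interval.
    """
    # Favour important partitions first, then smaller packets, so the small
    # high-priority partition-A packets are not delayed behind large,
    # low-priority ones and the send buffer is less likely to overflow.
    ordered = sorted(packets, key=lambda p: (p['priority'], p['size']))
    sent, used = [], 0
    for p in ordered:
        if used + p['size'] <= capacity_bytes:
            sent.append(p)
            used += p['size']
    return sent
```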

    Adaptive-Truncated-HARQ-Aided Layered Video Streaming Relying on Interlayer FEC Coding
