Expanding window fountain codes for unequal error protection
A novel approach to providing unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed. EWF codes use a windowing technique rather than a weighted (non-uniform) selection of input symbols to achieve the UEP property. The windowing approach introduces additional parameters in the UEP rateless code design, making it more general and flexible than the weighted approach. Furthermore, the windowing approach provides better performance of the UEP scheme, which is confirmed both theoretically and experimentally. © 2009 IEEE
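The windowing idea can be sketched in a few lines: input symbols are grouped into nested "windows" (the most important symbols belong to every window), and each output symbol first samples a window and then XORs a few symbols drawn uniformly from it. The following is a minimal illustrative sketch, not the paper's exact construction; the function name, the two-window setup, and the fixed-degree handling are assumptions.

```python
import random

def ewf_encode_symbol(data, windows, window_probs, degree=3, rng=random):
    """Generate one EWF-style output symbol (illustrative sketch).

    data:         list of input symbols (ints, combined by XOR)
    windows:      nested window end indices, e.g. [4, 12] means the first
                  window covers data[0:4] and the second covers data[0:12]
    window_probs: probability of selecting each window (more weight on the
                  small window gives the important symbols more protection)
    """
    # Step 1: choose an expanding window (a nested prefix of the input).
    w_end = rng.choices(windows, weights=window_probs, k=1)[0]
    # Step 2: pick `degree` distinct symbol indices uniformly from that window.
    d = min(degree, w_end)
    idxs = rng.sample(range(w_end), d)
    # Step 3: XOR the chosen symbols to form the output symbol.
    out = 0
    for i in idxs:
        out ^= data[i]
    return out, idxs
```

Because the windows are nested, symbols in the smallest window can be picked by every output symbol, which is what yields the unequal protection.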
Adaptive Prioritized Random Linear Coding and Scheduling for Layered Data Delivery From Multiple Servers
In this paper, we deal with the problem of jointly determining the optimal coding strategy and the scheduling decisions when receivers obtain layered data from multiple servers. The layered data is encoded by means of prioritized random linear coding (PRLC) in order to be resilient to channel loss while respecting the unequal levels of importance in the data, and data blocks are transmitted simultaneously in order to reduce decoding delays and improve the delivery performance. We formulate the optimal coding and scheduling decision problem in our novel framework with the help of Markov decision processes (MDP), which are effective tools for modeling adaptive streaming systems. Reinforcement learning approaches are then proposed to derive reduced-complexity solutions to the adaptive coding and scheduling problems. The novel reinforcement learning approaches and the MDP solution are examined in an illustrative example of scalable video transmission. Our methods offer large performance gains over competing methods that deliver the data blocks sequentially. The experimental evaluation also shows that our novel algorithms offer continuous playback and guarantee small quality variations, which is not the case for baseline solutions. Finally, our work highlights the advantages of reinforcement learning algorithms in forecasting the temporal evolution of data demands and in making the optimal coding and scheduling decisions.
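As a rough illustration of the reinforcement-learning viewpoint, the sketch below runs tabular Q-learning on a toy two-state scheduling chain: the state is the receiver buffer ("low" or "ok"), and the action is whether to request the base or the enhancement layer. The states, actions, rewards, and transitions are invented for illustration and are far simpler than the paper's MDP.

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def train_scheduler(steps=5000, eps=0.2, seed=1):
    """Toy layered-delivery MDP (illustrative only, not the paper's model).

    Requesting the base layer always refills the buffer (reward 1); the
    enhancement layer pays off more (reward 2) when the buffer is healthy
    but drains it, and is wasted (reward 0) when the buffer is already low.
    """
    rng = random.Random(seed)
    actions = ("base", "enh")
    Q, s = {}, "low"
    for _ in range(steps):
        # eps-greedy action selection
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda b: Q.get((s, b), 0.0))
        # deterministic toy dynamics
        if s == "low":
            r, s_next = (1, "ok") if a == "base" else (0, "low")
        else:
            r, s_next = (1, "ok") if a == "base" else (2, "low")
        q_update(Q, s, a, r, s_next, actions)
        s = s_next
    return Q
```

After training, the learned Q-values prefer the safe base layer when the buffer is low and the high-reward enhancement layer when it is healthy, mirroring how an RL scheduler can trade playback continuity against quality.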
Inter-comparison of three-dimensional models of volcanic plumes
We performed an inter-comparison study of three-dimensional models of volcanic plumes. A set of common volcanological input parameters and meteorological conditions were provided for two kinds of eruptions, representing a weak and a strong eruption column. From the different models, we compared the maximum plume height, neutral buoyancy level (where plume density equals that of the atmosphere), and level of maximum radial spreading of the umbrella cloud. We also compared the vertical profiles of eruption column properties, integrated across cross-sections of the plume (integral variables). Although the models use different numerical procedures and treatments of subgrid turbulence and particle dynamics, the inter-comparison shows qualitatively consistent results. In the weak plume case (mass eruption rate 1.5 × 10⁶ kg s⁻¹), the vertical profiles of plume properties (e.g., vertical velocity, temperature) are similar among models, especially in the buoyant plume region. Variability among the simulated maximum heights is ~ 20%, whereas neutral buoyancy level and level of maximum radial spreading vary by ~ 10%. Time-averaging of the three-dimensional (3D) flow fields indicates an effective entrainment coefficient around 0.1 in the buoyant plume region, with much lower values in the jet region, which is consistent with findings of small-scale laboratory experiments. On the other hand, the strong plume case (mass eruption rate 1.5 × 10⁹ kg s⁻¹) shows greater variability in the vertical plume profiles predicted by the different models. Our analysis suggests that the unstable flow dynamics in the strong plume enhances differences in the formulation and numerical solution of the models. This is especially evident in the overshooting top of the plume, which extends a significant portion (~ 1/8) of the maximum plume height. Nonetheless, overall variability in the spreading level and neutral buoyancy level is ~ 20%, whereas that of maximum height is ~ 10%.
This inter-comparison study has highlighted the different capabilities of 3D volcanic plume models, and identified key features of weak and strong plumes, including the roles of jet stability, entrainment efficiency, and particle non-equilibrium, which deserve future investigation in field, laboratory, and numerical studies.

YJS was partially supported by the ERI Cooperative Research Program and KAKENHI (25750142). The computations of SK-3D were carried out in part on the Earth Simulator at the JAMSTEC and also on the Primergy RX200S6 at the Research Computer System, Kyushu University. AC was partially supported by a grant of the International Research Promotion Office, Earthquake Research Institute, the University of Tokyo. AC, TEO and MC were partially supported by the EU-funded project MEDiterranean Supersite Volcanoes (MED-SUV; grant no. 308665). MC acknowledges CINECA award N. HP10BKFD9F (2013) for high performance computing resources and support. AVE acknowledges NSF Postdoctoral Fellowship EAR1250029, a U.S. Geological Survey Mendenhall fellowship, and grant GID 61233 from NASA Ames Supercomputing Center.
Prioritized Random MAC Optimization via Graph-based Analysis
Motivated by the analogy between successive interference cancellation and iterative belief propagation on erasure channels, irregular repetition slotted ALOHA (IRSA) strategies have received considerable attention in the design of medium access control protocols. IRSA schemes have mostly been analyzed in theoretical scenarios with homogeneous sources, where they are shown to substantially improve system performance compared to classical slotted ALOHA protocols. In this work, we consider generic systems where sources in different importance classes compete for a common channel. We propose a new prioritized IRSA algorithm and derive the probability of correctly resolving collisions for data from each source class. We then use our theoretical analysis to formulate a new optimization problem for selecting the transmission strategies of heterogeneous sources. We optimize both the replication probability per class and the source rate per class so that the overall system utility is maximized. We then propose a heuristic algorithm for selecting the transmission strategy, built on intrinsic characteristics of the iterative decoding methods adopted for recovering from collisions. Experimental results validate the accuracy of the theoretical study and show the gain of well-chosen prioritized transmission strategies for transmitting data from heterogeneous classes over shared wireless channels.
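The collision-resolution process behind IRSA mirrors iterative peeling on erasure channels and can be simulated in a few lines: each user transmits several replicas of its packet in random slots, and the receiver repeatedly decodes any slot containing a single replica, then cancels that user's other replicas. The frame setup, the class-to-repetition mapping, and the parameter names below are illustrative assumptions, not the paper's exact protocol.

```python
import random

def irsa_sic(n_users, n_slots, reps_per_class, classes, rng):
    """Simulate one IRSA frame with successive interference cancellation.

    n_users:        number of contending users
    n_slots:        slots in the frame
    reps_per_class: map from class id to number of replicas per packet
                    (higher-priority classes can send more replicas)
    classes:        class id of each user
    Returns the set of users resolved by iterative peeling.
    """
    # Each user places its replicas in distinct random slots.
    slots = [set() for _ in range(n_slots)]
    for u in range(n_users):
        r = reps_per_class[classes[u]]
        for s in rng.sample(range(n_slots), r):
            slots[s].add(u)
    # Peel: decode any singleton slot, cancel that user's replicas, repeat.
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:
                u = next(iter(s))
                decoded.add(u)
                for t in slots:
                    t.discard(u)
                progress = True
    return decoded
```

Giving a class more replicas raises its chance of landing in a singleton slot early in the peeling process, which is the lever a prioritized IRSA design tunes per class.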
Error and Congestion Resilient Video Streaming over Broadband Wireless
In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) data-partitioned videos. A packetization strategy is an effective tool for controlling error rates, and source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme is applied to real-time streaming across a broadband wireless link, and the advantages of rateless code rate adaptivity are demonstrated. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent scheduling results in a robust streaming scheme specialized for broadband wireless and real-time applications such as video conferencing, video telephony, and telemedicine.
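The intuition behind packet-size dependent scheduling can be shown with a minimal sketch: when the send buffer is limited, admitting smaller packets first keeps more of the small, high-importance partition packets and pushes overflow losses onto the larger, less critical ones. The function name and packet fields below are assumptions for illustration, not the paper's scheduler.

```python
def schedule_by_size(packets, buffer_bytes):
    """Hypothetical packet-size dependent scheduler (illustrative sketch).

    Fill a limited send buffer smallest-packet-first, so small packets
    (e.g. important partition-A data) are least likely to be dropped
    when the buffer overflows.
    """
    sent, used = [], 0
    for pkt in sorted(packets, key=lambda p: p["size"]):
        if used + pkt["size"] <= buffer_bytes:
            sent.append(pkt)
            used += pkt["size"]
    return sent
```

For example, with a 400-byte buffer and packets of 100, 400, and 250 bytes, the two smaller packets are admitted and only the largest is dropped, whereas arrival-order scheduling could have dropped both smaller ones behind the large packet.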