Joint On-the-Fly Network Coding/Video Quality Adaptation for Real-Time Delivery
This paper introduces a redundancy adaptation algorithm for Tetrys, an on-the-fly erasure network coding scheme, in the context of real-time video transmission. The algorithm exploits the relationship between the redundancy ratio used by Tetrys and the gain or loss in encoding bit rate obtained by changing a video quality parameter, the Quantization Parameter (QP). Our evaluations show that, with equal or lower bandwidth occupation, video protected by Tetrys with the redundancy adaptation algorithm obtains a PSNR gain of 4 dB or more compared to video without Tetrys protection. We demonstrate that the Tetrys redundancy adaptation algorithm performs well under variations in both the loss pattern and the delay induced by the network. We also show that Tetrys with the redundancy adaptation algorithm outperforms FEC both with and without redundancy adaptation.
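The abstract does not spell out the adaptation rule, but the QP/redundancy trade-off it describes can be sketched as a simple control loop. All names and constants below are hypothetical, including the assumed safety margin and the assumed ~5% bitrate drop per QP step; this is an illustration of the trade-off, not the Tetrys algorithm itself:

```python
# Hypothetical sketch of joint redundancy/QP adaptation. The real Tetrys
# algorithm is not specified in the abstract; constants here are assumptions.

def adapt(loss_rate, bandwidth_budget, video_bitrate, qp,
          qp_max=51, bitrate_drop_per_qp=0.05):
    """Pick a redundancy ratio covering the measured loss rate, then
    raise the QP until video + redundancy fits the bandwidth budget."""
    redundancy = min(0.5, loss_rate * 1.2)        # 20% safety margin (assumed)
    total = video_bitrate * (1 + redundancy)
    while total > bandwidth_budget and qp < qp_max:
        qp += 1                                   # coarser quantization...
        video_bitrate *= (1 - bitrate_drop_per_qp)  # ...shrinks the bitrate
        total = video_bitrate * (1 + redundancy)
    return redundancy, qp, video_bitrate
```

The point of the sketch is the coupling: redundancy follows the channel, while QP absorbs the resulting bitrate overhead so total bandwidth stays within budget.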
A testbed of erasure coding on video streaming system over lossy networks
As one of the most challenging aspects of streaming video over lossy networks, controlling packet losses has attracted increasing attention. Erasure coding is one of the natural choices for dealing with this problem. In most cases, researchers need an effective method or tool to validate the erasure codes used against different packet loss patterns. Although some previous work has employed erasure codes in video streaming systems, few actual implementations and experiments that test erasure codes against real packet loss in streaming systems have been reported. In this paper, we focus on constructing a testbed that integrates loss pattern generation and erasure coding into video streaming services over lossy networks. With this approach, we can assess the capability of erasure coding for packet loss control and compare the performance of video streaming systems with and without erasure coding. As an example, we have implemented the Reed-Solomon (7, 5) code to protect MPEG streaming data under random packet losses. Experimental results show that playback quality can be improved significantly by using erasure coding in video streaming systems, and that the testbed can suggest appropriate erasure code parameters for different loss environments.
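The parameter-selection behaviour such a testbed exhibits can be approximated for an ideal erasure code: a Reed-Solomon (7, 5) block decodes exactly when at least 5 of its 7 packets arrive. A minimal Monte-Carlo sketch (not the authors' testbed) that estimates block recovery probability for a given (n, k) under i.i.d. random loss:

```python
import random

def block_recovery_prob(n, k, loss_rate, trials=100_000, seed=1):
    """Estimate P(block decodable): an MDS (n, k) erasure code such as
    Reed-Solomon recovers the block iff at least k of n packets arrive."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        received = sum(rng.random() >= loss_rate for _ in range(n))
        ok += received >= k
    return ok / trials
```

Sweeping (n, k) with this function over the loss rates of interest mirrors how a testbed can suggest code parameters: for example, (7, 5) at 5% random loss recovers almost all blocks, while at 30% loss it no longer suffices.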
Random Linear Network Coding for 5G Mobile Video Delivery
The exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The new fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail the novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
Comment: Invited paper for the Special Issue "Network and Rateless Coding for Video Streaming", MDPI Information
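For intuition, packet-level RLNC can be sketched in its simplest form, with coding coefficients over GF(2) and packets modelled as integers. Production RLNC typically works over GF(2^8) with byte payloads, so this is an illustrative toy, not a 5G implementation; all function names are ours:

```python
import random

def rlnc_encode(packets, rng):
    """One coded packet over GF(2): the XOR of a random non-empty subset
    of the k source packets, plus the coefficient vector (as a bitmask)."""
    k = len(packets)
    coeffs = 0
    while coeffs == 0:
        coeffs = rng.getrandbits(k)
    payload = 0
    for i in range(k):
        if coeffs >> i & 1:
            payload ^= packets[i]
    return coeffs, payload

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2). Returns the k source packets,
    or None while the received combinations span fewer than k dimensions."""
    basis = [None] * k                  # basis[i]: row whose lowest set bit is i
    for coeffs, payload in coded:
        for i in range(k):
            if not (coeffs >> i & 1):
                continue
            if basis[i] is None:
                basis[i] = (coeffs, payload)
                break
            bc, bp = basis[i]
            coeffs ^= bc                # reduce against the existing pivot
            payload ^= bp
    if any(row is None for row in basis):
        return None                     # rank < k: keep collecting packets
    out = [0] * k
    for i in reversed(range(k)):        # back-substitution
        coeffs, payload = basis[i]
        for j in range(i + 1, k):
            if coeffs >> j & 1:
                payload ^= out[j]
        out[i] = payload
    return out
```

The receiver needs any k linearly independent combinations, not any particular packets, which is what makes RLNC attractive for multicast and erasure protection.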
DeepSHARQ: hybrid error coding using deep learning
Cyber-physical systems operate in changing environments and on resource-constrained devices. Communication in these environments must use hybrid error coding, as purely proactive or reactive schemes cannot always fulfill application demands, or do so with suboptimal performance. However, finding optimal coding configurations that fulfill application constraints (e.g., tolerable loss and delay) under changing channel conditions is a computationally challenging task. Recently, the systems community has started addressing such problems with hybrid decomposed solutions, i.e., algorithmic approaches for well-understood, formalized parts of the problem and learning-based approaches for parts that must be estimated (whether for reasons of uncertainty or computational intractability). For DeepSHARQ, we revisit our own recent work and limit the learning problem to block length prediction, the major contributor to inference time (and its variation) when searching for hybrid error coding configurations. The remaining parameters are found algorithmically, and hence we make individual contributions to finding close-to-optimal coding configurations in both of these areas, combining them into a hybrid solution. DeepSHARQ applies block length regularization to reduce the size of the neural network in comparison with purely learning-based solutions. The hybrid solution is nearly optimal with respect to the channel efficiency of the coding configurations it generates, as it is trained so that deviations from the optimum are upper-bounded by a configurable percentage. In addition, DeepSHARQ can react to channel changes in real time, enabling cyber-physical systems even on resource-constrained platforms. Tightly integrating algorithmic and learning-based approaches allows DeepSHARQ to react to channel changes faster and with more predictable timing than solutions that rely on only one of the two approaches.
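The decomposition described above can be illustrated with a toy sketch: a stand-in heuristic plays the role of the neural block-length predictor, and the parity count is then found algorithmically against an idealized MDS residual-loss model. All functions and defaults here are hypothetical and much simpler than DeepSHARQ's actual parameter space (which also covers retransmissions):

```python
from math import comb

def residual_loss(k, p, e):
    """P(block unrecoverable) for an idealized systematic (k+p, k) MDS
    block code under i.i.d. loss rate e: decoding fails iff more than
    p of the k+p packets are lost."""
    n = k + p
    return sum(comb(n, i) * e**i * (1 - e)**(n - i) for i in range(p + 1, n + 1))

def predict_block_length(max_delay_ms, packet_interval_ms):
    """Stand-in for the learned block-length predictor (hypothetical
    heuristic): the largest block that still fits the delay budget."""
    return max(1, int(max_delay_ms // packet_interval_ms))

def configure(loss_rate, max_delay_ms, packet_interval_ms, target=1e-3):
    """Hybrid search: the 'learned' part picks k, the algorithmic part
    picks the smallest parity count p meeting the residual-loss target."""
    k = predict_block_length(max_delay_ms, packet_interval_ms)
    for p in range(4 * k + 1):
        if residual_loss(k, p, loss_rate) <= target:
            return k, p
    return k, 4 * k
```

The split matters for latency: predicting one parameter and solving for the rest keeps inference time small and predictable, which is the property the abstract claims for the real system.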
A protection scheme for multimedia packet streams in bursty packet loss networks based on small block low-density parity-check codes
This paper proposes an enhanced forward error correction (FEC) scheme based on small block low-density parity-check (LDPC) codes to protect real-time packetized multimedia streams over bursty channels. LDPC codes are typically used for channels where losses are uniformly distributed (memoryless channels) and for large information blocks. This work instead applies this type of FEC code at the application layer, in bursty channels (e.g., Internet protocol (IP)-based networks) and in real-time scenarios that require low transmission latency. To fulfil these constraints, the appropriate configuration parameters of an LDPC scheme have been determined using small blocks of information and adapting the FEC code to recover packet losses in bursty environments. This is achieved in two steps. The first step is an algorithm that estimates the recovery capability of a given LDPC code in a bursty packet loss network. The second step is the optimization of the code: an algorithm optimizes the parity matrix structure, in terms of recovery capability, against the specific behavior of the channel with memory. Experimental results obtained over a simulated transmission channel show that the optimized LDPC matrices yield a more robust protection scheme against bursty packet losses for small information blocks.
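The first step, estimating recovery capability under bursty loss, can be sketched with a two-state Gilbert channel and an idealized (n, k) recovery bound. The paper's actual estimator targets specific LDPC parity matrices, which this toy deliberately does not model; the MDS bound below is an upper limit that a real small-block LDPC code approaches but does not reach:

```python
import random

def gilbert_losses(n_packets, p_gb, p_bg, seed=3):
    """Loss flags from a two-state Gilbert channel: packets are lost in
    the Bad state and delivered in the Good state; state persistence
    (1 - p_bg) makes the losses bursty rather than uniform."""
    rng = random.Random(seed)
    bad = False
    flags = []
    for _ in range(n_packets):
        bad = rng.random() < (1 - p_bg if bad else p_gb)
        flags.append(bad)
    return flags

def estimate_recovery(n, k, flags):
    """Fraction of n-packet blocks with at most n - k erasures. This is
    an idealized (MDS) recovery bound, not LDPC decoding, but the trend
    across channel parameters is comparable."""
    blocks = len(flags) // n
    ok = sum(sum(flags[b * n:(b + 1) * n]) <= n - k for b in range(blocks))
    return ok / blocks
```

Running this for candidate (n, k) pairs over a channel with a given burstiness (here, mean burst length 1/p_bg) is the kind of estimate the optimization step can then use to compare parity matrix structures.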