357 research outputs found

    Performance study of large block FEC with drop tail for video streaming over the internet

    Get PDF
    This paper presents an investigation of the performance of large block forward error correction (FEC) under the Drop Tail (DT) queuing policy. FEC is a technique that uses redundant packets to reconstruct dropped packets, while Drop Tail is the most popular queue management policy used in network routers. Since Drop Tail depends mainly on the size of the queue buffer to decide whether to drop a packet, the investigation considered simulation settings with varying queue buffer sizes. Results obtained from the simulation experiments show that FEC and queue size affect the performance of the network. Consequently, the quality of multimedia applications is also affected
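The recovery idea behind block FEC can be sketched with a minimal single-parity example (an illustration of the general technique, not the large-block scheme evaluated in the paper): one XOR parity packet per block lets the receiver rebuild exactly one dropped packet.

```python
# Minimal single-parity FEC sketch: XOR all data packets into one
# parity packet; any single lost packet is the XOR of the survivors
# and the parity.

def make_parity(packets):
    """XOR equal-length packets into a single parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(survivors, parity):
    """Rebuild the one missing packet from the survivors plus parity."""
    missing = bytearray(parity)
    for pkt in survivors:
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)

block = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(block)
# Suppose packet 1 (b"BBBB") is dropped in the network:
assert recover([block[0], block[2]], parity) == b"BBBB"
```

Real deployments use stronger codes (Reed-Solomon, LDPC-family) that tolerate multiple losses per block, but the redundancy/reconstruction trade-off is the same.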

    Packet loss characteristics of IPTV-like traffic on residential links

    Get PDF
    Packet loss is one of the principal threats to quality of experience for IPTV systems. However, the packet loss characteristics of the residential access networks which carry IPTV are not widely understood. We present packet level measurements of streaming IPTV-like traffic over four residential access links, and describe the extent and nature of packet loss we encountered. We discuss the likely impact of these losses for IPTV traffic, and outline steps which can ameliorate this

    Congestion Control using FEC for Conversational Multimedia Communication

    Full text link
    In this paper, we propose a new rate control algorithm for conversational multimedia flows. In our approach, along with Real-time Transport Protocol (RTP) media packets, we propose sending redundant packets to probe for available bandwidth. These redundant packets are Forward Error Correction (FEC) encoded RTP packets. A straightforward interpretation is that if no losses occur, the sender can increase the sending rate to include the FEC bit rate, and in the case of losses due to congestion the redundant packets help in recovering the lost packets. We also show that by varying the FEC bit rate, the sender is able to conservatively or aggressively probe for available bandwidth. We evaluate our FEC-based Rate Adaptation (FBRA) algorithm in a network simulator and in the real world and compare it to other congestion control algorithms
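The probing interpretation described above can be sketched as a toy rate update (hypothetical constants and a deliberately simplified loop, not the actual FBRA state machine): the sender transmits media plus an FEC stream; a loss-free interval shows the path sustained media + FEC, so the media rate can grow into that probed headroom.

```python
# Toy FEC-based probing sketch. The fec_fraction controls how
# aggressively the sender probes; the 0.85 back-off factor is an
# assumed illustrative value.

def update_rate(media_rate, fec_fraction, loss_observed):
    fec_rate = media_rate * fec_fraction
    if not loss_observed:
        # Probe succeeded: absorb the FEC bit rate into the media rate.
        return media_rate + fec_rate
    # Congestion signal: back off below the rate that caused losses;
    # the FEC packets meanwhile helped repair the actual losses.
    return media_rate * 0.85

rate = 500.0  # kbps, assumed starting rate
for lost in [False, False, True, False]:
    rate = update_rate(rate, fec_fraction=0.1, loss_observed=lost)
```

A larger `fec_fraction` probes more aggressively (bigger rate jumps on success) at the cost of more redundancy overhead, which matches the paper's conservative-vs-aggressive framing.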

    Impact of large block FEC with different queue sizes of drop tail and RED queue policy on video streaming quality over internet

    Get PDF
    In this paper, we report an investigation on the impact of large block Forward Error Correction (FEC) with Drop Tail (DT) and Random Early Detection (RED) queue policies on network performance and the quality of video streaming. FEC is a technique that uses redundant packets to reconstruct dropped packets, while DT and RED are the most popular queue management policies used in network routers. DT depends mainly on the size of the queue buffer to decide whether to drop a packet. RED monitors the average queue size and drops arriving packets probabilistically; the probability of dropping a packet increases as the estimated average queue size grows. In the investigation, we consider simulation settings with varying queue buffer sizes. Results obtained from the simulation experiments show that large block FEC and queue size affect the performance of the network. Consequently, the quality of multimedia applications is also affected
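The RED behavior described above can be sketched as follows (illustrative threshold and weight values, not the parameters used in the paper's simulations): the router tracks an exponentially weighted moving average of the queue size and drops arrivals with a probability that rises linearly between two thresholds.

```python
# Sketch of the classic RED drop decision. Thresholds are in packets;
# all parameter values here are assumed for illustration.

def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    if avg_queue < min_th:
        return 0.0            # no early drops while lightly loaded
    if avg_queue >= max_th:
        return 1.0            # force-drop region: behaves like Drop Tail
    # Linear ramp between the thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def update_avg(avg, instantaneous_queue, weight=0.002):
    # EWMA smooths transient bursts so RED reacts to sustained
    # congestion rather than momentary spikes.
    return (1 - weight) * avg + weight * instantaneous_queue

assert red_drop_probability(3) == 0.0
assert red_drop_probability(10) == 0.05
assert red_drop_probability(20) == 1.0
```

Drop Tail, by contrast, is simply `drop if queue_length >= buffer_size`, which is why the buffer size dominates its loss behavior.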

    GRACE: Loss-Resilient Real-Time Video through Neural Codecs

    Full text link
    In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements. To counter packet losses without retransmission, two primary strategies are employed -- encoder-based forward error correction (FEC) and decoder-based error concealment. The former encodes data with redundancy before transmission, yet determining the optimal redundancy level in advance proves challenging. The latter reconstructs video from partially received frames, but dividing a frame into independently coded partitions inherently compromises compression efficiency, and the lost information cannot be effectively recovered by the decoder without adapting the encoder. We present a loss-resilient real-time video system called GRACE, which preserves the user's quality of experience (QoE) across a wide range of packet losses through a new neural video codec. Central to GRACE's enhanced loss resilience is its joint training of the neural encoder and decoder under a spectrum of simulated packet losses. In lossless scenarios, GRACE achieves video quality on par with conventional codecs (e.g., H.265). As the loss rate escalates, GRACE exhibits a more graceful, less pronounced decline in quality, consistently outperforming other loss-resilient schemes. Through extensive evaluation on various videos and real network traces, we demonstrate that GRACE reduces undecodable frames by 95% and stall duration by 90% compared with FEC, while markedly boosting video quality over error concealment methods. In a user study with 240 crowdsourced participants and 960 subjective ratings, GRACE registers a 38% higher mean opinion score (MOS) than other baselines

    DeepSHARQ: hybrid error coding using deep learning

    Get PDF
    Cyber-physical systems operate under changing environments and on resource-constrained devices. Communication in these environments must use hybrid error coding, as pure pro- or reactive schemes cannot always fulfill application demands or have suboptimal performance. However, finding optimal coding configurations that fulfill application constraints—e.g., tolerate loss and delay—under changing channel conditions is a computationally challenging task. Recently, the systems community has started addressing these sorts of problems using hybrid decomposed solutions, i.e., algorithmic approaches for well-understood formalized parts of the problem and learning-based approaches for parts that must be estimated (either for reasons of uncertainty or computational intractability). For DeepSHARQ, we revisit our own recent work and limit the learning problem to block length prediction, the major contributor to inference time (and its variation) when searching for hybrid error coding configurations. The remaining parameters are found algorithmically, and hence we make individual contributions with respect to finding close-to-optimal coding configurations in both of these areas—combining them into a hybrid solution. DeepSHARQ applies block length regularization in order to reduce the neural networks in comparison to purely learning-based solutions. The hybrid solution is nearly optimal concerning the channel efficiency of coding configurations it generates, as it is trained so that deviations from the optimum are upper-bounded by a configurable percentage. In addition, DeepSHARQ is capable of reacting to channel changes in real time, thereby enabling cyber-physical systems even on resource-constrained platforms. Tightly integrating algorithmic and learning-based approaches allows DeepSHARQ to react to channel changes faster and with a more predictable time than solutions that rely only on either of the two approaches

    Video over DSL with LDGM Codes for Interactive Applications

    Get PDF
    Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications
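The column-rearrangement extension aims at spreading a loss burst across many code constraints. The burst-spreading intuition can be illustrated with simple block interleaving (this only shows the intuition, not the paper's LDGM parity-check matrix construction):

```python
# Block interleaving sketch: arrange packets in a matrix row by row
# (one FEC block per row) but transmit column by column, so a burst of
# consecutive on-the-wire losses touches many blocks lightly instead
# of one block heavily.

def interleave(packets, rows):
    cols = len(packets) // rows
    matrix = [packets[r * cols:(r + 1) * cols] for r in range(rows)]
    # Transmission order: read the matrix column by column.
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

pkts = list(range(12))           # 4 FEC blocks of 3 packets each
wire_order = interleave(pkts, rows=4)
burst = set(wire_order[0:4])     # a 4-packet burst on the wire...
# ...costs each of the 4 blocks exactly one packet, which a
# single-loss-correcting code per block can repair:
assert burst == {0, 3, 6, 9}
```

The trade-off is added latency (a full matrix must be buffered before transmission), which is why low-complexity codes and careful matrix design matter for the interactive applications targeted here.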

    Congestion Control for Streaming Media

    Get PDF
    The Internet has assumed the role of the underlying communication network for applications such as file transfer, electronic mail, Web browsing and multimedia streaming. Multimedia streaming, in particular, is growing with the growth in power and connectivity of today's computers. These Internet applications have a variety of network service requirements and traffic characteristics, which presents new challenges to the single best-effort service of today's Internet. TCP, the de facto Internet transport protocol, has been successful in satisfying the needs of traditional Internet applications, but fails to satisfy the increasingly popular delay-sensitive multimedia applications. Streaming applications often use UDP without proper congestion avoidance mechanisms, threatening the well-being of the Internet. This dissertation presents an IP router traffic management mechanism, referred to as Crimson, that can be seamlessly deployed in the current Internet to protect well-behaving traffic from misbehaving traffic and support Quality of Service (QoS) requirements of delay-sensitive multimedia applications as well as traditional Internet applications. In addition, as a means to enhance Internet support for multimedia streaming, this dissertation presents the design and evaluation of a TCP-Friendly and streaming-friendly transport protocol called the Multimedia Transport Protocol (MTP). Through a simulation study, this report shows that the Crimson network efficiently handles network congestion and minimizes queuing delay while providing affordable fairness protection from misbehaving flows over a wide range of traffic conditions. In addition, our results show that MTP offers streaming performance comparable to that provided by UDP, while doing so at a TCP-Friendly rate