Tiny Codes for Guaranteeable Delay
Future 5G systems will need to support ultra-reliable low-latency
communications scenarios. From a latency-reliability viewpoint, it is
inefficient to rely on average utility-based system design. Therefore, we
introduce the notion of guaranteeable delay, defined as the average delay plus
three standard deviations of the mean. We investigate the trade-off between
guaranteeable delay and throughput for point-to-point wireless erasure links
with unreliable and delayed feedback, by bringing signal flow techniques to the
area of coding. We use tiny codes, i.e. sliding-window coding with just 2
packets, and design three variations of selective-repeat ARQ
protocols, by building on the baseline scheme, i.e. uncoded ARQ, developed by
Ausavapattanakun and Nosratinia: (i) Hybrid ARQ with soft combining at the
receiver; (ii) cumulative feedback-based ARQ without rate adaptation; and (iii)
Coded ARQ with rate adaptation based on the cumulative feedback. Contrasting
the performance of these protocols with uncoded ARQ, we demonstrate that HARQ
performs only slightly better, cumulative feedback-based ARQ does not provide a
significant throughput gain although it achieves a better average delay, and Coded ARQ can
provide gains up to about 40% in terms of throughput. Coded ARQ also provides
delay guarantees, and is robust to various challenges such as imperfect and
delayed feedback, burst erasures, and round-trip time fluctuations. This
feature may be preferable for meeting the strict end-to-end latency and
reliability requirements of future use cases of ultra-reliable low-latency
communications in 5G, such as mission-critical communications and industrial
control for critical control messaging.
Comment: to appear in IEEE JSAC Special Issue on URLLC in Wireless Networks
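The 2-packet sliding-window idea behind these "tiny codes" can be sketched with an XOR combination: each coded packet protects the two most recent data packets, so a single erasure inside the window is recovered without a retransmission. The XOR construction and packet framing below are illustrative assumptions, not the paper's exact code design.

```python
# Minimal sketch of sliding-window coding with a window of 2 packets:
# a coded packet is the XOR of the two packets in the current window.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_window(p1: bytes, p2: bytes) -> bytes:
    """Coded packet protecting the current 2-packet window."""
    return xor_bytes(p1, p2)

def recover(surviving: bytes, coded: bytes) -> bytes:
    """Recover the erased packet from the surviving one plus the coded packet."""
    return xor_bytes(surviving, coded)

p1, p2 = b"DATA-001", b"DATA-002"
c = encode_window(p1, p2)
# Suppose p2 is erased on the channel; the receiver still holds p1 and c:
assert recover(p1, c) == p2
```

Because either packet of the window can play the role of the survivor, one coded packet covers any single erasure in the window, which is what lets the sender avoid a full round trip of feedback before repairing.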
DeepSHARQ: hybrid error coding using deep learning
Cyber-physical systems operate under changing environments and on resource-constrained devices. Communication in these
environments must use hybrid error coding, as pure pro- or reactive schemes cannot always fulfill application demands or have
suboptimal performance. However, finding optimal coding configurations that fulfill application constraints—e.g., tolerate
loss and delay—under changing channel conditions is a computationally challenging task. Recently, the systems community
has started addressing these sorts of problems using hybrid decomposed solutions, i.e., algorithmic approaches for well-understood formalized parts of the problem and learning-based approaches for parts that must be estimated (either for reasons
of uncertainty or computational intractability). For DeepSHARQ, we revisit our own recent work and limit the learning
problem to block length prediction, the major contributor to inference time (and its variation) when searching for hybrid error
coding configurations. The remaining parameters are found algorithmically, and hence we make individual contributions with
respect to finding close-to-optimal coding configurations in both of these areas—combining them into a hybrid solution.
DeepSHARQ applies block length regularization in order to reduce the size of the neural networks in comparison to purely learning-based solutions. The hybrid solution is nearly optimal with respect to the channel efficiency of the coding configurations it generates,
as it is trained so that deviations from the optimum are upper-bounded by a configurable percentage. In addition, DeepSHARQ is
capable of reacting to channel changes in real time, thereby enabling cyber-physical systems even on resource-constrained
platforms. Tightly integrating algorithmic and learning-based approaches allows DeepSHARQ to react to channel changes
faster and with a more predictable time than solutions that rely only on either of the two approaches.
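The algorithmic half of such a decomposition can be illustrated with a small search: given a block length k (the parameter the abstract assigns to the learned predictor), the number of repair packets is found algorithmically so that the residual loss stays under an application target. The i.i.d. binomial channel model and all names below are illustrative assumptions, not DeepSHARQ's actual procedure.

```python
# Hedged sketch: for a (predicted) block length k, search for the smallest
# repair-packet count r whose residual loss meets the application target.
from math import comb

def residual_loss(k: int, r: int, p: float) -> float:
    """P(more than r of the k+r packets are lost) under i.i.d. loss prob p."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1, n + 1))

def min_repair_packets(k: int, p: float, target: float, r_max: int = 64) -> int:
    """Smallest r such that the residual loss is at most `target`."""
    for r in range(r_max + 1):
        if residual_loss(k, r, p) <= target:
            return r
    raise ValueError("no feasible configuration within r_max")

# e.g. block length k=10 supplied by the learned component:
r = min_repair_packets(k=10, p=0.05, target=1e-3)
```

Restricting the learned component to k alone, as the abstract describes, keeps this inner search cheap and deterministic, which is where the predictable inference time comes from.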
High Quality of Service on Video Streaming in P2P Networks using FST-MDC
Video streaming applications have recently attracted a large number of
participants in distribution networks. Traditional client-server based video
streaming solutions consume precious bandwidth on the server. Recently, several
P2P streaming systems have been deployed to provide on-demand and live video
streaming services over wireless networks at reduced server cost. Peer-to-Peer
(P2P) computing is a new paradigm for constructing distributed network
applications. Typical error control techniques are not well matched to video
transmission, while the use of error-prone channels, e.g., wireless networks
and IP, has increased greatly. Together, these two facts provided the essential
motivation for the development of a new set of techniques (error concealment)
capable of dealing with transmission errors in video systems. In this paper, we
propose a flexible multiple description coding method named Flexible
Spatial-Temporal (FST), which improves error resilience against frame losses
over independent paths. It combines spatial and temporal concealment
techniques at the receiver to conceal lost frames more effectively.
Experimental results show that the proposed approach attains reasonable video
quality over P2P wireless networks.
Comment: 11 pages, 8 figures, journal
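The multiple-description idea behind a scheme like FST can be sketched as a temporal split: frames are divided into two descriptions sent over independent paths, and a frame lost on one path is concealed from a temporal neighbour delivered on the other. The even/odd split and nearest-neighbour concealment below are illustrative simplifications, not the paper's exact FST method.

```python
# Minimal sketch of two-description temporal MDC with receiver-side concealment.

def split_descriptions(frames):
    """Temporal split: even-indexed frames form one description, odd the other."""
    return frames[0::2], frames[1::2]

def reconstruct(n_frames, d0, d1, lost):
    """Merge both descriptions and conceal frames lost on one path."""
    out = [None] * n_frames
    out[0::2] = d0
    out[1::2] = d1
    for i in lost:
        out[i] = None  # this frame never arrived at the receiver
    for i in sorted(lost):
        # temporal concealment: copy the nearest received neighbour frame
        neighbours = [j for j in (i - 1, i + 1)
                      if 0 <= j < n_frames and out[j] is not None]
        out[i] = out[neighbours[0]] if neighbours else None
    return out

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
d0, d1 = split_descriptions(frames)                   # two independent paths
video = reconstruct(len(frames), d0, d1, lost={3})    # frame 3 lost on path 1
```

Because a lost frame's neighbours travel on the other path, a loss on one path degrades quality locally instead of stalling playback, which is the error-resilience property the abstract refers to.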
Evaluating and improving the performance of video content distribution in lossy networks
The contributions in this research are split into three distinct, but related, areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. When real-time applications are needed, delay must be kept to a minimum, and retransmissions are not desirable. A balance, therefore, between additional bandwidth and delays due to retransmissions must be struck. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated, whilst monitoring buffer behaviour to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric introduced and show that the objective and subjective scores are closely correlated.
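The continuity-based idea of replaying a packet trace through a playout buffer can be sketched as follows. Here "pause intensity" is simplified to the fraction of session time spent stalled; the startup threshold, frame duration, and this exact definition are illustrative assumptions and may differ from the thesis's metric.

```python
# Hedged sketch: simulate playback of frames arriving at the given times
# (seconds) and measure the fraction of time the player spends paused.

def pause_fraction(arrival_times, frame_duration=0.04, startup_frames=5):
    """Replay a trace through a playout buffer; return paused-time fraction."""
    paused = 0.0
    # playback starts once the startup buffer has filled
    clock = arrival_times[startup_frames - 1]
    for t in arrival_times[startup_frames:]:
        if t > clock + frame_duration:
            paused += t - (clock + frame_duration)  # frame late: player stalls
            clock = t                               # playback resumes on arrival
        else:
            clock += frame_duration                 # frame plays on schedule
    total = arrival_times[-1] - arrival_times[startup_frames - 1]
    return paused / total if total > 0 else 0.0

smooth = [0.04 * i for i in range(30)]                # frames arrive on time
stalled = smooth[:5] + [t + 0.5 for t in smooth[5:]]  # one 0.5 s delivery gap
```

Driving such a function with traces from real TCP connections, as the abstract describes, lets the continuity of playback be scored objectively without decoding any video.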
Network coding meets multimedia: a review
While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews the recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects make it possible to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
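The in-network packet combination the review builds on is easiest to see in the textbook two-source example: a relay forwards a single XOR of two packets, and each receiver recovers the packet it is missing by XOR-ing with the one it already has. This is the canonical "butterfly" illustration, not any specific system from the review.

```python
# The classic XOR network-coding example: one coded transmission serves two
# receivers that each hold a different one of the two original packets.

def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR combination of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a, pkt_b = b"video-A1", b"video-B1"
coded = xor_packets(pkt_a, pkt_b)       # relay sends one coded packet

# Receiver 1 already has pkt_a and recovers pkt_b:
assert xor_packets(coded, pkt_a) == pkt_b
# Receiver 2 already has pkt_b and recovers pkt_a:
assert xor_packets(coded, pkt_b) == pkt_a
```

One coded transmission replacing two unicast forwards is precisely the throughput gain over plain relaying that the abstract mentions.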
TCP Using Adaptive FEC to Improve Throughput Performance in High-Latency Environments
Packet losses significantly degrade TCP performance in high-latency environments. This is because TCP needs at least one round-trip time (RTT) to recover lost packets. The recovery time grows even longer in high-latency environments. TCP keeps the transmission rate low while lost packets are recovered, thereby degrading throughput. To prevent this performance degradation, the number of retransmissions must be kept as low as possible. Therefore, we propose a scheme that applies a technology called “forward error correction” (FEC) to the entire TCP operation in order to improve throughput. Since simply applying FEC might not work effectively, three functions were devised, namely, controlling the redundancy level and transmission rate, suppressing the return of duplicate ACKs, and interleaving redundant packets. The effectiveness of the proposed scheme was demonstrated by simulation evaluations in high-latency environments.
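The first of the three functions, redundancy-level control, can be sketched as a sender that tracks the recently observed loss rate and sizes the redundancy of each FEC block accordingly. The EWMA estimator and the sizing rule below are illustrative choices, not the paper's exact algorithm.

```python
# Hedged sketch of adaptive redundancy-level control for TCP-coupled FEC.
from math import ceil

class RedundancyController:
    def __init__(self, block_size: int = 16, alpha: float = 0.2, margin: float = 1.5):
        self.block_size = block_size   # data packets per FEC block
        self.alpha = alpha             # EWMA smoothing factor
        self.margin = margin           # safety factor on the loss estimate
        self.loss_est = 0.0

    def observe(self, lost: int, sent: int) -> None:
        """Update the smoothed loss-rate estimate from the latest block's ACKs."""
        sample = lost / sent
        self.loss_est = (1 - self.alpha) * self.loss_est + self.alpha * sample

    def redundancy(self) -> int:
        """Redundant packets to append so expected losses are covered with margin."""
        return ceil(self.margin * self.loss_est * self.block_size)

ctl = RedundancyController()
ctl.observe(lost=2, sent=16)   # 12.5% loss observed in the last block
r = ctl.redundancy()
```

Scaling redundancy with the loss estimate rather than fixing it avoids wasting bandwidth on clean paths while still preempting most retransmissions, which keeps the recovery work off TCP's RTT-bound loop on lossy ones.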
Smart network caches : localized content and application negotiated recovery mechanisms for multicast media distribution
Thesis (Ph.D.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998. Includes bibliographical references (p. 133-138). By Roger George Kermode.
AdamRTP: Adaptive multi-flow real-time multimedia transport protocol for Wireless Sensor Networks
Real-time multimedia applications are time sensitive and require extra resources from the network, e.g. large bandwidth and big memory. However, Wireless Sensor Networks (WSNs) suffer from limited resources such as computational, storage, and bandwidth capabilities. Therefore, sending real-time multimedia applications over WSNs can be very challenging. For this reason, we propose an Adaptive Multi-flow Real-time Multimedia Transport Protocol (AdamRTP) that has the ability to ease the process of transmitting real-time multimedia over WSNs by splitting the multimedia source stream into smaller independent flows using an MDC-aware encoder, then sending each flow to the destination over joint/disjoint paths. AdamRTP uses dynamic adaptation techniques, e.g. number-of-flows and rate adaptation. Simulation experiments demonstrate that AdamRTP enhances the Quality of Service (QoS) of transmission. We also show that, in an ideal WSN, using multiple flows consumes less power than using a single flow and extends the lifetime of the network.
ENSURE: A Time Sensitive Transport Protocol to Achieve Reliability Over Wireless in Petrochemical Plants
As society becomes more reliant on the resources extracted in petroleum refinement, the production demand for petrochemical plants increases. A key element is producing efficiently while maintaining safety through constant monitoring of equipment feedback. Currently, temperature and flow sensors are deployed at various points of production, and 10/100 Ethernet cable is installed to connect them to a master control unit. This comes at a great monetary cost, not only at the time of implementation but also when repairs are required. The capability to provide plant-wide wireless networks would decrease both investment cost and the downtime needed for repairs. However, the current state of wireless networks does not provide any guarantee of reliability, which is critical to the industry. When factoring in the need for real-time information, network reliability further decreases. This work presents the design and development of a series of transport layer protocols (coined ENSURE) to provide time-sensitive reliability. More specifically, three versions were developed to meet the specific needs of the data being sent. ENSURE 1.0 addresses reliability, 2.0 enforces a time limit, and the final version, 3.0, provides a balance of the two. A network engineer can set each specific area of the plant to use a different version of ENSURE based on the network performance needs of the data it produces. The end result is a plant-wide wireless network that performs in a timely and reliable fashion.
Mixed streaming of video over wireless networks
In recent years, transmission of video over the Internet has become an important application. As wireless networks are becoming increasingly popular, it is expected that video will be an important application over wireless networks as well. Unlike wired networks, wireless networks have high data loss rates. Streaming video in the presence of high data loss can be a challenge because it results in errors in the video. Video applications produce large amounts of data that need to be compressed for efficient storage and transmission. Video encoders compress data into dependent frames and independent frames. During transmission, the compressed video may lose some data. Depending on where the packet loss occurs in the video, the error can propagate for a long time. If the error occurs on a reference frame at the beginning of the video, all the frames that depend on the reference frame will not be decoded successfully. This thesis presents the concept of mixed streaming, which reduces the impact of video propagation errors in error-prone networks. Mixed streaming delivers a video file using two levels of reliability: reliable and unreliable. This allows sensitive parts of the video to be delivered reliably while less sensitive areas of the video are transmitted unreliably. Experiments are conducted that study the behavior of mixed streaming over error-prone wireless networks. Results show that mixed streaming makes it possible to reduce the impact of errors by making sure that errors on reference frames are corrected. Correcting errors on reference frames limits the time for which errors can propagate, thereby improving the video quality. Results also show that the delay cost associated with the mixed streaming approach is reasonable for fairly high packet loss rates.
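The two-level reliability split can be sketched by partitioning a group of pictures by frame type: reference (I) frames, whose loss would propagate, go over the reliable transport, while dependent (P/B) frames go over the unreliable one. The frame tagging and the I-versus-P/B split are illustrative assumptions about how such a classifier might look, not the thesis's exact partitioning rule.

```python
# Minimal sketch of mixed streaming's sender-side split by frame sensitivity.

def classify(frames):
    """Partition frames into reliably and unreliably transported sets."""
    reliable, unreliable = [], []
    for frame in frames:
        (reliable if frame["type"] == "I" else unreliable).append(frame)
    return reliable, unreliable

gop = [
    {"id": 0, "type": "I"},  # reference frame: a loss here would propagate
    {"id": 1, "type": "P"},
    {"id": 2, "type": "B"},
    {"id": 3, "type": "P"},
]
reliable, unreliable = classify(gop)
# `reliable` would go over a TCP-like transport, `unreliable` over a UDP-like one.
```

Since only the small reliable set incurs retransmission delay, errors on reference frames are always repaired while the bulk of the stream keeps the low latency of unreliable delivery.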