8 research outputs found

    A two-level Markov model for packet loss in UDP/IP-based real-time video applications targeting residential users

    The packet loss characteristics of Internet paths that include residential broadband links are not well understood, and there are no good models for their behaviour. This complicates the design of real-time video applications targeting home users, since it is difficult to choose appropriate error correction and concealment algorithms without a good model for the types of loss observed. Using measurements of residential broadband networks in the UK and Finland, we show that existing models for packet loss, such as the Gilbert model and simple hidden Markov models, do not effectively model the loss patterns seen in this environment. We present a new two-level Markov model for packet loss that can more accurately describe the characteristics of these links, and quantify the effectiveness of this model. We demonstrate that our new packet loss model allows for improved application design, by using it to model the performance of forward error correction on such links.
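
    For context, the Gilbert model the authors compare against is a two-state Markov chain (Good/Bad) in which losses occur only in the Bad state. Below is a minimal, illustrative sketch of such a loss simulator; the function name and parameter values are assumptions for illustration, not the paper's fitted values, and the paper's own two-level model adds a layer on top of this kind of chain.

```python
import random

def gilbert_loss_trace(n_packets, p_gb, p_bg, loss_in_bad=1.0, seed=0):
    """Per-packet loss trace from the two-state Gilbert model.

    p_gb: probability of moving Good -> Bad at each packet.
    p_bg: probability of moving Bad -> Good at each packet.
    loss_in_bad: loss probability in the Bad state (1.0 = Gilbert;
                 < 1.0 gives the Gilbert-Elliott variant).
    """
    rng = random.Random(seed)
    in_bad = False
    trace = []
    for _ in range(n_packets):
        # Advance the chain, then draw this packet's loss outcome.
        if in_bad and rng.random() < p_bg:
            in_bad = False
        elif not in_bad and rng.random() < p_gb:
            in_bad = True
        trace.append(in_bad and rng.random() < loss_in_bad)
    return trace

# Illustrative parameters: ~1% mean loss, clustered into short bursts
# (stationary loss ratio is p_gb / (p_gb + p_bg)).
trace = gilbert_loss_trace(100_000, p_gb=0.002, p_bg=0.2)
print(f"loss ratio: {sum(trace) / len(trace):.4f}")
```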

    On the benefits of applying experimental design to improve multipath TCP


    Network coding for computer networking

    Conventional communication networks route data packets in a store-and-forward mode: a router buffers received packets and forwards them intact towards their intended destination. Network Coding (NC) generalises this method by allowing the router to perform algebraic operations on the packets before forwarding them. The purpose of NC is to improve network performance so that it achieves its maximum capacity, also known as the max-flow min-cut bound. NC is well established in the field of information theory; however, its practical implementation in real-world networks remains largely unexplored. In this thesis, new implementations of NC are put forward. The effect of NC on flow error-control protocols and queuing over computer networks is investigated through a mathematical and simulation framework. One goal of this investigation is to understand how NC can reduce the number of packets required to acknowledge the reception of those sent over the network when error-control schemes are employed. Another is to improve queuing stability by reducing the number of packets required to convey a given set of information. A custom-built simulator based on SimEvents® has been developed to model several scenarios within this approach.

    The work in this thesis is divided into two key parts. The objective of the first part is to study the performance of communication networks employing error-control protocols when NC is adopted. In particular, two main Automatic Repeat reQuest (ARQ) schemes are considered, namely Stop-and-Wait (SW) and Selective Repeat (SR) ARQ. Results show that in unicast point-to-point communication, the proposed NC scheme offers a throughput increase over traditional SW ARQ of between 2.5% and 50.5% at each link, with negligible decoding delay. Additionally, in a Butterfly network, SR ARQ employing NC achieves a throughput gain of between 22% and 44% over traditional SR ARQ when the number of incoming links to the intermediate node varies between 2 and 5. Moreover, in an extended Butterfly network, NC offers a throughput increase of up to 48% in an error-free scenario and 50% in the presence of errors.

    Despite the extensive research on synchronous NC performance in various fields, little has been said about its queuing behaviour. One assumption is that packets are served following a Poisson distribution: the packets from different streams are coded prior to being served and then exit through a single stream. This study determines the arrival distribution that the coded packets follow at the serving node, which in general leads to the study of queuing systems of type G/M/1. Hence, the objective of the second part of this study is twofold: to determine the distribution of the coded packets, and to estimate the waiting time coded packets face before being fully served. Results show that NC offers a new route to queuing stability, as evidenced by the short time coded packets spend in the intermediate node's queue before service. This work is further enhanced by studying server utilization in the traditional routing and NC scenarios, and an NC-based M/M/1 queue with finite capacity K is analysed to compare the packet loss probability of the two. Based on these results, the use of NC in error-prone and long-propagation-delay networks is recommended. Additionally, since the work provides an insightful prediction of the queuing behaviour of such networks, employing synchronous NC can help stabilise systems with packet-controlled sources and limited input buffers.
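
    To make the "algebraic operations" concrete, the sketch below shows the simplest coding operation used in Butterfly-style topologies: the intermediate node XORs the packets arriving on its incoming links, and each sink recovers the missing packet by XORing the coded packet with the ones it already holds. The function names and toy packets are illustrative assumptions; practical NC schemes typically generalise this to random linear coding over a finite field.

```python
def xor_encode(packets):
    """Network-code packets at an intermediate node by bytewise XOR.
    All packets are assumed padded to equal length."""
    coded = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            coded[i] ^= byte
    return bytes(coded)

def xor_decode(coded, known_packets):
    """Recover the one unknown packet from the coded packet plus the
    packets the receiver already holds (XOR is its own inverse)."""
    return xor_encode([coded, *known_packets])

# Classic Butterfly: source packets a and b are destined for two sinks;
# the bottleneck link carries a XOR b instead of a and b in turn.
a, b = b"AAAA", b"BBBB"
coded = xor_encode([a, b])
assert xor_decode(coded, [a]) == b   # sink that already has a recovers b
assert xor_decode(coded, [b]) == a   # sink that already has b recovers a
```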

    Rigorous statistical analysis of internet loss measurements

    Loss measurements are widely used in today's networks. There are existing standards and commercial products to perform these measurements; the missing element is a rigorous statistical methodology for their analysis. Indeed, most existing tools ignore the correlation between packet losses and severely underestimate the errors in the measured loss ratios. In this paper, we present a rigorous technique for analyzing performance measurements, in particular for estimating confidence intervals of packet loss measurements. The task is challenging because Internet packet loss ratios are typically small and the packet loss process is bursty. Our approach, SAIL, is motivated by some simple observations about the mechanism of packet losses. Packet losses occur when the buffer in a switch or router fills, when there are major routing instabilities, or when hosts are overloaded, and so we expect packet loss to proceed in episodes of loss interspersed with periods of successful packet transmission. This can be modeled as a simple on/off process and, in fact, empirical measurements suggest that an alternating renewal process is a reasonable approximation to the real underlying loss process. We use this structure to build a hidden semi-Markov model (HSMM) of the underlying loss process and, from this, to estimate both loss ratios and confidence intervals on these loss ratios. We use both simulations and a set of more than 18,000 hours of real Internet measurements (between dedicated measurement hosts, PlanetLab hosts, Web and DNS servers) to cross-validate our estimates and show that they are better than any current alternative.
    Hung X. Nguyen and Matthew Roughan
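
    To make the on/off structure concrete, here is a minimal sketch of an alternating renewal loss process: runs of successful delivery alternate with loss episodes, each run length drawn independently. The geometric episode lengths and all parameter values are illustrative assumptions, not the paper's fitted distributions (SAIL fits a hidden semi-Markov model to real traces).

```python
import math
import random

def geometric(rng, mean_len):
    # Run length from a geometric distribution on {1, 2, ...} with the
    # given mean (success probability p = 1/mean_len), via inverse transform.
    p = 1.0 / mean_len
    return 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

def on_off_loss_trace(n_packets, mean_good=500, mean_bad=5, seed=1):
    """Bursty loss as an alternating renewal process: 'good' episodes of
    successful delivery alternate with episodes of consecutive loss."""
    rng = random.Random(seed)
    trace, losing = [], False
    while len(trace) < n_packets:
        run = geometric(rng, mean_bad if losing else mean_good)
        trace.extend([losing] * run)
        losing = not losing
    return trace[:n_packets]

# Expected loss ratio is mean_bad / (mean_good + mean_bad), here ~1%.
trace = on_off_loss_trace(100_000)
print(f"loss ratio: {sum(trace) / len(trace):.4f}")
```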

    Rigorous statistical analysis of internet loss measurements

    In this paper we present a rigorous technique for estimating confidence intervals of packet loss measurements. Our approach is motivated by the simple observation that the loss process can be modelled as an alternating renewal process. We use this structure to build a Hidden Semi-Markov Model (HSMM) for the measurement process, and from this estimate both loss rates and their confidence intervals. We use both simulations and a set of more than 18,000 hours of real Internet measurements (between dedicated measurement hosts, PlanetLab hosts, web and DNS servers) to cross-validate our estimates, and show that they are significantly more accurate than any current alternative.
    Hung X. Nguyen and Matthew Roughan
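
    As a contrast with the paper's approach, the sketch below shows the naive interval that tools ignoring loss correlation effectively compute: a Wald confidence interval treating per-packet losses as i.i.d. Bernoulli draws. Because bursty losses are positively correlated, the effective sample size is smaller than the packet count, so this interval is too narrow. The function name is an assumption for illustration, and this is emphatically not the paper's HSMM estimator.

```python
import math

def naive_loss_ci(trace, z=1.96):
    """Naive 95% Wald interval for the loss ratio, treating per-packet
    losses as i.i.d. Bernoulli; undercovers when losses arrive in bursts."""
    n = len(trace)
    p = sum(trace) / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# E.g. on a bursty trace such as the on/off simulation sketched above:
# lo, hi = naive_loss_ci(trace)
```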