
    Adaptive delay-constrained internet media transport

    Reliable transport layer Internet protocols do not satisfy the requirements of packetized, real-time multimedia streams. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm that supports an application-specific level of reliability under a strict delay constraint. The paradigm is implemented in a new protocol design -- the Predictably Reliable Real-time Transport protocol (PRRT). To predictably achieve the desired level of reliability, proactive and reactive error control must be optimized under the application's delay constraint. Predictably reliable error control therefore relies on stochastic modeling of the protocol response to the modeled packet loss behavior of the network path. The result of the joint modeling is periodically evaluated by a reliability control policy that validates the protocol configuration under the application constraints and under consideration of the available network bandwidth. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality, audio-visual streaming applications.
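
    To illustrate the kind of model such a reliability control policy evaluates, the sketch below computes the residual loss of a hypothetical (n, k) block-code configuration under independent packet losses and checks it against a reliability target and a delay budget. The parameter names, the independence assumption, and the candidate search are illustrative only and are not taken from PRRT itself.

```python
from math import comb

def residual_loss(n: int, k: int, p: float) -> float:
    """Probability that an (n, k) block cannot be decoded under independent
    packet losses with probability p (at least k of the n packets must arrive)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))

def feasible(n: int, k: int, p: float, pkt_interval_ms: float,
             delay_budget_ms: float, target_loss: float) -> bool:
    """Check a candidate configuration against delay and reliability targets."""
    coding_delay_ms = (n - 1) * pkt_interval_ms   # time to assemble a full block
    return coding_delay_ms <= delay_budget_ms and residual_loss(n, k, p) <= target_loss

# Example: pick the lowest-overhead configuration that satisfies both constraints.
candidates = [(k + r, k) for k in (5, 10, 20) for r in (1, 2, 3, 4)]
ok = [(n, k) for (n, k) in candidates
      if feasible(n, k, p=0.02, pkt_interval_ms=5.0,
                  delay_budget_ms=100.0, target_loss=1e-4)]
print(min(ok, key=lambda nk: nk[0] / nk[1]) if ok else "no feasible configuration")
```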

    Rate Control State-of-the-art Survey

    The majority of Internet traffic uses the Transmission Control Protocol (TCP) as the transport level protocol. It provides a reliable, ordered byte stream for applications. However, applications such as live video streaming place an emphasis on timeliness over reliability, and a smooth sending rate can be preferable to sharp changes in the sending rate. For these applications TCP is not necessarily suitable. Rate control attempts to address the demands of such applications. An important design feature in all rate control mechanisms is TCP friendliness: we should not negatively impact TCP performance, since TCP is still the dominant protocol. Rate control mechanisms fall into two classes: window-based mechanisms and rate-based mechanisms. Window-based mechanisms increase their sending rate after a successful transfer of a window of packets, similar to TCP, and typically decrease their sending rate sharply after a packet loss. Rate-based solutions control their sending rate in some other way. A large subset of rate-based solutions are equation-based: they use a control equation that provides an allowed sending rate. Typically these rate-based solutions react more slowly to both packet losses and increases in available bandwidth, making their sending rate smoother than that of window-based solutions. This report contains a survey of rate control mechanisms and a discussion of their relative strengths and weaknesses. A section is dedicated to enhancements for wireless environments. Another topic in the report is bandwidth estimation, which is divided into capacity estimation and available bandwidth estimation. We describe techniques that enable the calculation of a fair sending rate that can be used to create novel rate control mechanisms.
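
    As a concrete example of such a control equation, the sketch below computes an allowed sending rate from the TCP throughput equation used by equation-based mechanisms such as TFRC (RFC 5348). The approximation t_RTO = 4 * RTT and the example parameters are simplifying assumptions, not a normative implementation.

```python
from math import sqrt

def tcp_friendly_rate(s: float, rtt: float, p: float, b: int = 1) -> float:
    """Allowed sending rate in bytes/s from the TCP throughput equation.

    s   -- segment size in bytes
    rtt -- round-trip time in seconds
    p   -- loss event rate (0 < p <= 1)
    b   -- packets acknowledged per ACK (1 is a common choice)
    """
    t_rto = 4 * rtt  # retransmission timeout approximated as 4 * RTT
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate.
print(f"{tcp_friendly_rate(1460, 0.100, 0.01) / 1e3:.1f} kB/s allowed")
```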

    Dynamic adaptive video streaming with minimal buffer sizes

    Recently, adaptive streaming has been widely adopted in video streaming services to improve the Quality-of-Experience (QoE) of video delivery over the Internet. However, state-of-the-art bitrate adaptation achieves satisfactory performance only with extensive buffering of several tens of seconds. This leads to high playback latency in video delivery, which is undesirable especially in the context of live content with a low upper bound on the latency. Therefore, this thesis aims at pushing the application of adaptive streaming to its limit with respect to the buffer size, which is the dominant factor of the streaming latency. In this work, we first address the minimum buffer size required for adaptive streaming, which provides guidelines for determining a reasonably low latency for streaming systems. We then tackle the fundamental challenge of achieving such low-latency streaming by developing a novel adaptation algorithm that stabilizes buffer dynamics despite a small buffer size. We also present further improvements by designing a novel adaptation architecture with low-delay feedback for the bitrate selection and by optimizing the underlying transport layer to offer efficient real-time streaming. Experimental evaluations demonstrate that our approach achieves superior QoE in adaptive video streaming, especially in the particularly challenging case of low-latency streaming.
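
    A minimal sketch of buffer-aware bitrate selection, purely illustrative and not the algorithm developed in the thesis: pick the highest bitrate whose estimated download time keeps a small playback buffer above a target level. All identifiers and thresholds below are hypothetical.

```python
def select_bitrate(bitrates_kbps, throughput_kbps, buffer_s, segment_s,
                   target_buffer_s=2.0, safety=0.9):
    """Pick the highest bitrate that keeps a small playback buffer stable.

    Rule of thumb: downloading the next segment at the chosen bitrate must
    not push the projected buffer level below the target.
    """
    for rate in sorted(bitrates_kbps, reverse=True):
        download_s = segment_s * rate / (safety * throughput_kbps)
        # Buffer drains during the download and gains one segment afterwards.
        projected = buffer_s - download_s + segment_s
        if projected >= target_buffer_s:
            return rate
    return min(bitrates_kbps)  # fall back to the lowest bitrate

# Example: 1-second segments, ~3 Mbit/s estimated throughput, 2.5 s buffer.
ladder = [500, 1200, 2500, 4500, 8000]  # kbit/s
print(select_bitrate(ladder, throughput_kbps=3000, buffer_s=2.5, segment_s=1.0))
```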

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path connecting its nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even though the underlying network is richly connected and has redundant multiple paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is resource pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely, the control plane problem of how to compute and select routes and the data plane problem of how to split a flow over the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
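
    For the data plane problem of splitting traffic over the computed paths, one common building block is weighted, flow-consistent hashing: every packet of a flow takes the same path (avoiding reordering), while aggregate traffic follows the weights chosen by the control plane. The sketch below is a simplified illustration under these assumptions; the path names and flow identifiers are hypothetical.

```python
import hashlib
from typing import Sequence

def pick_path(flow_id: str, paths: Sequence[str], weights: Sequence[float]) -> str:
    """Map a flow identifier onto one of several paths in proportion to the
    path weights, so that all packets of a flow stay on the same path."""
    h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    point = (h % 10**6) / 10**6 * sum(weights)   # pseudo-uniform point in [0, sum(weights))
    acc = 0.0
    for path, w in zip(paths, weights):
        acc += w
        if point < acc:
            return path
    return paths[-1]

# Example: split flows roughly 3:1 across two computed paths.
paths, weights = ["path_a", "path_b"], [3.0, 1.0]
flows = [f"10.0.0.{i}:443->192.0.2.1:51000" for i in range(8)]
for f in flows:
    print(f, "->", pick_path(f, paths, weights))
```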

    Managing Shared Access to a Spectrum Commons

    The open access, unlicensed, or spectrum commons approach to managing shared access to RF spectrum offers many attractive benefits, especially when implemented in conjunction with and as a complement to a regime of market-based, flexible use, tradable licensed spectrum ([Benkler02], [Lehr04], [Werbach03]). However, as a number of critics have pointed out, implementing the unlicensed model poses difficult challenges that have not yet been well addressed by commons advocates ([Benjam03], [Faulhab05], [Goodman04], [Hazlett01]). A successful spectrum commons will not be unregulated, but it also need not be command & control by another name. This paper seeks to address some of the implementation challenges associated with managing a spectrum commons. We focus on the minimal set of features that we believe a suitable management protocol, etiquette, or framework for a spectrum commons will need to incorporate: (1) no transmit-only devices; (2) power restrictions; (3) common channel signaling; (4) a mechanism for handling congestion and allocating resources among users/uses in times of congestion; (5) a mechanism to support enforcement (e.g., established procedures to verify that a protocol is in conformance); (6) a mechanism to support reversibility of policy; and (7) protection for privacy and security. We explain why each is necessary, examine their implications for current policy, and suggest ways in which they might be implemented. We present a framework that suggests a set of design principles for the protocols that will govern a successful commons management regime. Our design rules lead us to conclude that the appropriate protocols for a commons will need to be more liquid ([Reed05]) than in the past: (1) market-based instead of command & control; (2) decentralized/distributed; and (3) adaptive and flexible (anonymous, distributed, decentralized, and locally responsive).

    Equation-Based Congestion Control for Unicast and Multicast Data Streams

    We believe that the emergence of mechanisms for relatively smooth congestion control of unicast and multicast traffic can play a key role in preventing the degradation of end-to-end congestion control in the public Internet, by providing a viable alternative for multimedia flows that would otherwise be tempted to avoid end-to-end congestion control altogether. The design of good congestion control mechanisms is a hard problem, even more so for multicast environments, where scalability issues are much more of a concern than for unicast. In this dissertation, equation-based congestion control is presented as an alternative form of congestion control to the well-known TCP protocol. We focus on areas of equation-based congestion control that were not yet well understood and for which no adequate solutions existed. Starting from a unicast congestion control mechanism that, in contrast to TCP, provides smooth rate changes, we extend equation-based congestion control in several ways. We investigate how it can work together with applications that can only operate in a very limited region of available bandwidth and whose rate can thus not be adapted to the network conditions in the usual way. Such a congestion control mechanism can also complement conventional equation-based congestion control in regimes where the available bandwidth is too low for further rate reduction. When extending unicast congestion control to multicast, it is of paramount importance to ensure that changes in the network conditions anywhere in the multicast tree are reported back to the sender as quickly as possible, to allow the sender to adjust the rate accordingly. A scalable feedback mechanism that allows timely congestion feedback in the face of potentially very large receiver sets is one of the contributions of this dissertation. Other components of a congestion control protocol, such as the rate increase/decrease policy or the slow-start mechanism, also need to be adjusted for use in a multicast environment. The resulting multicast congestion control protocol was implemented in a simulation environment for extensive protocol testing and turned into a library for use in real-world applications. In addition, a simple video transmission tool was built for test purposes that uses this congestion control library.
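
    The scalable feedback idea can be illustrated with biased, exponentially weighted random timers: each receiver delays its report so that even very large receiver sets produce only a handful of reports per feedback round, preferably from the most congested receivers, while the rest suppress their feedback once an earlier report is echoed. The weighting and the simulation below are a simplified sketch under these assumptions, not the exact mechanism of the dissertation.

```python
import math
import random

def feedback_delay(t_round: float, n_estimate: int, my_rate: float,
                   max_rate: float, rng: random.Random) -> float:
    """Randomized feedback timer within one feedback round of length t_round.

    The log term keeps the expected number of early reports small even for
    large receiver sets; the rate bias makes more congested receivers
    (lower calculated rate) report earlier. The exact weighting is illustrative.
    """
    x = rng.random() or 1e-9                       # x in (0, 1]
    base = t_round * max(0.0, 1.0 + math.log(x) / math.log(n_estimate))
    bias = my_rate / max_rate                      # lower rate -> earlier feedback
    return base * bias

# Example: 10,000 receivers, one feedback round of 1 second.
rng = random.Random(42)
rates = [rng.uniform(0.5e6, 5e6) for _ in range(10_000)]   # calculated rates (bit/s)
delays = sorted((feedback_delay(1.0, 10_000, r, 5e6, rng), r) for r in rates)

# The sender only needs the earliest (worst-path) report; receivers that hear
# its echo with a rate at or below their own suppress their own feedback.
first_delay, first_rate = delays[0]
suppressed = sum(1 for d, r in delays[1:] if r >= first_rate)
print(f"first report after {first_delay * 1000:.1f} ms at "
      f"{first_rate / 1e6:.2f} Mbit/s, {suppressed} receivers suppressed")
```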