
    Adaptive Multicast of Multi-Layered Video: Rate-Based and Credit-Based Approaches

    Network architectures that can efficiently transport high quality, multicast video are rapidly becoming a basic requirement of emerging multimedia applications. The main problem complicating multicast video transport is variation in network bandwidth constraints. An attractive solution to this problem is to use an adaptive, multi-layered video encoding mechanism. In this paper, we consider two such mechanisms for the support of video multicast; one is a rate-based mechanism that relies on explicit rate congestion feedback from the network, and the other is a credit-based mechanism that relies on hop-by-hop congestion feedback. The responsiveness, bandwidth utilization, scalability and fairness of the two mechanisms are evaluated through simulations. Results suggest that while the two mechanisms exhibit performance trade-offs, both are capable of providing a high quality video service in the presence of varying bandwidth constraints.
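    A minimal sketch of how a rate-based scheme might map explicit rate feedback onto a cumulative set of video layers; the layer rates, function name, and kb/s units are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: choosing how many cumulative video layers fit within an
# explicit rate feedback value from the network (rate-based scheme).
# Layer rates and the function signature are assumptions, not from the paper.

def select_layers(explicit_rate_kbps, layer_rates_kbps):
    """Return the number of cumulative layers that fit within the advertised rate."""
    total = 0.0
    layers = 0
    for rate in layer_rates_kbps:          # base layer first, then enhancements
        if total + rate > explicit_rate_kbps:
            break
        total += rate
        layers += 1
    return layers

# Example: base layer at 256 kb/s plus two enhancement layers.
print(select_layers(900, [256, 384, 512]))   # -> 2 layers fit within 900 kb/s
```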

    A Framework for Controlling Quality of Sessions in Multimedia Systems

    Collaborative multimedia systems demand overall session quality control beyond the level of quality of service (QoS) pertaining to individual connections in isolation of others. At every instant in time, the quality of the session depends on the actual QoS offered by the system to each of the application streams, as well as on the relative priorities of these streams according to the application semantics. We introduce a framework for achieving QoSess control and address the architectural issues involved in designing a QoSess control layer that realizes the proposed framework. In addition, we detail our contributions for two main components of the QoSess control layer. The first component is a scalable and robust feedback protocol, which allows for determining the worst case state among a group of receivers of a stream. This mechanism is used for controlling the transmission rates of multimedia sources in both cases of layered and single-rate multicast streams. The second component is a set of inter-stream adaptation algorithms that dynamically control the bandwidth shares of the streams belonging to a session. Additionally, in order to ensure stability and responsiveness in the inter-stream adaptation process, several measures are taken, including devising a domain rate control protocol. The performance of the proposed mechanisms is analyzed and their advantages are demonstrated by simulation and experimental results
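    As a rough illustration of the first component's goal, the sketch below aggregates per-receiver reports and selects a worst-case report to drive the source rate; the report fields and the choice of "worst" criterion are assumptions for illustration, not the paper's actual protocol.

```python
# Minimal sketch (assumed data model, not the paper's protocol): determining the
# worst-case state among a group of receivers from their periodic reports, as a
# basis for controlling the source rate of a multicast stream.

from dataclasses import dataclass

@dataclass
class ReceiverReport:
    receiver_id: str
    loss_fraction: float     # fraction of packets lost in the last interval
    rtt_ms: float            # round-trip time estimate

def worst_case(reports):
    """Pick the report that should drive the sender's rate decision.

    Here "worst" is taken to be the highest loss fraction, with RTT as a
    tie-breaker; the actual criterion in the QoSess framework may differ.
    """
    return max(reports, key=lambda r: (r.loss_fraction, r.rtt_ms))

reports = [
    ReceiverReport("r1", 0.01, 40.0),
    ReceiverReport("r2", 0.08, 120.0),
    ReceiverReport("r3", 0.08, 90.0),
]
print(worst_case(reports).receiver_id)   # -> "r2"
```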

    Multimedia congestion control: circuit breakers for unicast RTP sessions

    The Real-time Transport Protocol (RTP) is widely used in telephony, video conferencing, and telepresence applications. Such applications are often run on best-effort UDP/IP networks. If congestion control is not implemented in these applications, then network congestion can lead to uncontrolled packet loss and a resulting deterioration of the user's multimedia experience. The congestion control algorithm acts as a safety measure by stopping RTP flows from using excessive resources and protecting the network from overload. At the time of this writing, however, while there are several proprietary solutions, there is no standard algorithm for congestion control of interactive RTP flows. This document does not propose a congestion control algorithm. It instead defines a minimal set of RTP circuit breakers: conditions under which an RTP sender needs to stop transmitting media data to protect the network from excessive congestion. It is expected that, in the absence of long-lived excessive congestion, RTP applications running on best-effort IP networks will be able to operate without triggering these circuit breakers. To avoid triggering the RTP circuit breaker, any Standards Track congestion control algorithms defined for RTP will need to operate within the envelope set by these RTP circuit breaker algorithms
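    For intuition, here is an illustrative, non-normative check in the spirit of a congestion circuit breaker: compare the media sending rate against a simplified TCP-like throughput estimate computed from RTCP-reported loss and RTT. The multiplier, packet size, and formula usage are assumptions of this sketch and do not reproduce the normative conditions in the specification.

```python
# Illustrative sketch only, not the normative RFC logic: a congestion
# "circuit breaker" style check comparing the media sending rate against a
# TCP-like throughput estimate derived from RTCP-reported loss and RTT.
# The 10x multiplier and packet size are assumptions for the example.

from math import sqrt

def tcp_like_rate_bps(packet_size_bytes, rtt_s, loss_fraction):
    """Simplified TCP throughput estimate: s / (R * sqrt(2p/3)), in bits/s."""
    if loss_fraction <= 0.0:
        return float("inf")                  # no observed loss, no constraint
    return (packet_size_bytes * 8) / (rtt_s * sqrt(2.0 * loss_fraction / 3.0))

def circuit_breaker_triggered(send_rate_bps, packet_size_bytes, rtt_s,
                              loss_fraction, multiplier=10.0):
    """True if the sender is far above what a TCP-like flow would achieve."""
    return send_rate_bps > multiplier * tcp_like_rate_bps(
        packet_size_bytes, rtt_s, loss_fraction)

# Example: 2 Mb/s video, 1200-byte packets, 100 ms RTT, 5% loss.
print(circuit_breaker_triggered(2_000_000, 1200, 0.1, 0.05))
# -> False here: 2 Mb/s is within 10x of the TCP-like estimate (~0.5 Mb/s)
```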

    A Survey on TCP-Friendly Congestion Control (extended version)

    New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner, i.e., they do not share the available bandwidth fairly with applications built on TCP, such as web browsers, FTP clients, or email clients. The Internet community strongly fears that the current evolution could lead to a congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to co-existent TCP flows. In this article, we present a survey of current approaches to TCP-friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented
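    The notion of TCP-friendliness is usually made concrete through the TCP response function of Padhye et al., which many of the surveyed protocols use as their target rate. A small sketch follows, with b = 1 and t_RTO = 4*RTT as commonly assumed defaults; the parameter choices are assumptions of this sketch.

```python
# The TCP response function commonly used to define "TCP-friendliness"
# (Padhye et al.), as adopted by equation-based schemes such as TFRC.
# Parameter choices (b = 1, t_RTO = 4*RTT) follow common practice and are
# assumptions of this sketch.

from math import sqrt

def tcp_friendly_rate_bytes_per_s(s, rtt, p, b=1, t_rto=None):
    """Throughput X = s / (R*sqrt(2bp/3) + t_RTO*(3*sqrt(3bp/8))*p*(1+32p^2)).

    s: packet size in bytes, rtt: round-trip time in seconds,
    p: loss event probability, b: packets acknowledged per ACK.
    """
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

# Example: 1460-byte packets, 100 ms RTT, 1% loss events.
print(tcp_friendly_rate_bytes_per_s(1460, 0.1, 0.01))   # ~1.6e5 bytes/s (~1.3 Mb/s)
```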

    Equation-Based Congestion Control for Unicast Applications: the Extended Version

    This paper proposes a mechanism for equation-based congestion control for unicast traffic. Most best-effort traffic in the current Internet is well-served by the dominant transport protocol TCP. However, traffic such as best-effort unicast streaming multimedia could find use for a TCP-friendly congestion control mechanism that refrains from reducing the sending rate in half in response to a single packet drop. With our mechanism, the sender explicitly adjusts its sending rate as a function of the measured rate of loss events, where a loss event consists of one or more packets dropped within a single round-trip time. We use both simulations and experiments over the Internet to explore performance. Equation-based congestion control is also a promising avenue of development for congestion control of multicast traffic, and so an additional reason for this work is to lay a sound basis for the later development of multicast congestion control
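    The sketch below illustrates how a TFRC-style sender can turn a history of loss intervals into the loss event rate that feeds the throughput equation; the eight-interval weighting follows the commonly cited TFRC choice, and the helper name is an assumption of this sketch.

```python
# Sketch of the loss-event-rate calculation used in equation-based (TFRC-style)
# congestion control: the loss event rate is the inverse of a weighted average
# of the most recent loss intervals (packets between loss events). The weights
# follow the commonly cited TFRC choice for eight intervals.

WEIGHTS = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def loss_event_rate(loss_intervals):
    """loss_intervals[0] is the most recent interval, measured in packets."""
    intervals = loss_intervals[:len(WEIGHTS)]
    weights = WEIGHTS[:len(intervals)]
    avg = sum(w * i for w, i in zip(weights, intervals)) / sum(weights)
    return 1.0 / avg if avg > 0 else 1.0

# Example: recent loss intervals of roughly 100 packets give p of about 0.01.
print(loss_event_rate([90, 110, 100, 95, 105, 100, 120, 80]))
```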

    Equation-Based Congestion Control for Unicast and Multicast Data Streams

    We believe that the emergence of congestion control mechanisms for relatively-smooth congestion control for unicast and multicast traffic can play a key role in preventing the degradation of end-to-end congestion control in the public Internet, by providing a viable alternative for multimedia flows that would otherwise be tempted to avoid end-to-end congestion control altogether. The design of good congestion control mechanisms is a hard problem, even more so for multicast environments where scalability issues are much more of a concern than for unicast. In this dissertation, equation-based congestion control is presented as an alternative form of congestion control to the well-known TCP protocol. We focus on areas of equation-based congestion control which were not yet well understood and for which no adequate solutions existed. Starting from a unicast congestion control mechanism which in contrast to TCP provides smooth rate changes, we extend equation-based congestion control in several ways. We investigate how it can work together with applications which can only operate in a very limited region of available bandwidth and whose rate can thus not be adapted to the network conditions in the usual way. Such a congestion control mechanism can also complement conventional equation-based congestion control in regimes where available bandwidth is too low for further rate reduction. When extending unicast congestion control to multicast, it is of paramount importance to ensure that changes in the network conditions anywhere in the multicast tree are reported back to the sender as quickly as possible to allow the sender to adjust the rate accordingly. A scalable feedback mechanism that allows timely congestion feedback in the face of potentially very large receiver sets is one of the contributions of this dissertation. Other components of a congestion control protocol, such as the rate increase/decrease policy and the slow-start mechanism, also need to be adjusted for use in a multicast environment. Our resulting multicast congestion control protocol was implemented in a simulation environment for extensive protocol testing and turned into a library for use in real-world applications. In addition, a simple video transmission tool was built for test purposes that uses this congestion control library
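    One widely used building block for scalable multicast feedback, sketched below under assumptions of our own (constants, suppression rule, and function names are illustrative, not the dissertation's exact mechanism), is to spread receiver reports over a feedback round with exponentially biased random timers and suppress reports once an equal-or-worse receiver has already spoken.

```python
# Hedged sketch of one common approach to scalable multicast feedback
# (exponentially biased random timers, as used in TFMCC-style protocols): each
# receiver delays its report so that, in expectation, only a few of the worst
# receivers respond per feedback round, and the rest suppress their reports.
# Constants and the suppression rule are assumptions of this sketch.

import math
import random

def feedback_delay(feedback_round_s, estimated_receivers):
    """Exponentially biased random delay within one feedback round [0, T]."""
    x = random.uniform(1.0 / estimated_receivers, 1.0)
    return feedback_round_s * (1.0 + math.log(x) / math.log(estimated_receivers))

def should_suppress(own_calculated_rate, rate_heard_in_feedback):
    """Suppress our report if someone already reported an equal or lower rate."""
    return rate_heard_in_feedback <= own_calculated_rate

# Example: 4-second feedback round, estimated group size of 10,000 receivers.
print(feedback_delay(4.0, 10_000))   # somewhere in [0, 4] seconds
```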

    Scalable reliable on-demand media streaming protocols

    This thesis considers the problem of delivering streaming media, on-demand, to potentially large numbers of concurrent clients. The problem has motivated the development in prior work of scalable protocols based on multicast or broadcast. However, previous protocols do not allow clients to efficiently: 1) recover from packet loss; 2) share bandwidth fairly with competing flows; or 3) maximize the playback quality at the client for any given client reception rate characteristics. In this work, new protocols, namely Reliable Periodic Broadcast (RPB) and Reliable Bandwidth Skimming (RBS), are developed that efficiently recover from packet loss and achieve close to the best possible server bandwidth scalability for a given set of client characteristics. To share bandwidth fairly with competing traffic such as TCP, these protocols can employ the Vegas Multicast Rate Control (VMRC) protocol proposed in this work. The VMRC protocol exhibits TCP Vegas-like behavior. In comparison to prior rate control protocols, VMRC provides less oscillatory reception rates to clients, and operates without inducing packet loss when the bottleneck link is lightly loaded. The VMRC protocol incorporates a new technique for dynamically adjusting the TCP Vegas threshold parameters based on measured characteristics of the network. This technique implements fair sharing of network resources with other types of competing flows, including widely deployed versions of TCP such as TCP Reno. This fair sharing is not possible with the previously defined static Vegas threshold parameters. The RPB protocol is extended to efficiently support quality adaptation. The Optimized Heterogeneous Periodic Broadcast (HPB) is designed to support a range of client reception rates and efficiently support static quality adaptation by allowing clients to work-ahead before beginning playback to receive a media file of the desired quality. A dynamic quality adaptation technique is developed and evaluated which allows clients to achieve more uniform playback quality given time-varying client reception rates
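    For context, the sketch below shows the TCP Vegas-style decision rule that a Vegas-like rate control builds on, comparing expected and actual throughput against alpha/beta thresholds; the static thresholds shown are placeholders and do not implement the dynamic threshold adjustment developed in this work.

```python
# Minimal sketch of the TCP Vegas-style decision rule that a Vegas-like rate
# control (such as VMRC) builds on: compare expected and actual throughput and
# adjust the rate when the difference crosses the alpha/beta thresholds.
# The fixed thresholds here are placeholders for illustration only.

def vegas_decision(cwnd_pkts, base_rtt_s, current_rtt_s, alpha=2.0, beta=4.0):
    """Return 'increase', 'decrease', or 'hold' based on the Vegas diff (in packets)."""
    expected = cwnd_pkts / base_rtt_s          # throughput with no queueing
    actual = cwnd_pkts / current_rtt_s         # observed throughput
    diff = (expected - actual) * base_rtt_s    # estimated packets queued in the network
    if diff < alpha:
        return "increase"
    if diff > beta:
        return "decrease"
    return "hold"

# Example: 20-packet window, 50 ms base RTT, 60 ms current RTT.
print(vegas_decision(20, 0.050, 0.060))        # -> "hold" (diff is about 3.3 packets)
```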

    Scaleable audio for collaborative environments

    This thesis is concerned with supporting natural audio communication in collaborative environments across the Internet. Recent experience with Collaborative Virtual Environments, for example, to support large on-line communities and highly interactive social events, suggest that in the future there will be applications in which many users speak at the same time. Such applications will generate large and dynamically changing volumes of audio traffic that can cause congestion and hence packet loss in the network and so seriously impair audio quality. This thesis reveals that no current approach to audio distribution can combine support for large numbers of simultaneous speakers with TCP-fair responsiveness to congestion. A model for audio distribution called Distributed Partial Mixing (DPM) is proposed that dynamically adapts both to varying numbers of active audio streams in collaborative environments and to congestion in the network. Each DPM component adaptively mixes subsets of its input audio streams into one or more mixed streams, which it then forwards to the other components along with any unmixed streams. DPM minimises the amount of mixing performed so that end users receive as many separate audio streams as possible within prevailing network resource constraints. This is important in order to allow maximum flexibility of audio presentation (especially spatialisation) to the end user. A distributed partial mixing prototype is realised as part of the audio service in MASSIVE-3. A series of experiments over a single network link demonstrate that DPM gracefully manages the tradeoff between preserving stable audio quality and remaining responsive to congestion and fair towards competing TCP traffic. The problem of large scale deployment of DPM over heterogeneous networks is also addressed. The thesis proposes that a shared tree of DPM servers and clients, where the nodes of the tree can perform distributed partial mixing, is an effective basis for wide area deployment. Two models for realising this in two contrasting situations are then explored in more detail: a static, centralised, subscription-based DPM service suitable for fully managed networks, and a fully distributed self-organising DPM service suitable for unmanaged networks (such as the current Internet)
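    A hypothetical sketch of the partial-mixing idea: given a budget of outgoing streams, forward as many streams unmixed as possible and mix only the remainder into a single stream. The importance ordering and fixed budget are assumptions for illustration, since the actual DPM components adapt the amount of mixing to congestion feedback.

```python
# Hypothetical sketch of partial mixing: with a budget of outgoing streams,
# forward as many input streams unmixed as possible and mix only the remainder
# into one stream. The importance ordering and fixed budget are illustrative
# assumptions; real DPM components adapt the budget to network conditions.

def partial_mix(streams, max_output_streams):
    """streams: list ordered from most to least important.

    Returns (forwarded_unmixed, mixed_group). If everything fits, nothing is mixed.
    """
    if len(streams) <= max_output_streams:
        return list(streams), []
    keep = max_output_streams - 1              # reserve one output for the mix
    return list(streams[:keep]), list(streams[keep:])

# Example: five active speakers, capacity for three outgoing streams.
unmixed, mixed = partial_mix(["s1", "s2", "s3", "s4", "s5"], 3)
print(unmixed, mixed)   # -> ['s1', 's2'] ['s3', 's4', 's5']
```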