Stable and scalable congestion control for high-speed heterogeneous networks
For any congestion control mechanism, the most fundamental design objectives
are stability and scalability. However, achieving both properties is very challenging
in a heterogeneous environment such as the Internet. From the end-users' perspective,
heterogeneity is due to the fact that different flows have different routing paths and
therefore different communication delays, which can significantly affect stability of the
entire system. In this work, we successfully address this problem by first proving a
necessary and sufficient condition for a system to be stable under arbitrary delay. Utilizing this result, we design a series of practical congestion control protocols (MKC
and JetMax) that achieve stability regardless of delay as well as many additional
appealing properties. From the routers' perspective, the system is heterogeneous because the incoming traffic is a mixture of short- and long-lived, TCP and non-TCP
flows. This imposes a severe challenge on traditional buffer sizing mechanisms, which
are derived using the simplistic model of a single or multiple synchronized long-lived
TCP flows. To overcome this problem, we take a control-theoretic approach and
design a new intelligent buffer sizing scheme called Adaptive Buffer Sizing (ABS),
which, based on the current incoming traffic, dynamically sets the optimal buffer size
under the target performance constraints. Our extensive simulation results demonstrate that ABS exhibits quick responses to changes of traffic load, scalability to a
large number of incoming flows, and robustness to generic Internet traffic
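The adaptive buffer-sizing idea lends itself to a simple sketch: measure delay and loss over an interval, then nudge the buffer size toward the performance target. The controller below is a minimal illustration with invented gains and bounds, not the ABS algorithm itself:

```python
# Hypothetical sketch of an ABS-style controller: each measurement interval,
# adjust the router buffer so that measured queuing delay tracks a target
# while loss stays below a bound. Gains, bounds, and the update rule are
# illustrative assumptions, not the thesis's actual scheme.

def abs_step(buffer_pkts, measured_delay_ms, target_delay_ms,
             loss_rate, max_loss, gain=0.1,
             min_buf=64, max_buf=65536):
    """Return the buffer size (packets) for the next interval."""
    if loss_rate > max_loss:
        # Loss constraint violated: grow the buffer aggressively.
        buffer_pkts *= 1.5
    else:
        # Otherwise track the delay target with a proportional step.
        error = (target_delay_ms - measured_delay_ms) / target_delay_ms
        buffer_pkts *= (1.0 + gain * error)
    return int(round(min(max(buffer_pkts, min_buf), max_buf)))
```

For example, a buffer of 1000 packets with measured delay 30 ms against a 20 ms target shrinks slightly, while any interval that violates the loss bound grows it by half.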
Scalable reliable on-demand media streaming protocols
This thesis considers the problem of delivering streaming media, on-demand, to potentially large numbers of concurrent clients. The problem has motivated the development in prior work of scalable protocols based on multicast or broadcast. However, previous protocols do not allow clients to efficiently: 1) recover from packet loss; 2) share bandwidth fairly with competing flows; or 3) maximize the playback quality at the client for any given client reception rate characteristics.
In this work, new protocols, namely Reliable Periodic Broadcast (RPB) and Reliable Bandwidth Skimming (RBS), are developed that efficiently recover from packet loss and achieve close to the best possible server bandwidth scalability for a given set of client characteristics. To share bandwidth fairly with competing traffic such as TCP, these protocols can employ the Vegas Multicast Rate Control (VMRC) protocol proposed in this work.
The VMRC protocol exhibits TCP Vegas-like behavior. In comparison to prior rate control protocols, VMRC provides less oscillatory reception rates to clients, and operates without inducing packet loss when the bottleneck link is lightly loaded. The VMRC protocol incorporates a new technique for dynamically adjusting the TCP Vegas threshold parameters based on measured characteristics of the network. This technique implements fair sharing of network resources with other types of competing flows, including widely deployed versions of TCP such as TCP Reno. This fair sharing is not possible with the previously defined static Vegas threshold parameters.
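For reference, the static Vegas rule that VMRC builds on can be written in a few lines; the thresholds alpha and beta are exactly the parameters that VMRC adapts dynamically (the values below are the conventional static ones, not VMRC's):

```python
# Standard TCP-Vegas window adjustment, shown to illustrate the threshold
# parameters (alpha, beta) that VMRC tunes at runtime. This is the textbook
# Vegas rule, not the VMRC algorithm itself.

def vegas_update(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    expected = cwnd / base_rtt             # rate if no queuing occurred
    actual = cwnd / rtt                    # rate actually achieved
    diff = (expected - actual) * base_rtt  # estimated packets queued in-network
    if diff < alpha:
        return cwnd + 1    # path underutilized: grow the window
    elif diff > beta:
        return cwnd - 1    # queue building up: shrink the window
    return cwnd            # within [alpha, beta]: hold steady
```

With static alpha and beta, Vegas flows lose out to loss-based TCP Reno flows; adapting these thresholds to measured network conditions is what enables the fair sharing described above.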
The RPB protocol is extended to efficiently support quality adaptation. The Optimized Heterogeneous Periodic Broadcast (HPB) is designed to support a range of client reception rates and efficiently support static quality adaptation by allowing clients to work-ahead before beginning playback to receive a media file of the desired quality. A dynamic quality adaptation technique is developed and evaluated which allows clients to achieve more uniform playback quality given time-varying client reception rates
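The work-ahead idea admits a back-of-the-envelope model: with a constant reception rate r below the encoded playback rate b, a client must delay playback by at least T(b/r - 1) for a file of duration T to avoid starvation. A sketch under these simplifying constant-rate assumptions (not HPB's actual broadcast schedule):

```python
# Minimal work-ahead model for static quality adaptation: the client buffers
# for long enough before playback that received data always covers consumed
# data. Constant rates are assumed for illustration.

def min_workahead_delay(playback_rate_bps, reception_rate_bps, duration_s):
    """Smallest startup delay (s) so playback never starves."""
    if reception_rate_bps >= playback_rate_bps:
        return 0.0
    # Require r * (d + t) >= b * t for all t <= T; the deficit grows with t
    # when b > r, so the binding constraint is at t = T:  d >= T * (b/r - 1).
    return duration_s * (playback_rate_bps / reception_rate_bps - 1)
```

For instance, playing a 100-second file encoded at twice the client's reception rate requires working ahead for a full 100 seconds, which is why selecting a quality matched to the reception rate matters.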
Dual-Mode Congestion Control Mechanism for Video Services
Recent studies have shown that video services represent over half of Internet traffic, with a growing trend. Therefore, video traffic plays a major role in network congestion. Currently on the Internet, congestion control is mainly implemented through overprovisioning and TCP congestion control. Although some video services use TCP to implement their transport services in a manner that works in practice, TCP is not an ideal protocol for all video applications. For example, UDP is often considered more suitable for real-time video applications. Unfortunately, UDP does not implement congestion control. Therefore, these UDP-based video services operate without any congestion control support unless congestion control is implemented at the application layer. There are also arguments against massive overprovisioning. Due to these factors, there is still a need to equip video services with proper congestion control.
Most of the congestion control mechanisms developed for video services can only offer either low-priority or TCP-friendly real-time services. No single congestion control mechanism currently exists that is suitable for, and can be widely used by, all kinds of video services. This thesis provides a study in which a new dual-mode congestion control mechanism is proposed. This mechanism can offer congestion control services for both service types. The mechanism includes two modes, a backward-loading mode and a real-time mode. The backward-loading mode works like a low-priority service in which bandwidth is given away to other connections once the load level of the network is high enough. In contrast, the real-time mode always demands its fair share of the bandwidth.
The behavior of the new mechanism and its friendliness toward itself, and toward the TCP protocol, have been investigated by means of simulations and real network tests. It was found that this kind of congestion control approach could be suitable for video services.
The new mechanism worked acceptably. In particular, the mechanism behaved toward itself in a very friendly way in most cases. Average TCP fairness was at a good level. In the worst cases, the faster connections received about 1.6 times as much bandwidth as the slower connections
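The two modes can be caricatured in a few lines of Python; the yield threshold and rate steps here are invented for illustration and are not the thesis's parameters:

```python
# Illustrative sketch of the dual-mode idea: a backward-loading sender backs
# off sharply once congestion (approximated here by queuing delay) crosses a
# threshold, while a real-time sender keeps ramping toward its fair share.
# All constants are assumptions for the sketch.

def next_rate(rate, mode, queuing_delay_ms, fair_share,
              yield_threshold_ms=10.0, step=0.05):
    """Return the sending rate for the next control interval."""
    if mode == "backward-loading" and queuing_delay_ms > yield_threshold_ms:
        return rate * 0.5            # low-priority: give bandwidth away under load
    if rate < fair_share:
        return rate * (1 + step)     # otherwise ramp toward the fair share
    return rate
```

Under the same congestion signal, the backward-loading sender halves its rate while the real-time sender continues to claim bandwidth, which is the asymmetry the two service types require.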
Performance analysis and network path characterization for scalable internet streaming
Delivering high-quality video to end users over the best-effort Internet is a
challenging task since quality of streaming video is highly subject to network conditions. A fundamental issue in this area is how real-time applications cope with
network dynamics and adapt their operational behavior to offer a favorable streaming environment to end users.
As an effort towards providing such a streaming environment, the first half of
this work focuses on analyzing the performance of video streaming in best-effort
networks and developing a new streaming framework that effectively utilizes unequal
importance of video packets in rate control and achieves a near-optimal performance
for a given network packet loss rate. In addition, we study error concealment methods
such as FEC (Forward Error Correction), which is often used to protect multimedia
data over lossy network channels. We investigate the impact of FEC on the quality of
video and develop models that can provide insights into understanding how inclusion
of FEC affects streaming performance and its optimality and resilience characteristics
under dynamically changing network conditions.
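A standard way to model the FEC trade-off is a block code over independent losses: an (n, k) block carrying k data packets and n - k parity packets decodes if at most n - k of its n packets are lost. The sketch below is this textbook model, not the thesis's specific analysis:

```python
# Block-FEC recovery model: with independent packet loss probability p,
# an (n, k) block is decodable iff at most n - k packets are lost.
# This is the standard binomial model used to reason about how added
# redundancy trades goodput for resilience.
from math import comb

def block_recovery_prob(n, k, p):
    """P(an (n, k) FEC block is decodable) under i.i.d. loss rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1))
```

With no redundancy (n = k) the block survives only if every packet arrives; adding two parity packets to a 10-packet block at 10% loss raises the recovery probability substantially, at the cost of 20% extra bandwidth.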
In the second part of this thesis, we focus on measuring bandwidth of network
paths, which plays an important role in characterizing Internet paths and can benefit
many applications including multimedia streaming. We conduct a stochastic analysis of an end-to-end path and develop novel bandwidth sampling techniques that
can produce asymptotically accurate capacity and available bandwidth of the path
under non-trivial cross-traffic conditions. In addition, we conduct a comparative performance study of existing bandwidth estimation tools in non-simulated networks
where various timing irregularities affect delay measurements. We find that when
high-precision packet timing is not available due to hardware interrupt moderation,
the majority of existing algorithms are not robust enough to measure end-to-end paths with
high accuracy. We overcome this problem by using signal de-noising techniques in
bandwidth measurement. We also develop a new measurement tool called PRC-MT
based on theoretical models that simultaneously measures the capacity and available
bandwidth of the tight link with asymptotic accuracy
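The classic packet-pair principle underlying many such tools can be stated compactly: two back-to-back packets of size L leave a bottleneck of capacity C spaced by L/C, so capacity can be estimated from the observed dispersion. A simplified sketch (PRC-MT's actual models are more elaborate, and real measurements must contend with the timing irregularities discussed above):

```python
# Packet-pair capacity estimation, the textbook technique behind many
# bandwidth measurement tools. The bottleneck link serializes back-to-back
# packets, so the inter-arrival gap approaches L / C; taking the minimum
# over many samples suppresses dispersion inflated by cross-traffic.

def capacity_estimate(packet_size_bytes, dispersions_s):
    """Estimate bottleneck capacity in bits/s from dispersion samples."""
    d = min(dispersions_s)             # least-queued sample
    return packet_size_bytes * 8 / d   # bits per second
```

For 1500-byte probes, a minimum dispersion of 120 microseconds implies a capacity of roughly 100 Mbit/s.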
Rule-based expert server system design for multimedia streaming transmission
Ph.D. (Doctor of Philosophy)
Methods of Congestion Control for Adaptive Continuous Media
Since the first exchange of data between machines in different locations in the early 1960s,
computer networks have grown exponentially with millions of people now using the
Internet. With this, there has also been a rapid increase in different kinds of services offered
over the World Wide Web from simple e-mails to streaming video. It is generally accepted
that the commonly used protocol suite TCP/IP alone is not adequate for a number of
modern applications with high bandwidth and minimal delay requirements. Many
technologies are emerging, such as IPv6, DiffServ, and IntServ, which aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that the networks will have
to be capable of multi-service and will have to isolate different classes of traffic through
bandwidth partitioning such that, for example, low priority best-effort traffic does not cause
delay for high priority video traffic. However, this research identifies that even within a
class there may be delays or losses due to congestion and the problem will require different
solutions in different classes.
The focus of this research is on the requirements of the adaptive continuous media
class. These are traffic flows that require a good Quality of Service but are also able to
adapt to the network conditions by accepting some degradation in quality. It is potentially
the most flexible traffic class and therefore, one of the most useful types for an increasing
number of applications.
This thesis discusses the QoS requirements of adaptive continuous media and
identifies an ideal feedback based control system that would be suitable for this class. A
number of current methods of congestion control have been investigated and two methods
that have been shown to be successful with data traffic have been evaluated to ascertain if
they could be adapted for adaptive continuous media. A novel method of control based on
percentile monitoring of the queue occupancy is then proposed and developed. Simulation
results demonstrate that the percentile monitoring based method is more appropriate to this
type of flow. The problem of congestion control at aggregating nodes of the network
hierarchy, where thousands of adaptive flows may be aggregated to a single flow, is then
considered. A unique method of pricing mean and variance is developed such that each
individual flow is charged fairly for its contribution to the congestion
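The percentile-monitoring idea can be illustrated with a small sketch; the window size, percentile, and threshold below are invented for illustration and are not the thesis's values:

```python
# Sketch of percentile-based congestion signalling: instead of reacting to
# the instantaneous or mean queue length, the node tracks a high percentile
# of recent occupancy samples and signals adaptive sources only when that
# percentile crosses a threshold, filtering out short bursts.
from collections import deque

class PercentileMonitor:
    def __init__(self, window=100, percentile=0.95, threshold=80):
        self.samples = deque(maxlen=window)  # sliding window of occupancy
        self.percentile = percentile
        self.threshold = threshold

    def observe(self, queue_len):
        self.samples.append(queue_len)

    def congested(self):
        """True if the tracked percentile of occupancy exceeds the threshold."""
        if not self.samples:
            return False
        ordered = sorted(self.samples)
        idx = min(int(self.percentile * len(ordered)), len(ordered) - 1)
        return ordered[idx] > self.threshold
```

A brief burst of deep queues leaves the 95th percentile unchanged, whereas sustained occupancy trips the signal, which is why this suits flows that should not react to transient spikes.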
Real-time data flow models and congestion management for wire and wireless IP networks
In video streaming, network congestion compromises video throughput performance, impairs perceptual quality, and may interrupt the display. Congestion control may take the form of rate adjustment through mechanisms that attempt to minimize the probability of congestion by adjusting the rate of the streaming video to match the available capacity of the network. This can be achieved either by adapting the quantization parameter of the video encoder or by varying the rate through a scalable video technique. This thesis proposes a congestion control protocol for streaming video where an interaction between the video source and the receiver is essential to monitor the network state. The protocol consists of adjusting the video transmission rate at the encoder whenever a change in the network conditions is observed and reported back to the sender
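The sender-side loop described here can be sketched in a few lines; the headroom factor and per-report step bound are illustrative assumptions, not the protocol's actual parameters:

```python
# Sketch of receiver-feedback-driven rate adjustment: on each receiver
# report, the encoder's target bitrate is nudged toward the estimated
# available capacity (realized in practice by changing the quantization
# parameter or the scalable-video rate). Constants are assumptions.

def adjust_target_bitrate(current_bps, reported_capacity_bps,
                          headroom=0.9, max_step=0.25):
    """Return the next encoder target bitrate after a receiver report."""
    target = headroom * reported_capacity_bps
    # Bound the per-report relative change to avoid oscillating quality.
    change = max(-max_step, min(max_step, (target - current_bps) / current_bps))
    return current_bps * (1 + change)
```

Capping the per-report change keeps quality from swinging abruptly even when the reported capacity jumps, at the cost of converging over several feedback rounds.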
The Effective Transmission and Processing of Mobile Multimedia
Ph.D. (Doctor of Philosophy)
Cross-layer latency-aware and -predictable data communication
Cyber-physical systems are making their way into more aspects of everyday life. These systems are increasingly distributed and hence require networked communication to coordinatively fulfil control tasks. Providing this in a robust and resilient manner demands latency-awareness and -predictability at all layers of the communication and computation stack. This thesis addresses how these two latency-related properties can be implemented at the transport layer to serve control applications in ways that traditional approaches such as TCP or RTP cannot. To this end, the Predictably Reliable Real-time Transport (PRRT) protocol is presented, including its unique features (e.g. partially reliable, ordered, in-time delivery, and latency-avoiding congestion control) and unconventional APIs. This protocol has been intensively evaluated using the X-Lap toolkit, which was developed specifically to support protocol designers in improving the latency, timing, and energy characteristics of protocols in a cross-layer, intra-host fashion. PRRT effectively circumvents latency-inducing bufferbloat using X-Pace, an implementation of the cross-layer pacing approach presented in this thesis. This is shown using experimental evaluations on real Internet paths. Apart from PRRT, this thesis presents means to make TCP-based transport aware of individual link latencies and to increase the predictability of end-to-end delays using Transparent Transmission Segmentation
Networking Mechanisms for Delay-Sensitive Applications
The diversity of applications served by the explosively growing Internet is increasing. In particular, applications that are sensitive to end-to-end packet delays are becoming more common and include telephony, video conferencing, and networked games. While the single best-effort service of the current Internet favors throughput-greedy traffic by equipping congested links with large buffers, long queuing at those links hurts delay-sensitive applications. Furthermore, while numerous alternative architectures have been proposed to offer diverse network services, these innovative alternatives have failed to gain widespread end-to-end deployment. This dissertation explores different networking mechanisms for supporting the low queueing delay required by delay-sensitive applications. In particular, it considers two approaches. The first employs congestion control protocols for the traffic generated by the considered class of applications. The second relies on router operation only and does not require support from end hosts