546 research outputs found

    Providing Fairness Through Detection and Preferential Dropping of High Bandwidth Unresponsive Flows

    Stability of the Internet today depends largely on cooperation between end hosts that employ TCP (Transmission Control Protocol) in the transport layer and network routers along an end-to-end path. However, in the past several years various types of traffic, including streaming media applications, have been increasingly deployed over the Internet. Such traffic is mostly based on UDP (User Datagram Protocol) and usually employs no, or only very limited, end-to-end congestion and flow control. These applications can unfairly consume a greater amount of bandwidth than competing responsive flows such as TCP traffic, and in this way unfairness and congestion collapse can occur. To avoid substantial memory requirements and complexity, fair Active Queue Management (AQM) schemes that keep no or only partial flow state information have been proposed in the past several years to solve these problems. These schemes, however, exhibit several problems under different circumstances. This dissertation presents two fair AQM mechanisms, BLACK and AFC, that overcome the problems and limitations of the existing schemes. Both BLACK and AFC need to store only a small amount of state information to maintain and exercise their fairness mechanisms. Extensive simulation studies show that both schemes outperform the other schemes in terms of throughput fairness under a large number of scenarios. Not only can they handle multiple unresponsive flows, but fairness among TCP connections with different round-trip delays is also improved. AFC, with a little more overhead than BLACK, provides additional advantages: it achieves good fairness in scenarios with traffic of different sizes and bursty traffic, and it provides smoother transfer rates for the unresponsive flows that usually carry real-time traffic. This research also includes a comparative study of existing techniques for estimating the number of active flows, a crucial component of some fair AQM schemes including BLACK and AFC. A further contribution of this dissertation is the first comprehensive evaluation of fair AQM schemes in the presence of various types of TCP-friendly traffic.
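
    The abstract does not specify the internals of BLACK or AFC, but the general idea it describes, detecting high-bandwidth unresponsive flows from partial flow state and dropping their packets preferentially, can be sketched as follows. This is a minimal, hypothetical Python illustration: the cache size, sampling probability, and drop-probability formula are assumptions made for exposition, not the algorithms evaluated in the dissertation.

        import random
        from collections import Counter

        class PartialStateFairAQM:
            """Sketch of a fair AQM that keeps only partial flow state: a small
            cache of the flows seen most often in a packet sample identifies
            likely high-bandwidth (possibly unresponsive) flows, whose packets
            are then dropped preferentially as the queue fills."""

            def __init__(self, capacity=100, cache_size=10, sample_prob=0.1):
                self.capacity = capacity        # queue limit in packets
                self.cache_size = cache_size    # number of flows tracked (partial state)
                self.sample_prob = sample_prob  # fraction of arrivals sampled
                self.queue = []                 # enqueued (flow_id, payload) pairs
                self.hits = Counter()           # sample hit counts per cached flow

            def _update_cache(self, flow_id):
                # Sample a fraction of arrivals; only the most frequently seen
                # flows stay in the fixed-size cache.
                if random.random() < self.sample_prob:
                    self.hits[flow_id] += 1
                    if len(self.hits) > self.cache_size:
                        victim, _ = min(self.hits.items(), key=lambda kv: kv[1])
                        del self.hits[victim]

            def enqueue(self, flow_id, payload):
                self._update_cache(flow_id)
                occupancy = len(self.queue) / self.capacity
                total_hits = sum(self.hits.values()) or 1
                share = self.hits.get(flow_id, 0) / total_hits
                fair_share = 1.0 / max(len(self.hits), 1)
                # Preferential drop: the more a flow exceeds its fair share of
                # the sample, and the fuller the queue, the more likely its
                # packet is discarded.
                drop_prob = occupancy * max(0.0, share - fair_share) / (1 - fair_share + 1e-9)
                if len(self.queue) >= self.capacity or random.random() < drop_prob:
                    return False                # packet dropped
                self.queue.append((flow_id, payload))
                return True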

    System Support for Bandwidth Management and Content Adaptation in Internet Applications

    This paper describes the implementation and evaluation of an operating system module, the Congestion Manager (CM), which provides integrated network flow management and exports a convenient programming interface that allows applications to be notified of, and adapt to, changing network conditions. We describe the API by which applications interface with the CM, and the architectural considerations that factored into the design. To evaluate the architecture and API, we describe our implementations of TCP, a streaming layered audio/video application, and an interactive audio application using the CM, and show that they achieve adaptive behavior without incurring much end-system overhead. All flows, including TCP, benefit from the sharing of congestion information, and applications are able to incorporate new functionality such as congestion control and adaptive behavior. (Comment: 14 pages; appeared in OSDI 2000.)
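
    As a rough illustration of the idea of a shared, callback-driven congestion service, the following Python sketch keeps one congestion window per destination, shared by all registered flows, and notifies each application of its current send allowance. The class and method names here are invented for illustration; they are not the CM API defined in the paper.

        from collections import defaultdict

        class CongestionManager:
            """Toy sketch of a Congestion Manager-style module: per-destination
            congestion state is shared by all flows to that destination, and
            applications register callbacks so they are told when, and how much,
            they may send."""

            def __init__(self):
                # One congestion window (bytes) per destination, shared by every
                # flow going there.
                self.cwnd = defaultdict(lambda: 4 * 1460)
                self.callbacks = defaultdict(list)   # destination -> send callbacks

            def register(self, destination, send_callback):
                """An application asks to be notified when it may send to destination."""
                self.callbacks[destination].append(send_callback)

            def request_send(self, destination):
                """Grant each registered flow an equal share of the shared window."""
                flows = self.callbacks[destination]
                if not flows:
                    return
                share = self.cwnd[destination] // len(flows)
                for cb in flows:
                    cb(share)        # application adapts its rate/quality to 'share'

            def feedback(self, destination, losses):
                """Transport feedback updates the shared congestion state."""
                if losses:
                    self.cwnd[destination] = max(1460, self.cwnd[destination] // 2)
                else:
                    self.cwnd[destination] += 1460

        if __name__ == "__main__":
            cm = CongestionManager()
            cm.register("10.0.0.2", lambda b: print("audio app may send", b, "bytes"))
            cm.register("10.0.0.2", lambda b: print("video app may send", b, "bytes"))
            cm.request_send("10.0.0.2")
            cm.feedback("10.0.0.2", losses=1)    # congestion: shared window is halved
            cm.request_send("10.0.0.2")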

    A novel multimedia adaptation architecture and congestion control mechanism designed for real-time interactive applications

    The increasing use of interactive multimedia applications over the Internet has created a congestion problem, because a majority of these applications do not respond to congestion indicators. This leads to resource starvation for responsive flows and ultimately to excessive delay and loss for all flows, and therefore to a loss of quality. The result is unfair sharing of network resources and an increased risk of network ‘congestion collapse’. Current Congestion Control Mechanisms such as ‘TCP-Friendly Rate Control’ (TFRC) are able to achieve a ‘fair share’ of network resources when competing with responsive flows such as TCP, but TFRC’s method of congestion response (reducing the Packet Rate) is not well matched to interactive multimedia applications, which maintain a fixed Frame Rate. This mismatch between the two rates (Packet Rate and Frame Rate) leads to buffering of frames at the Sender Buffer, resulting in delay and loss and an unacceptable reduction of quality, or a complete loss of service, for the end-user. To address this issue, this thesis proposes a novel Congestion Control Mechanism, referred to as ‘TCP-friendly rate control – Fine Grain Scalable’ (TFGS), for interactive multimedia applications. This new approach allows multimedia frames (data) to be sent as soon as they are generated, so that they reach the destination as quickly as possible and provide an isochronous interactive service. This is done by maintaining the Packet Rate of the Congestion Control Mechanism (CCM) at a level equivalent to the Frame Rate of the Multimedia Encoder. The response to congestion is to truncate the Packet Size, hence reducing the overall bitrate of the multimedia stream. This functionality of the Congestion Control Mechanism is referred to as Packet Size Truncation (PST), and it takes advantage of adaptive multimedia encoding, such as Fine Grain Scalable (FGS) coding, in which the multimedia frame is encoded in order of significance, from most to least significant bits. The Multimedia Adaptation Manager (MAM) truncates the multimedia frame to the size indicated by the Packet Size Truncation function of the CCM, accurately mapping user demand to the available network resources. Additionally, Fine Grain Scalable encoding can offer scalability at byte-level granularity, providing a true match to the available network resources. This approach retains the benefit of achieving a ‘fair share’ of network resources when competing with responsive flows (as with the TFRC CCM), but it also provides an isochronous service, which is of crucial benefit to real-time interactive services. Furthermore, results illustrate that an increased number of interactive multimedia flows (such as voice) can be carried over congested networks whilst maintaining a quality level equivalent to that of a standard landline telephone. This is because the loss and delay arising from the buffering of frames at the Sender Buffer are completely removed. Packets are sent with a fixed inter-packet-gap spacing (IPGS), so that the majority of packets arrive at the receiver at tight time intervals. This avoids the need for large Playout (de-jitter) Buffer sizes and adaptive Playout Buffer configurations, which in turn reduces delay and improves the interactivity and Quality of Experience (QoE) of the multimedia application.
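
    The core of the Packet Size Truncation idea, keeping one packet per frame and shrinking the packet rather than delaying it, can be sketched in a few lines of Python. The function below is a hypothetical illustration under stated assumptions (one packet per frame, a fixed header allowance, and a frame whose bytes are already FGS-ordered from most to least significant); it is not the TFGS implementation from the thesis.

        def truncated_packet(fgs_frame: bytes, allowed_bitrate_bps: float,
                             frame_rate_fps: float, header_bytes: int = 40) -> bytes:
            """Keep the Packet Rate equal to the Frame Rate and respond to
            congestion by truncating the Packet Size.  The frame is assumed to
            be FGS-encoded, with bytes ordered from most to least significant,
            so cutting the tail degrades quality gracefully."""
            # One packet per frame, so each packet's byte budget is the allowed
            # bitrate divided by the frame rate, minus an assumed header cost.
            budget_bytes = int(allowed_bitrate_bps / 8 / frame_rate_fps) - header_bytes
            budget_bytes = max(budget_bytes, 1)   # always keep at least the base byte
            return fgs_frame[:budget_bytes]       # drop the least-significant tail

        # Example: a 25 fps stream whose fair share drops from 400 kbit/s to 200 kbit/s.
        frame = bytes(range(256)) * 8             # a mock 2 kB FGS-encoded frame
        print(len(truncated_packet(frame, 400_000, 25)))  # ~1960 bytes of the frame survive
        print(len(truncated_packet(frame, 200_000, 25)))  # ~960 bytes: same packet rate, smaller packets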

    Network delay control through adaptive queue management

    Timeliness in delivering packets for delay-sensitive applications is an important QoS (Quality of Service) measure in many systems, notably those that need to provide real-time performance. In such systems, if delay-sensitive traffic is delivered to the destination beyond its deadline, the packets are rendered useless and dropped after being received at the destination. Bandwidth that is already scarce and shared between network nodes is wasted in relaying these expired packets. This thesis proposes that a deterministic per-hop delay can be achieved by using a dynamic queue threshold concept to bound the delay at each node. A deterministic per-hop delay is a key component in guaranteeing a deterministic end-to-end delay. The research aims to develop a generic approach that can constrain the network delay of delay-sensitive traffic in a dynamic network. Two adaptive queue management schemes, DTH (Dynamic THreshold) and ADTH (Adaptive DTH), are proposed to realize this claim. Both DTH and ADTH use the dynamic threshold concept to constrain queuing delay, so that a bounded average queuing delay can be achieved by the former and a bounded maximum nodal delay by the latter. DTH is an analytical approach that uses queuing theory, with a superposition of N MMBP-2 (Markov Modulated Bernoulli Process) arrival processes, to obtain a mapping between average queuing delay and an appropriate queue threshold for queue management. ADTH, in contrast, is a measurement-based algorithmic approach that can respond to the time-varying link quality and network dynamics of wireless ad hoc networks to constrain network delay. It manages a queue based on system performance measurements and on the feedback of the error measured against a target delay requirement. Numerical analysis and Matlab simulation have been carried out for DTH for validation and performance analysis, while ADTH has been evaluated in NS-2 simulations and implemented on a multi-hop wireless ad hoc network testbed. Results show that DTH and ADTH can constrain network delay according to the specified delay requirements, with higher packet loss as a trade-off.
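
    A measurement-based scheme of the ADTH kind can be pictured as a queue whose admission threshold is steered by the error between the measured queuing delay and a target. The Python sketch below is a simplified, assumed version of that feedback loop (a single proportional update with an arbitrary gain); the thesis's actual update rule may differ.

        class AdaptiveDelayThresholdQueue:
            """Sketch of an adaptive queue: the admission threshold (in packets)
            is adjusted from measured queuing delay so that the delay stays near
            a target, bounding the nodal delay at the cost of extra drops."""

            def __init__(self, target_delay_s, gain=0.5, initial_threshold=50):
                self.target = target_delay_s
                self.gain = gain
                self.threshold = float(initial_threshold)
                self.queue = []                  # (enqueue_time, packet) pairs

            def enqueue(self, now, packet):
                # Drop arrivals once the queue reaches the current threshold,
                # which bounds the worst-case queuing delay at this node.
                if len(self.queue) >= int(self.threshold):
                    return False
                self.queue.append((now, packet))
                return True

            def dequeue(self, now):
                if not self.queue:
                    return None
                enq_time, packet = self.queue.pop(0)
                measured_delay = now - enq_time
                # Feedback of the error against the target delay: shrink the
                # threshold when packets wait too long, grow it when there is slack.
                error = self.target - measured_delay
                self.threshold = max(1.0, self.threshold + self.gain * error / self.target)
                return packet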

    Networking Mechanisms for Delay-Sensitive Applications

    The diversity of applications served by the explosively growing Internet is increasing. In particular, applications that are sensitive to end-to-end packet delays are becoming more common; they include telephony, video conferencing, and networked games. While the single best-effort service of the current Internet favors throughput-greedy traffic by equipping congested links with large buffers, long queuing at the congested links hurts delay-sensitive applications. Furthermore, while numerous alternative architectures have been proposed to offer diverse network services, these alternatives have failed to gain widespread end-to-end deployment. This dissertation explores different networking mechanisms for supporting the low queueing delay required by delay-sensitive applications. In particular, it considers two approaches: the first assumes that the traffic generated by the considered class of applications employs congestion control protocols, while the second relies on router operation only and does not require support from end hosts.

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and consequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, together with their increasing expectations, the task of QoS provisioning can no longer be approached by giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or through traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This use of static resource allocation and traffic shaping reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and to QoS efforts, and it introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to give a QoS-optimised experience to every Internet user, not just to those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional Diffserv, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes.
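
    The abstract does not give CAPS's scheduling rules, but the user-centric idea, deriving per-service weights from each user's own traffic mix rather than from a global static policy, can be illustrated with the toy Python scheduler below. The service classes, sensitivity values, and weighting formula are assumptions made for the example, not the CAPS design.

        from collections import defaultdict, deque

        class UserCentricScheduler:
            """Sketch of a user-centric scheduler: traffic is queued per
            (user, service) pair and dequeued by weights derived from each
            user's current traffic mix, so bandwidth-heavy services (e.g. P2P)
            cannot starve that user's other services."""

            # Nominal sensitivity of each service class to queuing (higher = favoured).
            SENSITIVITY = {"voip": 4.0, "video": 3.0, "web": 2.0, "p2p": 1.0}

            def __init__(self):
                self.queues = defaultdict(deque)     # (user, service) -> packets

            def enqueue(self, user, service, packet):
                self.queues[(user, service)].append(packet)

            def _weight(self, user, service):
                # Adapt the weight to the user's own profile: the larger the
                # share of the user's backlog a service occupies, the smaller
                # its per-packet weight, keeping that user's service mix balanced.
                backlog = {s: len(q) for (u, s), q in self.queues.items() if u == user}
                total = sum(backlog.values()) or 1
                share = backlog.get(service, 0) / total
                return self.SENSITIVITY.get(service, 1.0) / (share + 0.1)

            def dequeue(self):
                """Serve the non-empty (user, service) queue with the highest weight."""
                candidates = [(u, s) for (u, s), q in self.queues.items() if q]
                if not candidates:
                    return None
                user, service = max(candidates, key=lambda us: self._weight(*us))
                return self.queues[(user, service)].popleft()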