12 research outputs found

    An investigation into buffer management mechanisms for the Diffserv assured forwarding traffic class

    Includes bibliographical references. One of the service classes offered by Diffserv is the Assured Forwarding (AF) class. Because of scalability concerns, IETF specifications recommend that microflow- and aggregate-unaware active buffer management mechanisms such as RIO (Random Early Detection with In/Out-of-profile) be used in the core of Diffserv networks implementing AF. Such mechanisms have, however, been shown to provide poor performance with regard to fairness, stability and network control. Furthermore, recent advances in router technology now allow routers to implement more advanced scheduling and buffer management mechanisms on high-speed ports. This thesis evaluates the performance improvements that may be realized when implementing the Diffserv AF core using a hierarchical, microflow- and aggregate-aware buffer management mechanism instead of RIO. The author motivates, proposes and specifies such a mechanism. The mechanism, referred to as H-MAQ or Hierarchical multi drop-precedence queue state Microflow-Aware Queuing, is evaluated on a testbed that compares the performance of a RIO network core with an H-MAQ network core.
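
To make the baseline concrete, below is a minimal sketch of the RIO scheme the abstract contrasts against: two RED drop curves, one driven by the average in-profile queue and one by the average total queue, with deliberately more aggressive thresholds for out-of-profile packets. The threshold values and EWMA weight are illustrative assumptions, not parameters from the thesis.

```python
import random

class RedProfile:
    """One RED drop curve: linear ramp between min_th and max_th."""
    def __init__(self, min_th, max_th, p_max):
        self.min_th, self.max_th, self.p_max = min_th, max_th, p_max

    def drop_prob(self, avg):
        if avg < self.min_th:
            return 0.0
        if avg >= self.max_th:
            return 1.0
        return self.p_max * (avg - self.min_th) / (self.max_th - self.min_th)

class Rio:
    """RIO: RED with In/Out-of-profile curves. avg_in tracks only
    in-profile arrivals, avg_total tracks all arrivals, so
    out-of-profile traffic is penalised first as the queue builds."""
    def __init__(self, w=0.002):
        self.w = w                     # EWMA weight (illustrative)
        self.avg_in = 0.0
        self.avg_total = 0.0
        self.in_curve = RedProfile(min_th=40, max_th=70, p_max=0.02)
        self.out_curve = RedProfile(min_th=10, max_th=40, p_max=0.10)

    def enqueue(self, in_profile, q_in_len, q_total_len):
        """Return True if the arriving packet should be dropped."""
        self.avg_total += self.w * (q_total_len - self.avg_total)
        if in_profile:
            self.avg_in += self.w * (q_in_len - self.avg_in)
            p = self.in_curve.drop_prob(self.avg_in)
        else:
            p = self.out_curve.drop_prob(self.avg_total)
        return random.random() < p
```

Note that nothing in this logic distinguishes one microflow from another within an aggregate; that aggregate-unaware behaviour is exactly the fairness limitation the thesis investigates.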

    A novel multimedia adaptation architecture and congestion control mechanism designed for real-time interactive applications

    PhD. The increasing use of interactive multimedia applications over the Internet has created a problem of congestion, because the majority of these applications do not respond to congestion indicators. This leads to resource starvation for responsive flows and ultimately to excessive delay and losses for all flows, and therefore loss of quality. The result is unfair sharing of network resources and an increased risk of network ‘congestion collapse’. Current congestion control mechanisms such as ‘TCP-Friendly Rate Control’ (TFRC) have been able to achieve a ‘fair share’ of network resources when competing with responsive flows such as TCP, but TFRC’s method of congestion response (i.e. to reduce Packet Rate) is not ideally matched to interactive multimedia applications, which maintain a fixed Frame Rate. This mismatch of the two rates (Packet Rate and Frame Rate) leads to buffering of frames in the Sender Buffer, resulting in delay and loss, and an unacceptable reduction of quality or complete loss of service for the end-user. To address this issue, this thesis proposes a novel Congestion Control Mechanism, referred to as ‘TCP-friendly rate control – Fine Grain Scalable’ (TFGS), for interactive multimedia applications. This new approach allows multimedia frames (data) to be sent as soon as they are generated, so that they reach the destination as quickly as possible, in order to provide an isochronous interactive service. This is done by maintaining the Packet Rate of the Congestion Control Mechanism (CCM) at a level equivalent to the Frame Rate of the multimedia encoder. The response to congestion is to truncate the Packet Size, hence reducing the overall bitrate of the multimedia stream. This functionality of the Congestion Control Mechanism is referred to as Packet Size Truncation (PST), and takes advantage of adaptive multimedia encoding, such as Fine Grain Scalable (FGS) encoding, where the multimedia frame is encoded in order of significance, from Most to Least Significant Bits. The Multimedia Adaptation Manager (MAM) truncates the multimedia frame to the size indicated by the Packet Size Truncation function of the CCM, accurately mapping user demand to the available network resource. Additionally, Fine Grain Scalable encoding can offer scalability at byte-level granularity, providing a true match to available network resources. This approach achieves a ‘fair share’ of network resources when competing with responsive flows (similar to the TFRC CCM), but it also provides an isochronous service, which is of crucial benefit to real-time interactive services. Furthermore, results illustrate that an increased number of interactive multimedia flows (such as voice) can be carried over congested networks whilst maintaining a quality level equivalent to that of a standard landline telephone. This is because the loss and delay arising from the buffering of frames in the Sender Buffer are completely removed. Packets sent maintain a fixed inter-packet-gap spacing (IPGS), so the majority of packets arrive at the receiving end at tight time intervals. This avoids the need for large Playout (de-jitter) Buffer sizes and adaptive Playout Buffer configurations, which in turn reduces delay and improves the interactivity and Quality of Experience (QoE) of the multimedia application.
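
As a rough illustration of the Packet Size Truncation idea described above, the sketch below keeps the send interval locked to the frame rate and converts a TFRC-style fair-share rate into a per-frame byte budget. The class and method names (TfgsSender, on_rate_update) and all constants are hypothetical; the thesis's actual TFGS algorithm may differ in detail.

```python
import time

def truncate_frame(frame: bytes, budget: int) -> bytes:
    """FGS frames are ordered most- to least-significant bits, so
    cutting the tail degrades quality gracefully rather than failing."""
    return frame[:max(1, budget)]

class TfgsSender:
    """Sketch: the packet rate stays locked to the encoder frame rate;
    congestion shrinks the packet size (PST) instead of queueing
    frames in a sender buffer."""
    def __init__(self, frame_rate_hz=50, mtu=1200, min_payload=40):
        self.gap = 1.0 / frame_rate_hz   # fixed inter-packet gap (IPGS)
        self.mtu, self.min_payload = mtu, min_payload
        self.budget = mtu                # bytes currently allowed per frame

    def on_rate_update(self, fair_rate_bps):
        # Map the congestion controller's fair-share rate onto a
        # per-frame byte budget instead of changing the packet rate.
        per_frame = int(fair_rate_bps * self.gap / 8)
        self.budget = max(self.min_payload, min(self.mtu, per_frame))

    def send_loop(self, frames, transmit):
        for frame in frames:             # one packet per frame, no backlog
            transmit(truncate_frame(frame, self.budget))
            time.sleep(self.gap)         # keep the inter-packet gap fixed
```

The key design point is that congestion feedback never changes the send interval, only the budget, which is what preserves the isochronous service.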

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area; however, a widely accepted and adopted solution is yet to emerge. The difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, it means a solution with no requirement that routers be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACKs). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted, and the analytical results have been verified by simulations. The performance of TCP-Jersey has been compared to that of a 'perfect', but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as their transport protocol yet do need to control network congestion as well as combat packet losses in wireless networks.
In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management in solving the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem due to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem, since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from other solutions that rely on link layer error notifications for packet loss differentiation. It is also unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type based on packet delay or delay jitter as in many other proposed solutions, or by undergoing a computationally expensive off-line training of a classification model (e.g., an HMM), or a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, but adds a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
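
The achievable rate estimation at the heart of TCP-Jersey is described above as an adaptive filter of packet inter-arrival times. The following is a minimal sketch of one plausible form of such a filter, a time-sliding-window average in which samples within roughly one RTT dominate; the exact published filter and its constants may differ.

```python
class AreEstimator:
    """Sketch of a time-window rate estimator in the spirit of
    TCP-Jersey's ARE: a low-pass filter driven by ACK inter-arrival
    times, adapting on a round-trip timescale."""
    def __init__(self):
        self.rate = 0.0      # estimated achievable rate, bytes/s
        self.last_t = None   # arrival time of the previous ACK

    def on_ack(self, t_now, acked_bytes, rtt):
        if self.last_t is None:          # first sample: nothing to smooth
            self.last_t = t_now
            return self.rate
        dt = t_now - self.last_t
        # Weight the old estimate by the RTT and the new sample by its
        # inter-arrival gap, so short gaps (bursts) move the estimate
        # only gradually.
        self.rate = (rtt * self.rate + acked_bytes) / (rtt + dt)
        self.last_t = t_now
        return self.rate
```

On receiving a congestion mark in DUPACKs, a sender in such a scheme would typically resize its window to roughly rate * rtt / mss, translating the rate estimate back into window-based control; this usage is a general pattern for rate-estimation-based TCP variants, not a quotation of the dissertation's algorithm.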

    A multi-objective particle swarm optimized fuzzy logic congestion detection and dual explicit notification mechanism for IP networks.

    Thesis (M.Sc.Eng.), University of KwaZulu-Natal, 2006. The Internet has experienced tremendous growth over the past two decades, and with that growth have come severe congestion problems. Research efforts to alleviate the congestion problem can broadly be classified into three groups: (1) router-based congestion detection; (2) generation and transmission of congestion notification signals to the traffic sources; (3) end-to-end algorithms which control the flow of traffic between the end hosts. This dissertation largely addresses the first two groups, which are router-initiated. Router-based congestion detection mechanisms, commonly known as Active Queue Management (AQM), can be classified into two groups: conventional mathematical analytical techniques and fuzzy logic based techniques. Research has shown that fuzzy logic techniques are more effective and robust than the conventional techniques because they do not rely on the availability of a precise mathematical model of the Internet. They use linguistic knowledge and are therefore better placed to handle the complexities associated with the non-linearity and dynamics of the Internet. In spite of all these developments, there still exists ample room for improvement because, in practice, deployment of AQM mechanisms has been slow. In the first part of this dissertation, we study the major AQM schemes in both the conventional and the fuzzy logic domains in order to uncover the problems that have hampered their deployment in practical implementations. Based on the findings from this study, we model the Internet congestion problem as a multi-objective problem. We propose a Fuzzy Logic Congestion Detection (FLCD) algorithm which synergistically combines the good characteristics of the fuzzy approaches with those of the conventional approaches. We design the membership functions (MFs) of the FLCD algorithm automatically by using Multi-objective Particle Swarm Optimization (MOPSO), a population-based stochastic optimization algorithm. This enables the FLCD algorithm to achieve optimal performance on all the major objectives of Internet congestion control. The FLCD algorithm is compared with the basic fuzzy logic AQM and the Random Exponential Marking (REM) algorithms on a best-effort network. Simulation results show that the FLCD algorithm provides high link utilization whilst maintaining lower jitter and packet loss. It also exhibits higher fairness and stability compared to its basic variant and REM. We extend this concept to a Proportional Differentiated Services network environment, where the FLCD algorithm outperforms the traditional Weighted RED algorithm. We also propose self-learning and self-organization structures which enable the FLCD algorithm to achieve a more stable queue, lower packet losses and lower UDP traffic delay in dynamic traffic environments on both wired and wireless networks. In the second part of this dissertation, we present the congestion notification mechanisms which have been proposed for wired and satellite networks. We propose an FLCD-based dual explicit congestion notification algorithm which combines the merits of the Explicit Congestion Notification (ECN) and Backward Explicit Congestion Notification (BECN) mechanisms. In this proposal, the ECN mechanism is invoked based on the packet marking probability, while the BECN mechanism is invoked based on the BECN parameter, which helps to ensure that BECN is invoked only when congestion is severe.
Motivated by the fact that TCP reacts to the congestion notification signal only once during a round trip time (RTT), we propose an RTT-based BECN decay function. This reduces the invocation of the BECN mechanism and, consequently, the generation of reverse traffic during an RTT. Compared to the traditional explicit notification mechanisms, simulation results show that the new approach exhibits lower packet loss rates and higher queue stability on wired networks. It also exhibits lower packet loss rates and higher goodput and link utilization on satellite networks. We also observe that the BECN decay function reduces reverse traffic significantly on both wired and satellite networks, while ensuring that performance remains virtually the same as in the algorithm without BECN traffic reduction.
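
A minimal sketch of the dual notification logic described above follows: ECN marking is driven by the detector's marking probability, while BECN is sent back to the source only when congestion is severe and at most once per RTT per flow, standing in for the thesis's BECN parameter and RTT-based decay function. The threshold value, the duck-typed packet object (with ecn, flow_id and src attributes) and the send_becn callback are assumptions for illustration.

```python
import random
import time

class DualNotifier:
    """Sketch of dual ECN/BECN congestion notification at a router."""
    def __init__(self, becn_threshold=0.7):
        self.becn_threshold = becn_threshold  # 'severe congestion' cut-off
        self.last_becn = {}                   # flow id -> time of last BECN

    def on_packet(self, pkt, p_mark, rtt_estimate, send_becn):
        # Forward notification: mark with the detector's probability,
        # as ordinary ECN would.
        if random.random() < p_mark:
            pkt.ecn = True
        # Backward notification: only under severe congestion, and at
        # most once per estimated RTT per flow, so reverse traffic is
        # suppressed between notifications (the 'decay' behaviour).
        if p_mark >= self.becn_threshold:
            now = time.monotonic()
            if now - self.last_becn.get(pkt.flow_id, 0.0) >= rtt_estimate:
                send_becn(pkt.src)
                self.last_becn[pkt.flow_id] = now
```

The rationale mirrors the abstract: since a TCP source acts on at most one congestion signal per RTT, sending more than one BECN per RTT per flow adds reverse traffic without changing source behaviour.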

    Quality of service and resource management in IP and wireless networks

    A common theme in the publications included in this thesis is quality of service and resource management in IP and wireless networks. This thesis presents novel algorithms and implementations for admission control in IP and IEEE 802.16e networks, active queue management in EGPRS, WCDMA, and IEEE 802.16e networks, and scheduling in IEEE 802.16e networks. The performance of the different algorithms and mechanisms is compared with the prior art through extensive ns-2 simulations. We show that similar active queue management mechanisms, such as TTLRED, can be successfully used to reduce the downlink delay (and in some cases even improve the TCP goodput) in the different bottlenecks of IP, EGPRS, WCDMA, and IEEE 802.16e access networks. Moreover, almost identical connection admission control algorithms can be applied both in IP access networks and at IEEE 802.16e base stations; in the former case, one must first gather the link load information from the IP routers. We also note that DiffServ can be used to avoid costly overprovisioning of the backhaul in IEEE 802.16e networks. We present a simple mapping between IEEE 802.16e data delivery services and DiffServ traffic classes, and we propose that IEEE 802.16e base stations should take the backhaul traffic load into account in their admission control decisions. Moreover, different IEEE 802.16e base station scheduling algorithms and uplink channel access mechanisms are studied. In the former study, we show that proportional fair scheduling offers superior spectral efficiency when compared to deficit round-robin, though in some cases at the cost of increased delay. Additionally, we introduce a variant of deficit round-robin (WDRR), where the quantum value depends on the modulation and coding scheme, as sketched below. We also show that there are several ways to implement ertPS in an efficient manner, so that during the silence periods of a VoIP call no uplink slots are granted. The problem here, however, is how to implement the resumption after the silence period while introducing as little delay as possible.
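
The WDRR variant mentioned above keys the deficit round-robin quantum to each connection's modulation and coding scheme. A compact sketch of that idea follows; the MCS-to-quantum table and the surrounding scaffolding are illustrative assumptions rather than the thesis's actual values.

```python
from collections import deque

# Hypothetical per-MCS quantum table (bytes): robust modulations get
# smaller quanta so a cell-edge user cannot monopolise airtime.
MCS_QUANTUM = {"QPSK-1/2": 300, "16QAM-1/2": 600, "64QAM-3/4": 1350}

class WdrrScheduler:
    """Deficit round-robin whose per-round quantum depends on the
    connection's modulation and coding scheme (MCS)."""
    def __init__(self):
        self.flows = {}  # flow id -> [queue of packet sizes, deficit, mcs]

    def add_flow(self, fid, mcs):
        self.flows[fid] = [deque(), 0, mcs]

    def enqueue(self, fid, pkt_bytes):
        self.flows[fid][0].append(pkt_bytes)

    def schedule_round(self):
        """Serve each backlogged flow up to its MCS-dependent quantum."""
        sent = []
        for fid, state in self.flows.items():
            queue, deficit, mcs = state
            if not queue:
                state[1] = 0            # idle flows accumulate no credit
                continue
            deficit += MCS_QUANTUM[mcs]
            while queue and queue[0] <= deficit:
                deficit -= queue[0]
                sent.append((fid, queue.popleft()))
            state[1] = deficit
        return sent
```

Scaling the quantum with the MCS trades some of plain DRR's byte fairness for airtime fairness, which is one way to read the spectral-efficiency trade-off the abstract reports.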

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and consequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or through traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to enable a QoS-optimised experience for every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional Diffserv and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
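
The following toy sketch illustrates the user-centric allocation idea behind CAPS as described above: each user's services share that user's aggregate, and an unresponsive bulk class is capped only so it cannot crowd out the same user's interactive services. The service classes, their ordering and the 50% cap are invented for illustration and are not taken from the thesis.

```python
def per_user_service_rates(user_rate_bps, demands_bps):
    """Split one user's fair share among that user's own services.

    demands_bps: e.g. {'voip': 64_000, 'web': 500_000, 'p2p': 2_000_000}
    Returns per-service rates in bits per second.
    """
    rates = {}
    remaining = user_rate_bps
    # Serve responsive/interactive services first, up to their demand,
    # so no other user's preferences are involved: the trade-off is
    # made entirely within this user's allocation.
    for svc in ("voip", "web"):
        rates[svc] = min(demands_bps.get(svc, 0), remaining)
        remaining -= rates[svc]
    # Unresponsive bulk traffic takes the leftover, capped at half the
    # user's pipe so it cannot starve future interactive flows.
    rates["p2p"] = min(demands_bps.get("p2p", 0), remaining,
                       user_rate_bps // 2)
    return rates

# Example: a heavy P2P user still gets full-quality VoIP and web.
print(per_user_service_rates(3_000_000,
      {"voip": 64_000, "web": 500_000, "p2p": 5_000_000}))
```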

    Recovery Act: Energy Efficiency of Data Networks through Rate Adaptation (EEDNRA) - Final Technical Report


    Proceedings of the 5th MIT/ONR Workshop on C[3] Systems, held at Naval Postgraduate School, Monterey, California, August 23 to 27, 1982

    "December 1982."Includes bibliographies and index.Office of Naval Research Contract no. ONR/N00014-77-C-0532 NR041-519edited by Michael Athans ... [et al.]

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable; this policy also simplifies updating the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and the traffic volume has increased many-fold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic include facilitating the introduction of differentiated services, or offering different qualities of service to different users. Any real congestion episode, whether due to demand exceeding the available bandwidth or to congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions to congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can be in the form of lost packets, changes in round trip time, and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter approach depends on changes in the round trip time, as in TCP Vegas. The protocols employing the second approach are still in their infancy, as they cannot safely co-exist with protocols employing the first approach, whereas TCP Reno and its mutations, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, inherently wasting time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system, with a response time of one round trip time, congestion information being delivered to the end hosts to reduce data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control.
These are to adapt the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1) proportionally fair algorithm and the generalized algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and adjusts the average value during periods of low traffic. This algorithm improves the link utilization and packet loss rate compared to basic RED. We further propose a control theory based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) control algorithms for AQM. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed, and their performance is compared with that of existing algorithms. Later we transform the RED and PI based algorithms into their adaptive versions using the well-known square root of p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most of the previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
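
For reference, the classical PI AQM update that frequency-response designs of this kind build on takes the form below: the marking probability integrates the queue-length error so the average queue is driven to a reference value q_ref. The gains and sampling interval shown are the commonly cited ns-2 defaults from Hollot et al., standing in for the thesis's own tuned values.

```python
class PiAqm:
    """Proportional-Integral AQM: p(k) = p(k-1) + a*(q(k) - q_ref)
                                               - b*(q(k-1) - q_ref)."""
    def __init__(self, q_ref, a=1.822e-5, b=1.816e-5, interval=1 / 170):
        self.q_ref, self.a, self.b = q_ref, a, b
        self.interval = interval    # controller sampling period (s)
        self.p = 0.0                # current marking/dropping probability
        self.q_old = 0.0

    def update(self, q_now):
        """Call every 'interval' seconds with the instantaneous queue
        length (packets); returns the new marking probability."""
        self.p += (self.a * (q_now - self.q_ref)
                   - self.b * (self.q_old - self.q_ref))
        self.p = min(max(self.p, 0.0), 1.0)   # clamp to a probability
        self.q_old = q_now
        return self.p
```

Because the integral action removes steady-state error, the average queue, and hence the queuing delay, can be pinned near q_ref, which is exactly the property the abstract exploits for Quality of Service estimation.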