24 research outputs found

    Evaluating several implementations for the AS Minimum Bandwidth Egress Link Scheduler

    The provision of QoS is taken into account in the definition of new network technologies such as Advanced Switching (AS). AS is a new fabric-interconnect technology that further enhances the capabilities of PCI Express, the next PCI generation. In this paper we discuss the aspects that must be considered when implementing a specific mechanism for the AS minimum bandwidth egress link scheduler, or just MinBW scheduler. We also propose several implementations for this scheduler, analyze their computational complexity, and compare their performance by simulation. The main aspect differentiating AS from other interconnection technologies, which must be taken into account when implementing the AS MinBW scheduler, is that both the link-level flow control and the scheduling are performed at the Virtual Channel (VC) level. This means that the scheduler must be able to enable or disable the selection of a given VC based on the flow-control information.
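As a hedged illustration of the constraint this abstract highlights, namely that per-VC link-level flow control must gate the scheduler's choices, the following sketch shows a credit-aware round-robin selector. The class and method names are illustrative assumptions, not the AS MinBW algorithm itself:

```python
class CreditAwareRoundRobin:
    """Round-robin egress scheduler that skips Virtual Channels (VCs)
    lacking flow-control credits. Illustrative sketch only; the AS MinBW
    scheduler is more elaborate, but faces the same eligibility test."""

    def __init__(self, num_vcs):
        self.num_vcs = num_vcs
        self.queues = [[] for _ in range(num_vcs)]   # per-VC packet queues
        self.credits = [0] * num_vcs                 # link-level flow-control credits
        self.last = num_vcs - 1                      # last VC served

    def enqueue(self, vc, packet):
        self.queues[vc].append(packet)

    def grant_credits(self, vc, n):
        """Credits arrive from the downstream receiver, per VC."""
        self.credits[vc] += n

    def select(self):
        """Return the next (vc, packet) whose VC is both backlogged and
        credit-eligible, or None if no VC is currently eligible."""
        for i in range(1, self.num_vcs + 1):
            vc = (self.last + i) % self.num_vcs
            if self.queues[vc] and self.credits[vc] > 0:
                self.last = vc
                self.credits[vc] -= 1
                return vc, self.queues[vc].pop(0)
        return None
```

A VC with queued packets but no credits is simply passed over, which is exactly the enable/disable behavior the abstract describes.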

    Providing quality of service in systems based on advanced switching

    Advanced Switching (AS) is a network technology based on PCI Express. PCI Express is the next PCI generation, which is already replacing the extensively used PCI bus. AS is an extrapolation of PCI Express that borrows its lower two architectural layers and adds an optimized transaction layer to enable new capabilities such as peer-to-peer communication. Whereas PCI Express has already begun to reshape a new generation of PCs and traditional servers, a common interconnect with the communications industry seems logical and necessary: AS was intended to proliferate in multiprocessor, peer-to-peer systems in the communications, storage, networking, server, and embedded platform environments. On the other hand, Quality of Service (QoS) is becoming an important feature for high-performance networks. AS provides mechanisms that can be used to support QoS. Specifically, an AS fabric permits us to employ Virtual Channels (VCs), egress link scheduling, and an admission control mechanism. Moreover, AS performs link-level flow control on a per-VC basis. The main objective of this thesis has been to study the different AS mechanisms in order to propose a general framework for providing QoS to applications over this network technology. Along this line, the main focus of this work, due to its importance for QoS provision, is the study of the AS scheduling mechanisms. Our goal has been to implement them efficiently, taking into account both their performance and their complexity. To achieve these objectives, we have proposed several possible implementations for the AS minimum bandwidth egress link scheduler that take the link-level flow control into account. We have proposed modifications to the AS table-based scheduler that solve its problems in meeting QoS requirements with variable packet sizes, and we have shown how to configure the resulting table-based scheduler to decouple the bandwidth and latency assignments.
Moreover, we have performed a hardware design of the different schedulers in order to obtain estimates of the arbitration time and the silicon area that they require.
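The decoupling of bandwidth and latency in a table-based scheduler rests on a simple observation: a VC's bandwidth share is the number of table slots it owns, while its worst-case latency depends on how far apart those slots are. The sketch below builds a table that spreads each VC's slots as evenly as possible; the smoothed-selection algorithm used here is an illustrative assumption, not the thesis's actual configuration method:

```python
def smooth_table(weights, table_len):
    """Build a scheduling table (list of VC indices) that spreads each
    VC's slots evenly across the table. Even spacing keeps a VC's
    inter-slot distance (latency) roughly independent of its slot
    count (bandwidth). Illustrative sketch, not the AS table format."""
    total = sum(weights)
    credit = [0.0] * len(weights)
    table = []
    for _ in range(table_len):
        # each VC accrues credit in proportion to its weight ...
        for vc in range(len(weights)):
            credit[vc] += weights[vc] / total
        # ... and the most-credited VC takes the next slot
        vc = max(range(len(weights)), key=lambda v: credit[v])
        credit[vc] -= 1.0
        table.append(vc)
    return table
```

For weights [2, 1, 1] over a 4-slot table, VC 0 receives two slots that are interleaved with the others rather than back to back, so doubling a VC's weight does not double the gap another VC must wait.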

    A Novel Voice Priority Queue (VPQ) Scheduler and Algorithm for VoIP over WLAN Networks

    The deployment of VoIP on Wireless Local Area Networks (WLANs), which are based on the IEEE 802.11 standards, is increasing. Many schedulers have been introduced, such as Weighted Fair Queueing (WFQ), Strict Priority (SP), Generalized Processor Sharing (GPS), Deficit Round Robin (DRR), and Contention-Aware Temporally Fair Scheduling (CATS). Unfortunately, the current scheduling techniques have drawbacks for real-time applications and therefore cannot handle VoIP packets properly. The objective of this research is to propose a new scheduler system model for the VoIP application, named the Voice Priority Queue (VPQ) scheduler. The scheduler system model is designed to ensure efficiency by producing higher throughput and fairness for VoIP packets. In this paper, only the final stage of the VPQ packet scheduler and its algorithm are presented. Simulation topologies for VoIP traffic were implemented and analyzed using the Network Simulator (NS-2). The results show that this method achieves better VoIP throughput and a better fairness index over WLANs.
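The paper's VPQ internals are not reproduced in the abstract, but the core idea of giving voice packets their own queue served ahead of other traffic can be sketched as follows; the class is an illustrative strict-priority skeleton, not the VPQ algorithm itself:

```python
from collections import deque

class VoicePriorityQueue:
    """Sketch of the general principle behind VPQ: voice packets go to a
    dedicated high-priority queue and are served before data traffic.
    Illustrative only; the actual VPQ stages are defined in the paper."""

    def __init__(self):
        self.voice = deque()   # delay-sensitive VoIP packets
        self.data = deque()    # everything else

    def enqueue(self, packet, is_voice):
        (self.voice if is_voice else self.data).append(packet)

    def dequeue(self):
        """Voice first; data only when no voice packet is waiting."""
        if self.voice:
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None
```

A pure strict-priority design like this can starve data traffic under heavy voice load, which is one reason schemes such as VPQ add further stages rather than stopping here.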

    A VOICE PRIORITY QUEUE (VPQ) SCHEDULER FOR VOIP OVER WLANs

    The Voice over Internet Protocol (VoIP) application has seen the fastest growth in the world of telecommunication. The Wireless Local Area Network (WLAN) is the most promising of the wireless network technologies, facilitating high-rate voice services at low cost and with good flexibility. In a voice conversation, each client works as a sender and as a receiver depending on the direction of traffic flow over the network. A VoIP application requires higher throughput, less packet loss and a higher fairness index over the network. VoIP packet streams may experience drops because of the competition among the different kinds of traffic flows over the network. A VoIP application is also delay-sensitive and requires voice packets to arrive on time from the sender to the receiver side without excessive delay over WLANs. The scheduling system model for VoIP traffic is still an unresolved problem, and a new traffic scheduler is necessary to offer higher throughput and a higher fairness index for the VoIP application. The objectives of this thesis are to propose a new scheduler and algorithms that support the VoIP application, and to evaluate, validate and verify the newly proposed scheduler and algorithms against existing scheduling algorithms over WLANs in both simulation and experimental environments. We propose a new Voice Priority Queue (VPQ) scheduling system model and algorithms to solve these scheduling issues. The VPQ system model is implemented in three stages. The first stage ensures efficiency by producing higher throughput and fairness for VoIP packets. The second stage is designed for bursty Virtual-VoIP Flow (Virtual-VF) traffic, while the third stage is a Switch Movement (SM) technique. Furthermore, we compared the VPQ scheduler with other well-known schedulers and algorithms, and observed in our simulation and experimental environments that VPQ provides better results for VoIP over WLANs.

    A novel Voice Priority Queue (VPQ) scheduler and algorithm for VoIP over WLAN networks

    The deployment of VoIP on Wireless Local Area Networks (WLANs), which are based on the IEEE 802.11 standards, is increasing. Many schedulers have been introduced, such as Weighted Fair Queueing (WFQ), Strict Priority (SP), Generalized Processor Sharing (GPS), Deficit Round Robin (DRR), and Contention-Aware Temporally Fair Scheduling (CATS). Unfortunately, the current scheduling techniques have drawbacks for real-time applications and therefore cannot handle VoIP packets properly. The objective of this research is to propose a new scheduler system model for the VoIP application, named the Voice Priority Queue (VPQ) scheduler. The scheduler system model is designed to ensure efficiency by producing higher throughput and fairness for VoIP packets. In this paper, only the final stage of the VPQ packet scheduler and its algorithm are presented. Simulation topologies for VoIP traffic were implemented and analyzed using the Network Simulator (NS-2). The results show that this method achieves better VoIP throughput and a better fairness index over WLANs.

    Downstream Bandwidth Management for Emerging DOCSIS-based Networks

    In this dissertation, we consider downstream bandwidth management in the context of emerging DOCSIS-based cable networks. The latest DOCSIS 3.1 standard for cable access networks represents a significant change to cable networks. For downstream, the current 6 MHz channel size is replaced by a much larger 192 MHz channel, which can potentially provide data rates up to 10 Gbps. Further, the current standard requires equipment to support a relatively new form of active queue management (AQM) referred to as delay-based AQM. Given that more than 50 million households (and climbing) use cable for Internet access, a clear understanding of the impacts of the bandwidth management strategies used in these emerging networks is crucial. Further, given the scope of the change brought by emerging cable systems, now is the time to develop and introduce innovative new methods for managing bandwidth. With this motivation, we address research questions pertaining to the next generation of cable access networks. The cable industry has had to deal with the problem of a small number of subscribers who utilize the majority of network resources. This problem will grow as access rates increase to gigabits per second. Fundamentally, this is a problem of how to manage data flows in a fair manner and provide protection. A well-known performance issue in the Internet, referred to as bufferbloat, has received significant attention recently. High-throughput network flows need sufficiently large buffers to keep the pipe full and absorb occasional burstiness. Standard practice, however, has led to equipment offering very large unmanaged buffers, which can result in sustained queue levels that increase packet latency. One reason why these problems continue to plague cable access networks is the desire for low-complexity bandwidth management that is easily explainable (to access network subscribers and to the Federal Communications Commission).
This research begins by evaluating modern delay-based AQM algorithms in downstream DOCSIS 3.0 environments, with a focus on the fairness and application performance capabilities of single-queue AQMs. We are especially interested in delay-based AQM schemes that have been proposed to combat the bufferbloat problem. Our evaluation involves a variety of scenarios that include tiered services and application workloads. Based on our results, we show that in scenarios involving realistic workloads, modern delay-based AQMs can effectively mitigate bufferbloat; however, they do not address the related problem of managing fairness. To address the combined problem of fairness and bufferbloat, we propose a novel approach to bandwidth management that provides a compromise among the conflicting requirements. We introduce a flow quantization method referred to as adaptive bandwidth binning, where flows that are observed to consume similar levels of bandwidth are grouped together, with the system managed through a hierarchical scheduler designed to approximate weighted fairness while addressing bufferbloat. Based on a simulation study that considers many experimental parameters, including workloads and network configurations, we provide evidence of the efficacy of the idea. Our results suggest that the scheme is able to provide long-term fairness and low delay, with performance close to that of a reference approach based on fair queueing. A further contribution is our idea for replacing 'tiered' levels of service based on service rates with tiering based on weights. The application of our bandwidth binning scheme offers a timely and innovative alternative to broadband service that leverages the potential offered by emerging DOCSIS-based cable systems.
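The quantization step of adaptive bandwidth binning can be sketched simply: flows are mapped to bins by their observed consumption, and the hierarchical scheduler then weights the bins rather than individual flows. The function below is an illustrative assumption about that grouping step (the bin thresholds and names are invented for the example), not the dissertation's implementation:

```python
def assign_bins(flow_rates, bin_edges):
    """Group flows into bins by observed bandwidth consumption, the
    quantization step of the adaptive bandwidth binning idea described
    above. flow_rates maps flow id -> measured rate; bin_edges are
    ascending rate thresholds (illustrative values, not from the text)."""
    bins = {i: [] for i in range(len(bin_edges) + 1)}
    for flow, rate in flow_rates.items():
        # bin index = number of thresholds the flow's rate exceeds
        b = sum(rate > edge for edge in bin_edges)
        bins[b].append(flow)
    return bins
```

Flows that consume similar bandwidth land in the same bin, so a heavy user competes primarily with other heavy users rather than with light interactive flows, which is the fairness compromise the abstract describes.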

    Design and analysis of fair, efficient and low-latency schedulers for high-speed packet-switched networks

    A variety of emerging applications in education, medicine, business, and entertainment rely heavily on high-quality transmission of multimedia data over high-speed networks. Packet scheduling algorithms in switches and routers play a critical role in the overall Quality of Service (QoS) strategy to ensure the performance required by such applications. Fair allocation of the link bandwidth among the traffic flows that share the link is an intuitively desirable property of packet schedulers. In addition, strict fairness can improve the isolation between users, help in countering certain kinds of denial-of-service attacks, and offer more predictable performance. Besides fairness, efficiency of implementation and low latency are among the most desirable properties of packet schedulers. The first part of this dissertation presents a novel scheduling discipline called Elastic Round Robin (ERR), which is simple, fair and efficient with a low latency bound. The per-packet work complexity of ERR is O(1). Our analysis also shows that, in comparison to all previously proposed scheduling disciplines of equivalent complexity, ERR has significantly better fairness properties as well as a lower latency bound. However, all frame-based schedulers, including ERR, suffer from high start-up latencies, burstiness in the output, and delayed correction of fairness. In the second part of this dissertation we propose a new scheduling discipline called Prioritized Elastic Round Robin (PERR), which overcomes the limitations associated with the round-robin service order of ERR. The PERR scheduler achieves this by rearranging the sequence in which packets are transmitted in each round of the ERR scheduler. Our analysis reveals that PERR has a low work complexity which is independent of the number of flows. We also prove that PERR has better fairness and latency characteristics than other known schedulers of equivalent complexity.
In addition to their obvious applications in Internet routers and switches, both the ERR and PERR schedulers also satisfy the unique requirements of wormhole switching, popular in the interconnection networks of parallel systems. Finally, using real gateway traces and a new measure of instantaneous fairness borrowed from the field of economics, we present simulation results that demonstrate the improved fairness characteristics and latency bounds of the ERR and PERR schedulers in comparison with other scheduling disciplines of equivalent efficiency. Ph.D., Electrical Engineering -- Drexel University, 200
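ERR belongs to the frame-based round-robin family of schedulers with O(1) per-packet work. ERR's elastic allowance bookkeeping is not reproduced in the abstract, so as a hedged illustration of the family the sketch below implements one round of Deficit Round Robin, the closest well-known relative mentioned by name elsewhere on this page:

```python
from collections import deque

def drr_round(queues, quantum, deficits):
    """One round of Deficit Round Robin, a frame-based scheduler in the
    same family as ERR/PERR (ERR replaces the fixed quantum with an
    elastic allowance; that bookkeeping is not shown here).
    queues: list of deques of packet sizes in bytes.
    deficits: mutable per-flow deficit counters, updated in place.
    Returns the list of (flow_index, packet_size) pairs sent this round."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quantum          # each backlogged flow earns a quantum
        while q and q[0] <= deficits[i]:
            size = q.popleft()
            deficits[i] -= size         # pay for the packet actually sent
            sent.append((i, size))
        if not q:
            deficits[i] = 0             # empty queues carry no deficit forward
    return sent
```

The deficit counter is what makes the discipline fair with variable packet sizes: a flow whose large packet did not fit this round keeps its unused allowance and sends the packet in a later round.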

    Traffic Management for Next Generation Transport Networks


    Packet scheduling strategies for emerging service models in the internet

    Traditional as well as emerging new Internet applications, such as video-conferencing and live multimedia broadcasts from Internet TV stations, will rely on scheduling algorithms in switches and routers to meet the diversity of service requirements desired from the network. This dissertation focuses on four categories of service requirements that cover the vast majority of current as well as emerging applications: best-effort service, guaranteed service (delay and bandwidth), controlled load service, and soft real-time service. For each of these service types, we develop novel packet scheduling strategies that achieve better performance and better fairness than existing strategies. Best-effort and guaranteed services: A fair packet scheduler designed for best-effort service can also be employed to achieve bandwidth and delay guarantees. This dissertation proposes a novel fair scheduling algorithm, called Greedy Fair Queueing (GrFQ), that explicitly incorporates the goal of achieving better fairness into the actions of the scheduler. A simplified version of the scheduler is also proposed for easier deployment in real networks. Controlled load service: This dissertation analyzes and defines requirements on packet schedulers serving traffic that requests the controlled load service (part of the Integrated Services architecture). We then propose a novel scheduler, called the CL(®) scheduler, which provides service differentiation for aggregated controlled load traffic. The proposed scheduler satisfies the defined requirements with very low processing complexity and without requiring per-flow management. Soft real-time service: We formally define the service requirements of soft real-time applications, which have delay constraints but can tolerate some packet losses. Two novel schedulers of different levels of complexity are proposed.
These schedulers achieve better performance (lower overall loss rates) and better fairness than previously known schedulers. We adapt a metric used widely in economics, called the Gini index, to our purpose of evaluating the fairness achieved by our schedulers under real traffic conditions. The Gini index captures the instantaneous fairness achieved at most instants of time, as opposed to previously used measures of fairness in the networking literature. Using real video, audio and gateway traffic traces, we show that the proposed schedulers achieve better performance and fairness characteristics than other known schedulers. Ph.D., Electrical Engineering -- Drexel University, 200
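The Gini index adapted here from economics has a standard closed form: sort the per-flow service amounts and compare the rank-weighted sum to perfect equality. The sketch below computes it; how the dissertation samples service over time is not specified in the abstract, so the input is simply a snapshot of per-flow service:

```python
def gini_index(services):
    """Gini index of per-flow service received, as a measure of
    instantaneous (un)fairness: 0 means perfectly equal service,
    values approaching 1 mean highly unequal service."""
    xs = sorted(services)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formula:
    #   G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n,  i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

Unlike a long-run throughput ratio, this can be evaluated over any short window, which is what makes it usable as the instantaneous fairness measure the abstract refers to.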

    Resource allocation in computer networks: Fundamental principles and practical strategies

    Fairness in the allocation of resources in a network shared among multiple flows of traffic is an intuitively desirable property with many practical benefits. Fairness in traffic management can improve the isolation between traffic streams, offer more predictable performance, eliminate certain kinds of transient bottlenecks, and serve as a critical component of a strategy to achieve guaranteed services such as delay bounds and minimum bandwidths. Fairness in bandwidth allocation over a shared link has been extensively researched over the last decade. However, as flows of traffic traverse the network, they share not only bandwidth but also several other types of resources, such as processor, buffer, and, in mobile systems, power. If the network is not fair in allocating any one of the shared resources, denial-of-service attacks based on excessive use of that resource become possible. Therefore, the desired eventual goal is overall fairness in the use of all the resources in the network. This dissertation is concerned with achieving fairness in the joint allocation of multiple heterogeneous resources. We consider resources as either prioritized (such as bandwidth and buffer resources) or essential (such as processing and bandwidth resources). For each of these system types, we present a simple but powerful general principle for defining fairness, based on any of the classic notions of fairness defined for a single resource, such as max-min fairness, proportional fairness and utility max-min fairness. Using max-min fairness as an example, we apply the principles to a system with a shared buffer and a shared link, and to a system with a shared processor and a shared link, and propose practical and provably fair algorithms for the joint allocation of buffer and bandwidth resources and for the joint allocation of processing and bandwidth resources.
We demonstrate the fairness achieved by our algorithms through simulation results using both synthetic traffic and real traffic traces. The principles and algorithms detailed in this dissertation may also be applied in a variety of other contexts involving resource sharing. Ph.D., Electrical Engineering -- Drexel University, 200
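The single-resource max-min fairness that the dissertation's joint-allocation principle builds on can be computed by progressive filling: satisfy the smallest demands first and split the remainder equally among the rest. This sketch covers only the single-resource base case, not the joint multi-resource allocation the dissertation proposes:

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation of a single shared resource via
    progressive filling: flows with small demands are fully satisfied,
    and the remaining capacity is split equally among the rest."""
    alloc = {}
    # Visit flows in ascending order of demand
    remaining = sorted(demands.items(), key=lambda kv: kv[1])
    cap = capacity
    n = len(remaining)
    for i, (flow, demand) in enumerate(remaining):
        share = cap / (n - i)            # equal split of what is left
        alloc[flow] = min(demand, share) # never give more than demanded
        cap -= alloc[flow]
    return alloc
```

With demands {a: 2, b: 4, c: 10} on a link of capacity 12, flows a and b are fully satisfied and c absorbs the remainder, the hallmark of max-min fairness: no flow can gain without taking from a flow that already has less.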