
    Per-Priority Flow Control (PPFC) Framework for Enhancing QoS in Metro Ethernet

    Internet communication and services are growing steadily in variety, capacity, and demand. Traffic management and quality of service (QoS) approaches for optimizing the Internet have therefore become a challenging area of research, and flow control and congestion control are significant fundamentals of traffic control, especially on high-speed Metro Ethernet. IEEE has standardized a method (the IEEE 802.3x standard) that provides Ethernet Flow Control (EFC) using PAUSE frames, sent as MAC control frames in the data link layer, to enable or disable data frame transmission. With the introduction of Metro Carrier Ethernet, the conventional ON/OFF IEEE 802.3x approach may no longer be sufficient. Therefore, a new architecture and mechanism offering more flexible and efficient flow and congestion control, as well as better QoS provisioning, is now necessary.
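    To illustrate the ON/OFF mechanism referred to above, the sketch below assembles an IEEE 802.3x PAUSE frame with scapy; the library choice, source MAC, and interface name are assumptions, not part of the paper. The per-priority extension (PFC, IEEE 802.1Qbb) replaces the single pause timer with a per-class enable vector and eight timers.

```python
# Illustrative sketch (not from the paper): building an IEEE 802.3x PAUSE
# frame with scapy.
from scapy.all import Ether, Raw, sendp
import struct

PAUSE_DST = "01:80:c2:00:00:01"   # reserved MAC-control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_OPCODE = 0x0001

def build_pause(src_mac: str, pause_quanta: int) -> Ether:
    """pause_quanta: 0..65535, each unit equals 512 bit times on the link."""
    payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
    payload += b"\x00" * (46 - len(payload))          # pad to minimum payload size
    return Ether(dst=PAUSE_DST, src=src_mac,
                 type=MAC_CONTROL_ETHERTYPE) / Raw(payload)

# Example: ask the link partner to pause for ~3.4 ms on a 10 Gb/s link
# (65535 quanta * 512 bit times / 10e9 bps).
frame = build_pause("00:11:22:33:44:55", 65535)
# sendp(frame, iface="eth0")   # uncomment to transmit (requires root)
```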

    A quality of service architecture for WLAN-wired networks to enhance multimedia support

    Includes abstract. Includes bibliographical references (leaves 77-84). The use of WLANs for the provision of IP multimedia services faces a number of challenges, including quality of service (QoS). Because WLAN users usually access multimedia services over a wired backbone, attention must be paid to QoS over the integrated WLAN-wired network. This research focuses on the provision of QoS to WLAN users accessing multimedia services over a wired backbone. In this thesis, the IEEE 802.11-2007 enhanced distributed channel access (EDCA) mechanism is used to provide prioritized QoS at the WLAN media access control (MAC) layer, while weighted round robin (WRR) queue scheduling is used to provide prioritized QoS at the IP layer. The interworking of the EDCA scheme in the WLAN and the WRR scheduling scheme in the wired network provides end-to-end QoS on a WLAN-wired IP network. A mapping module is introduced to enable the interworking of the EDCA and WRR mechanisms.
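    The sketch below shows one way such a mapping module and WRR scheduler could look: EDCA access categories are mapped to DSCP code points and WRR queue weights, and each WRR round serves up to its weight in packets from every queue. The concrete DSCP values and weights are illustrative assumptions, not the mapping chosen in the thesis.

```python
# Illustrative EDCA-to-WRR mapping and a single weighted round-robin pass.
from collections import deque

EDCA_TO_WRR = {
    "AC_VO": {"dscp": 46, "queue": 0, "weight": 4},   # voice (EF)
    "AC_VI": {"dscp": 34, "queue": 1, "weight": 3},   # video (AF41)
    "AC_BE": {"dscp": 0,  "queue": 2, "weight": 2},   # best effort
    "AC_BK": {"dscp": 8,  "queue": 3, "weight": 1},   # background (CS1)
}

def wrr_schedule(queues, weights):
    """One WRR round: serve up to weights[i] packets from queue i."""
    served = []
    for i, q in enumerate(queues):
        for _ in range(weights[i]):
            if q:
                served.append(q.popleft())
    return served

queues = [deque(), deque(), deque(), deque()]
for ac, params in EDCA_TO_WRR.items():
    queues[params["queue"]].append(f"{ac} packet (DSCP {params['dscp']})")
print(wrr_schedule(queues, [4, 3, 2, 1]))
```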

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources needed for today's cloud computing workloads. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted in datacenters and to maximize performance, the datacenter network must be used effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the topologies proposed for them, their traffic properties, and general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
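    As a concrete example of one of the mechanisms listed above (traffic shaping), the following is a minimal token-bucket shaper sketch; the rate and burst values are arbitrary and the code is illustrative, not taken from the survey.

```python
# Minimal token-bucket shaper: packets pass while tokens remain, otherwise
# they must be queued or dropped.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)  # 10 Mb/s, 15 kB burst
print(shaper.allow(1500))   # True while tokens remain
```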

    Scalable Bandwidth Management in Software-Defined Networks

    Demand for bandwidth management has grown as network traffic increases. The use of network applications such as real-time video streaming, voice over IP, and video conferencing in IP networks has risen rapidly in recent years and is projected to keep growing. These applications consume a lot of bandwidth, putting increasing pressure on networks. To deal with such challenges, modern networks must be application-aware and able to offer Quality of Service (QoS) based on application requirements. Network paradigms such as Software-Defined Networking (SDN) allow direct network programmability, so that network behaviour can be changed to suit application needs. In this dissertation, the objective is to investigate whether SDN can provide scalable QoS to a set of dynamic traffic flows. Methods are implemented to attain scalable bandwidth management and provide high QoS with SDN. Differentiated Services Code Point (DSCP) values and DSCP remarking with meters are used to implement high QoS requirements such that bandwidth guarantees are provided to a selected set of traffic flows. The theoretical methodology for achieving QoS is implemented, and experiments are conducted to validate and illustrate that QoS can be realized in SDN; however, high QoS could not be fully achieved because of the lack of support for meters with DSCP remarking. The research work presented in this dissertation aims to identify and address the critical aspects of SDN-based QoS provisioning using flow aggregation techniques. Several tests and demonstrations are conducted using virtualization methods. The tests are aimed at supporting the proposed ideas and at creating an improved understanding of practical SDN use cases and the challenges that emerge in virtualized environments. DiffServ Assured Forwarding is chosen as the QoS architecture for implementation. Bandwidth management scalability in SDN is evaluated through throughput analysis under two conditions: (1) per-flow QoS operation and (2) QoS using DiffServ operation in an SDN environment with the Ryu controller. The results show that better QoS performance and bandwidth management are achieved with the DiffServ-based operation than with the per-flow QoS operation.
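    A minimal sketch of the DSCP-remarking-meter idea with the Ryu controller mentioned above is shown below, assuming OpenFlow 1.3; the meter id, rate, and match fields are illustrative assumptions, not the dissertation's exact configuration.

```python
# Sketch: install a DSCP-remark meter and route AF11 traffic through it (Ryu, OF 1.3).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class DscpMeterApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Meter 1: traffic above 10 Mb/s has its DSCP drop precedence raised by
        # one level (e.g. AF11 -> AF12), in the spirit of Assured Forwarding policing.
        band = parser.OFPMeterBandDscpRemark(rate=10000, burst_size=1000,
                                             prec_level=1)
        dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                                       flags=ofp.OFPMF_KBPS, meter_id=1,
                                       bands=[band]))

        # Send AF11-marked IPv4 flows through the meter, then forward normally.
        match = parser.OFPMatch(eth_type=0x0800, ip_dscp=10)   # AF11
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionMeter(1),
                parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```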

    Performance Improvement of Time-Sensitive Fronthaul Networks in 5G Cloud-RANs Using Reinforcement Learning-Based Scheduling Scheme

    The rapid surge in internet-driven smart devices and bandwidth-hungry multimedia applications demands high-capacity internet services and low connection latencies. Cloud radio access networks (C-RANs) are considered a prominent solution for meeting the stringent requirements of fifth-generation (5G) and beyond networks by deploying fronthaul transport links between baseband units (BBUs) and remote radio heads (RRHs). High-capacity optical links are the conventional mainstream technology for deploying the fronthaul in C-RANs, but densification of optical links significantly increases cost and imposes several design challenges on the fronthaul architecture, which makes them impractical. In contrast, Ethernet-based fronthaul links can be a cost-effective solution for connecting BBUs and RRHs, but they struggle to meet the rigorous end-to-end delay, jitter, and bandwidth requirements of fronthaul networks. This is because of inefficient resource allocation and congestion control schemes for the capacity-constrained Ethernet-based fronthaul links. In this research, a novel reinforcement learning-based optimal resource allocation scheme is proposed that eliminates congestion and improves latency, making capacity-constrained, low-cost Ethernet a suitable solution for fronthaul networks. The experimental results verified a notable 50% improvement in delay and jitter compared to existing schemes. Furthermore, the proposed scheme demonstrated an enhancement of up to 70% in resolving conflicting time slots and minimizing packet loss ratios. Hence, the proposed scheme outperforms existing state-of-the-art resource allocation techniques in satisfying the stringent performance demands of fronthaul networks.
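    For readers unfamiliar with the underlying technique, the following is a generic tabular Q-learning sketch for assigning fronthaul time slots so that flows avoid conflicting slots; the state, action, and reward definitions are invented for illustration and are not the scheme proposed in the paper.

```python
# Generic Q-learning sketch: learn to pick free time slots and avoid conflicts.
import random
from collections import defaultdict

N_SLOTS = 8
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = defaultdict(lambda: [0.0] * N_SLOTS)   # state (occupied slots) -> Q per slot

def choose_slot(state):
    if random.random() < EPSILON:                                # explore
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=lambda a: q_table[state][a])  # exploit

def run_episode(n_flows=4):
    occupied = frozenset()
    for _ in range(n_flows):
        action = choose_slot(occupied)
        # Conflicting (already occupied) slots are penalized, free slots rewarded.
        r = -1.0 if action in occupied else 1.0
        nxt = occupied | {action}
        target = r + GAMMA * max(q_table[nxt])
        q_table[occupied][action] += ALPHA * (target - q_table[occupied][action])
        occupied = nxt

for _ in range(5000):
    run_episode()
```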

    Development of the On-board Aircraft Network

    Phase II will focus on the development of the on-board aircraft networking portion of the testbed, which includes the subnet and router configuration and investigation of QoS issues. This implementation of the testbed will consist of a workstation, which functions as the end system, connected to a router. The router will service two subnets that provide data to the cockpit and the passenger cabin. During testing, data will be transferred between the end system and systems on both subnets. QoS issues will be identified and a preliminary scheme will be developed. The router will be configured for the testbed network and initial security studies will be initiated. In addition, architecture studies of both the SITA and Inmarsat networks will be conducted.

    A Case for Data Centre Traffic Management on Software Programmable Ethernet Switches

    Virtualisation first, and cloud computing later, have led to a consolidation of workloads in data centres that now also comprises latency-sensitive application domains such as High Performance Computing and telecommunication. These types of applications require strict latency guarantees to maintain their Quality of Service. In virtualised environments, with their churn, satisfying such guarantees demands adaptability and flexibility. At the same time, the sheer scale of the infrastructures favours commodity (Ethernet) over specialised (InfiniBand) hardware. For that purpose, this paper introduces a novel traffic management algorithm that combines Rate-limited Strict Priority and Deficit Round-Robin for latency-aware and fair scheduling, respectively. In addition, we present an implementation of this algorithm on the bmv2 P4 software switch and evaluate it against standard priority-based and best-effort scheduling. Comment: 8th IEEE International Conference on Cloud Networking (IEEE CloudNet 2019).
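    The sketch below gives a simplified Python rendering of the combined idea: one rate-limited strict-priority queue served first, with a deficit-round-robin pass over the remaining queues. Quanta, rates, and the one-packet-per-queue simplification are assumptions for illustration, not the paper's bmv2/P4 implementation.

```python
# Simplified combined scheduler: rate-limited strict priority + DRR fallback.
import time
from collections import deque

class Scheduler:
    def __init__(self, n_drr_queues=3, quantum=1500, sp_rate_bps=50_000_000):
        self.sp_queue = deque()                      # latency-sensitive traffic
        self.drr_queues = [deque() for _ in range(n_drr_queues)]
        self.deficits = [0] * n_drr_queues
        self.quantum = quantum                       # bytes credited per DRR round
        self.sp_rate = sp_rate_bps / 8.0             # bytes per second
        self.sp_burst = 15_000                       # token cap for the SP queue
        self.sp_tokens = 0.0
        self.last = time.monotonic()

    def dequeue(self):
        now = time.monotonic()
        self.sp_tokens = min(self.sp_burst,
                             self.sp_tokens + (now - self.last) * self.sp_rate)
        self.last = now

        # Strict priority, but only while its rate limit allows it.
        if self.sp_queue and self.sp_tokens >= len(self.sp_queue[0]):
            pkt = self.sp_queue.popleft()
            self.sp_tokens -= len(pkt)
            return pkt

        # Otherwise one DRR pass over the best-effort queues (one packet max each).
        for i, q in enumerate(self.drr_queues):
            if not q:
                continue
            self.deficits[i] += self.quantum
            if self.deficits[i] >= len(q[0]):
                self.deficits[i] -= len(q[0])
                return q.popleft()
        return None
```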