
    Quality of Service over Specific Link Layers: state of the art report

    The Integrated Services concept is proposed as an enhancement to the current Internet architecture, to provide a better Quality of Service (QoS) than that provided by the traditional Best-Effort service. The features of Integrated Services are explained in this report. To support Integrated Services, certain requirements are imposed on the underlying link layer. These requirements are studied by the Integrated Services over Specific Link Layers (ISSLL) IETF working group, and the status of this ongoing work is reported in this document. More specifically, the solutions for providing Integrated Services over ATM, IEEE 802 LAN technologies and low-bitrate links are evaluated in detail. The ISSLL working group has not yet studied the requirements imposed on the underlying link layer when that link layer is wireless. Therefore, this state of the art report is extended with an identification of the requirements imposed on the underlying wireless link to provide differentiated Quality of Service.

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
    Funding: European Commission, Horizon 2020 Research and Innovation Programme (GN4), Grant 691567; Spanish Ministry of Economy and Competitiveness, Secure Deployment of Services Over SDN and NFV-based Networks Project (S&NSEC), Grant TEC2013-47960-C4-3-

    Design of traffic shaper / scheduler for packet switches and DiffServ networks : algorithms and architectures

    The convergence of communications, information, commerce and computing is creating a significant demand and opportunity for multimedia and multi-class communication services. In such environments, controlling the network behavior and guaranteeing the user's quality of service is required. A flexible hierarchical sorting architecture, which can function either as a traffic shaper or as a scheduler according to the traffic load, is presented to meet this requirement. The core structure can be implemented as a hierarchical traffic shaper that supports a large number of connections with a wide variety of rates and burstiness without loss of granularity in the cells' conforming departure times. The hierarchical traffic shaper implements an exact sorting scheme with a substantially reduced memory size by using two stages of timing queues, and with a substantial reduction in complexity, without introducing any sorting inaccuracy. By setting a suitable threshold on the length of the departure queue and using a lookahead algorithm, the core structure can be converted into a hierarchical rate-adaptive scheduler. Based on the traffic load, it can work either as an exact sorting traffic shaper or as a Generic Cell Rate Algorithm (GCRA) scheduler. Such a rate-adaptive scheduler can greatly reduce the Cell Transfer Delay and the Maximum Memory Occupancy while keeping the fairness in bandwidth assignment that is an inherent characteristic of GCRA. By introducing a best-effort queue to accommodate best-effort traffic, the hierarchical sorting architecture can be changed into a near work-conserving scheduler. It assigns the remaining bandwidth to best-effort traffic, improving the utilization of the output link while guaranteeing the quality of service requirements of those services that require them. The inherent flexibility of the hierarchical sorting architecture, combined with intelligent algorithms, determines its multiple functions. Its implementation not only manages buffer and bandwidth resources effectively, but also requires no more than off-the-shelf hardware technology. The correlation between the extra shaping delay and the rate of the connections is revealed, and an improved fair traffic shaping algorithm, the Departure Event Driven plus Completing Service Time Resorting algorithm, is presented. The proposed algorithm introduces a resorting process into the Departure Event Driven Traffic Shaping Algorithm to resolve the contention of multiple cells that are all eligible for transmission in the traffic shaper. By using the resorting process based on each connection's rate, better fairness and flexibility in the bandwidth assignment for connections with a wide range of rates can be achieved. A Dual Level Leaky Bucket Traffic Shaper (DLLBTS) architecture is proposed for implementation at the edge nodes of Differentiated Services networks in order to facilitate the quality of service management process. The proposed architecture can guarantee not only the class-based Service Level Agreement, but also fair resource sharing among flows belonging to the same class. A simplified DLLBTS architecture is also given, which achieves the goals of DLLBTS while maintaining a very low implementation complexity so that it can be implemented with current VLSI technology. In summary, the shaping and scheduling algorithms in high speed packet switches and DiffServ networks are studied, and intelligent implementation schemes are proposed for them.
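
    The abstract above builds its rate-adaptive scheduler around GCRA conformance. For readers unfamiliar with the Generic Cell Rate Algorithm, the following Python sketch shows its standard virtual-scheduling form; the class name and the parameter names (increment, limit) are illustrative and not taken from the thesis.

```python
# Minimal sketch of GCRA conformance checking (virtual-scheduling form).
# Names and parameters are illustrative, not the thesis's implementation.

class GCRA:
    """`increment` is the nominal inter-cell time (1/rate);
    `limit` is the tolerance that allows limited burstiness."""

    def __init__(self, increment: float, limit: float):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0  # theoretical arrival time of the next conforming cell

    def conforms(self, arrival_time: float) -> bool:
        # A cell arriving earlier than TAT - limit is non-conforming.
        if arrival_time < self.tat - self.limit:
            return False
        # Conforming cell: advance the theoretical arrival time.
        self.tat = max(arrival_time, self.tat) + self.increment
        return True


if __name__ == "__main__":
    gcra = GCRA(increment=1.0, limit=0.5)  # roughly one cell per time unit
    for t in [0.0, 0.4, 1.2, 1.3, 3.0]:
        print(t, gcra.conforms(t))
```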

    On packet switch design

    Design and analysis of a scalable terabit multicast packet switch : architecture and scheduling algorithms

    Internet growth and success not only open a primary route of information exchange for millions of people around the world, but also create unprecedented demand for core network capacity. Existing switches/routers, due to bottlenecks in either the switch architecture or the arbitration complexity, can reach capacities on the order of gigabits per second, but few of them scale to large capacities of terabits per second. In this dissertation, we propose three novel switch architectures with coordinated scheduling algorithms to design a terabit backbone switch/router which is able to deliver large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from the unique features of a modular switch architecture and a distributed resource allocation scheme. Switch I is a unique and modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric. Hence, the switch architecture is scalable to any large size. We propose a distributed round robin (RR) scheduling algorithm which provides fairness and has very low arbitration complexity; a generic sketch of round-robin arbitration is shown after this abstract. Switch I achieves good performance under uniform traffic. However, Switch I does not perform well for non-uniform traffic. Switch II, a modified design, employs link sharing as well as a token ring to overcome the drawback of Switch I. We propose a round robin prioritized link reservation (RR+POLR) algorithm which results in improved performance, especially under non-uniform traffic. However, the RR+POLR algorithm is not flexible enough to adapt to the input traffic. In Switch II, the link reservation rate has a great impact on switch performance. Finally, Switch III is proposed as an enhanced design using link sharing and dual round robin rings. Packet forwarding is based on link reservation. We propose a queue occupancy based dynamic link reservation (QOBDLR) algorithm which can adapt to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource allocation scheme in the sense that dynamic link reservation is carried out according to locally available information. Arbitration complexity is very low. Compared to the output queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch, but also overcomes the speedup problem which seriously limits the scalability of the OQ switch. Hence, Switch III would be a good choice for high performance, scalable, large-capacity core switches.
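
    The sketch below illustrates only the generic round-robin selection idea over per-input queues that the abstract's distributed RR scheme builds on; the class and method names are hypothetical, and this is not the dissertation's distributed RR or QOBDLR algorithm.

```python
# Minimal sketch of a round-robin arbiter over per-input queues.
# Illustrative only; not the dissertation's distributed algorithm.

from collections import deque


class RoundRobinArbiter:
    def __init__(self, num_inputs: int):
        self.queues = [deque() for _ in range(num_inputs)]
        self.pointer = 0  # input that currently has highest priority

    def enqueue(self, input_port: int, cell) -> None:
        self.queues[input_port].append(cell)

    def grant(self):
        """Return (input, cell) for the next non-empty queue, scanning
        round-robin from `pointer`; None if all queues are empty."""
        n = len(self.queues)
        for offset in range(n):
            i = (self.pointer + offset) % n
            if self.queues[i]:
                self.pointer = (i + 1) % n  # rotate priority past the served input
                return i, self.queues[i].popleft()
        return None
```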

    Performance Management in ATM Networks

    ATM is representative of the connection-oriented resource provisioning class of protocols. The ATM network is expected to provide end-to-end QoS guarantees to connections in the form of bounds on delays, errors and/or losses. Performance management involves measurement of QoS parameters, and application of control measures (if required) to improve the QoS provided to connections, or to improve the resource utilization at switches. QoS provisioning is very important for real-time connections in which losses are irrecoverable and delays cause interruptions in service. The QoS of connections on a node is a direct function of the queueing and scheduling on the switch. Most scheduling architectures provide static allocation of resources (scheduling priority, maximum buffer) at connection setup time. End-to-end bounds are obtainable for some schedulers; however, these are precluded for heterogeneously composed networks. The resource allocation does not adapt to the QoS provided on connections in real time. In addition, mechanisms to measure the QoS of a connection in real time are scarce. In this thesis, a novel framework for performance management is proposed. It provides QoS guarantees to real-time connections. It comprises in-service QoS monitoring mechanisms, a hierarchical scheduling algorithm based on dynamic priorities that are adaptive to measurements, and methods to tune the schedulers at individual nodes based on the end-to-end measurements. Also, a novel scheduler is introduced for scheduling maximum delay sensitive traffic. The worst case analysis for leaky bucket constrained traffic arrivals is presented for this scheduler. This scheduler is also implemented on a switch and its practical aspects are analyzed. In order to understand the implementability of complex scheduling mechanisms, a comprehensive survey of the state-of-the-art technology used in the industry is performed. The thesis also introduces a method of measuring the one-way delay and jitter in a connection using in-service monitoring by special cells.
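
    The worst-case analysis mentioned above assumes leaky bucket constrained arrivals. As background, the sketch below checks an arrival trace against a (sigma, rho) leaky-bucket envelope and states the classic sigma/R backlog-based delay bound for a constant-rate server; the helper names and parameters are illustrative, not taken from the thesis.

```python
# Minimal sketch: (sigma, rho) leaky-bucket conformance over a finite trace.
# Illustrative background only; not the thesis's scheduler analysis.

def conforms_to_leaky_bucket(arrivals, sigma, rho):
    """arrivals: list of (time, bits), sorted by time. Returns True if the
    cumulative arrivals over every interval [s, t] stay within
    sigma + rho * (t - s)."""
    for i, (s, _) in enumerate(arrivals):
        cumulative = 0.0
        for t, bits in arrivals[i:]:
            cumulative += bits
            if cumulative > sigma + rho * (t - s):
                return False
    return True


def worst_case_delay_bound(sigma, service_rate):
    """Classic consequence: (sigma, rho) traffic served at a constant rate
    R >= rho never backlogs more than sigma, so delay <= sigma / R."""
    return sigma / service_rate
```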

    Quality-of-service management in IP networks

    Quality of Service (QoS) in Internet Protocol (IP) networks has been the subject of active research over the past two decades. The Integrated Services (IntServ) and Differentiated Services (DiffServ) QoS architectures have emerged as proposed standards for resource allocation in IP networks. These two QoS architectures support the need for multiple traffic queuing systems to allow resource partitioning for the heterogeneous applications making use of the networks. There have been a number of specifications or proposals for the number of traffic queuing classes (Class of Service (CoS)) that will support integrated services in IP networks, but none has provided verification, in the form of analytical or empirical investigation, to prove that its specification or proposal is optimal. Despite the existence of the two standard QoS architectures and the large volume of research work that has been carried out on IP QoS, its deployment still remains elusive in the Internet. This is not unrelated to the complexities associated with some aspects of the standard QoS architectures. [Continues.]

    Supporting Distributed Multimedia Applications on ATM Networks

    ATM offers a number of features, such as high bandwidth and provision for per-connection quality of service guarantees, that make it particularly attractive to multimedia applications. Unfortunately, the bandwidth available at ATM's data-link layer is not visible to applications due to operating system (OS) bottlenecks at the host-network interface. Similarly, the promise of per-connection service guarantees is still elusive due to the lack of appropriate traffic control mechanisms. In this dissertation, we investigate both of these problems, taking multimedia applications as examples. The OS bottlenecks are not limited to the network interfaces, but affect the performance of the entire I/O subsystem. We propose to alleviate the OS's I/O bottleneck by according more autonomy to I/O devices and by using a connection-oriented framework for I/O transfers. We present experimental results on a video conferencing testbed demonstrating the tremendous performance impact of the proposed I/O architecture on networked multimedia applications. To address the problem of quality of service support in ATM networks, we propose a simple cell scheduling mechanism, named carry-over round robin (CORR). Using analytical techniques, we analyze the delay performance of CORR scheduling. Besides providing guarantees on delay, CORR is also fair in distributing the excess bandwidth. We show that, despite its simplicity, CORR is very competitive with other more complex schemes both in terms of delay performance and fairness. (Also cross-referenced as UMIACS-TR-95-88.)
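
    Carry-over round robin allocates each connection a possibly fractional number of slots per frame and carries unused fractional credit into the next frame. The sketch below illustrates only that carry-over idea under simplifying assumptions; the function name is hypothetical, and it omits the redistribution of excess bandwidth described in the abstract.

```python
# Minimal sketch of the carry-over idea behind carry-over round robin (CORR).
# Simplified illustration only; not the dissertation's exact scheduler.

def corr_frame_quotas(allocations, credits):
    """allocations: per-connection slots per frame (may be fractional).
    credits: fractional credit carried over from previous frames.
    Returns (integer slots to serve this frame, updated credits)."""
    slots, new_credits = [], []
    for alloc, credit in zip(allocations, credits):
        total = alloc + credit
        serve = int(total)                 # whole slots served in this frame
        slots.append(serve)
        new_credits.append(total - serve)  # fractional remainder carries over
    return slots, new_credits


if __name__ == "__main__":
    allocs = [1.5, 0.25, 2.0]
    credits = [0.0, 0.0, 0.0]
    for frame in range(4):
        served, credits = corr_frame_quotas(allocs, credits)
        print(f"frame {frame}: serve {served}, carry {credits}")
```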