
    Fairness in a data center

    Existing data centers utilize several networking technologies in order to handle the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost effective. This results in the current trend to converge all traffic into a single networking fabric. Ethernet is both cost-effective and ubiquitous, and as such it has been chosen as the technology of choice for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, largely due to its lossy nature, and therefore has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of standards defined by the IEEE to make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other as well as how each protocol interacts with existing technologies in other layers of the protocol stack. This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that restricts the flow of aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented and their performance is evaluated through simulation.
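
    To make the deficit-round-robin idea concrete, the sketch below shows a DRR scheduler whose per-queue weights are periodically re-tuned from observed packet-length statistics. The class and function names, the quantum value, and the weight-update rule are illustrative assumptions made for this summary, not the DRR-AWC algorithm implemented on the switch in the dissertation.

    from collections import deque

    class DrrQueue:
        """One outbound queue with a deficit counter and a tunable weight."""
        def __init__(self, weight=1.0):
            self.packets = deque()     # each entry is a packet length in bytes
            self.deficit = 0
            self.weight = weight       # scales this queue's per-round quantum
            self.bytes_seen = 0        # running stats for the weight controller
            self.pkts_seen = 0

        def enqueue(self, length):
            self.packets.append(length)
            self.bytes_seen += length
            self.pkts_seen += 1

    def drr_round(queues, base_quantum=1500):
        """Serve every backlogged queue once in deficit-round-robin fashion."""
        sent = []
        for q in queues:
            if not q.packets:
                q.deficit = 0          # an idle queue does not bank credit
                continue
            q.deficit += int(base_quantum * q.weight)
            while q.packets and q.packets[0] <= q.deficit:
                length = q.packets.popleft()
                q.deficit -= length
                sent.append(length)
        return sent

    def adapt_weights(queues):
        """Hypothetical controller: scale each queue's quantum with its observed
        mean packet length so every queue can forward roughly the same number of
        packets per round, whatever the packet sizes of its streams."""
        means = [q.bytes_seen / q.pkts_seen for q in queues if q.pkts_seen]
        if not means:
            return
        overall = sum(means) / len(means)
        for q in queues:
            if q.pkts_seen:
                q.weight = max(0.25, min(4.0, (q.bytes_seen / q.pkts_seen) / overall))
            q.bytes_seen = q.pkts_seen = 0   # start a fresh monitoring window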

    Quality-of-service management in IP networks

    Quality of Service (QoS) in Internet Protocol (IP) networks has been the subject of active research over the past two decades. Integrated Services (IntServ) and Differentiated Services (DiffServ) QoS architectures have emerged as proposed standards for resource allocation in IP networks. These two QoS architectures support the need for multiple traffic queuing systems to allow resource partitioning for the heterogeneous applications making use of the networks. There have been a number of specifications or proposals for the number of traffic queuing classes (Classes of Service (CoS)) that will support integrated services in IP networks, but none has provided verification, in the form of analytical or empirical investigation, to prove that its specification or proposal is optimal. Despite the existence of the two standard QoS architectures and the large volume of research work that has been carried out on IP QoS, its deployment still remains elusive in the Internet, largely because of the complexities associated with some aspects of the standard QoS architectures. [Continues.]

    Slicing in WiFi networks through airtime-based resource allocation

    Network slicing is one of the key enabling technologies for 5G networks. It allows infrastructure owners to assign resources to service providers (tenants), which will afterwards use them to satisfy their end-user demands. This paradigm, which changes the way networks have traditionally been managed, was initially proposed in the wired realm (core networks). More recently, the scientific community has paid attention to the integration of network slicing in wireless cellular technologies (LTE). However, there are not many works addressing the challenges that appear when trying to exploit slicing techniques over WiFi networks, in spite of their growing relevance. In this paper we propose a novel method of proportionally distributing resources in WiFi networks by means of the airtime. We develop an analytical model, which sheds light on how such resources could be split. The validity of the proposed model is assessed by means of simulation-based evaluation over the ns-3 framework. This work has been supported in part by the European Commission and the Spanish Government (Fondo Europeo de Desarrollo Regional, FEDER) by means of the EU H2020 NECOS (777067) and ADVICE (TEC2015-71329) projects, respectively.
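
    A minimal sketch of the airtime-proportional split the abstract describes, assuming per-tenant weights and known per-station PHY rates; the data layout and the equal per-station division inside each tenant are assumptions made here for illustration, not the paper's analytical model.

    def airtime_shares(tenants):
        """Split one second of airtime among tenants in proportion to their
        weights, then split each tenant's share evenly among its stations.

        tenants: dict tenant_name -> {"weight": w, "stations": {sta: phy_rate_bps}}
        Returns sta -> (airtime fraction, approximate throughput in bit/s).
        """
        total_weight = sum(t["weight"] for t in tenants.values())
        shares = {}
        for t in tenants.values():
            tenant_airtime = t["weight"] / total_weight
            per_sta_airtime = tenant_airtime / max(1, len(t["stations"]))
            for sta, rate in t["stations"].items():
                # Equal airtime does not mean equal throughput: what a station
                # extracts from its slice depends on its PHY rate.
                shares[sta] = (per_sta_airtime, per_sta_airtime * rate)
        return shares

    if __name__ == "__main__":
        demo = {
            "tenantA": {"weight": 2, "stations": {"sta1": 54e6, "sta2": 6e6}},
            "tenantB": {"weight": 1, "stations": {"sta3": 24e6}},
        }
        for sta, (air, thr) in airtime_shares(demo).items():
            print(f"{sta}: airtime {air:.2f} s/s, approx {thr / 1e6:.1f} Mb/s")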

    Statistical closed-loop process scheduling

    Traditionally, scheduling algorithms have been implemented as open-loop control systems. This allows the operating system to make quick decisions on the order in which tasks should be scheduled without requiring complex calculations. It is very common for a task to be assigned a priority based on its anticipated performance, or based on general process characteristics (i.e., I/O bound versus CPU bound). The problem with this type of scheduling, and this type of control system in general, is that it is rigid and lacks the ability to adjust based on the actual performance of the system and its processes. This work is an examination of a simple closed-loop scheduling algorithm that dynamically adjusts the way tasks are scheduled based on the actual system and process performance. It is believed that by making this change to the scheduling algorithm, several important aspects of system performance will be affected. The system resources can be more efficiently utilized because scheduling parameters are dynamically adjusted to compensate for the current system load. The apparent responsiveness of the system, from the point of view of the applications, will increase because processes will be treated more fairly. Also, the overall system throughput will improve, because the closed-loop control system allows the scheduler to make better decisions on the order in which tasks should be run. The implementation of a closed-loop scheduler will result in an increase in the overhead of the scheduling algorithm; however, it is believed that this increase in overhead will be minimal. Extensive testing of the algorithm using a wide variety of applications will be used to demonstrate that the increase is indeed acceptable, given the other benefits of the algorithm. Because the proposed scheduling algorithm is statistical in nature, it does not apply to hard real-time operating systems, but it could be used to improve soft real-time operating systems, which have less stringent deadline requirements, and general-purpose time-sharing operating systems. Although this algorithm could have been implemented in any operating system, Linux was chosen as the base platform for this work due to its open-source nature. Linux has the additional benefit of providing a well-known environment and utilities that facilitate the measurements necessary to evaluate the performance of the algorithm. This work demonstrates that the increased overhead required for a closed-loop system is reasonable, and that closed-loop scheduling can provide certain benefits over traditional open-loop schedulers. When compared to the original Linux kernel, the throughput performance degraded typically between 1.5% and 2% depending on the process mix; however, some of the changes to the base kernel can be used to explain this performance degradation. The system clock rate was increased from 100 Hz to 1000 Hz to obtain the timer granularity necessary for the closed-loop control system. Previous work measured a 3.1% increase in overhead when using a 1000 Hz system clock, and measurements taken on a custom version of the original kernel built with a 1000 Hz system clock support that claim. When compared to the base kernel with a 1000 Hz system clock, the closed-loop scheduler produces better performance. This work also demonstrates the disadvantage of an open-loop scheduler. An application was developed with fixed-length CPU bursts and periodic I/O requests to show that blindly giving the CPU to I/O-bound processes and using epochs to age processes results in a significant number of unnecessary process switches that inevitably degrades the performance of the machine. The closed-loop scheduling algorithm balances the load across the processes more evenly, resulting in better performance under a high system load.
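
    The feedback step of such a scheduler can be pictured with a small sketch. The gain, the clamping range, and the per-task fields below are assumptions made for this illustration; the dissertation's actual controller lives inside the Linux scheduler rather than in user-level Python.

    def feedback_adjust(tasks, interval_s):
        """One control interval of a toy closed-loop scheduler: compare the CPU
        share each task actually received with its target share and nudge its
        scheduling weight to shrink the error over the next interval.

        tasks: list of dicts with keys
          "target"   - desired CPU share in [0, 1]
          "cpu_time" - CPU seconds consumed during the last interval
          "weight"   - weight the dispatcher will use for the next interval
        """
        GAIN = 0.5                                   # illustrative proportional gain
        for t in tasks:
            measured = t["cpu_time"] / interval_s if interval_s > 0 else 0.0
            error = t["target"] - measured           # positive: task fell short
            t["weight"] = min(10.0, max(0.1, t["weight"] * (1.0 + GAIN * error)))
            t["cpu_time"] = 0.0                      # reset the measurement window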

    Analysis and Evaluation of Quality of Service (QoS) Router using Round Robin (RR) and Weighted Round Robin (WRR)

    The paper discusses a scheduling system for providing Quality of Service (QoS) guarantees in a network using Round Robin (RR) and Weighted Round Robin (WRR). It presents the simulation and analysis of data obtained by evaluating the performance of the RR and WRR schedulers. The evaluation and analysis of these schedulers is based on parameters such as throughput, loss rate, fairness, jitter and delay. Charts of each parameter are used in the analysis and evaluation of the two scheduling disciplines in order to decide which of RR and WRR is the more efficient algorithm. The simulated output of the experiment enabled us to determine the results for each parameter and to establish which scheduler is best to use, which will help in improving QoS in differentiated services. Keywords: Quality of Service (QoS), Round Robin (RR), Weighted Round Robin (WRR), throughput, scheduling, loss rate, fairness, jitter, delay
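
    For reference, the sketch below shows the basic weighted round-robin discipline the paper evaluates; the packet representation and the 2:1 weighting in the example are arbitrary choices for illustration.

    from collections import deque

    def weighted_round_robin(queues, weights):
        """Yield packets in weighted round-robin order: in each round, queue i
        may send up to weights[i] packets before service moves on."""
        while any(queues):
            for q, w in zip(queues, weights):
                for _ in range(w):
                    if not q:
                        break
                    yield q.popleft()

    if __name__ == "__main__":
        q_a = deque(f"A{i}" for i in range(4))
        q_b = deque(f"B{i}" for i in range(4))
        # A 2:1 weighting gives queue A about two thirds of the service slots
        # while both queues stay backlogged; plain RR would use weights [1, 1].
        print(list(weighted_round_robin([q_a, q_b], [2, 1])))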

    Downstream resource allocation in DOCSIS 3.0 channel bonded networks

    Modern broadband internet access cable systems follow the Data Over Cable Service Interface Specification (DOCSIS) for data transfer between the individual cable modem (CM) and the Internet. The newest version of DOCSIS, version 3.0, provides an abstraction referred to as bonding groups to help manage bandwidth and to increase the bandwidth available to each user beyond that of a single 6 MHz television channel. Channel bonding allows more than one channel to be used by a CM to provide a virtual channel of much greater bandwidth. This combining of channels into bonding groups, especially when channels overlap between more than one bonding group, complicates the resource allocation problem within these networks. The goal of resource allocation in this research is twofold: to provide for fairness among users while at the same time making maximum possible utilization of the available system bandwidth. The problem of resource allocation in computer networks has been widely studied by the academic community. Past work has studied resource allocation in many network types; however, its application in a DOCSIS channel bonded network has not been explored. This research begins by first developing a definition of fairness in a channel bonded system. After providing a theoretical definition of fairness, we implement simulations of different scheduling disciplines and evaluate their performance against this theoretical ideal. The complexity caused by overlapped channels requires even the simplest scheduling algorithms to be modified to work correctly. We then develop an algorithm to maximize the use of the available system bandwidth. The approach involves using competitive analysis techniques and an online algorithm to dynamically reassign flows among the available channels. Bandwidth usage and demand requests are monitored for bandwidth that is underutilized and demand that is unsatisfied, and real-time changes are made to the flow-to-channel mappings to improve the utilization of the total available bandwidth. The contribution of this research is to provide a working definition of fairness in a channel bonded environment, the implementation of several scheduling disciplines and evaluation of their adherence to that definition, and the development of an algorithm to improve overall bandwidth utilization of the system.
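
    As a rough illustration of the flow-to-channel mapping problem, the greedy sketch below places each flow on the least-loaded channel of its bonding group. The data layout and the largest-demand-first heuristic are assumptions made for this summary; they stand in for, and are much simpler than, the online competitive algorithm developed in the research.

    def assign_flows(flows, channels, bonding_groups):
        """Greedy sketch: map each flow to the least-loaded channel that belongs
        to the flow's bonding group.

        flows:          flow_id -> {"group": group_id, "demand": Mbps}
        channels:       channel_id -> capacity in Mbps
        bonding_groups: group_id -> list of channel_ids the group bonds
        Returns (flow_id -> channel_id, channel_id -> offered load in Mbps).
        """
        load = {ch: 0.0 for ch in channels}
        mapping = {}
        # Place the largest demands first so they land on the emptiest channels.
        for fid, f in sorted(flows.items(), key=lambda kv: -kv[1]["demand"]):
            eligible = bonding_groups[f["group"]]
            best = min(eligible, key=lambda ch: load[ch] / channels[ch])
            mapping[fid] = best
            load[best] += f["demand"]
        return mapping, load

    if __name__ == "__main__":
        channels = {"ch1": 38.0, "ch2": 38.0, "ch3": 38.0}
        groups = {"bg1": ["ch1", "ch2"], "bg2": ["ch2", "ch3"]}   # ch2 is shared
        flows = {
            "cm1": {"group": "bg1", "demand": 20.0},
            "cm2": {"group": "bg1", "demand": 15.0},
            "cm3": {"group": "bg2", "demand": 30.0},
        }
        print(assign_flows(flows, channels, groups))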

    Scheduling algorithms in broadband wireless networks

    Scheduling algorithms that support quality of service (QoS) differentiation and guarantees for wireless data networks are crucial to the development of broadband wireless networks. Wireless communication poses special problems that do not exist in wireline networks, such as time-varying channel capacity and location-dependent errors. Although many mature scheduling algorithms are available for wireline networks, they are not directly applicable in wireless networks because of these special problems. This paper provides a comprehensive and in-depth survey of recent research in wireless scheduling. The problems and difficulties in wireless scheduling are discussed. Various representative algorithms are examined; their underlying ideas, pros, and cons are compared and analyzed. At the end of the paper, some open questions and future research directions are addressed.

    The Design and Implementation of a Wireless Video Surveillance System.

    Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The sheer number of cameras limits the amount of video that can be sent to the cloud, especially over wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil's bandwidth utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil allows a video surveillance system to support a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal-throughput allocation strategy of Wi-Fi by delivering up to 25% more objects relevant to a user's query.
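
    The frame-prioritization idea can be pictured as a simple budgeted selection: rank frames by query-relevant objects per byte and upload as many as the wireless budget allows. The field names and the greedy rule below are assumptions made for this illustration, not Vigil's actual scheduling algorithm.

    def select_frames(frames, budget_bytes):
        """Pick frames to upload under a wireless bandwidth budget, favouring
        frames that carry the most query-relevant objects per byte.

        frames: list of dicts with "size_bytes" and "relevant_objects"
                (the object counts would come from an edge-node detector).
        """
        ranked = sorted(frames,
                        key=lambda f: f["relevant_objects"] / max(1, f["size_bytes"]),
                        reverse=True)
        chosen, used = [], 0
        for f in ranked:
            if used + f["size_bytes"] <= budget_bytes:
                chosen.append(f)
                used += f["size_bytes"]
        return chosen

    if __name__ == "__main__":
        frames = [
            {"id": 1, "size_bytes": 40_000, "relevant_objects": 0},
            {"id": 2, "size_bytes": 60_000, "relevant_objects": 3},
            {"id": 3, "size_bytes": 50_000, "relevant_objects": 1},
        ]
        # With a 120 kB budget the two frames containing query-relevant objects
        # fit and the frame without any is left behind.
        print([f["id"] for f in select_frames(frames, 120_000)])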