    PSBS: Practical Size-Based Scheduling

    Size-based schedulers have very desirable performance properties: optimal or near-optimal response time can be coupled with strong fairness guarantees. Despite this, such systems are rarely implemented in practical settings, because they require knowing a priori the amount of work needed to complete jobs: this assumption is very difficult to satisfy in concrete systems. It is much more realistic to inform the system with an estimate of the job sizes, but existing studies point to somewhat pessimistic results when existing scheduling policies are driven by imprecise job size estimates. We set out to design scheduling policies that explicitly deal with inexact job sizes: first, we show that existing size-based schedulers can perform badly with inexact job size information when job sizes are heavily skewed; we then show that this issue, and the pessimistic results reported in the literature, are due to problematic behavior when large jobs are underestimated. Once the problem is identified, it is possible to amend existing size-based schedulers to solve the issue. We generalize FSP -- a fair and efficient size-based scheduling policy -- to solve the problem highlighted above; in addition, our solution handles different job weights (which can be assigned to a job independently of its size). We provide an efficient implementation of the resulting protocol, which we call the Practical Size-Based Scheduler (PSBS). Through simulations on synthetic and real workloads, we show that PSBS has near-optimal performance in a large variety of cases with inaccurate size information, that it performs fairly, and that it handles job weights correctly. We believe that this work shows that PSBS is indeed practical, and we maintain that it could inspire the design of schedulers in a wide array of real-world use cases. (Comment: arXiv admin note: substantial text overlap with arXiv:1403.599)
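
    The core difficulty PSBS addresses can be illustrated with a minimal sketch. The scheduler below is not the authors' PSBS (which builds on FSP's virtual-time emulation); it is a simplified shortest-estimated-remaining-size scheduler that revises a job's estimate upward once the job outlives it, so a single underestimated large job cannot hold top priority indefinitely. The doubling factor and the quantum are illustrative assumptions.

    ```python
    import heapq

    class EstimateAwareScheduler:
        """Shortest-estimated-remaining-size scheduler that tolerates
        underestimated jobs by revising stale estimates upward."""

        def __init__(self):
            self._heap = []   # min-heap of (estimated_remaining, seq, job_id, last_estimate)
            self._seq = 0     # FIFO tie-breaker for equal estimates

        def submit(self, job_id, estimated_size):
            heapq.heappush(self._heap, (estimated_size, self._seq, job_id, estimated_size))
            self._seq += 1

        def run(self, actual_sizes, quantum=1.0):
            """Serve jobs in quanta; yields job ids in completion order.
            actual_sizes maps job_id -> true size, unknown to the scheduler."""
            remaining = dict(actual_sizes)
            while self._heap:
                est, _, job_id, last_est = heapq.heappop(self._heap)
                work = min(quantum, remaining[job_id])
                remaining[job_id] -= work
                if remaining[job_id] <= 0:
                    yield job_id
                    continue
                new_est = est - work
                if new_est <= 0:
                    # Job ran past its estimate: inflate the estimate instead
                    # of letting the job keep top priority forever.
                    new_est = last_est * 2
                    last_est = new_est
                heapq.heappush(self._heap, (new_est, self._seq, job_id, last_est))
                self._seq += 1

    sched = EstimateAwareScheduler()
    sched.submit("small", 2.0)
    sched.submit("big", 3.0)                             # true size 30: badly underestimated
    print(list(sched.run({"small": 2.0, "big": 30.0})))  # -> ['small', 'big']
    ```

    Without the inflation step, the underestimated "big" job would keep the smallest estimated remaining size and monopolize the server, which is exactly the pathology the abstract describes.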

    Fair Queueing based Packet Scheduling for Buffered Crossbar Switches

    Recent developments in VLSI technology make it feasible to integrate on-chip memories into crossbar switching fabrics. Switches using such crossbars are called buffered crossbar switches, in which each crosspoint has a small exclusive buffer. The crosspoint buffers decouple input ports from output ports, and reduce the switch scheduling problem to the fair queueing problem. In this paper, we present a fair-queueing-based packet scheduling scheme for buffered crossbar switches, which requires no speedup and directly handles variable-length packets without segmentation and reassembly (SAR). The presented scheme makes scheduling decisions in a distributed manner and provides performance guarantees. Given the properties of the actual fair queueing algorithm used in the scheduling scheme, we calculate the crosspoint buffer size bound needed to avoid overflow, and analyze the fairness and delay guarantees provided by the scheduling scheme. In addition, we use WF2Q, the fair queueing algorithm with the tightest performance guarantees, as a case study, and present simulation data to verify the analytical results.
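
    For concreteness, the sketch below shows the standard virtual-time timestamping that fair queueing algorithms such as WF2Q build on; it is a generic illustration, not the paper's implementation. Each flow stamps a virtual start and finish time on its packets, and the server transmits the eligible packet with the smallest finish time.

    ```python
    class WFQFlow:
        """Per-flow state for classic WFQ/WF2Q virtual-time stamping."""
        def __init__(self, weight):
            self.weight = weight
            self.last_finish = 0.0   # virtual finish time of this flow's previous packet

    def stamp_packet(flow, length, virtual_now):
        """Timestamp a packet arrival (classic fair-queueing rule).

        start  = max(current virtual time, previous packet's finish)
        finish = start + length / weight
        Serving in increasing finish order approximates GPS; WF2Q adds
        the eligibility test start <= virtual_now before serving.
        """
        start = max(virtual_now, flow.last_finish)
        finish = start + length / flow.weight
        flow.last_finish = finish
        return start, finish

    # Example: two flows with weights 2 and 1 sharing a link.
    heavy, light = WFQFlow(2.0), WFQFlow(1.0)
    print(stamp_packet(heavy, 1000, 0.0))   # (0.0, 500.0)
    print(stamp_packet(light, 1000, 0.0))   # (0.0, 1000.0) -> heavy's packet served first
    ```

    In a buffered crossbar, one such scheduler instance runs per input and per output, and the crosspoint buffer bound derived in the paper absorbs the mismatch between the two decisions.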

    Fairness-Oriented User Scheduling for Bursty Downlink Transmission Using Multi-Agent Reinforcement Learning

    In this work, we develop practical user scheduling algorithms for downlink bursty traffic with an emphasis on user fairness. In contrast to conventional scheduling algorithms that either divide the transmission time slots equally among users or maximize ratios without physical meaning, we propose to use the 5%-tile user data rate (5TUDR) as the metric to evaluate user fairness. Since it is difficult to optimize 5TUDR directly, we first cast the problem into the stochastic game framework and subsequently propose a Multi-Agent Reinforcement Learning (MARL)-based algorithm to perform distributed optimization of the resource block group (RBG) allocation. Furthermore, each MARL agent is designed to take information measured by network counters from multiple network layers (e.g. Channel Quality Indicator, buffer size) as its input state and the RBG allocation as its action, with a proposed reward function designed to maximize 5TUDR. Extensive simulations show that the proposed MARL-based scheduler achieves fair scheduling while maintaining good average network throughput compared to conventional schedulers. (Comment: 30 pages, 13 figures)
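
    The 5TUDR metric itself is simply the 5th percentile of the per-user throughput distribution. The sketch below computes it and folds it into a scalar reward; the mixing weight alpha is an illustrative assumption, not the reward function proposed in the paper.

    ```python
    import numpy as np

    def five_tile_user_rate(per_user_rates):
        """5TUDR: the data rate that the worst-off 5% of users fall below."""
        return float(np.percentile(per_user_rates, 5))

    def reward(per_user_rates, alpha=0.8):
        """Scalar reward mixing fairness (5TUDR) with mean throughput.
        alpha is an illustrative assumption, not the paper's choice."""
        rates = np.asarray(per_user_rates, dtype=float)
        return alpha * five_tile_user_rate(rates) + (1.0 - alpha) * rates.mean()

    # Example: one starving user drags the 5th percentile far below the mean.
    rates = [12.0, 10.0, 11.0, 9.0, 0.5]
    print(five_tile_user_rate(rates))   # 2.2 (linear-interpolated percentile)
    ```

    Optimizing a percentile directly is awkward for gradient-based learners, which motivates the stochastic-game formulation and learned policy in the paper.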

    Priority Based Switch Allocator in Adaptive Physical Channel Regulator for On Chip Interconnects

    Chip multiprocessors (CMPs) are now a popular design paradigm for microprocessors due to their power, performance and complexity advantages: a number of relatively simple cores are integrated on a single die. The on-chip interconnection network (NoC) is an architectural paradigm that offers a stable and generalized communication platform for large-scale chip multiprocessors. The existing APCR model has three regulation schemes designed at the switch allocation stage of the NoC router pipeline: monopolizing, fair-sharing and channel-stealing. Its aim is to fairly allocate physical bandwidth in the form of flit-level transmission units while breaking the conventional assumption that flit size equals phit size. APCR implements its channel-stealing scheme using the round-robin scheduler, a well-known scheduling algorithm for providing fairness, which is not an optimal solution. In this thesis, we extend the APCR model and propose three efficient scheduling policies for the channel-stealing scheme in order to provide better quality of service (QoS). Our work can be divided into three parts. In the first part, we implement a ratio-based scheduling technique in which we keep track of the average number of flits sent from each input in every cycle. It not only provides fairness among virtual channels (VCs), but also increases the saturation throughput of the network. In the second part, we implement an age-based scheduling technique where we prioritize a VC based on the age of its requesting flits (see the sketch after this abstract). The age of each request is calculated as the difference between the current simulation time and the injection time. The age-based scheduler minimizes packet latency. In the last part, we implement a static-priority-based scheduler. In this case, we arbitrarily assign random priorities to packets at the time of their injection into the network: high-priority packets can be forwarded to any of the VCs, whereas low-priority packets can be forwarded to only a limited number of VCs. The static-priority-based scheduler thus limits access to the VCs depending on packet priority. We study performance metrics such as average packet latency and saturation throughput for all three new scheduling techniques. We present simulation results for the three scheduling policies under three synthetic traffic patterns, i.e. bit complement, transpose and uniform random, with injection rates ranging from very low (no load) to high. We evaluate the performance improvement of our proposed scheduling techniques in APCR against the baseline NoC design, and also compare with the monopolizing, fair-sharing and round-robin channel-stealing schemes of APCR. Simulation results from our detailed cycle-accurate simulator show that our new scheduling policies in the APCR model improve network throughput by 10% on synthetic workloads compared with the existing round-robin scheme, and that our scheduling policy in the APCR model outperforms the baseline router by 28X under synthetic workloads.
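
    The age-based policy is the simplest of the three to sketch. The arbiter below is an illustration of the idea, not the thesis's cycle-accurate implementation: each requesting VC is tagged with the injection time of its head flit, and the grant goes to the oldest request.

    ```python
    def age_based_grant(requests, current_time):
        """Grant the VC whose head flit has waited longest.

        requests: dict mapping vc_id -> injection_time of its head flit.
        Returns the granted vc_id, or None if no VC is requesting.
        """
        if not requests:
            return None
        # age = current_time - injection_time; grant the maximum age,
        # breaking ties deterministically in favor of the lower VC id.
        return max(requests, key=lambda vc: (current_time - requests[vc], -vc))

    # Example: VC 2 injected earliest (t=97), so it wins arbitration at t=110.
    granted = age_based_grant({0: 105, 1: 102, 2: 97}, current_time=110)
    assert granted == 2
    ```

    Because the oldest request is always served first, no flit can wait unboundedly behind younger traffic, which is why this policy reduces worst-case packet latency relative to round-robin.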

    An experimental study of monolithic scheduler architecture in cloud computing systems

    Scheduling in large-scale computing clusters is critical to job performance and resource utilization. As cluster sizes grow to thousands of machines and scheduling needs become complex and varied, scheduling in cloud-scale clusters presents unique challenges. To encourage the development of innovative schedulers, there is a need for an experimental framework to analyze scheduling performance over large clusters using relatively modest resources. In this thesis, we present an experimental scheduler testbed to study job scheduling in emulated cloud-scale clusters. We show that the performance of the scheduler in an emulated cluster closely models that in a real cluster of the same size. We use the testbed to evaluate the monolithic scheduler architecture, a popular scheduling architecture, in a 6000-node emulated cluster under a realistic workload. We conclude that scheduling algorithms should embrace randomness in order to overcome resource contention. We infer that scheduling in the monolithic architecture is a network-I/O-intensive process. We calculate optimal values of the design parameters of the monolithic architecture for a Google workload. Hadoop YARN is a popular open-source cluster management framework that can be seen as an implementation of the monolithic scheduler architecture. We evaluate the three default scheduling policies in Hadoop YARN -- Capacity, Fair and FIFO -- under a realistic workload. Based on our experiments, we observe that FIFO scheduling results in unbalanced load across cluster machines and is not suitable for enterprise clusters. We study the trade-offs exploited by the Capacity and Fair schedulers: while the Fair scheduler offers lower scheduling delay by avoiding the head-of-the-line blocking problem, it may drop applications if the load increases; the Capacity scheduler, on the other hand, does not drop any application but errs on the side of higher scheduling delay.
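
    The head-of-the-line blocking trade-off between the FIFO and Fair schedulers can be seen in a toy model. The sketch below is an illustrative single-server simulation, not the emulated-cluster testbed from the thesis: under FIFO, short jobs queued behind a long one inherit its delay, while round-robin fair sharing lets them finish early at the cost of stretching the long job.

    ```python
    def fifo_completion(jobs):
        """jobs: list of service times, in arrival order (all arrive at t=0).
        Returns each job's completion time under run-to-completion FIFO."""
        t, done = 0.0, []
        for size in jobs:
            t += size
            done.append(t)
        return done

    def fair_completion(jobs, quantum=1.0):
        """Processor-sharing approximation via round-robin time slices."""
        remaining = {i: size for i, size in enumerate(jobs)}
        done = [0.0] * len(jobs)
        t = 0.0
        while remaining:
            for i in sorted(remaining):   # snapshot: safe to delete below
                work = min(quantum, remaining[i])
                t += work
                remaining[i] -= work
                if remaining[i] <= 0:
                    done[i] = t
                    del remaining[i]
        return done

    jobs = [10.0, 1.0, 1.0]           # one long job ahead of two short ones
    print(fifo_completion(jobs))      # [10.0, 11.0, 12.0] -> short jobs blocked
    print(fair_completion(jobs))      # [12.0, 2.0, 3.0]  -> short jobs overtake
    ```

    The same shape appears at cluster scale: fair sharing trims scheduling delay for the many short applications, while run-to-completion policies like FIFO concentrate delay behind large ones.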