    Kairos: Preemptive Data Center Scheduling Without Runtime Estimates

    The vast majority of data center schedulers use task runtime estimates to improve the quality of their scheduling decisions. Knowledge about runtimes allows the schedulers, among other things, to achieve better load balance and to avoid head-of-line blocking. Obtaining accurate runtime estimates is, however, far from trivial, and erroneous estimates lead to sub-optimal scheduling decisions. Techniques to mitigate the effect of inaccurate estimates have shown some success, but the fundamental problem remains. This paper presents Kairos, a novel data center scheduler that assumes no prior information on task runtimes. Kairos introduces a distributed approximation of the Least Attained Service (LAS) scheduling policy. Kairos consists of a centralized scheduler and per-node schedulers. The per-node schedulers implement LAS for tasks on their node, using preemption as necessary to avoid head-of-line blocking. The centralized scheduler distributes tasks among nodes in a manner that balances the load and imposes on each node a workload in which LAS provides favorable performance. We have implemented Kairos in YARN. We compare its performance against the YARN FIFO scheduler and Big-C, an open-source state-of-the-art YARN-based scheduler that also uses preemption. Compared to YARN FIFO, Kairos reduces the median job completion time by 73% and the 99th percentile by 30%. Compared to Big-C, the improvements are 37% for the median and 57% for the 99th percentile. We evaluate Kairos at scale by implementing it in the Eagle simulator and comparing its performance against Eagle. Kairos improves the 99th percentile of short job completion times by up to 55% for the Google trace and 85% for the Yahoo trace
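
    The core idea, running on each node the task that has received the least service so far and preempting longer-running tasks, can be illustrated with a small sketch. The code below is not the Kairos/YARN implementation; the class name, the quantum-based preemption points, and the execute callback are simplifying assumptions for illustration only.

    import heapq

    class LASNodeScheduler:
        """Per-node Least Attained Service (LAS) sketch: always run the task
        that has received the least CPU time so far, one quantum at a time."""

        def __init__(self, quantum=0.1):
            self.quantum = quantum     # illustrative time slice between preemption points
            self.queue = []            # min-heap of (attained_service, task_id)

        def submit(self, task_id):
            # A new task has zero attained service, so it immediately outranks
            # tasks that have already run for a long time.
            heapq.heappush(self.queue, (0.0, task_id))

        def run_one_quantum(self, execute):
            # Run the least-served task for one quantum; if it does not finish,
            # requeue it with its updated attained service (i.e., preempt it).
            if not self.queue:
                return None
            attained, task_id = heapq.heappop(self.queue)
            finished = execute(task_id, self.quantum)
            if not finished:
                heapq.heappush(self.queue, (attained + self.quantum, task_id))
            return task_id

    Because newly submitted tasks start with zero attained service, they immediately take priority over tasks that have already run for a long time, which is how a LAS policy avoids head-of-line blocking without runtime estimates.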

    Good Faith Business Judgment: A Theory of Rhetoric in Corporate Law Jurisprudence

    This Article develops a theory of rhetoric in corporate law jurisprudence. It begins by examining a recent innovation in Delaware case law: the emerging principle of good faith. Good faith is an old notion in law generally, but it offers to bring significant change to corporate law, including realignment of the business judgment rule and a shift in the traditional balance between the authority of boards and the accountability of boards to courts. This Article argues, however, that good faith functions as a rhetorical device rather than a substantive standard. That is, it operates as a speech act, a performance, as opposed to a careful method of analysis. To explain the sudden appearance of good faith, this Article articulates a model of corporate law rhetoric. Courts invent rhetorical devices to loosen corporate law doctrine and increase judicial review of board decisionmaking in response to scandals and other extralegal pressures operating upon the judiciary. These pressures stem largely from the twin threats of corporate migration and federal preemption, both of which imperil the primacy of the Delaware judiciary as a corporate lawmaker. In periods of crisis and scandal, the judiciary employs rhetorical devices to reduce these pressures, typically with the effect of increasing board accountability, only to return, once the pressure recedes, to a position of board deference. After finding several examples of this pattern in corporate law history, this Article argues, ultimately, that regular movement back and forth along the authority/accountability spectrum is an essential feature of corporate law jurisprudence and that understanding the rhetorical devices that permit this movement is necessary to complete any account of what corporate law is and how it works

    Consumer Harm Acts? An Economic Analysis of Private Actions Under State Consumer Protection Acts

    State Consumer Protection Acts (CPAs) were adopted in the 1960s and 1970s to protect consumers from unfair and deceptive practices that would not be redressed but for the existence of the acts. In this sense, CPAs were designed to fill existing gaps in market, legal and regulatory protections of consumers. CPAs were designed to solve two simple economic problems: 1) individual consumers often do not have the incentive or means to pursue individual claims against mass marketers who engage in unfair and deceptive practices; and, 2) because of the difficulty of establishing elements of either common law fraud or breach of promise, those actions alone are too weak an instrument to deter seller fraud and deception. The most striking lesson of our analysis is that the typical state CPA – with relaxed rules for establishing liability, statutory damages, damage multipliers, attorneys fees and costs, and class actions – solves the basic economic problem that CPAs were intended to address several times over. The effect of this redundancy in solutions is that CPAs can deter the provision of valuable information to consumers and, thus, harm consumers. That is, as currently applied state Consumer Protection Acts harm consumers. This need not be the case. A few modest reforms would dramatically improve the impact of CPAs on consumer welfare

    Hybrid, Job-Aware, and Preemptive Datacenter Scheduling

    Scheduling in datacenters is an important, yet challenging problem. Datacenters are composed of a large number, typically tens of thousands, of commodity computers running a variety of data-parallel jobs. The role of the scheduler is to assign cluster resources to jobs, which is not trivial due to the large scale of the cluster, as well as the high scheduling load (tens of thousands of scheduling decisions per second). In addition to scalability, modern datacenters face increasingly heterogeneous workloads composed of long batch jobs, e.g., data analytics, and latency-sensitive short jobs, e.g., operations of user-facing services. In such workloads, and especially if the cluster is highly utilized, it is challenging to keep short-running jobs from getting stuck behind long-running jobs, i.e., head-of-line blocking. Schedulers have evolved from being centralized (one single scheduler for the entire cluster) to distributed (many schedulers that take scheduling decisions in parallel). Although distributed schedulers can handle the large-scale nature of datacenters, they trade scheduling accuracy for lower latency. The complexity of scheduling in datacenters is exacerbated by the data-parallel nature of the jobs. That is, a job is composed of multiple tasks, and the job completes only when all of its tasks complete. A scheduler that takes this fact into account, i.e., a job-aware scheduler, can use this information to make better scheduling decisions. Furthermore, to improve the quality of their scheduling decisions, most datacenter schedulers use job runtime estimates. Obtaining accurate runtime estimates is, however, far from trivial, and erroneous estimates may lead to sub-optimal scheduling decisions. Considering these challenges, in this dissertation we argue the following: (i) a hybrid centralized/distributed design can get the best of both worlds by scheduling long jobs in a centralized way and short jobs in a distributed way; (ii) such a hybrid scheduler can avoid head-of-line blocking and provide job-awareness by dynamically partitioning the cluster between short and long jobs and by executing each job to completion once it has started; (iii) a scheduler can dispense with runtime estimates by sharing the resources of a node through preemption and by load balancing jobs among the nodes.
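
    A minimal sketch of the hybrid dispatch idea follows. It is not the dissertation's prototype; the class name, the fixed partition fraction, and the is_long flag are illustrative assumptions used only to show how long and short jobs can take different scheduling paths.

    class HybridDispatcher:
        """Sketch of hybrid scheduling: long jobs are placed by a centralized
        scheduler over the whole cluster, short jobs by distributed schedulers
        restricted to a dynamically reserved partition."""

        def __init__(self, nodes, short_fraction=0.2):
            self.nodes = nodes
            self.short_fraction = short_fraction   # illustrative partition size

        def short_partition(self):
            # Reserve a slice of the cluster for short jobs so they cannot get
            # stuck behind long batch jobs (head-of-line blocking).
            cutoff = max(1, int(len(self.nodes) * self.short_fraction))
            return self.nodes[:cutoff]

        def dispatch(self, job, is_long):
            if is_long:
                # Centralized path: global view, better placement for long jobs.
                return ("centralized", self.nodes)
            # Distributed path: low-latency placement within the short-job partition.
            return ("distributed", self.short_partition())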

    Using hierarchical scheduling to support soft real-time applications in general-purpose operating systems

    The CPU schedulers in general-purpose operating systems are designed to provide fast response time for interactive applications and high throughput for batch applications. The heuristics used to achieve these goals do not lend themselves to scheduling real-time applications, nor do they meet other scheduling requirements such as coordinating scheduling across several processors or machines, or enforcing isolation between applications, users, and administrative domains. Extending the scheduling subsystems of general-purpose operating systems in an ad hoc manner is time consuming and requires considerable expertise as well as access to the operating system's source code. Furthermore, once extended, the new scheduler may be as inflexible as the original. The thesis of this dissertation is that extending a general-purpose operating system with a general, heterogeneous scheduling hierarchy is feasible and useful. A hierarchy of schedulers generalizes the role of CPU schedulers by allowing them to schedule other schedulers in addition to scheduling threads. A general, heterogeneous scheduling hierarchy is one that allows arbitrary (or nearly arbitrary) scheduling algorithms throughout the hierarchy. In contrast, most of the previous work on hierarchical scheduling has imposed restrictions on the schedulers used in part or all of the hierarchy. This dissertation describes the Hierarchical Loadable Scheduler (HLS) architecture, which permits schedulers to be dynamically composed in the kernel of a general-purpose operating system. The most important characteristics of HLS, and the ones that distinguish it from previous work, are that it has demonstrated that a hierarchy of nearly arbitrary schedulers can be efficiently implemented in a general-purpose operating system, and that the behavior of a hierarchy of soft real-time schedulers can be reasoned about in order to provide guaranteed scheduling behavior to application threads. The flexibility afforded by HLS permits scheduling behavior to be tailored to meet complex requirements without encumbering users who have modest requirements with the performance and administrative costs of a complex scheduler. Contributions of this dissertation include the following. (1) The design, prototype implementation, and performance evaluation of HLS in Windows 2000. (2) A system of guarantees for scheduler composition that permits reasoning about the scheduling behavior of a hierarchy of soft real-time schedulers. Guarantees assure users that application requirements can be met throughout the lifetime of the application, and also provide application developers with a model of CPU allocation to which they can program. (3) The design, implementation, and evaluation of two augmented CPU reservation schedulers, which provide increased scheduling predictability when low-level operating system activity steals time from applications.
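
    The notion that a scheduler can schedule other schedulers as well as threads can be sketched as a small composition of objects. The sketch below uses illustrative class names and a simple round-robin policy; it is not the HLS kernel implementation, which composes loadable schedulers dynamically inside Windows 2000.

    class Schedulable:
        """Anything the hierarchy can pick: a thread or another scheduler."""
        def pick(self):
            raise NotImplementedError

    class Thread(Schedulable):
        def __init__(self, name):
            self.name = name
        def pick(self):
            return self                # leaf of the hierarchy: this thread runs

    class RoundRobin(Schedulable):
        """A scheduler is itself schedulable, so schedulers can be nested freely."""
        def __init__(self, children):
            self.children = list(children)
            self.index = 0
        def pick(self):
            # Delegate the decision down the hierarchy until a thread is reached.
            child = self.children[self.index]
            self.index = (self.index + 1) % len(self.children)
            return child.pick()

    # Example hierarchy: a root scheduler over one thread and a nested group.
    root = RoundRobin([Thread("audio"), RoundRobin([Thread("batch-a"), Thread("batch-b")])])
    print(root.pick().name)   # audio
    print(root.pick().name)   # batch-a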

    Deployment and Debugging of Real-Time Applications on Multicore Architectures

    It is essential to enable information extraction from software. Program tracing techniques are an example of information extraction: they extract information from the program during execution. Tracing helps with the testing and validation of software to ensure that the software under test is correct. Information extraction is done by instrumenting the program. Logged information can be stored in dedicated logging memories or can be buffered and streamed off-chip to an external monitor. The designer inspects the trace after execution to identify potentially erroneous state information. In addition, the trace can provide the state information that serves as input to reproduce the erroneous output. Information extraction can be difficult and expensive due to the increase in size and complexity of modern software systems. For the sub-class of software systems known as real-time systems, these issues are further aggravated, because real-time systems demand timing guarantees in addition to functional correctness. Consequently, any instrumentation added to the original program code for the purpose of information extraction may affect the temporal behavior of the program. This perturbation of temporal behavior can lead to the violation of timing constraints, which may bias the program execution and/or cause the program to miss its deadline. As a result, there is considerable interest in devising techniques that allow information extraction without causing a program to miss its deadline, an approach known as time-aware instrumentation. This thesis investigates time-aware instrumentation mechanisms that instrument programs while respecting their timing constraints and functional behavior. Knowledge of the underlying hardware on which the software runs enables the extraction of more information via the instrumentation process. Chip-multiprocessors offer a solution to the performance bottleneck of uni-processors. Providing timing guarantees for hard real-time systems on chip-multiprocessors, however, is difficult, because conventional communication interconnects are designed to optimize average-case performance. Therefore, researchers have proposed interconnects such as priority-aware networks to satisfy the requirements of hard real-time systems. Priority-aware interconnects, however, lack the analysis techniques needed to facilitate the deployment of real-time systems. This thesis also investigates latency and buffer space analysis techniques for pipelined communication resource models, as well as algorithms for the proper deployment of real-time applications to these platforms. The analysis techniques proposed in this thesis provide guarantees on the schedulability of real-time systems on chip-multiprocessors. These guarantees are based on reducing contention in the interconnect while accurately computing the worst-case communication latencies. While these worst-case latencies provide bounds for computing the overall worst-case execution time of applications on chip-multiprocessors, they also provide a means of assigning the instrumentation budgets required by time-aware instrumentation. Leveraging these platform-specific analysis techniques for the assignment of instrumentation budgets allows more information to be extracted through the instrumentation process.
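
    A minimal sketch of the time-aware instrumentation idea is shown below: a trace point fires only if the remaining instrumentation budget, of the kind the worst-case analysis would provide, can absorb its cost. The class name, the budget accounting, and the per-event cost model are simplified assumptions, not the mechanism proposed in the thesis.

    import time

    class TimeAwareTracer:
        """Sketch of time-aware instrumentation: a trace point logs only if the
        remaining slack budget can absorb its assumed worst-case cost."""

        def __init__(self, budget_s, cost_per_event_s):
            self.budget = budget_s        # slack assigned to instrumentation for this job
            self.cost = cost_per_event_s  # assumed worst-case cost of one logging event

        def trace(self, log, event):
            # If the budget cannot cover another event, skip the trace point
            # rather than risk a deadline miss.
            if self.budget < self.cost:
                return False
            start = time.perf_counter()
            log.append(event)
            # Charge at least the worst-case cost so the accounting stays conservative.
            self.budget -= max(time.perf_counter() - start, self.cost)
            return True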