11 research outputs found

    SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors

    Full text link
    We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online \emph{non-clairvoyant} setting. In this problem, a set of jobs $J$ arrive over time to be scheduled on a set of $M$ machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job, but is not aware of the quantity $p_j$. We present the {\em first} online algorithm that is {\em scalable} ($(1+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for any constant $\epsilon > 0$) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we show that no job needs more than a logarithmic number of migrations. We further extend our result to the objective of minimizing total weighted flow-time plus energy cost on unrelated machines and again obtain a scalable algorithm. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game in which each job's utility is closely tied to the instantaneous increase in the objective that the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it and on the execution speeds. This has a spirit similar to coordination mechanisms, which attempt to achieve near-optimum welfare in the presence of selfish agents (jobs). To the best of our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.
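    The selfish-migration idea sketched in this abstract can be pictured with a toy best-response simulation. The sketch below is not the SELFISHMIGRATE algorithm or its analysis: the marginal-cost function (a weighted-delay proxy for the instantaneous increase in the objective), the round limit, and all names and numbers are assumptions made purely for illustration.

    # Toy best-response ("selfish migration") dynamics for assigning jobs to
    # machines.  Illustrative only: the cost proxy below stands in for the
    # instantaneous increase in weighted flow-time a job is responsible for.
    import random

    def marginal_cost(j, queued, weights, rate):
        """Job j's proxy cost for joining a machine that processes it at `rate`:
        its own weighted delay behind the jobs already queued there, plus the
        delay it inflicts on them (its externality)."""
        own_delay = weights[j] * len(queued) / rate          # j waits behind earlier migrants
        inflicted = sum(weights[k] for k in queued) / rate   # earlier jobs slowed down by j
        return own_delay + inflicted

    def selfish_migrate(jobs, machines, weights, rates, rounds=100):
        """Let each job repeatedly move to its cheapest machine until no job
        benefits from moving, i.e., a pure Nash equilibrium of the toy game."""
        assign = {j: random.choice(machines) for j in jobs}
        for _ in range(rounds):
            moved = False
            for j in jobs:
                queued = {i: [k for k in jobs if k != j and assign[k] == i] for i in machines}
                best = min(machines, key=lambda i: marginal_cost(j, queued[i], weights, rates[i][j]))
                if best != assign[j]:
                    assign[j], moved = best, True
            if not moved:   # no job wants to migrate any more
                break
        return assign

    if __name__ == "__main__":
        jobs, machines = [0, 1, 2, 3], ["m0", "m1"]
        weights = {0: 2.0, 1: 1.0, 2: 1.0, 3: 3.0}
        rates = {"m0": {0: 1.0, 1: 0.5, 2: 1.0, 3: 2.0},
                 "m1": {0: 0.5, 1: 1.0, 2: 1.0, 3: 1.0}}
        print(selfish_migrate(jobs, machines, weights, rates))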

    Provably Efficient Adaptive Scheduling for Parallel Jobs

    Get PDF
    Scheduling competing jobs on multiprocessors has always been an important issue for parallel and distributed systems. The challenge is to ensure global, system-wide efficiency while offering a level of fairness to user jobs. Various degrees of success have been achieved over the years. However, few existing schemes address both efficiency and fairness over a wide range of workloads. Moreover, in order to obtain analytical results, most of them require prior information about jobs, which may be difficult to obtain in real applications. This paper presents two novel adaptive scheduling algorithms -- GRAD for centralized scheduling, and WRAD for distributed scheduling. Both GRAD and WRAD ensure fair allocation under all levels of workload, and they offer provable efficiency without requiring prior information about jobs' parallelism. Moreover, they provide effective control over the scheduling overhead and ensure efficient utilization of processors. To the best of our knowledge, they are the first non-clairvoyant scheduling algorithms that offer such guarantees. We also believe that our new approach of a resource request-allotment protocol deserves further exploration. Specifically, both GRAD and WRAD are O(1)-competitive with respect to mean response time for batched jobs, and O(1)-competitive with respect to makespan for non-batched jobs with arbitrary release times. The simulation results show that, for non-batched jobs, the makespan produced by GRAD is no more than 1.39 times the optimal on average and never exceeds 4.5 times. For batched jobs, the mean response time produced by GRAD is no more than 2.37 times the optimal on average, and it never exceeds 5.5 times.
    Singapore-MIT Alliance (SMA)
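    The request-allotment protocol mentioned above can be pictured as a simple feedback loop between jobs and the allocator, sketched below. This is only a sketch of the general mechanism, not the published GRAD/WRAD rules: the doubling/halving desire adjustment, the efficiency threshold DELTA, and the quantum length are illustrative assumptions.

    # Toy request-allotment feedback loop in the spirit of the adaptive
    # schedulers described above.  Each quantum a job reports a processor
    # "desire", the allocator grants allotments fairly, and the job adjusts its
    # desire based on how efficiently it used the previous allotment.
    DELTA = 0.8      # efficiency threshold (assumption)
    QUANTUM = 1000   # work units one processor can do per quantum (assumption)

    def adjust_desire(desire, work_done, allotted):
        """Multiplicative feedback: shrink the desire after an inefficient
        quantum, grow it after an efficient one."""
        if allotted and work_done < DELTA * allotted * QUANTUM:
            return max(1, desire // 2)   # processors sat idle: ask for fewer
        return desire * 2                # kept processors busy: ask for more

    def allocate(desires, total_procs):
        """Fair allotment: hand out processors one at a time, always to the job
        with the smallest current allotment that still desires more."""
        allot = {j: 0 for j in desires}
        for _ in range(total_procs):
            hungry = [j for j in desires if allot[j] < desires[j]]
            if not hungry:
                break
            allot[min(hungry, key=lambda j: allot[j])] += 1
        return allot

    # Example: three jobs with current desires, 8 processors to share.
    print(allocate({"A": 5, "B": 2, "C": 6}, 8))   # -> {'A': 3, 'B': 2, 'C': 3}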

    A general framework for energy-efficient cloud computing mechanisms

    Get PDF
    We present a general model for the operation of a cloud computing server comprised of one or more speed-scalable processors. Typically, tasks are submitted to such a cloud computing server in an online fashion, and the server operator has to schedule the tasks and decide on payments without knowledge of the tasks arriving in the future. Although very natural, this cloud computing problem on speed-scalable processors has not been studied from a mechanism design perspective in the online setting. We provide a mechanism for this setting, in both a single- and a multiprocessor environment, that has several desirable properties: (1) the induced game admits a subgame perfect equilibrium in pure strategies and therefore a pure Nash equilibrium, (2) the Price of Anarchy is constant, (3) the mechanism is budget balanced, i.e., the sum of the payments of the agents equals the total energy cost, (4) the communication complexity is low, (5) the mechanism is computationally tractable for both the service operator and the agents, and (6) the agents' payments are intuitive and easy to communicate to them. We also provide a second mechanism with a better Price of Anarchy, which is, however, more involved to implement. We are able to extend our mechanisms and results to the Bayesian setting, where the type of each agent is drawn independently from some underlying distribution and agents minimize their expected costs. In this setting, we show that our mechanism achieves the same approximation factor as in the basic online setting, in both the single- and the multiprocessor environment.
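    Property (3), budget balance, is easy to illustrate under the standard speed-scaling convention that running at speed s draws power s^alpha (a convention this abstract does not spell out). The sketch below uses an illustrative cost-sharing rule, each agent paying the energy consumed while its own task runs; it is not the paper's payment scheme.

    # Illustrative check of budget balance for a speed-scalable processor: the
    # sum of the agents' payments equals the total energy cost.  The power model
    # P(s) = s**ALPHA and the "pay your own task's energy" rule are assumptions
    # for this sketch only.
    ALPHA = 3.0  # typical speed-scaling exponent (assumption)

    def energy(work, speed):
        """Energy to process `work` units at constant `speed`: time * power."""
        return (work / speed) * speed ** ALPHA   # = work * speed**(ALPHA - 1)

    def payments(tasks):
        """tasks: list of (agent, work, speed) triples, run one after another.
        Each agent pays exactly the energy consumed while its task runs."""
        return {agent: energy(work, speed) for agent, work, speed in tasks}

    tasks = [("a", 4.0, 2.0), ("b", 1.0, 1.0), ("c", 9.0, 3.0)]
    pay = payments(tasks)
    total_energy = sum(energy(w, s) for _, w, s in tasks)
    assert abs(sum(pay.values()) - total_energy) < 1e-9   # budget balanced
    print(pay, total_energy)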

    Advances and Technologies in High Voltage Power Systems Operation, Control, Protection and Security

    Get PDF
    The electrical demands in several countries around the world are increasing due to the huge energy requirements of prosperous economies and the human activities of modern life. In order to transfer electrical power economically from the generation side to the demand side, it must be transmitted at high-voltage levels through suitable transmission systems and power substations. To this end, high-voltage transmission systems and power substations are in demand. Indeed, they are at the heart of interconnected power systems, in which any fault might lead to severe consequences, abnormal operating conditions, security issues, and even power cuts and blackouts. In order to cope with the ever-increasing operation and control complexity and the security requirements of interconnected high-voltage power systems, new architectures, concepts, algorithms, and procedures are essential. This book aims to encourage researchers to address the technical issues and research gaps in high-voltage transmission systems and power substations in modern energy systems.

    Parallel Real-Time Scheduling for Latency-Critical Applications

    Get PDF
    In order to provide safety guarantees or quality of service guarantees, many of today's systems consist of latency-critical applications, e.g. applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and different objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide the increasingly complex functionalities of applications and to complete their higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs and can potentially utilize more than one core in parallel. However, there is little work considering scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research is focused on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution. For static real-time systems for parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic and a novel federated scheduling, are proposed, analyzed and implemented. These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency and minimizing the average latency. For example, a simple First-In-First-Out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
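    The federated scheduling mentioned above is commonly described by a simple core-assignment rule: each high-utilization parallel task gets enough dedicated cores to meet its deadline, and the remaining tasks are scheduled sequentially on the leftover cores. The sketch below implements that rule with the widely cited bound n_i = ceil((C_i - L_i) / (D_i - L_i)); the variable names and example numbers are ours, and this is an illustration of the idea rather than the dissertation's exact algorithm.

    # Sketch of the core-assignment step in federated scheduling of parallel
    # real-time tasks with implicit deadlines.  High-utilization tasks (work
    # C > deadline D) get dedicated cores; the rest share the leftover cores.
    import math

    def federated_assignment(tasks, total_cores):
        """tasks: list of (work C, critical_path L, deadline D).
        Returns (dedicated core counts, cores left for low-utilization tasks),
        or None if the task set is clearly not schedulable this way."""
        dedicated = []
        for C, L, D in tasks:
            if L > D:                   # critical path alone misses the deadline
                return None
            if C > D:                   # high-utilization task: needs > 1 core
                n = math.ceil((C - L) / (D - L))
            else:                       # low-utilization task: handled later
                n = 0
            dedicated.append(n)
        remaining = total_cores - sum(dedicated)
        if remaining < 0:
            return None
        return dedicated, remaining

    # Example: (C, L, D) triples on a 16-core machine.
    print(federated_assignment([(48, 8, 12), (30, 5, 10), (6, 2, 9)], 16))  # ([10, 5, 0], 1)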

    Speed Scaling for Energy Aware Processor Scheduling: Algorithms and Analysis

    Get PDF
    We present theoretical algorithmic research on processor scheduling in an energy-aware environment using the mechanism of speed scaling. We have two main goals in mind. The first is the development of algorithms that allow more energy-efficient utilization of resources. The second goal is to further our ability to reason abstractly about energy in computing devices by developing and understanding algorithmic models of energy management. In order to achieve these goals, we investigate three classic process scheduling problems in the setting of a speed-scalable processor. Integer stretch is one of the most obvious classical scheduling objectives that has yet to be considered in the speed scaling setting. For the objective of integer stretch plus energy, we give an online scheduling algorithm that, for any input, produces a schedule whose integer stretch plus energy is competitive with the integer stretch plus energy of any schedule that finishes all jobs. Second, we consider the problem of finding the schedule, S, that minimizes some quality of service objective Q plus B times the energy used by the processor. This schedule, S, is the optimal energy trade-off schedule in the sense that no schedule can have better quality of service given the current investment of energy used by S, and an additional investment of one unit of energy is insufficient to improve the quality of service by more than B. When Q is fractional weighted flow, we show that the optimal energy trade-off schedule is unique and has a simple structure, thus making it easy to check the optimality of a schedule. We further show that the optimal energy trade-off schedule can be computed with a natural homotopic optimization algorithm. Lastly, we consider the speed scaling problem where the quality of service objective is deadline feasibility and the power objective is temperature. In the case of batched jobs, we give a simple algorithm to compute the optimal schedule. For general instances, we give a new online algorithm and show that it has a competitive ratio that is an order of magnitude better than the best previously known for this problem.
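    To make the trade-off parameter B concrete, assume the common power model P(s) = s^alpha (an assumption; the abstract does not fix a power function). For a single job of size p run at constant speed s, fractional flow plus B times energy is p/s + B*p*s^(alpha-1), which is minimized at s* = (B*(alpha-1))^(-1/alpha). The sketch below checks this closed form numerically.

    # Single-job illustration of the "quality of service plus B times energy"
    # trade-off under the assumed power model P(s) = s**ALPHA.  Flow is p/s,
    # energy is p*s**(ALPHA-1), and the optimum speed has a closed form.
    ALPHA, B, P = 3.0, 0.5, 10.0   # illustrative exponent, trade-off weight, job size

    def objective(s):
        return P / s + B * P * s ** (ALPHA - 1)

    s_star = (B * (ALPHA - 1)) ** (-1.0 / ALPHA)   # closed-form minimizer

    # Crude numerical check: s_star should beat a fine grid of nearby speeds.
    grid = [0.01 * k for k in range(1, 1000)]
    assert objective(s_star) <= min(objective(s) for s in grid) + 1e-9
    print(f"optimal speed ~ {s_star:.3f}, objective ~ {objective(s_star):.3f}")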

    Enabling Hyperscale Web Services

    Full text link
    Modern web services such as social media, online messaging, web search, video streaming, and online banking often support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. In fact, the world continues to expect hyperscale computing to drive more futuristic applications such as virtual reality, self-driving cars, conversational AI, and the Internet of Things. This dissertation presents technologies that will enable tomorrow’s web services to meet the world’s expectations. The key challenge in enabling hyperscale web services arises from two important trends. First, over the past few years, there has been a radical shift in hyperscale computing due to an unprecedented growth in data, users, and web service software functionality. Second, modern hardware can no longer support this growth in hyperscale trends due to a decline in hardware performance scaling. To enable this new hyperscale era, hardware architects must become more aware of hyperscale software needs and software researchers can no longer expect unlimited hardware performance scaling. In short, systems researchers can no longer follow the traditional approach of building each layer of the systems stack separately. Instead, they must rethink the synergy between the software and hardware worlds from the ground up. This dissertation establishes such a synergy to enable futuristic hyperscale web services. This dissertation bridges the software and hardware worlds, demonstrating the importance of that bridge in realizing efficient hyperscale web services via solutions that span the systems stack. The specific goal is to design software that is aware of new hardware constraints and architect hardware that efficiently supports new hyperscale software requirements. This dissertation spans two broad thrusts: (1) a software and (2) a hardware thrust to analyze the complex hyperscale design space and use insights from these analyses to design efficient cross-stack solutions for hyperscale computation. In the software thrust, this dissertation contributes uSuite, the first open-source benchmark suite of web services built with a new hyperscale software paradigm, that is used in academia and industry to study hyperscale behaviors. Next, this dissertation uses uSuite to study software threading implications in light of today’s hardware reality, identifying new insights in the age-old research area of software threading. Driven by these insights, this dissertation demonstrates how threading models must be redesigned at hyperscale by presenting an automated approach and tool, uTune, that makes intelligent run-time threading decisions. In the hardware thrust, this dissertation architects both commodity and custom hardware to efficiently support hyperscale software requirements. First, this dissertation characterizes commodity hardware’s shortcomings, revealing insights that influenced commercial CPU designs. Based on these insights, this dissertation presents an approach and tool, SoftSKU, that enables cheap commodity hardware to efficiently support new hyperscale software paradigms, improving the efficiency of real-world web services that serve billions of users, saving millions of dollars, and meaningfully reducing the global carbon footprint. This dissertation also presents a hardware-software co-design, uNotify, that redesigns commodity hardware with minimal modifications by using existing hardware mechanisms more intelligently to overcome new hyperscale overheads. 
Next, this dissertation characterizes how custom hardware must be designed at hyperscale, resulting in industry-academia benchmarking efforts, commercial hardware changes, and improved software development. Based on this characterization's insights, this dissertation presents Accelerometer, an analytical model that estimates gains from hardware customization. Multiple hyperscale enterprises and hardware vendors use Accelerometer to make well-informed hardware decisions.
    PHD; Computer Science & Engineering; University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/169802/1/akshitha_1.pd

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    Get PDF
    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    Get PDF
    LIPIcs, Volume 248, ISAAC 2022, Complete Volume