
    Non-clairvoyant Scheduling Games

    In a scheduling game, each player owns a job and chooses a machine to execute it. While the social cost is the maximal load over all machines (the makespan), the cost (disutility) of each player is the completion time of its own job. Players follow selfish strategies to optimize their own cost, so their behavior does not necessarily lead the game to an equilibrium. Even when an equilibrium exists, its makespan might be much larger than the social optimum; this inefficiency is measured by the price of anarchy: the worst ratio between the makespan of an equilibrium and the optimum. Coordination mechanisms aim to reduce the price of anarchy by designing scheduling policies that specify how jobs assigned to the same machine are to be scheduled. Typically these policies define the schedule according to the processing times announced by the jobs. One may wonder whether there are policies that do not require this knowledge and still provide a good price of anarchy. This would allow processing times to remain private information and avoid the issue of truthfulness. In this paper we study these so-called non-clairvoyant policies. In particular, we study the RANDOM policy, which schedules the jobs in a random order without preemption, and the EQUI policy, which schedules the jobs in parallel using time-multiplexing, assigning each job an equal fraction of CPU time.
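    As an illustrative sketch (not taken from the paper), both non-clairvoyant policies can be simulated on a single machine: under RANDOM every other job precedes a given job with probability 1/2, while under EQUI (processor sharing) the job with the least remaining work always finishes next. The function names and the example instance below are hypothetical.

```python
# Illustrative sketch (not from the paper): completion times of n jobs on one
# machine under the two non-clairvoyant policies described above.
#   RANDOM: jobs run one after another in a uniformly random order.
#   EQUI:   all alive jobs run in parallel, each getting an equal CPU share.

from typing import List


def equi_completion_times(proc: List[float]) -> List[float]:
    """Processor-sharing (EQUI) completion times on a single machine."""
    order = sorted(range(len(proc)), key=lambda j: proc[j])
    completion = [0.0] * len(proc)
    t, finished_size, alive = 0.0, 0.0, len(proc)
    for j in order:
        # The job with the least remaining work finishes next; it still needs
        # (proc[j] - finished_size) units, shared with the other alive jobs.
        t += (proc[j] - finished_size) * alive
        completion[j] = t
        finished_size = proc[j]
        alive -= 1
    return completion


def random_expected_completion_times(proc: List[float]) -> List[float]:
    """Expected completion times under a uniformly random non-preemptive order:
    every other job precedes a given job with probability 1/2."""
    total = sum(proc)
    return [p + 0.5 * (total - p) for p in proc]


if __name__ == "__main__":
    jobs = [1.0, 2.0, 4.0]
    print(equi_completion_times(jobs))             # [3.0, 5.0, 7.0]
    print(random_expected_completion_times(jobs))  # [4.0, 4.5, 5.5]
```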

    Smooth Inequalities and Equilibrium Inefficiency in Scheduling Games

    We study coordination mechanisms for Scheduling Games (with unrelated machines). In these games, each job represents a player, who needs to choose a machine for its execution and wants to complete as early as possible. Our goal is to design scheduling policies that always admit a pure Nash equilibrium and guarantee a small price of anarchy for the l_k-norm social cost, an objective that balances overall quality of service and fairness. We consider policies with different amounts of knowledge about jobs: non-clairvoyant, strongly-local, and local. The analysis relies on the smoothness argument together with suitable inequalities, called smooth inequalities. With this unified framework, we are able to prove the following results. First, we study the inefficiency under l_k-norm social costs of a strongly-local policy, SPT, and a non-clairvoyant policy, EQUI. We show that the price of anarchy of SPT is O(k). We also prove a lower bound of Omega(k/log k) for all deterministic, non-preemptive, strongly-local and non-waiting policies (non-waiting policies produce schedules without idle times). These results show that SPT is close to optimal within this class with respect to l_k-norm social costs. Moreover, we prove that the non-clairvoyant policy EQUI has price of anarchy O(2^k). Second, we consider the makespan (l_infty-norm) social cost by making a connection to the l_k-norm functions. We revisit some local policies and provide simpler, unified proofs from the framework's point of view. As a highlight of the approach, we derive a local policy, Balance, which guarantees a price of anarchy of O(log m), making it the best policy currently known among anonymous local policies that always admit a pure Nash equilibrium. Comment: 25 pages, 1 figure
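    As a hedged illustration of the objects involved (not the paper's exact model), the sketch below fixes a job-to-machine assignment, computes each job's completion time under the strongly-local SPT policy (shortest processing time first on its machine), and evaluates an l_k-norm social cost over the machine loads. The function names and the small instance are hypothetical.

```python
# Hedged illustration (not the paper's exact model): SPT completion times for a
# fixed assignment, and an l_k-norm social cost taken over the machine loads.

from collections import defaultdict
from typing import Dict, List, Tuple


def spt_completion_times(assignment: List[Tuple[float, int]]) -> List[float]:
    """assignment[j] = (processing time of job j, machine chosen by job j)."""
    per_machine: Dict[int, List[Tuple[float, int]]] = defaultdict(list)
    for j, (p, m) in enumerate(assignment):
        per_machine[m].append((p, j))
    completion = [0.0] * len(assignment)
    for jobs in per_machine.values():
        t = 0.0
        for p, j in sorted(jobs):  # shortest processing time first on this machine
            t += p
            completion[j] = t
    return completion


def lk_norm_of_loads(assignment: List[Tuple[float, int]], k: int) -> float:
    """l_k-norm of the vector of machine loads induced by the assignment."""
    loads: Dict[int, float] = defaultdict(float)
    for p, m in assignment:
        loads[m] += p
    return sum(load ** k for load in loads.values()) ** (1.0 / k)


if __name__ == "__main__":
    # three jobs on two machines: (processing time, chosen machine)
    assign = [(2.0, 0), (1.0, 0), (3.0, 1)]
    print(spt_completion_times(assign))  # [3.0, 1.0, 3.0]
    print(lk_norm_of_loads(assign, 2))   # sqrt(3^2 + 3^2) ~ 4.24
```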

    SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors

    We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online non-clairvoyant setting. In this problem, a set of jobs J arrives over time to be scheduled on a set of M machines. Each job j has processing length p_j and weight w_j, and is processed at a rate of ℓ_{ij} when scheduled on machine i. The online scheduler knows the values of w_j and ℓ_{ij} upon arrival of the job, but is not aware of the quantity p_j. We present the first online algorithm that is scalable ((1+ε)-speed O(1/ε^2)-competitive for any constant ε > 0) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we show that no job needs more than a logarithmic number of migrations. We further extend our result to the objective of minimizing total weighted flow-time plus energy cost on unrelated machines and again obtain a scalable algorithm. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game in which each job's utility is closely tied to the instantaneous increase in the objective that the job is responsible for, and each machine declares a policy that assigns priorities and execution speeds to jobs based on when they migrate to it. This is similar in spirit to coordination mechanisms that attempt to achieve near-optimal welfare in the presence of selfish agents (jobs). To the best of our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.
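    The sketch below is a deliberately simplified best-response toy loop, not the paper's SELFISHMIGRATE algorithm: jobs repeatedly migrate to whichever machine currently offers them the lowest cost (here simply the load of the machine they would join) until no job can improve. All names and the instance are hypothetical, and the round cap stands in for a proper convergence argument.

```python
# Toy best-response loop (a simplification, *not* SELFISHMIGRATE itself):
# every job repeatedly moves to the machine that minimizes its own cost,
# taken here as the load of the machine it would join, until no job improves.

from typing import List


def best_response_assignment(proc_time: List[List[float]],
                             max_rounds: int = 1000) -> List[int]:
    """proc_time[i][j] = processing time of job j on (unrelated) machine i."""
    n_machines, n_jobs = len(proc_time), len(proc_time[0])
    machine_of = [0] * n_jobs                    # start with every job on machine 0
    load = [0.0] * n_machines
    load[0] = sum(proc_time[0])

    for _ in range(max_rounds):                  # round cap instead of a convergence proof
        changed = False
        for j in range(n_jobs):
            cur = machine_of[j]
            best_i, best_cost = cur, load[cur]   # cost of staying put
            for i in range(n_machines):
                if i == cur:
                    continue
                cost = load[i] + proc_time[i][j]  # cost after a selfish migration
                if cost < best_cost:
                    best_i, best_cost = i, cost
            if best_i != cur:
                load[cur] -= proc_time[cur][j]
                load[best_i] += proc_time[best_i][j]
                machine_of[j] = best_i
                changed = True
        if not changed:                          # no job wants to move: equilibrium
            break
    return machine_of


if __name__ == "__main__":
    # 2 machines x 3 jobs with unrelated processing times
    P = [[2.0, 3.0, 2.0],
         [4.0, 1.0, 2.0]]
    print(best_response_assignment(P))  # [1, 0, 0]
```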

    Games and Mechanism Design in Machine Scheduling – An Introduction

    In this paper, we survey different models, techniques, and some recent results to tackle machine scheduling problems within a distributed setting. In traditional optimization, a central authority is asked to solve a (computationally hard) optimization problem. In contrast, in distributed settings there are several agents, possibly equipped with private information that is not publicly known, and these agents need to interact in order to derive a solution to the problem. Usually the agents have individual preferences, which induce them to behave strategically in order to manipulate the resulting solution. Nevertheless, one is often interested in the global performance of such systems. The analysis of such distributed settings requires techniques from classical Optimization, Game Theory, and Economic Theory. The paper therefore briefly introduces the most important of the underlying concepts and gives a selection of typical research questions and recent results, focusing on applications to machine scheduling problems. This includes the study of the so-called price of anarchy for settings where the agents do not possess private information, as well as the design and analysis of (truthful) mechanisms in settings where the agents do possess private information.

    Spectrum Sharing in mmWave Cellular Networks via Cell Association, Coordination, and Beamforming

    This paper investigates the extent to which spectrum sharing in mmWave networks with multiple cellular operators is a viable alternative to traditional dedicated spectrum allocation. Specifically, we develop a general mathematical framework by which to characterize the performance gain that can be obtained when spectrum sharing is used, as a function of the underlying beamforming, operator coordination, bandwidth, and infrastructure sharing scenarios. The framework is based on joint beamforming and cell association optimization, with the objective of maximizing the long-term throughput of the users. Our asymptotic and non-asymptotic performance analyses reveal five key points: (1) spectrum sharing with light on-demand intra- and inter-operator coordination is feasible, especially at higher mmWave frequencies (for example, 73 GHz), (2) directional communications at the user equipment substantially alleviate the potential disadvantages of spectrum sharing (such as higher multiuser interference), (3) large numbers of antenna elements can reduce the need for coordination and simplify the implementation of spectrum sharing, (4) while inter-operator coordination can be neglected in the large-antenna regime, intra-operator coordination can still bring gains by balancing the network load, and (5) critical control signals among base stations, operators, and user equipment should be protected from the adverse effects of spectrum sharing, for example by means of exclusive resource allocation. The results of this paper, and their extensions obtained by relaxing some ideal assumptions, can provide important insights for future standardization and spectrum policy. Comment: 15 pages. To appear in IEEE JSAC Special Issue on Spectrum Sharing and Aggregation for Future Wireless Networks
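    As a minimal, assumption-heavy illustration (not the paper's framework), the sketch below contrasts a user's Shannon rate with dedicated spectrum against pooled spectrum: sharing doubles the available bandwidth but introduces inter-operator interference. All powers, bandwidths, and function names are hypothetical.

```python
# Minimal illustration (not the paper's framework) of the basic trade-off
# behind spectrum sharing: pooled bandwidth vs. added inter-operator
# interference.  All numbers below are hypothetical.

import math


def shannon_rate_bps(bandwidth_hz: float, signal_w: float, interference_w: float,
                     noise_density_w_per_hz: float = 4e-21) -> float:
    """Rate = B * log2(1 + S / (I + N)), with thermal noise N = N0 * B."""
    noise_w = noise_density_w_per_hz * bandwidth_hz
    return bandwidth_hz * math.log2(1.0 + signal_w / (interference_w + noise_w))


if __name__ == "__main__":
    B = 1e9           # 1 GHz licensed to each operator (hypothetical)
    S = 1e-9          # received signal power in watts (hypothetical)
    I_inter = 2e-11   # inter-operator interference seen when sharing (hypothetical)

    dedicated = shannon_rate_bps(B, S, 0.0)        # own band, no inter-operator interference
    shared = shannon_rate_bps(2 * B, S, I_inter)   # pooled band, extra interference
    print(f"dedicated: {dedicated / 1e9:.2f} Gb/s, shared: {shared / 1e9:.2f} Gb/s")
```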