    Parallel Transferable Uniform Multi-Round Algorithm for Minimizing Makespan

    In parallel computing systems that use the master/worker model for distributed grid computing, performance degrades as the volume of data grows, because data transmission times increase. For divisible workload applications, multiple-round scheduling algorithms have therefore been developed to mitigate the effect of long transmission times: the data is divided into chunks that are sent out in multiple rounds, overlapping computation with transmission. However, the standard multiple-round scheduling algorithm, Uniform Multi-Round (UMR), adopts a sequential transmission model in which the master communicates with one worker at a time, so the transmission capacity of the link attached to the master cannot be fully utilized due to the limits of worker-side capacity. In the present study, a Parallel Transferable Uniform Multi-Round algorithm (PTUMR) is proposed that efficiently utilizes the transmission capacity of network links by allowing chunks to be transmitted to workers in parallel. The algorithm divides workers into groups that fully use the link bandwidth of the master under certain constraints, and treats each group of workers as one virtual worker. In particular, a Grouping Threshold is introduced that effectively deals with workers that are highly heterogeneous in both data transmission and computation capacities. The master then schedules sequential data transmissions to the virtual workers in an optimal way, as in UMR. Performance evaluations show that, regardless of worker heterogeneity, the proposed algorithm achieves significantly shorter turnaround times (i.e., makespan) than UMR, close to the theoretical lower limits.
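    A minimal sketch of the grouping idea described above, assuming one-dimensional bandwidth and compute rates; the names (Worker, group_workers, grouping_threshold) and the greedy packing rule are illustrative, not the paper's exact formulation:

```python
from dataclasses import dataclass

@dataclass
class Worker:
    bandwidth: float  # capacity of the link between master and this worker
    speed: float      # computation rate of the worker

def group_workers(workers, master_bandwidth, grouping_threshold):
    """Greedily pack workers into groups whose combined link bandwidth
    stays within the master's outbound capacity. Workers whose links are
    slower than grouping_threshold times the fastest link are kept in
    singleton groups, loosely mirroring how PTUMR's Grouping Threshold
    isolates very heterogeneous workers."""
    fastest = max(w.bandwidth for w in workers)
    eligible = [w for w in workers if w.bandwidth >= grouping_threshold * fastest]
    slow = [w for w in workers if w.bandwidth < grouping_threshold * fastest]

    groups, current, used = [], [], 0.0
    for w in sorted(eligible, key=lambda w: w.bandwidth, reverse=True):
        if current and used + w.bandwidth > master_bandwidth:
            groups.append(current)   # group saturates the master link
            current, used = [], 0.0
        current.append(w)
        used += w.bandwidth
    if current:
        groups.append(current)
    groups.extend([w] for w in slow)  # slow links stay in singleton groups
    return groups

def as_virtual_worker(group):
    """Treat a group as one virtual worker with aggregated capacities."""
    return Worker(bandwidth=sum(w.bandwidth for w in group),
                  speed=sum(w.speed for w in group))
```

    Each returned group can then be scheduled as a single virtual worker by a UMR-style sequential multi-round dispatch.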

    Collocation Games and Their Application to Distributed Resource Management

    We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.

    Funding: NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS–Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas".
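    To make the game concrete, here is a toy best-response loop for a collocation-style game, under simplifying assumptions not taken from the paper: one-dimensional demands, identical fixed-cost resources, and proportional cost sharing among collocated players:

```python
def best_response_dynamics(demands, capacity, unit_cost, max_iters=1000):
    """Toy best-response loop for a collocation-style game.

    Assumptions (not from the paper): one-dimensional demands, every
    resource has the same capacity and fixed unit_cost, and players
    sharing a resource split its cost in proportion to their demands.
    Returns an assignment player -> resource index, or None if the
    dynamics do not settle within max_iters rounds."""
    n = len(demands)
    assign = list(range(n))  # start with every player on its own resource

    def load(r, exclude=None):
        return sum(demands[i] for i in range(n) if assign[i] == r and i != exclude)

    def cost(i, r):
        total = load(r, exclude=i) + demands[i]
        return unit_cost * demands[i] / total  # proportional cost share

    for _ in range(max_iters):
        improved = False
        for i in range(n):
            # candidate resources: any existing one with room, or a fresh one
            options = {r for r in set(assign)
                       if load(r, exclude=i) + demands[i] <= capacity}
            options.add(max(assign) + 1)  # opening a new resource is allowed
            best = min(options, key=lambda r: cost(i, r))
            if cost(i, best) < cost(i, assign[i]) - 1e-12:
                assign[i] = best
                improved = True
        if not improved:
            return assign  # no player can improve: a Nash equilibrium
    return None
```

    In the general setting such dynamics need not converge (the paper shows that deciding equilibrium existence is NP-hard), which is why the authors focus on restricted variants with provable convergence and price-of-anarchy bounds.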

    Cooperative Scheduling of Bag-of-Tasks Workflows on Hybrid Clouds

    We address the problem of scheduling a class of large-scale, real-world-inspired applications on hybrid Clouds, characterized by a large number of homogeneous and concurrent tasks that are the main source of bottlenecks but offer great potential for optimization. We formulate the scheduling problem as a new sequential cooperative game and propose a communication- and storage-aware multi-objective algorithm that optimizes two user objectives (execution time and economic cost) while fulfilling two constraints (network bandwidth and storage requirements). We present comprehensive experiments, using both simulation and real-world applications, that demonstrate the efficiency and effectiveness of our approach in terms of algorithm complexity, makespan, cost, system-level efficiency, fairness, and other aspects, compared with related algorithms.
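    As a small illustration of the time/cost trade-off such an algorithm navigates, the following sketch enumerates splits of a bag of identical tasks between a private cluster and a public cloud and returns the Pareto front; all parameter names are hypothetical, and the model is far simpler than the paper's (no bandwidth or storage constraints):

```python
import math

def split_bag(n_tasks, task_hours, private_slots, public_slots, price_per_hour):
    """Enumerate splits of a bag of identical tasks between a private
    cluster (no monetary cost) and a public cloud (pay per task-hour),
    returning the Pareto front over (makespan, cost)."""
    points = []
    for to_public in range(n_tasks + 1):
        to_private = n_tasks - to_public
        t_priv = math.ceil(to_private / private_slots) * task_hours if to_private else 0.0
        t_pub = math.ceil(to_public / public_slots) * task_hours if to_public else 0.0
        cost = to_public * task_hours * price_per_hour
        points.append((max(t_priv, t_pub), cost, to_public))
    # keep only non-dominated (makespan, cost) points
    points.sort()
    front, best_cost = [], float("inf")
    for makespan, cost, to_public in points:
        if cost < best_cost:
            front.append((makespan, cost, to_public))
            best_cost = cost
    return front
```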

    Useful Structures and How to Find Them: Inapproximability and Approximations for Diverse Variants of the Parallel Task Scheduling Problem

    In this thesis, we consider the Parallel Task Scheduling problem and several of its variants. This problem and its variants have diverse applications in theory and practice; for example, they appear as sub-problems in higher-dimensional problems. In the Parallel Task Scheduling problem, we are given a set of jobs and a set of identical machines. Each job is a parallel task; i.e., it needs a fixed number of identical machines to be processed. A schedule assigns to each job a set of machines on which it is processed and a starting time. It is feasible if, at each point in time, each machine processes at most one job. In a variant of this problem, called Strip Packing, the identical machines are arranged in a total order, and a job may only allocate machines that are neighboring with respect to this order; in this case, we also speak of Contiguous Parallel Task Scheduling. In another variant, called Single Resource Constraint Scheduling, we are given an additional constraint on how many jobs can be processed at the same time. For these variants of the Parallel Task Scheduling problem, we consider an extension in which the set of machines is grouped into identical clusters; when scheduling a job, we may allocate machines from only one cluster to process it.
    For all the considered problems, we close gaps between inapproximability or hardness results and the best possible algorithms. For Parallel Task Scheduling, we prove that it is strongly NP-hard when exactly 4 machines are given. Previously, it was known to be strongly NP-hard for at least 5 machines, while an (exact) pseudo-polynomial time algorithm existed for up to 3 machines. For Strip Packing, we present a pseudo-polynomial time algorithm with approximation ratio (5/4 + ε) and prove that no pseudo-polynomial time algorithm can achieve a ratio smaller than 5/4 unless P = NP. Concerning Single Resource Constraint Scheduling, no algorithm with ratio smaller than 3/2 is possible unless P = NP, and we present an algorithm with ratio (3/2 + ε). For the extensions to identical clusters, there can be no approximation algorithm with a ratio smaller than 2 unless P = NP. For the extensions of Strip Packing and Parallel Task Scheduling, 2-approximations were already known, but they have huge worst-case running times. We present 2-approximations with linear running time for the extensions of Strip Packing, Parallel Task Scheduling, and Single Resource Constraint Scheduling when at least three clusters are present, and we greatly improve the running time for two clusters. Finally, we consider three variants of Scheduling on Identical Machines with setup times and present an EPTAS for each of them, which is the best one can hope for since these problems are strongly NP-complete.
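    The feasibility condition from the abstract is easy to state in code: since the machines are identical, a schedule is feasible exactly when the running jobs never demand more machines than exist. A minimal check (which ignores the additional contiguity requirement of Strip Packing):

```python
def is_feasible(jobs, m):
    """Check feasibility of a Parallel Task Scheduling schedule.

    jobs is a list of (start, processing_time, machines_needed).
    The schedule is feasible if, at every point in time, the running
    jobs together use at most the m identical machines; it suffices to
    test the event points where some job starts or ends."""
    events = sorted({s for s, p, q in jobs} | {s + p for s, p, q in jobs})
    for t in events:
        in_use = sum(q for s, p, q in jobs if s <= t < s + p)
        if in_use > m:
            return False
    return True

# Example: 4 machines, jobs given as (start, length, machines needed)
print(is_feasible([(0, 3, 2), (0, 2, 2), (3, 2, 3)], m=4))  # True
print(is_feasible([(0, 3, 2), (1, 2, 3)], m=4))             # False
```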

    Resource Management In Cloud And Big Data Systems

    Cloud computing is a paradigm shift in computing, where services are offered and acquired on demand in a cost-effective way. These services are often virtualized, and they can handle the computing needs of big data analytics. The ever-growing demand for cloud services arises in many areas including healthcare, transportation, energy systems, and manufacturing. However, cloud resources such as computing power, storage, energy, dollars for infrastructure, and dollars for operations, are limited. Effective use of the existing resources raises several fundamental challenges that place cloud resource management at the heart of the cloud providers' decision-making process. One of these challenges faced by the cloud providers is to provision, allocate, and price the resources such that their profit is maximized and the resources are utilized efficiently. In addition, executing large-scale applications in clouds may require resources from several cloud providers. Another challenge when processing data intensive applications is minimizing their energy costs. Electricity used in US data centers in 2010 accounted for about 2% of total electricity used nationwide. In addition, the energy consumed by the data centers is growing at over 15% annually, and the energy costs make up about 42% of the data centers' operating costs. Therefore, it is critical for the data centers to minimize their energy consumption when offering services to customers. In this Ph.D. dissertation, we address these challenges by designing, developing, and analyzing mechanisms for resource management in cloud computing systems and data centers. The goal is to allocate resources efficiently while optimizing a global performance objective of the system (e.g., maximizing revenue, maximizing social welfare, or minimizing energy). We improve the state-of-the-art in both methodologies and applications. As for methodologies, we introduce novel resource management mechanisms based on mechanism design, approximation algorithms, cooperative game theory, and hedonic games. These mechanisms can be applied in cloud virtual machine (VM) allocation and pricing, cloud federation formation, and energy-efficient computing. In this dissertation, we outline our contributions and possible directions for future research in this field.
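    As a flavor of the mechanism-design methodology mentioned above, here is a standard greedy-with-critical-payments template for VM allocation; it is a generic sketch, not the dissertation's specific mechanism, and the (value, demand) bid model is an assumption:

```python
def greedy_vm_mechanism(bids, capacity):
    """Greedy allocation with critical-value payments.

    bids is a list of (value, demand) pairs, one per user requesting a
    VM bundle; capacity is the total available resource. Users are
    served in decreasing order of value density (value / demand); each
    winner pays the density of the best bidder it displaces, scaled by
    its own demand. Exact critical-value computation can be subtler for
    knapsack-style allocation; this is a simplified illustration."""
    def density(i):
        value, demand = bids[i]
        return value / demand

    order = sorted(range(len(bids)), key=density, reverse=True)

    def allocate(ranking):
        winners, used = set(), 0
        for i in ranking:
            if used + bids[i][1] <= capacity:
                winners.add(i)
                used += bids[i][1]
        return winners

    winners = allocate(order)
    payments = {}
    for i in winners:
        # rerun the allocation without i to find whom i displaces
        alt = allocate([j for j in order if j != i])
        displaced = [j for j in alt if j not in winners]
        payments[i] = max((density(j) for j in displaced), default=0.0) * bids[i][1]
    return winners, payments
```

    Greedy-by-density allocation is monotone (bidding higher never turns a winner into a loser), which is what makes charging each winner a critical value a truthful pricing rule.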

    Computational Aspects of Game Theory and Microeconomics

    The purpose of this thesis is to study algorithmic questions that arise in the context of game theory and microeconomics. In particular, we investigate the computational complexity of various economic solution concepts by using and advancing methodologies from the fields of combinatorial optimization and approximation algorithms. We first study the problem of allocating a set of indivisible goods to a set of agents, who express preferences over combinations of items through their utility functions. Several objectives have been considered in the economic literature in different contexts. In fair division theory, a desirable outcome is to minimize the envy or the envy-ratio between any pair of players. We use tools from the theory of linear and integer programming as well as combinatorics to derive new approximation algorithms and hardness results for various types of utility functions. A different objective, considered in the context of auctions, is to find an allocation that maximizes the social welfare, i.e., the total utility derived by the agents. We construct reductions from multi-prover proof systems to obtain inapproximability results, given standard assumptions for the utility functions of the agents. We then consider equilibrium concepts in games. We derive the first subexponential algorithm for computing approximate Nash equilibria in 2-player noncooperative games and extend our result to multi-player games. We further propose a second algorithm based on solving polynomial equations over the reals. Both algorithms improve the previously known upper bounds on the complexity of the problem. Finally, we study game theoretic models that have been introduced recently to address incentive issues in Internet routing. A polynomial time algorithm is obtained for computing equilibria in such games, i.e., routing schemes and payoff allocations from which no subset of agents has an incentive to deviate. Our algorithm is based on linear programming duality theory. We also obtain generalizations when the agents have nonlinear utility functions.

    Ph.D. Committee Chair: Lipton, Richard; Committee Members: Ding, Yan; Duke, Richard; Randall, Dana; Vazirani, Vijay
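    For instance, the envy-ratio objective from fair division can be computed directly for a given allocation under additive utilities; the function below is an illustrative definition, not an algorithm from the thesis:

```python
def envy_ratio(allocation, utility):
    """Maximum envy-ratio of an allocation under additive utilities.

    allocation maps each agent to its bundle (list of items);
    utility[a][g] is agent a's value for item g. The envy-ratio of a
    pair (a, b) is u_a(bundle_b) / u_a(bundle_a); an allocation is
    envy-free iff the maximum over all pairs is at most 1. Assumes
    every agent assigns strictly positive value to its own bundle."""
    def u(a, bundle):
        return sum(utility[a][g] for g in bundle)

    agents = list(allocation)
    return max(u(a, allocation[b]) / u(a, allocation[a])
               for a in agents for b in agents if a != b)

# Two agents, three items: agent 0 envies agent 1 by a factor of 4/3
alloc = {0: ["x"], 1: ["y", "z"]}
util = {0: {"x": 3, "y": 2, "z": 2}, 1: {"x": 1, "y": 3, "z": 3}}
print(envy_ratio(alloc, util))  # 1.33...
```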

    Computing with strategic agents

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 179-189). By Nicole Immorlica.

    This dissertation studies mechanism design for various combinatorial problems in the presence of strategic agents. A mechanism is an algorithm for allocating a resource among a group of participants, each of which has a privately-known value for any particular allocation. A mechanism is truthful if it is in each participant's best interest to reveal his private information truthfully, regardless of the strategies of the other participants. First, we explore a competitive auction framework for truthful mechanism design in the setting of multi-unit auctions, i.e., auctions that sell multiple identical copies of a good. In this framework, the goal is to design a truthful auction whose revenue approximates that of an omniscient auction for any set of bids. We focus on two natural settings: the limited-demand setting, where bidders desire at most a fixed number of copies, and the limited-budget setting, where bidders can spend at most a fixed amount of money. In the limited-demand setting, all prior auctions employed randomization in the computation of the allocation and prices. Randomization in truthful mechanism design is undesirable because, in arguing the truthfulness of the mechanism, we rely on an underlying assumption that the bidders trust the random coin flips of the auctioneer. Despite conjectures to the contrary, we design a technique to derandomize any multi-unit auction in the limited-demand case without losing much of the revenue guarantees. We then consider the limited-budget case and provide the first competitive auction for this setting, although our auction is randomized. Next, we consider abandoning truthfulness in order to improve the revenue properties of procurement auctions, i.e., auctions used to hire a team of agents to complete a task. We study first-price procurement auctions and their variants and argue that in certain settings the payment is never significantly more than, and sometimes much less than, that of truthful mechanisms. Then we consider the setting of cost-sharing auctions. In a cost-sharing auction, agents bid to receive some service, such as connectivity to the Internet. A subset of agents is then selected for service and charged prices that approximately recover the cost of serving them. We ask what can be achieved by cost-sharing auctions satisfying a strengthening of truthfulness called group-strategyproofness, which requires that even coalitions of agents have no incentive to report bids other than their true values in the absence of side-payments. For a particular class of such mechanisms, we develop a novel technique, based on the probabilistic method, for proving bounds on their revenue, and we use this technique to derive tight or nearly-tight bounds for several combinatorial optimization games. Our results are quite pessimistic, suggesting that for many problems group-strategyproofness is incompatible with revenue goals. Finally, we study centralized two-sided markets, i.e., markets that form a matching between participants based on preference lists. We consider mechanisms that output matchings that are stable with respect to the submitted preferences; a matching is stable if no two participants can jointly benefit by breaking away from the assigned matching to form a pair. For such mechanisms, we prove that, in a certain probabilistic setting, each participant's best strategy is truthfulness with high probability (assuming other participants are truthful as well), even though in such markets there are, in general, provably no truthful mechanisms.
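    As a concrete baseline for the multi-unit setting studied in the thesis, the classic (k+1)-st price auction is truthful when bidders have unit demand; the thesis's competitive auctions are considerably more involved, so this sketch only illustrates the notion of truthfulness:

```python
def vickrey_multi_unit(bids, k):
    """(k+1)-st price auction for k identical copies, unit demand.

    The k highest bidders win and each pays the (k+1)-st highest bid,
    so no bidder can gain by misreporting its value: a winner's price
    is independent of its own bid, and bidding above or below one's
    value only risks winning at a loss or losing a profitable copy."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = order[:k]
    price = bids[order[k]] if len(bids) > k else 0.0
    return winners, price

winners, price = vickrey_multi_unit([10, 4, 7, 2, 9], k=3)
print(winners, price)  # bidders 0, 4, 2 win and each pays 4
```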

    Proactive-reactive, robust scheduling and capacity planning of deconstruction projects under uncertainty

    A project planning and decision support model is developed and applied to identify and reduce risk and uncertainty in deconstruction project planning. It calculates building inventories from sensor information and construction standards, and it computes robust project plans for different scenarios with multiple modes, constrained renewable resources, and locations. A reactive, flexible planning element is proposed for cases of schedule infeasibility during project execution.
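    A serial schedule generation scheme is a common building block for such proactive plans, and re-running it with updated durations is one simple reactive repair; the sketch below assumes a single renewable resource, integer durations, and a precedence-feasible task order, which is much simpler than the paper's multi-mode, multi-location model:

```python
def serial_schedule(tasks, precedence, capacity):
    """Serial schedule generation for a resource-constrained project.

    tasks maps task -> (duration, resource_demand); precedence maps
    task -> list of predecessor tasks; one renewable resource with a
    constant capacity. Tasks are assumed to be listed in a
    precedence-feasible order. Returns task -> start time."""
    start, finish = {}, {}
    for t in tasks:
        dur, demand = tasks[t]
        s = max((finish[p] for p in precedence.get(t, [])), default=0)
        while True:
            # check resource usage at every instant of the candidate window
            fits = all(
                sum(d for u, (du, d) in tasks.items()
                    if u in start and start[u] <= time < finish[u]) + demand <= capacity
                for time in range(s, s + dur))
            if fits:
                break
            s += 1  # right-shift until the window fits
        start[t], finish[t] = s, s + dur
    return start
```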