
    Scheduling Monotone Moldable Jobs in Linear Time

    A moldable job is a job that can be executed on an arbitrary number of processors, and whose processing time depends on the number of processors allotted to it. A moldable job is monotone if its work does not decrease when the number of allotted processors increases. We consider the problem of scheduling monotone moldable jobs to minimize the makespan. We argue that, for certain compact input encodings, a polynomial algorithm is one whose running time is polynomial in n and log(m), where n is the number of jobs and m is the number of machines. We describe how the monotonicity of the jobs can be used to counteract the increased problem complexity that arises from compact encodings, and give tight bounds on the approximability of the problem with compact encoding: it is NP-hard to solve optimally, but admits a PTAS. The main focus of this work is efficient approximation algorithms. We describe different techniques that exploit the monotonicity of the jobs for better running times, and present a (3/2+ε)-approximation algorithm whose running time is polynomial in log(m) and 1/ε, and only linear in the number n of jobs.
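    The abstract does not spell out the algorithm itself. As a loose illustration of a standard ingredient for monotone moldable jobs (a dual-approximation-style feasibility test, not the paper's (3/2+ε)-algorithm), the Python sketch below guesses a makespan T, allots each job the fewest processors on which it meets T, and rejects T when the resulting total work exceeds m·T. The processing-time table p and the instance are hypothetical.

        # Illustrative sketch only: a dual-approximation feasibility test for monotone
        # moldable jobs, NOT the (3/2+epsilon)-algorithm of the paper.
        # p[j][k-1] = processing time of job j on k processors; times are assumed
        # non-increasing in k while the work k*p[j][k-1] is non-decreasing (monotone jobs).

        def feasible(p, m, T):
            """Necessary condition for a schedule of makespan <= T to exist."""
            total_work = 0
            for times in p:
                # fewest processors on which this job finishes within T
                k = next((k for k, t in enumerate(times, start=1) if t <= T), None)
                if k is None or k > m:
                    return False                 # the job cannot meet T at all
                total_work += k * times[k - 1]   # minimal work needed to meet T
            return total_work <= m * T           # average-load (area) bound

        def makespan_lower_bound(p, m):
            """Binary search, over processing-time candidates, for the smallest T
            passing the test above (a lower bound, not a schedule)."""
            cand = sorted({t for times in p for t in times})
            lo, hi = 0, len(cand) - 1
            while lo < hi:
                mid = (lo + hi) // 2
                if feasible(p, m, cand[mid]):
                    hi = mid
                else:
                    lo = mid + 1
            return cand[lo]

        # hypothetical instance: 3 jobs, 1 to 3 processors each
        p = [[10, 6, 5], [8, 5, 4], [4, 3, 3]]
        print(makespan_lower_bound(p, m=3))      # prints 8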

    Throughput Maximization in the Speed-Scaling Setting

    We are given a set of n jobs and a single processor that can vary its speed dynamically. Each job J_j is characterized by its processing requirement (work) p_j, its release date r_j and its deadline d_j. We are also given a budget of energy E and we study the scheduling problem of maximizing the throughput (i.e. the number of jobs which are completed on time). We propose a dynamic programming algorithm that solves the preemptive case of the problem, i.e. when the execution of the jobs may be interrupted and resumed later, in pseudo-polynomial time. Our algorithm can be adapted for solving the weighted version of the problem where every job is associated with a weight w_j and the objective is the maximization of the sum of the weights of the jobs that are completed on time. Moreover, we provide a strongly polynomial time algorithm to solve the non-preemptive unweighted case when the jobs have the same processing requirements. For the weighted case, our algorithm can be adapted for solving the non-preemptive version of the problem in pseudo-polynomial time. Comment: submitted to SODA 201
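    As context for the energy budget E, the sketch below assumes the standard speed-scaling power model (power = speed^α for a constant α > 1) that this line of work builds on. It only illustrates the per-job minimum energy implied by convexity, plus a trivial greedy count for jobs with pairwise disjoint windows; it is not the paper's dynamic program, and the instance is invented.

        # Minimal sketch of the standard speed-scaling energy model assumed here
        # (power = speed**alpha, alpha > 1); this is NOT the paper's dynamic program.

        def min_energy_single_job(work, release, deadline, alpha=3.0):
            """Least energy to finish one job inside its window: by convexity, run at
            the slowest constant speed that just meets the deadline."""
            window = deadline - release
            speed = work / window
            return speed ** alpha * window       # = work**alpha / window**(alpha - 1)

        def jobs_within_budget(jobs, energy_budget, alpha=3.0):
            """Greedy illustration, valid only when windows are pairwise disjoint so
            jobs never compete for the processor: take the cheapest jobs first."""
            costs = sorted(min_energy_single_job(p, r, d, alpha) for (p, r, d) in jobs)
            spent = count = 0
            for c in costs:
                if spent + c > energy_budget:
                    break
                spent += c
                count += 1
            return count

        # hypothetical jobs (work, release, deadline) with disjoint windows
        jobs = [(2.0, 0.0, 2.0), (4.0, 2.0, 4.0), (1.0, 4.0, 8.0)]
        print(jobs_within_budget(jobs, energy_budget=10.0))   # prints 2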

    Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits

    In the stochastic knapsack problem, we are given a knapsack of size B, and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, we know the actual size and reward only when the job completes. How should we schedule jobs to maximize the expected total reward? We know O(1)-approximations when we assume that (i) rewards and sizes are independent random variables, and (ii) we cannot prematurely cancel jobs. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of B times to maximize the reward obtained. Much recent work on this problem focuses on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied. Indeed, we can show that previously proposed LP relaxations have large integrality gaps. We propose new time-indexed LP relaxations, and convert the fractional solutions into distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts.
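    To make the role of cancellation concrete, the following toy sketch (not the paper's time-indexed LP algorithm) computes the expected reward and the expected knapsack capacity consumed when a single job with a correlated size/reward profile is cancelled after a fixed cutoff; the size distribution and the flat reward function are hypothetical.

        # Toy sketch of correlated size/reward with premature cancellation; it is not
        # the paper's time-indexed LP. Reward is collected only on completion, so
        # cancelling after `cutoff` forfeits the reward whenever the size exceeds it.

        def cancellation_profile(size_dist, reward_of_size, cutoff):
            """size_dist: list of (size, probability); reward_of_size: size -> reward.
            Returns (expected reward, expected capacity consumed) under the cutoff."""
            exp_reward = sum(q * reward_of_size(s) for s, q in size_dist if s <= cutoff)
            exp_used = sum(q * min(s, cutoff) for s, q in size_dist)
            return exp_reward, exp_used

        # hypothetical correlated job: large realizations bring no extra reward
        size_dist = [(1, 0.5), (10, 0.5)]
        reward = lambda s: 1.0
        for cutoff in (1, 10):
            r, u = cancellation_profile(size_dist, reward, cutoff)
            print(f"cutoff={cutoff}: reward per unit of capacity = {r / u:.2f}")
        # cutoff=1 yields 0.50, running to completion (cutoff=10) only 0.18:
        # cancelling early is strictly better per unit of knapsack capacity here.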

    Throughput Maximization in Multiprocessor Speed-Scaling

    We are given a set of n jobs that have to be executed on a set of m speed-scalable machines that can vary their speeds dynamically using the energy model introduced in [Yao et al., FOCS'95]. Every job j is characterized by its release date r_j, its deadline d_j, its processing volume p_{i,j} if j is executed on machine i, and its weight w_j. We are also given a budget of energy E and our objective is to maximize the weighted throughput, i.e. the total weight of jobs that are completed between their respective release dates and deadlines. We propose a polynomial-time approximation algorithm where the preemption of the jobs is allowed but not their migration. Our algorithm uses a primal-dual approach on a linearized version of a convex program with linear constraints. Furthermore, we present two optimal algorithms for the non-preemptive case where the number of machines is bounded by a fixed constant. More specifically, we consider: (a) the case of identical processing volumes, i.e. p_{i,j} = p for every i and j, for which we present a polynomial-time algorithm for the unweighted version, which becomes a pseudopolynomial-time algorithm for the weighted throughput version, and (b) the case of agreeable instances, i.e. for which r_i ≤ r_j if and only if d_i ≤ d_j, for which we present a pseudopolynomial-time algorithm. Both algorithms are based on a discretization of the problem and the use of dynamic programming.
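    For case (b), the agreeable property is exactly the pairwise condition stated in the abstract; the small helper below checks it directly for a hypothetical list of (release, deadline) pairs. It is only a definition check, not part of the paper's dynamic program.

        # Direct check of the agreeable condition from the abstract:
        # r_i <= r_j if and only if d_i <= d_j, for every pair of jobs.

        def is_agreeable(jobs):
            """jobs: hypothetical list of (release, deadline) pairs."""
            return all((ri <= rj) == (di <= dj)
                       for i, (ri, di) in enumerate(jobs)
                       for rj, dj in jobs[i + 1:])

        print(is_agreeable([(0, 4), (1, 5), (3, 9)]))   # True: no nested windows
        print(is_agreeable([(0, 10), (1, 5)]))          # False: second window is nested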

    Some complexity and approximation results for coupled-tasks scheduling problem according to topology

    We consider the makespan minimization coupled-tasks problem in the presence of compatibility constraints with a specified topology. In particular, we focus on stretched coupled-tasks, i.e. coupled-tasks having the same sub-task execution time and idle time duration. We study several problems in the framework of classical complexity and approximation for which the compatibility graph is bipartite (star, chain, ...). In such a context, we design efficient polynomial-time approximation algorithms for an intractable scheduling problem according to some parameters.
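    As a toy illustration of why stretched coupled-tasks interact nicely with compatibility constraints, the sketch below compares scheduling two compatible stretched tasks of equal length sequentially versus interleaving them so that each fills the other's idle gap; the packing is hand-made and is not an algorithm from the paper.

        # Hand-made packing of two compatible stretched coupled-tasks of length a:
        # each task is a sub-task of length a, an idle gap of length a, then another
        # sub-task of length a. Not an algorithm from the paper, just the structure.

        def sequential_makespan(a):
            """One coupled-task completely before the other: 3a + 3a."""
            return 6 * a

        def interleaved_makespan(a):
            """A runs in [0,a] and [2a,3a]; B runs in [a,2a] and [3a,4a],
            so each task's idle gap is filled by the other's sub-task."""
            return 4 * a

        a = 2
        print(sequential_makespan(a))    # 12
        print(interleaved_makespan(a))   #  8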