    Scheduling parallel jobs to minimize the makespan

    We consider the NP-hard problem of scheduling parallel jobs with release dates on identical parallel machines to minimize the makespan. A parallel job requires a prespecified, job-dependent number of machines simultaneously while being processed. We prove that the makespan of any nonpreemptive list schedule is within a factor of 2 of the optimal preemptive makespan. This gives the best-known approximation algorithms for both the preemptive and the nonpreemptive variant of the problem. We also show that no list-scheduling algorithm can achieve a performance guarantee better than 2 for the nonpreemptive problem, no matter which priority list is chosen. List scheduling also works in the online setting where jobs arrive over time and the length of a job becomes known only when it completes; it therefore yields a deterministic online algorithm with competitive ratio 2 as well. In addition, we consider a different online model in which jobs arrive one by one and must be scheduled before the next job becomes known. We show that no list-scheduling algorithm has a constant competitive ratio in this model. Still, we present the first online algorithm with a constant competitive ratio for scheduling parallel jobs in this context. We also prove a new information-theoretic lower bound of 2.25 on the competitive ratio of any deterministic online algorithm for this model. Moreover, we show that 6/5 is a lower bound on the competitive ratio of any deterministic online algorithm for the preemptive version of the model in which jobs arrive over time.
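
    As an illustration of the rule analysed above, here is a minimal sketch of nonpreemptive list scheduling for rigid parallel jobs (our own illustrative code, not the paper's implementation): jobs are (release date, length, machines needed) triples kept in priority-list order, and a job is started as soon as it has been released and enough machines are idle.

    ```python
    import heapq

    def list_schedule(jobs, M):
        """Makespan of a nonpreemptive list schedule of rigid parallel jobs.

        jobs: (release, length, machines_needed) triples in priority order;
              each job is assumed to need at most M machines.
        M:    number of identical parallel machines.
        """
        pending = list(jobs)            # stays in priority-list order
        running = []                    # min-heap of (finish_time, machines)
        free, t, makespan = M, 0.0, 0.0
        while pending or running:
            started = True
            while started:              # start every released job that fits,
                started = False         # always scanning in priority order
                for job in pending:
                    r, p, m = job
                    if r <= t and m <= free:
                        heapq.heappush(running, (t + p, m))
                        free -= m
                        makespan = max(makespan, t + p)
                        pending.remove(job)
                        started = True
                        break
            # jump to the next completion or release event
            events = [running[0][0]] if running else []
            events += [r for r, _, _ in pending if r > t]
            if not events:
                break
            t = min(events)
            while running and running[0][0] <= t:
                _, m = heapq.heappop(running)
                free += m               # machines become idle again
        return makespan
    ```

    For example, list_schedule([(0, 3, 2), (0, 2, 3), (1, 4, 1)], M=4) returns 5: the 3-machine job cannot start until the first job completes at time 3.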

    Approximation algorithms for the parallel flow shop problem

    We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to the parallel flow shops, and scheduling the jobs assigned to the same flow shop by means of Johnson's rule. For m = 2, we present a 3/2-approximation algorithm, and for m = 3, we present a 12/7-approximation algorithm. Both algorithms run in O(n log n) time. These are the first approximation algorithms with fixed worst-case performance guarantees for the parallel flow shop problem.
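
    The second subproblem is solved by Johnson's rule; the following is a textbook sketch of that subroutine for a single two-stage flow shop (illustrative code, not taken from the paper).

    ```python
    def johnson(jobs):
        """Johnson's rule: a makespan-optimal permutation for a
        two-machine flow shop, returned as a list of job indices.

        jobs: (a, b) processing times on stage 1 and stage 2."""
        front = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                       key=lambda i: jobs[i][0])               # short stage 1 first
        back = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                      key=lambda i: jobs[i][1], reverse=True)  # short stage 2 last
        return front + back

    def makespan(jobs, order):
        """Completion time of the last job under a given permutation."""
        t1 = t2 = 0
        for i in order:
            a, b = jobs[i]
            t1 += a                  # stage 1 never idles
            t2 = max(t2, t1) + b     # stage 2 waits for stage 1 to finish the job
        return t2
    ```

    For instance, johnson([(3, 6), (5, 2), (1, 2)]) returns the order [2, 0, 1], whose makespan is 12.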

    Machine Scheduling with Resource Dependent Processing Times

    We consider several parallel machine scheduling settings with the objective of minimizing the schedule makespan. The most general of these settings is unrelated parallel machine scheduling. We assume that, in addition to its machine dependence, the processing time of any job depends on the usage of a scarce renewable resource. A given amount of that resource, e.g. workers, can be distributed over the jobs in process at any time, and the more of that resource is allocated to a job, the smaller its processing time becomes. This model generalizes classical machine scheduling problems by adding a time-resource tradeoff. It is also a natural variant of a generalized assignment problem studied previously by Shmoys and Tardos. On the basis of integer programming formulations for relaxations of the respective problems, we use LP rounding techniques to allocate resources to jobs and to assign jobs to machines. Combined with Graham's list scheduling, we thus prove the existence of constant-factor approximation algorithms. Our performance guarantee is 6.83 for the most general case of unrelated parallel machine scheduling. We improve this bound for two special cases, namely to 5.83 whenever the jobs are assigned to machines beforehand, and to 5 + ε, for any ε > 0, whenever the processing times do not depend on the machine. Moreover, we discuss the tightness of the relaxations and derive inapproximability results.
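
    The list-scheduling component referred to above is Graham's classical rule; here is a minimal sketch for identical machines, assuming the LP rounding has already fixed the resource allocations and hence the processing times (illustrative code, not the paper's algorithm).

    ```python
    import heapq

    def graham_list_schedule(processing_times, m):
        """Graham's list scheduling on m identical machines: each job,
        in list order, goes to the machine that becomes free first.
        Returns the resulting makespan."""
        loads = [0.0] * m
        heapq.heapify(loads)
        for p in processing_times:
            least = heapq.heappop(loads)     # currently least-loaded machine
            heapq.heappush(loads, least + p)
        return max(loads)
    ```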

    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to model dynamic data without losing any of the temporal relationships, and to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of the relationships between variables, is proposed. In brief, the architecture transfers spatio-temporal data patterns from a multidimensional input stream into internal patterns in a spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the needs of the situation. The architecture was implemented in MATLAB© as several individual modules linked together to form NeuCube (M1). The methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis, and second, to ecological data for aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are significant for health care management and environmental control. Because the case studies represent vastly different application fields, they reveal the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the reliance that can be placed on the model created and enhancing our understanding of the complexities of the world around us without the need for oversimplification.
    Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction
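
    Purely to illustrate the reservoir idea, i.e. an input stream driving recurrent spiking dynamics whose spike patterns are then read out, here is a toy leaky integrate-and-fire sketch; it is not the NeuCube (M1) implementation, whose internals the abstract does not specify.

    ```python
    import numpy as np

    def lif_reservoir(inputs, n_neurons=100, leak=0.9, threshold=1.0, seed=0):
        """Toy leaky integrate-and-fire reservoir.

        inputs: array of shape (T, n_channels), one row per time step.
        Returns a (T, n_neurons) binary spike raster."""
        rng = np.random.default_rng(seed)
        w_in = rng.normal(scale=0.5, size=(inputs.shape[1], n_neurons))
        w_rec = rng.normal(scale=0.1, size=(n_neurons, n_neurons))
        v = np.zeros(n_neurons)                    # membrane potentials
        spikes = np.zeros((len(inputs), n_neurons))
        for t, x in enumerate(inputs):
            v = leak * v + x @ w_in + spikes[t - 1] @ w_rec
            fired = v >= threshold
            spikes[t] = fired
            v[fired] = 0.0                         # reset after a spike
        return spikes
    ```

    In a personalised-modelling pipeline, the raster computed for each individual's input stream would then be fed to a classifier or regressor tailored to that individual.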

    Efficient heuristics for the parallel blocking flow shop scheduling problem

    We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, under a blocking constraint. The criterion is to minimize the makespan, i.e., the maximum completion time of all the jobs across the F flow shops (lines). The parallel flow shop scheduling problem (PFSP) is conceptually similar to the distributed permutation flow shop scheduling problem (DPFSP), which models the scheduling process in companies with more than one factory, each factory having a flow shop configuration. The proposed methods can therefore solve the scheduling problem under the blocking constraint in both situations, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that differ substantially from those proposed in the literature. These methods are used as initial-solution procedures for an iterated local search (ILS) and an iterated greedy algorithm (IGA), both combined with a variable neighborhood search (VNS); a skeleton of the iterated greedy framework is sketched below. Both the constructive procedure and the improvement methods take the characteristics of the problem into account. The computational evaluation demonstrates that both of them, especially the IGA, perform considerably better than algorithms adapted from the DPFSP literature.
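
    A skeleton of the iterated greedy framework mentioned above (generic destruction and reconstruction; `cost` stands for the blocking-makespan evaluation of a job sequence, which is problem-specific and not reproduced here).

    ```python
    import random

    def iterated_greedy(seq, cost, d=3, iters=200, seed=0):
        """Generic iterated greedy: repeatedly remove d jobs at random
        and greedily reinsert each at its best position; assumes
        len(seq) > d.  The paper's IGA adds problem-specific components
        (VNS moves, acceptance criterion) not modelled in this sketch."""
        rng = random.Random(seed)
        cur = list(seq)
        best = list(seq)
        for _ in range(iters):
            partial = list(cur)
            removed = [partial.pop(rng.randrange(len(partial)))
                       for _ in range(d)]          # destruction
            for job in removed:                    # greedy reconstruction
                candidates = [partial[:i] + [job] + partial[i:]
                              for i in range(len(partial) + 1)]
                partial = min(candidates, key=cost)
            if cost(partial) <= cost(cur):         # simple acceptance rule
                cur = partial
            if cost(cur) < cost(best):
                best = list(cur)
        return best
    ```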

    Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan

    We consider energy-efficient scheduling on multiprocessors, where the speed of each processor can be individually scaled, and a processor consumes power s^α when running at speed s, for α > 1. A scheduling algorithm needs to decide at any time both processor allocations and processor speeds for a set of parallel jobs with time-varying parallelism. The objective is to minimize the sum of the total energy consumption and a performance metric, which in this paper is either total flow time or makespan. For both objectives, we present instantaneous-parallelism clairvoyant (IP-clairvoyant) algorithms that are aware of the instantaneous parallelism of the jobs at any time but not of their future characteristics, such as remaining parallelism and work. For total flow time plus energy, we present an O(1)-competitive algorithm, which significantly improves upon the best known non-clairvoyant algorithm and is the first constant-competitive result on multiprocessor speed scaling for parallel jobs. In the case of makespan plus energy, which is considered for the first time in the literature, we present an O(ln^{1-1/α} P)-competitive algorithm, where P is the total number of processors. We show that this algorithm is asymptotically optimal by providing a matching lower bound. In addition, we also study non-clairvoyant scheduling for total flow time plus energy, and present an algorithm that is O(ln P)-competitive for jobs with arbitrary release times and O(ln^{1/α} P)-competitive for jobs with identical release times. Finally, we prove an Ω(ln^{1/α} P) lower bound on the competitive ratio of any non-clairvoyant algorithm, matching the upper bound of our algorithm for jobs with identical release times.
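
    To make the power model concrete: running W units of work evenly on P processors at a common speed s takes W/(P*s) time and consumes P * s^α * W/(P*s) = W * s^(α-1) energy, so raising the speed shortens the schedule but pays super-linearly in energy. A toy cost evaluation (illustrative only, not the paper's algorithms):

    ```python
    def makespan_plus_energy(work, procs, speed, alpha=2.0):
        """Makespan-plus-energy cost of running `work` units evenly on
        `procs` processors at a common `speed`, under the power model
        speed**alpha per processor."""
        time = work / (procs * speed)           # makespan
        energy = procs * speed**alpha * time    # total energy
        return time + energy

    # For alpha = 2 the cost W/(P*s) + W*s is minimized at s = 1/sqrt(P):
    # the optimal speed falls as more processors become available.
    ```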

    Sensitivity Analysis of List Scheduling Heuristics

    When jobs have to be processed on a set of identical parallel machines so as to minimize the makespan of the schedule, list scheduling rules form a popular class of heuristics. The order in which jobs appear on the list is assumed here to be determined by the relative size of their processing times; well-known special cases are the LPT rule and the SPT rule, in which the jobs are ordered according to non-increasing and non-decreasing processing times, respectively. When all processing times are exactly known, a given list scheduling rule generates a unique assignment of jobs to machines. However, when there is a priori uncertainty about one of the processing times, there will in general be several possibilities for the assignment that is generated once that processing time becomes known. This number of possible assignments can be viewed as a measure of the sensitivity of the list scheduling rule applied. We derive bounds on the maximum number of possible assignments for several list scheduling heuristics, and we also study the makespan associated with these assignments. In this way we obtain analytical support for the intuitively plausible notion that the sensitivity of a list scheduling rule increases with the quality of the schedule it produces.
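
    A brute-force illustration of this sensitivity measure (hypothetical helper names; the paper derives analytical bounds rather than enumerating): fix one uncertain processing time, sweep it over a grid of candidate values, and count how many distinct LPT assignments arise.

    ```python
    import heapq

    def lpt_assignment(p, m):
        """Partition of jobs over m identical machines produced by the
        LPT rule (ties broken by machine index)."""
        loads = [(0.0, i) for i in range(m)]
        heapq.heapify(loads)
        groups = [[] for _ in range(m)]
        for j in sorted(range(len(p)), key=lambda j: -p[j]):   # LPT order
            load, i = heapq.heappop(loads)                     # earliest-free machine
            groups[i].append(j)
            heapq.heappush(loads, (load + p[j], i))
        return frozenset(frozenset(g) for g in groups)

    def count_assignments(p, m, j, values):
        """Number of distinct LPT assignments as job j's processing
        time ranges over the candidate `values`."""
        seen = set()
        for v in values:
            q = list(p)
            q[j] = v
            seen.add(lpt_assignment(q, m))
        return len(seen)
    ```

    For example, count_assignments([4, 3, 2, 2], m=2, j=0, values=range(1, 9)) returns 4: four different job partitions can occur depending on job 0's realised processing time.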