
    Single-machine scheduling with stepwise tardiness costs and release times

    We study a scheduling problem that belongs to the yard operations component of railroad planning, namely the hump sequencing problem. The problem is characterized as a single-machine scheduling problem with stepwise tardiness cost objectives, a new scheduling criterion that is also relevant in the context of traditional machine scheduling. We produce complexity results that characterize some cases of the problem as pseudo-polynomially solvable. For the difficult-to-solve cases, we develop mathematical programming formulations and propose heuristic algorithms. We test the formulations and heuristics on randomly generated single-machine scheduling instances and on real-life datasets for the hump sequencing problem. Our experiments show promising results for both sets of problems.
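For concreteness, here is a minimal sketch of the stepwise tardiness criterion: each job carries a ladder of (deadline, penalty) steps and incurs every penalty whose deadline its completion time exceeds. The cost structure and instance below are illustrative assumptions; the paper's exact definition may differ.

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: int                 # release time
    proc: int                    # processing time
    steps: list                  # (deadline, penalty) pairs, deadlines increasing

def stepwise_cost(sequence):
    """Total stepwise tardiness cost of a nonpreemptive single-machine sequence."""
    t, cost = 0, 0
    for job in sequence:
        t = max(t, job.release) + job.proc           # wait for release, then process
        cost += sum(pen for d, pen in job.steps if t > d)
    return cost

jobs = [Job(0, 3, [(2, 10), (8, 20)]), Job(2, 2, [(4, 5)])]
print(stepwise_cost(jobs))       # -> 15: job 1 passes its first step, job 2 its only step
```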

    The Lazy Bureaucrat Scheduling Problem

    We introduce a new class of scheduling problems in which the optimization is performed by the worker (a single "machine") who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is "lazy"), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise in both the preemptive and nonpreemptive settings. The resulting class of "perverse" scheduling problems, which we denote "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules. (19 pages, 2 figures, LaTeX. To appear in Information and Computation.)
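As a toy illustration of the busy constraint, the brute force below assumes the nonpreemptive variant with all jobs available at time 0 and hard deadlines (one of several settings the paper treats): the worker may stop only when no remaining job can still meet its deadline, and wants to minimize total time worked.

```python
from itertools import permutations

def lazy_min_work(jobs):
    """jobs: list of (processing_time, deadline), all available at time 0."""
    best = float("inf")
    for order in permutations(range(len(jobs))):
        t, done = 0, set()
        for j in order:
            remaining = [k for k in range(len(jobs)) if k not in done]
            # the busy rule: stopping is allowed only once every remaining
            # job would already miss its deadline
            if all(t + jobs[k][0] > jobs[k][1] for k in remaining):
                break
            p, d = jobs[j]
            if t + p <= d:                 # execute j if it can still finish in time
                t += p
                done.add(j)
        best = min(best, t)
    return best

# doing the 1-unit job first makes the 10-unit job infeasible: total work = 1
print(lazy_min_work([(1, 1), (10, 10)]))   # -> 1
```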

    Lift-and-Round to Improve Weighted Completion Time on Unrelated Machines

    We consider the problem of scheduling jobs on unrelated machines so as to minimize the sum of weighted completion times. Our main result is a (3/2 - c)-approximation algorithm for some fixed c > 0, improving upon the long-standing bound of 3/2 (independently due to Skutella, Journal of the ACM, 2001, and Sethuraman & Squillante, SODA, 1999). To do this, we first introduce a new lift-and-project based SDP relaxation for the problem. This is necessary, as the previous convex programming relaxations have an integrality gap of 3/2. Second, we give a new general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties. (21 pages, 4 figures.)
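For orientation, the sketch below evaluates the objective for a fixed assignment using Smith's rule: on each machine, sequencing jobs by nonincreasing w_j/p_ij minimizes that machine's weighted completion time, so the algorithmic difficulty, which the SDP relaxation addresses, lies entirely in choosing the assignment. The instance is illustrative.

```python
def weighted_completion_time(assignment, w, p):
    """assignment[i]: jobs placed on machine i; p[i][j]: job j's time on machine i."""
    total = 0
    for i, jobs in enumerate(assignment):
        t = 0
        # Smith's rule: on each machine, sequence by nonincreasing w_j / p_ij
        for j in sorted(jobs, key=lambda j: p[i][j] / w[j]):
            t += p[i][j]
            total += w[j] * t
    return total

w = [3, 1, 2]
p = [[2, 4, 1],                  # processing times on machine 0
     [3, 1, 2]]                  # processing times on machine 1
print(weighted_completion_time([[0, 2], [1]], w, p))   # -> 12
```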

    Probabilistic alternatives for competitive analysis

    In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Several approaches have therefore been proposed to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives to competitive analysis.
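As a reminder of the yardstick being criticized, the classic ski-rental problem illustrates the worst-case competitive ratio: renting costs 1 per day, buying costs B, and the break-even strategy is (2 - 1/B)-competitive against every input.

```python
def alg_cost(n, B):              # online break-even strategy: rent B-1 days, then buy
    return n if n < B else (B - 1) + B

def opt_cost(n, B):              # offline optimum knows the season length n
    return min(n, B)

B = 10
ratio = max(alg_cost(n, B) / opt_cost(n, B) for n in range(1, 100))
print(ratio)                     # -> 1.9, i.e. 2 - 1/B
```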

    Scheduling MapReduce Jobs under Multi-Round Precedences

    We consider non-preemptive scheduling of MapReduce jobs with multiple tasks in the practical scenario where each job requires several map-reduce rounds. We seek to minimize the average weighted completion time and consider scheduling on identical and unrelated parallel processors. For identical processors, we present LP-based O(1)-approximation algorithms. For unrelated processors, the approximation ratio naturally depends on the maximum number of rounds of any job. Since the number of rounds per job in typical MapReduce algorithms is a small constant, our scheduling algorithms achieve a small approximation ratio in practice. For the single-round case, we substantially improve on the previously best known approximation guarantees for both identical and unrelated processors. Moreover, we conduct an experimental analysis and compare the performance of our algorithms against a fast heuristic and a lower bound on the optimal solution, demonstrating their promising practical performance.
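A minimal sketch of the multi-round structure, simplified to one task per round (real rounds contain many parallel map/reduce tasks) and an illustrative greedy rule rather than the paper's LP-based algorithms: round r + 1 of a job cannot start before round r finishes, and a free machine picks the ready job with the largest weight-to-remaining-work ratio.

```python
def greedy_schedule(jobs, w, m):
    """jobs[j]: list of per-round task times forming a chain; m identical machines."""
    n = len(jobs)
    free = [0.0] * m                       # next free time of each machine
    nxt = [0] * n                          # next unstarted round of each job
    ready = [0.0] * n                      # earliest start of that round
    completion, done = [0.0] * n, 0
    while done < n:
        i = min(range(m), key=lambda i: free[i])      # earliest-free machine
        t = free[i]
        avail = [j for j in range(n) if nxt[j] < len(jobs[j]) and ready[j] <= t]
        if not avail:                      # idle until some round becomes ready
            free[i] = min(ready[j] for j in range(n) if nxt[j] < len(jobs[j]))
            continue
        # greedy rule: largest weight-to-remaining-work ratio first
        j = max(avail, key=lambda j: w[j] / sum(jobs[j][nxt[j]:]))
        dur = jobs[j][nxt[j]]
        nxt[j] += 1
        free[i] = ready[j] = t + dur       # round r+1 must wait for round r
        if nxt[j] == len(jobs[j]):
            completion[j], done = t + dur, done + 1
    return sum(w[j] * completion[j] for j in range(n))

# job 0: two rounds (2, 3); job 1: three rounds (1, 1, 1); two machines
print(greedy_schedule([[2, 3], [1, 1, 1]], w=[2, 1], m=2))   # -> 13
```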

    How the structure of precedence constraints may change the complexity class of scheduling problems

    This survey aims to demonstrate that the structure of precedence constraints plays a tremendous role in the complexity of scheduling problems. Indeed, many problems are NP-hard under general precedence constraints, yet become polynomially solvable for particular classes of precedence constraints. We also show that there are still many exciting challenges in this research area.
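One concrete instance of this phenomenon: minimizing makespan with unit-time tasks and general precedence constraints on m machines is NP-hard, whereas for in-tree precedences Hu's classical level algorithm is optimal in polynomial time. A sketch under the usual conventions (the instance below is illustrative):

```python
def hu_makespan(succ, m):
    """Hu's level algorithm for P | in-tree, p_j = 1 | C_max.
    succ[j]: j's unique successor (None for the root); m machines."""
    n = len(succ)
    level = [0] * n
    for j in range(n):                     # level = distance to the root
        k = j
        while succ[k] is not None:
            k = succ[k]
            level[j] += 1
    preds = [0] * n                        # unfinished predecessors per task
    for j in range(n):
        if succ[j] is not None:
            preds[succ[j]] += 1
    done, t = set(), 0
    while len(done) < n:
        ready = sorted((j for j in range(n) if j not in done and preds[j] == 0),
                       key=lambda j: -level[j])[:m]    # highest levels first
        for j in ready:
            done.add(j)
            if succ[j] is not None:
                preds[succ[j]] -= 1
        t += 1                             # one unit-length time slot
    return t

# in-tree: tasks 0 and 1 feed 2; tasks 2 and 3 feed the root 4
print(hu_makespan(succ=[2, 2, 4, 4, None], m=2))   # -> 3 (optimal)
```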

    Stochastic scheduling on unrelated machines

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine-dependent processing times of jobs and is captured by the classical unrelated machine scheduling model. The second characteristic is adequately addressed by stochastic processing times of jobs as they are studied in classical stochastic scheduling models. While there is an extensive but separate literature for the two scheduling models, we study for the first time a combined model that takes both characteristics into account simultaneously. Here, the processing time of job $j$ on machine $i$ is governed by a random variable $P_{ij}$, and its actual realization becomes known only upon job completion. With $w_j$ being the given weight of job $j$, we study the classical objective of minimizing the expected total weighted completion time $E[\sum_j w_j C_j]$, where $C_j$ is the completion time of job $j$. By means of a novel time-indexed linear programming relaxation, we compute in polynomial time a scheduling policy with performance guarantee $(3+\Delta)/2+\epsilon$. Here, $\epsilon>0$ is arbitrarily small, and $\Delta$ is an upper bound on the squared coefficient of variation of the processing times. We show that the dependence of the performance guarantee on $\Delta$ is tight, as we obtain a $\Delta/2$ lower bound for the type of policies that we use. When jobs also have individual release dates $r_{ij}$, our bound is $(2+\Delta)+\epsilon$. Via $\Delta=0$, the currently best known bounds for deterministic scheduling are contained as a special case.
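A worked instance of the guarantee: $\Delta$ upper-bounds the squared coefficient of variation $\mathrm{Var}[P_{ij}]/E[P_{ij}]^2$ over all job-machine pairs, so deterministic processing times give $\Delta = 0$ and exponential processing times give $\Delta = 1$. The numbers below are illustrative.

```python
def scv(mean, var):                     # squared coefficient of variation
    return var / mean**2

# e.g. one exponential job (variance = mean^2, SCV = 1) and a steadier one
delta = max(scv(10, 100), scv(4, 8))    # Delta = 1.0
eps = 0.01
print((3 + delta) / 2 + eps)            # -> 2.01, guarantee without release dates
print(2 + delta + eps)                  # -> 3.01, guarantee with release dates r_ij
```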

    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to model dynamic data without losing any of the temporal relationships, and to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture transfers spatio-temporal data patterns from a multidimensional input stream into internal patterns in the spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture was constructed using MATLAB in several individual modules linked together to form NeuCube (M1). This methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis, and second, to ecological data for aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are significant for health care management and environmental control. As the case studies represent vastly different application fields, they reveal the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the level of reliance that can be placed on the model created and enhancing our understanding of the complexities of the world around us without the need for over-simplification.
    Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction
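To illustrate the reservoir idea in this abstract (and only that: this is not the NeuCube (M1) implementation, and every parameter below is an illustrative assumption), here is a minimal leaky integrate-and-fire reservoir that turns a multidimensional input stream into internal spike patterns whose firing rates could feed a downstream personalised classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 3, 50, 100             # input channels, reservoir size, time steps
W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W = rng.normal(0.0, 0.1, (n_res, n_res))   # recurrent ("spatial") connections
decay, threshold = 0.9, 1.0
v = np.zeros(n_res)                     # membrane potentials
spikes = np.zeros(n_res)
states = []

x = rng.random((T, n_in))               # stand-in for a spatio-temporal input stream
for t in range(T):
    v = decay * v + W_in @ x[t] + W @ spikes   # leaky integration of inputs
    spikes = (v >= threshold).astype(float)    # neurons at threshold fire...
    v[spikes == 1.0] = 0.0                     # ...and reset
    states.append(spikes.copy())

# spike patterns become features for a downstream personalised model
features = np.mean(states, axis=0)      # mean firing rate per reservoir neuron
print(features.shape)                   # (50,)
```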