    NEH-based heuristics for the permutation flowshop scheduling problem to minimize total tardiness

    Since Johnson's seminal paper in 1954, scheduling jobs in a permutation flowshop has received the attention of hundreds of practitioners and researchers, being one of the most studied topics in the Operations Research literature. Among the different objectives that can be considered, minimising the total tardiness (i.e. the sum of the surplus of the completion time of each job over its due date) is regarded as a key objective for manufacturing companies, as it entails the fulfilment of the due dates committed to customers. Since this problem is known to be NP-hard, most research has focused on proposing approximate procedures that solve it in reasonable computation times. In particular, several constructive heuristics have been proposed, with NEHedd being the most efficient one; it also serves to provide an initial solution for more elaborate approximate procedures. In this paper, we first analyse in detail the decision problem depending on how the due dates of the jobs are generated, and discuss the similarities with different related decision problems. In addition, for the most characteristic tardiness scenario, the analysis shows that a huge number of ties appear during the construction of solutions by the NEHedd heuristic, and that wisely breaking these ties greatly influences the quality of the final solution. Since no tie-breaking mechanism had been designed for this heuristic until now, we propose several mechanisms that are exhaustively tested. The results show that some of them outperform the original NEHedd by about 25% while keeping the same computational requirements. Funding: Ministerio de Ciencia e Innovación DPI2010-15573/DPI; Ministerio de Ciencia e Innovación DPI2013-44461-P/DP
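
    To make the construction concrete, the sketch below gives a minimal NEH-style insertion loop with a pluggable tie-breaking hook. It is an illustration under assumptions, not the paper's implementation: the data layout (proc[j][k], due[j]), the tardiness evaluation, and the tie_break signature are all hypothetical.

    ```python
    # Minimal sketch of an NEHedd-style construction with a tie-breaking
    # hook. Data layout and the tie_break interface are assumptions made
    # for illustration; they are not taken from the paper.

    def total_tardiness(sequence, proc, due):
        """Total tardiness of a job permutation on an m-machine flowshop."""
        m = len(proc[0])
        comp = [0.0] * m                      # completion times so far
        tardiness = 0.0
        for j in sequence:
            for k in range(m):
                prev = comp[k - 1] if k > 0 else 0.0
                comp[k] = max(comp[k], prev) + proc[j][k]
            tardiness += max(0.0, comp[-1] - due[j])
        return tardiness

    def neh_edd(proc, due, tie_break):
        """proc[j][k]: time of job j on machine k; due[j]: due date of j."""
        jobs = sorted(range(len(proc)), key=lambda j: due[j])   # EDD order
        seq = [jobs[0]]
        for j in jobs[1:]:
            best, tied_positions = None, []
            for pos in range(len(seq) + 1):                     # try every slot
                t = total_tardiness(seq[:pos] + [j] + seq[pos:], proc, due)
                if best is None or t < best:
                    best, tied_positions = t, [pos]
                elif t == best:
                    tied_positions.append(pos)  # ties are very frequent here
            seq.insert(tie_break(tied_positions, seq, j), j)
        return seq

    def first_slot(positions, seq, j):
        """Naive default tie-breaker: take the leftmost tied slot."""
        return positions[0]
    ```

    The paper's contribution lives entirely in what tie_break does with tied_positions; the leftmost rule above is only the naive default.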

    A tabu search procedure for developing robust predictive project schedules.

    Proactive scheduling aims at generating robust baseline schedules that are protected as much as possible against disruptions that may occur during project execution. In this paper, we focus on disruptions caused by stochastic resource availabilities and aim at generating stable baseline schedules. A schedule's robustness (stability) is measured by the weighted deviation between the planned and the actually realized activity starting times during project execution. We present a tabu search procedure that operates on a surrogate, free-slack based objective function. Its effectiveness is demonstrated by extensive computational results obtained on a set of randomly generated test instances. Keywords: Project scheduling; Robustness; Proactive; Stability
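
    As a rough illustration of what a free-slack based surrogate can look like (the exact surrogate of the paper is not reproduced here; the data structures and the capped-slack form below are assumptions):

    ```python
    # Toy free-slack surrogate for schedule stability. An activity's free
    # slack is the gap between its finish and the earliest start of its
    # successors; weighting it by the activity's instability cost rewards
    # schedules that shield expensive-to-move activities. Capping the
    # contribution at the activity duration is an illustrative choice.

    def free_slack(start, dur, successors, horizon):
        """Free slack of each activity in a given baseline schedule."""
        slack = {}
        for i, s in start.items():
            finish = s + dur[i]
            succ_starts = [start[j] for j in successors.get(i, [])]
            slack[i] = (min(succ_starts) if succ_starts else horizon) - finish
        return slack

    def surrogate_stability(start, dur, successors, weight, horizon):
        """Higher is better; a tabu search would maximize this surrogate."""
        slack = free_slack(start, dur, successors, horizon)
        return sum(weight[i] * min(slack[i], dur[i]) for i in start)
    ```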

    Average-case Approximation Ratio of Scheduling without Payments

    Apart from the principles and methodologies inherited from Economics and Game Theory, studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, the canonical measure of how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense: it compares the performance of the optimal mechanism against the performance of a truthful mechanism over all possible inputs. In this paper, we take the average-case analysis approach and tackle one of the primary motivating problems in Algorithmic Mechanism Design -- the scheduling problem [Nisan and Ronen 1999]. One version of this problem, which includes a verification component, is studied by [Koutsoupias 2014]. It was shown that the problem has a tight approximation ratio bound of (n+1)/2 for the single-task setting, where n is the number of machines. We show, however, that when the costs of the machines for executing the task are independently and identically distributed, the average-case approximation ratio of the mechanism given in [Koutsoupias 2014] is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio, and indicates that the optimal mechanism for the problem actually works well on average, even though in the worst case the expected cost of the mechanism is Theta(n) times the optimal cost.
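
    The notion of an average-case ratio is easy to estimate empirically. The harness below uses the ratio-of-expectations convention (one common way of defining the average-case ratio) and a deliberately hypothetical placeholder mechanism; it is not the mechanism of [Koutsoupias 2014].

    ```python
    # Monte Carlo estimate of an average-case approximation ratio, taken
    # here as E[mechanism cost] / E[optimal cost] over i.i.d. machine
    # costs. The mechanism below is a made-up placeholder for the harness.

    import random

    def placeholder_mechanism_cost(costs):
        # Hypothetical: cheapest machine w.p. 1/2, random machine otherwise.
        return min(costs) if random.random() < 0.5 else random.choice(costs)

    def average_case_ratio(n, sample_cost, trials=100_000):
        mech = opt = 0.0
        for _ in range(trials):
            costs = [sample_cost() for _ in range(n)]
            mech += placeholder_mechanism_cost(costs)
            opt += min(costs)
        return mech / opt

    # Example: n = 10 machines, costs uniform on (0, 1].
    print(average_case_ratio(10, lambda: random.uniform(1e-9, 1.0)))
    ```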

    Robust job-sequencing with an uncertain flexible maintenance activity

    In this study, we address the problem of scheduling a set of jobs and one uncertain maintenance activity on a single machine, with the objective of minimizing the makespan. The maintenance activity has a given duration and must be executed within a given time window. Furthermore, the duration and time window of the maintenance are uncertain and can take different values, described by different scenarios. The problem is to determine a job sequence that performs well, in terms of makespan, independently of the possible variation of the data concerning the maintenance. A robust scheduling approach is used for the problem, in which four different measures of robustness are considered, namely maximum absolute regret, maximum relative regret, worst-case scenario, and ordered weighted averaging. Complexity and approximation results are presented. In particular, we show that, for all four robustness criteria, the problem is strongly NP-hard. A number of special cases are explored, and an exact pseudopolynomial algorithm based on dynamic programming is devised for the case where the number of scenarios is fixed. Two Mixed Integer Programming (MIP) models are also presented for the general problem. Several computational experiments have been conducted to evaluate the efficiency and effectiveness of the MIP models and of the dynamic programming approach.
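
    To fix ideas, here is a toy evaluation of one of the four criteria, maximum absolute regret, under heavy simplifying assumptions: the maintenance of duration d must start inside the window [a, b], and it is inserted at the first idle point at or after a, or pulled before a job that would otherwise push its start past b. None of this is the paper's model; it only shows how a single sequence is scored across scenarios.

    ```python
    # Toy single-machine makespan with one flexible maintenance activity,
    # plus a maximum-absolute-regret score over scenarios. The scheduling
    # rule is a simplifying assumption for illustration only.

    def makespan(sequence, proc, scenario):
        a, b, d = scenario           # maintenance window [a, b], duration d
        t, maint_done = 0.0, False
        for j in sequence:
            if not maint_done and t + proc[j] > b:
                t = max(t, a) + d    # run maintenance before job j
                maint_done = True
            t += proc[j]
        if not maint_done:           # window still open after the last job
            t = max(t, a) + d
        return t

    def max_absolute_regret(sequence, proc, scenarios, best_makespan):
        """best_makespan[s]: optimal makespan under scenario s (precomputed)."""
        return max(makespan(sequence, proc, sc) - best_makespan[s]
                   for s, sc in enumerate(scenarios))
    ```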

    Timeslack-based techniques for generating robust project schedules subject to resource uncertainty.

    The classical, deterministic resource-constrained project scheduling problem has been the subject of a great deal of research during the previous decades. This is not surprising given the high practical relevance of this scheduling problem. Nevertheless, extensions are needed to cope better with situations arising in practice, such as multiple activity execution modes, activity duration changes, and resource breakdowns. In this paper we analytically determine the impact of unexpected resource breakdowns on activity durations. Furthermore, using this information we develop an approach for inserting explicit idle time into the project schedule in order to protect it as well as possible from disruptions caused by resource unavailabilities. This strategy is compared to a traditional simulation-based procedure and to a heuristic developed for the case of stochastic activity durations. Keywords: Uncertainty; Project scheduling; Scheduling; Research; Impact; Information; Time; Order; IT; Strategy; Heuristic
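
    The core idea, explicit idle-time insertion sized by the expected downtime of the resources an activity uses, can be sketched in a few lines. Buffer sizing and data structures below are illustrative assumptions, not the paper's procedure:

    ```python
    # Toy right-shift buffering: delay each activity's planned start by a
    # buffer derived from expected resource downtime, while keeping
    # precedence feasibility. Assumes the baseline schedule is
    # precedence-feasible with positive durations, so sorting by start
    # time gives a valid topological order.

    def buffered_starts(start, dur, preds, expected_downtime):
        new_start = {}
        for i in sorted(start, key=start.get):
            earliest = max((new_start[p] + dur[p] for p in preds.get(i, [])),
                           default=0.0)
            buffer = expected_downtime.get(i, 0.0)   # illustrative buffer size
            new_start[i] = max(start[i], earliest) + buffer
        return new_start
    ```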

    Optimizing egalitarian performance in the side-effects model of colocation for data center resource management

    In data centers, up to dozens of tasks are colocated on a single physical machine. Machines are used more efficiently, but tasks' performance deteriorates, as colocated tasks compete for shared resources. As tasks are heterogeneous, the resulting performance dependencies are complex. In our previous work [18] we proposed a new combinatorial optimization model that uses two parameters of a task - its size and its type - to characterize how a task influences the performance of other tasks allocated to the same machine. In this paper, we study the egalitarian optimization goal: maximizing the worst-off performance. This problem generalizes the classic makespan minimization on multiple processors (P||Cmax). We prove that variants of multiprocessor scheduling that are polynomially solvable in the classic setting become NP-hard and hard to approximate when the number of types is not constant. For a constant number of types, we propose a PTAS, a fast approximation algorithm, and a series of heuristics. We simulate the algorithms on instances derived from a trace of one of Google's clusters. Algorithms aware of jobs' types lead to better performance compared with algorithms solving P||Cmax. The notion of type enables us to model the degradation of performance caused by using standard combinatorial optimization methods. Types add a layer of additional complexity; however, our results - approximation algorithms and good average-case performance - show that types can be handled efficiently. Comment: Author's version of a paper published in the Euro-Par 2017 proceedings; extends the published paper with additional results and proofs.
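
    For context, the type-oblivious baseline the paper compares against is ordinary P||Cmax scheduling; a standard greedy for it is Longest Processing Time first, sketched below. The side-effects performance model itself (per-type load dependencies) is in the cited work and is not reproduced here.

    ```python
    # LPT greedy for P||Cmax: repeatedly assign the largest remaining task
    # to the currently least-loaded machine. A type-aware variant would
    # instead pick the machine that maximizes the worst-off colocated
    # task's performance under the per-type model.

    import heapq

    def lpt_assign(sizes, m):
        """Return (task -> machine) assignment and the resulting makespan."""
        loads = [(0.0, i) for i in range(m)]         # (load, machine id)
        heapq.heapify(loads)
        assignment = {}
        for task in sorted(range(len(sizes)), key=lambda t: -sizes[t]):
            load, i = heapq.heappop(loads)
            assignment[task] = i
            heapq.heappush(loads, (load + sizes[task], i))
        return assignment, max(load for load, _ in loads)
    ```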