
    Common Due-Date Problem: Exact Polynomial Algorithms for a Given Job Sequence

    This paper considers the problem of scheduling jobs on single and parallel machines where all the jobs possess different processing times but a common due date. There is a penalty involved with each job if it is processed earlier or later than the due date. The objective of the problem is to find the assignment of jobs to machines, the processing sequence of the jobs and the times at which they are processed, which minimizes the total penalty incurred due to tardiness or earliness of the jobs. This work presents exact polynomial algorithms for optimizing a given job sequence for single and parallel machines with run-time complexities of $O(n \log n)$ and $O(mn^2 \log n)$ respectively, where $n$ is the number of jobs and $m$ the number of machines. The algorithms take a sequence consisting of all the jobs $(J_i,\ i = 1, 2, \dots, n)$ as input and distribute the jobs to machines (for $m > 1$) along with their best completion times so as to obtain the least possible total penalty for this sequence. We prove the optimality for the single machine case and the runtime complexities of both algorithms. Finally, we present results for the benchmark instances and compare with previous work for the single and parallel machine cases, for up to 200 jobs. Comment: 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing
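    Below is a minimal sketch of the objective this problem optimizes, for the single machine case with a fixed job sequence. It is not the paper's $O(n \log n)$ algorithm: it simply evaluates a small set of candidate start times (a start at time zero, plus every start that aligns one job's completion with the due date) in $O(n^2)$, relying on the commonly used structural property that some optimal schedule satisfies one of these two conditions. The job data, penalty weights and due date are illustrative assumptions.

```python
# Minimal sketch of the common due-date objective for a fixed job sequence on a
# single machine. Not the paper's O(n log n) algorithm: a naive O(n^2) baseline
# that tries a handful of candidate start times and keeps the cheapest schedule.

def total_penalty(proc_times, alpha, beta, due_date, start):
    """Total weighted earliness/tardiness when the sequence starts at `start`
    and jobs run back to back with no inserted idle time."""
    t, penalty = start, 0.0
    for p, a, b in zip(proc_times, alpha, beta):
        t += p                                   # completion time of this job
        penalty += a * max(due_date - t, 0.0)    # earliness cost
        penalty += b * max(t - due_date, 0.0)    # tardiness cost
    return penalty

def best_start_for_sequence(proc_times, alpha, beta, due_date):
    """Try start = 0 and every start that makes some job finish exactly at the
    due date; return (penalty, start) of the cheapest schedule found."""
    prefix = 0.0
    candidates = [0.0]
    for p in proc_times:
        prefix += p
        candidates.append(max(due_date - prefix, 0.0))
    return min((total_penalty(proc_times, alpha, beta, due_date, s), s)
               for s in candidates)

if __name__ == "__main__":
    proc = [4, 2, 6, 3]          # processing times of J1..J4 (assumed)
    alpha = [1, 1, 2, 1]         # earliness weights (assumed)
    beta = [3, 1, 1, 2]          # tardiness weights (assumed)
    print(best_start_for_sequence(proc, alpha, beta, due_date=9))
```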

    Tabu Search: A Comparative Study


    A Novel Approach to the Common Due-Date Problem on Single and Parallel Machines

    This paper presents a novel idea for the general case of the Common Due-Date (CDD) scheduling problem. The problem is about scheduling a certain number of jobs on single or parallel machines, where all the jobs possess different processing times but a common due date. The objective is to minimize the total penalty incurred due to earliness or tardiness of the job completions. This work presents exact polynomial algorithms for optimizing a given job sequence for single and identical parallel machines with a run-time complexity of $O(n \log n)$ in both cases, where $n$ is the number of jobs. Besides, we show that our approach for the parallel machine case is also suitable for non-identical parallel machines. We prove the optimality for the single machine case and the runtime complexities of both algorithms. Finally, we extend our approach to one particular dynamic case of the CDD and conclude the chapter with our results for the benchmark instances provided in the OR-library. Comment: Book chapter, 22 pages

    A review of lot streaming in a flow shop environment with makespan criteria

    Purpose: This paper reviews current literature and contributes a set of findings that capture the current state of the art of the topic of lot streaming in a flow shop. Design/methodology/approach: A literature review to capture, classify and summarize the main body of knowledge on lot streaming in a flow shop with makespan criteria, and translate it into a form that is readily accessible to researchers and practitioners in the more mainstream production scheduling community. Findings: The existing knowledge base is somewhat fragmented. This is a relatively unexplored topic within mainstream operations management research and one which could provide rich opportunities for further exploration. Originality/value: This paper sets out to review current literature, from an advanced production scheduling perspective, and contributes a set of findings that capture the current state of the art of this topic. This work has been carried out as part of the project "Programación de la Producción con Partición Ajustable de Lotes en entornos de Planificación mixta Pedido/Stock (PP-PAL-PPS)", ref. GVA/2013/034, funded by Consellería de Educación, Cultura y Deportes de la Generalitat Valenciana. Gómez-Gasquet, P.; Segura Andrés, R.; Andrés Romano, C. (2013). A review of lot streaming in a flow shop environment with makespan criteria. Journal of Industrial Engineering and Management, 6(3), 761-770. https://doi.org/10.3926/jiem.553

    Modelling and Scheduling Lot Streaming Flexible Flow Lines

    Although lot streaming scheduling is an active research field, lot streaming flexible flow line problems have received far less attention than classical flow shops. This paper deals with scheduling jobs in lot streaming flexible flow line problems. The paper mathematically formulates the problem as a mixed integer linear programming model. This model solves small instances to optimality. Moreover, a novel artificial bee colony optimization algorithm is developed. This algorithm utilizes five effective mechanisms to solve the problem. To evaluate the algorithm, it is compared with adaptations of four available algorithms. The statistical analyses showed that the proposed algorithm significantly outperformed the other tested algorithms.
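    As a toy illustration of why lot streaming helps (and not of the paper's mixed integer model or bee colony algorithm), the sketch below computes the makespan of a single lot processed through a small permutation flow shop, first as one batch and then split into consistent sublots; the lot size, sublot split and per-unit processing times are assumed for the example.

```python
# Toy lot streaming illustration: splitting a lot into sublots lets downstream
# machines start earlier, shortening the makespan of a plain flow shop.

def makespan(sublot_sizes, unit_times):
    """Permutation flow shop makespan with consistent sublots: sublot i on
    machine j can start only after it finishes on machine j-1 and after
    sublot i-1 finishes on machine j."""
    m = len(unit_times)
    finish = [0.0] * m                  # finish time of the previous sublot per machine
    for size in sublot_sizes:
        prev_machine = 0.0
        for j in range(m):
            start = max(prev_machine, finish[j])
            finish[j] = start + size * unit_times[j]
            prev_machine = finish[j]
    return finish[-1]

if __name__ == "__main__":
    unit_times = [2.0, 3.0, 1.0]             # per-unit time on each of 3 machines (assumed)
    print(makespan([12], unit_times))        # one unsplit lot of 12 units: makespan 72
    print(makespan([4, 4, 4], unit_times))   # same lot as three sublots: makespan 48
```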

    Offline Learning for Sequence-based Selection Hyper-heuristics

    This thesis is concerned with finding solutions to discrete NP-hard problems. Such problems occur in a wide range of real-world applications, such as bin packing, industrial flow shop problems, determining Boolean satisfiability, the traveling salesman and vehicle routing problems, course timetabling, personnel scheduling, and the optimisation of water distribution networks. They are typically represented as optimisation problems where the goal is to find a "best" solution from a given space of feasible solutions. As no known polynomial-time algorithmic solution exists for NP-hard problems, they are usually solved by applying heuristic methods. Selection hyper-heuristics are algorithms that organise and combine a number of individual low level heuristics into a higher level framework with the objective of improving optimisation performance. Many selection hyper-heuristics employ learning algorithms in order to enhance optimisation performance by improving the selection of single heuristics, and this learning may be classified as either online or offline. This thesis presents a novel statistical framework for the offline learning of subsequences of low level heuristics in order to improve the optimisation performance of sequence-based selection hyper-heuristics. A selection hyper-heuristic is used to optimise the HyFlex set of discrete benchmark problems. The resulting sequences of low level heuristic selections and objective function values are used to generate an offline learning database of heuristic selections. The sequences in the database are broken down into subsequences, and the mathematical concept of a logarithmic return is used to discriminate between "effective" subsequences, which tend to lead to improvements in optimisation performance, and "disruptive" subsequences, which tend to lead to worsening performance. Effective subsequences are used to improve hyper-heuristic performance directly, by embedding them in a simple hyper-heuristic design, and indirectly, as the inputs to an appropriate hyper-heuristic learning algorithm. Furthermore, by comparing effective subsequences across different problem domains it is possible to investigate the potential for cross-domain learning. The results presented here demonstrate that the use of well chosen subsequences of heuristics can lead to small, but statistically significant, improvements in optimisation performance.
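    The following sketch illustrates the logarithmic-return idea described above: windows of low level heuristic selections are taken from a run trace, and each window is labelled "effective" or "disruptive" according to the sign of the log return of the objective value across it. The trace, the window length and the zero threshold are illustrative assumptions rather than the thesis's full statistical framework.

```python
import math

def log_return(objective_before, objective_after):
    """Logarithmic return of a (positive) minimisation objective; a negative
    value means the subsequence improved the solution."""
    return math.log(objective_after / objective_before)

def label_subsequences(trace, length):
    """Slide a window of `length` heuristic applications over a run trace of
    (heuristic_id, objective value before that step) pairs and label each
    window by the sign of its accumulated log return."""
    labels = []
    for i in range(len(trace) - length):
        subseq = tuple(h for h, _ in trace[i:i + length])
        r = log_return(trace[i][1], trace[i + length][1])
        labels.append((subseq, "effective" if r < 0 else "disruptive"))
    return labels

if __name__ == "__main__":
    # toy run trace: (low level heuristic id, objective value before that step)
    trace = [(0, 100.0), (2, 96.0), (1, 97.5), (2, 90.0), (0, 92.0), (1, 95.0)]
    for subseq, label in label_subsequences(trace, length=2):
        print(subseq, label)
```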

    Hyper-heuristic decision tree induction

    A hyper-heuristic is any algorithm that searches or operates in the space of heuristics, as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempt to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solve a problem (and in turn may be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining, by searching a space of attribute selection criteria for a decision tree building algorithm. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved. Our hyper-heuristic is no different. It employs a strategy for adapting the heuristic used to build decision tree nodes according to some set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision tree building algorithm to a heuristic to be used for splitting the same data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between these two kinds of hyper-heuristics is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain for evolving the hyper-heuristic rule set. Such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver a robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis. We use both real data sets and synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work are suited for specialization: in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods. In some cases we even achieved a significantly better overall performance than all of the standard methods.
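    A minimal sketch of the node-level idea described above is given below: a hand-written rule set (standing in for the rule set that the genetic algorithm would evolve) maps features of the data reaching a tree node to one of two low-level splitting heuristics, Gini impurity or entropy-based impurity. The rule, the features and the pair of heuristics are illustrative assumptions.

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy of a set of class labels (bits)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_score(labels_left, labels_right, impurity):
    """Weighted impurity of a candidate binary split (lower is better)."""
    n = len(labels_left) + len(labels_right)
    return (len(labels_left) / n) * impurity(labels_left) + \
           (len(labels_right) / n) * impurity(labels_right)

def choose_low_level_heuristic(n_rows, n_classes):
    """Hand-written stand-in rule set: map problem-state features of the data
    reaching a node to the splitting heuristic used at that node."""
    return entropy if n_classes > 2 or n_rows < 50 else gini

if __name__ == "__main__":
    left, right = ["a", "a", "b"], ["b", "b", "b", "a"]
    impurity = choose_low_level_heuristic(n_rows=len(left) + len(right),
                                          n_classes=2)
    print(impurity.__name__, split_score(left, right, impurity))
```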