
    Stronger Lagrangian bounds by use of slack variables: applications to machine scheduling problems

    Lagrangian relaxation is a powerful bounding technique that has been applied successfully to many NP-hard combinatorial optimization problems. The basic idea is to see an NP-hard problem as an easy-to-solve problem complicated by a number of nasty side constraints. We show that reformulating nasty inequality constraints as equalities by using slack variables leads to stronger lower bounds. The trick is widely applicable, but we focus on a broad class of machine scheduling problems for which it is particularly useful. We provide promising computational results for three problems belonging to this class for which Lagrangian bounds have appeared in the literature: the single-machine problem of minimizing total weighted completion time subject to precedence constraints, the two-machine flow-shop problem of minimizing total completion time, and the single-machine problem of minimizing total weighted tardiness.
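    In outline, the trick can be sketched for linear side constraints as follows (schematic notation, not the paper's; u denotes valid upper bounds on the slacks, which the scheduling structure must supply):

```latex
% Schematic of the slack-variable trick; u denotes valid upper bounds on the
% slacks, which must be derivable from the problem for the trick to pay off.
\begin{align*}
  &\text{Problem:} && z = \min\{\, c^\top x : Ax \ge b,\ x \in X \,\} \\
  &\text{Inequality relaxation:} &&
     L_{\ge}(\lambda) = \min_{x \in X}\ c^\top x + \lambda^\top (b - Ax),
     \qquad \lambda \ge 0 \\
  &\text{Slack reformulation:} && Ax - s = b,\quad 0 \le s \le u \\
  &\text{Equality relaxation:} &&
     L_{=}(\lambda) = \min_{x \in X,\ 0 \le s \le u}\
     c^\top x + \lambda^\top (b - Ax + s),
     \qquad \lambda \text{ free}
\end{align*}
```

    For \(\lambda \ge 0\) the inner minimum sets \(s = 0\), so \(L_{=}(\lambda) = L_{\ge}(\lambda)\); because the equality multipliers are unrestricted in sign, maximizing \(L_{=}\) over all \(\lambda\) can only match or exceed maximizing \(L_{\ge}\) over \(\lambda \ge 0\), and with finite slack bounds u the improvement can be strict.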

    Scheduling Jobs in Flowshops with the Introduction of Additional Machines in the Future

    This is the author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by Elsevier and can be found at: http://www.journals.elsevier.com/expert-systems-with-applications/. The problem of scheduling jobs to minimize total weighted tardiness in flowshops, with the possibility of evolving into hybrid flowshops in the future, is investigated in this paper. As this research is guided by a real problem in industry, the flowshop considered has considerable flexibility, which stimulated the development of an innovative methodology for this research. Each stage of the flowshop currently has one or several identical machines. However, the manufacturing company is planning to introduce additional machines with different capabilities in different stages in the near future. Thus, the algorithm proposed and developed for the problem is capable of solving not only the current flow line configuration but also the potential new configurations that may result in the future. A meta-heuristic search algorithm based on Tabu search is developed to solve this NP-hard, industry-guided problem. Six different initial solution finding mechanisms are proposed. A carefully planned nested split-plot design is performed to test the significance of different factors and their impact on the performance of the different algorithms. To the best of our knowledge, this research is the first of its kind that attempts to solve an industry-guided problem with a concern for future developments.
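    To make the search concrete, here is a minimal tabu-search sketch for the permutation-flowshop special case with total weighted tardiness. It is a generic reconstruction, not the paper's algorithm (which also handles hybrid stages with parallel machines and six seeding mechanisms); all function names and parameter values are hypothetical.

```python
import random

def completion_times(seq, proc):        # proc[j][k] = proc. time of job j at stage k
    avail = [0.0] * len(proc[0])        # avail[k] = when stage k frees up
    done = {}
    for j in seq:
        t = 0.0
        for k, p in enumerate(proc[j]):
            t = max(t, avail[k]) + p    # wait for the stage, then process
            avail[k] = t
        done[j] = t                     # completion time on the last stage
    return done

def twt(seq, proc, due, w):             # total weighted tardiness
    done = completion_times(seq, proc)
    return sum(w[j] * max(0.0, done[j] - due[j]) for j in seq)

def tabu_search(proc, due, w, iters=500, tenure=7, samples=40, seed=0):
    rng = random.Random(seed)
    cur = list(range(len(proc)))
    rng.shuffle(cur)                    # crude initial solution
    best, best_val = cur[:], twt(cur, proc, due, w)
    tabu = {}                           # job pair -> iteration until which swap is tabu
    for it in range(iters):
        cand = None
        for _ in range(samples):        # sample the swap neighbourhood
            i, j = sorted(rng.sample(range(len(cur)), 2))
            key = tuple(sorted((cur[i], cur[j])))
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            val = twt(nb, proc, due, w)
            # aspiration: a tabu move is allowed if it beats the incumbent
            if (tabu.get(key, -1) < it or val < best_val) \
                    and (cand is None or val < cand[0]):
                cand = (val, nb, key)
        if cand is None:
            continue
        val, cur, key = cand
        tabu[key] = it + tenure         # forbid re-swapping this pair for a while
        if val < best_val:
            best, best_val = cur[:], val
    return best, best_val
```

    Swapping in a dispatching-rule seed (e.g., earliest due date) for the random shuffle, and per-stage machine lists for the single-machine stages, would move the sketch toward the hybrid setting the paper addresses.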

    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to undertake the modelling of dynamic data without losing any of the temporal relationships, and to be able to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture is able to transfer spatio-temporal data patterns from a multidimensional input stream into internal patterns in the spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture was constructed using MATLAB in several individual modules linked together to form NeuCube (M1). This methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis, and second, to ecological data on aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings for health care management and environmental control are not insignificant. As the case studies utilised here represent vastly different application fields, they reveal more of the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the level of reliance that can be placed on the model created and enhancing our human understanding of the complexities of the world around us without the need for oversimplification.
    Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction
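    As a rough illustration of the pipeline described (spike encoding of a multivariate stream, a spiking reservoir, state read-out), the following generic sketch uses delta/threshold encoding and a small leaky integrate-and-fire reservoir. It is not the NeuCube (M1) implementation; all names, weights, and parameters are placeholders.

```python
import numpy as np

def threshold_encode(x, thr=0.1):
    """Signal-change (delta) encoding: emit +1/-1 spikes whenever the signal
    moves more than `thr` since the last emitted spike."""
    spikes = np.zeros(len(x), dtype=int)
    last = x[0]
    for t in range(1, len(x)):
        if x[t] - last > thr:
            spikes[t], last = 1, x[t]
        elif last - x[t] > thr:
            spikes[t], last = -1, x[t]
    return spikes

def run_reservoir(spike_trains, n_neurons=50, leak=0.9, v_thr=1.0, seed=0):
    """Drive a random leaky integrate-and-fire reservoir with input spikes;
    return per-neuron firing counts, usable as features for a downstream
    personalised classifier or regressor."""
    rng = np.random.default_rng(seed)
    n_in, T = spike_trains.shape
    w_in = rng.normal(0.0, 0.5, size=(n_neurons, n_in))    # input weights
    w_rec = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))
    v = np.zeros(n_neurons)                                # membrane potentials
    fired_prev = np.zeros(n_neurons)
    counts = np.zeros(n_neurons)
    for t in range(T):
        v = leak * v + w_in @ spike_trains[:, t] + w_rec @ fired_prev
        fired_prev = (v >= v_thr).astype(float)
        counts += fired_prev
        v[v >= v_thr] = 0.0                                # reset after spiking
    return counts

# Usage: for an (n_channels, T) series, e.g.
#   features = run_reservoir(np.stack([threshold_encode(ch) for ch in series]))
```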

    Scheduling in assembly type job-shops

    Assembly type job-shop scheduling is a generalization of the job-shop scheduling problem that includes assembly operations. In the assembly type job-shop scheduling problem, there are n jobs to be processed on m workstations, and each job has a due date. Each job visits one or more workstations in a predetermined route. The primary difference between this new problem and the classical job-shop problem is that two or more jobs can merge to form a new job at a specified workstation; that is, job convergence is permitted. This feature cannot be modeled by existing job-shop techniques. In this dissertation, we develop scheduling procedures for the assembly type job-shop with the objective of minimizing total weighted tardiness. Three types of workstations are modeled: single machine, parallel machine, and batch machine. We label this new scheduling procedure SB. The SB procedure is heuristic in nature and is derived from the shifting bottleneck concept. SB decomposes the assembly type job-shop scheduling problem into several workstation scheduling sub-problems. Various types of techniques are used in developing the scheduling heuristics for these sub-problems, including the greedy method, beam search, critical path analysis, local search, and dynamic programming. The performance of SB is validated on a set of test problems and compared with priority rules that are normally used in practice. The results show that SB outperforms the priority rules by an average of 19%–36% on the test problems. SB is extended to solve scheduling problems with other objectives, including minimizing the maximum completion time, minimizing weighted flow time, and minimizing maximum weighted lateness. Comparisons on the test problems indicate that SB outperforms the priority rules for these objectives as well. The SB procedure and its accompanying logic are programmed into an object-oriented scheduling system labeled LEKIN. The LEKIN program includes a standard library of scheduling rules and hence can be used as a platform for the development of new scheduling heuristics. In industrial applications, LEKIN allows schedulers to obtain effective machine schedules rapidly. The results from this research allow us to increase shop utilization, improve customer satisfaction, and lower work-in-process inventory without a major capital investment.
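    One core building block of such a decomposition is the single-machine total-weighted-tardiness subproblem. The sketch below sequences it with the classical Apparent Tardiness Cost (ATC) dispatching rule; this is only an illustrative fragment, not the dissertation's SB procedure, which layers beam search, critical-path analysis, and local search on top of subproblem solvers of this kind.

```python
import math

def atc_sequence(jobs, K=2.0):
    """jobs: list of (proc_time p, due_date d, weight w). Builds a sequence by
    repeatedly dispatching the job with the highest ATC priority index."""
    remaining = list(range(len(jobs)))
    seq, t = [], 0.0
    while remaining:
        # average processing time of the remaining jobs (lookahead scaling)
        p_bar = sum(jobs[j][0] for j in remaining) / len(remaining)
        def atc(j):
            p, d, w = jobs[j]
            slack = max(0.0, d - t - p)
            return (w / p) * math.exp(-slack / (K * p_bar))
        j = max(remaining, key=atc)
        remaining.remove(j)
        seq.append(j)
        t += jobs[j][0]                 # machine is busy until this job finishes
    return seq

def total_weighted_tardiness(seq, jobs):
    t, cost = 0.0, 0.0
    for j in seq:
        p, d, w = jobs[j]
        t += p
        cost += w * max(0.0, t - d)
    return cost
```

    In a shifting-bottleneck loop, a solver like this would be run for each unscheduled workstation, the workstation whose schedule is most critical would be fixed, and the remaining subproblems re-evaluated.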

    Dynamic resource constrained multi-project scheduling problem with weighted earliness/tardiness costs

    In this study, a conceptual framework is given for the dynamic resource constrained multi-project scheduling problem with weighted earliness/tardiness costs (DRCMPSPWET), and a mathematical programming formulation of the problem is provided. In DRCMPSPWET, a project arrives on top of an existing project portfolio, and a due date has to be quoted for the new project while minimizing the costs of schedule changes. The objective function consists of the weighted earliness/tardiness costs of the activities of the existing projects in the current baseline schedule, plus a term that increases linearly with the anticipated completion time of the new project. An iterated local search based approach is developed for large instances of this problem. In order to analyze the performance and behavior of the proposed method, a new multi-project data set is created by controlling the total number of activities, the due date tightness, the due date range, the number of resource types, and the completion time factor in an instance. A series of computational experiments is carried out to test the performance of the local search approach. Exact solutions are provided for the small instances. The results indicate that the local search heuristic performs well in terms of both solution quality and solution time.
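    The overall shape of such an iterated local search is simple. The sketch below is a generic skeleton in which the cost function, neighbourhood, and perturbation are placeholders for the paper's activity-list moves and earliness/tardiness objective; none of it is the study's actual implementation.

```python
import random

def iterated_local_search(init, cost, neighbours, perturb, iters=100, seed=0):
    """init: starting solution; cost(s) -> float; neighbours(s) -> iterable of
    solutions; perturb(s, rng) -> a 'kicked' copy of s."""
    rng = random.Random(seed)

    def local_search(s):
        improved = True
        while improved:
            improved = False
            for nb in neighbours(s):
                if cost(nb) < cost(s):
                    s, improved = nb, True
                    break                       # first-improvement descent
        return s

    best = cur = local_search(init)
    for _ in range(iters):
        cand = local_search(perturb(cur, rng))  # kick, then re-descend
        if cost(cand) < cost(cur):              # accept only improvements
            cur = cand
        if cost(cur) < cost(best):
            best = cur
    return best
```

    Plugging in a schedule representation, its neighbourhood (e.g., activity-list swaps), and a perturbation operator (e.g., a random multi-swap) instantiates the skeleton.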

    Flow shop scheduling with earliness, tardiness and intermediate inventory holding costs

    We consider the problem of scheduling customer orders in a flow shop with the objective of minimizing the sum of tardiness, earliness (finished goods inventory holding) and intermediate (work-in-process) inventory holding costs. We formulate this problem as an integer program, and based on approximate solutions to two different, but closely related, Dantzig-Wolfe reformulations, we develop heuristics to minimize the total cost. We exploit the duality between Dantzig-Wolfe reformulation and Lagrangian relaxation to enhance our heuristics. This combined approach enables us to develop two different lower bounds on the optimal integer solution, together with intuitive approaches for obtaining near-optimal feasible integer solutions. To the best of our knowledge, this is the first paper that applies column generation to a scheduling problem with different types of strongly NP-hard pricing problems which are solved heuristically. The computational study demonstrates that our algorithms have a significant speed advantage over alternate methods, yield good lower bounds, and generate near-optimal feasible integer solutions for problem instances with many machines and a realistically large number of jobs.
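    The duality being exploited can be stated compactly (a standard result, in schematic notation): dualizing the linking constraints while keeping the pricing set X gives the same bound as the LP relaxation of the Dantzig-Wolfe master over the convexified subproblem set:

```latex
% Standard equivalence behind the combined approach (schematic notation):
% the Dantzig--Wolfe LP master bound coincides with the Lagrangian dual
% bound obtained from the same constraint split.
\begin{equation*}
  z_{\mathrm{DW}}
    = \min\{\, c^\top x : Ax \ge b,\ x \in \operatorname{conv}(X) \,\}
    = \max_{\lambda \ge 0}\ \min_{x \in X}
      \Big( c^\top x + \lambda^\top (b - Ax) \Big).
\end{equation*}
```

    In practice this means dual prices from the restricted master can serve as Lagrange multipliers and vice versa, which is how the two bounding schemes can reinforce each other within the heuristics.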