7 research outputs found

    Dynamic adjustment of dispatching rule parameters in flow shops with sequence dependent setup times

    Decentralized scheduling with dispatching rules is applied in many fields of production and logistics, especially in highly complex manufacturing systems. Since dispatching rules are restricted to their local information horizon, there is no rule that outperforms other rules across various objectives, scenarios, and system conditions. In this paper, we present an approach to dynamically adjust the parameters of a dispatching rule depending on the current system conditions. The influence of different parameter settings of the chosen rule on system performance is estimated by a machine learning method, whose learning data is generated by preliminary simulation runs. Using a dynamic flow shop scenario with sequence-dependent setup times, we demonstrate that our approach is capable of significantly reducing the mean tardiness of jobs.
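
    The core loop of such an approach can be pictured with a short, hedged sketch in Python. The feature names, the candidate parameter grid, and the regressor below are assumptions for illustration, not the authors' implementation: a model trained on data from preliminary simulation runs predicts mean tardiness for (system state, rule parameter) pairs, and at each dispatching decision the rule is re-parameterised with whichever candidate value the model currently ranks best.

        # Illustrative sketch (not the paper's exact method): learn a mapping from
        # (system state, rule parameter) to expected mean tardiness using data
        # collected in preliminary simulation runs, then pick the parameter value
        # the model predicts to perform best for the current system state.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical training rows: [utilisation, setup_ratio, due_date_tightness, k]
        X_train = np.array([
            [0.70, 0.15, 1.2, 0.5],
            [0.70, 0.15, 1.2, 2.0],
            [0.90, 0.30, 0.8, 0.5],
            [0.90, 0.30, 0.8, 2.0],
        ])
        y_train = np.array([12.3, 9.8, 41.0, 55.2])   # observed mean tardiness

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        def best_parameter(state, candidates=(0.5, 1.0, 1.5, 2.0)):
            """Return the rule parameter with the lowest predicted mean tardiness
            for the current system state (here a 3-element feature vector)."""
            rows = np.array([list(state) + [k] for k in candidates])
            return candidates[int(np.argmin(model.predict(rows)))]

        # At each dispatching decision, re-parameterise the rule on the fly:
        k_now = best_parameter(state=[0.85, 0.25, 0.9])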

    Learning-based scheduling of flexible manufacturing systems using ensemble methods

    Dispatching rules are commonly applied to schedule jobs in Flexible Manufacturing Systems (FMSs). However, the suitability of these rules relies heavily on the state of the system; hence, there is no single rule that always outperforms the others. In this scenario, machine learning techniques, such as support vector machines (SVMs), inductive learning-based decision trees (DTs), backpropagation neural networks (BPNs), and case-based reasoning (CBR), offer a powerful approach for dynamic scheduling, as they help managers identify the most appropriate rule at each moment. Nonetheless, different machine learning algorithms may provide different recommendations. In this research, we take the analysis one step further by employing ensemble methods, which are designed to select the most reliable recommendations over time. Specifically, we compare the behaviour of the bagging, boosting, and stacking methods. Building on the aforementioned machine learning algorithms, our results reveal that ensemble methods enhance the dynamic performance of the FMS. Through a simulation study, we show that this new approach results in an improvement of key performance metrics (namely, mean tardiness and mean flow time) over existing dispatching rules and the individual use of each machine learning algorithm.
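
    As a hedged illustration of the rule-selection step, the sketch below trains a stacking ensemble over a decision tree, an SVM, and a small neural network to map shop-floor state features to a recommended dispatching rule. The feature set, rule labels, and training data are invented for the example; the study derives its training data from simulation and also evaluates bagging and boosting.

        # Illustrative stacking ensemble for dispatching-rule selection
        # (assumed features and labels, not the study's data).
        import numpy as np
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Each row is a system snapshot: [utilisation, queue length, mean due-date slack]
        X = np.array([
            [0.55, 2, 14.0], [0.60, 3, 12.0], [0.65, 3, 11.0],   # SPT performed best
            [0.90, 9,  2.5], [0.92, 8,  2.0], [0.88, 7,  3.0],   # EDD performed best
            [0.78, 5,  6.0], [0.80, 6,  5.5], [0.75, 5,  6.5],   # CR performed best
        ])
        y = np.array(["SPT", "SPT", "SPT", "EDD", "EDD", "EDD", "CR", "CR", "CR"])

        ensemble = StackingClassifier(
            estimators=[
                ("dt",  DecisionTreeClassifier(max_depth=3)),
                ("svm", SVC()),
                ("bpn", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
            ],
            final_estimator=LogisticRegression(max_iter=1000),
            cv=3,
        )
        ensemble.fit(X, y)
        print(ensemble.predict([[0.85, 7, 3.2]]))   # recommended rule for the current state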

    CONWIP card setting in a flow shop system with a batch production machine

    This paper presents an analytical technique to determine the optimum number of cards to control material release in a CONWIP system. The work focuses on the card setting problem for a flow shop system characterised by the presence of a batch processing machine (e.g. a kiln for long heat treatment). To control production, two different static approaches are developed: the first is used when the bottleneck coincides with the batch processing machine, and the second is proposed when the bottleneck is another machine of the flow shop. In both contexts, by means of the appropriate model, one can optimize the performance of the flow shop by maximizing the throughput while keeping the work in process at a minimum level. Numerical examples are also included in the paper to confirm the validity of the models and to demonstrate their practical utility.
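
    For intuition only, the generic sketch below uses the textbook practical-worst-case throughput curve TH(w) = r_b * w / (W0 + w - 1), with critical WIP W0 = r_b * T0, to find a card count that meets a target throughput. This is a standard approximation, not the paper's dedicated models for the batch-machine and non-batch-machine bottleneck cases, and the numbers are invented.

        # Generic card-setting illustration via the practical-worst-case curve
        # (a textbook approximation, not the models developed in the paper).
        def conwip_cards(bottleneck_rate, raw_process_time, target_fraction=0.95):
            """Smallest card count whose estimated throughput reaches the given
            fraction of the bottleneck rate."""
            w0 = bottleneck_rate * raw_process_time        # critical WIP
            w = 1
            while w / (w0 + w - 1) < target_fraction:
                w += 1
            return w

        # Example with made-up figures: bottleneck rate 4 jobs/hour,
        # raw process time 2.5 hours, target 95% of bottleneck throughput.
        print(conwip_cards(bottleneck_rate=4.0, raw_process_time=2.5))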

    A survey of AI in operations management from 2005 to 2009

    Purpose: The use of AI for operations management, with its ability to evolve solutions, handle uncertainty and perform optimisation, continues to be a major field of research. The growing body of publications over the last two decades means that it can be difficult to keep track of what has been done previously, what has worked, and what really needs to be addressed. Hence this paper presents a survey of the use of AI in operations management aimed at presenting the key research themes, trends and directions of research.
    Design/methodology/approach: The paper builds upon our previous survey of this field, which was carried out for the ten-year period 1995-2004 (Kobbacy et al. 2007). Like the previous survey, it uses Elsevier's ScienceDirect database as a source. The framework and methodology adopted for the survey are kept as similar as possible to enable continuity and comparison of trends. Thus, the application categories adopted are: design; scheduling; process planning and control; and quality, maintenance and fault diagnosis. Research on utilising neural networks, case-based reasoning (CBR), fuzzy logic (FL), knowledge-based systems (KBS), data mining, and hybrid AI in the four application areas is identified.
    Findings: The survey categorises over 1,400 papers, identifying the uses of AI in the four categories of operations management, and concludes with an analysis of the trends, gaps and directions for future research. The findings include: the trends for design and scheduling show a dramatic increase in the use of genetic algorithms since 2003 that reflects recognition of their success in these areas; there is a significant decline in research on the use of KBS, reflecting their transition into practice; there is an increasing trend in the use of FL in quality, maintenance and fault diagnosis; and there are surprising gaps in the use of CBR and hybrid methods in operations management that offer opportunities for future research.
    Originality/value: This is the largest and most comprehensive study to classify research on the use of AI in operations management to date. The survey and trends identified provide a useful reference point and directions for future research.

    Optimization Models and Approximate Algorithms for the Aerial Refueling Scheduling and Rescheduling Problems

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for fighter aircraft (jobs) on multiple tankers (machines) to minimize the total weighted tardiness. ARSP can be modeled as parallel machine scheduling with release times and due date-to-deadline windows. ARSP assumes that the jobs have different release times, due dates, and due date-to-deadline windows between the refueling due date and a deadline to return without refueling. The Aerial Refueling Rescheduling Problem (ARRP), on the other hand, can be defined as updating the existing AR schedule after it is disrupted by job-related events, including the arrival of new aircraft, the departure of existing aircraft, and changes in aircraft priorities. ARRP is formulated as a multiobjective optimization problem that minimizes the total weighted tardiness (schedule quality) and schedule instability. Both ARSP and ARRP are formulated as mixed integer programming models. The objective function in ARSP is a piecewise tardiness cost that takes into account due date-to-deadline windows and job priorities. Since ARSP is NP-hard, four approximate algorithms are proposed to obtain solutions in reasonable computational times, namely (1) the apparent piecewise tardiness cost with release time rule (APTCR), (2) simulated annealing starting from a random solution (SArandom), (3) SA improving the initial solution constructed by APTCR (SAAPTCR), and (4) the Metaheuristic for Randomized Priority Search (MetaRaPS). Additionally, five regeneration and partial repair algorithms (MetaRE, BestINSERT, SEPRE, LSHIFT, and SHUFFLE) were developed for ARRP to update the current schedule instantly at the disruption time. The proposed heuristic algorithms are tested in terms of solution quality and CPU time through computational experiments with randomly generated data that represent AR operations and disruptions. The effectiveness of the scheduling and rescheduling algorithms is compared to optimal solutions for problems with up to 12 jobs, and the algorithms are compared to each other for larger problems with up to 60 jobs. The results show that APTCR is more likely to outperform SArandom, especially as the problem size increases, although it performs significantly worse than SA in terms of deviation from the optimal solution for small-size problems. Moreover, the CPU time performance of APTCR is significantly better than that of SA in both cases. MetaRaPS is more likely to outperform SAAPTCR in terms of average error from optimal solutions for both small- and large-size problems. Results for small-size problems show that the MetaRaPS algorithm is more robust than SAAPTCR. However, the CPU time performance of SA is significantly better than that of MetaRaPS in both cases. ARRP experiments were conducted with various values of the objective weighting factor for extended analysis. In the job arrival case, MetaRE and BestINSERT performed significantly better than SEPRE in terms of average relative error for small-size problems. In the case of job priority disruptions, there is no significant difference among the MetaRE, BestINSERT, and SHUFFLE algorithms. MetaRE performed significantly better than LSHIFT in repairing job departure disruptions and is significantly superior to the BestINSERT algorithm in terms of both relative error and computational time for large-size problems.
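
    One plausible form of such a piecewise tardiness cost, written out only as an illustration (the dissertation's exact cost coefficients and penalty structure are not reproduced here), is zero cost up to the due date, weighted linear tardiness inside the due-date-to-deadline window, and a large fixed penalty once the deadline is missed and the aircraft must return without refueling.

        # Hedged sketch of a weighted piecewise tardiness cost with a
        # due-date-to-deadline window (illustrative penalty value).
        def piecewise_tardiness(completion, due_date, deadline, weight, miss_penalty=100.0):
            """Zero cost up to the due date, weighted linear tardiness within the
            window, and a fixed penalty added once the deadline is exceeded."""
            if completion <= due_date:
                return 0.0
            if completion <= deadline:
                return weight * (completion - due_date)
            return weight * (deadline - due_date) + miss_penalty

        # Example: a job due at t=50 with deadline t=60 and priority weight 3.
        print(piecewise_tardiness(completion=56, due_date=50, deadline=60, weight=3))  # 18.0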

    Scheduling Hybrid Flow Lines of Aerospace Composite Manufacturing Systems

    Composite manufacturing is a vital part of aerospace manufacturing systems, and applying effective scheduling within these systems can significantly cut costs for aerospace companies. These systems can be characterized as two-stage Hybrid Flow Shops (HFS) with identical, non-identical, and unrelated parallel discrete-processing machines in the first stage and non-identical parallel batch-processing machines in the second stage. The first stage is normally the lay-up process, in which carbon fiber sheets are stacked on the molds (tools). The parts are then batched based on the compatibility of their cure recipes before entering the autoclave for curing in the second stage. Autoclaves require enormous capital investment, so maximizing their utilization is of utmost importance. In this thesis, a Mixed Integer Linear Programming (MILP) model is developed to maximize the utilization of the resources in the second stage of this HFS. CPLEX, with an underlying branch-and-bound algorithm, is used to solve the model. The results show the high level of flexibility and computational efficiency of the proposed model when applied to small and medium-size problems. However, due to the NP-hardness of the problem, the MILP model fails to solve large problems (i.e. problems with more than 120 jobs as input) in reasonable CPU times. To solve the larger instances of the problem, a novel heuristic method and a Genetic Algorithm (GA) are developed. The heuristic algorithm is designed based on a careful observation of the behavior of the MILP model for different problem sets and is enhanced with a number of suitable dispatching rules. As its output, this heuristic algorithm generates eight initial feasible solutions, which are then used as the initial population of the proposed GA. The GA improves the initial solutions obtained from the heuristic through its stochastic iterations until it reaches satisfactory near-optimal solutions. A novel crossover operator is introduced in this GA that is tailored to the HFS of aerospace composite manufacturing systems. The proposed GA proves to be very efficient when applied to large-size problems with up to 300 jobs, and the results show the high quality of the solutions achieved by the GA compared to the optimal solutions obtained from the MILP model. A real case study undertaken at one of the leading companies in the Canadian aerospace industry is used for data experiments and analysis.
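
    The second-stage batching decision can be pictured with a simplified, stand-alone sketch: parts are grouped by cure recipe (only compatible recipes may be cured together) and greedily packed up to the autoclave capacity. The data fields and capacity measure are assumptions for illustration; the thesis handles this decision inside the MILP and the heuristic-seeded GA rather than with a greedy pass.

        # Simplified greedy stand-in for the autoclave batching step
        # (assumed part attributes; not the thesis's MILP/GA formulation).
        from collections import defaultdict

        def form_autoclave_batches(parts, autoclave_capacity):
            """parts: iterable of (part_id, cure_recipe, size). Parts may share a
            batch only if their cure recipes match and the total size fits."""
            by_recipe = defaultdict(list)
            for part_id, recipe, size in parts:
                by_recipe[recipe].append((part_id, size))

            batches = []
            for recipe, items in by_recipe.items():
                current, used = [], 0.0
                for part_id, size in sorted(items, key=lambda p: -p[1]):  # largest first
                    if current and used + size > autoclave_capacity:
                        batches.append((recipe, current))
                        current, used = [], 0.0
                    current.append(part_id)
                    used += size
                if current:
                    batches.append((recipe, current))
            return batches

        parts = [("P1", "cure_A", 4), ("P2", "cure_A", 3), ("P3", "cure_B", 6), ("P4", "cure_A", 5)]
        print(form_autoclave_batches(parts, autoclave_capacity=8))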