
    Dynamic scheduling in a multi-product manufacturing system

    To remain competitive in the global marketplace, manufacturing companies need to improve their operational practices. One way to increase competitiveness in manufacturing is to implement a proper scheduling system, which is important for completing job orders on time, minimizing waiting time, and maximizing the utilization of equipment and machinery. The dynamics of a real manufacturing system are very complex in nature. Schedules developed with deterministic algorithms cannot deal effectively with uncertainties in demand and capacity, so significant differences can be found between planned schedules and their actual implementation. This study attempted to develop a scheduling system that reacts quickly and reliably to accommodate changes in product demand and manufacturing capacity. A case study, a 6-by-6 job shop scheduling problem, was adapted with uncertainty elements added to the data sets. A simulation model was designed and implemented in the ARENA simulation package to generate various job shop scheduling scenarios, whose performance was evaluated under three scheduling rules: first-in-first-out (FIFO), earliest due date (EDD), and shortest processing time (SPT). An artificial neural network (ANN) model was developed and trained on the scheduling scenarios generated by the ARENA simulation. The experimental results suggest that the ANN scheduling model can provide moderately reliable predictions, for a limited set of scenarios, of the number of completed jobs, maximum flowtime, average machine utilization, and average queue length. This study has provided a better understanding of the effects of changes in demand and capacity on job shop schedules. Areas for further study include: (i) fine-tuning the proposed ANN scheduling model; (ii) considering a wider variety of job shop environments; (iii) incorporating an expert system for the interpretation of results. The theoretical framework proposed in this study can be used as a basis for further investigation.
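    As an illustration of the three dispatching rules compared in this study, the sketch below selects the next job from a machine queue under FIFO, EDD and SPT. The Job fields and queue structure are simplified assumptions made for this example; they are not the study's ARENA model or ANN predictor.

```python
# Minimal sketch of the three dispatching rules named in the abstract.
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    arrival_time: float      # time the job joined the queue
    due_date: float          # promised completion date
    processing_time: float   # time required on the current machine

def next_job(queue: list[Job], rule: str) -> Job:
    """Select the next job to load on a machine under a given dispatching rule."""
    if rule == "FIFO":   # first-in-first-out: earliest arrival first
        return min(queue, key=lambda j: j.arrival_time)
    if rule == "EDD":    # earliest due date first
        return min(queue, key=lambda j: j.due_date)
    if rule == "SPT":    # shortest processing time first
        return min(queue, key=lambda j: j.processing_time)
    raise ValueError(f"unknown rule: {rule}")

if __name__ == "__main__":
    queue = [Job(1, 0.0, 20.0, 5.0), Job(2, 1.0, 12.0, 9.0), Job(3, 2.0, 30.0, 2.0)]
    for rule in ("FIFO", "EDD", "SPT"):
        print(rule, "->", next_job(queue, rule).job_id)
```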

    Using real-time information to reschedule jobs in a flowshop with variable processing times

    In a time when detailed, instantaneous and accurate information on shop-floor status is becoming available in many manufacturing companies thanks to Information Technology initiatives such as the Smart Factory or Industry 4.0, a question arises regarding when and how these data can be used to improve scheduling decisions. While it is acknowledged that continuous rescheduling based on updated information may be beneficial, as it serves to adapt the schedule to unplanned events, this rather general intuition has not been supported by thorough experimentation, particularly for multi-stage manufacturing systems, where such continuous rescheduling may introduce a high degree of nervousness into the system and deteriorate its performance. To study this research problem, in this paper we investigate how real-time information on the completion times of the jobs in a flowshop with variable processing times can be used to reschedule the jobs. In an extensive computational experiment, we show that rescheduling policies pay off as long as the variability of the processing times is not very high, and only if the initially generated schedule is of good quality. Furthermore, we propose several rescheduling policies to improve the performance of continuous rescheduling while greatly reducing the frequency of rescheduling. One of these policies, based on the concept of the critical path of a flowshop, outperforms the rest of the policies for a wide range of scenarios.
    Funding: Ministerio de Ciencia e Innovación DPI2016-80750-
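    To make the rescheduling setting concrete, the sketch below implements the standard completion-time recursion for a permutation flowshop, i.e. the quantities a rescheduling policy would recompute as realized processing times arrive from the shop floor. It is a generic illustration under assumed inputs (a job sequence and a processing-time matrix), not the rescheduling policies or the critical-path rule proposed in the paper.

```python
# Completion-time recursion for a permutation flowshop (generic illustration).
def completion_times(sequence: list[int], p: list[list[float]]) -> list[list[float]]:
    """
    sequence: job indices in processing order.
    p[m][j]:  processing time of job j on machine m (planned or realized).
    Returns C[m][k]: completion time of the k-th sequenced job on machine m.
    """
    n_machines, n_seq = len(p), len(sequence)
    C = [[0.0] * n_seq for _ in range(n_machines)]
    for k, j in enumerate(sequence):
        for m in range(n_machines):
            ready_machine = C[m][k - 1] if k > 0 else 0.0   # machine frees up
            ready_job = C[m - 1][k] if m > 0 else 0.0        # job leaves previous stage
            C[m][k] = max(ready_machine, ready_job) + p[m][j]
    return C

if __name__ == "__main__":
    p = [[3.0, 2.0, 4.0],   # machine 0 processing times for jobs 0..2
         [2.0, 5.0, 1.0]]   # machine 1
    C = completion_times([0, 1, 2], p)
    print("makespan:", C[-1][-1])
```

    A threshold on the gap between planned and realized completion times computed this way could then act as a rescheduling trigger, in the spirit of policies that reschedule far less often than after every event.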

    Project scheduling under uncertainty – survey and research potentials

    The vast majority of research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static, deterministic environment within which the pre-computed baseline schedule will be executed. However, in the real world, project activities are subject to considerable uncertainty that is gradually resolved during project execution. In this survey we review the fundamental approaches for scheduling under uncertainty: reactive scheduling, stochastic project scheduling, stochastic GERT network scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling projects under uncertainty.
    Keywords: Management; Project management; Robustness; Scheduling; Stability

    Balancing trade-offs in one-stage production with processing time uncertainty

    Stochastic production scheduling faces three challenges: first, inconsistencies among key performance indicators (KPIs); second, the trade-off between the expected return and the risk for a portfolio of KPIs; and third, uncertainty in processing times. Based on two inconsistent KPIs, total completion time (TCT) and variance of completion times (VCT), we propose a trade-off balancing (ToB) heuristic for one-stage production scheduling. Through comprehensive case studies, we show that our ToB heuristic, with the preference weight varied from 0.0 to 1.0 in steps of 0.1, efficiently and effectively addresses the three challenges. Moreover, our trade-off balancing scheme can be generalized to balance more than two inconsistent KPIs. Daniels and Kouvelis (DK) proposed a scheme that optimizes the worst-case scenario for stochastic production scheduling, together with the endpoint product (EP) and endpoint sum (ES) heuristics, to hedge against processing time uncertainty. Using five levels of the coefficient of variation (CV) to represent processing time uncertainty, we show that our ToB heuristic is robust as well, and even outperforms the EP and ES heuristics on worst-case scenarios at high levels of processing time uncertainty. Moreover, our ToB heuristic generates undominated solution spaces of KPIs, which not only provide a solid basis for setting up specification limits for statistical process control (SPC) but also facilitate the application of modern portfolio theory and SPC techniques in industry.
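    A minimal sketch of the two KPIs and their combination is given below: it computes TCT and VCT for a one-stage (single machine) sequence and sweeps a preference weight between them. The linear weighting and the brute-force enumeration are illustrative assumptions, not the ToB heuristic itself.

```python
# Evaluate TCT and VCT for a single-machine sequence and sweep a preference weight.
from itertools import permutations
from statistics import pvariance

def kpis(order: tuple[int, ...], p: list[float]) -> tuple[float, float]:
    """Completion-time KPIs for a one-stage (single machine) schedule."""
    t, completions = 0.0, []
    for j in order:
        t += p[j]
        completions.append(t)
    return sum(completions), pvariance(completions)   # TCT, VCT

def weighted_objective(order, p, w):
    tct, vct = kpis(order, p)
    return w * tct + (1.0 - w) * vct

if __name__ == "__main__":
    p = [4.0, 1.0, 6.0, 3.0]                       # assumed nominal processing times
    for w in (0.0, 0.5, 1.0):                      # sweep the preference weight
        best = min(permutations(range(len(p))), key=lambda o: weighted_objective(o, p, w))
        print(f"w={w}: best order {best}, (TCT, VCT) = {kpis(best, p)}")
```

    With w = 1.0 the objective reduces to TCT (for which shortest-processing-time ordering is optimal on a single machine), while smaller w shifts weight toward VCT, so the preferred sequence generally changes; that shift is the trade-off such a heuristic has to balance.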

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Our system, Decima, uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
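    The sketch below is a toy REINFORCE loop in the same spirit: a softmax policy over a per-job feature is trained from nothing but the high-level objective of minimizing average job completion time, on a deliberately simplified single-machine stand-in. Decima's actual policy is a graph neural network over job dependency graphs trained against a cluster environment; everything here (the single feature, the environment, the learning-rate and iteration values) is an assumption made for illustration.

```python
# Toy REINFORCE sketch: learn a dispatching preference purely from the objective
# "minimize average job completion time". Single machine, one feature
# (the job's processing time) -- a stand-in for illustration, not Decima.
import numpy as np

rng = np.random.default_rng(0)

def run_episode(theta, p):
    """Schedule all jobs by sampling from a softmax over theta * processing_time.
    Returns the summed log-probability gradient and the average completion time."""
    remaining = list(range(len(p)))
    t, completions, grad = 0.0, [], 0.0
    while remaining:
        feats = np.array([p[j] for j in remaining])
        logits = theta * feats
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k = rng.choice(len(remaining), p=probs)
        grad += feats[k] - probs @ feats       # d/dtheta of log pi(chosen job)
        t += p[remaining.pop(k)]
        completions.append(t)
    return grad, float(np.mean(completions))

p = np.array([2.0, 7.0, 1.0, 5.0, 3.0])        # assumed job durations
theta, lr, baseline = 0.0, 0.01, None
for _ in range(3000):
    grad, avg_jct = run_episode(theta, p)
    baseline = avg_jct if baseline is None else 0.9 * baseline + 0.1 * avg_jct
    theta -= lr * grad * (avg_jct - baseline)  # ascend reward = -avg_jct
print(f"learned theta: {theta:.2f} (more negative = stronger preference for short jobs)")
print(f"average JCT with learned policy: {run_episode(theta, p)[1]:.2f}")
```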