
    Scheduling for Service Stability and Supply Chain Coordination

    This dissertation studies scheduling for service stability and for supply chain coordination. The scheduling problems for service stability are studied from the perspective of a single firm, while the scheduling problems for supply chain coordination are investigated from the perspective of an entire supply chain. Both studies have broad real-life applications. In the first study, several job scheduling problems are addressed with job completion time variance (CTV) as the performance measure. CTV minimization represents service stability, since it means that jobs are completed within a relatively concentrated period of time; it also conforms to the just-in-time philosophy. Two scheduling problems are studied on multiple identical parallel machines: one allows machines to sit idle before job processing begins, while the other does not. For these two problems, desirable structural properties are explored and heuristic algorithms are proposed; computational results show the excellent performance of the proposed algorithms. The third scheduling problem in the first study is considered on a single machine and from the users' perspective rather than the system's, so the performance measure is class-based completion time variance (CB-CTV). This problem is shown to be transformable into multiple CTV problems, so the well-developed properties of the CTV problem can be applied to solve the CB-CTV problem. The trade-off between the CB-CTV problem and the CTV problem is also investigated. The second study deals with scheduling coordination in a supply chain, since supply chain coordination has become increasingly critical in recent years. Differing standpoints often prevent decision makers in a supply chain from agreeing on a scheduling decision, and conflicts arise. To pursue strong performance of the whole supply chain, coordination among decision makers is needed. In this study, scheduling conflicts are measured and analyzed from the different perspectives of the decision makers, cooperation mechanisms are proposed for different scenarios of relative bargaining power among them, and the resulting cooperation savings are examined.
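    The core quantity of the first study is easy to state concretely. Below is a minimal sketch, not taken from the dissertation, of how completion time variance is computed for one job sequence on a single machine with no inserted idle time; the job durations are made up for illustration.

```python
# Minimal sketch (not from the dissertation): completion time variance (CTV)
# for a job sequence on a single machine with no inserted idle time.

def completion_times(processing_times):
    """Completion time of each job when the jobs run back to back in order."""
    times, t = [], 0.0
    for p in processing_times:
        t += p
        times.append(t)
    return times

def ctv(processing_times):
    """Variance of the job completion times for the given sequence."""
    c = completion_times(processing_times)
    mean = sum(c) / len(c)
    return sum((ci - mean) ** 2 for ci in c) / len(c)

if __name__ == "__main__":
    # Same four jobs, two orders: CTV depends only on the sequence.
    print(ctv([2, 5, 3, 8]))  # 33.6875
    print(ctv([8, 2, 3, 5]))  # 14.1875 -- a more concentrated completion pattern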

    Heuristics Techniques for Scheduling Problems with Reducing Waiting Time Variance

    In the computational world, scheduling is a decision-making process: a systematic schedule by which a large number of tasks are assigned to processors. Because resources are limited, creating such a schedule is a real challenge, which motivates the development of a high-quality scheduler for single or parallel processors. One criterion for improving the efficiency of a scheduler is waiting time variance (WTV); minimizing the WTV of tasks is an NP-hard problem. Achieving quality of service (QoS) on a single or parallel processor by minimizing WTV is a task scheduling problem. To enhance the performance of a single or parallel processor, a stable, non-overlapping schedule that minimizes WTV is required. An automated scheduler's performance is always measured by QoS attributes, one of which is timeliness. This chapter first presents the importance of heuristics along with five heuristic-based solutions. It then applies these heuristics to the 1‖WTV minimization problem, and three of the heuristics, with a unique task distribution mechanism, to the Qm|prec|WTV minimization problem. The experimental results show the performance of the heuristics graphically for the corresponding problems.
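    As a concrete illustration of the 1‖WTV objective, here is a hedged sketch (not one of the chapter's five heuristics): it computes WTV for a sequence and brute-forces the best order of a tiny instance, which is only feasible at this size precisely because the problem is NP-hard.

```python
# Hedged sketch (not one of the chapter's heuristics): waiting time variance
# (WTV) on a single machine. A job's waiting time is its start time, i.e. the
# total processing time of the jobs sequenced before it.

from itertools import permutations

def wtv(seq):
    waits, t = [], 0.0
    for p in seq:
        waits.append(t)  # the job waits while its predecessors run
        t += p
    mean = sum(waits) / len(waits)
    return sum((w - mean) ** 2 for w in waits) / len(waits)

def best_sequence(processing_times):
    """Exhaustive search; viable only for very small n (WTV is NP-hard)."""
    return min(permutations(processing_times), key=wtv)

if __name__ == "__main__":
    jobs = (2, 9, 4, 7, 1)
    best = best_sequence(jobs)
    print(best, wtv(best))
```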

    Dynamic scheduling in a multi-product manufacturing system

    To remain competitive in the global marketplace, manufacturing companies need to improve their operational practices. One way to increase competitiveness in manufacturing is to implement a proper scheduling system, which enables job orders to be completed on time, minimizes waiting time, and maximizes utilization of equipment and machinery. The dynamics of real manufacturing systems are very complex, and schedules developed with deterministic algorithms cannot effectively deal with uncertainties in demand and capacity; significant differences can be found between planned schedules and their actual implementation. This study attempted to develop a scheduling system able to react quickly and reliably to changes in product demand and manufacturing capacity. A case study, a 6-by-6 job shop scheduling problem, was adapted with uncertainty elements added to the data sets. A simulation model was designed and implemented in the ARENA simulation package to generate various job shop scheduling scenarios, whose performance was evaluated under three scheduling rules: first-in-first-out (FIFO), earliest due date (EDD), and shortest processing time (SPT). An artificial neural network (ANN) model was developed and trained on the scheduling scenarios generated by the ARENA simulation. The experimental results suggest that the ANN scheduling model can provide moderately reliable predictions for limited scenarios when predicting the number of completed jobs, maximum flowtime, average machine utilization, and average queue length. This study provides a better understanding of the effects of changes in demand and capacity on job shop schedules. Areas for further study include: (i) fine-tuning the proposed ANN scheduling model; (ii) considering a wider variety of job shop environments; and (iii) incorporating an expert system for the interpretation of results. The theoretical framework proposed in this study can be used as a basis for further investigation.
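    The three dispatching rules are simple to state in code. The following is an illustrative sketch with invented names, not the study's ARENA model: each rule just picks a different minimizer from the queue in front of a machine.

```python
# Illustrative sketch (names are ours, not the study's): the three dispatching
# rules used to sequence jobs waiting in front of a machine.

from dataclasses import dataclass

@dataclass
class Job:
    arrival: float    # time the job joined the queue
    due_date: float   # promised completion time
    proc_time: float  # processing time on this machine

def next_job(queue, rule):
    """Pick the next job from a non-empty queue under the given rule."""
    if rule == "FIFO":
        return min(queue, key=lambda j: j.arrival)    # first in, first out
    if rule == "EDD":
        return min(queue, key=lambda j: j.due_date)   # earliest due date
    if rule == "SPT":
        return min(queue, key=lambda j: j.proc_time)  # shortest processing time
    raise ValueError(f"unknown rule: {rule}")

queue = [Job(0.0, 12.0, 5.0), Job(1.0, 8.0, 2.0), Job(2.0, 20.0, 1.0)]
print(next_job(queue, "EDD"))  # Job(arrival=1.0, due_date=8.0, proc_time=2.0)
```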

    Optimization Models and Approximate Algorithms for the Aerial Refueling Scheduling and Rescheduling Problems

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for fighter aircraft (jobs) on multiple tankers (machines) so as to minimize total weighted tardiness. ARSP can be modeled as parallel machine scheduling with release times and a due-date-to-deadline window: jobs have different release times, due dates, and windows between the refueling due date and the deadline after which an aircraft must return without refueling. The Aerial Refueling Rescheduling Problem (ARRP), on the other hand, can be defined as updating an existing AR schedule after it is disrupted by job-related events, including the arrival of new aircraft, the departure of an existing aircraft, and changes in aircraft priorities. ARRP is formulated as a multiobjective optimization problem that minimizes both total weighted tardiness (schedule quality) and schedule instability. Both ARSP and ARRP are formulated as mixed integer programming models. The objective function in ARSP is a piecewise tardiness cost that takes into account the due-date-to-deadline windows and job priorities. Since ARSP is NP-hard, four approximate algorithms are proposed to obtain solutions in reasonable computational time: (1) the apparent piecewise tardiness cost with release time rule (APTCR); (2) simulated annealing starting from a random solution (SA-random); (3) SA improving an initial solution constructed by APTCR (SA-APTCR); and (4) the Metaheuristic for Randomized Priority Search (MetaRaPS). Additionally, five regeneration and partial repair algorithms (MetaRE, BestINSERT, SEPRE, LSHIFT, and SHUFFLE) were developed for ARRP to update the current schedule immediately at the disruption time. The proposed heuristic algorithms are tested for solution quality and CPU time through computational experiments with randomly generated data representing AR operations and disruptions. The effectiveness of the scheduling and rescheduling algorithms is compared to optimal solutions for problems with up to 12 jobs, and the algorithms are compared to each other for larger problems with up to 60 jobs. The results show that APTCR is more likely to outperform SA-random as the problem size increases, although it performs significantly worse than SA in terms of deviation from the optimal solution for small problems; the CPU time of APTCR is significantly better than SA's in both cases. MetaRaPS is more likely to outperform SA-APTCR in terms of average error from the optimal solution for both small and large problems, and results for small problems show that MetaRaPS is more robust than SA-APTCR; however, the CPU time of SA is significantly better than that of MetaRaPS in both cases. ARRP experiments were conducted with various values of the objective weighting factor for extended analysis. In the job arrival case, MetaRE and BestINSERT performed significantly better than SEPRE in terms of average relative error for small problems. For job priority disruptions, there is no significant difference between the MetaRE, BestINSERT, and SHUFFLE algorithms. MetaRE performed significantly better than LSHIFT in repairing job departure disruptions and was significantly superior to BestINSERT in both relative error and computational time for large problems.
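    The shape of the piecewise tardiness cost can be shown in a short sketch. The exact function and its parameter values are defined in the dissertation's MIP model; this version only assumes the structure described above: zero cost before the due date, weighted linear cost inside the due-date-to-deadline window, and a large penalty once the deadline is missed and the aircraft must return unrefueled. The default penalty value is an arbitrary stand-in.

```python
# Hedged illustration of a piecewise tardiness cost with a due-date-to-deadline
# window; the exact function comes from the dissertation's model, not this sketch.

def piecewise_tardiness(completion, due, deadline, weight, miss_penalty=100.0):
    """Cost of finishing a job at `completion` given its window [due, deadline]."""
    if completion <= due:
        return 0.0                          # refueled on time: no cost
    if completion <= deadline:
        return weight * (completion - due)  # tardy, but inside the window
    # Past the deadline: the aircraft returns unrefueled -- full window cost
    # plus a large fixed penalty (the 100.0 default is an arbitrary stand-in).
    return weight * (deadline - due) + miss_penalty

print(piecewise_tardiness(9.0, due=5.0, deadline=10.0, weight=2.0))   # 8.0
print(piecewise_tardiness(11.0, due=5.0, deadline=10.0, weight=2.0))  # 110.0
```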

    Balancing Trade-Offs in One-Stage Production with Processing Time Uncertainty

    Stochastic production scheduling faces three challenges: first, inconsistencies among key performance indicators (KPIs); second, trade-offs between the expected return and the risk for a portfolio of KPIs; and third, uncertainty in processing times. Based on two inconsistent KPIs, total completion time (TCT) and variance of completion times (VCT), we propose a trade-off balancing (ToB) heuristic for one-stage production scheduling. Through comprehensive case studies, we show that our ToB heuristic, with a preference weight swept from 0.0 to 1.0 in steps of 0.1, efficiently and effectively addresses the three challenges. Moreover, our trade-off balancing scheme can be generalized to balance more than two inconsistent KPIs. Daniels and Kouvelis (DK) proposed a scheme that optimizes the worst-case scenario for stochastic production scheduling, together with the endpoint product (EP) and endpoint sum (ES) heuristics, to hedge against processing time uncertainty. Using five levels of coefficient of variation (CV) to represent processing time uncertainty, we show that our ToB heuristic is robust as well, and even outperforms the EP and ES heuristics on worst-case scenarios at high levels of processing time uncertainty. Moreover, our ToB heuristic generates non-dominated solution spaces of KPIs, which not only provide a solid basis for setting specification limits for statistical process control (SPC) but also facilitate the application of modern portfolio theory and SPC techniques in industry.
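    A minimal sketch of the weighted trade-off idea follows; it is not the authors' ToB heuristic, and the un-normalized blend is our simplification (in practice the two KPIs would first be scaled to comparable ranges). It scores one sequence under the preference sweep described above.

```python
# Minimal sketch of the weighted trade-off idea (not the authors' ToB heuristic).
# Real use would normalize TCT and VCT to comparable scales before blending.

def tct_vct(processing_times):
    """Total completion time and variance of completion times of one sequence."""
    c, t = [], 0.0
    for p in processing_times:
        t += p
        c.append(t)
    tct = sum(c)
    mean = tct / len(c)
    vct = sum((ci - mean) ** 2 for ci in c) / len(c)
    return tct, vct

def blended_score(processing_times, preference):
    """preference = 1.0 weights TCT only; preference = 0.0 weights VCT only."""
    tct, vct = tct_vct(processing_times)
    return preference * tct + (1.0 - preference) * vct

seq = [4, 1, 6, 2, 9]
for i in range(11):  # the preference sweep 0.0, 0.1, ..., 1.0
    print(f"{i / 10:.1f}: {blended_score(seq, i / 10):.2f}")
```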

    Scalable and Distributed Resource Management Protocols for Cloud and Big Data Clusters

    Cloud data centers require an operating system to manage resources and satisfy operational requirements and management objectives. The growing popularity of cloud services has given rise to a new spectrum of services with sophisticated workload and resource management requirements. Data centers are also growing through the addition of various types of hardware to accommodate ever-increasing user requests. Today a large percentage of cloud resources execute data-intensive applications, which exhibit continuously changing workload fluctuations and need specialized resource management. To this end, cluster computing frameworks are shifting towards distributed resource management for better scalability and faster decision making; such systems benefit from parallelized control and are resilient to failures. Throughout this thesis we investigate algorithms, protocols, and techniques to address these challenges in large-scale data centers. We introduce a distributed resource management framework that consolidates virtual machines onto as few servers as possible to reduce the energy consumption of the data center and hence the cost to cloud providers. The framework can characterize the workloads of virtual machines and thus efficiently handle the trade-off between energy consumption and customers' Service Level Agreements (SLAs). The algorithm is highly scalable, requires low maintenance cost under dynamic workloads, and tries to minimize virtual machine migration costs. We also introduce a scalable, distributed, probe-based scheduling algorithm for Big Data analytics frameworks. This algorithm efficiently addresses the problem of job heterogeneity in workloads, which has emerged as the level of parallelism in jobs increases. The algorithm is massively scalable and can significantly reduce average job completion times compared with the state of the art. Finally, we propose a probabilistic fault-tolerance technique as part of the scheduling algorithm.
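    As a rough illustration of the consolidation objective, here is a first-fit-decreasing packing pass. The thesis's framework is distributed and workload-aware, so this centralized sketch with invented names only shows the basic idea of packing VMs onto as few servers as possible.

```python
# Hedged sketch: a first-fit-decreasing consolidation pass in the spirit of the
# framework's goal; the thesis's actual distributed protocol is more elaborate.

def consolidate(vm_demands, server_capacity):
    """Assign VM CPU demands to servers, opening a new server only when needed."""
    servers = []    # remaining capacity of each open server
    placement = []  # (vm index, server index) pairs
    for vm, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for s, free in enumerate(servers):
            if demand <= free:
                servers[s] -= demand
                placement.append((vm, s))
                break
        else:
            servers.append(server_capacity - demand)  # open a new server
            placement.append((vm, len(servers) - 1))
    return placement, len(servers)

# Five VMs on servers of capacity 1.0 pack onto two servers.
print(consolidate([0.5, 0.2, 0.7, 0.3, 0.1], 1.0))
```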

    Towards Optimality in Parallel Scheduling

    To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but in turn decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to achieve EQUI's performance without requiring jobs to change their levels of parallelization by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. We find that policies like EQUI, which performed well in the case of a single speedup function, now perform poorly. We propose a very simple policy, GREEDY*, which performs near-optimally when compared to the numerically-derived optimal policy.
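    The EQUI policy itself is one line of arithmetic. The sketch below assumes a generic sublinear speedup curve s(k) = sqrt(k), which is ours, not the paper's; it just shows the even split and the resulting per-job service rates.

```python
# Sketch of the EQUI policy with an assumed speedup curve s(k) = sqrt(k);
# the paper treats general sublinear curves, this specific one is ours.

import math

def speedup(k):
    """Assumed sublinear speedup from running one job on k cores."""
    return math.sqrt(k)

def equi_rates(num_cores, num_jobs):
    """Per-job service rate when EQUI splits the cores evenly across jobs."""
    if num_jobs == 0:
        return []
    per_job = num_cores / num_jobs  # EQUI: an even, possibly fractional, split
    return [speedup(per_job)] * num_jobs

# 16 cores: a lone job runs at speedup 4.0; four jobs run at 2.0 each, so the
# system completes 8.0 units of work per time step instead of 4.0 -- the
# efficiency/latency trade-off described above.
print(equi_rates(16, 1), equi_rates(16, 4))
```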