
    A Novel Approach to the Common Due-Date Problem on Single and Parallel Machines

    Full text link
    This paper presents a novel approach to the general case of the Common Due-Date (CDD) scheduling problem, in which a set of jobs with different processing times but a common due date must be scheduled on a single machine or on parallel machines. The objective is to minimize the total penalty incurred by the earliness or tardiness of job completions. This work presents exact polynomial algorithms for optimizing a given job sequence on single and identical parallel machines, with a run-time complexity of $O(n \log n)$ in both cases, where $n$ is the number of jobs. We also show that our approach for the parallel-machine case is suitable for non-identical parallel machines. We prove optimality for the single-machine case and establish the run-time complexities for both cases. We then extend our approach to one particular dynamic case of the CDD and conclude the chapter with our results for the benchmark instances provided in the OR-Library. Comment: Book chapter, 22 pages
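    The earliness/tardiness objective described above can be illustrated with a short sketch. The following evaluates the weighted penalty for a fixed job sequence on a single machine; it is not the paper's $O(n \log n)$ optimization, and the names `alpha`, `beta`, and `due_date` are illustrative assumptions rather than the paper's notation.

```python
# Illustrative sketch (not the paper's algorithm): evaluate the common due-date
# earliness/tardiness objective for a fixed job sequence on a single machine.
# Penalty weights alpha/beta and the due date are assumptions for illustration.

def cdd_penalty(processing_times, alpha, beta, due_date, start=0.0):
    """Total weighted earliness/tardiness penalty of a given job sequence.

    processing_times[i], alpha[i], beta[i] refer to the i-th job in sequence order.
    """
    total, t = 0.0, start
    for p, a, b in zip(processing_times, alpha, beta):
        t += p                             # completion time of this job
        if t <= due_date:
            total += a * (due_date - t)    # earliness penalty
        else:
            total += b * (t - due_date)    # tardiness penalty
    return total

# Example: three jobs with a common due date of 10
print(cdd_penalty([4, 3, 5], alpha=[1, 1, 1], beta=[2, 2, 2], due_date=10))
```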

    The Batch Scheduling Model for Dynamic Multi-Item, Multi-Level Production in an Assembly Job Shop with Parallel Machines

    Get PDF
    Most classical scheduling approaches deal with single products, single machines, and static manufacturing environments. In real-world manufacturing systems, however, scheduling must often handle multi-item production on multiple machines in a dynamic environment in which unexpected new orders may arrive. This paper focuses on scheduling problems in an assembly job shop with parallel machines that produce multi-item, multi-level products. Models were developed for due-date fulfillment and due-date assignment under static and dynamic conditions, with the objective of minimizing total actual flow time while considering the defect rate at each stage of the process. An insertion technique was used in the scheduling process; insertion can be performed for batch operations at all available positions on all machines. A hypothetical case of job shop scheduling with multi-item, multi-level production on parallel machines was studied, and the computational results demonstrate the validity of the proposed algorithms.
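    As a rough illustration of the insertion technique mentioned above, the sketch below tries a new batch at every available position on every parallel machine and keeps the cheapest placement. It uses plain total completion time as a stand-in objective and omits the multi-level assembly structure and defect rates that the paper's model includes; all names are illustrative assumptions.

```python
# Minimal insertion sketch: try a new batch at every position on every parallel
# machine and keep the placement with the smallest total flow time.

def total_flow_time(schedule):
    """schedule: list of machines, each a list of processing times in sequence order."""
    total = 0.0
    for machine in schedule:
        t = 0.0
        for p in machine:
            t += p
            total += t
    return total

def insert_best(schedule, batch_time):
    """Return a copy of `schedule` with the new batch inserted at the best position."""
    best = None
    for m, machine in enumerate(schedule):
        for pos in range(len(machine) + 1):
            trial = [list(seq) for seq in schedule]
            trial[m].insert(pos, batch_time)
            cost = total_flow_time(trial)
            if best is None or cost < best[0]:
                best = (cost, trial)
    return best[1]

schedule = [[3.0, 5.0], [4.0]]           # two parallel machines
schedule = insert_best(schedule, 2.0)    # insert a new batch of length 2
print(schedule, total_flow_time(schedule))
```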

    Evolving control rules for a dual-constrained job scheduling scenario

    Get PDF
    Dispatching rules are often used for scheduling in semiconductor manufacturing due to the complexity and stochasticity of the problem. In the past, simulation-based Genetic Programming has been shown to be a powerful tool to automate the time-consuming and expensive process of designing such rules. However, the scheduling problems considered were usually only constrained by the capacity of the machines. In this paper, we extend this idea to dual-constrained flow shop scheduling, with machines and operators for loading and unloading to be scheduled simultaneously. We show empirically, on a small test problem with parallel workstations, re-entrant flows, and dynamic stochastic job arrivals, that the approach is able to generate dispatching rules that perform significantly better than benchmark rules from the literature.
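    For readers unfamiliar with dispatching rules, the sketch below shows how a priority-based rule selects the next job when a machine becomes free. The lambda is a hand-written stand-in for a rule that Genetic Programming would evolve; it is not one of the rules from the paper, and the job attribute names are assumptions.

```python
# A minimal sketch of applying a dispatching rule expressed as a priority
# function of job attributes. The rule below is a made-up stand-in, not an
# evolved rule from the paper.

def dispatch(queue, now, priority):
    """Pick the waiting job with the highest priority value."""
    return max(queue, key=lambda job: priority(job, now))

# Jobs as dicts; attribute names are illustrative.
queue = [
    {"id": 1, "proc_time": 5.0, "due_date": 20.0},
    {"id": 2, "proc_time": 2.0, "due_date": 12.0},
    {"id": 3, "proc_time": 8.0, "due_date": 15.0},
]

# Stand-in "evolved" rule: prefer short jobs with tight slack.
rule = lambda job, now: -(job["proc_time"] + 0.5 * (job["due_date"] - now))

print(dispatch(queue, now=4.0, priority=rule)["id"])
```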

    Load Balancing Regular Meshes on SMPs with MPI

    Get PDF
    Domain decomposition for regular meshes on parallel computers has traditionally been performed by attempting to partition the work exactly among the available processors (now cores). However, these strategies often do not account for inherent system noise, which can hinder MPI application scalability on emerging petascale machines with 10,000+ nodes. In this work, we suggest a solution that uses a tunable hybrid static/dynamic scheduling strategy that can be incorporated into current MPI implementations of mesh codes. By applying this strategy to a 3D Jacobi algorithm, we achieve performance gains of at least 16% for 64 SMP nodes.
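    The following sketch illustrates the general idea of a tunable static/dynamic split (not the paper's MPI implementation): a fraction of the mesh rows is pre-assigned evenly to each worker, and the remainder goes into a shared queue that idle workers drain at run time. The parameter name `static_fraction` is an assumption for illustration.

```python
# Sketch of a tunable hybrid static/dynamic work split for a 1D decomposition
# of mesh rows. A static share is pre-assigned per worker; the rest is queued
# for dynamic self-scheduling.

from collections import deque

def partition(n_rows, n_workers, static_fraction=0.8):
    n_static = int(n_rows * static_fraction)
    chunk = n_static // n_workers
    static_part = [list(range(w * chunk, (w + 1) * chunk)) for w in range(n_workers)]
    # Leftover static rows plus the dynamic share form the shared queue.
    dynamic_part = deque(range(n_workers * chunk, n_rows))
    return static_part, dynamic_part

static_part, dynamic_part = partition(n_rows=1000, n_workers=4)
print([len(s) for s in static_part], len(dynamic_part))
```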

    A New Approach to Configurable Dynamic Scheduling in Clusters based on Single System Image Technologies

    Get PDF
    Clusters are now considered an alternative to parallel machines for executing workloads made up of sequential and/or parallel applications. For efficient application execution on clusters, dynamic global process scheduling is of prime importance. Different dynamic scheduling policies that have been studied for distributed systems or parallel machines may be used in clusters, and the choice of a particular policy depends on the kind of workload to be executed. In a cluster, it is thus highly desirable to implement a configurable global scheduler so that the dynamic scheduling policy can be adapted to the workload characteristics, all cluster resources can be exploited, and node shutdowns and reboots can be handled. In this paper, we present the architecture of the global scheduler and the process management mechanisms of Kerrighed, a single system image operating system designed for high performance computing on clusters. Kerrighed provides a development framework that allows dynamic scheduling policies to be implemented easily without kernel modification. In Kerrighed, the global scheduling policy can be changed dynamically while applications execute on the cluster. Kerrighed's process management mechanisms make it easy to deploy parallel applications in the cluster and to efficiently migrate or checkpoint processes, including processes sharing memory. Kerrighed has been implemented as a set of modules extending the Linux kernel. Preliminary performance results are presented.
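    As a loose illustration of a configurable global scheduler (not Kerrighed's actual interface), the sketch below treats placement policies as pluggable objects that can be swapped while the scheduler keeps running; all class and method names are assumptions made for this example.

```python
# Illustrative sketch of a hot-swappable scheduling policy: the scheduler
# delegates node selection to a policy object that can be replaced at run time.

class LeastLoaded:
    def pick_node(self, loads):
        return min(loads, key=loads.get)

class RoundRobin:
    def __init__(self):
        self._last = -1
    def pick_node(self, loads):
        nodes = sorted(loads)
        self._last = (self._last + 1) % len(nodes)
        return nodes[self._last]

class GlobalScheduler:
    def __init__(self, policy):
        self.policy = policy
    def set_policy(self, policy):          # change the policy while running
        self.policy = policy
    def place(self, loads):
        return self.policy.pick_node(loads)

loads = {"node0": 0.7, "node1": 0.2, "node2": 0.5}
sched = GlobalScheduler(LeastLoaded())
print(sched.place(loads))                  # least-loaded node
sched.set_policy(RoundRobin())             # swap the policy at run time
print(sched.place(loads), sched.place(loads))
```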

    Analysis Of The Effect Of Dynamic Issues In Scheduling Of Flow Shop Environment

    Get PDF
    This thesis studies the effect of dynamic issues on scheduling in a flow shop environment. Flow shop scheduling was simulated with two dynamic issues under two dispatching rules. The two dynamic characteristics selected are machine breakdowns and random job arrival times, while the two dispatching rules are shortest processing time (SPT) and earliest due date (EDD). A WITNESS simulation model was built to investigate these disturbances in flow shop scheduling. The model comprises six jobs and five machines arranged in a flow shop layout, in which jobs are processed by a group of machines. The analysis shows that the dynamic issues strongly influence flow shop scheduling and the choice of dispatching rule; in particular, the rule that yields shorter completion times has a positive effect on the productivity of the production system.
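    A small sketch of the two dispatching rules compared in the study is given below: SPT orders waiting jobs by processing time, EDD by due date. The job data is made up for illustration; the thesis itself relies on a WITNESS simulation model rather than code like this.

```python
# Sketch comparing SPT and EDD orderings on a made-up job set.

jobs = [
    {"id": "J1", "proc_time": 6, "due_date": 18},
    {"id": "J2", "proc_time": 2, "due_date": 9},
    {"id": "J3", "proc_time": 4, "due_date": 8},
]

spt = sorted(jobs, key=lambda j: j["proc_time"])   # shortest processing time first
edd = sorted(jobs, key=lambda j: j["due_date"])    # earliest due date first

def completion_times(sequence):
    t, out = 0, {}
    for j in sequence:
        t += j["proc_time"]
        out[j["id"]] = t
    return out

print("SPT:", [j["id"] for j in spt], completion_times(spt))
print("EDD:", [j["id"] for j in edd], completion_times(edd))
```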

    Grain-size optimization and scheduling for distributed memory architectures

    Get PDF
    The problem of scheduling parallel programs for execution on distributed memory parallel architectures has become the subject of intense research in recent years. Because of the high inter-processor communication overhead in existing parallel machines, a crucial step in scheduling is task clustering, the process of coalescing heavily communicating fine-grain tasks into coarser ones in order to reduce the communication overhead so that the overall execution time is minimized. The thesis of this research is that the task of exposing the parallelism in a given application should be left to the algorithm designer, whereas the task of limiting the parallelism in a chosen parallel algorithm is best handled by the compiler or operating system for the target parallel machine. Toward this end, we have developed CASS (Clustering And Scheduling System), a task management system that provides facilities for automatic granularity optimization and task scheduling of parallel programs on distributed memory parallel architectures. In CASS, a task graph generated by a profiler is used by the clustering module to find the best granularity at which to execute the program so that the overall execution time is minimized. The scheduling module maps the clusters onto a fixed number of processors and determines the order of execution of tasks in each processor. The output of the scheduling module is then used by a code generator to generate machine instructions. CASS employs two efficient heuristic algorithms for clustering static task graphs: CASS-I for clustering with task duplication, and CASS-II for clustering without task duplication. It is shown that the clustering algorithms used by CASS outperform the best known algorithms reported in the literature. For the scheduling module in CASS, a heuristic algorithm based on load balancing is used to merge clusters such that the number of clusters matches the number of available physical processors. We also investigate task clustering algorithms for dynamic task graphs and show that the problem is inherently more difficult than the static case.
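    The clustering step can be illustrated with a generic greedy sketch (not CASS-I or CASS-II): repeatedly merge the clusters joined by the heaviest communication edge, as long as the merged cluster's total work stays under a grain-size bound. The data structures, the bound, and all names are illustrative assumptions.

```python
# Generic greedy edge-driven clustering sketch: merge the clusters connected by
# the heaviest communication edge while respecting a grain-size bound.
# Union-find tracks cluster membership.

def cluster(work, edges, max_grain):
    """work: {task: compute cost}; edges: list of (u, v, comm cost)."""
    parent = {t: t for t in work}
    grain = dict(work)

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path compression
            t = parent[t]
        return t

    for u, v, _ in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv and grain[ru] + grain[rv] <= max_grain:
            parent[rv] = ru                 # merge: the edge becomes internal
            grain[ru] += grain.pop(rv)

    clusters = {}
    for t in work:
        clusters.setdefault(find(t), []).append(t)
    return list(clusters.values())

work = {"a": 2, "b": 3, "c": 4, "d": 1}
edges = [("a", "b", 10), ("b", "c", 8), ("c", "d", 1)]
print(cluster(work, edges, max_grain=6))
```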