113 research outputs found

    Higher-Level Consistencies: Where, When, and How Much

    Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances. We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target higher-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the 'how much' question, we monitor the number of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies. Adviser: B.Y. Choueiry and C. Bessiere
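
    As a rough illustration of the 'when' strategy (not the dissertation's implementation), the sketch below runs a cheap propagation step by default and pays for full arc consistency only once backtrack search has started to thrash; the backtrack threshold, the binary-constraint representation, and all variable names are illustrative assumptions.

    # Sketch: binary CSP solver that runs forward checking by default and
    # switches to full AC-3 (the "higher-level" consistency here) once the
    # backtrack count passes a threshold.  Constraints are given as
    # {(xi, xj): set of allowed (value_i, value_j) pairs}, listed in both directions.

    def revise(domains, xi, xj, allowed):
        """Drop values of xi that have no support in the domain of xj."""
        removed = False
        for a in list(domains[xi]):
            if not any((a, b) in allowed for b in domains[xj]):
                domains[xi].discard(a)
                removed = True
        return removed

    def ac3(domains, constraints):
        """Enforce arc consistency; return False if a domain wipes out."""
        queue = list(constraints)
        while queue:
            xi, xj = queue.pop()
            if revise(domains, xi, xj, constraints[(xi, xj)]):
                if not domains[xi]:
                    return False
                queue.extend(arc for arc in constraints if arc[1] == xi)
        return True

    def forward_check(domains, constraints, var, value):
        """Cheap propagation: prune only variables directly constrained with var."""
        for (xi, xj), allowed in constraints.items():
            if xj == var:
                domains[xi] = {a for a in domains[xi] if (a, value) in allowed}
                if not domains[xi]:
                    return False
        return True

    def solve(domains, constraints, assignment=None, stats=None):
        """Backtrack search; the threshold of 50 backtracks is an assumption."""
        assignment = assignment or {}
        stats = stats if stats is not None else {"backtracks": 0}
        if len(assignment) == len(domains):
            return assignment
        var = min((v for v in domains if v not in assignment), key=lambda v: len(domains[v]))
        for value in sorted(domains[var]):
            local = {v: set(d) for v, d in domains.items()}
            local[var] = {value}
            ok = forward_check(local, constraints, var, value)
            if ok and stats["backtracks"] > 50:       # reactive trigger: search is thrashing
                ok = ac3(local, constraints)          # pay for the stronger consistency
            if ok:
                result = solve(local, constraints, {**assignment, var: value}, stats)
                if result is not None:
                    return result
            stats["backtracks"] += 1
        return None

    # Tiny usage example: colour a triangle with three colours.
    doms = {v: {0, 1, 2} for v in "xyz"}
    neq = {(a, b) for a in range(3) for b in range(3) if a != b}
    cons = {(u, v): neq for u in doms for v in doms if u != v}
    print(solve(doms, cons))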

    Job shop scheduling with probabilistic durations

    Proactive approaches to scheduling take into account information about execution-time uncertainty when forming a schedule. In this paper, we investigate proactive approaches for the job shop scheduling problem where activity durations are random variables. The main contributions are (i) the introduction of the problem of finding probabilistic execution guarantees for difficult scheduling problems; (ii) a method for generating a lower bound on the minimal makespan; (iii) the development of a Monte Carlo approach for evaluating solutions; and (iv) the design and empirical analysis of three solution techniques: an approximately complete technique, found to be computationally feasible only for very small problems, and two techniques based on finding good solutions to a deterministic scheduling problem, which scale to much larger problems.
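
    The Monte Carlo evaluation step can be sketched as follows: fix the operation order on every machine (a solution of the deterministic problem), repeatedly sample the random durations, recompute the realized makespan, and estimate the probability of meeting a deadline. The data layout, the uniform duration distribution, and the tiny instance below are assumptions, not the paper's setup.

    import random

    def sampled_makespan(jobs, machine_order, sample):
        """Makespan for one duration sample, given fixed sequences on every machine.
        jobs:          {job: [(machine, mean_duration), ...]} in technological order
        machine_order: {machine: [(job, op_index), ...]} fixed processing order
        sample(mean):  draws one random duration (the distribution is an assumption)"""
        end = {}                                    # (job, op_index) -> completion time
        machine_free = {m: 0.0 for m in machine_order}
        job_free = {j: 0.0 for j in jobs}
        pending = {m: list(seq) for m, seq in machine_order.items()}
        progress = True
        while progress:                             # simple fixed-point pass over machines
            progress = False
            for m, seq in pending.items():
                while seq:
                    job, k = seq[0]
                    if k > 0 and (job, k - 1) not in end:
                        break                       # job predecessor not finished yet
                    start = max(machine_free[m], job_free[job])
                    end[(job, k)] = start + sample(jobs[job][k][1])
                    machine_free[m] = job_free[job] = end[(job, k)]
                    seq.pop(0)
                    progress = True
        if any(seq for seq in pending.values()):
            raise ValueError("machine orders are inconsistent with the job routings")
        return max(end.values())

    def probability_within(jobs, machine_order, deadline, n_samples=1000, seed=0):
        """Monte Carlo estimate of P(makespan <= deadline) for a fixed schedule."""
        rng = random.Random(seed)
        sample = lambda mean: rng.uniform(0.5 * mean, 1.5 * mean)   # assumed distribution
        hits = sum(sampled_makespan(jobs, machine_order, sample) <= deadline
                   for _ in range(n_samples))
        return hits / n_samples

    # Two jobs, two machines; machine orders taken from a deterministic solution.
    jobs = {"j1": [("m1", 3.0), ("m2", 2.0)], "j2": [("m2", 4.0), ("m1", 1.0)]}
    machine_order = {"m1": [("j1", 0), ("j2", 1)], "m2": [("j2", 0), ("j1", 1)]}
    print(probability_within(jobs, machine_order, deadline=7.5))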

    On Selecting and Scheduling Assembly Plans Using Constraint Programming

    This work presents the application of Constraint Programming to the problem of selecting and sequencing assembly operations. The set of all feasible assembly plans for a single product is represented using an And/Or graph. This representation embodies some of the constraints involved in the planning problem, such as the precedence of tasks and the constraints required to complete a correct assembly plan. The work focuses on the selection of tasks and their optimal ordering, taking into account their execution in a generic multi-robot system. In order to include all the different constraints of the problem, the And/Or graph representation is extended so that links are added between nodes corresponding to assembly tasks, taking the resource constraints into account. The resulting problem is mapped to a Constraint Satisfaction Problem (CSP) and solved using Constraint Programming, a powerful programming paradigm that is increasingly used to model and solve many hard real-life problems.
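
    A minimal constraint-programming sketch of the idea, not the paper's exact model, is shown below: Boolean variables select one alternative decomposition (the 'Or' choice), optional interval variables schedule only the selected tasks on a shared robot, and precedence and resource constraints are posted over them. It assumes the OR-tools CP-SAT solver; the task data and single-robot setup are illustrative.

    from ortools.sat.python import cp_model

    # Alternative decompositions (the "Or" choice): plan A = tasks t1 then t2,
    # plan B = task t3 alone.  All data below is made up for illustration.
    durations = {"t1": 3, "t2": 4, "t3": 6}
    horizon = sum(durations.values())

    model = cp_model.CpModel()
    use_a = model.NewBoolVar("use_plan_a")
    use_b = model.NewBoolVar("use_plan_b")
    model.Add(use_a + use_b == 1)                   # pick exactly one decomposition

    selected = {"t1": use_a, "t2": use_a, "t3": use_b}
    start, end, intervals = {}, {}, []
    for t, d in durations.items():
        start[t] = model.NewIntVar(0, horizon, f"start_{t}")
        end[t] = model.NewIntVar(0, horizon, f"end_{t}")
        # Optional interval: the task occupies the robot only if it is selected.
        intervals.append(model.NewOptionalIntervalVar(start[t], d, end[t], selected[t], f"iv_{t}"))

    model.AddNoOverlap(intervals)                               # one shared robot (resource)
    model.Add(end["t1"] <= start["t2"]).OnlyEnforceIf(use_a)    # precedence inside plan A

    makespan = model.NewIntVar(0, horizon, "makespan")
    for t in durations:
        model.Add(makespan >= end[t]).OnlyEnforceIf(selected[t])
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        chosen = [t for t in durations if solver.Value(selected[t])]
        print("selected tasks:", chosen, "makespan:", solver.Value(makespan))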

    Monte Carlo Simulation to Compare Markovian and Neural Network Models for Reliability Assessment in Multiple AGV Manufacturing System

    We compare two approaches for a Markovian model of flexible manufacturing systems (FMSs) using Monte Carlo simulation. The model, which extends Fazlollahtabar and Saidi-Mehrabad (2013), considers two features of automated flexible manufacturing systems equipped with automated guided vehicles (AGVs), namely the reliability of the machines and the reliability of the AGVs in a multiple-AGV job shop manufacturing system. Current methods for modeling the reliability of such a system involve determining the system state probabilities and the transitions between states. Since the failures of the machines and of the AGVs can occur in different states, a Markovian model is proposed for reliability assessment. The traditional Markovian computation is compared with a neural network methodology, and Monte Carlo simulation verifies that the neural network method performs better for the Markovian computations.
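
    The Monte Carlo verification step can be illustrated with a small stand-in model (not the paper's): one machine and one AGV, each alternating exponentially distributed up and down times, with the simulated availability compared against the analytic steady-state value. The failure and repair rates below are assumed for illustration.

    import random

    def simulated_availability(components, horizon, dt=0.05, seed=0):
        """Fraction of time all components are up over [0, horizon] (coarse time grid)."""
        rng = random.Random(seed)
        state = {name: True for name in components}                  # True = operational
        next_event = {name: rng.expovariate(lam) for name, (lam, mu) in components.items()}
        up_time, t = 0.0, 0.0
        while t < horizon:
            for name, (lam, mu) in components.items():
                if t >= next_event[name]:
                    state[name] = not state[name]
                    rate = lam if state[name] else mu                # next failure or repair
                    next_event[name] = t + rng.expovariate(rate)
            if all(state.values()):
                up_time += dt
            t += dt
        return up_time / horizon

    # (failure rate, repair rate) per component -- assumed values, not from the paper.
    components = {"machine": (0.02, 0.5), "agv": (0.05, 1.0)}
    analytic = 1.0
    for lam, mu in components.values():
        analytic *= mu / (lam + mu)                                  # steady-state availability

    print("analytic availability :", round(analytic, 4))
    print("simulated availability:", round(simulated_availability(components, 20_000), 4))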

    A critical analysis of job shop scheduling in context of industry 4.0

    Scheduling plays a pivotal role in the competitiveness of a job shop facility. The traditional job shop scheduling problem (JSSP) is centralized or semi-distributed. With the advent of Industry 4.0, there has been a paradigm shift in the manufacturing industry from traditional scheduling to smart distributed scheduling (SDS). The implementation of Industry 4.0 results in increased flexibility, high product quality, short lead times, and customized production. Smart/intelligent manufacturing is an integral part of Industry 4.0: it converts renewable and nonrenewable resources into intelligent objects capable of sensing, working, and acting in a smart environment to achieve effective scheduling. This paper aims to provide a comprehensive review of centralized and decentralized/distributed JSSP techniques in the context of the Industry 4.0 environment. First, centralized JSSP models and problem-solving methods, along with their advantages and limitations, are discussed. Second, an overview of the associated techniques used in the Industry 4.0 environment is presented. The third part of the paper discusses the transition from traditional job shop scheduling to decentralized JSSP, drawing on the latest research trends in this domain. Finally, the paper highlights future directions for JSSP research and application, in light of the robustness of JSSP and the current pandemic situation.

    Comparing Mixed & Integer Programming vs. Constraint Programming by solving Job-Shop Scheduling Problems

    Scheduling is a key factor for operations management as well as for business success. Industrial Job-shop Scheduling problems (JSSP) have generated many optimization challenges since the 1960s, as improvements have been continuously required, such as bottleneck allocation, lead-time reduction, and shorter response times to requests. With this in perspective, this work discusses three different optimization models for minimizing makespan. The three models were applied to 17 classical JSSP instances and produced different results. The first model relies on Mixed-Integer Programming (MIP) and optimized 60% of the studied instances. The other two models are based on Constraint Programming (CP) and approach the problem in two different ways: (a) model CP-1 is a standard IBM formulation whose constraints have an interval structure; it failed to solve 53% of the proposed instances; (b) model CP-2 formulates the problem with disjunctive constraints and optimized 88% of the instances. Each model is analyzed individually and then compared on: (i) optimization success rate, (ii) computational processing time, (iii) resource utilization, and (iv) work-in-process inventory. Results show that CP-2 performed best on criteria (i) and (ii), while MIP was superior on criteria (iii) and (iv); these findings are discussed in the final section of this work.
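
    For reference, the disjunctive formulation that distinguishes CP-2 can be sketched as follows (a minimal model assuming the OR-tools CP-SAT solver, not the paper's exact CP-2 or the IBM CP-1 model): each operation becomes an interval variable and operations sharing a machine are separated with a NoOverlap constraint; in the MIP model the same disjunction is typically linearized with big-M constraints and binary ordering variables. The two-job instance below is an assumption.

    from ortools.sat.python import cp_model

    # Each job is a list of (machine, duration) operations in technological order.
    jobs = [
        [(0, 3), (1, 2)],          # job 0: machine 0 then machine 1
        [(1, 4), (0, 1)],          # job 1: machine 1 then machine 0
    ]
    horizon = sum(d for job in jobs for _, d in job)

    model = cp_model.CpModel()
    per_machine, job_ends = {}, []
    for j, job in enumerate(jobs):
        prev_end = None
        for k, (m, d) in enumerate(job):
            start = model.NewIntVar(0, horizon, f"s_{j}_{k}")
            end = model.NewIntVar(0, horizon, f"e_{j}_{k}")
            per_machine.setdefault(m, []).append(model.NewIntervalVar(start, d, end, f"iv_{j}_{k}"))
            if prev_end is not None:
                model.Add(start >= prev_end)        # precedence within the job
            prev_end = end
        job_ends.append(prev_end)

    for intervals in per_machine.values():
        model.AddNoOverlap(intervals)               # disjunctive machine constraint

    makespan = model.NewIntVar(0, horizon, "makespan")
    model.AddMaxEquality(makespan, job_ends)
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("makespan:", solver.Value(makespan))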