
    Mixed integer-linear formulations of cumulative scheduling constraints - A comparative study

    This paper introduces two MILP models for the cumulative scheduling constraint, together with associated pre-processing filters. We compare standard solver performance for these models on three sets of problems; for the two sets in which tasks have unitary resource consumption, we also compare them with two models based on a geometric placement constraint. In the experiments, the solver performance of one of the cumulative models is clearly the best, and that model is also shown to scale very well on a large-scale industrial transportation scheduling problem.
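
    The two models themselves are not spelled out in the abstract. As a point of reference only (an assumption on our part, not necessarily one of the models compared), a standard time-indexed MILP encoding of cumulative uses binaries $x_{it} = 1$ iff task $i$ starts at time $t$, with duration $p_i$, resource demand $r_i$, and capacity $C$:

    \[
    \sum_{t} x_{it} = 1 \quad \forall i,
    \qquad
    \sum_{i} r_i \sum_{t' = t - p_i + 1}^{t} x_{it'} \le C \quad \forall t,
    \]

    since task $i$ runs at time $t$ exactly when it started within the last $p_i$ time points.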

    Global Constraint Catalog, 2nd Edition

    This report presents a catalogue of global constraints where each constraint is explicitly described in terms of graph properties and/or automata and/or first-order logical formulae with arithmetic. When available, it also presents some typical usage as well as pointers to existing filtering algorithms.
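
    To give a flavour of the automaton-based descriptions (a hedged paraphrase, not a quotation from the catalogue): the global_contiguity constraint over 0/1 variables holds when the 1s form a single block, i.e. the sequence belongs to the regular language 0*1*0*. A minimal Python check of that property:

        import re

        def global_contiguity(bits):
            # holds iff all 1s are contiguous: the word lies in 0*1*0*
            return re.fullmatch(r"0*1*0*", "".join(map(str, bits))) is not None

        assert global_contiguity([0, 1, 1, 0])
        assert not global_contiguity([1, 0, 1])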

    Global Constraint Catalog, 2nd Edition (revision a)

    This report presents a catalogue of global constraints where each constraint is explicitly described in terms of graph properties and/or automata and/or first-order logical formulae with arithmetic. When available, it also presents some typical usage as well as pointers to existing filtering algorithms.

    Synchronized sweep algorithms for scalable scheduling constraints

    This report introduces a family of synchronized sweep-based filtering algorithms for handling scheduling problems involving resource and precedence constraints. The key idea is to filter all constraints of a scheduling problem in a synchronized way in order to scale better. In addition to the normal filtering mode, the algorithms can run in a greedy mode, in which case they perform a greedy assignment of start and end times. The filtering mode achieves a significant speed-up over the decomposition into independent cumulative and precedence constraints, while the greedy mode can handle up to 1 million tasks with 64 resource constraints and 2 million precedences. These algorithms were implemented in both CHOCO and SICStus.
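
    The synchronized algorithms themselves are beyond the abstract; the sketch below only illustrates the basic sweep idea they build on, for a single cumulative resource (our illustration, with assumed task fields est/lst/duration/demand):

        def sweep_overload(tasks, capacity):
            # Each task is (est, lst, d, r): earliest start, latest start,
            # duration, resource demand. Its compulsory part is the interval
            # [lst, est + d), non-empty only when lst < est + d.
            events = []  # (time, demand change)
            for est, lst, d, r in tasks:
                if lst < est + d:
                    events.append((lst, +r))
                    events.append((est + d, -r))
            usage = 0
            for t, delta in sorted(events):  # releases sort before takes at equal t
                usage += delta
                if usage > capacity:
                    return True              # proven resource overload at time t
            return False

        # capacity 3: two demand-2 compulsory parts overlap on [1, 3)
        assert sweep_overload([(0, 0, 3, 2), (0, 1, 3, 2)], 3)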

    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery is focused on mining a set of process models where each model describes the behavior represented in the event log only partially, i.e., subsets of possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues in the case of logs with many activities (problems may already occur when there are more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets results in the largest improvement of execution time, with the discovered LPMs still being better than those obtained with randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all.

    Comment: paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI).
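
    The abstract does not define the log-entropy heuristic precisely; purely as an illustration of entropy-based scoring over an event log (an assumption on our part, not the paper's exact measure), one can compute the Shannon entropy of the activity frequency distribution:

        import math
        from collections import Counter

        def activity_entropy(log):
            # Shannon entropy of the activity frequencies of an event log,
            # given as a list of traces, each a list of activity labels
            counts = Counter(a for trace in log for a in trace)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total)
                        for c in counts.values())

        print(activity_entropy([["a", "b", "c"], ["a", "b", "d"]]))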

    Global constraints as graph properties on structured network of elementary constraints of the same type

    This report introduces a classification scheme for global constraints. The classification is based on four basic ingredients from which one can generate almost all existing global constraints and come up with new, interesting constraints. Global constraints are defined in a very concise way, in terms of graph properties that have to hold, where the graph is a structured network of elementary constraints of the same type. Since this classification is based on the internal structure of the global constraints, it also provides a strong hint for their pruning algorithms.
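
    To make the graph-property view concrete (a hedged paraphrase of the scheme, not a quotation): alldifferent can be described by the arc constraint x_i = x_j over a clique of variables together with the graph property MAX_NSCC <= 1, i.e. no strongly connected component contains more than one vertex. For equality arcs the components are simply groups of variables sharing a value, so checking the property reduces to:

        from collections import Counter

        def alldifferent_holds(values):
            # MAX_NSCC over the equality clique = size of the largest group
            # of variables taking the same value
            return max(Counter(values).values()) <= 1

        assert alldifferent_holds([1, 3, 2])
        assert not alldifferent_holds([1, 3, 1])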

    Linear-time filtering algorithms for the disjunctive constraint and a quadratic filtering algorithm for the cumulative not-first not-last

    We present new filtering algorithms for the Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Check for the Disjunctive and Cumulative constraints. Finally, we show how the Not-first/Not-last rule can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, which we leverage to introduce a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
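
    The paper's time line is not detailed in the abstract; the sketch below shows only the classic union-find trick it appears to build on (our reading, not the authors' exact structure): find(t) returns the earliest free time slot >= t in amortized near-constant time, and occupying a slot unions it with its successor.

        def make_timeline(horizon):
            parent = list(range(horizon + 1))

            def find(t):
                # earliest free slot >= t, with path compression
                while parent[t] != t:
                    parent[t] = parent[parent[t]]
                    t = parent[t]
                return t

            def occupy(t):
                slot = find(t)
                parent[slot] = slot + 1  # later queries skip past this slot
                return slot

            return occupy

        occupy = make_timeline(10)
        assert [occupy(3), occupy(3), occupy(3)] == [3, 4, 5]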

    A Bayesian approach to constrained single- and multi-objective optimization

    This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria (the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions) as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
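
    For reference, the classical Expected Improvement criterion that the new criterion reduces to has, for minimization under a Gaussian-process posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$ and best observed feasible value $f_{\min}$, the standard closed form

    \[
    \mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\phi(z),
    \qquad z = \frac{f_{\min} - \mu(x)}{\sigma(x)},
    \]

    where $\Phi$ and $\phi$ are the standard normal CDF and PDF.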