
    Efficient algorithms to solve scheduling problems with a variety of optimization criteria

    Constraint programming is a powerful methodology for solving large-scale, practical scheduling problems. Resource-constrained scheduling deals with the temporal allocation of a variety of tasks to a set of resources, where each task consumes a certain amount of resource during its execution. Ordinarily, an objective function such as the total length of a feasible schedule, called the makespan, is optimized. Solving a scheduling problem amounts to deciding when each task starts and which resource executes it. In general, scheduling problems are NP-hard; consequently, there is no known algorithm that solves them in a polynomial number of instructions. Nonetheless, there exist specializations of scheduling problems that are not NP-complete, and such problems can be solved in polynomial time using dedicated algorithms. We study such algorithms for scheduling problems in a variety of contexts. Filtering techniques have been developed and improved over the past years in constraint-based scheduling. The prominence of filtering algorithms lies in their ability to shrink the search tree by excluding domain values that cannot participate in a feasible solution. We propose improvements and present faster filtering algorithms for classical scheduling problems. Furthermore, we adapt filtering techniques to the case where tasks can be delayed. We also consider distinct properties of industrial scheduling problems and solve more efficiently scheduling problems whose optimization criterion is not necessarily the makespan. For instance, we present polynomial-time algorithms for the case where the amount of available resources fluctuates over time, or where the cost of executing a task at time t depends on t.
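
    To make the filtering idea concrete, the following is a minimal sketch (not one of the thesis's algorithms) of time-table filtering for a cumulative resource: the compulsory parts of the tasks form a resource-usage profile, and a task's earliest start time is pushed past any profile slice on which it cannot run without exceeding the capacity. The Task fields and the tiny instance at the end are invented for illustration.

# Illustrative sketch of time-table filtering for a cumulative resource.
# Tasks have an earliest start (est), latest completion (lct), duration and
# demand; the resource has a fixed capacity.  Single pass, no fixpoint.
from dataclasses import dataclass

@dataclass
class Task:
    est: int   # earliest start time
    lct: int   # latest completion time
    dur: int   # processing time
    dem: int   # resource demand while running

def timetable_filter(tasks, capacity):
    """Return tightened earliest start times, or None if infeasibility is detected."""
    # 1. Compulsory part of a task is [lst, ect) = [lct - dur, est + dur) when non-empty.
    events = []
    for t in tasks:
        lst, ect = t.lct - t.dur, t.est + t.dur
        if lst < ect:                       # compulsory part exists
            events.append((lst, t.dem))
            events.append((ect, -t.dem))
    events.sort()
    # 2. Sweep the events to build the profile of compulsory resource usage.
    profile, usage, prev = [], 0, None
    for time, delta in events:
        if prev is not None and time > prev and usage > 0:
            profile.append((prev, time, usage))
        usage += delta
        prev = time
    if any(u > capacity for _, _, u in profile):
        return None                         # compulsory parts alone overload the resource
    # 3. Push the earliest start of each task past slices it cannot fit on.
    new_est = []
    for t in tasks:
        est, lst, ect = t.est, t.lct - t.dur, t.est + t.dur
        for s, e, u in profile:             # slices lie entirely inside or outside [lst, ect)
            own = t.dem if lst < ect and s < ect and lst < e else 0
            if u - own + t.dem > capacity and est < e and est + t.dur > s:
                est = e                     # the task cannot run across this slice
        if est + t.dur > t.lct:
            return None                     # domain wipe-out
        new_est.append(est)
    return new_est

if __name__ == "__main__":
    tasks = [Task(est=0, lct=4, dur=4, dem=2),    # effectively fixed on [0, 4)
             Task(est=0, lct=10, dur=3, dem=2)]   # flexible task
    print(timetable_filter(tasks, capacity=3))    # -> [0, 4]: the second task is pushed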

    Models and Strategies for Variants of the Job Shop Scheduling Problem

    Recently, a variety of constraint programming and Boolean satisfiability approaches to scheduling problems have been introduced. They have in common the use of relatively simple propagation mechanisms and an adaptive way to focus on the most constrained part of the problem. In some cases, these methods compare favorably to more classical constraint programming methods relying on propagation algorithms for global unary or cumulative resource constraints and dedicated search heuristics. In particular, we previously described an approach that combines restarting with a generic adaptive heuristic and solution-guided branching on a simple model based on a decomposition of disjunctive constraints. In this paper, we introduce an adaptation of this technique for an important subclass of job shop scheduling problems (JSPs), where the objective function involves minimization of earliness/tardiness costs. We further show that our technique can be improved by adding domain-specific information for one variant of the JSP (involving time lag constraints). In particular, we introduce a dedicated greedy heuristic and an improved model for the case where the maximal time lag is 0 (also referred to as no-wait JSPs). Comment: Principles and Practice of Constraint Programming - CP 2011, Perugia, Italy (2011).
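
    As a toy illustration of the decomposed disjunctive model mentioned above (not the paper's solver or data), the sketch below treats a single machine: every pair of tasks carries an ordering decision, a complete consistent set of decisions is simply a permutation, and the objective sums earliness and tardiness against invented due dates. Tasks are scheduled as early as possible, so deliberately inserting idle time to reduce earliness is ignored here.

# Illustrative brute force over the ordering decisions of a single-machine
# disjunctive decomposition, minimizing total earliness/tardiness.
# Durations, release times and due dates are invented for the example.
from itertools import permutations

tasks = {            # name: (duration, release time, due date)
    "A": (3, 0, 5),
    "B": (2, 0, 4),
    "C": (4, 1, 9),
}

def earliness_tardiness(order):
    """Left-shift the tasks in the given order; return (cost, completion times)."""
    time, cost, ends = 0, 0, {}
    for name in order:
        dur, release, due = tasks[name]
        start = max(time, release)        # unary machine: no overlap
        end = start + dur
        cost += abs(end - due)            # earliness + tardiness with unit weights
        ends[name] = end
        time = end
    return cost, ends

# A complete, consistent assignment of the pairwise "i before j" Booleans of the
# decomposition is exactly a permutation, so enumerating permutations enumerates
# the leaves of the decomposed model.
best = min((earliness_tardiness(order) + (order,) for order in permutations(tasks)),
           key=lambda triple: triple[0])
print(best)          # (cost, completion times, order) of the cheapest ordering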

    Modelling a Maintenance Scheduling Problem with Alternative Resources

    Effective management of maintenance in buildings can have a significant impact on the total life cycle costs and on the building energy use. Nevertheless, the building maintenance scheduling problem has been infrequently studied. In this paper, we present constraint-based scheduling models for the building maintenance scheduling problem, where each activity has a set of alternative resources. We consider two different models, one using basic constraints, and the other using our new and modified global constraints, which handle alternative disjunctive resources for each activity to allow propagation before activities are assigned to resources. We evaluate these models on randomly generated problems and show that while the basic model is faster on smaller problems, the global constraint model scales better.
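
    A rough sketch of the "basic constraints" view of alternative resources (invented data, and not either of the paper's models): each activity must be placed on exactly one of its allowed resources, and the activities sharing a resource must fit, without overlapping, inside their time windows. A global-constraint model would instead propagate across the alternatives before any assignment is made.

# Illustrative brute force: choose one resource per activity, then check that
# every resource can sequence its activities within release/deadline windows.
# Activity data is invented for the example.
from itertools import permutations, product

activities = {       # name: (duration, release, deadline, alternative resources)
    "inspect_hvac": (2, 0, 6, ("crewA", "crewB")),
    "fix_lift":     (3, 0, 5, ("crewA",)),
    "paint_lobby":  (4, 2, 9, ("crewA", "crewB")),
}

def resource_feasible(names):
    """Can the listed activities be sequenced on one unary resource?"""
    for order in permutations(names):
        time, ok = 0, True
        for n in order:
            dur, rel, deadline, _ = activities[n]
            start = max(time, rel)
            if start + dur > deadline:
                ok = False
                break
            time = start + dur
        if ok:
            return True
    return False

def feasible_assignments():
    names = list(activities)
    for choice in product(*(activities[n][3] for n in names)):
        per_resource = {}
        for n, r in zip(names, choice):
            per_resource.setdefault(r, []).append(n)
        if all(resource_feasible(group) for group in per_resource.values()):
            yield dict(zip(names, choice))

print(next(feasible_assignments(), None))   # one feasible assignment, if any exists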

    On the Reification of Global Constraints

    We introduce a simple idea for deriving reified global constraints in a systematic way. It is based on the observation that most global constraints can be reformulated as a conjunction of pure functional dependency constraints together with a constraint that can be easily reified. We first show how the core constraints of the Global Constraint Catalogue can be reified, and we then identify several reification categories that apply to at least 82% of the constraints in the Global Constraint Catalogue.
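
    As a toy instance of the observation (not taken from the paper), the reified constraint "at most k occurrences of value v among X" can be split into a pure functional dependency, the occurrence count c determined by X, followed by the easily reified comparison b <-> (c <= k). The brute-force check below merely confirms that the two formulations agree on a tiny, invented search space.

# Illustrative check of the functional-dependency reformulation of a reified
# atmost(k, X, v) constraint.  Domains, k and v are invented for the example.
from itertools import product

domains = [(1, 2, 3)] * 3        # three variables, each ranging over {1, 2, 3}
k, v = 1, 2

def direct_reified(xs):
    """Truth value of the reified constraint b <-> atmost(k, xs, v)."""
    return sum(x == v for x in xs) <= k

def decomposed_reified(xs):
    """Same truth value via the decomposition."""
    c = sum(x == v for x in xs)   # functional dependency: c is determined by xs
    return c <= k                 # the only part that actually needs reification

assert all(direct_reified(xs) == decomposed_reified(xs) for xs in product(*domains))
print("decomposition agrees with direct reification on all", 3 ** 3, "assignments")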

    To Preference via Entrenchment

    We introduce a simple generalization of Gardenfors and Makinson's epistemic entrenchment called partial entrenchment. We show that preferential inference can be generated as the sceptical counterpart of an inference mechanism defined directly on partial entrenchment. Comment: 16 pages.

    A Multi-Engine Approach to Answer Set Programming

    Answer Set Programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming that has recently been employed in many applications. The development of efficient ASP systems is, thus, crucial. With the aim of improving the solving methods for ASP, there are two usual ways to reach this goal: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and to apply machine learning techniques for automatically choosing the "best" available solver on a per-instance basis. In this paper we pursue this latter direction. We first define a set of cheap-to-compute syntactic features that characterize several aspects of ASP programs. Then, we apply classification methods that, given the features of the instances in a training set and the solvers' performance on these instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering solvers and different training and test sets of instances taken from the ones submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows that, by applying machine learning techniques to ASP solving, it is possible to obtain very robust performance: our approach can solve more instances than any solver that entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic Programming (TPLP).) Comment: 26 pages, 8 figures.
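
    A minimal sketch of per-instance algorithm selection in the spirit of the approach: instances are mapped to cheap feature vectors, a classifier is trained on which solver was fastest, and unseen instances are routed to the predicted solver. The nearest-neighbour rule, the features and the training data below are invented placeholders, not the paper's features, solvers or experiments.

# Illustrative per-instance solver selection with a 1-nearest-neighbour rule.
# Feature vectors and solver names are invented; real features should be
# normalized before computing distances.
import math

# training data: (feature vector of an ASP instance, fastest solver on it)
train = [
    ((120.0, 0.10, 3.0), "solverA"),   # e.g. (#atoms, ratio of constraints, ...)
    ((900.0, 0.55, 7.0), "solverB"),
    ((150.0, 0.12, 2.5), "solverA"),
    ((870.0, 0.60, 8.0), "solverB"),
]

def predict_solver(features):
    """Pick the solver that was fastest on the most similar training instance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, best = min(train, key=lambda pair: dist(pair[0], features))
    return best

new_instance = (880.0, 0.58, 7.5)       # features of an unseen instance
print(predict_solver(new_instance))     # -> "solverB" on this made-up data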

    On Properties of Update Sequences Based on Causal Rejection

    We consider an approach to update nonmonotonic knowledge bases represented as extended logic programs under answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle enforcing that, in case of conflicts, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to update logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of properties of our approach, to get a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and some new properties are considered as well. We then consider refinements of our semantics which incorporate a notion of minimality of change. As well, we investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al. and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of the causal rejection principle apply to these approaches as well. Finally, we deal with the computational complexity of our approach, and outline how the update semantics and its refinements can be implemented on top of existing logic programming engines. Comment: 59 pages, 2 figures, 3 tables, to be published in "Theory and Practice of Logic Programming".
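
    A toy sketch of the causal rejection principle, simplified far beyond the paper's setting: ground rules with strong negation only (no default negation), and a brute-force search over interpretations. A rule of an older program is rejected when a later program contains an applicable rule with the complementary head. The TV/power-failure style example is invented for illustration.

# Toy causal rejection over a sequence of ground programs without default
# negation.  A rule is (head, body); literals are strings, "-x" strongly negates "x".
from itertools import combinations

P1 = [("tv_on", [])]                                          # older knowledge
P2 = [("power_failure", []), ("-tv_on", ["power_failure"])]   # newer update
sequence = [P1, P2]

def comp(l):
    """Complementary literal under strong negation."""
    return l[1:] if l.startswith("-") else "-" + l

def rejected(i, rule, S):
    """Rule of program i is rejected if some later program has an applicable
    rule with the complementary head (the causal rejection principle)."""
    head, _ = rule
    return any(h == comp(head) and set(b) <= S
               for P in sequence[i + 1:] for (h, b) in P)

def closure(rules):
    """Least set of literals closed under the (negation-free-body) rules."""
    S, changed = set(), True
    while changed:
        changed = False
        for h, b in rules:
            if set(b) <= S and h not in S:
                S.add(h)
                changed = True
    return S

def answer_sets():
    lits = {l for P in sequence for (h, b) in P for l in [h, *b]}
    lits |= {comp(l) for l in set(lits)}
    for r in range(len(lits) + 1):
        for cand in map(set, combinations(sorted(lits), r)):
            if any(comp(l) in cand for l in cand):
                continue                       # skip inconsistent candidates
            kept = [rule for i, P in enumerate(sequence) for rule in P
                    if not rejected(i, rule, cand)]
            if closure(kept) == cand:
                yield cand

print(list(answer_sets()))   # -> [{'power_failure', '-tv_on'}]: tv_on is overridden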

    Modeling cumulative biological phenomena with Suppes-Bayes Causal Networks

    Several diseases related to cell proliferation are characterized by the accumulation of somatic DNA changes, with respect to wildtype conditions. Cancer and HIV are two common examples of such diseases, where the mutational load in the cancerous/viral population increases over time. In these cases, selective pressures are often observed along with competition, cooperation and parasitism among distinct cellular clones. Recently, we presented a mathematical framework to model these phenomena, based on a combination of Bayesian inference and Suppes' theory of probabilistic causation, depicted in graphical structures dubbed Suppes-Bayes Causal Networks (SBCNs). SBCNs are generative probabilistic graphical models that recapitulate the potential ordering of accumulation of such DNA changes during the progression of the disease. Such models can be inferred from data by exploiting likelihood-based model-selection strategies with regularization. In this paper we discuss the theoretical foundations of our approach and we investigate in depth the influence on the model-selection task of: (i) the poset based on Suppes' theory and (ii) different regularization strategies. Furthermore, we provide an example of application of our framework to HIV genetic data, highlighting the valuable insights provided by the inferred SBCN.
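
    A small sketch of the Suppes-style pruning that the framework builds on (not the authors' implementation): for binary mutation profiles, a candidate edge i -> j is kept only when i occurs more frequently than j, a cross-sectional proxy for temporal priority, and when observing i raises the probability of observing j. Regularized, likelihood-based model selection would then search over the surviving edges. The data matrix below is invented.

# Illustrative Suppes-style pruning of candidate causal edges from a binary
# mutation matrix (rows: samples, columns: events).  Toy data, invented.
events = ["mutA", "mutB", "mutC"]
data = [   # 1 = event observed in the sample
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
]

def prob(col, rows=None):
    """Empirical probability of the event in `col` over the given rows."""
    rows = data if rows is None else rows
    return sum(r[col] for r in rows) / len(rows) if rows else 0.0

def suppes_edges():
    """Edges i -> j satisfying temporal priority and probability raising."""
    edges = []
    for i in range(len(events)):
        for j in range(len(events)):
            if i == j:
                continue
            with_i = [r for r in data if r[i] == 1]
            without_i = [r for r in data if r[i] == 0]
            temporal_priority = prob(i) > prob(j)            # P(i) > P(j)
            raising = prob(j, with_i) > prob(j, without_i)   # P(j|i) > P(j|not i)
            if temporal_priority and raising:
                edges.append((events[i], events[j]))
    return edges

# The surviving edges form the poset handed to regularized model selection.
print(suppes_edges())   # -> [('mutA', 'mutB'), ('mutA', 'mutC')] on this toy data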