20 research outputs found

    Enforcement in Abstract Argumentation via Boolean Optimization

    Computational aspects of argumentation are a central research topic in modern artificial intelligence. A core formal model for argumentation, in which the inner structure of arguments is abstracted away, was provided by Dung in the form of abstract argumentation frameworks (AFs). Syntactically, AFs are directed graphs whose nodes represent arguments and whose edges represent attacks between them. Given an AF, sets of jointly acceptable arguments, called extensions, are defined via different semantics. The computational complexity of, and algorithmic solutions to, so-called static problems, such as the enumeration of extensions, are well studied. Since argumentation is a dynamic process, understanding the dynamic aspects of AFs is also important. However, the computational aspects of dynamic problems have not been studied as thoroughly. This work concentrates on different forms of enforcement, a core dynamic problem in abstract argumentation: given an AF, one wants to modify it by adding and removing attacks so that a given set of arguments becomes an extension (extension enforcement) or so that given arguments are credulously or skeptically accepted (status enforcement). In this thesis, the enforcement problem is viewed as a constrained optimization task in which the change to the attack structure is minimized. The computational complexity of the extension and status enforcement problems is analyzed, showing that they are, in the general case, NP-hard optimization problems. Motivated by this, algorithms are presented based on the Boolean optimization paradigm of maximum satisfiability (MaxSAT) for the NP-complete variants, and on counterexample-guided abstraction refinement (CEGAR) procedures, in which an interplay between MaxSAT and Boolean satisfiability (SAT) solvers is utilized, for problems beyond NP. The algorithms are implemented in the open-source software system Pakota, which is empirically evaluated on randomly generated enforcement instances.
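    To make the optimization view concrete, the sketch below encodes a toy fragment of extension enforcement as partial MaxSAT: hard clauses forbid attacks inside the target set (conflict-freeness only, a small piece of what Pakota handles under real semantics), while unit soft clauses charge one unit for every attack added or removed. The use of PySAT's RC2 solver is an illustrative assumption, not the system's actual implementation.

```python
# Minimal sketch: conflict-freeness enforcement as partial MaxSAT,
# assuming PySAT (pip install python-sat) and its RC2 solver.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def enforce_conflict_free(args, attacks, target):
    """args: argument ids; attacks: set of (a, b) pairs; target: the set
    of arguments to make conflict-free by adding/removing attacks."""
    # One Boolean variable per potential attack (a, b).
    var = {}
    for a in args:
        for b in args:
            var[a, b] = len(var) + 1
    wcnf = WCNF()
    # Hard: no attack may hold between two members of the target set.
    for a in target:
        for b in target:
            wcnf.append([-var[a, b]])
    # Soft: prefer keeping every edge variable at its original value,
    # so the optimum minimizes the number of changed attacks.
    for (a, b), v in var.items():
        wcnf.append([v] if (a, b) in attacks else [-v], weight=1)
    with RC2(wcnf) as rc2:
        model = rc2.compute()
    return {(a, b) for (a, b), v in var.items() if model[v - 1] > 0}

# Dropping attack (1, 2) makes {1, 2} conflict-free at cost 1.
print(enforce_conflict_free([1, 2, 3], {(1, 2), (2, 3)}, {1, 2}))
```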

    LS-DTKMS: A Local Search Algorithm for Diversified Top-k MaxSAT Problem

    Maximum Satisfiability (MaxSAT), an important optimization problem, has a range of applications, including network routing, planning and scheduling, and combinatorial auctions. In these applications one often benefits from having not just a single solution but k diverse solutions. Motivated by this, we study an extension of MaxSAT, the Diversified Top-k MaxSAT (DTKMS) problem: find k feasible assignments of a given formula such that each assignment satisfies all hard clauses and, together, they satisfy the maximum number of soft clauses. This paper presents a local search algorithm for the DTKMS problem, LS-DTKMS, which exploits novel scoring functions to select variables and assignments. Experiments demonstrate that LS-DTKMS outperforms DTKMS solvers based on top-k MaxSAT as well as state-of-the-art solvers for the diversified top-k clique problem.
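    As a point of reference for the problem definition, the following brute-force sketch makes the DTKMS objective precise: each of the k assignments must satisfy all hard clauses, and the group is scored by how many soft clauses at least one member satisfies. The helper is hypothetical and exponential in the number of variables; it only pins down the objective and is no substitute for local search.

```python
# Brute-force reference for the DTKMS objective (illustrative only).
from itertools import combinations, product

def dtkms_brute_force(n, hard, soft, k):
    """Clauses are lists of nonzero ints in DIMACS style (-2 = "not x2")."""
    def sat(clause, assign):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)
    # Enumerate all assignments that satisfy every hard clause.
    feasible = [a for a in
                (dict(zip(range(1, n + 1), bits))
                 for bits in product([False, True], repeat=n))
                if all(sat(c, a) for c in hard)]
    best, best_score = None, -1
    for group in combinations(feasible, k):
        # A soft clause counts if at least one assignment satisfies it.
        score = sum(any(sat(c, a) for a in group) for c in soft)
        if score > best_score:
            best, best_score = group, score
    return best, best_score

# Toy instance: x1 forced true; two soft clauses reward diversity on x2.
print(dtkms_brute_force(2, hard=[[1]], soft=[[2], [-2]], k=2)[1])  # 2
```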

    Modelling and Solving Problems Using SAT Techniques

    Solving planning problems via translation to satisfiability (SAT) is one of the most successful approaches to automated planning. In this thesis we describe several ways of encoding a planning problem represented in the SAS+ formalism into SAT. We review and adapt existing encoding schemes as well as introduce new original encodings. We compare the encodings by calculating upper bounds on the size of the formulas they produce and by running extensive experiments on benchmark problems from the 2011 International Planning Competition (IPC). In the experimental section we also compare our encodings with the state-of-the-art encodings of the planner Madagascar. The experiments show that our techniques can outperform these state-of-the-art encodings. In this thesis we also deal with a special case of post-planning optimization: the elimination of redundant actions. Eliminating all redundant actions is NP-complete. We review the existing polynomial heuristic approaches and propose our own heuristic approach, which can eliminate a greater number of, and more costly, redundant actions than the existing techniques. We also propose a SAT encoding of the plan redundancy problem which, together with MaxSAT solvers, allows us to solve the action elimination problem optimally. Experiments were done with plans produced by state-of-the-art satisficing planners on benchmark problems...
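    The MaxSAT route to optimal action elimination can be sketched as follows for STRIPS-like plans: a keep-variable per action, fluent variables per time step with effect and frame axioms as hard clauses, and a soft clause per action weighted by its cost. This is one straightforward encoding consistent with the description above, assuming PySAT's RC2 and disjoint add/delete lists; the thesis's exact encoding may differ.

```python
# Sketch: optimal redundant-action elimination via MaxSAT (PySAT's RC2).
from itertools import count
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def minimize_plan(facts, init, goal, plan):
    """plan: list of (pre, add, dele, cost) tuples over the set `facts`;
    add and dele are assumed disjoint. Returns indices of kept actions."""
    n, ids = len(plan), count(1)
    keep = [next(ids) for _ in range(n)]
    holds = {(p, t): next(ids) for p in facts for t in range(n + 1)}
    wcnf = WCNF()
    for p in facts:  # time 0 is fixed by the initial state
        wcnf.append([holds[p, 0]] if p in init else [-holds[p, 0]])
    for i, (pre, add, dele, cost) in enumerate(plan):
        for p in pre:  # a kept action needs its preconditions
            wcnf.append([-keep[i], holds[p, i]])
        for p in facts:  # effect and frame axioms for step i -> i+1
            h0, h1 = holds[p, i], holds[p, i + 1]
            if p in add:
                wcnf.append([-keep[i], h1])
            if p in dele:
                wcnf.append([-keep[i], -h1])
            if p in add or p in dele:
                wcnf.append([keep[i], -h1, h0])  # skipped: fact persists
                wcnf.append([keep[i], h1, -h0])
            else:
                wcnf.append([-h1, h0])           # unaffected: fact persists
                wcnf.append([h1, -h0])
        wcnf.append([-keep[i]], weight=cost)     # soft: drop this action
    for g in goal:
        wcnf.append([holds[g, n]])               # hard: goal at the end
    with RC2(wcnf) as rc2:
        model = rc2.compute()
    return [i for i in range(n) if model[keep[i] - 1] > 0]
```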

    The 2011 International Planning Competition

    After a three-year gap, the 2011 edition of the IPC involved a total of 55 planners, some of them versions of the same planner, distributed among four tracks: the sequential satisficing track (27 planners submitted out of 38 registered), the sequential multicore track (8 planners submitted out of 12 registered), the sequential optimal track (12 planners submitted out of 24 registered) and the temporal satisficing track (8 planners submitted out of 14 registered). Three more tracks were open for participation: temporal optimal, preferences satisficing and preferences optimal. Unfortunately, the number of submitted planners did not allow these tracks to be included in the final competition. A total of 55 people participated, grouped into 31 teams. Participants came from Australia, Canada, China, France, Germany, India, Israel, Italy, Spain, the UK and the USA. For the sequential tracks 14 domains, with 20 problems each, were selected, while the temporal track had 12 domains, also with 20 problems each. Both new and past domains were included. As in previous competitions, the domains and problems were unknown to participants and all experimentation was carried out by the organizers. To run the competition, a cluster of eleven 64-bit computers (Intel Xeon 2.93 GHz quad-core processors) running Linux was set up. Up to 1800 seconds, 6 GB of RAM and 750 GB of hard disk were available for each planner to solve a problem. This resulted in 7540 computing hours (about 315 days), plus a large number of hours devoted to preliminary experimentation with new domains, reruns and bug fixing. The detailed results of the competition, the software used for automating most tasks, the source code of all participating planners and the descriptions of domains and problems can be found at the competition's web page: http://www.plg.inf.uc3m.es/ipc2011-deterministic. This booklet summarizes the participants in the Deterministic Track of the International Planning Competition (IPC) 2011. Papers describing all the participating planners are included.

    Using Plan Decomposition for Continuing Plan Optimisation and Macro Generation

    This thesis addresses three problems in the field of classical AI planning: decomposing a plan into meaningful subplans, continuing plan quality optimisation, and macro generation for efficient planning. The importance and difficulty of each of these problems is outlined below. (1) Decomposing a plan into meaningful subplans can facilitate a number of post-plan-generation tasks, including plan quality optimisation and macro generation, the two key concerns of this thesis. However, conventional plan decomposition techniques are often unable to decompose plans because they consider dependencies among steps rather than subplans. (2) Finding high-quality plans for large planning problems is hard. Planners that guarantee optimal, or bounded suboptimal, plan quality often cannot solve them. In one experiment with the Genome Edit Distance domain, optimal planners solved only 11.5% of problems. Anytime planners promise a way to successively produce better plans over time. However, current anytime planners tend to reach a limit where they stop finding any further improvement, and the plans produced are still very far from the best possible. In the same experiment, the LAMA anytime planner solved all problems but found plans whose average quality is 1.57 times worse than the best known. (3) Finding solutions quickly, or even finding any solution for large problems within some resource constraint, is also difficult. The best-performing planner in the 2014 International Planning Competition still failed to solve 29.3% of problems. Re-engineering a domain model by capturing and exploiting structural knowledge in the form of macros has been found very useful in speeding up planners. However, existing planner-independent macro generation techniques often fail to capture some promising macro candidates because the constituent actions are not found in sequence in the totally ordered training plans. This thesis contributes to plan decomposition by developing a new plan deordering technique, named block deordering, that allows two subplans to be unordered even when their constituent steps cannot be. Building on block-deordered plans, this thesis further contributes to plan optimisation and macro generation, and their implementations in two systems, named BDPO2 and BloMa. Key to BDPO2 is a decomposition into subproblems of improving parts of the current best plan, rather than the plan as a whole; BDPO2 can be seen as an application of the large neighbourhood search strategy to planning. We use several windowing strategies to extract subplans from the block deordering of the current plan, and on-line learning to apply the most promising subplanners to the most promising subplans, as sketched below. We demonstrate empirically that, even starting with the best plans found by other means, BDPO2 is still able to continue improving plan quality, and it often produces better plans than other anytime planners when all are given enough runtime. BloMa uses an automatic, planner-independent technique to extract and filter “self-contained” subplans as macros from the block-deordered training plans. These macros represent important longer activities and improve planner coverage and efficiency compared to traditional macro generation approaches.
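    The windowing loop at the heart of BDPO2 can be caricatured as large neighbourhood search over subplans. The sketch below works over a sequential plan for simplicity (BDPO2 extracts its windows from the block-deordered plan) and treats the subplanner call as a hypothetical black box.

```python
# Schematic large-neighbourhood-search loop over subplan windows.
def improve_plan(plan, cost, replan_window, window_sizes=(4, 8, 16)):
    """Hill-climb over subplan windows; `replan_window(plan, start, sub)`
    is a hypothetical subplanner returning a cheaper subplan or None."""
    improved = True
    while improved:
        improved = False
        for size in window_sizes:
            for start in range(max(1, len(plan) - size + 1)):
                window = plan[start:start + size]
                better = replan_window(plan, start, window)
                if better is not None and cost(better) < cost(window):
                    # Splice the improved subplan back into the plan.
                    plan = plan[:start] + better + plan[start + size:]
                    improved = True
    return plan
```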

    Exact methods for Bayesian network structure learning and cost function networks

    Discrete Graphical Models (GMs) represent joint functions over large sets of discrete variables as a combination of smaller functions. There exist several instantiations of GMs, including directed probabilistic GMs like Bayesian Networks (BNs) and undirected deterministic models like Cost Function Networks (CFNs). Queries like Most Probable Explanation (MPE) on BNs and its equivalent on CFNs, which is cost minimisation, are NP-hard, but there exist robust solving techniques which have found a wide range of applications in fields such as bioinformatics, image processing, and risk analysis. In this thesis, we make contributions to the state of the art in learning the structure of BNs, namely the Bayesian Network Structure Learning problem (BNSL), and in answering MPE and minimisation queries on BNs and CFNs.
For BNSL, we discover a new point in the design space of search algorithms which achieves a different trade-off between strength and speed of inference. Existing algorithms opt either for maximal strength of inference, like the algorithms based on Integer Programming (IP) and branch-and-cut, or for maximal speed of inference, like the algorithms based on Constraint Programming (CP). We specify properties of a specific class of inequalities, called cluster inequalities, which lead to an algorithm that performs much stronger inference than the CP-based one while being much faster than the IP-based one. We combine this with novel ideas for stronger propagation and more compact domain representations to achieve state-of-the-art performance in the open-source solver ELSA (Exact Learning of bayesian network Structure using Acyclicity reasoning). For CFNs, we identify a weakness in the use of linear programming relaxations by a specific class of solvers, which includes the award-winning open-source ToulBar2 solver. We prove that this weakness can lead to suboptimal branching decisions and show how to detect maximal sets of such decisions, which can then be avoided by the solver. This allows ToulBar2 to tackle problems previously solvable only by hybrid algorithms.
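    For concreteness, the cluster inequalities referred to above are standardly written as follows in the IP formulation of BNSL, where the binary variable $x_{v,S}$ selects parent set $S$ for node $v$; the thesis may use a variant of this textbook form.

```latex
% Cluster inequalities (standard form from the BNSL literature):
% every cluster C of two or more nodes must contain at least one node
% whose chosen parent set lies entirely outside C, which rules out
% directed cycles within C.
\[
  \sum_{v \in C} \; \sum_{S \,:\, S \cap C = \emptyset} x_{v,S} \;\ge\; 1
  \qquad \forall\, C \subseteq V,\ |C| \ge 2 .
\]
```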

    Optimally Relaxing Partial-Order Plans with MaxSAT

    Partial-order plans (POPs) are attractive because of their least-commitment nature, providing enhanced plan flexibility at execution time relative to sequential plans. Despite the appeal of POPs, most recent research on automated plan generation has focused on sequential plans. In this paper we examine the task of POP generation by relaxing or modifying the action orderings of a plan to optimize for plan criteria that promote flexibility in the POP. Our approach relies on a novel partial weighted MaxSAT encoding of a plan that supports the minimization of the deordering or reordering of actions. We further extend the classical least-commitment criterion for a POP to consider the number of actions in a solution, and provide an encoding to achieve least-commitment plans with respect to this criterion. We compare the effectiveness of our approach to a previous approach for POP generation via sequential-plan relaxation. Our results show that while the previous approach is proficient at heuristically finding the optimal deordering of a plan, our approach gains greater flexibility with the optimal reordering.
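    A minimal flavour of such an encoding: Boolean ordering variables, hard clauses for the orderings that must survive plus antisymmetry and transitivity, and unit soft clauses rewarding every dropped ordering. This fragment (again assuming PySAT's RC2) covers only a simplified deordering objective; the paper's full encoding also handles causal-link threats and reordering.

```python
# Sketch: minimize retained orderings with partial weighted MaxSAT.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def optimal_deorder(n, required):
    """n plan steps; `required`: orderings (i, j) that must be kept,
    e.g. those induced by causal links. Returns a minimal ordering set."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    v = {p: k + 1 for k, p in enumerate(pairs)}
    wcnf = WCNF()
    for p in required:
        wcnf.append([v[p]])                    # hard: keep required orderings
    for i, j in pairs:
        wcnf.append([-v[i, j], -v[j, i]])      # hard: antisymmetry
        for k in range(n):
            if k != i and k != j:              # hard: transitivity
                wcnf.append([-v[i, j], -v[j, k], v[i, k]])
    for p in pairs:
        if p not in required:
            wcnf.append([-v[p]], weight=1)     # soft: drop this ordering
    with RC2(wcnf) as rc2:
        model = rc2.compute()
    return {p for p in pairs if model[v[p] - 1] > 0}

# With only 0 < 2 required, every other ordering is dropped.
print(optimal_deorder(3, {(0, 2)}))  # {(0, 2)}
```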

    Estimation of distribution algorithms in logistics : Analysis, design, and application

    This thesis considers the analysis, design and application of Estimation of Distribution Algorithms (EDAs) in logistics. It approaches continuous nonlinear optimization problems (standard test problems and stochastic transportation problems) as well as location problems, strategic safety stock placement problems and lot-sizing problems. The thesis adds to the existing literature by proposing theoretical advances for continuous EDAs and practical applications of discrete EDAs. Thus, it should be of interest to researchers in evolutionary computation, as well as practitioners in need of efficient algorithms for the above-mentioned problems.
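    For readers unfamiliar with the class of algorithms, a continuous EDA in its simplest Gaussian (UMDA-style) form fits in a few lines: sample a population from the current probabilistic model, select the best fraction, and refit the model to the selected individuals. The sphere objective and all parameter values below are illustrative choices, not taken from the thesis.

```python
# Minimal continuous EDA (Gaussian UMDA-style) on the sphere function.
import numpy as np

def umda_c(f, dim, pop=100, elite=0.3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))        # sample model
        sel = x[np.argsort([f(xi) for xi in x])[:int(elite * pop)]]
        mu = sel.mean(axis=0)                             # refit model
        sigma = sel.std(axis=0) + 1e-12
    return mu, f(mu)

best, val = umda_c(lambda x: float(np.sum(x * x)), dim=10)
print(best, val)  # converges near the origin on the sphere function
```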

    Methods for integrating machine learning and constrained optimization

    In the framework of industrial problems, the application of Constrained Optimization is known to have overall very good modeling capability and performance, and it stands as one of the most powerful, explored, and exploited tools for addressing prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunication, scheduling, and much more. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods for solving the largest variety of constrained or combinatorial optimization problems (COPs), together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like image recognition, natural language processing, and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to build systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, done in collaboration with Optit, are presented.
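    One simplified reading of such constraint injection is an alternating scheme: a learner step fits a model to the current targets, and a master step projects the model's predictions onto the constraint-feasible set to produce new targets. The sketch below follows that reading with an illustrative projection; it is not the thesis's specification of Moving Target, and the Ridge learner and `project` helper are assumptions.

```python
# Schematic alternating learner/master loop for constraint injection.
import numpy as np
from sklearn.linear_model import Ridge

def constrained_fit(X, y, project, rounds=10, alpha=1.0):
    """Alternate a learner step (fit to current targets) with a master
    step (project predictions onto the constraint-feasible set)."""
    model = Ridge(alpha=alpha).fit(X, y)
    for _ in range(rounds):
        preds = model.predict(X)
        targets = project(preds, y)                  # master step
        model = Ridge(alpha=alpha).fit(X, targets)   # learner step
    return model

# Illustrative constraint: nonnegative predictions preserving the total.
def project(preds, y):
    p = np.clip(preds, 0.0, None)
    return p * (y.sum() / max(p.sum(), 1e-9))
```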