315 research outputs found

    Star-topology decoupled state-space search in AI planning and model checking

    State-space search is a widely employed concept in many areas of computer science. The well-known state explosion problem, however, imposes a severe limitation on the effective implementation of search in state spaces that are exponential in the size of a compact system description, which captures the state-transition semantics. Decoupled state-space search, decoupled search for short, is a novel approach to tackling the state explosion. It decomposes the system such that the dependencies between components take the form of a star topology with a center and several leaf components. Decoupled search exploits the fact that the leaves in that topology are conditionally independent. Such independence naturally arises in many kinds of factored model representations, where the overall state space results from the product of several system components. In this work, we introduce decoupled search in the context of artificial intelligence planning and formal verification using model checking. Building on common formalisms, we develop the concept of the decoupled state space and prove its correctness with respect to capturing reachability of the underlying model exactly. This allows us to connect decoupled search to any search algorithm and, importantly for planning, to adapt any heuristic function to the decoupled state representation. Such heuristics then guide the search towards states that satisfy a desired goal condition. In model checking, we address the problems of verifying safety properties, which express system states that must never occur, and liveness properties, which must hold in any infinite system execution. Many approaches have been proposed in the past to tackle the state explosion problem, most prominently partial-order reduction, symmetry breaking, Petri-net unfolding, and symbolic state representations. Like decoupled search, all of these are capable of exponentially reducing the search effort, either by pruning part of the state space (the former two) or by representing large state sets compactly (the latter two). For all these techniques, we prove that decoupled search can be exponentially more efficient, confirming that it is indeed a novel concept that exploits model properties in a unique way. Given such orthogonality, we combine decoupled search with several complementary methods. Empirically, we show that decoupled search compares favourably to state-of-the-art planners on common algorithmic planning problems using standard benchmarks. In model checking, decoupled search outperforms well-established tools, both in the verification of safety and of liveness properties.

    State-space search is a widely used concept in many areas of computer science, yet its effective application is severely hampered by the state explosion problem, which is characterized by compact system models describing exponentially large state spaces. Decoupled state-space search (decoupled search) is a novel approach to counteracting the state explosion by exploiting the structure of the model, in particular the conditional independence of system components in a star topology. This independence arises in many factored models whose state space is composed of the product of several components. In this work, decoupled search is introduced in planning, as part of artificial intelligence, and in verification via model checking. Building on established formalisms, the concept of the decoupled state space is developed and its correctness with respect to exactly capturing the reachability of model states is proven. This allows decoupled search to be combined with arbitrary search algorithms. Important for planning, it further enables the use of heuristics, which guide the search towards states satisfying a desired goal condition, with the decoupled state representation. The model checking part considers the verification of safety and liveness properties, which describe undesired states and, respectively, properties that must hold in any infinite system execution. Various approaches exist to address the state explosion, the best known being partial-order reduction, symmetry reduction, Petri-net unfolding, and symbolic search. Like decoupled search, these can reduce the search effort exponentially, either by pruning part of the state space or by representing large state sets compactly. For these methods, it is proven that decoupled search can be exponentially more efficient, confirming that it is a novel concept that exploits model properties in its own way. Based on this observation, combinations with decoupled search are developed, with the exception of unfolding. Empirically, decoupled search can yield substantial advantages over state-of-the-art planners. In model checking, established tools are outperformed in the verification of both safety and liveness properties.

    Deutsche Forschungsgesellschaft; Star-Topology Decoupled State Space Search
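
    To make the star-topology idea concrete, the following is a minimal Python sketch of decoupled forward search, not the thesis' actual algorithms: a decoupled state pairs a center state with, for each leaf factor, the set of leaf states reachable under the center path taken so far. The action interface (enabled, apply, enabled_in, apply_center), the goal test, and the omission of leaf-state pricing and pruning are simplifying assumptions of this example.

        from collections import deque

        def decoupled_search(init_center, init_leaves, center_actions, leaf_actions, is_goal):
            # init_leaves: dict leaf -> initial leaf state; leaf_actions: dict leaf -> list of actions.
            def leaf_closure(center, reached):
                # Fixpoint: repeatedly apply leaf actions enabled by the current center state.
                changed = True
                while changed:
                    changed = False
                    for leaf, states in reached.items():
                        for act in leaf_actions[leaf]:
                            new = {act.apply(s) for s in states if act.enabled(s, center)} - states
                            if new:
                                states |= new
                                changed = True
                return reached

            def state_key(center, reached):
                # Hashable representation of a decoupled state.
                return (center, frozenset((l, frozenset(s)) for l, s in reached.items()))

            init_reached = leaf_closure(init_center, {l: {s} for l, s in init_leaves.items()})
            queue = deque([(init_center, init_reached)])
            seen = {state_key(init_center, init_reached)}
            while queue:
                center, reached = queue.popleft()
                if is_goal(center, reached):
                    return center, reached  # goal reachable in the decoupled state space
                for act in center_actions:
                    if act.enabled_in(center, reached):  # some reached leaf states support the action
                        new_center = act.apply_center(center)
                        new_reached = leaf_closure(new_center, {l: set(s) for l, s in reached.items()})
                        key = state_key(new_center, new_reached)
                        if key not in seen:
                            seen.add(key)
                            queue.append((new_center, new_reached))
            return None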

    Merge-and-Shrink Task Reformulation for Classical Planning

    The performance of domain-independent planning systems heavily depends on how the planning task has been modeled. This makes task reformulation an important tool for getting rid of unnecessary complexity and increasing the robustness of planners with respect to the model chosen by the user. In this paper, we represent tasks as factored transition systems (FTS) and use the merge-and-shrink (M&S) framework for task reformulation for optimal and satisficing planning. We prove that the flexibility of the underlying representation makes the M&S reformulation methods more powerful than the counterparts based on the more popular finite-domain representation. We adapt delete-relaxation and M&S heuristics to work on the FTS representation and evaluate the impact of our reformulation.
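
    As a rough illustration of the underlying representation (not the paper's implementation), the sketch below shows the two core merge-and-shrink operations on explicit transition systems: merging two factors into their synchronized product and shrinking a factor under an abstraction mapping. The TS class and the restriction to shared labels are simplifying assumptions for this example.

        from itertools import product

        class TS:
            # Explicit transition system: transitions[(state, label)] -> set of successor states.
            def __init__(self, states, transitions, init, goals):
                self.states, self.transitions = states, transitions
                self.init, self.goals = init, goals

        def merge(t1, t2):
            # Synchronized product: both factors move together on a shared label.
            states = set(product(t1.states, t2.states))
            labels = {l for (_, l) in t1.transitions} & {l for (_, l) in t2.transitions}
            transitions = {}
            for (s1, s2) in states:
                for l in labels:
                    succs = {(u1, u2)
                             for u1 in t1.transitions.get((s1, l), ())
                             for u2 in t2.transitions.get((s2, l), ())}
                    if succs:
                        transitions[((s1, s2), l)] = succs
            goals = {(g1, g2) for g1 in t1.goals for g2 in t2.goals}
            return TS(states, transitions, (t1.init, t2.init), goals)

        def shrink(t, alpha):
            # Apply an abstraction mapping alpha: state -> abstract state.
            states = {alpha(s) for s in t.states}
            transitions = {}
            for (s, l), succs in t.transitions.items():
                transitions.setdefault((alpha(s), l), set()).update(alpha(u) for u in succs)
            return TS(states, transitions, alpha(t.init), {alpha(g) for g in t.goals})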

    Adaptive search techniques in AI planning and heuristic search

    State-space search is a common approach to solve problems appearing in artificial intelligence and other subfields of computer science. In such problems, an agent must find a sequence of actions leading from an initial state to a goal state. However, the state spaces of practical applications are often too large to explore exhaustively. Hence, heuristic functions that estimate the distance to a goal state (such as straight-line distance for navigation tasks) are used to guide the search more effectively. Heuristic search is typically viewed as a static process. The heuristic function is assumed to be unchanged throughout the search, and its resulting values are directly used for guidance without applying any further reasoning to them. Yet critical aspects of the task may only be discovered during the search, e.g., regions of the state space where the heuristic does not yield reliable values. Our work here aims to make this process more dynamic, allowing the search to adapt to such observations. One form of adaptation that we consider is online refinement of the heuristic function. We design search algorithms that detect weaknesses in the heuristic, and address them with targeted refinement operations. If the heuristic converges to perfect estimates, this results in a secondary method of progress, causing search algorithms that are otherwise incomplete to eventually find a solution. We also consider settings that inherently require adaptation: In online replanning, a plan that is being executed must be amended for changes in the environment. Similarly, in real-time search, an agent must act under strict time constraints with limited information. The search algorithms we introduce in this work share a common pattern of online adaptation, allowing them to effectively react to challenges encountered during the search. We evaluate our contributions on a wide range of standard benchmarks. Our results show that the flexibility of these algorithms makes them more robust than traditional approaches, and they often yield substantial improvements over current state-of-the-art planners.

    State-space search is a frequently used approach to solving various problems that arise in artificial intelligence and other areas of computer science. Here, an agent must find a sequence of actions that forms a path from an initial state to a goal state. The state spaces of practical applications are often too large to be searched exhaustively. For this reason, the search is guided with heuristics that estimate the distance to a goal state; for example, the straight-line distance can be used as a heuristic for navigation problems. Heuristic search is typically viewed as a static process: the heuristic is assumed to be an unchanged function during the search, and the resulting values are used directly to guide the search without applying any further reasoning to them. However, critical aspects of the problem might only be recognized in the course of the search, such as regions of the state space in which the heuristic does not provide reliable estimates. In this work, the search process is made more dynamic, allowing the search to adapt to such observations. One form of this adaptation is online refinement of the heuristic. Search algorithms are developed that detect weaknesses in the heuristic and repair them with targeted refinement operations. If the heuristic converges to perfect values, this yields an additional form of progress, so that even search algorithms that are otherwise incomplete are guaranteed to eventually find a solution. Settings that inherently require adaptation are also considered: in online replanning, a plan that is currently being executed must be adapted to changes in the environment. Similarly, in real-time search, an agent must act under strict time constraints and with limited information. The search algorithms introduced in this work follow a common pattern of online adaptation, which allows them to react effectively to challenges that arise during the search. These approaches are evaluated on a broad range of benchmarks. The results show that the flexibility of these algorithms leads to increased robustness compared to traditional approaches, and substantial improvements over state-of-the-art planning systems are often achieved.

    DFG grant 389792660 as part of TRR 248 – CPEC (see https://perspicuous-computing.science), and DFG grant HO 2169/5-1, "Critically Constrained Planning via Partial Delete Relaxation".
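
    One simple, well-known instance of the kind of online heuristic refinement discussed above is an LRTA*-style update rule, shown here purely for illustration rather than as the dissertation's algorithms: the agent raises a state's heuristic value whenever a one-step lookahead reveals it to be too low, so repeated visits push the estimates toward perfect values. The successors interface and the dictionary-based heuristic are assumptions of this sketch.

        def refine_and_step(state, h, successors):
            # successors(state) yields (action_cost, successor_state) pairs (assumed interface).
            best_cost, best_succ = min(
                ((cost + h.get(succ, 0.0), succ) for cost, succ in successors(state)),
                key=lambda pair: pair[0],
            )
            # Online refinement: if the stored estimate is lower than the best one-step
            # lookahead value, it is provably too low, so raise it.
            if h.get(state, 0.0) < best_cost:
                h[state] = best_cost
            return best_succ

        def run(start, is_goal, h, successors, max_steps=10_000):
            # Greedy agent loop: act, refine the heuristic in place, repeat.
            state = start
            for _ in range(max_steps):
                if is_goal(state):
                    return state
                state = refine_and_step(state, h, successors)
            return None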

    Finding common ground when experts disagree: robust portfolio decision analysis

    We address the problem of decision making under "deep uncertainty," introducing an approach we call Robust Portfolio Decision Analysis. We introduce the idea of Belief Dominance as a prescriptive operationalization of a concept that has appeared in the literature under a number of names. We use this concept to derive a set of non-dominated portfolios, and then identify robust individual alternatives from the non-dominated portfolios. The Belief Dominance concept allows us to synthesize multiple conflicting sources of information by uncovering the range of alternatives that are intelligent responses to the range of beliefs. This goes beyond solutions that are optimal for any specific set of beliefs to uncover defensible solutions that may not otherwise be revealed. We illustrate our approach using a problem in the climate change and energy policy context: choosing among clean energy technology R&D portfolios. We demonstrate how the Belief Dominance concept can uncover portfolios that would otherwise remain hidden and identify robust individual investments.
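
    A minimal Python sketch of the belief-dominance filter follows, under the simplifying assumption that every candidate portfolio has already been scored under every belief (values[portfolio][belief] is a single number); the names and the numerical tolerance are illustrative, not the paper's notation.

        def belief_dominates(a, b, values, beliefs, eps=1e-9):
            # a dominates b if a is at least as good under every belief
            # and strictly better under at least one.
            at_least_as_good = all(values[a][w] >= values[b][w] - eps for w in beliefs)
            strictly_better = any(values[a][w] > values[b][w] + eps for w in beliefs)
            return at_least_as_good and strictly_better

        def non_dominated(portfolios, values, beliefs):
            # Keep exactly those portfolios that no other portfolio belief-dominates.
            return [p for p in portfolios
                    if not any(belief_dominates(q, p, values, beliefs)
                               for q in portfolios if q != p)]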

    Interactive Decision Analysis; Proceedings of an International Workshop on Interactive Decision Analysis and Interpretative Computer Intelligence, Laxenburg, Austria, September 20-23, 1983

    An International Workshop on Interactive Decision Analysis and Interpretative Computer Intelligence was held at IIASA in September 1983. The Workshop was motivated, firstly, by the realization that the rapid development of computers, especially microcomputers, would greatly increase the scope and capabilities of computerized decision-support systems. It is important to explore the potential of these systems for use in handling the complex technological, environmental, economic, and social problems that face the world today. Research in decision-support systems also has another, less tangible but possibly more important, motivation. The development of efficient systems for decision support requires a thorough understanding of the differences between the decision-making processes in different nations and cultures. An understanding of the different rationales underlying decision making is not only necessary for the development of efficient decision-support systems, but is also an important factor in encouraging international understanding and cooperation. The Proceedings of the Workshop contained in this volume are divided into four main sections. The first section consists of an introductory lecture in which a unifying approach to the use of computers and computerized mathematical models for decision analysis and support is described. The second section is concerned with approaches and concepts in interactive decision analysis, and section three is devoted to methods and techniques for decision analysis. The final section contains descriptions of a wide range of applications of interactive techniques, covering the fields of economics, public policy planning, energy policy evaluation, hydrology, and industrial development.

    The 2011 International Planning Competition

    After a three-year gap, the 2011 edition of the IPC involved a total of 55 planners, some of them versions of the same planner, distributed among four tracks: the sequential satisficing track (27 planners submitted out of 38 registered), the sequential multicore track (8 planners submitted out of 12 registered), the sequential optimal track (12 planners submitted out of 24 registered), and the temporal satisficing track (8 planners submitted out of 14 registered). Three more tracks were open to participation: temporal optimal, preferences satisficing, and preferences optimal. Unfortunately, the number of submitted planners did not allow these tracks to be included in the final competition. A total of 55 people participated, grouped into 31 teams. Participants came from Australia, Canada, China, France, Germany, India, Israel, Italy, Spain, the UK, and the USA. For the sequential tracks, 14 domains with 20 problems each were selected, while the temporal track had 12 domains, also with 20 problems each. Both new and past domains were included. As in previous competitions, domains and problems were unknown to participants, and all the experimentation was carried out by the organizers. To run the competition, a cluster of eleven 64-bit computers (Intel Xeon 2.93 GHz quad-core processors) running Linux was set up. Up to 1800 seconds, 6 GB of RAM, and 750 GB of hard disk space were available to each planner for solving a problem. This resulted in 7540 computing hours (about 315 days), plus a large number of hours devoted to preliminary experimentation with new domains, reruns, and bug fixing. The detailed results of the competition, the software used for automating most tasks, the source code of all the participating planners, and the descriptions of domains and problems can be found at the competition's web page: http://www.plg.inf.uc3m.es/ipc2011-deterministic

    This booklet summarizes the participants in the Deterministic Track of the International Planning Competition (IPC) 2011. Papers describing all the participating planners are included.

    Multicriteria analysis in the appraisal of projects: the case of Santa Catalina Watershed project in the Philippines

    This study briefly reviews the conventional project appraisal method and presents a new method that takes account of the multiple objectives, conflicts of interest, externalities, and intangibles in projects dealing with public goods. A brief historical perspective of traditional methodologies is presented, which provides a starting point in recommending an alternative appraisal method called Multicriteria Analysis. The discussion of its theoretical premises is presented in earlier chapters. This is followed by a chapter that provides relevant information about the Philippines and about the case study area, the Santa Catalina Watershed. The last chapters present an empirical application of a multicriteria analysis variant, concordance analysis, to the case study area. The results of the study show that the new methodology can incorporate many issues that are otherwise left out in conventional economic-financial analysis and can overcome some of the major difficulties of cost-benefit analysis. A significant feature of the methodology is its departure from pursuing a single objective function and its attempt to incorporate as many objectives as are considered necessary in the decision framework to reach a 'satisficing compromise' solution. While it does not consider trade-offs in the analysis, the methodology provides for an interactive procedure which draws the decision-maker into the evaluation process and, in doing so, reveals his hidden preferences and solves the problem of trade-offs. The issue of time may also be dealt with by compounding the impacts forward to a common terminal date. Uncertainties are accounted for by a more sophisticated sensitivity analysis based on a stochastic approach. The study recognizes that the methodology has great potential for giving more information to a decision-maker and a stronger basis for deciding within a context of conflicts of interest and multiple objectives. It is recommended, however, that the methodology be applied on an exploratory basis, since the data needed may not yet be available in the Philippines or their collection may prove to be lengthy.
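
    To give the flavour of concordance analysis (an ELECTRE-style outranking approach), the sketch below computes pairwise concordance indices: the share of criterion weight on which one alternative does at least as well as another. The scores and weights structures and the "higher is better" convention are assumptions for the example, not details taken from the study.

        def concordance(a, b, scores, weights):
            # Share of criterion weight on which alternative a does at least as well as b.
            total = sum(weights.values())
            agree = sum(w for crit, w in weights.items()
                        if scores[a][crit] >= scores[b][crit])
            return agree / total

        def concordance_matrix(alternatives, scores, weights):
            # Pairwise concordance indices for all ordered pairs of distinct alternatives.
            return {(a, b): concordance(a, b, scores, weights)
                    for a in alternatives for b in alternatives if a != b}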

    A Risk-Informed Decision-Making Methodology to Improve Liquid Rocket Engine Program Tradeoffs

    This work provides a risk-informed decision-making methodology to improve liquid rocket engine program tradeoffs among the conflicting areas of concern of affordability, reliability, and initial operational capability (IOC), by taking into account psychological and economic theories in combination with reliability engineering. Technical program risks are associated with the number of predicted failures of the test-analyze-and-fix (TAAF) cycle, which is based on the maturity of the engine components. Financial and schedule program risks are associated with the epistemic uncertainty of the models that determine the measures of effectiveness in the three areas of concern. The inputs of the affordability and IOC models reflect non-technical and technical factors such as team experience, design scope, technology readiness level, and manufacturing readiness level. The reliability model introduces the Reliability-As-an-Independent-Variable (RAIV) strategy, which aggregates fictitious or actual hot-fire tests of testing profiles that differ from the actual mission profile to estimate the system reliability. The main RAIV strategy inputs are the physical or functional architecture of the system, the principal test plan strategy, a stated reliability-by-credibility requirement, and the failure mechanisms that define the reliable life of the system components. The results of the RAIV strategy, which are the number of hardware sets and the number of hot-fire tests, are used as inputs to the affordability and IOC models. Satisficing within each tradeoff is attained by maximizing the weighted sum of the normalized areas of concern, subject to constraints based on the decision-maker's targets and uncertainty about affordability, reliability, and IOC, using genetic algorithms. In the planning stage of an engine program, the decision variables of the genetic algorithm correspond to fictitious hot-fire tests that include TAAF cycle failures. In the program execution stage, the RAIV strategy is used as a reliability growth planning, tracking, and projection model. The main contributions of this work are the development of a comprehensible and consistent risk-informed tradeoff framework, the RAIV strategy that links affordability and reliability, a strategy to define an industry or government standard or guideline for liquid rocket engine hot-fire test plans, and an alternative to the U.S. Crow/AMSAA reliability growth model applying the RAIV strategy.
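
    The satisficing tradeoff described above amounts to maximizing a weighted sum of normalized measures subject to the decision-maker's targets. A minimal sketch of such a fitness function for a genetic algorithm follows; the names, the penalty scheme, and the [0, 1] normalization are illustrative assumptions, not the paper's actual model.

        def fitness(measures, weights, targets, penalty=1e3):
            # measures, weights, targets: dicts keyed by area of concern
            # (e.g. affordability, reliability, IOC), with measures already
            # normalized to [0, 1] so that higher is better.
            score = sum(weights[k] * measures[k] for k in weights)
            # Penalize any shortfall below the decision-maker's target values.
            violation = sum(max(0.0, targets[k] - measures[k]) for k in targets)
            return score - penalty * violation  # a genetic algorithm would maximize this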

    Methodological Advances in DEA

    We survey the methodological advances in DEA over the last 25 years and discuss the necessary conditions for a sound empirical application. We hope this survey will contribute to the further dissemination of DEA, the knowledge of its relative strengths and weaknesses, and the tools currently available for exploiting its full potential. Our main points are illustrated by the case of the DEA study used by the regulatory office of the Dutch electricity sector (Dienst Toezicht Elektriciteitswet; Dte) for setting price caps.
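
    For readers unfamiliar with DEA, a minimal sketch of the classic input-oriented CCR model (constant returns to scale) as a linear program is given below. It is the textbook formulation solved with scipy, not the tooling discussed in the survey, and the array layout (inputs and outputs as rows, decision-making units as columns) is an assumption of the example.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            # X: (m inputs x n DMUs), Y: (s outputs x n DMUs); returns theta for DMU o.
            m, n = X.shape
            s, _ = Y.shape
            # Variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
            c = np.r_[1.0, np.zeros(n)]
            # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
            A_in = np.hstack([-X[:, [o]], X])
            # Outputs: sum_j lambda_j * y_rj >= y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0] if res.success else None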