
    Star-topology decoupled state-space search in AI planning and model checking

    State-space search is a widely employed concept in many areas of computer science. The well-known state explosion problem, however, imposes a severe limitation on the effective implementation of search in state spaces that are exponential in the size of a compact system description, which captures the state-transition semantics. Decoupled state-space search, decoupled search for short, is a novel approach to tackle the state explosion. It decomposes the system such that the dependencies between components take the form of a star topology with a center and several leaf components. Decoupled search exploits the fact that the leaves in that topology are conditionally independent. Such independence naturally arises in many kinds of factored model representations, where the overall state space results from the product of several system components. In this work, we introduce decoupled search in the context of artificial intelligence planning and formal verification using model checking. Building on common formalisms, we develop the concept of the decoupled state space and prove its correctness with respect to capturing reachability of the underlying model exactly. This allows us to connect decoupled search to any search algorithm and, important for planning, to adapt any heuristic function to the decoupled state representation. Such heuristics then guide the search towards states that satisfy a desired goal condition. In model checking, we address the problems of verifying safety properties, which express system states that must never occur, and liveness properties, which must hold in any infinite system execution. Many approaches have been proposed in the past to tackle the state explosion problem, most prominently partial-order reduction, symmetry breaking, Petri-net unfolding, and symbolic state representations. Like decoupled search, all of these are capable of exponentially reducing the search effort, either by pruning part of the state space (the former two) or by representing large state sets compactly (the latter two). For all these techniques, we prove that decoupled search can be exponentially more efficient, confirming that it is indeed a novel concept that exploits model properties in a unique way. Given such orthogonality, we combine decoupled search with several complementary methods. Empirically, we show that decoupled search compares favourably to state-of-the-art planners on common algorithmic planning problems using standard benchmarks. In model checking, decoupled search outperforms well-established tools, both in the context of the verification of safety and of liveness properties.
    State-space search is a widely used concept in many areas of computer science, but its effective application is considerably hampered by the state explosion problem. The state explosion is characterized by the fact that compact system models describe exponentially large state spaces. Decoupled state-space search (decoupled search) is a novel approach to counteract the state explosion by exploiting the structure of the model, in particular the conditional independence of system components in a star topology. This independence arises in many factored models whose state space is composed of the product of several components. In this work, decoupled search is introduced in planning, as part of artificial intelligence, and in verification via model checking. In established formalisms, the concept of the decoupled state space is developed and its correctness with respect to exactly capturing the reachability of model states is proven. This makes it possible to combine decoupled search with arbitrary search algorithms. Important for planning, moreover, is the use of heuristics, which guide the search towards states that satisfy a desired goal condition, with the decoupled state representation. The model checking part considers the verification of safety as well as liveness properties, which describe undesired states and, respectively, properties that must hold in any infinite system execution. Various approaches exist to address the state explosion. The best known are partial-order reduction, symmetry reduction, unfolding of Petri nets, and symbolic search. Like decoupled search, these can reduce the search effort exponentially, either by pruning part of the state space or by compactly representing large sets of states. For these techniques, it is proven that decoupled search can be exponentially more efficient. This demonstrates that it is a novel concept that exploits model properties in its own way. Based on this observation, combinations with decoupled search are developed, with the exception of unfolding. Empirically, decoupled search can yield clear advantages over state-of-the-art planners. In model checking, established tools are outperformed, both for the verification of safety and of liveness properties.
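    To make the star-topology idea more concrete, here is a minimal, hypothetical sketch (not the thesis's formalism): a decoupled state pairs one center value with, per leaf factor, the set of leaf values reachable under the center path so far; only center moves drive the search, and the leaf sets are re-saturated after each center move. All model names (road_open, truck_A, etc.) are invented for illustration.

```python
from collections import deque

# Hypothetical mini-model (all names invented): one center variable and two
# leaf factors. Leaf moves depend only on the current center value (star
# topology); center moves may require that some leaf value has been reached.

# Leaf transitions: leaf name -> {center value: [(leaf_from, leaf_to), ...]}
LEAF_MOVES = {
    "truck_A": {"road_open": [("depot", "city"), ("city", "depot")]},
    "truck_B": {"road_open": [("depot", "port")]},
}

# Center transitions: (center_from, required (leaf, value) or None, center_to)
CENTER_MOVES = [
    ("road_closed", None, "road_open"),
    ("road_open", ("truck_B", "port"), "road_closed"),
]

def saturate(center, leaf_sets):
    """Close each leaf's reached-value set under leaf moves enabled by `center`."""
    out = {}
    for leaf, values in leaf_sets.items():
        reached = set(values)
        frontier = deque(reached)
        moves = LEAF_MOVES.get(leaf, {}).get(center, [])
        while frontier:
            value = frontier.popleft()
            for src, dst in moves:
                if src == value and dst not in reached:
                    reached.add(dst)
                    frontier.append(dst)
        out[leaf] = frozenset(reached)
    return out

def decoupled_successors(state):
    """Expand a decoupled state by its applicable center moves only."""
    center, leaf_sets = state
    for src, requirement, dst in CENTER_MOVES:
        if src != center:
            continue
        if requirement is not None and requirement[1] not in leaf_sets[requirement[0]]:
            continue  # no leaf path has reached the required leaf value yet
        yield dst, saturate(dst, leaf_sets)

# One decoupled state stands for many explicit states: every combination of
# one reached value per leaf, under the same center value.
init = ("road_closed",
        saturate("road_closed", {"truck_A": {"depot"}, "truck_B": {"depot"}}))
for successor in decoupled_successors(init):
    print(successor)
```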

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
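    As an illustration of the state-space generation loop described above, the following sketch computes the reachable set as the least fixpoint of the image operator. Plain Python sets stand in for the decision diagrams a symbolic tool would use, so this shows only the algorithmic shape, not the symbolic encoding or Saturation's event ordering; the toy transition relation is invented.

```python
# Minimal sketch of state-space generation as a least fixpoint of the image
# operator: R_{k+1} = R_k | Image(R_k). Symbolic tools store R and the
# transition relation as decision diagrams; explicit Python sets stand in here.

def reachable(initial_states, transition_relation):
    """initial_states: iterable of states; transition_relation: set of (s, s') pairs."""
    reached = set(initial_states)
    frontier = set(initial_states)
    while frontier:
        # Image(frontier): all states reachable in one step from the frontier.
        image = {dst for (src, dst) in transition_relation if src in frontier}
        frontier = image - reached      # keep only genuinely new states
        reached |= frontier
    return reached

# Toy example: a counter modulo 4 plus a "dead" state that is never left.
relation = {(0, 1), (1, 2), (2, 3), (3, 0), (3, "dead")}
print(reachable({0}, relation))         # {0, 1, 2, 3, 'dead'}
```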

    Doctor of Philosophy

    Over the last decade, cyber-physical systems (CPSs) have seen significant applications in many safety-critical areas, such as autonomous automotive systems, automatic pilot avionics, wireless sensor networks, etc. A CPS uses networked embedded computers to monitor and control physical processes. The motivating example for this dissertation is the use of a fault-tolerant routing protocol for a Network-on-Chip (NoC) architecture that connects electronic control units (ECUs) to regulate sensors and actuators in a vehicle. With a network allowing ECUs to communicate with each other, it is possible for them to share processing power to improve performance. In addition, networked ECUs enable flexible mapping to physical processes (e.g., sensors, actuators), which increases resilience to ECU failures by reassigning physical processes to spare ECUs. For the on-chip routing protocol, the ability to tolerate network faults is important for hardware reconfiguration to maintain the normal operation of a system. Adding a fault-tolerance feature to a routing protocol, however, increases its design complexity, making it prone to many functional problems. Formal verification techniques are therefore needed to verify its correctness. This dissertation proposes a link-fault-tolerant, multiflit wormhole routing algorithm, and its formal modeling and verification using two different methodologies. As an improvement upon previously published fault-tolerant routing algorithms, a link-fault routing algorithm is proposed that relaxes their unrealistic node-fault assumptions while conservatively avoiding deadlock by appropriately dropping network packets. This routing algorithm, together with its routing architecture, is then modeled in the process-algebra language LNT, and compositional verification techniques are used to verify its key functional properties. As a comparison, it is modeled using channel-level VHDL, which is compiled to labeled Petri nets (LPNs). Algorithms for a partial-order reduction method on LPNs are given. An optimal result is obtained from heuristics that trace back on LPNs to find causally related enabled predecessor transitions. Key observations are made from the comparison between these two verification methodologies.
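    The dissertation's routing algorithm is not reproduced here; the following is only a hypothetical sketch of the general idea of routing around known faulty links on a 2D mesh and conservatively dropping a packet, rather than risking deadlock, when no fault-free productive hop remains. The mesh coordinates and fault set are invented.

```python
# Hypothetical sketch (not the dissertation's algorithm): forward a packet one
# hop at a time on a 2D mesh, preferring productive X moves, then productive Y
# moves, skipping links marked faulty, and dropping the packet conservatively
# when every productive link from the current node is faulty.

def next_hop(cur, dest, faulty_links):
    """cur, dest: (x, y) nodes; faulty_links: set of frozenset({a, b}) edges."""
    (cx, cy), (dx, dy) = cur, dest
    candidates = []
    if dx != cx:
        candidates.append((cx + (1 if dx > cx else -1), cy))  # productive X move
    if dy != cy:
        candidates.append((cx, cy + (1 if dy > cy else -1)))  # productive Y move
    for nxt in candidates:
        if frozenset((cur, nxt)) not in faulty_links:
            return nxt
    return None  # drop: every productive link is faulty

def route(src, dest, faulty_links, max_hops=64):
    path, cur = [src], src
    while cur != dest and len(path) <= max_hops:
        cur = next_hop(cur, dest, faulty_links)
        if cur is None:
            return path, "dropped"
        path.append(cur)
    return path, "delivered"

faults = {frozenset({(1, 0), (2, 0)})}
# -> ([(0, 0), (1, 0)], 'dropped'): the only productive link left is faulty.
print(route((0, 0), (2, 0), faults))
```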

    Automata-theoretic and bounded model checking for linear temporal logic

    In this work we study methods for model checking the temporal logic LTL. The focus is on the automata-theoretic approach to model checking and on bounded model checking. We begin by examining automata-theoretic methods to model check LTL safety properties. The model checking problem can be reduced to checking whether the language of a finite state automaton on finite words is empty. We describe an efficient algorithm for generating small finite state automata for so-called non-pathological safety properties. The presented implementation is the first tool able to decide whether a formula is non-pathological. The experimental results show that treating safety properties separately can benefit model checking at very little cost. In addition, we find supporting evidence for the view that minimising the automaton representing the property does not always lead to a small product state space. A deterministic property automaton can result in a smaller product state space even though it might have a larger number of states. Next we investigate modular analysis. Modular analysis is a state space reduction method for modular Petri nets. The method can be used to construct a reduced state space called the synchronisation graph. We devise an on-the-fly automata-theoretic method for model checking the behaviour of a modular Petri net from the synchronisation graph. The solution is based on reducing the model checking problem to an instance of verification with testers. We analyse the tester verification problem and present an efficient on-the-fly algorithm, the first complete solution to the tester verification problem, based on generalised nested depth-first search. We have also studied propositional encodings for bounded model checking LTL. A new, simple, linear-sized encoding is developed and experimentally evaluated. The implementation in the NuSMV2 model checker is competitive with previously presented encodings. We show how to generalise the LTL encoding to a more succinct logic: LTL with past operators. The generalised encoding compares favourably with previous encodings for LTL with past operators. Links between bounded model checking and the automata-theoretic approach are also explored.
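    For readers unfamiliar with the automata-theoretic machinery, the sketch below shows the classic (non-generalised) nested depth-first search on which such on-the-fly emptiness checks are built: the outer DFS explores the product state space and, in post-order, an inner DFS launched from each accepting state looks for a cycle back to it. The thesis's generalised variant for tester verification is more involved; the toy automaton here is invented.

```python
# Classic nested depth-first search for Buechi emptiness: returns True iff an
# accepting cycle is reachable from the initial state, i.e. the language of
# the product automaton is non-empty (a counterexample exists).

def nested_dfs(initial, successors, accepting):
    """successors: state -> iterable of states; accepting: state -> bool."""
    outer_visited, inner_visited = set(), set()

    def inner(start, state):
        # Look for a cycle that returns to the accepting seed state `start`.
        for nxt in successors(state):
            if nxt == start:
                return True
            if nxt not in inner_visited:
                inner_visited.add(nxt)
                if inner(start, nxt):
                    return True
        return False

    def outer(state):
        outer_visited.add(state)
        for nxt in successors(state):
            if nxt not in outer_visited and outer(nxt):
                return True
        # Post-order: start the inner search only from accepting states.
        return accepting(state) and inner(state, state)

    return outer(initial)

# Toy automaton: 0 -> 1 -> 2 -> 1, with state 2 accepting.
edges = {0: [1], 1: [2], 2: [1]}
print(nested_dfs(0, lambda s: edges.get(s, []), lambda s: s == 2))  # True
```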

    Conflict-driven learning in AI planning state-space search

    Many combinatorial computation problems in computer science can be cast as a reachability problem in an implicitly described, potentially huge, graph: the state space. State-space search is a versatile and widespread method to solve such reachability problems, but it requires some form of guidance to prevent exploring that combinatorial space exhaustively. Conflict-driven learning is an indispensable search ingredient for solving constraint satisfaction problems (most prominently, Boolean satisfiability). It guides search towards solutions by identifying conflicts during the search, i.e., search branches not leading to any solution, and learning from them knowledge to avoid similar conflicts in the remainder of the search. This thesis adapts the conflict-driven learning methodology to more general classes of reachability problems. Specifically, our work is placed in AI planning. We consider goal-reachability objectives in classical planning and in planning under uncertainty. The canonical form of "conflicts" in this context are dead-end states, i.e., states from which the desired goal property cannot be reached. We pioneer methods for learning sound and generalizable dead-end knowledge from conflicts encountered during forward state-space search. This embraces the following core contributions: When acting under uncertainty, the presence of dead-end states may make it impossible to satisfy the goal property with absolute certainty. The natural planning objective then is MaxProb, maximizing the probability of reaching the goal. However, algorithms for MaxProb probabilistic planning are severely underexplored. We close this gap by developing a large design space of probabilistic state-space search methods, contributing new search algorithms, admissible state-space reduction techniques, and goal-probability bounds suitable for heuristic state-space search. We systematically explore this design space through an extensive empirical evaluation. The key to our adaptation of conflict-driven learning are unsolvability detectors, i.e., goal-reachability overapproximations. We design three complementary families of such unsolvability detectors, building upon known techniques: critical-path heuristics, linear-programming-based heuristics, and dead-end traps. We develop search methods to identify conflicts in deterministic and probabilistic state spaces, and we develop suitable refinement methods for the different unsolvability detectors so that they recognize these states. Arranged in a depth-first search, our techniques approach the elegance of conflict-driven learning in constraint satisfaction, featuring the ability to learn to refute search subtrees, and intelligent backjumping to the root cause of a conflict. We provide a comprehensive experimental evaluation, demonstrating that the proposed techniques yield state-of-the-art performance for finding plans for solvable classical planning tasks, proving classical planning tasks unsolvable, and solving MaxProb in probabilistic planning, on benchmarks where dead-end states abound.
    Many combinatorially complex computation problems in computer science can be understood as reachability problems in an implicitly described, potentially huge graph: the state space. State-space search is a widely used method to solve such reachability problems, but its efficiency depends crucially on the use of strict search control mechanisms. Conflict-driven learning is an essential search component for solving constraint satisfaction problems (such as the satisfiability problem of propositional logic), which learns from conflicts, i.e., errors made during search, new control rules that avoid similar conflicts in the future. In this work we extend the underlying methodology to goal-reachability questions as they arise in classical and probabilistic planning, a subfield of artificial intelligence. The canonical form of "conflicts" in this context are so-called dead ends, states from which the goal condition cannot be reached. We present methods that make it possible to learn, from such conflicts encountered during state-space search, sound and generalizable knowledge about dead ends. Our work comprises the following contributions: When the effects of acting are subject to uncertainty, the existence of dead ends can make it impossible to satisfy the goal condition under all circumstances. The most natural planning objective in this case is MaxProb, maximizing the probability that the goal condition is reached. Planning algorithms for MaxProb, however, are little explored. To close this gap, we construct a comprehensive toolbox of search methods for probabilistic state spaces, developing new search algorithms, state-space reduction methods, and bounds on the goal-reachability probability as needed by heuristic search algorithms. We systematically explore the resulting design space in a broad empirical study. The basis of our adaptation of conflict-driven learning are unreachability detectors. We design three families of such detectors based on known techniques: critical-path heuristics, heuristics based on linear programming, and dead-end traps. We develop search methods to detect conflicts in deterministic and probabilistic state spaces, as well as methods to refine the various unreachability detectors based on the detected conflicts. Instantiated as depth-first search, our techniques exhibit properties similar to conflict-driven learning for constraint satisfaction problems. We evaluate the developed methods empirically and show that, under certain conditions, conflict-driven learning can lead to significant search reductions when finding plans for solvable classical planning tasks, proving classical planning tasks unsolvable, and solving MaxProb in probabilistic planning.
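    As a rough illustration of the conflict-driven flavour of this search, the sketch below runs a depth-first search that records a state as a dead end once its whole subtree has been refuted independently of the current path, and prunes on that knowledge later. Unlike the thesis, it memorises concrete states rather than refining generalising unsolvability detectors; the toy task is invented.

```python
# Dead-end learning sketch (not the thesis's algorithm): a depth-first search
# that records a state as a dead end only when its refutation did not depend
# on the current search path, and prunes on that knowledge elsewhere.

def search(state, successors, is_goal, dead_ends, on_path=frozenset()):
    """Return (plan_or_None, path_dependent_refutation)."""
    if is_goal(state):
        return [state], False
    if state in dead_ends:
        return None, False                       # pruned by a learned conflict
    path_dependent = False
    for nxt in successors(state):
        if nxt == state or nxt in on_path:
            path_dependent = True                # refuted only w.r.t. this path
            continue
        plan, dependent = search(nxt, successors, is_goal,
                                 dead_ends, on_path | {state})
        if plan is not None:
            return [state] + plan, False
        path_dependent = path_dependent or dependent
    if not path_dependent:
        dead_ends.add(state)     # sound: every successor is itself a dead end
    return None, path_dependent

# Toy task (invented): reach state 4 from state 0; states 2 and 3 are dead ends.
edges = {0: [2, 1], 1: [3, 4], 2: [3], 3: []}
learned = set()
plan, _ = search(0, lambda s: edges.get(s, []), lambda s: s == 4, learned)
print(plan)      # [0, 1, 4]
print(learned)   # {2, 3}: both proven unable to reach the goal and pruned later
```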

    Fiscal Deficits and Executive Planning Horizons

    Executive control of government is generally not a long-term job. Relatively short executive tenure should therefore be expected to play an important role in determining the degree to which policymakers internalize the future costs associated with their current fiscal behavior. The effects of policymakers' expected planning horizons on macroeconomic outcomes, however, have been difficult to model outside of a fixed term limit context, due to the unobserved likelihood of remaining in office, along with potential endogeneity problems where re-election campaigns can be enhanced with generous, deficit-financed expenditures in election years. From a globally representative sample of 79 countries over a 32-year period (1980-2012), this paper provides empirical evidence suggesting that incumbent governments who know that they will not be in office in the following period with a probability of one generate significantly higher deficits in a linear discounting model, and that the least responsible fiscal outcomes occur where the likelihood of re-election is around fifty percent in quadratic discounting models.

    Wireless Sensor Data Transport, Aggregation and Security

    Wireless sensor networks (WSNs) and the communication and security therein have been gaining prominence in the tech industry recently, with the emergence of the so-called Internet of Things (IoT). The process from acquiring data to making a reactive decision based on the acquired sensor measurements is complex and requires careful execution of several steps. In many of these steps there are still technological gaps to fill, owing to the fact that several primitives that are desirable in a sensor network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities that are at the core of IoT architectures, we have developed solutions that are analyzed and discussed in the following chapters. The chain of steps from the acquisition of sensor samples until these samples reach a control center, or the cloud where the data analytics are performed, starts with the acquisition of the sensor measurements at the correct time and, importantly, synchronously among all sensors deployed. This synchronization has to be network-wide, including both the wired core network as well as the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is, furthermore, a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the volume of sensors it must rely on an automated key-distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information, and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to decide which data to accept and consider for the mentioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic that is investigated in this thesis.
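    The statistical filtering idea can be pictured with a small, hypothetical example (not the scheme developed in the thesis): assuming several sensors observe the same physical quantity, readings far from a robust consensus (median plus a multiple of the median absolute deviation) are attributed to faulty or compromised nodes and excluded from aggregation. Node names, readings, and the threshold k are invented.

```python
# Hypothetical false-data detection via robust statistics: flag nodes whose
# readings deviate from the median by more than k times the median absolute
# deviation (MAD) of all concurrent readings of the same quantity.

from statistics import median

def flag_outlier_nodes(readings, k=4.0):
    """readings: dict node_id -> measurement. Returns the set of suspect nodes."""
    values = list(readings.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1e-9  # guard against MAD == 0
    return {node for node, v in readings.items() if abs(v - med) / mad > k}

readings = {"n1": 21.3, "n2": 21.1, "n3": 21.6, "n4": 35.0, "n5": 21.2}
print(flag_outlier_nodes(readings))   # {'n4'}: far from the ~21 degree consensus
```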

    Planning while Believing to Know

    Over the last few years, the concept of Artificial Intelligence (AI) has become essential in our daily life and in several working scenarios. Among the various branches of AI, automated planning and the study of multi-agent systems are central research fields. This thesis focuses on a combination of these two areas: a specialized kind of planning known as Multi-agent Epistemic Planning. This field of research concentrates on all those scenarios where agents, reasoning in the space of knowledge/beliefs, try to find a plan to reach a desirable state from a starting one. This requires agents that are able to reason about their own and others' knowledge/beliefs and that are, therefore, capable of performing epistemic reasoning. Being aware of the information flows and of others' states of mind is, in fact, a key aspect in several planning situations. That is why developing autonomous agents that can reason while considering the perspectives of their peers is paramount to model a variety of real-world domains. The objective of our work is to formalize an environment where a complete characterization of the agents' knowledge/belief interactions and updates is possible. In particular, we achieved this goal by defining a new action-based language for Multi-agent Epistemic Planning and implementing epistemic planners based on it. These solvers, flexible enough to reason about various domains and different nuances of knowledge/belief update, can provide a solid base for further research on epistemic reasoning or for real-world applications. This dissertation also proposes the design of a more general epistemic planning architecture. This architecture, following well-known cognitive theories, tries to emulate some characteristics of the human decision-making process. In particular, we envisioned a system composed of several solving processes, each with its own trade-off between efficiency and correctness, arbitrated by a meta-cognitive module.
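    To give a flavour of what an epistemic planner manipulates, here is a drastically simplified, hypothetical possible-worlds sketch. The thesis's action language works on richer structures with per-agent accessibility relations and distinguishes how observable each action is to each agent; here each agent merely keeps a set of worlds it considers possible, and a truthful public announcement removes the incompatible worlds. The propositions and agent names are invented.

```python
# Simplified possible-worlds view of knowledge and public announcement update.

from itertools import product

PROPS = ("coin_heads", "box_open")
WORLDS = [dict(zip(PROPS, bits)) for bits in product([True, False], repeat=2)]

def knows(belief, prop):
    """An agent knows `prop` iff it holds in every world it considers possible."""
    return all(world[prop] for world in belief)

def announce(beliefs, prop):
    """Public announcement of `prop`: every agent discards worlds violating it."""
    return {agent: [w for w in belief if w[prop]] for agent, belief in beliefs.items()}

# Initially neither agent knows whether the coin shows heads.
beliefs = {"alice": list(WORLDS), "bob": list(WORLDS)}
print(knows(beliefs["alice"], "coin_heads"))       # False

beliefs = announce(beliefs, "coin_heads")          # truthful public announcement
print(knows(beliefs["alice"], "coin_heads"),
      knows(beliefs["bob"], "coin_heads"))         # True True
```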