34 research outputs found

    Cooperative Monitoring to Diagnose Multiagent Plans

    Diagnosing the execution of a Multiagent Plan (MAP) means identifying and explaining action failures (i.e., actions that did not reach their expected effects). Current approaches to MAP diagnosis are substantially centralized, and assume that action failures are independent of each other. In this paper, the diagnosis of MAPs, executed in a dynamic and partially observable environment, is addressed in a fully distributed and asynchronous way; in addition, action failures are no longer assumed to be independent of each other. The paper presents a novel methodology, named Cooperative Weak-Committed Monitoring (CWCM), enabling agents to cooperate while monitoring their own actions. Cooperation helps the agents to cope with very scarcely observable environments: what an agent cannot observe directly can be acquired from other agents. CWCM exploits nondeterministic action models to carry out two main tasks: detecting action failures and building trajectory-sets (i.e., structures representing the knowledge an agent has about the environment in the recent past). Relying on trajectory-sets, each agent is able to explain its own action failures in terms of exogenous events that have occurred during the execution of the actions themselves. To cope with dependent failures, CWCM is coupled with a diagnostic engine that distinguishes between primary and secondary action failures. An experimental analysis demonstrates that the CWCM methodology, together with the proposed diagnostic inferences, is effective in identifying and explaining action failures even in scenarios where the system observability is significantly reduced.
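
    As an illustration only (the names and data structures here are hypothetical, not the paper's CWCM implementation), the failure-detection step described above, in which an agent checks an action's expected effects against its own partial observation enriched with observations shared by cooperating agents, can be sketched as follows:

    # Hypothetical sketch: an action "fails" when some expected effect is
    # contradicted by the agent's partial observation, possibly augmented with
    # values shared by cooperating agents (the cooperation step of CWCM).

    def detect_failure(expected_effects, own_observation, shared_observations):
        """Return the set of expected effects violated after execution.

        expected_effects: set of (variable, value) pairs the action should achieve.
        own_observation: dict variable -> value observed directly by the agent.
        shared_observations: list of dicts received from cooperating agents.
        """
        # Merge local and shared observations; local readings take precedence.
        belief = {}
        for obs in shared_observations:
            belief.update(obs)
        belief.update(own_observation)

        violated = set()
        for var, expected_value in expected_effects:
            if var in belief and belief[var] != expected_value:
                violated.add((var, expected_value))
            # If var is unobserved by everyone, the agent stays weakly committed:
            # it neither confirms success nor reports a failure for this effect.
        return violated

    # Example: the agent expects door 'd1' to be open after moving through it.
    failures = detect_failure({("d1", "open"), ("at", "room2")},
                              {"at": "room2"},
                              [{"d1": "closed"}])
    print(failures)  # {('d1', 'open')} -> an action failure to be explained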

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the survey, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
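
    As a quick illustration of the concept being surveyed (the rule set below is hypothetical, not taken from the report), a decision table maps combinations of condition entries to prescribed actions, which translates directly into a lookup structure:

    # A tiny, hypothetical decision table for an order-handling rule set:
    # each entry maps a combination of condition values to a list of actions.
    # Conditions: (customer_is_premium, order_over_100, item_in_stock)
    DECISION_TABLE = {
        (True,  True,  True):  ["apply_discount", "ship_today"],
        (True,  False, True):  ["ship_today"],
        (False, True,  True):  ["apply_discount", "ship_standard"],
        (False, False, True):  ["ship_standard"],
        # A "don't care" entry on the first two conditions expands to four rules:
        # whenever the item is out of stock, the only action is to back-order.
        (True,  True,  False): ["back_order"],
        (True,  False, False): ["back_order"],
        (False, True,  False): ["back_order"],
        (False, False, False): ["back_order"],
    }

    def decide(premium, over_100, in_stock):
        return DECISION_TABLE[(premium, over_100, in_stock)]

    print(decide(True, True, False))   # ['back_order']
    print(decide(False, True, True))   # ['apply_discount', 'ship_standard']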

    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams, and uses it to study evolvability issues. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. To be able to apply evolvability to artificial evolution, the field must first understand and characterise it; this will then lead to systems which are much more capable than they are currently. An experimental approach is taken. Carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, completely alleviating local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
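
    For readers unfamiliar with the phenotype being evolved, the following minimal sketch (not the thesis's own encoding or genotype-phenotype map) represents a Binary Decision Diagram as decision nodes with low/high branches and evaluates it by following a single root-to-leaf path:

    # Minimal, hypothetical BDD representation: each internal node tests one
    # variable and branches to a 'low' (var = 0) or 'high' (var = 1) successor;
    # leaves hold the Boolean constants.

    class Node:
        def __init__(self, var=None, low=None, high=None, value=None):
            self.var, self.low, self.high, self.value = var, low, high, value

    TRUE, FALSE = Node(value=True), Node(value=False)

    def evaluate(node, assignment):
        """Evaluate the BDD on a dict {variable_index: bool}."""
        while node.value is None:
            node = node.high if assignment[node.var] else node.low
        return node.value

    # BDD for x0 XOR x1 (variable order x0 < x1), sharing the terminal nodes.
    n1 = Node(var=1, low=FALSE, high=TRUE)   # reached when x0 = 0
    n2 = Node(var=1, low=TRUE,  high=FALSE)  # reached when x0 = 1
    root = Node(var=0, low=n1, high=n2)

    print(evaluate(root, {0: True, 1: False}))  # True  (1 XOR 0)
    print(evaluate(root, {0: True, 1: True}))   # False (1 XOR 1)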

    Dagstuhl News January - December 2001

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Temporal Logic Motion Planning

    In this paper, a critical review of temporal logic motion planning is presented. The review aims to address the following problems: (a) in a realistic situation, the motion planning problem is carried out in real time, in a dynamic, uncertain and ever-changing environment, and (b) the accomplishment of high-level specification tasks that go beyond the traditional planning problem (i.e., start at initial state A and go to goal state B) must be considered. The use of the theory of computation and formal methods, tools and techniques presents a promising direction of research for solving motion planning problems that are influenced by high-level specifications of complex tasks. The review, therefore, focuses only on those papers that use the aforementioned tools and techniques to solve a motion planning problem. A proposed robust platform that deals with the complexity of more expressive temporal logics is also presented. Defence Science Journal, 2010, 60(1), pp. 23-38, DOI: http://dx.doi.org/10.14429/dsj.60.9
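
    To make the notion of a high-level specification concrete, the following minimal sketch (a hypothetical example, not drawn from the reviewed papers) checks a finite robot trajectory against the simplest temporal-logic motion task, "always avoid the obstacles and eventually reach the goal":

    # Hypothetical finite-trace check for a reach-avoid task, i.e., the
    # temporal-logic specification G(not obstacle) and F(goal).

    def satisfies_reach_avoid(trajectory, obstacles, goal):
        """trajectory: list of states (e.g., grid cells) visited by the planner."""
        always_safe = all(state not in obstacles for state in trajectory)
        eventually_goal = any(state == goal for state in trajectory)
        return always_safe and eventually_goal

    path = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
    print(satisfies_reach_avoid(path, obstacles={(1, 0), (1, 2)}, goal=(2, 2)))  # True
    print(satisfies_reach_avoid(path, obstacles={(1, 1)}, goal=(2, 2)))          # False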

    Probabilistic and Epistemic Model Checking for Multi-Agent Systems

    Model checking is a formal technique widely used to verify security and communication protocols in epistemic multi-agent systems against given properties. Qualitative properties such as safety and liveness have been widely analysed in the literature. However, systems also have quantitative and uncertain (i.e., probabilistic) properties, such as degree of reliability and reachability, which still need further attention from the model checking perspective. In this dissertation, we analyse such properties and present a new method for probabilistic model checking of epistemic multi-agent systems specified by a new probabilistic-epistemic logic, PCTLK. We model the distributed knowledge bases of multi-agent systems using probabilistic interpreted systems. We also define transformations from those interpreted systems into discrete-time Markov chains and from PCTLK formulae to formulae of PCTL, an existing extension of CTL with probabilities. By so doing, we are able to convert the PCTLK model checking problem into the PCTL one. We address the problem of verifying probabilistic and epistemic properties in concurrent probabilistic systems as well. We then prove that model checking a PCTLK formula in concurrent probabilistic systems is PSPACE-complete. Furthermore, we represent the models associated with PCTLK symbolically with Multi-Terminal Binary Decision Diagrams (MTBDDs). Finally, we make use of PRISM, the model checker for PCTL, without adding new computation cost. The dining cryptographers protocol is implemented to show the applicability of the proposed technique, along with a performance analysis and a comparison, in terms of execution time and state-space scalability, with MCK, an existing epistemic-probabilistic model checker, and MCMAS, a model checker for multi-agent systems. Another example, the NetBill protocol, is also implemented with PRISM to verify probabilistic epistemic properties and to evaluate the complexity of this verification.
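
    As a sketch of the quantitative core of such verification (an illustrative assumption, not the PCTLK or PRISM implementation), the probability of eventually reaching a target set in a discrete-time Markov chain, the quantity behind a formula such as P>=0.9 [ F target ], can be approximated by value iteration:

    # Hypothetical value-iteration sketch for PCTL-style reachability in a DTMC:
    # approximate x_s = Pr(reach a target state from s).

    def reachability_probabilities(transitions, targets, iterations=1000):
        """transitions: dict state -> list of (successor, probability) pairs."""
        x = {s: 1.0 if s in targets else 0.0 for s in transitions}
        for _ in range(iterations):
            for s in transitions:
                if s not in targets:
                    x[s] = sum(p * x[t] for t, p in transitions[s])
        return x

    # Tiny 3-state chain: from s0, reach the target s2 w.p. 0.5 or get stuck in s1.
    dtmc = {
        "s0": [("s1", 0.5), ("s2", 0.5)],
        "s1": [("s1", 1.0)],   # absorbing non-target state
        "s2": [("s2", 1.0)],   # absorbing target state
    }
    probs = reachability_probabilities(dtmc, targets={"s2"})
    print(probs["s0"])         # 0.5
    print(probs["s0"] >= 0.9)  # False: s0 violates P>=0.9 [ F target ]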

    Reasoning about LTL Synthesis over finite and infinite games

    In the last few years, research on formal methods for the analysis and verification of properties of systems has increased greatly. A meaningful contribution in this area has been given by algorithmic methods developed in the context of synthesis. The basic idea is simple and appealing: instead of developing a system and verifying that it satisfies its specification, we look for an automated procedure that, given the specification, returns a system that is correct by construction. Synthesis of reactive systems is one of the most popular variants of this problem, in which we want to synthesize a system characterized by an ongoing interaction with the environment. In this setting, a large effort has been devoted to analyzing specifications given as formulas of linear temporal logic, i.e., LTL synthesis. Traditional approaches to LTL synthesis rely on transforming the LTL specification into deterministic parity automata, and then into parity games, for which a so-called winning region is computed. Computing such an automaton is, in the worst case, doubly exponential in the size of the LTL formula, and this becomes a computational bottleneck in using the synthesis process in practice. The first part of this thesis is devoted to improving the solution of parity games as they are used in solving LTL synthesis, aiming at efficient techniques, in terms of running time and space consumption, for solving parity games. We start with the study and implementation of an automata-theoretic technique to solve parity games. More precisely, we consider an algorithm introduced by Kupferman and Vardi that solves a parity game by solving the emptiness problem of a corresponding alternating parity automaton. Our empirical evaluation demonstrates that this algorithm outperforms other algorithms when the game has a small number of priorities relative to its size. In many concrete applications, we do indeed end up with parity games where the number of priorities is relatively small, which makes the new algorithm quite useful in practice. We then provide a broad investigation of the symbolic approach to solving parity games. Specifically, we implement in a fresh tool, called SPGSolver, four symbolic algorithms to solve parity games and compare their performance to the corresponding explicit versions for different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as the common practice has been. LTL synthesis has also been largely investigated in artificial intelligence, and specifically in automated planning. Indeed, LTL synthesis corresponds to fully observable nondeterministic planning in which the domain is given compactly and the goal is an LTL formula, which in turn is related to two-player games with LTL goals. Finding a strategy for these games amounts to synthesizing a plan for the planning problem. The last part of this thesis is then dedicated to investigating LTL synthesis under this different view.
In particular, we study a generalized form of planning under partial observability, in which we have multiple, possibly infinitely many, planning domains with the same actions and observations, and goals expressed over observations, which are possibly temporally extended. By building on work on two-player games with imperfect information in the formal methods literature, we devise a general technique, generalizing the belief-state construction, to remove partial observability. This reduces the planning problem to a game of perfect information with a tight correspondence between plans and strategies. We then instantiate the technique and solve some generalized planning problems.
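
    As a rough illustration of that idea (the domain and names below are hypothetical, not the construction as formalized in the thesis), removing partial observability via a belief-state construction amounts to tracking the set of states consistent with the actions performed and the observations received:

    # Hypothetical belief-state (subset) construction step: the new "state" is the
    # set of domain states the agent might be in, updated by action and observation.

    def successor_belief(belief, action, observation, transitions, obs_of):
        """belief: frozenset of states the agent might currently be in.
        transitions: dict (state, action) -> set of possible successor states.
        obs_of: dict state -> observation emitted in that state."""
        image = set()
        for s in belief:
            image |= transitions.get((s, action), set())
        # Keep only successors compatible with the observation actually received.
        return frozenset(t for t in image if obs_of[t] == observation)

    # Two-room domain in which the agent cannot sense which room it occupies.
    transitions = {
        ("left", "move"): {"right"},  ("right", "move"): {"left"},
        ("left", "stay"): {"left"},   ("right", "stay"): {"right"},
    }
    obs_of = {"left": "none", "right": "none"}  # the rooms are indistinguishable

    b0 = frozenset({"left", "right"})
    b1 = successor_belief(b0, "move", "none", transitions, obs_of)
    print(sorted(b1))  # ['left', 'right'] -- still uncertain, but the belief-level
                       # game has perfect information, so planning proceeds on sets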

    ICAPS 2012. Proceedings of the third Workshop on the International Planning Competition

    22nd International Conference on Automated Planning and Scheduling. June 25-29, 2012, Atibaia, São Paulo (Brazil). Proceedings of the 3rd International Planning Competition. -- The Academic Advising Planning Domain / Joshua T. Guerin, Josiah P. Hanna, Libby Ferland, Nicholas Mattei, and Judy Goldsmith. -- Leveraging Classical Planners through Translations / Ronen I. Brafman, Guy Shani, and Ran Taig. -- Advances in BDD Search: Filtering, Partitioning, and Bidirectionally Blind / Stefan Edelkamp, Peter Kissmann, and Álvaro Torralba. -- A Multi-Agent Extension of PDDL3.1 / Daniel L. Kovacs. -- Mining IPC-2011 Results / Isabel Cenamor, Tomás de la Rosa, and Fernando Fernández. -- How Good is the Performance of the Best Portfolio in IPC-2011? / Sergio Nuñez, Daniel Borrajo, and Carlos Linares López. -- “Type Problem in Domain Description!” or, Outsiders’ Suggestions for PDDL Improvement / Robert P. Goldman and Peter Keller.

    Efficient local search for Pseudo Boolean Optimization

    Algorithms and the Foundations of Software Technology
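
    Although no abstract accompanies this entry, the title names a concrete technique; a rough, hypothetical sketch of local search for pseudo-Boolean optimization (a penalized objective, best single-variable flips, and a random-walk escape from local optima) might look like this:

    import random

    # Hypothetical local-search sketch for pseudo-Boolean optimization: minimize a
    # linear objective over 0/1 variables subject to linear >= constraints.

    def score(x, objective, constraints, penalty=100):
        obj = sum(c * x[v] for v, c in objective)
        violation = sum(max(0, rhs - sum(c * x[v] for v, c in lhs))
                        for lhs, rhs in constraints)
        return obj + penalty * violation   # infeasibility is heavily penalized

    def local_search(n_vars, objective, constraints, steps=200, seed=0):
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n_vars)]
        best_x, best_score = x[:], score(x, objective, constraints)
        for _ in range(steps):
            # Evaluate every single-variable flip and keep the best one.
            flips = []
            for v in range(n_vars):
                x[v] ^= 1
                flips.append((score(x, objective, constraints), v))
                x[v] ^= 1
            new_score, v = min(flips)
            if new_score >= score(x, objective, constraints):
                v = rng.randrange(n_vars)  # random flip to escape a local optimum
            x[v] ^= 1
            current = score(x, objective, constraints)
            if current < best_score:
                best_x, best_score = x[:], current
        return best_x, best_score

    # minimize x0 + 2*x1 + 3*x2  subject to  x0 + x1 + x2 >= 2
    solution, value = local_search(
        3,
        objective=[(0, 1), (1, 2), (2, 3)],
        constraints=[([(0, 1), (1, 1), (2, 1)], 2)],
    )
    print(solution, value)  # best assignment found, e.g. [1, 1, 0] with score 3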