88 research outputs found

    A computationally grounded, weighted doxastic logic

    Modelling, reasoning about and verifying complex situations involving a system of agents is crucial in all phases of the development of a number of safety-critical systems. In particular, it is of fundamental importance to have tools and techniques to reason about the doxastic and epistemic states of agents, to make sure that the agents behave as intended. In this paper we introduce a computationally grounded logic called COGWED and we present two types of semantics that support a range of practical situations. We provide model checking algorithms, complexity characterisations and a prototype implementation. We validate our proposal against a case study from the avionic domain: we assess and verify the situational awareness of pilots flying an aircraft with several automated components in off-nominal conditions.
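
    As an illustration only (not taken from the paper), a weighted doxastic operator of this kind is usually given a computationally grounded semantics by counting over the set of global states that the agent cannot distinguish from the current one; a plausible reading, in the spirit of COGWED, is:

        % Hedged sketch of a grounded semantics for a weighted belief operator.
        % [s]_i is the equivalence class of global states that agent i cannot
        % distinguish from s in the interpreted system IS; the exact COGWED
        % definitions may differ in detail.
        (IS, s) \models B_i^{\geq x}\varphi
          \quad\text{iff}\quad
          \frac{\bigl|\{\, s' \in [s]_i : (IS, s') \models \varphi \,\}\bigr|}{\bigl|[s]_i\bigr|} \geq x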

    Model checking degrees of belief in a system of agents

    In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of belief do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.
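
    To make the counting idea behind this semantics concrete, here is a minimal Java sketch under stated assumptions: the class and method names are hypothetical, and the equivalence class and satisfaction map are assumed to be precomputed (e.g. by a CTLK model checker); this is not the implementation released with the paper.

        import java.util.Map;
        import java.util.Set;

        // Hypothetical helper: the degree of belief of agent i in phi at a global
        // state s is the fraction of the states i cannot distinguish from s that
        // satisfy phi.
        public final class DegreeOfBelief {

            public static double degree(Set<String> equivalenceClass,
                                        Map<String, Boolean> satisfiesPhi) {
                long satisfying = equivalenceClass.stream()
                        .filter(state -> satisfiesPhi.getOrDefault(state, false))
                        .count();
                return (double) satisfying / equivalenceClass.size();
            }

            public static void main(String[] args) {
                // Toy example: the agent considers three states possible, two satisfy phi.
                Set<String> states = Set.of("s1", "s2", "s3");
                Map<String, Boolean> sat = Map.of("s1", true, "s2", true, "s3", false);
                System.out.println(degree(states, sat)); // 0.6666666666666666
            }
        }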

    Using multi-agent systems to go beyond temporal patterns verification

    A key step in formal verification is the translation of requirements into logic formulae. Various flavours of temporal logic are commonly used in academia and in industry to capture, among others, liveness and safety requirements. In the past two decades there has been a substantial amount of work in the area of verification of extensions of temporal logic. In this column I will provide a high-level overview of some work in this area, focussing in particular on the verification of temporal-epistemic properties, showing how temporal-epistemic logics can be used to capture requirements that are common in many concrete systems, and describing a model checker for multi-agent systems called MCMAS.
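
    For instance (an illustrative formula, not one taken from the column), a temporal-epistemic requirement of the kind MCMAS can check combines CTL path quantifiers with knowledge operators:

        % Illustrative CTLK formula: whenever an alarm is raised, along every
        % execution the pilot eventually knows that a fault has occurred.
        AG\,\bigl(\mathit{alarm} \rightarrow AF\, K_{\mathit{pilot}}\,\mathit{fault}\bigr)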

    Can AI Help Us to Understand Belief? Sources, Advances, Limits, and Future Directions

    The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what is distinctively human and what can be inferred from AI systems. The present article investigates to what extent recent developments in AI provide new elements to the debate and clarify the process of belief acquisition, consolidation, and recalibration. The article analyses and debates current issues and topics of investigation, such as different models to understand belief, the exploration of belief in an automated reasoning environment, the case of religious beliefs, and future directions of research.

    On the adaptive advantage of always being right (even when one is not)

    We propose another positive illusion – overconfidence in the generalisability of one's theory – that fits with McKay & Dennett's (M&D's) criteria for adaptive misbeliefs. This illusion is pervasive in adult reasoning, but we focus on its prevalence in children's developing theories. It is a strongly held conviction arising from normal functioning of the doxastic system that confers adaptive advantage on the individual.

    From raw data to agent perceptions for simulation, verification, and monitoring

    In this paper we present a practical solution to the problem of connecting "real world" data exchanged between sensors and actuators with the higher level of abstraction used in frameworks for multiagent systems. In particular, we show how to connect an industry-standard publish-subscribe communication protocol for embedded systems called MQTT with two Belief-Desire-Intention agent modelling and programming languages: Jason/AgentSpeak and Brahms. In the paper we describe the details of our Java implementation and we release all the code as open source.
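
    To give a flavour of such a bridge, here is a short, hedged Java sketch assuming the Eclipse Paho MQTT client; the MqttPerceptionBridge class, the topic scheme and the way a payload is turned into a perception literal are illustrative assumptions, not the code released with the paper.

        import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
        import org.eclipse.paho.client.mqttv3.MqttCallback;
        import org.eclipse.paho.client.mqttv3.MqttClient;
        import org.eclipse.paho.client.mqttv3.MqttException;
        import org.eclipse.paho.client.mqttv3.MqttMessage;

        import java.nio.charset.StandardCharsets;

        // Hypothetical bridge: subscribes to sensor topics and turns each raw MQTT
        // payload into a symbolic perception that a BDI agent (e.g. a Jason/AgentSpeak
        // or Brahms agent) could adopt as a belief.
        public final class MqttPerceptionBridge implements MqttCallback {

            public static void main(String[] args) throws MqttException {
                MqttClient client = new MqttClient("tcp://localhost:1883", "perception-bridge");
                client.setCallback(new MqttPerceptionBridge());
                client.connect();
                client.subscribe("sensors/#"); // assumed topic scheme: sensors/<location>/<quantity>
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                String raw = new String(message.getPayload(), StandardCharsets.UTF_8);
                // Assumed translation: topic "sensors/cabin/temperature" with payload
                // "21.5" becomes the perception literal temperature(cabin, 21.5).
                String[] parts = topic.split("/");
                String perception = parts[parts.length - 1]
                        + "(" + parts[parts.length - 2] + ", " + raw + ")";
                System.out.println("perception: " + perception); // hand over to the agent here
            }

            @Override
            public void connectionLost(Throwable cause) {
                System.err.println("MQTT connection lost: " + cause);
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
                // Not used: the bridge only consumes messages in this sketch.
            }
        }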

    Arguments to believe and beliefs to argue. Epistemic logics for argumentation and its dynamics

    Arguing and believing are two skills that have typically played a crucial role in the analysis of human cognition. Both notions have received notable attention from a broad range of disciplines, including linguistics, philosophy, psychology, and computer science. The main goal of this dissertation consists in studying from a logical perspective (that is, focused on reasoning) some of the existing relations between beliefs and argumentation. From a methodological point of view, we propose to combine two well-known families of formalisms for knowledge representation that have been relatively disconnected (with some salient exceptions): epistemic logic (Fagin et al., 2004; Meyer and van der Hoek, 1995) together with its dynamic extensions (van Ditmarsch et al., 2007; van Benthem, 2011), on the one hand, and formal argumentation (Baroni et al., 2018; Gabbay et al., 2021), on the other hand. This choice is arguably natural. Epistemic logic provides well-known tools for qualitatively representing epistemic attitudes (belief among them). Formal argumentation, on its side, is the broad research field where mathematical representations of argumentative phenomena are investigated. Moreover, the notion of awareness, as treated in the epistemic logic tradition since Fagin and Halpern (1987), can be used as a theoretical bridge between the two areas. This dissertation is presented as a collection of papers (a compendium of publications), meaning that its main contributions are contained in the reprints of six previously published works, placed in Chapter 4. Chapter 1 provides a general introduction to the research problem. Chapter 2 is devoted to the presentation of the technical tools employed throughout the thesis. Chapter 3 explains how the contributions approach the research problem. Chapter 5 provides a general discussion of the results by analysing closely related work. We conclude in Chapter 6 with some remarks and open paths for future research.
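
    By way of illustration (a textbook definition from the awareness tradition the abstract refers to, not a result of the dissertation), explicit belief is typically obtained by relativising implicit belief to an awareness operator:

        % Logic of general awareness (Fagin and Halpern, 1987), sketched:
        % B_i is implicit belief, A_i\varphi reads "agent i is aware of \varphi",
        % and explicit belief is their conjunction.
        B_i^{\mathrm{exp}}\varphi \;:=\; B_i\varphi \wedge A_i\varphi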