
    Dynamic agent safety logic: theory and applications

    Modal logic is a family of logics for reasoning about relational structures, broadly construed. It sits at the nexus of philosophy, mathematics, software engineering, and economics. By modeling a target domain as a relational structure, one can define a modal logic for reasoning about its properties. Common examples include modal logics for knowledge, belief, time, program execution, mathematical provability, and ethics. This thesis presents a modal logic that combines several modalities in order to reason about realistic human-like agents. We combine knowledge, belief, action, and safe action in a logic we call Dynamic Agent Safety Logic (DASL). We distinguish DASL from other modal logics treating similar topics by arguing that the standard models of human agency are not adequate. We present some criteria a logic of agency should strive to achieve, and then compare how related logics fare. We use the Coq interactive theorem prover to mechanically prove soundness and completeness results for the logic, and we apply it to case studies in the domain of aviation safety, demonstrating its ability to model realistic, minimally rational agents. Finally, we examine the consequences of modeling agents capable of a certain sort of self-reflection. Such agents face a formal difficulty due to Löb's Theorem, called Löb's Obstacle in the literature. We show how DASL can be relaxed to avoid Löb's Obstacle, while the other modal logics of agency cannot easily do so.
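
    As a rough illustration of the kind of possible-worlds semantics over which such multimodal logics are interpreted, the sketch below evaluates formulas built from separate knowledge (K) and belief (B) box operators on a small Kripke model. It is a minimal Python sketch for illustration only: the thesis's actual development is a Coq formalization of DASL, and the model, relation names, and formula encoding here are hypothetical.

```python
# Illustrative only: a tiny Kripke-model evaluator for a multimodal language
# with knowledge (K) and belief (B) box operators, each read over its own
# accessibility relation. Not the thesis's Coq development of DASL.

def holds(model, world, formula):
    """Evaluate a formula at a world of a Kripke model.

    model: dict with keys
      'val'    : world -> set of atoms true at that world
      'K', 'B' : world -> set of worlds accessible via that relation
    formula: nested tuples, e.g. ('K', ('atom', 'p')) or ('and', f, g)
    """
    op = formula[0]
    if op == 'atom':
        return formula[1] in model['val'][world]
    if op == 'not':
        return not holds(model, world, formula[1])
    if op == 'and':
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if op in ('K', 'B'):  # box modality: true at all accessible worlds
        return all(holds(model, v, formula[1]) for v in model[op][world])
    raise ValueError(f"unknown operator {op!r}")

# Two-world example: the agent knows p but merely believes q.
model = {
    'val': {'w0': {'p', 'q'}, 'w1': {'p'}},
    'K': {'w0': {'w0', 'w1'}, 'w1': {'w0', 'w1'}},  # both worlds epistemically possible
    'B': {'w0': {'w0'}, 'w1': {'w0'}},              # only w0 doxastically possible
}
print(holds(model, 'w0', ('K', ('atom', 'p'))))  # True
print(holds(model, 'w0', ('K', ('atom', 'q'))))  # False: q fails at w1
print(holds(model, 'w0', ('B', ('atom', 'q'))))  # True
```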

    Reasoning about Cognitive Trust in Stochastic Multiagent Systems

    We consider the setting of stochastic multiagent systems modelled as stochastic multiplayer games and formulate an automated verification framework for quantifying and reasoning about agents’ trust. To capture human trust, we work with a cognitive notion of trust defined as a subjective evaluation that agent A makes about agent B’s ability to complete a task, which in turn may lead to a decision by A to rely on B. We propose a probabilistic rational temporal logic PRTL*, which extends the probabilistic computation tree logic PCTL* with reasoning about mental attitudes (beliefs, goals, and intentions) and includes novel operators that can express concepts of social trust such as competence, disposition, and dependence. The logic can express, for example, that “agent A will eventually trust agent B with probability at least p that B will behave in a way that ensures the successful completion of a given task.” We study the complexity of the automated verification problem and, while the general problem is undecidable, we identify restrictions on the logic and the system that result in decidable, or even tractable, subproblems.
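
    The quantitative core that PCTL*-style logics are checked against is the probability of reaching given states in a stochastic model. The sketch below illustrates only that core, computing the probability of eventually reaching a target state in a small Markov chain by value iteration; PRTL*'s operators for beliefs, goals, intentions, and trust are defined on top of such quantities and are not modeled here. All names and the example chain are illustrative.

```python
# Toy value iteration for Pr(eventually reach T) in a discrete-time Markov
# chain. Only illustrates the probabilistic-reachability core underlying
# PCTL*-style model checking; nothing PRTL*-specific is modeled.

def prob_eventually(trans, target, iters=1000, tol=1e-10):
    """trans[s] is a dict successor -> probability; target is a set of states."""
    p = {s: (1.0 if s in target else 0.0) for s in trans}
    for _ in range(iters):
        new = {s: 1.0 if s in target
               else sum(pr * p[t] for t, pr in trans[s].items())
               for s in trans}
        if max(abs(new[s] - p[s]) for s in trans) < tol:
            return new
        p = new
    return p

# Agent B either completes the task, fails, or retries.
trans = {
    'try':  {'done': 0.7, 'fail': 0.2, 'try': 0.1},
    'done': {'done': 1.0},
    'fail': {'fail': 1.0},
}
print(prob_eventually(trans, {'done'})['try'])  # ~0.778, i.e. 0.7 / 0.9
```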

    Multi-agent verification and control with probabilistic model checking

    Probabilistic model checking is a technique for formal automated reasoning about software or hardware systems that operate in the context of uncertainty or stochasticity. It builds upon ideas and techniques from a diverse range of fields, from logic, automata and graph theory, to optimisation, numerical methods and control. In recent years, probabilistic model checking has also been extended to integrate ideas from game theory, notably using models such as stochastic games and solution concepts such as equilibria, to formally verify the interaction of multiple rational agents with distinct objectives. This provides a means to reason flexibly about agents acting in either an adversarial or a collaborative fashion, and opens up opportunities to tackle new problems within, for example, artificial intelligence, robotics and autonomous systems. In this paper, we summarise some of the advances in this area, and highlight applications for which they have already been used. We discuss how the strengths of probabilistic model checking apply, or have the potential to apply, to the multi-agent setting and outline some of the key challenges that must be addressed to make further progress in this field.
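
    In a very reduced form, the move from Markov models to stochastic games can be illustrated by value iteration on a turn-based two-player game, where one player maximizes and the other minimizes the probability of reaching a target set. The sketch below is a toy solver under those assumptions; tools such as PRISM-games support much richer models, objectives, and solution concepts (including equilibria), and the state names and structure here are illustrative.

```python
# Toy value iteration for reachability in a turn-based, two-player stochastic
# game: the 'max' player tries to reach the target, the 'min' player tries to
# prevent it. Illustrative only; not how a production model checker is built.

def game_reach_prob(owner, moves, target, iters=1000, tol=1e-10):
    """owner[s] in {'max', 'min'}; moves[s] is a list of distributions,
    each a dict successor -> probability; target is a set of states."""
    v = {s: (1.0 if s in target else 0.0) for s in owner}
    for _ in range(iters):
        new = {}
        for s in owner:
            if s in target:
                new[s] = 1.0
                continue
            vals = [sum(p * v[t] for t, p in dist.items()) for dist in moves[s]]
            new[s] = max(vals) if owner[s] == 'max' else min(vals)
        if max(abs(new[s] - v[s]) for s in owner) < tol:
            return new
        v = new
    return v

owner = {'s0': 'max', 's1': 'min', 'goal': 'max', 'sink': 'max'}
moves = {
    's0':   [{'s1': 1.0}, {'goal': 0.5, 'sink': 0.5}],    # controller chooses
    's1':   [{'goal': 0.9, 'sink': 0.1}, {'sink': 1.0}],  # adversary chooses
    'goal': [{'goal': 1.0}],
    'sink': [{'sink': 1.0}],
}
print(game_reach_prob(owner, moves, {'goal'})['s0'])  # 0.5: adversary avoids 'goal' from s1
```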

    (WP 2018-02) Extending Behavioral Economics’ Methodological Critique of Rational Choice Theory to Include Counterfactual Reasoning

    This paper extends behavioral economics’ realist methodological critique of rational choice theory to include the type of logical reasoning underlying its axiomatic foundations. A purely realist critique ignores Kahneman’s emphasis on how the theory’s axiomatic foundations make it normative. I extend his critique to the theory’s reliance on classical logic, which excludes the concept of possibility employed in counterfactual reasoning. Nudge theory reflects this in employing counterfactual conditionals. This answers the complaint that the Homo sapiens agent conception ultimately reduces to a Homo economicus conception, and also provides grounds for treating Homo sapiens as an adaptive, non-optimizing, reflexive agent.

    Pushing the bounds of rationality: Argumentation and extended cognition

    One of the central tasks of a theory of argumentation is to supply a theory of appraisal: a set of standards and norms according to which argumentation, and the reasoning involved in it, is properly evaluated. In their most general form, these can be understood as rational norms, where the core idea of rationality is that we rightly respond to reasons by according the credence we attach to our doxastic and conversational commitments with the probative strength of the reasons we have for them. Certain kinds of rational failings are failings because they are manifestly illogical – for example, maintaining overtly contradictory commitments, violating deductive closure by refusing to accept the logical consequences of one’s present commitments, or failing to track basing relations by not updating one’s commitments in view of new, defeating information. Yet, according to the internal and empirical critiques, logic and probability theory fail to supply a fit set of norms for human reasoning and argument. Particularly, theories of bounded rationality have put pressure on argumentation theory to lower the normative standards of rationality for reasoners and arguers on the grounds that we are bounded, finite, and fallible agents incapable of meeting idealized standards. This paper explores the idea that argumentation, as a set of practices, together with the procedures and technologies of argumentation theory, is able to extend cognition such that we are better able to meet these idealized logical standards, thereby extending our responsibilities to adhere to idealized rational norms.

    Rational physical agent reasoning beyond logic

    The paper addresses the problem of defining a theoretical physical agent framework that satisfies practical requirements of programmability by non-programmer engineers while permitting fast real-time operation of agents on digital computer networks. The objective of the new framework is to enable the satisfaction of performance requirements on autonomous vehicles and robots in space exploration, deep underwater exploration, defense reconnaissance, automated manufacturing, and household automation.

    Reasoning about Emotional Agents

    In this paper we are concerned with reasoning about agents with emotions. To be more precise: we aim at a logical account of emotional agents. The very topic may already raise some eyebrows. Reasoning and rationality, on the one hand, and emotions, on the other, seem to be opposites, and reasoning about emotions or a logic of emotional agents seems a contradiction in terms. However, emotions and rationality are known to be more interconnected than one may suspect. There is psychological evidence that having emotions may help one to do reasoning and tasks for which rationality seems to be the only factor [1]. Moreover, work by e.g. Sloman [5] shows that one may design agent-based systems in which the agents show some kind of emotion and, even more importantly, display behaviour that depends on their emotional state. It is exactly in this sense that we aim at looking at emotional agents: artificial systems that are designed in such a manner that emotions play a role. In psychology, too, emotions are viewed as a structuring mechanism. Emotions are held to help human beings choose from a myriad of possible actions in response to what happens in our environment.

    A Dynamic Solution to the Problem of Logical Omniscience

    The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
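
    A crude way to see the contrast between logical omniscience and logical competence is to represent beliefs as an explicit set of formulas and let the agent expand that set by applying inference rules one step at a time. The Python sketch below does this with modus ponens only; it is a syntactic illustration of "non-omniscient but able to infer", not the dynamic impossible-worlds semantics the paper develops, and its formula encoding is hypothetical.

```python
# Syntactic toy: beliefs are not closed under consequence, but the agent can
# derive new beliefs by spending inference steps. Illustrative only; the
# paper's proposal is a dynamic impossible-worlds semantics, not this sketch.

def modus_ponens_step(beliefs):
    """One inference step: add q whenever p and ('->', p, q) are both believed."""
    derived = set(beliefs)
    for f in beliefs:
        if isinstance(f, tuple) and f[0] == '->' and f[1] in beliefs:
            derived.add(f[2])
    return derived

def believes_after(beliefs, formula, steps):
    """Can the agent come to believe `formula` within `steps` inference steps?"""
    current = set(beliefs)
    for _ in range(steps):
        if formula in current:
            return True
        current = modus_ponens_step(current)
    return formula in current

beliefs = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
print(believes_after(beliefs, 'r', steps=0))  # False: no automatic closure
print(believes_after(beliefs, 'r', steps=1))  # False: only q has been derived
print(believes_after(beliefs, 'r', steps=2))  # True: competent, given enough steps
```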