    Progression and Verification of Situation Calculus Agents with Bounded Beliefs

    Get PDF
    We investigate agents that have incomplete information and make decisions based on their beliefs, expressed as situation calculus bounded action theories. Such theories have an infinite object domain, but the number of objects that belong to fluents at each time point is bounded by a given constant. Recently, it has been shown that verifying temporal properties over such theories is decidable. We take a first-person view and use the theory to capture what the agent believes about the domain of interest and the actions affecting it. In this paper, we study verification of temporal properties over online executions. These are executions resulting from agents performing only actions that are feasible according to their beliefs. To do so, we first examine progression, which captures the update of the belief state resulting from actions in the situation calculus. We show that, for bounded action theories, progression, and hence belief states, can always be represented as a bounded first-order logic theory. Then, based on this result, we prove decidability of temporal verification over online executions for bounded action theories. © 2015 The Author(s).
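
    As a rough illustration of the two notions at work here, the sketch below (a toy, not the paper's formalisation) pictures a belief state as finite extensions of fluents, an action's effects as add and delete sets, and the boundedness condition as a cap on the size of each extension after every update. All fluent and object names are invented for the example.

        # Toy illustration (not the paper's formalisation): a belief state as finite
        # fluent extensions, progressed through an action's add/delete effects, with
        # the boundedness condition checked after each update.

        def progress(belief: dict, adds: dict, dels: dict, bound: int) -> dict:
            """Return the progressed belief state; fail if the bound is violated."""
            new_belief = {}
            for fluent in set(belief) | set(adds):
                ext = (belief.get(fluent, set()) - dels.get(fluent, set())) | adds.get(fluent, set())
                if len(ext) > bound:
                    raise ValueError(f"extension of {fluent} exceeds bound {bound}")
                new_belief[fluent] = ext
            return new_belief

        # The agent believes it holds one object; picking up another adds a tuple,
        # putting one down deletes a tuple, and each extension stays within the bound.
        belief = {"Holding": {("box1",)}}
        belief = progress(belief, adds={"Holding": {("box2",)}}, dels={}, bound=3)
        belief = progress(belief, adds={}, dels={"Holding": {("box1",)}}, bound=3)
        print(belief)   # {'Holding': {('box2',)}}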

    Bounded Situation Calculus Action Theories

    Full text link
    In this paper, we investigate bounded action theories in the situation calculus. A bounded action theory is one that entails that, in every situation, the number of object tuples in the extension of fluents is bounded by a given constant, although such extensions are in general different across the infinitely many situations. We argue that such theories are common in applications, either because facts do not persist indefinitely or because the agent eventually forgets some facts as new ones are learnt. We discuss various classes of bounded action theories. Then we show that verification of a powerful first-order variant of the mu-calculus is decidable for such theories. Notably, this variant supports a controlled form of quantification across situations. We also show that, through verification, we can actually check whether an arbitrary action theory maintains boundedness.
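
    To give a feel for the kind of verification involved, the sketch below computes a least fixpoint of the form mu Z. goal OR <>Z over an assumed finite abstraction with an explicitly listed transition relation; the paper's first-order mu-calculus variant is, of course, far richer than this propositional toy, and the states and transitions here are invented.

        # Minimal sketch over an assumed finite abstraction: the least-fixpoint
        # computation behind "some execution eventually reaches the goal",
        # i.e. mu Z. goal \/ <>Z, iterated until the set of states stabilises.

        TRANSITIONS = {          # hypothetical finite abstraction of the situations
            "s0": ["s1", "s2"],
            "s1": ["s1"],
            "s2": ["s3"],
            "s3": [],
        }

        def eventually(goal_states: set) -> set:
            """States from which some path reaches a goal state."""
            z = set()
            while True:
                new_z = goal_states | {s for s, succs in TRANSITIONS.items()
                                       if any(t in z for t in succs)}
                if new_z == z:
                    return z
                z = new_z

        print(eventually({"s3"}))   # s0, s2 and s3 can reach the goal; s1 cannot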

    Why De Minimis?

    Get PDF
    De minimis cutoffs are a familiar feature of risk regulation. They include the quantitative individual risk thresholds for fatality risks employed in many contexts by EPA, FDA, and other agencies, such as the 1-in-1-million lifetime cancer risk cutoff; extreme-event cutoffs for addressing natural hazards, such as the 100-year flood or the 475-year earthquake; de minimis failure probabilities for built structures; the exclusion of low-probability causal models; and other policymaking criteria. All these tests have a common structure, as I show in the Article. A de minimis test, broadly defined, tells the decisionmaker to determine whether the probability of some outcome is above a low threshold and makes this determination relevant, in some way, to her choice. De minimis cutoffs are deeply problematic, and have been generally misunderstood by scholars. First, they are warranted, if at all, by virtue of policymakers' bounded rationality. If policymakers were fully rational, de minimis cutoffs would have no justification. (This is true, I suggest, across a wide range of normative theories, and for the full gamut of de minimis tests.) Second, although it seems plausible that some de minimis tests are justified once bounded rationality is brought into the picture, it is not clear which those are, or even how we should go about identifying them.
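
    The common structure described above can be phrased as a simple threshold comparison. The sketch below is only an illustration: the cutoffs are the familiar examples quoted in the abstract, and how the verdict bears on the final decision is deliberately left open.

        # Illustrative only: the shared structure of a de minimis test. The cutoffs
        # are the familiar regulatory examples; what the decisionmaker does with the
        # verdict is left to her.

        DE_MINIMIS_CUTOFFS = {
            "lifetime_cancer_risk": 1e-6,      # 1-in-1-million lifetime cancer risk
            "annual_flood": 1.0 / 100,         # 100-year flood
            "annual_earthquake": 1.0 / 475,    # 475-year earthquake
        }

        def below_de_minimis(outcome: str, probability: float) -> bool:
            """True if the outcome's probability falls below the de minimis cutoff,
            i.e. the risk may be disregarded for the purposes of this test."""
            return probability < DE_MINIMIS_CUTOFFS[outcome]

        print(below_de_minimis("lifetime_cancer_risk", 4e-7))   # True: ignorable
        print(below_de_minimis("annual_flood", 1.0 / 50))       # False: must be addressed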

    Hierarchical agent supervision

    Get PDF
    Agent supervision is a form of control/customization where a supervisor restricts the behavior of an agent to enforce certain requirements, while leaving the agent as much autonomy as possible. To facilitate supervision, it is often of interest to consider hierarchical models where a high level abstracts over low-level behavior details. We study hierarchical agent supervision in the context of the situation calculus and the ConGolog agent programming language, where we have a rich first-order representation of the agent state. We define the constraints that ensure that the controllability of individual actions at the high level in fact captures the controllability of their implementation at the low level. On the basis of this, we show that we can obtain the maximally permissive supervisor by first considering only the high-level model and obtaining a high-level supervisor, and then refining its actions locally, thus greatly simplifying the supervisor synthesis task.
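
    A schematic sketch of the two-step idea (not the paper's formal construction): supervise at the abstract level first, then refine each permitted high-level action locally. The action names and the refinement table are hypothetical.

        # Schematic sketch (not the paper's construction): filter high-level actions
        # against the specification, then refine each permitted action locally.

        HIGH_LEVEL_ALLOWED = {"deliver", "recharge"}       # spec-compliant actions
        REFINEMENT = {                                     # low-level implementations
            "deliver": ["pick", "move", "drop"],
            "recharge": ["move", "dock"],
            "dump": ["move", "release"],                   # not allowed by the spec
        }

        def supervise(high_level_plan: list) -> list:
            """Keep only specification-compliant high-level actions, then refine
            each of them into its low-level implementation."""
            low_level_plan = []
            for action in high_level_plan:
                if action not in HIGH_LEVEL_ALLOWED:
                    continue                               # supervisor blocks it
                low_level_plan.extend(REFINEMENT[action])  # local refinement
            return low_level_plan

        print(supervise(["deliver", "dump", "recharge"]))
        # ['pick', 'move', 'drop', 'move', 'dock']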

    Abstraction of Agents Executing Online and their Abilities in the Situation Calculus

    Get PDF
    We develop a general framework for abstracting the online behavior of an agent that may acquire new knowledge during execution (e.g., by sensing), in the situation calculus and ConGolog. We assume that we have both a high-level action theory and a low-level one that represent the agent's behavior at different levels of detail. In this setting, we define what it means for the agent to be able to perform a task or achieve a goal, and then show that, under some reasonable assumptions, if the agent has a strategy by which she is able to achieve a goal at the high level, then we can refine it into a low-level strategy for doing so.

    Abstracting Noisy Robot Programs

    Get PDF
    Abstraction is a commonly used process for representing a low-level system by a coarser specification, with the goal of omitting unnecessary details while preserving important aspects. While recent work on abstraction in the situation calculus has focused on non-probabilistic domains, we describe an approach to abstraction of probabilistic and dynamic systems. Based on a variant of the situation calculus with probabilistic belief, we define a notion of bisimulation that allows us to abstract a detailed probabilistic basic action theory with noisy actuators and sensors into a possibly deterministic basic action theory. By doing so, we obtain abstract Golog programs that omit unnecessary details and can be translated back to a detailed program for actual execution. This simplifies the implementation of noisy robot programs, opens up the possibility of using deterministic reasoning methods (e.g., planning) on probabilistic problems, and provides domain descriptions that are more easily understandable and explainable.
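
    As a toy illustration of the direction of this abstraction (not the paper's bisimulation-based construction): a noisy low-level sensor model is collapsed into a deterministic high-level fluent by thresholding the belief the readings induce. Distances, noise levels and thresholds are invented.

        # Toy illustration only: a noisy low-level sensor abstracted into a
        # deterministic high-level fluent by thresholding the induced belief.
        import random

        def noisy_distance_reading(true_distance: float, noise: float = 0.2) -> float:
            """Low-level model: a range sensor with additive Gaussian noise."""
            return true_distance + random.gauss(0.0, noise)

        def believes_near(readings: list, threshold: float = 1.0,
                          confidence: float = 0.9) -> bool:
            """High-level, deterministic abstraction: the fluent Near holds iff the
            fraction of readings under the threshold reaches the confidence level."""
            near_votes = sum(1 for r in readings if r < threshold)
            return near_votes / len(readings) >= confidence

        readings = [noisy_distance_reading(0.6) for _ in range(20)]
        print(believes_near(readings))   # typically True for a genuinely near object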

    Verification of Agent-Based Artifact Systems

    Full text link
    Artifact systems are a novel paradigm for specifying and implementing business processes described in terms of interacting modules called artifacts. Artifacts consist of data and lifecycles, accounting respectively for the relational structure of the artifacts' states and their possible evolutions over time. In this paper we put forward artifact-centric multi-agent systems, a novel formalisation of artifact systems in the context of multi-agent systems operating on them. Unlike the usual process-based models of services, the semantics we give explicitly accounts for the data structures on which artifact systems are defined. We study the model checking problem for artifact-centric multi-agent systems against specifications written in a quantified version of temporal-epistemic logic expressing the knowledge of the agents in the exchange. We begin by noting that the problem is undecidable in general. We then identify two noteworthy restrictions, one syntactic and one semantic, that enable us to find bisimilar finite abstractions and therefore reduce the model checking problem to its instance on finite models. Under these assumptions we show that the model checking problem for these systems is EXPSPACE-complete. We then introduce artifact-centric programs, compact and declarative representations of the programs governing both the artifact system and the agents. We show that, while these in principle generate infinite-state systems, under natural conditions their verification problem can be solved on finite abstractions that can be effectively computed from the programs. Finally, we exemplify the theoretical results of the paper through a mainstream procurement scenario from the artifact systems literature.
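
    A deliberately finite toy to show the shape of the problem (the systems above are infinite-state in general): an artifact is a piece of data plus a lifecycle, and a safety property is checked by exhaustively exploring the reachable states. The procurement-style stages and the "never shipped unpaid" property are made up for the example.

        # Finite toy example: an artifact state couples a lifecycle stage with data
        # (a 'paid' flag); a safety property is checked by exhaustive exploration.
        from collections import deque

        def successors(state):
            stage, paid = state
            if stage == "created":
                return [("offered", paid)]
            if stage == "offered":
                return [("accepted", True), ("rejected", paid)]   # acceptance records payment
            if stage == "accepted":
                return [("shipped", paid)]
            return []                                             # terminal stages

        def violates(state) -> bool:
            """Property to verify: an order is never shipped unpaid."""
            stage, paid = state
            return stage == "shipped" and not paid

        def model_check(initial=("created", False)) -> bool:
            """Breadth-first exploration of the (finite) reachable state space."""
            seen, queue = {initial}, deque([initial])
            while queue:
                state = queue.popleft()
                if violates(state):
                    return False
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return True

        print(model_check())   # True: "shipped" is only reachable with paid == True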

    Credal pragmatism

    Get PDF
    According to doxastic pragmatism, certain perceived practical factors, such as high stakes and urgency, have systematic effects on normal subjects’ outright beliefs. Upholders of doxastic pragmatism have so far endorsed a particular version of this view, which we may call threshold pragmatism. This view holds that the sensitivity of belief to the relevant practical factors is due to a corresponding sensitivity of the threshold on the degree of credence necessary for outright belief. According to an alternative, as yet unrecognised, version of doxastic pragmatism, practical factors affect credence rather than the threshold on credence. Let’s call this alternative view credal pragmatism. In this paper, I argue that credal pragmatism is more plausible than threshold pragmatism. I show that the former view better accommodates a cluster of intuitive and empirical data. I conclude by considering whether our doxastic attitudes’ sensitivity to practical factors can be considered rational, and, if so, in what sense.
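
    One schematic way to see the contrast (my own illustration, not the author's formalism): both views model outright belief as credence reaching a threshold, but they differ over which quantity the perceived practical stakes act upon. The numbers are arbitrary.

        # Schematic contrast only (illustration, not the author's formalism): the
        # same evidence, two accounts of how high stakes can undo outright belief.

        def believes_threshold_pragmatism(credence: float, high_stakes: bool) -> bool:
            """Threshold pragmatism: stakes raise the threshold; credence is untouched."""
            threshold = 0.99 if high_stakes else 0.9
            return credence >= threshold

        def believes_credal_pragmatism(credence: float, high_stakes: bool) -> bool:
            """Credal pragmatism: stakes lower the credence itself; the threshold is fixed."""
            effective_credence = credence - 0.05 if high_stakes else credence
            return effective_credence >= 0.9

        # With credence 0.93, both views predict belief at low stakes and its loss
        # at high stakes, but for different reasons.
        for high_stakes in (False, True):
            print(high_stakes,
                  believes_threshold_pragmatism(0.93, high_stakes),
                  believes_credal_pragmatism(0.93, high_stakes))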

    Epistemic and Ontic Quantum Realities

    Get PDF
    Quantum theory has provoked intense discussions about its interpretation since its pioneering days. One of the few scientists who have been continuously engaged in this development from both physical and philosophical perspectives is Carl Friedrich von Weizsaecker. The questions he posed were and are inspiring for many, including the authors of this contribution. Weizsaecker developed Bohr's view of quantum theory as a theory of knowledge. We show that such an epistemic perspective can be consistently complemented by Einstein's ontically oriented position.