
    Bounded Situation Calculus Action Theories

    In this paper, we investigate bounded action theories in the situation calculus. A bounded action theory is one which entails that, in every situation, the number of object tuples in the extension of each fluent is bounded by a given constant, although such extensions are in general different across the infinitely many situations. We argue that such theories are common in applications, either because facts do not persist indefinitely or because the agent eventually forgets some facts as new ones are learnt. We discuss various classes of bounded action theories. Then we show that verification of a powerful first-order variant of the mu-calculus is decidable for such theories. Notably, this variant supports a controlled form of quantification across situations. We also show that, through verification, we can actually check whether an arbitrary action theory maintains boundedness.
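
    As a rough formalization of the boundedness condition (our notation, not necessarily the paper's): an action theory \mathcal{D} is bounded by a constant b if, for every fluent F,

        \mathcal{D} \models \forall s.\; \mathit{Executable}(s) \rightarrow \big|\{\, \vec{x} \mid F(\vec{x}, s) \,\}\big| \le b ,

    i.e., in every executable situation s the extension of F contains at most b object tuples, even though these tuples may change from situation to situation.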

    LTLf/LDLf Non-Markovian Rewards

    In Markov Decision Processes (MDPs), the reward obtained in a state is Markovian, i.e., depends on the last state and action. This dependency makes it difficult to reward more interesting long-term behaviors, such as always closing a door after it has been opened, or providing coffee only following a request. Extending MDPs to handle non-Markovian reward functions was the subject of two previous lines of work. Both use LTL variants to specify the reward function and then compile the new model back into a Markovian model. Building on recent progress in temporal logics over finite traces, we adopt LDLf for specifying non-Markovian rewards and provide an elegant automata construction for building a Markovian model, which extends that of previous work and offers strong minimality and compositionality guarantees.
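
    The compilation at the heart of this line of work pairs the MDP state with the state of a reward automaton, so that the reward becomes Markovian on the extended state. A minimal Python sketch of that idea follows; the hand-written DFA (a toy "coffee only after a request" pattern), all names, and the reward value are our own illustrations, not the construction from the paper:

    # Toy DFA standing in for the automaton built from an LTLf/LDLf reward formula.
    # The transition table matches exact proposition sets for simplicity; unknown
    # inputs self-loop.
    class RewardDFA:
        def __init__(self, init, accepting, delta):
            self.init = init               # initial automaton state
            self.accepting = accepting     # set of accepting states
            self.delta = delta             # dict: (state, frozenset(props)) -> state

        def step(self, q, props):
            return self.delta.get((q, frozenset(props)), q)

    # Reward is earned once coffee is delivered after a request.
    dfa = RewardDFA(
        init="idle",
        accepting={"served"},
        delta={
            ("idle", frozenset({"request"})): "waiting",
            ("waiting", frozenset({"coffee"})): "served",
        },
    )

    def extended_step(mdp_state, dfa_state, props, reward_value=1.0):
        """One step of the compiled (Markovian) model: the new state is the pair
        (mdp_state, next_q); the reward depends only on this extended state."""
        next_q = dfa.step(dfa_state, props)
        reward = reward_value if next_q in dfa.accepting else 0.0
        return (mdp_state, next_q), reward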

    On First-Order μ-Calculus over Situation Calculus Action Theories

    In this paper we study verification of situation calculus action theories against first-order mu-calculus with quantification across situations. Specifically, we consider mu-La and mu-Lp, the two variants of mu-calculus introduced in the literature for verification of data-aware processes. The former requires that quantification ranges over objects in the current active domain, while the latter additionally requires that objects assigned to variables persist across situations. Each of these two logics has a distinct corresponding notion of bisimulation. In spite of the differences, we show that the two notions of bisimulation collapse for dynamic systems that are generic, which include all those systems specified through a situation calculus action theory. Then, by exploiting this result, we show that for bounded situation calculus action theories, mu-La and mu-Lp have exactly the same expressive power. Finally, we prove decidability of verification of mu-La properties over bounded action theories, using finite faithful abstractions. Differently from the mu-Lp case, these abstractions must depend on the number of quantified variables in the mu-La formula.
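
    To give a flavour of the properties involved, here is a toy fixpoint formula of the kind considered (our own example, with the active-domain restriction of mu-La left implicit):

        \nu X.\ \Big( \forall x.\ \big( \mathit{Pending}(x) \rightarrow \mu Y.\ ( \mathit{Done}(x) \vee \langle - \rangle\, Y ) \big) \Big) \wedge [-]\, X

    read as: along every evolution of the system, every object that is currently pending is eventually done. The variable x is bound in one situation and mentioned in later ones; this is exactly the quantification across situations whose two readings (mu-La vs. mu-Lp) the paper compares.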

    Progression and Verification of Situation Calculus Agents with Bounded Beliefs

    We investigate agents that have incomplete information and make decisions based on their beliefs expressed as situation calculus bounded action theories. Such theories have an infinite object domain, but the number of objects that belong to fluents at each time point is bounded by a given constant. Recently, it has been shown that verifying temporal properties over such theories is decidable. We take a first-person view and use the theory to capture what the agent believes about the domain of interest and the actions affecting it. In this paper, we study verification of temporal properties over online executions. These are executions resulting from agents performing only actions that are feasible according to their beliefs. To do so, we first examine progression, which captures belief state update resulting from actions in the situation calculus. We show that, for bounded action theories, progression, and hence belief states, can always be represented as a bounded first-order logic theory. Then, based on this result, we prove decidability of temporal verification over online executions for bounded action theories.
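
    The online executions studied here can be pictured as a loop in which the agent performs only actions that are feasible according to its current beliefs, and updates those beliefs by progression after each step. The following Python sketch is purely illustrative: entails_precondition and progress stand in for the first-order reasoning and the progression construction of the paper.

    def online_execution(beliefs, policy, entails_precondition, progress, horizon=10):
        """Run an online execution: act only when the action is known to be
        executable, then progress the belief state."""
        trace = []
        for _ in range(horizon):
            action = policy(beliefs)                    # choose an action from current beliefs
            if not entails_precondition(beliefs, action):
                break                                   # not feasible according to the beliefs
            beliefs = progress(beliefs, action)         # belief-state update (progression)
            trace.append(action)
        return trace, beliefs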

    Clock specifications for temporal tasks in planning and learning

    Recently, Linear Temporal Logics on finite traces, such as LTLf (or LDLf), have been advocated as high-level formalisms to express dynamic properties, such as goals in planning domains or rewards in Reinforcement Learning (RL). This paper addresses the challenge of separating high-level temporal specifications from the low-level details of the underlying environment (domain or MDP), by allowing the specifications to be expressed at a different time granularity than the environment. We study the notion of a clock which progresses the high-level LTLf specification, and whose ticks are triggered by dynamic (low-level) properties defined on the underlying environment. The obtained separation enables terse high-level specifications while allowing for very expressive forms of clocks, expressed as general LTLf properties over low-level features, such as counting or the occurrence/alternation of special events. We devise an automata-based construction to compile away the clock into a deterministic automaton that is polynomial in the size of the automata characterizing the high-level and clock specifications. We show the correctness of the approach and discuss its application in several contexts, including FOND planning, RL with LTLf Restraining Bolts, and Reward Machines.
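
    To make the notion of a clock concrete, here is a small Python sketch: the high-level automaton advances one step per clock tick, and ticks fire when a (low-level) condition holds. The automaton, the labelling function and the trace below are toy placeholders, not the paper's construction:

    def run_with_clock(low_level_trace, hl_delta, hl_init, clock_fires, hl_label):
        """Advance the high-level automaton once per clock tick.

        low_level_trace: iterable of low-level states
        hl_delta:        dict (q, high-level symbol) -> q'
        clock_fires:     predicate on a low-level state triggering a tick
        hl_label:        maps a low-level state to the symbol read at a tick
        """
        q = hl_init
        for s in low_level_trace:
            if clock_fires(s):                       # a tick of the clock
                q = hl_delta.get((q, hl_label(s)), q)
        return q

    # Toy usage: tick every time an "event" flag is raised in the low-level state.
    trace = [{"event": False}, {"event": True, "sym": "a"}, {"event": True, "sym": "b"}]
    delta = {("q0", "a"): "q1", ("q1", "b"): "q2"}
    final = run_with_clock(trace, delta, "q0",
                           clock_fires=lambda s: s.get("event", False),
                           hl_label=lambda s: s.get("sym"))
    print(final)  # q2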

    Mimicking behaviors in separated domains

    Devising a strategy to make a system mimic behaviors from another system is a problem that naturally arises in many areas of Computer Science. In this work, we interpret this problem in the context of intelligent agents, from the perspective of LTLf, a formalism commonly used in AI for expressing finite-trace properties. Our model consists of two separated dynamic domains, DA and DB, and an LTLf specification that formalizes the notion of mimicking by mapping properties on behaviors (traces) of DA into properties on behaviors of DB. The goal is to synthesize a strategy that, step by step, maps every behavior of DA into a behavior of DB so that the specification is met. We consider several forms of mapping specifications, ranging from simple ones to full LTLf, and for each we study synthesis algorithms and computational properties.
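
    The step-by-step mapping can be pictured as an orchestration loop: each observed move of DA is answered with a move of DB that keeps a monitor for the mapping specification unviolated. The Python sketch below is a simplification with hypothetical names; an actual strategy requires the game-theoretic lookahead computed by the synthesis algorithms studied in the paper, rather than the greedy choice shown here:

    def mimic(trace_A, moves_B, monitor_step, monitor_init):
        """trace_A: observed behavior of DA; moves_B(a) lists DB's candidate responses;
        monitor_step(m, a, b) returns the updated monitor state, or None if the
        mapping specification is violated."""
        m, trace_B = monitor_init, []
        for a in trace_A:
            for b in moves_B(a):
                m_next = monitor_step(m, a, b)
                if m_next is not None:          # greedily pick a non-violating response
                    m, chosen = m_next, b
                    break
            else:
                raise RuntimeError("no non-violating response available at this step")
            trace_B.append(chosen)
        return trace_B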

    Unilaterale Ulzerationen der Fingerspitzen [Unilateral fingertip ulcerations]

    A 56-year-old woman presented with chronic, recurrent bullous necrotic lesions on the distal phalanges of the index and middle fingers of the left hand, present for 4 months. In the same fingers she suffered from numbness, paralysis, pain that worsened at night, and loss of fine touch sensation. Treatment with topical antibiotics and corticosteroids as well as systemic ibuprofen and prednisone brought no improvement. The patient had no other illnesses, was a non-smoker, and denied any injury at work or at home. [Carpal tunnel syndrome (CTS) is caused by compression of the median nerve within the carpal tunnel. CTS usually presents with the classic triad of nocturnal pain, thenar atrophy and hypoesthesia [1], but other common neurologic symptoms include numbness, unrelenting pins-and-needles paresthesia, loss of sensitivity and motor deficits [2]. Moreover, CTS may show cutaneous involvement of the distal phalanges of the second and third fingers of the hand [1], as in our patient, who had no thenar atrophy. There are few reports that describe this cutaneous variant [2].]

    Exploiting Multiple Abstractions in Episodic RL via Reward Shaping

    One major limitation to the applicability of Reinforcement Learning (RL) to many practical domains is the large number of samples required to learn an optimal policy. To address this problem and improve learning efficiency, we consider a linear hierarchy of abstraction layers of the Markov Decision Process (MDP) underlying the target domain. Each layer is an MDP representing a coarser model of the one immediately below it in the hierarchy. In this work, we propose a novel form of Reward Shaping where the solution obtained at the abstract level is used to offer rewards to the more concrete MDP, in such a way that the abstract solution guides the learning in the more complex domain. In contrast with other works in Hierarchical RL, our technique has few requirements on the design of the abstract models and is also tolerant to modeling errors, thus making the proposed approach practical. We formally analyze the relationship between the abstract models and the exploration heuristic induced in the lower-level domain. Moreover, we prove that the method guarantees optimal convergence and we demonstrate its effectiveness experimentally. (This is an extended version of the paper presented at AAAI 2023, https://doi.org/10.1609/aaai.v37i6.2588.)
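
    One standard way to realize this kind of guidance, and a reasonable mental model for the construction described above, is potential-based reward shaping in which the potential of a concrete state is the value of its abstract counterpart. The Python sketch below is our own simplification with hypothetical names; whether the paper uses exactly this potential-based form is an assumption on our part:

    def shaped_reward(r, s, s_next, abstract_of, V_abstract, gamma=0.99):
        """Potential-based shaping F(s, s') = gamma * Phi(s') - Phi(s),
        with Phi(s) = V_abstract[abstract_of(s)] taken from the abstract solution."""
        phi = V_abstract[abstract_of(s)]
        phi_next = V_abstract[abstract_of(s_next)]
        return r + gamma * phi_next - phi

    Potential-based shaping of this form is known not to change the optimal policies of the underlying MDP, which is consistent in spirit with the optimal-convergence guarantee stated in the abstract.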

    Temporal Logic Monitoring Rewards via Transducers

    In Markov Decision Processes (MDPs), rewards are assigned according to a function of the last state and action. This is often limiting when the considered domain is not naturally Markovian, but becomes so only after careful engineering of an extended state space. The extended states record information from the past that is sufficient to assign rewards by looking just at the last state and action. Non-Markovian Reward Decision Processes (NMRDPs) extend MDPs by allowing for non-Markovian rewards, which depend on the history of states and actions. Non-Markovian rewards can be specified in temporal logics on finite traces such as LTLf/LDLf, with the great advantage of higher abstraction and succinctness; they can then be automatically compiled into an MDP with an extended state space. We contribute both to the techniques for handling temporal rewards and to the solutions for engineering them. We first present an approach to compiling temporal rewards which merges the formula automata into a single transducer, sometimes saving up to an exponential number of states. We then define monitoring rewards, which add a further level of abstraction to temporal rewards by adopting the four-valued conditions of runtime monitoring; we argue that our compilation technique allows for an efficient handling of monitoring rewards. Finally, we discuss the application to reinforcement learning.
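
    The move from one automaton per formula to a single reward transducer, together with the four-valued monitoring conditions, can be sketched as a Mealy-style machine that consumes the propositions of each step and emits a reward determined by the current monitoring verdict. Everything below (state names, the verdict-to-reward mapping) is an illustrative placeholder, not the paper's construction:

    # Four-valued runtime-monitoring verdicts mapped to rewards (toy values).
    VERDICT_REWARD = {
        "permanently_true": 10.0,    # the property can no longer be violated
        "currently_true": 1.0,       # satisfied by the trace so far
        "currently_false": 0.0,      # violated so far, but still recoverable
        "permanently_false": -10.0,  # the property can no longer be satisfied
    }

    class RewardTransducer:
        """A single transducer replacing one automaton per reward formula."""
        def __init__(self, init, delta, output):
            self.state = init
            self.delta = delta       # dict: (state, frozenset(props)) -> state
            self.output = output     # dict: state -> monitoring verdict

        def step(self, props):
            self.state = self.delta.get((self.state, frozenset(props)), self.state)
            return VERDICT_REWARD[self.output[self.state]]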

    High-level Programming via Generalized Planning and LTL Synthesis

    We look at program synthesis where the aim is to automatically synthesize a controller that operates on data structures and from which a concrete program can be easily derived. We do not aim at a fully-automatic process or tool that produces a program meeting a given specification of the program's behaviour. Rather, we aim at the design of a clear and well-founded approach for supporting programmers at the design and implementation phases. Concretely, we first show that a program synthesis task can be modeled as a generalized planning problem. This is done at an abstraction level where the involved data structures are seen as black boxes that can be interfaced with via actions and observations, the first corresponding to the operations and the second to the queries provided by the data structure. The abstraction level is high enough to capture intuitive and common assumptions as well as general and simple strategies used by programmers, and yet it contains sufficient structure to support the automated generation of concrete solutions (in the form of controllers). From such controllers and the use of standard data structures, an actual program in a general language like C++ or Python can be easily obtained. Then, we discuss how the resulting generalized planning problem can be reduced to an LTL synthesis problem, thus making available any LTL synthesis engine for obtaining the controllers. We illustrate the effectiveness of the approach on a series of examples.
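
    The final step, going from a synthesized controller to an actual program, can be illustrated with a tiny interpreter that repeatedly queries a black-box data structure (observation) and invokes the operation prescribed by the controller (action). The controller format, the stack example and all names below are our own illustration, not output of the described approach:

    def run_controller(controller, structure, observe, horizon=100):
        """controller: dict (ctrl_state, observation) -> (operation, next_ctrl_state);
        observe(structure) returns the current query result."""
        q = "q0"
        for _ in range(horizon):
            obs = observe(structure)
            if (q, obs) not in controller:
                break                                  # controller halts
            op, q = controller[(q, obs)]
            op(structure)

    # Toy usage: empty a stack by popping while it is non-empty.
    stack = [1, 2, 3]
    controller = {("q0", "non_empty"): (lambda s: s.pop(), "q0")}
    run_controller(controller, stack,
                   observe=lambda s: "non_empty" if s else "empty")
    print(stack)  # []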