43 research outputs found

    Robust degradation and enhancement of robot mission behaviour in unpredictable environments

    © 2015 ACM. Temporal logic-based approaches that automatically generate controllers have been shown to be useful for mission-level planning of motion, surveillance and navigation, among others. These approaches critically rely on the validity of the environment models used for synthesis. Yet simplifying assumptions are inevitable to reduce complexity and provide mission-level guarantees; no plan can guarantee results in a model of a world in which everything can go wrong. In this paper, we show how our approach, which reduces reliance on a single model by introducing a stack of models, can endow systems with incremental guarantees based on increasingly strengthened assumptions, supporting graceful degradation when the environment does not behave as expected, and progressive enhancement when it does.
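
    The abstract does not spell out its formalism, so the following Python sketch only illustrates the general idea of a stack of models: a list of controllers, each synthesised under increasingly strong environment assumptions, with the strongest currently valid one used at runtime. All names (Level, choose_action, the door example) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the "stack of models" idea: controllers ordered from the
# strongest environment assumption to the weakest, consulted at runtime.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Level:
    name: str
    assumption_holds: Callable[[dict], bool]  # runtime check of the assumption
    controller: Callable[[dict], str]         # action chosen under this model

def choose_action(levels: List[Level], observation: dict) -> str:
    """Use the strongest level whose assumption currently holds (enhancement),
    falling back to weaker levels when it does not (degradation)."""
    for level in levels:                      # levels ordered strongest first
        if level.assumption_holds(observation):
            return level.controller(observation)
    raise RuntimeError("even the weakest assumption was violated")

# Toy example: assume doors open on request at the strong level,
# plan around closed doors at the weak level.
levels = [
    Level("doors-open", lambda o: o.get("door_responsive", False),
          lambda o: "go_through_door"),
    Level("doors-may-fail", lambda o: True,
          lambda o: "take_detour"),
]
print(choose_action(levels, {"door_responsive": False}))  # -> take_detour
```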

    Controllability in partial and uncertain environments

    © 2014 IEEE. Controller synthesis is a well-studied problem that attempts to automatically generate an operational behaviour model of the system-to-be that satisfies a given goal when deployed in a given domain model that behaves according to specified assumptions. A limitation of many controller synthesis techniques is that they require complete descriptions of the problem domain. This is limiting in the context of modern incremental development processes, in which a fully described problem domain is unavailable, undesirable or uneconomical. Previous work on Modal Transition System (MTS) control problems exists; however, it is restricted to deterministic MTSs and deterministic Labelled Transition System (LTS) implementations. In this paper we study the Modal Transition System Control Problem in its full generality, allowing for nondeterministic MTSs modelling the environment's behaviour and nondeterministic LTS implementations. Given a nondeterministic MTS, we ask if all, none or some of the nondeterministic LTSs it describes admit an LTS controller that guarantees a given property. We present a technique that effectively solves the MTS realisability problem and show that, in some cases, it can be reduced to deterministic control problems. In all cases, the MTS realisability problem is in the same complexity class as the corresponding LTS problem.
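
    As a rough formalisation of the all/none/some question posed in the abstract, one might write the following; the notation (Impl(M), E ∥ C, φ) is illustrative and not necessarily the paper's.

```latex
% Illustrative notation only: M is an MTS, Impl(M) the set of LTS implementations
% it describes, \varphi the control goal, and E \parallel C the deployment of
% controller C on environment implementation E.
\[
\textit{all}:\quad  \forall E \in \mathrm{Impl}(M).\ \exists C.\ E \parallel C \models \varphi
\qquad\qquad
\textit{none}:\quad \forall E \in \mathrm{Impl}(M).\ \neg\,\exists C.\ E \parallel C \models \varphi
\]
% "some" is the remaining case: a controller exists for some implementations but not for all.
```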

    Model checker execution reports

    Software model checking constitutes an undecidable problem and, as such, even an ideal tool will in some cases fail to give a conclusive answer. In practice, software model checkers fail often and usually do not provide any information on what was effectively checked. The purpose of this work is to provide a conceptual framing to extend software model checkers in a way that allows users to access information about incomplete checks. We characterize the information that model checkers themselves can provide, in terms of analyzed traces, i.e. sequences of statements, and safe cones, and present the notion of execution reports (ERs), which we also formalize. We instantiate these concepts for a family of techniques based on Abstract Reachability Trees and implement the approach using the software model checker CPAchecker. We evaluate our approach empirically and provide examples to illustrate the ERs produced and the information that can be extracted.
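
    The abstract does not give the data model of an execution report, but a minimal sketch of the kind of information it mentions (a verdict plus the traces that were effectively analyzed) might look as follows. All field names are assumptions for illustration, not CPAchecker's or the paper's API.

```python
# Hypothetical shape of an execution report (ER); names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExecutionReport:
    verdict: str                       # "safe", "unsafe", or "inconclusive"
    analyzed_traces: List[List[str]] = field(default_factory=list)  # statement sequences covered
    notes: str = ""                    # e.g. which abstraction stopped the exploration

report = ExecutionReport(
    verdict="inconclusive",
    analyzed_traces=[["x = read()", "assert x >= 0"]],
    notes="exploration stopped before the loop was fully unrolled",
)
print(f"{report.verdict}: {len(report.analyzed_traces)} trace(s) analyzed")
```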

    Exploration policies for on-the-fly controller synthesis: a reinforcement learning approach

    Controller synthesis is in essence a case of model-based planning for non-deterministic environments in which plans (actually “strategies”) are meant to preserve system goals indefinitely. In the case of supervisory control, environments are specified as the parallel composition of state machines, and valid strategies are required to be “non-blocking” (i.e., always enabling the environment to reach certain marked states) in addition to safe (i.e., keeping the system within a safe zone). Recently, On-the-fly Directed Controller Synthesis techniques were proposed to avoid exploring the entire, exponentially large, environment space, at the cost of non-maximal permissiveness, to either find a strategy or conclude that there is none. The incremental exploration of the plant is currently guided by a domain-independent, human-designed heuristic. In this work, we propose a new method for obtaining heuristics based on Reinforcement Learning (RL). The synthesis algorithm is thus framed as an RL task with an unbounded action space, and a modified version of DQN is used. With a simple and general set of features that abstracts both states and actions, we show that it is possible to learn heuristics on small versions of a problem that generalize to the larger instances, effectively doing zero-shot policy transfer. Our agents learn from scratch in a highly partially observable RL task and outperform the existing heuristic overall, in instances unseen during training.
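
    A minimal sketch, assuming a featurisation of candidate transitions and a learned value estimate, of how such a heuristic could guide the incremental exploration; none of the names or features below come from the paper, and the hand-written scoring function stands in for the trained DQN.

```python
# Sketch: a learned heuristic scores the frontier of the partially explored plant
# and picks the next transition to expand. Feature names are illustrative only.
import random
from typing import Callable, List, Tuple

Features = Tuple[float, ...]

def epsilon_greedy_expand(
    frontier: List[dict],
    featurize: Callable[[dict], Features],
    q_value: Callable[[Features], float],
    epsilon: float = 0.1,
) -> dict:
    """Choose the next unexplored transition: usually the one the learned
    Q-estimate prefers, occasionally a random one (exploration)."""
    if random.random() < epsilon:
        return random.choice(frontier)
    return max(frontier, key=lambda t: q_value(featurize(t)))

# Toy usage with a hand-written stand-in for the learned network.
frontier = [{"controllable": True, "depth": 3}, {"controllable": False, "depth": 1}]
featurize = lambda t: (float(t["controllable"]), 1.0 / (1 + t["depth"]))
q_value = lambda f: 0.7 * f[0] + 0.3 * f[1]
print(epsilon_greedy_expand(frontier, featurize, q_value))
```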

    Minimising makespan of discrete controllers: a qualitative approach

    Qualitative controller synthesis techniques produce controllers that are guaranteed to achieve a given goal in the presence of an adversarial environment. However, qualitative synthesis only produces one controller out of many possible solutions and typically does not provide support for expressing preferences over the alternatives. In this paper, we thus present a formal approach to reason about preferences qualitatively, restricting attention to the makespan of discrete event-based controllers for reachability goals. Time is reasoned upon symbolically, which relieves the user from providing concrete quantitative measures. In particular, we study the scenario in which the durations of individual activities are not known up-front. We first show how controllers can be symbolically and fairly compared by fixing the contingencies. Then, we present an algorithm to produce controllers that are makespan-minimising.
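
    A hedged illustration of what symbolic makespan comparison with unknown durations could look like; the notation is mine, and it assumes the activities in a run execute sequentially.

```latex
% Illustrative only; the symbols are not the paper's notation.
% With unknown activity durations d_a, the makespan of a run r = a_1 \dots a_n
% that executes its activities sequentially can be kept symbolic:
\[
\mathrm{mk}(r) \;=\; \sum_{i=1}^{n} d_{a_i}.
\]
% Two controllers can then be compared fairly by fixing the same contingency
% (the same environment choices) and asking whether
% \mathrm{mk}(r_1) \le \mathrm{mk}(r_2) holds for every valuation of the d_a.
```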

    Dynamic update of discrete event controllers

    Discrete event controllers are at the heart of many software systems that require continuous operation. Changing these controllers at runtime to cope with changes in their execution environment or in the system requirements is a challenging open problem. In this paper we address the problem of dynamic update of controllers in reactive systems. We present a general approach to specifying correctness criteria for dynamic update and a technique for automatically computing a controller that handles the transition from the old to the new specification, assuring that the system will reach a state in which such a transition can correctly occur and in which the underlying system architecture can reconfigure. Our solution uses discrete event controller synthesis to automatically build a controller that guarantees both progress towards the update and a safe update.
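
    A minimal sketch of the runtime side of such an update, assuming the synthesis step has already produced a predicate that identifies safe reconfiguration states; the names are hypothetical and this is not the paper's algorithm.

```python
# Keep executing the old controller until a state is reached where switching is
# known to be safe, then reconfigure to the new controller.
from typing import Callable, Iterable, List

def run_with_update(
    old_ctrl: Callable[[str], str],
    new_ctrl: Callable[[str], str],
    update_safe: Callable[[str], bool],   # would come from the synthesised update controller
    states: Iterable[str],
) -> List[str]:
    actions, switched = [], False
    for state in states:
        if not switched and update_safe(state):
            switched = True               # safe reconfiguration point reached
        actions.append((new_ctrl if switched else old_ctrl)(state))
    return actions

# Toy run: the update becomes safe once the system is idle.
print(run_with_update(lambda s: "old:" + s, lambda s: "new:" + s,
                      lambda s: s == "idle", ["busy", "idle", "busy"]))
```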

    Quantum and random walks as universal generators of probability distributions

    Quantum walks and random walks bear similarities and divergences. One of the most remarkable disparities affects the probability of finding the particle at a given location: typically, an almost flat function in the first case and a bell-shaped one in the second. Here I show how one can impose any desired stochastic behavior (compatible with the continuity equation for the probability function) on both systems by the appropriate choice of time- and site-dependent coins. This implies, in particular, that one can devise quantum walks that show diffusive spreading without losing coherence, as well as random walks that exhibit the characteristic fast propagation of a quantum particle driven by a Hadamard coin.
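
    To see the contrast the abstract starts from, a standard Hadamard-coin quantum walk and a simple random walk can be simulated side by side. This is a textbook illustration with arbitrary parameters, not the paper's time- and site-dependent coin construction.

```python
# Discrete-time quantum walk (Hadamard coin) vs. classical random walk on a line.
import numpy as np

steps, size = 100, 201                # positions -100..100, particle starts at the centre
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Quantum walk: amplitude array psi[position, coin]
psi = np.zeros((size, 2), dtype=complex)
psi[size // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state
for _ in range(steps):
    psi = psi @ H.T                   # Hadamard on the coin register
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]      # coin 0 moves left
    shifted[1:, 1] = psi[:-1, 1]      # coin 1 moves right
    psi = shifted
quantum_p = (np.abs(psi) ** 2).sum(axis=1)

# Classical random walk: probability vector over positions
prob = np.zeros(size)
prob[size // 2] = 1.0
for _ in range(steps):
    prob = 0.5 * np.roll(prob, 1) + 0.5 * np.roll(prob, -1)

x = np.arange(size) - size // 2
print("quantum std dev  :", float(np.sqrt(np.sum(quantum_p * x ** 2))))   # grows ~ linearly in steps
print("classical std dev:", float(np.sqrt(np.sum(prob * x ** 2))))        # grows ~ sqrt(steps)
```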

    Worst-case analysis of heap allocations

    In object-oriented languages, dynamic memory allocation is a fundamental concept. When using such a language in hard real-time systems, it becomes important to bound both the worst-case execution time and the worst-case memory consumption. In this paper, we present an analysis to determine the worst-case heap allocations of tasks. The analysis builds upon techniques that are well established for worst-case execution time analysis. The difference is that the cost function is not the execution time of instructions in clock cycles, but the allocation in bytes. In contrast to worst-case execution time analysis, worst-case heap allocation analysis is not processor dependent. However, the cost function depends on the object layout of the runtime system. The analysis is evaluated with several real-time benchmarks to establish the usefulness of the analysis and to compare the memory consumption of different object layouts.
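
    A toy sketch of the idea of swapping the cost function: a WCET-style bound computed over a structured program model where each block's cost is bytes allocated rather than clock cycles. The program model, loop bounds and byte costs below are made up for illustration and are much simpler than the paper's analysis.

```python
# Worst-case heap allocation of a structured program: sum over sequences, max over
# branches, and loop bound times body cost for loops with known bounds.
from dataclasses import dataclass
from typing import List

@dataclass
class Alloc:      # a straight-line block that allocates `size` bytes
    size: int

@dataclass
class Seq:
    parts: List

@dataclass
class Branch:     # take the worse of the two arms
    then: object
    els: object

@dataclass
class Loop:       # body executed at most `bound` times
    bound: int
    body: object

def worst_case_bytes(node) -> int:
    if isinstance(node, Alloc):
        return node.size
    if isinstance(node, Seq):
        return sum(worst_case_bytes(p) for p in node.parts)
    if isinstance(node, Branch):
        return max(worst_case_bytes(node.then), worst_case_bytes(node.els))
    if isinstance(node, Loop):
        return node.bound * worst_case_bytes(node.body)
    raise TypeError(node)

# e.g. a loop allocating a 24-byte object per iteration, plus one of two buffers
prog = Seq([Loop(10, Alloc(24)), Branch(Alloc(64), Alloc(128))])
print(worst_case_bytes(prog))   # -> 368
```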

    Less is More: Estimating Probabilistic Rewards over Partial System Explorations
