
    Monitoring with uncertainty

    We discuss the problem of runtime verification of an instrumented program that fails to emit and to monitor some events. Such gaps can occur when a monitoring overhead control mechanism is introduced to disable the monitor of an application with real-time constraints. We show how to use statistical models to learn the application's behavior and to "fill in" the introduced gaps. Finally, we present and discuss techniques developed over the last three years to estimate the probability that a property of interest is violated in the presence of an incomplete trace. (In Proceedings HAS 2013, arXiv:1308.490)
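    The abstract gives no implementation details; as a minimal sketch of the underlying idea only, the hypothetical fragment below propagates a state belief through a learned Markov model of the event stream to estimate how likely a violating state was entered during a monitoring gap. The transition matrix, the state labels, and the absorbing "violation" state are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: estimate the probability that a property is
# violated during a monitoring gap, using a learned Markov model of
# the application's event behavior. Model and states are assumed.

# Learned transition matrix over abstract event states 0..2,
# where state 2 represents a property violation.
T = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.85, 0.05],
    [0.00, 0.00, 1.00],   # violation is absorbing
])

def violation_probability(belief, gap_length):
    """Propagate the state belief across `gap_length` unobserved
    events and return the probability mass in the violating state."""
    b = np.asarray(belief, dtype=float)
    for _ in range(gap_length):
        b = b @ T
    return b[2]

# Example: monitoring resumed after 5 missed events, starting from
# certainty that the last observed event left us in state 0.
print(violation_probability([1.0, 0.0, 0.0], gap_length=5))
```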

    Self-Adaptive Resource Allocation for Event Monitoring with Uncertainty in Sensor Networks

    Event monitoring is an important application of sensor networks. Multiple parties, with different surveillance targets, can share the same network, with limited sensing resources, to monitor their events of interest simultaneously. Such a system earns profit by allocating sensing resources to missions to collect event-related information (e.g., videos, photos, electromagnetic signals). We address the problem of dynamically assigning resources to missions so as to achieve maximum profit under uncertainty in event occurrence. We consider time-varying resource demands and profits, and multiple concurrent surveillance missions. We model each mission as a sequence of monitoring attempts, each allocated a certain amount of resources, on a specific set of events that occurs as a Markov process. We propose a Self-Adaptive Resource Allocation algorithm (SARA) to adaptively and efficiently allocate resources according to the results of previous observations. By means of simulations we compare SARA to previous solutions and show SARA's potential to find higher profit in both static and dynamic scenarios.
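    SARA's internals are not given in the abstract; the toy sketch below only illustrates the general adapt-from-observations loop described there: resources are split across missions in proportion to expected profit, with event-occurrence estimates updated after each monitoring attempt. The Laplace-smoothed estimator, the profit model, and all names are assumptions, not the published algorithm.

```python
import random

# Illustrative sketch only: a simplified adaptive allocator in the
# spirit of the abstract (allocate sensing resources to missions,
# adapting to the results of previous observations). The update rule
# and profit model are assumptions, not SARA itself.

class Mission:
    def __init__(self, name, profit_per_detection):
        self.name = name
        self.profit = profit_per_detection
        self.attempts = 0
        self.detections = 0

    def occurrence_estimate(self):
        # Laplace-smoothed empirical estimate of event occurrence.
        return (self.detections + 1) / (self.attempts + 2)

def allocate(missions, total_resources):
    """Split resources proportionally to expected profit per mission,
    i.e. profit weighted by the estimated occurrence probability."""
    weights = [m.profit * m.occurrence_estimate() for m in missions]
    total = sum(weights)
    return [total_resources * w / total for w in weights]

missions = [Mission("perimeter", profit_per_detection=5.0),
            Mission("convoy", profit_per_detection=8.0)]

for step in range(3):
    shares = allocate(missions, total_resources=100.0)
    for m, share in zip(missions, shares):
        m.attempts += 1
        # Stand-in for the outcome of an actual monitoring attempt.
        m.detections += random.random() < 0.3
        print(step, m.name, round(share, 1))
```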

    Learning Pedagogical Policies from Few Training Data

    [Poster of the 17th European Conference on Artificial Intelligence (ECAI'06), Workshop on Planning, Learning and Monitoring with Uncertainty and Dynamic Worlds, Riva del Garda, Italy, August 8, 2006.] Learning a pedagogical policy in an Adaptive and Intelligent Educational System (AIES) fits naturally as a Reinforcement Learning (RL) problem. However, learning pedagogical policies requires acquiring a huge amount of experience interacting with students, so applying RL to an AIES from scratch is infeasible. In this paper we describe RLATES, an AIES that uses RL to learn an accurate pedagogical policy to teach a course on Database Design. To reduce the experience required to learn the pedagogical policy, we propose to use an initial value function learned with simulated students, whose model is provided by an expert as a Markov Decision Process. Empirical results demonstrate that the value function learned with the simulated students and transferred to the AIES is a very accurate initial pedagogical policy. The evaluation is based on the interaction of more than 70 Computer Science undergraduate students, and demonstrates that an efficient guide through the contents of the educational system is obtained. This work was supported by the project GPS (TIN2004/07083).
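    As a hedged illustration of the transfer idea (not RLATES itself), the sketch below runs tabular Q-learning against a stand-in simulated student and shows how the resulting Q-table could seed later learning with real students. The state space, actions, and simulator are hypothetical placeholders.

```python
import random
from collections import defaultdict

# Minimal sketch of value-function transfer: learn cheaply against a
# simulated student, then reuse the Q-table with real students.
# States, actions, and the simulator are illustrative assumptions.

ACTIONS = ["show_topic", "show_example", "give_test"]

def q_learning(env_step, episodes, q=None, alpha=0.1, gamma=0.9, eps=0.2):
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        state, done = "start", False
        while not done:
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = env_step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

def simulated_student_step(state, action):
    # Toy stand-in for the expert-provided MDP of student behavior.
    learned = action == "show_example" and random.random() < 0.7
    if learned:
        return "done", 1.0, True
    return "start", 0.0, random.random() < 0.3

# Phase 1: learn against the simulated student.
q0 = q_learning(simulated_student_step, episodes=500)
# Phase 2 (hypothetical): continue from q0 with real students,
# needing far fewer real interactions.
# q = q_learning(real_student_step, episodes=100, q=q0)
```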

    Cyber-Physical Systems: A Challenge of the 21st Century

    Cyber-physical systems and the Internet of Things will be omnipresent in the near future. These systems will be tightly integrated into, and interact with, our environment to support us in our daily tasks and in achieving our personal goals. However, to realize this vision, we have to tackle various challenges.

    FluCaP: A Heuristic Search Planner for First-Order MDPs

    We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). Our approach combines first-order state abstraction, which avoids evaluating states individually, with heuristic search, which avoids evaluating all states. First, in contrast to existing systems, which start by propositionalizing the FOMDP and then perform state abstraction on the propositionalized version, we apply state abstraction directly on the FOMDP, avoiding propositionalization. This kind of abstraction is referred to as first-order state abstraction. Second, guided by an admissible heuristic, the search is restricted to those states that are reachable from the initial state. We demonstrate the usefulness of these techniques with a system, referred to as FluCaP (formerly FCPlanner), that entered the probabilistic track of the 2004 International Planning Competition (IPC2004) and demonstrated an advantage over other planners on problems represented in first-order terms.
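    The second ingredient, admissible-heuristic search restricted to reachable states, can be illustrated with plain deterministic A*. Note this toy deliberately ignores both the probabilistic (MDP) setting and the first-order abstraction that FluCaP actually relies on; it only shows why an admissible heuristic lets the search skip states that are never reached from the initial state.

```python
import heapq

# Illustrative A* sketch, not FluCaP: expand reachable states in
# order of g + h; with an admissible h, unpromising parts of the
# state space are never touched.

def heuristic_search(start, goal, successors, h):
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy domain: move along a line from 0 to 6; a +1 step costs 1 and a
# +2 step costs 3, so the all-+1 path is optimal.
succ = lambda s: [(n, c) for n, c in [(s + 1, 1), (s + 2, 3)] if n <= 6]
print(heuristic_search(0, 6, succ, h=lambda s: max(0, 6 - s)))
```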

    Learning in Relational Contracts

    We study relational contracts between a firm and a worker with mutual uncertainty about match quality. The worker's actions are publicly observed and generate both output and information about the match quality. We show that relational contracts may be inefficient, and we characterize the inefficiency through a holdup problem on contemporaneous output. In the frequent-action limit, these inefficiencies persist if and only if information degrades at least at the same rate at which impatience vanishes. We characterize optimal relational contracts and show that they involve actions that yield both a lower payoff and less information than some other available action.
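    As a purely numerical illustration (not the paper's model), the snippet below shows the information channel the abstract refers to: a Bayes update of the common belief about match quality after one observed outcome, where an informative action moves the belief much further than a nearly uninformative one. The two-type model and all numbers are assumptions.

```python
# Hypothetical illustration: the worker's observed outcome updates
# the parties' common belief that the match is "good".

def posterior(prior_good, p_success_good, p_success_bad, success):
    """Bayes update of P(match is good) after observing one outcome."""
    like_good = p_success_good if success else 1 - p_success_good
    like_bad = p_success_bad if success else 1 - p_success_bad
    num = prior_good * like_good
    return num / (num + (1 - prior_good) * like_bad)

prior = 0.5
# An informative action: success rates differ sharply across types.
print(posterior(prior, 0.9, 0.3, success=True))   # belief jumps to 0.75
# A barely informative action: rates nearly equal, belief hardly
# moves, mirroring the payoff/information trade-off in the abstract.
print(posterior(prior, 0.6, 0.5, success=True))   # ~0.55
```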