
    Simple Stochastic Temporal Constraint Networks

    Many artificial intelligence tasks (e.g., planning, situation assessment, scheduling) require reasoning about events in time. Temporal constraint networks offer an elegant and often computationally efficient framework for such temporal reasoning tasks. However, the temporal data and knowledge available in some domains are necessarily imprecise, e.g., as a result of measurement errors associated with sensors. This paper introduces stochastic temporal constraint networks, extending constraint-based approaches to temporal reasoning with precise temporal knowledge so that they can handle stochastic imprecision. The paper proposes an algorithm for inferring implicit stochastic temporal constraints from a given set of explicit constraints. It also introduces a stochastic version of the temporal constraint network consistency problem and describes techniques for solving it under certain simplifying assumptions.
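    The stochastic algorithms themselves are not given in the abstract, but the consistency problem the paper generalizes is the classical one for (non-stochastic) simple temporal networks: a network is consistent iff its distance-graph encoding has no negative cycle. A minimal sketch of that baseline check, using Floyd-Warshall (all names and the example intervals are hypothetical):

    ```python
    import math

    def stn_consistent(n, constraints):
        """Consistency of a simple temporal network over n time points.

        constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi.
        Returns True iff some assignment of times satisfies every
        constraint, i.e. the distance graph has no negative cycle.
        """
        # Distance graph: edge i->j weighted hi (t_j - t_i <= hi)
        # and edge j->i weighted -lo (t_i - t_j <= -lo).
        d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
        for i, j, lo, hi in constraints:
            d[i][j] = min(d[i][j], hi)
            d[j][i] = min(d[j][i], -lo)
        # Floyd-Warshall all-pairs shortest paths.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        # A negative self-distance means a negative cycle: inconsistent.
        return all(d[i][i] >= 0 for i in range(n))

    # Consistent: t1-t0 in [1,3], t2-t1 in [2,4], t2-t0 in [3,7].
    print(stn_consistent(3, [(0, 1, 1, 3), (1, 2, 2, 4), (0, 2, 3, 7)]))   # True
    # Inconsistent: forcing t2-t0 into [10,20] contradicts the chain above.
    print(stn_consistent(3, [(0, 1, 1, 3), (1, 2, 2, 4), (0, 2, 10, 20)]))  # False
    ```

    The stochastic version replaces the fixed interval bounds with distributions; this crisp check is only the deterministic special case the paper starts from.
    
    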

    Stochastic Constraint Programming

    To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number of complete algorithms and approximation procedures. Finally, we discuss a number of extensions of stochastic constraint programming that relax various assumptions, such as the independence between stochastic variables, and compare with other approaches to decision making under uncertainty.
    Comment: Proceedings of the 15th European Conference on Artificial Intelligence
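    The decision/stochastic variable split can be illustrated with a tiny one-stage chance-constrained program, solved by brute-force enumeration. This is a toy sketch, not the paper's algorithms; the stocking scenario, distribution, and threshold are all hypothetical:

    ```python
    def solve_one_stage(decisions, scenarios, constraint, theta):
        """Enumerate decision values; keep those whose probability of
        satisfying `constraint` over the stochastic variable is >= theta.

        scenarios: list of (value, probability) pairs for the stochastic
        variable. Returns a list of (decision, satisfaction_probability).
        """
        feasible = []
        for x in decisions:
            p = sum(prob for s, prob in scenarios if constraint(x, s))
            if p >= theta:
                feasible.append((x, p))
        return feasible

    # Hypothetical distribution of the stochastic demand variable s.
    demand = [(1, 0.3), (2, 0.5), (3, 0.2)]
    # Chance constraint: stock level x must cover demand s with prob >= 0.8.
    result = solve_one_stage(range(4), demand, lambda x, s: x >= s, theta=0.8)
    print(result)
    ```

    Here x = 2 already satisfies the constraint with probability 0.8 and x = 3 with certainty. Real stochastic constraint programs interleave multiple decision and stochastic stages, which is where the complete and approximate procedures in the paper come in.
    
    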

    Metareasoning for Planning Under Uncertainty

    The conventional model for online planning under uncertainty assumes that an agent can stop and plan without incurring costs for the time spent planning. However, planning time is not free in most real-world settings. For example, an autonomous drone is subject to nature's forces, like gravity, even while it thinks, and must either pay a price for counteracting those forces to stay in place or grapple with the state change caused by acquiescing to them. Policy optimization in these settings requires metareasoning: a process that trades off the cost of planning against the potential policy improvement that planning can achieve. We formalize and analyze the metareasoning problem for Markov Decision Processes (MDPs). Our work subsumes previously studied special cases of metareasoning and shows that, in the general case, metareasoning is at most polynomially harder than solving MDPs with any given algorithm that disregards the cost of thinking. For reasons we discuss, optimal general metareasoning turns out to be impractical, motivating approximations. We present approximate metareasoning procedures that rely on special properties of the BRTDP planning algorithm and explore the effectiveness of our methods on a variety of problems.
    Comment: Extended version of IJCAI 2015 paper
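    The paper's approximations build on BRTDP, which the abstract does not detail. As a rough illustration of the underlying trade-off only (not the paper's method), one can run value iteration on a toy MDP and stop as soon as the largest Bellman residual, a crude proxy for the value of further planning, drops below a per-sweep thinking cost. The MDP, names, and cost values below are all hypothetical:

    ```python
    def bellman_sweep(V, mdp, gamma=0.95):
        """One Bellman backup over every state. Returns the updated value
        function and the largest single-state change, a crude proxy for
        how much further planning could still improve the policy."""
        newV, delta = {}, 0.0
        for s, actions in mdp.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            newV[s] = best
        return newV, delta

    def plan_with_thinking_cost(mdp, think_cost, max_sweeps=1000):
        """Keep planning only while the estimated benefit of one more
        sweep exceeds the cost of the time spent computing it."""
        V = {s: 0.0 for s in mdp}
        sweeps = 0
        for _ in range(max_sweeps):
            V, delta = bellman_sweep(V, mdp)
            sweeps += 1
            if delta < think_cost:  # more deliberation is not worth its price
                break
        return V, sweeps

    # Hypothetical two-state MDP:
    # state -> action -> list of (probability, next_state, reward).
    mdp = {
        "s0": {"go": [(1.0, "s1", 0.0)], "stay": [(1.0, "s0", 0.1)]},
        "s1": {"stay": [(1.0, "s1", 1.0)]},
    }

    V_hi, n_hi = plan_with_thinking_cost(mdp, think_cost=0.5)
    V_lo, n_lo = plan_with_thinking_cost(mdp, think_cost=1e-6)
    print(n_hi, n_lo)  # deliberation stops sooner when thinking is costly
    ```

    An agent whose thinking is expensive settles for a coarser value estimate after fewer sweeps; the paper's contribution is formalizing when such early stopping is near-optimal rather than merely heuristic.
    
    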