
    Planning, Acting, and Learning in Incomplete Domains

    The engineering of complete planning domain descriptions is often very costly because of human error or lack of domain knowledge. Learning complete domain descriptions is also very challenging because many features are irrelevant to achieving the goals and data may be scarce. Given incomplete knowledge of their actions, agents can ignore the incompleteness, plan around it, ask questions of a domain expert, or learn through trial and error. Our agent Goalie learns about the preconditions and effects of its incompletely-specified actions by monitoring the environment state. In conjunction with the plan failure explanations generated by its planner DeFault, Goalie diagnoses past and future action failures. DeFault computes failure explanations for each action and state in the plan and counts the number of incomplete domain interpretations wherein failure will occur. The question-asking strategies employed by our extended Goalie agent using these conjunctive normal form-based plan failure explanations are goal-directed and attempt to approach always-successful execution while asking the fewest questions possible. In sum, Goalie: i) interleaves acting, planning, and question-asking; ii) synthesizes plans that avoid execution failure due to ignorance of the domain model; iii) uses these plans to identify relevant (goal-directed) questions; iv) passively learns about the domain model during execution to improve later replanning attempts; and v) employs various targeted (goal-directed) strategies to ask questions (actively learn). Our planner DeFault is the first to reason about a domain's incompleteness to avoid potential plan failure. We show that DeFault performs best by counting prime implicants (failure diagnoses) rather than propositional models. Further, we show that by reasoning about incompleteness in planning (as opposed to ignoring it), Goalie fails and replans less often, and executes fewer actions. Finally, we show that goal-directed knowledge acquisition - prioritizing questions based on plan failure diagnoses - leads to fewer questions, lower overall planning and replanning time, and higher success rates than approaches that naively ask many questions or learn by trial and error.
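    The following is a minimal, hypothetical sketch (not the authors' DeFault code; the function and variable names are invented for illustration) of the core counting idea: each action may carry "possible" preconditions whose presence in the true domain is unknown, and we count the incomplete-domain interpretations under which the plan fails. For simplicity it enumerates propositional models rather than the prime-implicant diagnoses the paper reports as more effective.

        from itertools import product

        def plan_failure_count(plan, init_state, possible_pres):
            """plan: list of (known_pre, possible_pre, add, delete) tuples over sets of facts.
            possible_pres: set of (action_index, proposition) choice points whose status is unknown."""
            choices = sorted(possible_pres)
            failures = 0
            for bits in product([False, True], repeat=len(choices)):
                required = {c for c, b in zip(choices, bits) if b}
                state = set(init_state)
                failed = False
                for i, (known_pre, possible_pre, add, delete) in enumerate(plan):
                    needed = set(known_pre) | {p for p in possible_pre if (i, p) in required}
                    if not needed <= state:   # some (possibly unknown) precondition is unmet
                        failed = True
                        break
                    state = (state - set(delete)) | set(add)
                failures += failed
            return failures  # interpretations of the incomplete domain in which the plan fails

        # Usage: one action that is known to need "a" and may also need "b".
        plan = [({"a"}, {"b"}, {"g"}, set())]
        print(plan_failure_count(plan, {"a"}, {(0, "b")}))  # -> 1 of 2 interpretations fail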

    Multi-Agent Planning with Planning Graph

    In this paper, we consider planning for multi-agent situations in STRIPS-like domains with the planning graph. Three possible relationships between agents' goals are considered in order to evaluate plans: the agents may be collaborative, adversarial, or indifferent entities. We propose algorithms to deal with each situation. Collaborative situations can be handled with the original Graphplan algorithm by redefining the domain in a proper way. Forward-chaining and backward-chaining algorithms are discussed to find infallible plans in adversarial situations. When such plans cannot be found, the agent can still attempt to find a plan for achieving some part of the goals. A forward-chaining algorithm is also proposed to find plans for agents with independent goals.
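    As a rough illustration of the "redefine the domain" idea for collaborative agents, the sketch below (assumed data structures, not the paper's code) tags each agent's STRIPS operators with the agent's name and plans over their union, so an unmodified single-agent Graphplan-style planner can be reused.

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)
        class Action:
            name: str
            preconditions: frozenset
            add_effects: frozenset
            del_effects: frozenset

        def merge_collaborative_domains(agent_actions):
            """agent_actions: dict mapping agent name -> iterable of Action.
            Returns one action set a standard single-agent planner can search over."""
            joint = []
            for agent, actions in agent_actions.items():
                for act in actions:
                    joint.append(replace(act, name=f"{agent}__{act.name}"))
            return joint

        # Usage: two agents sharing a goal; the agent tag in each action name
        # records which agent executes that step of the resulting joint plan.
        move = Action("move_a_b", frozenset({"at_a"}), frozenset({"at_b"}), frozenset({"at_a"}))
        joint = merge_collaborative_domains({"robot1": [move], "robot2": [move]})
        print([a.name for a in joint])  # ['robot1__move_a_b', 'robot2__move_a_b']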

    Active Classification: Theory and Application to Underwater Inspection

    We discuss the problem in which an autonomous vehicle must classify an object based on multiple views. We focus on the active classification setting, where the vehicle controls which views to select to best perform the classification. The problem is formulated as an extension to Bayesian active learning, and we show connections to recent theoretical guarantees in this area. We formally analyze the benefit of acting adaptively as new information becomes available. The analysis leads to a probabilistic algorithm for determining the best views to observe based on information-theoretic costs. We validate our approach in two ways, both related to underwater inspection: 3D polyhedra recognition in synthetic depth maps and ship hull inspection with imaging sonar. These tasks encompass both the planning and recognition aspects of the active classification problem. The results demonstrate that actively planning for informative views can reduce the number of necessary views by up to 80% when compared to passive methods.
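    A hedged sketch of the information-theoretic view selection such an approach relies on (invented names, not the paper's implementation): greedily pick the view whose observation is expected to reduce class uncertainty the most per unit cost.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def best_view(prior, likelihoods, costs):
            """prior: (C,) class probabilities.
            likelihoods: dict view -> (C, O) array of P(observation | class, view).
            costs: dict view -> cost of acquiring that view."""
            best, best_score = None, -np.inf
            for view, L in likelihoods.items():
                p_obs = prior @ L                      # (O,) marginal over observations
                exp_post_H = 0.0                       # expected posterior entropy
                for o, po in enumerate(p_obs):
                    if po == 0:
                        continue
                    posterior = prior * L[:, o] / po   # Bayes update for this outcome
                    exp_post_H += po * entropy(posterior)
                info_gain = entropy(prior) - exp_post_H
                score = info_gain / costs[view]
                if score > best_score:
                    best, best_score = view, score
            return best

        # Usage: two classes, two candidate views; "side" is the more informative view.
        prior = np.array([0.5, 0.5])
        likelihoods = {"top":  np.array([[0.6, 0.4], [0.5, 0.5]]),
                       "side": np.array([[0.9, 0.1], [0.2, 0.8]])}
        print(best_view(prior, likelihoods, {"top": 1.0, "side": 1.0}))  # -> side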

    Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning

    In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop, where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents not only to leverage existing strategies for handling model differences but also to exhibit novel behaviors that are generated through the combination of these different strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how we can leverage classical planning compilations for epistemic planning to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to decision-making in the presence of diverging user expectations that is amenable to a classical planning compilation while successfully combining previous works on explanation and explicability. We empirically show how our approach provides a computational advantage over existing approximate approaches that unnecessarily try to search in the space of models while also failing to facilitate the full gamut of behaviors enabled by our framework.
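    Purely as an illustration of the two-model bookkeeping such a formulation implies (the data structures and costs below are assumptions, not the paper's compilation), a plan can be checked against the agent's own model for executability while charging an explanation cost whenever a step would not be expected under the human's model.

        def expectation_aware_cost(plan, agent_model, human_model, init_state, explain_cost=1.0):
            """Each model maps action name -> (preconditions, add, delete) over sets of facts."""
            total, agent_state, human_state = 0.0, set(init_state), set(init_state)
            for act in plan:
                pre, add, delete = agent_model[act]
                if not pre <= agent_state:
                    return float("inf")             # not executable in the agent's own model
                agent_state = (agent_state - delete) | add
                if act in human_model and human_model[act][0] <= human_state:
                    _, h_add, h_del = human_model[act]
                    human_state = (human_state - h_del) | h_add
                else:
                    total += explain_cost           # step is inexplicable without an explanation
                    human_state = set(agent_state)  # assume the explanation syncs the human's view
                total += 1.0                        # unit action cost
            return total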