
    A belief-desire-intention architecture with a logic-based planner for agents in stochastic domains

    This dissertation investigates high-level decision making for agents that are both goal- and utility-driven. We develop a partially observable Markov decision process (POMDP) planner which is an extension of an agent programming language called DTGolog, itself an extension of the Golog language. Golog is based on a logic for reasoning about action, the situation calculus. A POMDP planner on its own cannot cope well with dynamically changing environments and complicated goals. This is exactly a strength of the belief-desire-intention (BDI) model: BDI theory has been developed to design agents that can select goals intelligently, dynamically abandon and adopt new goals, and yet commit to intentions for achieving goals. The contribution of this research is twofold: (1) developing a relational POMDP planner for cognitive robotics, and (2) specifying a preliminary BDI architecture that can deal with stochasticity in action and perception by employing the planner.
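
    To make the combination concrete, here is a minimal Python sketch of a BDI-style deliberation cycle that delegates action selection to a decision-theoretic planner over a belief state. It is an illustration only, not the dissertation's DTGolog-based planner; all names (plan, bdi_step, the toy domain) and the one-step lookahead are assumptions.

```python
ACTIONS = ["fetch", "charge", "wait"]

def expected_utility(action, belief, utility):
    # belief maps state -> probability; utility(state, action) -> float
    return sum(p * utility(state, action) for state, p in belief.items())

def plan(belief, utility):
    # One-step lookahead stand-in for a POMDP/decision-theoretic planner.
    return max(ACTIONS, key=lambda a: expected_utility(a, belief, utility))

def bdi_step(belief, desires, intentions, utility):
    # One deliberation cycle: adopt a goal, plan toward it, reconsider.
    if not intentions and desires:
        intentions.append(desires.pop(0))            # goal adoption
    action = plan(belief, utility)                   # means-ends reasoning
    intentions[:] = [g for g in intentions
                     if belief.get(g, 0.0) < 0.9]    # drop goals believed achieved
    return action

# Toy domain: states double as goal labels for simplicity.
belief = {"holding_item": 0.2, "battery_low": 0.7}
utility = lambda s, a: 1.0 if (s, a) == ("battery_low", "charge") else 0.0
print(bdi_step(belief, ["holding_item"], [], utility))   # -> charge
```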

    Decision-Theoretic Planning with Linguistic Terms in GOLOG

    In this paper we propose an extension of the action language GOLOG that integrates linguistic terms in non-deterministic argument choices and in the reward function for decision-theoretic planning. It is often cumbersome to specify the set of values to pick from in the non-deterministic-choice-of-argument statement. Also, specifying a reward function is not always easy, even for domain experts. Instead of providing a finite domain for values in the non-deterministic-choice-of-argument statement in GOLOG, we now allow the argument domain to be stated simply as a formula over linguistic terms and fuzzy fluents. In GOLOG's forward-search DT planning algorithm, these formulas are evaluated in order to find the agent's optimal policy. We illustrate this in the Diner Domain, where the agent needs to calculate the optimal serving order.
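
    A rough Python sketch of the idea (not the paper's GOLOG implementation): the argument domain of a non-deterministic choice is stated as a formula over linguistic terms evaluated against fuzzy fluents, and the reward is expressed over the same terms. The membership functions, the threshold, and the Diner-style orders below are illustrative assumptions.

```python
# Fuzzy membership of linguistic terms over fluents (illustrative shapes).
def mu_hot(temp_c):
    return max(0.0, min(1.0, (temp_c - 40.0) / 20.0))

def mu_urgent(wait_min):
    return max(0.0, min(1.0, wait_min / 10.0))

def reward(order):
    # Reward stated over linguistic terms: serve orders that are hot AND urgent.
    return min(mu_hot(order["temp"]), mu_urgent(order["wait"]))  # fuzzy AND

def pick_argument(candidates, formula, threshold=0.5):
    # Non-deterministic argument choice: instead of a finite value list, keep
    # candidates that satisfy the linguistic formula above a threshold and
    # return the best-graded one.
    graded = [(formula(c), c) for c in candidates]
    graded = [(g, c) for g, c in graded if g > threshold]
    return max(graded, key=lambda gc: gc[0])[1] if graded else None

# Diner-style example: choose which order to serve next.
orders = [{"id": 1, "temp": 65, "wait": 8}, {"id": 2, "temp": 35, "wait": 12}]
print(pick_argument(orders, reward))  # -> order 1 (hot and fairly urgent)
```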

    Probabilistic Projection and Belief Update in the pGOLOG Framework

    High-level controllers that operate robots in dynamic, uncertain domains are concerned with two reasoning tasks dealing with the effects of noisy sensors and effectors. They must be able to (a) project the outcome of a candidate plan and (b) update their belief during execution. In this paper, we show how both tasks can be achieved within the pGOLOG framework [9]. Our approach relies on the idea of characterizing the robot's sensors and effectors as programs written in the probabilistic action language pGOLOG. We are then able to reason about the interaction of the high-level controller and the sensors and effectors through simulation of the concurrent execution of the high-level plan and the pGOLOG model of the sensors and effectors.
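
    The projection task can be pictured with a small Monte Carlo sketch in Python: the noisy effector is modelled as a probabilistic program and the plan's outcome is projected by repeatedly simulating plan and model together. The action model, its success probability, and the sample count are assumptions, not the pGOLOG semantics.

```python
import random

def noisy_pickup(state, rng):
    # Effector model: pick_up succeeds with probability 0.8 (assumed value).
    if rng.random() < 0.8:
        state["holding"] = True
    return state

def project(plan, n_samples=10_000, seed=0):
    # Estimate the probability that executing the plan leaves the item held,
    # by simulating the plan against the probabilistic effector model.
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        state = {"holding": False}
        for step in plan:
            state = step(state, rng)
        successes += state["holding"]
    return successes / n_samples

# Projecting a single pickup vs. a retry plan: retrying raises the estimate.
print(project([noisy_pickup]))                # roughly 0.80
print(project([noisy_pickup, noisy_pickup]))  # roughly 0.96
```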

    Belief Update in the pGOLOG Framework

    High-level controllers that operate robots in dynamic, uncertain domains are concerned with at least two reasoning tasks dealing with the effects of noisy sensors and effectors: they must (a) project the effects of a candidate plan and (b) update their beliefs during on-line execution of a plan. In this paper, we show how the pGOLOG framework, which in its original form only accounted for the projection of high-level plans, can be extended to reason about the way the robot's beliefs evolve during the on-line execution of a plan. pGOLOG, an extension of the high-level programming language GOLOG, allows the specification of probabilistic beliefs about the state of the world and the representation of sensors and effectors which have uncertain, probabilistic outcomes. As an application of belief update, we introduce belief-based programs, GOLOG-style programs whose tests appeal to the agent's beliefs at execution time.
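
    A hypothetical Python sketch of the belief-update idea: a discrete belief over world states is revised by Bayes' rule after a noisy sensing action, and a belief-based program then branches on the resulting degree of belief at execution time. The door domain and the sensor accuracy are illustrative assumptions, not taken from the paper.

```python
def update_belief(belief, observation, likelihood):
    # Bayes update: belief[s] is proportional to P(observation | s) * belief[s].
    posterior = {s: likelihood(observation, s) * p for s, p in belief.items()}
    norm = sum(posterior.values())
    return {s: p / norm for s, p in posterior.items()}

def sense_door_likelihood(obs, state):
    # Noisy door sensor: reports the true state with probability 0.9.
    return 0.9 if obs == state else 0.1

# Start maximally uncertain, then observe "open" twice.
belief = {"open": 0.5, "closed": 0.5}
for obs in ["open", "open"]:
    belief = update_belief(belief, obs, sense_door_likelihood)

# Belief-based program: the test appeals to the agent's belief at run time.
if belief["open"] > 0.9:
    print("drive_through_door")
else:
    print("sense_door_again")
```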