A Stochastic Belief Change Framework with an Observation Stream and Defaults as Expired Observations
Abstract. A framework is proposed for an agent to change its probabilistic beliefs after receiving a stream of noisy observations. Observations that are no longer relevant become default assumptions until overridden by newer, more prevalent observations. A distinction is made between background and foreground beliefs. Agent actions and environment events are distinguishable and form part of the agent model. It is left to the agent designer to provide an environment model, a submodel of the agent model. An example of an environment model is provided in the paper, and an example scenario is based on it. Given the particular form of the agent model, several 'patterns of cognition' can be identified, and an argument is made for four particular patterns.
A belief-desire-intention architecture with a logic-based planner for agents in stochastic domains
This dissertation investigates high-level decision making for agents that are both goal and utility
driven. We develop a partially observable Markov decision process (POMDP) planner which
is an extension of an agent programming language called DTGolog, itself an extension of the
Golog language. Golog is based on a logic for reasoning about action—the situation calculus.
A POMDP planner on its own cannot cope well with dynamically changing environments
and complicated goals. This is exactly a strength of the belief-desire-intention (BDI) model:
BDI theory has been developed to design agents that can select goals intelligently, dynamically
abandon goals and adopt new ones, and yet commit to intentions for achieving goals. The contribution
of this research is twofold: (1) developing a relational POMDP planner for cognitive
robotics, (2) specifying a preliminary BDI architecture that can deal with stochasticity in action
and perception, by employing the planner.
M.Sc. (Computer Science)
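The architecture described above can be pictured as a sense-deliberate-plan-act cycle in which the POMDP planner is called to serve the currently committed intention. The following is a minimal sketch of such a cycle, assuming hypothetical `revise`, `deliberate`, and `plan` functions supplied by the agent designer; none of these names come from the dissertation.

```python
# Hedged sketch of one step of a BDI control loop in which a planner
# (e.g. a POMDP planner, as in the dissertation) turns the top intention
# into the next action. All function and variable names are illustrative.

def bdi_step(beliefs, goals, intentions, percept, revise, deliberate, plan):
    """Run one sense-deliberate-plan cycle and return the chosen action."""
    beliefs = revise(beliefs, percept)                # revise beliefs from a (noisy) percept
    goals, intentions = deliberate(beliefs, goals, intentions)  # adopt/drop goals, commit
    if intentions:
        action = plan(beliefs, intentions[0])         # planner serves the top intention
        return beliefs, goals, intentions, action
    return beliefs, goals, intentions, None

# Trivial stand-ins, just to show the loop running:
revise = lambda b, p: {**b, **p}
deliberate = lambda b, g, i: (g, [g[0]] if g else [])
plan = lambda b, intent: ("achieve", intent)

b, g, i, a = bdi_step({}, ["open_door"], [], {"door": "closed"},
                      revise, deliberate, plan)
```

The point of the BDI layer is precisely the middle line: goals may be dropped or adopted between planner calls, which a bare POMDP planner cannot do on its own.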
Formalisms for agents reasoning with stochastic actions and perceptions.
Ph.D. thesis, University of KwaZulu-Natal, Durban, 2014.
The thesis reports on the development of a sequence of logics (formal languages based on mathematical
logic) to deal with a class of uncertainty that agents may encounter. More accurately, the
logics are meant to be used for allowing robots or software agents to reason about the uncertainty
they have about the effects of their actions and the noisiness of their observations. The approach
is to take the well-established formalism called the partially observable Markov decision process
(POMDP) as an underlying formalism and then design a modal logic based on POMDP theory to
allow an agent to reason with a knowledge-base (including knowledge about the uncertainties).
First, three logics are designed, each one adding one or more important features for reasoning in
the class of domains of interest (i.e., domains where stochastic action and sensing are considered).
The final logic, called the Stochastic Decision Logic (SDL), combines the three logics into a coherent
formalism, adding three important notions for reasoning about stochastic decision-theoretic
domains: (i) representation of and reasoning about degrees of belief in a statement, given stochastic
knowledge, (ii) representation of and reasoning about the expected future rewards of a sequence
of actions and (iii) the progression or update of an agent’s epistemic, stochastic knowledge.
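Notion (iii), the progression of an agent's stochastic knowledge, corresponds to the standard Bayesian belief update of POMDP theory. The following is a minimal sketch of that update over a discrete state space; the toy door domain and all names are illustrative, not taken from the thesis.

```python
# Hedged sketch: Bayesian belief progression in a discrete POMDP.
# b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).

def update_belief(belief, action, observation, T, O):
    """Return the posterior belief after doing `action` and seeing `observation`."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predict the chance of landing in s2 under the stochastic action.
        predicted = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in states)
        # Weight by the likelihood of the (noisy) observation in s2.
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * predicted
    norm = sum(new_belief.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / norm for s, p in new_belief.items()}

# Toy domain: a door is 'open' or 'closed'; 'push' opens a closed door with
# probability 0.8; the sensor reports the true state with probability 0.9.
T = {
    ("closed", "push"): {"open": 0.8, "closed": 0.2},
    ("open", "push"): {"open": 1.0},
}
O = {
    ("open", "push"): {"see_open": 0.9, "see_closed": 0.1},
    ("closed", "push"): {"see_open": 0.1, "see_closed": 0.9},
}

b0 = {"open": 0.5, "closed": 0.5}
b1 = update_belief(b0, "push", "see_open", T, O)  # belief in 'open' rises sharply
```

The thesis's contribution is to make statements about such degrees of belief (e.g. that the agent's belief in 'open' exceeds some threshold) objects of logical entailment, rather than quantities computed only numerically as here.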
For all the logics developed in this thesis, entailment is defined, that is, whether a sentence logically
follows from a knowledge-base. Decision procedures for determining entailment are developed,
and they are all proved sound, complete and terminating. The decision procedures all
employ tableau calculi to deal with the traditional logical aspects, and systems of equations and
inequalities to deal with the probabilistic aspects.
Besides promoting the compact representation of POMDP models, and the power that logic brings
to the automation of reasoning, the Stochastic Decision Logic is novel and significant in that it
allows the agent to determine whether or not a set of sentences is entailed by an arbitrarily precise
specification of a POMDP model, where this is not possible with standard POMDPs.
The research conducted for this thesis has resulted in several publications and has been presented
at several workshops, symposia and conferences.
Decision Theory, the Situation Calculus, and Conditional Plans
This paper shows how to combine decision theory and logical representations of actions in a manner that seems natural for both. In particular, we assume an axiomatization of the domain in terms of the situation calculus, using what is essentially Reiter's solution to the frame problem, in terms of the completion of the axioms defining state change. Uncertainty is handled in terms of the independent choice logic, which allows for independent choices and a logic program that gives the consequences of the choices. The consequences include a specification of the utility of (final) states, and of how (possibly noisy) sensors depend on the state. The robot adopts conditional plans, similar to the GOLOG programming language. Within this logic, we can define the expected utility of a conditional plan, based on the axiomatization of the actions, the sensors and the utility. Sensors can be noisy and actions can be stochastic. The planning problem is to find the plan with the highest expected utility. This representation is related to recent structured representations for partially observable Markov decision processes (POMDPs); here we use stochastic situation calculus rules to specify the state transition function and the reward/value function. Finally, we show that with stochastic frame axioms, action representations in probabilistic STRIPS are exponentially larger than with the representation proposed here.
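The central quantity of the paper, the expected utility of a conditional plan, can be evaluated by recursing over the plan's observation branches. The following is a hedged sketch under a discrete toy model; the plan representation and all names are illustrative, not the paper's notation.

```python
# Hedged sketch: expected utility of a conditional plan under stochastic
# actions and noisy sensing. A plan is either ("stop",) or
# ("do", action, {observation: subplan}).

def expected_utility(plan, belief, T, O, U):
    """Recursively evaluate a conditional plan from a belief state."""
    if plan[0] == "stop":
        return sum(belief[s] * U[s] for s in belief)  # utility of (final) states
    _, action, branches = plan
    # Predict the next-state distribution under the stochastic action.
    predicted = {}
    for s, p in belief.items():
        for s2, q in T[(s, action)].items():
            predicted[s2] = predicted.get(s2, 0.0) + p * q
    # Branch on each possible observation, weighted by its probability.
    total = 0.0
    for obs, subplan in branches.items():
        unnorm = {s2: O[(s2, action)].get(obs, 0.0) * p for s2, p in predicted.items()}
        p_obs = sum(unnorm.values())
        if p_obs == 0.0:
            continue
        posterior = {s2: w / p_obs for s2, w in unnorm.items()}
        total += p_obs * expected_utility(subplan, posterior, T, O, U)
    return total

# Toy door domain: 'push' opens a closed door with probability 0.8; the
# sensor reports the true state with probability 0.9; an open door is worth 10.
T = {
    ("closed", "push"): {"open": 0.8, "closed": 0.2},
    ("open", "push"): {"open": 1.0},
}
O = {
    ("open", "push"): {"see_open": 0.9, "see_closed": 0.1},
    ("closed", "push"): {"see_open": 0.1, "see_closed": 0.9},
}
U = {"open": 10.0, "closed": 0.0}

plan = ("do", "push", {"see_open": ("stop",), "see_closed": ("stop",)})
eu = expected_utility(plan, {"open": 0.5, "closed": 0.5}, T, O, U)
```

Planning, as the abstract states, then amounts to searching this plan space for the conditional plan whose expected utility is highest.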