Learning and Reasoning for Robot Sequential Decision Making under Uncertainty
Robots frequently face complex tasks that require more than one action, where
sequential decision-making (SDM) capabilities become necessary. The key
contribution of this work is a robot SDM framework, called LCORPP, that
supports the simultaneous capabilities of supervised learning for passive state
estimation, automated reasoning with declarative human knowledge, and planning
under uncertainty toward achieving long-term goals. In particular, we use a
hybrid reasoning paradigm to refine the state estimator, and provide
informative priors for the probabilistic planner. In experiments, a mobile
robot is tasked with estimating human intentions using their motion
trajectories, declarative contextual knowledge, and human-robot interaction
(dialog-based and motion-based). Results suggest that, in efficiency and
accuracy, our framework performs better than its no-learning and no-reasoning
counterparts in office environments.
Comment: In proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020
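The interplay LCORPP describes, declarative reasoning supplying informative priors that a learned state estimator refines, can be illustrated with a minimal Bayesian update. All numbers and the three intention categories below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical setup: three candidate human intentions.
# Declarative contextual reasoning supplies an informative prior,
# and a learned trajectory classifier supplies a likelihood.
prior = np.array([0.6, 0.3, 0.1])       # P(intention), from reasoning
likelihood = np.array([0.2, 0.7, 0.1])  # P(trajectory | intention), from learning

# Bayes rule: the probabilistic planner receives the posterior as its belief.
posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior.round(3))
```

Here the prior alone favors intention 0, but the trajectory evidence shifts the posterior toward intention 1, which is exactly the kind of correction a learned estimator provides on top of a knowledge-based prior.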
Counterfactual Explanations in Sequential Decision Making Under Uncertainty
Methods to find counterfactual explanations have predominantly focused on one-step decision-making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision-making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite-horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision-making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions, differing in at most k actions from the observed sequence, that could have led the observed process realization to a better outcome. Then, we introduce a polynomial-time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.
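The Gumbel-Max structural causal model the abstract relies on can be sketched as follows: a categorical state transition is generated as an argmax over log-probabilities plus Gumbel noise, and a counterfactual replays the same exogenous noise under a different action's dynamics. The transition probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_step(probs, gumbels):
    """Gumbel-Max trick: argmax(log p + g) is a sample from Categorical(p)."""
    return int(np.argmax(np.log(probs) + gumbels))

# Observed (factual) transition under the action actually taken.
p_taken = np.array([0.7, 0.2, 0.1])   # P(next state | s, a_taken)
g = rng.gumbel(size=3)                # exogenous noise of the SCM
factual_next = gumbel_max_step(p_taken, g)

# Counterfactual: hold the SAME noise fixed, swap in an alternative action.
p_alt = np.array([0.1, 0.2, 0.7])     # P(next state | s, a_alt)
counterfactual_next = gumbel_max_step(p_alt, g)
```

Fixing the noise is what makes the query counterfactual rather than interventional: it answers "what would this particular realization have done under the other action", which is the quantity the paper's dynamic program optimizes over.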
Sequential Decision Making in Repeated Coalition Formation under Uncertainty
The problem of coalition formation when agents are uncertain about the types or capabilities of their potential partners is a critical one. In [3], a Bayesian reinforcement learning framework is developed for this problem when coalitions are formed (and tasks undertaken) repeatedly: not only does the model allow agents to refine their beliefs about the types of others, but it also uses the value of information to define optimal exploration policies. However, the computational approximations in that work are purely myopic. We present novel, non-myopic learning algorithms to approximate the optimal Bayesian solution, providing tractable means to ensure good sequential performance. We evaluate our algorithms in a variety of settings, and show that one, in particular, exhibits consistently good sequential performance. Further, it enables the Bayesian agents to transfer acquired knowledge among different dynamic tasks.
Risk-Averse Decision-Making under Parametric Uncertainty
For sequential decision-making problems with potentially catastrophic consequences, appropriate risk assessment may be required. In contrast to traditional techniques for decision-making under uncertainty, which aim to maximise performance in expectation, we choose to focus on other properties of the probability distribution. For instance, in an application such as autonomous driving, the chance of causing an accident might be small yet fatal. A decision-maker who focuses on performance in the worst outcomes may be able to obtain a safer decision-making process by keeping this in mind. We propose frameworks for quantifying uncertainty under the reinforcement learning framework and develop algorithms that allow for risk-sensitive decision-making under uncertainty.
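One common way to formalize "performance in the worst outcomes" is Conditional Value-at-Risk (CVaR), the mean of the worst alpha-fraction of returns. The abstract does not name a specific criterion, so the following sketch, with made-up return samples, is just one illustration of a risk-averse objective:

```python
import numpy as np

def cvar(returns, alpha=0.1):
    # Conditional Value-at-Risk: mean of the worst alpha-fraction of outcomes.
    sorted_r = np.sort(returns)
    k = max(1, int(np.ceil(alpha * len(sorted_r))))
    return sorted_r[:k].mean()

# Two hypothetical policies with equal expected return but different tails.
safe = np.array([9.0, 10.0, 10.0, 11.0])
risky = np.array([0.0, 10.0, 10.0, 20.0])
assert abs(safe.mean() - risky.mean()) < 1e-9  # same expectation

print(cvar(safe), cvar(risky))  # the risk-averse criterion prefers `safe`
```

Both policies are indistinguishable to an expectation-maximizing agent, but CVaR separates them, which is the point of optimizing a tail property instead of the mean.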
Collective multiagent sequential decision making under uncertainty
National Research Foundation (NRF) Singapore under the Corp Lab @ University scheme
A Survey of Knowledge-based Sequential Decision Making under Uncertainty
Reasoning with declarative knowledge (RDK) and sequential decision-making
(SDM) are two key research areas in artificial intelligence. RDK methods reason
with declarative domain knowledge, including commonsense knowledge, that is
either provided a priori or acquired over time, while SDM methods
(probabilistic planning and reinforcement learning) seek to compute action
policies that maximize the expected cumulative utility over a time horizon;
both classes of methods reason in the presence of uncertainty. Despite the rich
literature in these two areas, researchers have not fully explored their
complementary strengths. In this paper, we survey algorithms that leverage RDK
methods while making sequential decisions under uncertainty. We discuss
significant developments, open problems, and directions for future work.
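The SDM objective the survey refers to, an action policy that maximizes expected cumulative utility over a horizon, can be sketched with value iteration on a toy MDP. All transition probabilities and rewards below are invented for illustration:

```python
import numpy as np

# Toy MDP: 2 states, 2 actions (numbers are illustrative only).
P = np.array([[[0.9, 0.1],    # P[a, s, s']: action 0
               [0.1, 0.9]],
              [[0.5, 0.5],    # action 1
               [0.5, 0.5]]])
R = np.array([[1.0, 0.0],     # R[a, s]: expected immediate reward
              [0.5, 0.5]])
gamma = 0.9                   # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)   # Q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] V[s']
    V = Q.max(axis=0)

policy = Q.argmax(axis=0)     # greedy policy w.r.t. the converged values
```

Probabilistic planning methods solve such models from a given (possibly knowledge-derived) specification, while reinforcement learning estimates the same quantities from experience; the survey's theme is combining RDK with either route.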