174 research outputs found
Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight
This paper studies the sample-efficiency of learning in Partially Observable
Markov Decision Processes (POMDPs), a challenging problem in reinforcement
learning that is known to be exponentially hard in the worst case. Motivated by
real-world settings such as loading in game playing, we propose an enhanced
feedback model called ``multiple observations in hindsight'', where after each
episode of interaction with the POMDP, the learner may collect multiple
additional observations emitted from the encountered latent states, but may not
observe the latent states themselves. We show that sample-efficient learning
under this feedback model is possible for two new subclasses of POMDPs:
\emph{multi-observation revealing POMDPs} and \emph{distinguishable POMDPs}.
Both subclasses generalize and substantially relax \emph{revealing POMDPs} -- a
widely studied subclass for which sample-efficient learning is possible under
standard trajectory feedback. Notably, distinguishable POMDPs only require the
emission distributions of different latent states to be \emph{different},
rather than \emph{linearly independent} as required in revealing POMDPs.
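The gap between the two conditions can be illustrated with a small numerical sketch (not from the paper; the emission matrix below is a made-up example). With 3 latent states but only 2 observations, the emission columns can never be linearly independent, so the POMDP is not revealing; yet the columns can still be pairwise different, which is all that distinguishability asks for:

```python
import numpy as np

# Hypothetical emission matrix O[o, s] with 2 observations (rows) and
# 3 latent states (columns); each column is a state's emission distribution.
O = np.array([[0.5, 0.3, 0.8],
              [0.5, 0.7, 0.2]])

# Revealing POMDPs require the columns to be linearly independent.
# That is impossible here: three columns in a 2-dimensional space.
rank = np.linalg.matrix_rank(O)
print(rank)  # 2, less than the 3 latent states, so not revealing

# Distinguishable POMDPs only require the columns to be pairwise
# different, e.g. separated in total variation distance.
n = O.shape[1]
min_tv = min(0.5 * np.abs(O[:, i] - O[:, j]).sum()
             for i in range(n) for j in range(i + 1, n))
print(min_tv)  # 0.2, strictly positive: all emission distributions differ
```

So this toy POMDP is distinguishable but not revealing, matching the claim that distinguishability strictly relaxes the revealing condition.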
Provably Efficient UCB-type Algorithms For Learning Predictive State Representations
The general sequential decision-making problem, which includes Markov
decision processes (MDPs) and partially observable MDPs (POMDPs) as special
cases, aims at maximizing a cumulative reward by making a sequence of decisions
based on a history of observations and actions over time. Recent studies have
shown that the sequential decision-making problem is statistically learnable if
it admits a low-rank structure modeled by predictive state representations
(PSRs). Despite these advancements, existing approaches typically involve
oracles or steps that are not computationally efficient. On the other hand, the
upper confidence bound (UCB) based approaches, which have served successfully
as computationally efficient methods in bandits and MDPs, have not been
investigated for more general PSRs, due to the difficulty of optimistic bonus
design in these more challenging settings. This paper proposes the first known
UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the
total variation distance between the estimated and true models. We further
characterize the sample complexity bounds for our designed UCB-type algorithms
for both online and offline PSRs. In contrast to existing approaches for PSRs,
our UCB-type algorithms enjoy computational efficiency, a last-iterate
guarantee of a near-optimal policy, and guaranteed model accuracy.
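The core quantity in the bonus described above is the total variation distance between the estimated and true models. A minimal sketch of that quantity on discrete next-observation distributions (the distributions and the scaling constant `H` below are illustrative assumptions, not the paper's actual construction):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Hypothetical true vs. estimated next-observation distributions.
p_true = np.array([0.60, 0.30, 0.10])
p_hat  = np.array([0.50, 0.35, 0.15])

tv = total_variation(p_true, p_hat)
print(tv)  # 0.1

# A UCB-style optimistic value adds a bonus that upper-bounds this
# model error; here we scale by an assumed horizon/reward range H.
H = 10.0
bonus = H * tv
```

Because the bonus dominates the model-estimation error, the optimistic value estimate stays above the true value, which is what drives the exploration guarantees sketched in the abstract.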
- …