An important step in the design of autonomous systems is to evaluate the
probability that a failure will occur. In safety-critical domains, the failure
probability is extremely small, which makes evaluating a policy through Monte
Carlo sampling inefficient. Adaptive importance sampling approaches have
been developed for rare event estimation but do not scale well to sequential
systems with long horizons. In this work, we develop two adaptive importance
sampling algorithms that can efficiently estimate the probability of rare
events for sequential decision making systems. The basis for these algorithms
is the minimization of the Kullback-Leibler divergence between a
state-dependent proposal distribution and a target distribution over
trajectories, but the resulting algorithms resemble policy gradient and
value-based reinforcement learning. We apply multiple importance sampling to
reduce the variance of our estimate and to address the issue of multi-modality
in the optimal proposal distribution. We demonstrate our approach on a control
task with both continuous and discrete action spaces and show accuracy
improvements over several baselines.
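As an illustrative sketch (notation introduced here, not taken from the abstract): writing p(τ) for the nominal trajectory distribution, F for the failure event, and q_θ(τ) for the trajectory distribution induced by the state-dependent proposal, the objective described above can be written as

  \min_\theta \; D_{\mathrm{KL}}\big(p^*(\tau) \,\|\, q_\theta(\tau)\big),
  \qquad
  p^*(\tau) \;=\; \frac{p(\tau)\,\mathbf{1}\{\tau \in F\}}{\mathbb{P}(F)},

which, after dropping the constant normalizer \mathbb{P}(F), is equivalent to the cross-entropy-style maximization

  \max_\theta \; \mathbb{E}_{\tau \sim q}\!\left[\frac{p(\tau)\,\mathbf{1}\{\tau \in F\}}{q(\tau)}\,\log q_\theta(\tau)\right],

whose sample-based gradient takes a weighted policy-gradient form when q_θ factors into state-dependent proposals along the trajectory.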