Modelers use automatic differentiation (AD) of computation graphs to
implement complex Deep Learning models without manually defining gradient
computations.
Stochastic AD extends AD to stochastic computation graphs with sampling steps,
which arise when modelers handle the intractable expectations common in
Reinforcement Learning and Variational Inference. However, current methods for
stochastic AD are limited: they either apply only to continuous random
variables and differentiable functions, or are restricted to simple but
high-variance score-function estimators. To overcome these limitations, we introduce
Storchastic, a new framework for AD of stochastic computation graphs.
Storchastic allows the modeler to choose from a wide variety of gradient
estimation methods at each sampling step, to optimally reduce the variance of
the gradient estimates. Furthermore, Storchastic is provably unbiased for
estimation of any-order gradients, and generalizes variance reduction
techniques to higher-order gradient estimates. Finally, we implement
Storchastic as a PyTorch library at https://github.com/HEmile/storchastic.
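To make concrete what the score-function (REINFORCE) estimators above refer to, the following is a minimal PyTorch sketch; the toy cost f, the Bernoulli parameterization, and all variable names are illustrative assumptions, not code from the Storchastic library:

    import torch

    # Goal: estimate d/dtheta of E_{x ~ Bernoulli(theta)}[f(x)].
    # f is treated as a black box: its derivative w.r.t. x is never used,
    # so pathwise (reparameterization) gradients are not required.
    theta = torch.tensor(0.3, requires_grad=True)

    def f(x):
        # Illustrative toy cost on the discrete samples.
        return (x - 0.7) ** 2

    dist = torch.distributions.Bernoulli(probs=theta)
    x = dist.sample((1000,))     # sampling step: gradients do not flow through
    log_prob = dist.log_prob(x)  # differentiable in theta

    # Score-function surrogate: backpropagating through it yields an unbiased
    # but typically high-variance estimate of the true gradient.
    surrogate = (f(x).detach() * log_prob).mean()
    surrogate.backward()
    print(theta.grad)  # true gradient here is 0.09 - 0.49 = -0.4

Subtracting a baseline from f(x) in the surrogate reduces the variance of this estimate without introducing bias; generalizing such variance reduction, together with a free choice of estimator at each sampling step, to any-order gradients is what Storchastic provides.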