We present a pseudo-reversible normalizing flow method for efficiently
generating samples of the state of a stochastic differential equation (SDE)
with different initial distributions. The primary objective is to construct an
accurate and efficient sampler that can be used as a surrogate model for
computationally expensive numerical integration of SDEs, such as those employed
in particle simulations. After training, the normalizing flow model can directly
generate samples of the SDE's final state without simulating trajectories.
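For concreteness (the notation below is illustrative and not fixed by the abstract), the SDE can be taken in the generic Itô form
\[
\mathrm{d}X_t = a(X_t, t)\,\mathrm{d}t + b(X_t, t)\,\mathrm{d}W_t,
\qquad X_0 \sim \mu_0, \quad t \in [0, T],
\]
with drift $a$, diffusion $b$, Brownian motion $W_t$, and terminal time $T$; the trained flow acts as a surrogate that maps an initial state $X_0$ directly to a sample of the final state $X_T$, bypassing step-by-step time integration.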
Existing normalizing flows for SDEs depend on the initial distribution, so the
model must be re-trained whenever the initial distribution changes. The main
novelty of our normalizing flow model is that it learns the conditional
distribution of the state, i.e., the distribution of the final state
conditioned on any initial state, so the model needs to be trained only once
and can then be used to handle a variety of initial distributions.
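In symbols (a sketch, assuming the flow approximates the transition density), once the model learns the conditional density $p_{X_T \mid X_0}(y \mid x)$, the final-state density for any initial density $p_{X_0}$ follows from the mixture
\[
p_{X_T}(y) = \int p_{X_T \mid X_0}(y \mid x)\, p_{X_0}(x)\,\mathrm{d}x,
\]
so sampling reduces to drawing $x \sim p_{X_0}$ and pushing it through the trained flow, with no retraining required when $p_{X_0}$ changes.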
This feature can yield significant computational savings in studies of how the
final state varies with the initial distribution. We provide a rigorous
convergence analysis of the pseudo-reversible normalizing flow model to the
target probability density function with respect to the Kullback-Leibler divergence.
Numerical experiments are provided to demonstrate the effectiveness of the
proposed normalizing flow model.