We present a method to predict image deformations based on patch-wise image
appearance. Specifically, we design a patch-based deep encoder-decoder network
that learns the pixel/voxel-wise mapping between image appearance and
registration parameters. While our approach can predict general deformation
parameterizations, we focus on the large deformation diffeomorphic metric
mapping (LDDMM) registration model.
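For reference, the standard LDDMM shooting formulation behind this momentum parameterization can be sketched as follows; the notation (smoothing kernel $K$, noise weight $\sigma$, moving and target images $I_0$, $I_1$) follows common LDDMM conventions and is an assumption on our part, not fixed by this abstract:

```latex
% LDDMM shooting energy (standard formulation; notation assumed):
% the deformation is fully determined by the initial momentum m_0,
% with the velocity obtained by smoothing the momentum, v_t = K m_t.
E(m_0) = \langle m_0, K m_0 \rangle
       + \frac{1}{\sigma^2}\,\bigl\| I_0 \circ \Phi_1^{-1} - I_1 \bigr\|^2,
\quad \text{s.t.} \quad
\partial_t m_t + \operatorname{ad}^*_{v_t} m_t = 0, \qquad v_t = K m_t.
```

Predicting $m_0$ rather than the deformation itself is what preserves diffeomorphisms: integrating a sufficiently smooth velocity field over time always yields a diffeomorphic map.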
By predicting the LDDMM momentum parameterization, we retain the desirable
theoretical properties of LDDMM while reducing computation time by orders of
magnitude: combined with patch pruning, we achieve a 1500x/66x speedup over
GPU-based optimization for 2D/3D image registration.
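A minimal sketch, in PyTorch, of the kind of patch-based encoder-decoder and background pruning described above; the 2D layer layout, channel widths, and the variance-based pruning rule are illustrative assumptions, not the paper's exact architecture:

```python
# Illustrative sketch (2D case; layer widths and pruning rule are assumptions).
import torch
import torch.nn as nn

class MomentumPredictor(nn.Module):
    """Patch-wise encoder-decoder: maps a pair of image patches
    (moving, target) to a momentum patch of the same spatial size."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),  # 2 momentum components in 2D
        )

    def forward(self, moving_patch, target_patch):
        x = torch.cat([moving_patch, target_patch], dim=1)  # (N, 2, H, W)
        return self.decoder(self.encoder(x))                # (N, 2, H, W)

def keep_patch(moving_patch, target_patch, threshold=0.01):
    """Patch pruning (assumed rule): skip near-constant background patches,
    which carry no information to drive the registration."""
    return (moving_patch.std() > threshold) or (target_patch.std() > threshold)
```

For volumetric registration the convolutions would be 3D with three momentum output channels; pruning pays off because near-constant background patches can be skipped entirely.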
Our approach achieves better prediction accuracy than directly predicting
deformation or velocity fields, and results in diffeomorphic transformations.
Additionally, we create a Bayesian probabilistic
version of our network, which allows evaluation of deformation field
uncertainty through Monte Carlo sampling using dropout at test time. We show
that the estimated deformation uncertainty highlights areas of ambiguous
deformation.
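A minimal sketch of the Monte Carlo dropout evaluation mentioned above, assuming a network `net` that contains `nn.Dropout` layers (the function and variable names are ours): keeping dropout active at test time and summarizing repeated stochastic forward passes yields a mean prediction and a per-pixel/voxel variance as an uncertainty estimate.

```python
import torch

def mc_dropout_predict(net, moving_patch, target_patch, num_samples=50):
    """Monte Carlo dropout at test time: keep dropout layers stochastic,
    run several forward passes, and summarize them by mean and variance."""
    # train() is used here only to keep dropout active; if the model
    # contained batch norm, those layers would need to stay in eval mode.
    net.train()
    with torch.no_grad():
        samples = torch.stack([net(moving_patch, target_patch)
                               for _ in range(num_samples)])
    mean = samples.mean(dim=0)  # predicted momentum
    var = samples.var(dim=0)    # per-pixel/voxel uncertainty estimate
    return mean, var
```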
We test our method on the OASIS brain image dataset in 2D and 3D.