Implicit Neural Representations for Deformable Image Registration

Abstract

Deformable medical image registration has in recent years been revolutionized by the use of convolutional neural networks. These methods surpass conventional image registration techniques in speed, but not in accuracy. Here, we present an alternative approach to leveraging neural networks for image registration. Instead of using a convolutional neural network to predict the transformation between images, we optimize a multi-layer perceptron (MLP) to represent this transformation function. Using recent insights from differentiable rendering, we show how such an implicit deformable image registration (IDIR) model can be naturally combined with regularization terms based on standard automatic differentiation techniques. We demonstrate the effectiveness of this model on 4D chest CT registration in the DIR-LAB data set and find that a three-layer MLP with periodic activation functions outperforms all published deep learning-based results on this problem, without any folding and without the need for training data. The model is implemented using standard deep learning libraries and is flexible enough to be extended with different losses, regularizers, and optimization schemes.
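The core idea above can be illustrated with a minimal, hypothetical sketch: an MLP with periodic (sine) activations, in the style of SIREN networks, maps a coordinate x to a displacement u(x), so the deformation is φ(x) = x + u(x). The layer sizes, the ω₀ = 30 frequency factor, and the random (untrained) weights below are illustrative assumptions, not values from the paper; the actual model uses automatic differentiation in a deep learning framework, whereas here central finite differences stand in to check the Jacobian determinant of φ, whose positivity indicates a folding-free deformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D SIREN-style MLP: coordinate -> displacement.
# Weights are random placeholders (hypothetical, not trained values);
# the last layer is scaled down so the displacement stays small and smooth.
W1 = rng.uniform(-1.0, 1.0, (2, 32)) / 2
b1 = np.zeros(32)
W2 = rng.uniform(-np.sqrt(6 / 32), np.sqrt(6 / 32), (32, 32)) / 30
b2 = np.zeros(32)
W3 = rng.uniform(-np.sqrt(6 / 32), np.sqrt(6 / 32), (32, 2)) * 1e-3
b3 = np.zeros(2)
OMEGA = 30.0  # frequency factor of the periodic activations

def displacement(x):
    """Evaluate the displacement field u(x) for coordinates x of shape (N, 2)."""
    h = np.sin(OMEGA * (x @ W1 + b1))
    h = np.sin(OMEGA * (h @ W2 + b2))
    return h @ W3 + b3

def jacobian_det(x, eps=1e-4):
    """Determinant of d(phi)/dx with phi(x) = x + u(x), via central
    finite differences; values <= 0 would indicate folding."""
    n = x.shape[0]
    J = np.zeros((n, 2, 2))
    for d in range(2):
        step = np.zeros(2)
        step[d] = eps
        du = (displacement(x + step) - displacement(x - step)) / (2 * eps)
        J[:, :, d] = du          # column d: derivative of u w.r.t. x_d
        J[:, d, d] += 1.0        # identity part of phi(x) = x + u(x)
    return np.linalg.det(J)

pts = rng.uniform(0.0, 1.0, (100, 2))
dets = jacobian_det(pts)
print("folding-free:", bool(np.all(dets > 0)))
```

In the paper's setting, a regularizer such as a penalty on the Jacobian determinant or on bending energy would be computed with automatic differentiation rather than finite differences, and the network weights would be optimized per image pair against a similarity loss.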
