Combining the strengths of model-based iterative algorithms and data-driven
deep learning solutions, deep unrolling networks (DuNets) have become a popular
tool to solve inverse imaging problems. While DuNets have been successfully
applied to many linear inverse problems, nonlinear problems tend to impair
their performance. Inspired by momentum acceleration techniques that
are often used in optimization algorithms, we propose a recurrent momentum
acceleration (RMA) framework that uses a long short-term memory recurrent
neural network (LSTM-RNN) to simulate the momentum acceleration process. The
RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge
from previous gradients. We apply RMA to two popular DuNets: the learned
proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods,
resulting in LPGD-RMA and LPD-RMA, respectively. We provide experimental results
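To fix ideas, the classical momentum (heavy-ball) mechanism that motivates RMA can be sketched as below. This is a minimal illustration on a toy quadratic, not the paper's method: in the RMA scheme the fixed momentum recursion would be replaced by a learned LSTM-RNN update acting on the gradient history, and all names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Heavy-ball (momentum) gradient descent on a toy quadratic
# f(x) = 0.5 * x^T A x - b^T x, whose unique minimizer is x* = A^{-1} b.
A = np.array([[3.0, 0.0], [0.0, 1.0]])  # SPD matrix
b = np.array([3.0, 1.0])                # chosen so that x* = [1, 1]

def grad(x):
    return A @ x - b

x = np.zeros(2)
v = np.zeros(2)        # momentum (velocity) state; in RMA this role is played
                       # by the LSTM-RNN hidden/cell state instead
step, beta = 0.2, 0.8  # step size and momentum coefficient (illustrative)

for _ in range(200):
    # Fixed-rule momentum update; RMA would learn this map from past gradients.
    v = beta * v - step * grad(x)
    x = x + v

print(np.round(x, 4))  # converges to the minimizer [1, 1]
```

The point of the sketch is the recursion on `v`: it accumulates information from previous gradients with a fixed exponential weighting, which is exactly the hand-crafted memory that an LSTM-RNN can generalize by learning what to retain.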
on two nonlinear inverse problems: a nonlinear deconvolution problem, and an
electrical impedance tomography problem with limited boundary measurements. In
the first experiment we observe that the improvement due to RMA grows with the
degree of nonlinearity of the problem. The results of the
second example further demonstrate that the RMA schemes can significantly
improve the performance of DuNets in strongly ill-posed problems.