Fourier-Domain Inversion for the Modulo Radon Transform
Inspired by the multiple-exposure fusion approach in computational
photography, several practitioners have recently explored the idea of high
dynamic range (HDR) X-ray imaging and tomography. While establishing promising
results, these approaches inherit the limitations of the multiple-exposure
fusion strategy. To overcome these disadvantages, the modulo Radon transform
(MRT) has been proposed. The MRT is based on a co-design of hardware and
algorithms. In the hardware step, Radon transform projections are folded using
modulo non-linearities. Thereafter, recovery is performed by algorithmically
inverting the folding, thus enabling a single-shot, HDR approach to tomography.
The first steps in this topic established a rigorous mathematical treatment of
the problem of reconstruction from folded projections. This paper takes a step
forward by proposing a new, Fourier-domain recovery algorithm that is backed by
mathematical guarantees. The advantages include recovery at lower sampling
rates while remaining agnostic to the modulo threshold, lower computational
complexity, and empirical robustness to system noise. Beyond numerical
simulations, we use prototype modulo-ADC-based hardware experiments to validate
our claims. In particular, we report image recovery based on hardware
measurements up to 10 times larger than the sensor's dynamic range while
benefiting from lower quantization noise (12 dB).
Comment: 12 pages, submitted for possible publication
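
To make the folding model concrete, here is a minimal Python sketch of modulo
folding and of a simple sequential unfolding baseline. It is not the
Fourier-domain algorithm proposed in the paper; the threshold, the test
profile, and the assumption that successive samples differ by less than the
threshold are all illustrative.

    import numpy as np

    def fold(x, lam):
        # Centered modulo non-linearity: maps any value into [-lam, lam).
        return np.mod(x + lam, 2 * lam) - lam

    def unfold(y, lam):
        # Sequential unfolding baseline: if successive samples of the true
        # signal differ by less than lam and the first sample is unfolded,
        # the re-folded first differences of y equal those of x, and a
        # cumulative sum recovers x exactly.
        d = fold(np.diff(y), lam)
        return np.concatenate(([y[0]], y[0] + np.cumsum(d)))

    # Hypothetical 1-D projection profile exceeding the dynamic range.
    t = np.linspace(0, 1, 512)
    x = 5.0 * np.exp(-((t - 0.5) ** 2) / 0.02)   # peak of 5 vs lam = 1
    lam = 1.0
    y = fold(x, lam)                             # what a modulo ADC records
    print(np.max(np.abs(unfold(y, lam) - x)))    # ~1e-15, exact recovery

The Fourier-domain method in the paper avoids this sample-by-sample scheme,
which is what enables the lower sampling rates and threshold-agnostic recovery
claimed above.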
Bayesian view on the training of invertible residual networks for solving linear inverse problems
Learning-based methods for inverse problems, adapting to the data's inherent
structure, have become ubiquitous in the last decade. Besides empirical
investigations of their often remarkable performance, an increasing number of
works addresses the issue of theoretical guarantees. Recently, [3] exploited
invertible residual networks (iResNets) to learn provably convergent
regularizations given reasonable assumptions. They enforced these guarantees by
approximating the linear forward operator with an iResNet. Supervised training
on relevant samples introduces data dependency into the approach. An open
question in this context is to what extent the data's inherent structure
influences the training outcome, i.e., the learned reconstruction scheme. Here
we address this delicate interplay of training design and data dependency from
a Bayesian perspective and shed light on opportunities and limitations. We
resolve these limitations by analyzing reconstruction-based training of the
inverses of iResNets, where we show that this optimization strategy introduces
a level of data-dependency that cannot be achieved by approximation training.
We further provide and discuss a series of numerical experiments underpinning
and extending the theoretical findings.
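
For readers unfamiliar with iResNets, the sketch below illustrates the
invertibility mechanism the paper builds on: a residual block x + g(x) whose
residual part g has Lipschitz constant below one can be inverted by a
convergent fixed-point iteration. The weights, dimensions, and iteration count
are illustrative assumptions, not the trained operators analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Residual block f(x) = x + g(x). If Lip(g) < 1, f is invertible
    # (Banach fixed-point theorem), the defining iResNet property.
    W1 = rng.standard_normal((8, 8))
    W1 *= 0.4 / np.linalg.norm(W1, 2)   # rescale spectral norm to 0.4
    W2 = rng.standard_normal((8, 8))
    W2 *= 0.4 / np.linalg.norm(W2, 2)

    def g(x):
        # tanh is 1-Lipschitz, so Lip(g) <= ||W2|| * ||W1|| = 0.16 < 1
        return W2 @ np.tanh(W1 @ x)

    def f(x):
        return x + g(x)

    def f_inverse(y, n_iter=50):
        # Fixed-point iteration x <- y - g(x); the error contracts by a
        # factor Lip(g) per step, so convergence is geometric.
        x = y.copy()
        for _ in range(n_iter):
            x = y - g(x)
        return x

    x = rng.standard_normal(8)
    print(np.max(np.abs(f_inverse(f(x)) - x)))  # ~1e-16

Approximation training fits f to the forward operator and then applies this
inverse at reconstruction time; the reconstruction-based training analyzed in
the paper instead optimizes the inverse map directly.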
Invertible residual networks in the context of regularization theory for linear inverse problems
Learned inverse problem solvers exhibit remarkable performance in
applications like image reconstruction tasks. These data-driven reconstruction
methods often follow a two-step scheme. First, one trains the reconstruction
scheme, which is typically neural-network-based, on a dataset. Second, one
applies the scheme to new measurements to obtain reconstructions. We follow
these steps but
parameterize the reconstruction scheme with invertible residual networks
(iResNets). We demonstrate that the invertibility enables investigating the
influence of the training and architecture choices on the resulting
reconstruction scheme. For example, assuming local approximation properties of
the network, we show that these schemes become convergent regularizations. In
addition, the investigations reveal a formal link to the linear regularization
theory of linear inverse problems and provide a nonlinear spectral
regularization for particular architecture classes. On the numerical side, we
investigate the local approximation property of selected trained architectures
and present a series of experiments on the MNIST dataset that underpin and
extend our theoretical findings.
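
The linear spectral regularization theory that the paper links iResNets to can
be illustrated with the classical Tikhonov filter applied in the singular value
decomposition of a forward operator. The operator, noise level, and
regularization parameters below are synthetic assumptions; the paper's
contribution is a learned, nonlinear analogue of this kind of filter.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic ill-posed forward operator A = U diag(s) V^T with
    # decaying singular values (a hypothetical stand-in for, e.g., a blur).
    n = 32
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = 0.8 ** np.arange(n)
    A = U @ np.diag(s) @ V.T

    x_true = V[:, :4] @ rng.standard_normal(4)   # lives in the stable modes
    y = A @ x_true + 1e-3 * rng.standard_normal(n)

    def tikhonov_reconstruction(y, alpha):
        # Linear spectral regularization: apply the filter
        # r_alpha(sigma) = sigma / (sigma^2 + alpha) to each singular mode.
        filt = s / (s ** 2 + alpha)
        return V @ (filt * (U.T @ y))

    for alpha in (1e-1, 1e-3, 1e-5):
        x_hat = tikhonov_reconstruction(y, alpha)
        print(alpha, np.linalg.norm(x_hat - x_true))

Choosing alpha trades data fidelity against stability; the convergence results
in the paper show that suitably trained iResNet schemes inherit exactly this
regularizing behavior.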