Fidelity Imposed Network Edit (FINE) for Solving Ill-Posed Image Reconstruction
Deep learning (DL) is increasingly used to solve ill-posed inverse problems
in imaging, such as reconstruction from noisy or incomplete data, as DL offers
advantages over explicit image feature extractions in defining the needed
prior. However, DL typically does not incorporate the precise physics of data
generation or data fidelity. Instead, DL networks are trained to output some
average response to an input. Consequently, DL image reconstruction contains
errors, and may perform poorly when the test data deviates significantly from
the training data, such as by containing new pathological features. To address
this lack of data fidelity in DL image reconstruction, we propose a novel
approach, which we call fidelity-imposed network edit (FINE). In FINE, the
weights of a pre-trained prior network are modified according to the physical
model on each test case. Our experiments demonstrate that FINE can achieve
superior performance in two important inverse problems in neuroimaging:
quantitative susceptibility mapping (QSM) and under-sampled reconstruction in
MRI.
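The core FINE idea, updating a pre-trained network's weights on a single test case by descending a data-fidelity loss under the known forward model, can be sketched with a toy linear "network". All names, the linear model, and the orthonormal-row forward operator below are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): the "pre-trained prior network" is reduced to a
# linear map x = W @ z with a fixed latent code z, and the physical forward
# model A is given orthonormal rows so a simple step size is provably stable.
n, m, k = 8, 5, 6                                    # image, data, latent dims
A = np.linalg.qr(rng.standard_normal((n, m)))[0].T   # forward model, A A^T = I
x_true = rng.standard_normal(n)
y = A @ x_true                                       # the test-case measurement

z = rng.standard_normal(k)              # latent code fixed after "pre-training"
W = 0.1 * rng.standard_normal((n, k))   # pre-trained weights (random stand-in)

# FINE-style edit: modify the network weights on this single test case by
# descending the data-fidelity loss 0.5 * ||A W z - y||^2.
lr = 0.05
for _ in range(500):
    r = A @ (W @ z) - y                 # fidelity residual
    W -= lr * np.outer(A.T @ r, z)      # gradient of the fidelity loss in W

fidelity = np.linalg.norm(A @ (W @ z) - y)
print(fidelity)
```

After the edit, the network's output exactly explains the measured data; the prior learned during pre-training survives in the (unconstrained) null-space directions of A.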
Data-driven Seismic Waveform Inversion: A Study on the Robustness and Generalization
Acoustic- and elastic-waveform inversion is an important and widely used
method to reconstruct subsurface velocity images. Waveform inversion is a
typical non-linear and ill-posed inverse problem. Existing physics-driven
computational methods for waveform inversion suffer from cycle-skipping and
local-minima issues, and are moreover computationally expensive. In recent
years, data-driven methods have become a promising way to solve the waveform
inversion problem. However, most deep learning frameworks suffer from
generalization and over-fitting issues. In this paper, we develop a real-time
data-driven technique, which we call VelocityGAN, to accurately reconstruct
subsurface velocities. Our VelocityGAN
is built on a generative adversarial network (GAN) and trained end-to-end to
learn a mapping function from the raw seismic waveform data to the velocity
image. Different from other encoder-decoder based data-driven seismic waveform
inversion approaches, our VelocityGAN learns regularization from the data and
further imposes it on the generator so that inversion accuracy
is improved. We further develop a transfer learning strategy based on
VelocityGAN to alleviate the generalization issue. A series of experiments are
conducted on the synthetic seismic reflection data to evaluate the
effectiveness, efficiency, and generalization of VelocityGAN. We not only
compare it with existing physics-driven approaches and data-driven frameworks
but also conduct several transfer learning experiments. The experimental
results show that VelocityGAN achieves state-of-the-art performance among the
baselines and improves generalization to some extent.
Stable Architectures for Deep Neural Networks
Deep neural networks have become invaluable tools for supervised machine
learning, e.g., classification of text or images. While often offering superior
results over traditional techniques and successfully expressing complicated
patterns in data, deep architectures are known to be challenging to design and
train such that they generalize well to new data. Important issues with deep
architectures are numerical instabilities in derivative-based learning
algorithms commonly called exploding or vanishing gradients. In this paper we
propose new forward propagation techniques inspired by systems of Ordinary
Differential Equations (ODE) that overcome this challenge and lead to
well-posed learning problems for arbitrarily deep networks.
The backbone of our approach is our interpretation of deep learning as a
parameter estimation problem of nonlinear dynamical systems. Given this
formulation, we analyze stability and well-posedness of deep learning and use
this new understanding to develop new network architectures. We relate the
exploding and vanishing gradient phenomenon to the stability of the discrete
ODE and present several strategies for stabilizing deep learning for very deep
networks. While our new architectures restrict the solution space, several
numerical experiments show their competitiveness with state-of-the-art
networks.
Comment: 23 pages, 7 figures
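The ODE view above can be made concrete in a few lines: a residual layer Y ← Y + h·tanh(KY + b) is a forward-Euler step of dY/dt = tanh(K(t)Y + b(t)), and one stabilizing choice discussed in this line of work is an antisymmetric K, whose eigenvalues are purely imaginary so the linearized dynamics neither explode nor decay. The sketch below (hypothetical dimensions and weights) checks both properties numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6
M = rng.standard_normal((n, n))
K = 0.5 * (M - M.T)                   # antisymmetric weight matrix: K.T == -K
eigs = np.linalg.eigvals(K)
max_real = np.abs(eigs.real).max()    # should vanish up to round-off

def euler_layer(Y, K, b, h):
    """One forward-propagation step = one layer of the residual network."""
    return Y + h * np.tanh(K @ Y + b)

Y = rng.standard_normal(n)
b = np.zeros(n)
for _ in range(100):                  # a 100-layer network stays bounded
    Y = euler_layer(Y, K, b, h=0.1)

print(max_real, np.linalg.norm(Y))
```

Because tanh is bounded and the linearization is marginally stable, the forward state grows at most linearly in depth, avoiding exploding activations even at 100 layers.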
Joint Image Reconstruction and Segmentation Using the Potts Model
We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
on the gray levels nor on the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where we consider limited-data
situations in particular. For instance, our method is able to recover all
segments of the Shepp-Logan phantom from a limited number of angular views. We
illustrate the
practical applicability on a real PET dataset. As further applications, we
consider spherical Radon data as well as blurred data.
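To fix ideas, the Potts (piecewise-constant Mumford-Shah) model in one dimension penalizes the squared data misfit plus γ times the number of jumps, and admits an exact dynamic-programming solver; the paper's contribution is a splitting scheme for the much harder 2D/3D joint reconstruction setting, but the 1D solver below (a standard stand-in, not the paper's algorithm) shows the energy being minimized:

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact 1D Potts solver:  min_x  sum_i (x_i - y_i)^2 + gamma * #jumps(x),
    via dynamic programming with O(1) segment costs from prefix sums."""
    n = len(y)
    c1 = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    c2 = np.concatenate(([0.0], np.cumsum(y * y)))    # prefix sums of squares

    def dev(l, r):  # squared deviation of y[l..r] from its mean (inclusive)
        s, q, m = c1[r + 1] - c1[l], c2[r + 1] - c2[l], r - l + 1
        return q - s * s / m

    B = np.empty(n + 1)                # B[r] = optimal energy for y[:r]
    jump = np.empty(n, dtype=int)      # best start of the last segment
    B[0] = -gamma
    for r in range(1, n + 1):
        costs = [B[l] + gamma + dev(l, r - 1) for l in range(r)]
        l_best = int(np.argmin(costs))
        B[r], jump[r - 1] = costs[l_best], l_best

    x, r = np.empty(n), n              # backtrack; fill segments with means
    while r > 0:
        l = jump[r - 1]
        x[l:r] = y[l:r].mean()
        r = l
    return x

y = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)])
y += 0.01 * np.sin(np.arange(40))      # mild perturbation
x = potts_1d(y, gamma=1.0)
print(len(set(np.round(x, 6))))        # number of recovered segments
```

Note that neither the gray levels nor the number of segments is supplied in advance; both fall out of the optimization, mirroring the property highlighted in the abstract.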
InverseNet: Solving Inverse Problems with Splitting Networks
We propose a new method that uses deep learning techniques to solve inverse
problems. The inverse problem is cast as learning an end-to-end mapping from
the observed data to the ground truth. Inspired by the splitting strategy
widely used in regularized iterative algorithms to tackle
inverse problems, the mapping is decomposed into two networks, with one
handling the inversion of the physical forward model associated with the data
term and one handling the denoising of the output from the former network,
i.e., the inverted version, associated with the prior/regularization term. The
two networks are trained jointly to learn the end-to-end mapping, avoiding a
two-step training procedure. The training is annealed, as the intermediate
variable between the two networks bridges the gap between the input (the
degraded version of the output) and the output, and progressively approaches
the ground truth.
The proposed network, referred to as InverseNet, is flexible in the sense that
most of the existing end-to-end network structure can be leveraged in the first
network and most of the existing denoising network structure can be used in the
second one. Extensive experiments on both synthetic and real datasets,
covering the tasks of motion deblurring, super-resolution, and colorization,
demonstrate the efficiency and accuracy of the proposed method compared with
other image processing algorithms.
DeepRED: Deep Image Prior Powered by RED
Inverse problems in imaging are extensively studied, with a variety of
strategies, tools, and theory that have been accumulated over the years.
Recently, this field has been immensely influenced by the emergence of
deep-learning techniques. One such contribution, which is the focus of this
paper, is the Deep Image Prior (DIP) work by Ulyanov, Vedaldi, and Lempitsky
(2018). DIP offers a new approach towards the regularization of inverse
problems, obtained by forcing the recovered image to be synthesized from a
given deep architecture. While DIP has been shown to be quite an effective
unsupervised approach, its results still fall short when compared to
state-of-the-art alternatives.
In this work, we aim to boost DIP by adding an explicit prior, which enriches
the overall regularization effect in order to lead to better-recovered images.
More specifically, we propose to bring in the concept of Regularization by
Denoising (RED), which leverages existing denoisers for regularizing inverse
problems. Our work shows how the two (DIP and RED) can be merged into a highly
effective unsupervised recovery process while avoiding the need to
differentiate the chosen denoiser, and leading to very effective results,
demonstrated on several tested problems.
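The RED half of the combination can be sketched in isolation (the DIP network is omitted here, so this is not the full DeepRED scheme). RED's regularizer ρ(x) = ½·xᵀ(x − D(x)) leads, for a least-squares data term, to the fixed-point update x ← (AᵀA + λI)⁻¹(Aᵀy + λD(x)), in which the plug-in denoiser D is only evaluated, never differentiated. The denoiser, problem sizes, and λ below are toy choices:

```python
import numpy as np

rng = np.random.default_rng(3)

m, n = 80, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)               # forward model
x_true = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")
y = A @ x_true + 0.01 * rng.standard_normal(m)             # noisy data

def D(x):
    """Plug-in denoiser: a toy 3-tap moving average."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

lam = 0.5
M = np.linalg.inv(A.T @ A + lam * np.eye(n))               # fixed system matrix
x = np.zeros(n)
for _ in range(2000):
    x = M @ (A.T @ y + lam * D(x))                         # denoiser only evaluated

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
residual = np.linalg.norm(x - M @ (A.T @ y + lam * D(x)))  # fixed-point check
print(round(err, 3), residual)
```

Avoiding the denoiser's derivative is the key practical point: any black-box denoiser, including a deep one, can be plugged into the same loop.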
Particle methods enable fast and simple approximation of Sobolev gradients in image segmentation
Bio-image analysis is challenging due to inhomogeneous intensity
distributions and high levels of noise in the images. Bayesian inference
provides a principled way for regularizing the problem using prior knowledge. A
fundamental choice is how one measures "distances" between shapes in an image.
It has been shown that the straightforward geometric L2 distance is degenerate
and leads to pathological situations. This is avoided when using Sobolev
gradients, rendering the segmentation problem less ill-posed. The high
computational cost and implementation overhead of Sobolev gradients, however,
have hampered practical applications. We show how particle methods as applied
to image segmentation allow for a simple and computationally efficient
implementation of Sobolev gradients. We show that the evaluation of Sobolev
gradients amounts to particle-particle interactions along the contour in an
image. We extend an existing particle-based segmentation algorithm to using
Sobolev gradients. Using synthetic and real-world images, we benchmark the
results for both 2D and 3D images using piecewise smooth and piecewise constant
region models. The present particle approximation of Sobolev gradients is 2.8
to 10 times faster than the previous reference implementation, but retains the
known favorable properties of Sobolev gradients. This speedup is achieved by
using local particle-particle interactions instead of solving a global Poisson
equation at each iteration. The computational time per iteration is higher for
Sobolev gradients than for L2 gradients. Since Sobolev gradients precondition
the optimization problem, however, a smaller number of overall iterations may
be necessary for the algorithm to converge, which can in some cases amortize
the higher per-iteration cost.
Comment: 21 pages, 10 figures
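The preconditioning effect of Sobolev gradients is easy to see in a toy setting: the Sobolev gradient is the L2 gradient pushed through (I − αΔ)⁻¹, which on a periodic 1D contour is a diagonal scaling 1/(1 + αk²) in Fourier space. The paper's contribution is to get this effect from local particle-particle interactions instead of a global solve; the sketch below (hypothetical signal and α) only shows the smoothing itself:

```python
import numpy as np

n = 128
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(4)
g_l2 = np.sin(t) + 0.5 * rng.standard_normal(n)   # noisy L2 gradient field

alpha = 5.0
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi        # angular wavenumbers
g_sob = np.fft.ifft(np.fft.fft(g_l2) / (1 + alpha * k**2)).real

# High frequencies are damped by 1/(1 + alpha k^2); the smooth part survives,
# so descent directions no longer chase per-pixel noise.
rough_l2 = np.std(np.diff(g_l2))
rough_sob = np.std(np.diff(g_sob))
print(round(rough_sob, 3), round(rough_l2, 3))
```

This damping of high-frequency components is what renders the segmentation flow less ill-posed and preconditions the optimization, at the price of the extra per-iteration work the abstract quantifies.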
Improved Search Strategies with Application to Estimating Facial Blendshape Parameters
It is well known that popular optimization techniques can lead to overfitting
or even a lack of convergence altogether; thus, practitioners often utilize ad
hoc regularization terms added to the energy functional. When carefully
crafted, these regularizations can produce compelling results. However,
regularization changes both the energy landscape and the solution to the
optimization problem, which can result in underfitting. Surprisingly, many
practitioners both add regularization and claim that their model lacks the
expressivity to fit the data. Motivated by a geometric interpretation of the
linearized search space, we propose an approach that ameliorates overfitting
without the need for regularization terms that restrict the expressiveness of
the underlying model. We illustrate the efficacy of our approach on
minimization problems related to three-dimensional facial expression estimation
where overfitting clouds semantic understanding and regularization may lead to
underfitting that misses or misinterprets subtle expressions.
Bilevel approaches for learning of variational imaging models
We review some recent learning approaches in variational imaging, based on
bilevel optimisation, and emphasize the importance of their treatment in
function space. The paper covers both analytical and numerical techniques.
Analytically, we include results on the existence and structure of minimisers,
as well as optimality conditions for their characterisation. Based on this
information, Newton type methods are studied for the solution of the problems
at hand, combining them with sampling techniques in case of large databases.
The computational verification of the developed techniques is extensively
documented, covering instances with different types of regularisers, several
noise models, spatially dependent weights, and large image databases.
RANCOR: Non-Linear Image Registration with Total Variation Regularization
Optimization techniques have been widely used in deformable registration,
allowing for the incorporation of similarity metrics with regularization
mechanisms. These regularization mechanisms are designed to mitigate the
effects of trivial solutions to ill-posed registration problems and to
otherwise ensure the resulting deformation fields are well-behaved. This paper
introduces a novel deformable registration algorithm, RANCOR, which uses
iterative convexification to address deformable registration problems under
total-variation regularization. Initial comparative results against four
state-of-the-art registration algorithms are presented using the Internet Brain
Segmentation Repository (IBSR) database.
Comment: 9 pages, 1 figure, technical note
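Total-variation regularization of the kind RANCOR applies to deformation fields can be illustrated on a 1D signal (a hypothetical stand-in: the paper works on 3D registration with iterative convexification, not the smoothed gradient descent used here). The energy is ½‖x − y‖² + λ·TV_ε(x) with TV_ε a smoothed total variation, which preserves sharp jumps while suppressing noise:

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.concatenate([np.zeros(30), np.ones(30)])  # a sharp edge...
y += 0.2 * rng.standard_normal(60)               # ...under noise

lam, eps, lr = 0.5, 1e-2, 0.05
x = y.copy()
for _ in range(1000):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)     # derivative of the smoothed |.|
    grad_tv = np.zeros_like(x)
    grad_tv[:-1] -= w                # d/dx_i of sqrt((x_{i+1}-x_i)^2 + eps)
    grad_tv[1:] += w
    x -= lr * ((x - y) + lam * grad_tv)

tv_before = np.abs(np.diff(y)).sum()
tv_after = np.abs(np.diff(x)).sum()
print(round(tv_before, 2), round(tv_after, 2))
```

The total variation of the result drops sharply while the edge itself survives, the "well-behaved deformation field" behavior that motivates TV over quadratic smoothness penalties in registration.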