714 research outputs found
Self-Organized Operational Neural Networks for Severe Image Restoration Problems
Discriminative learning based on convolutional neural networks (CNNs) aims to
perform image restoration by learning from training examples of noisy-clean
image pairs. It has become the go-to methodology for tackling image restoration
and has outperformed the traditional non-local class of methods. However, the
top-performing networks are generally composed of many convolutional layers and
hundreds of neurons, with trainable parameters in excess of several million.
We claim that this is due to the inherent linear nature of convolution-based
transformation, which is inadequate for handling severe restoration problems.
Recently, a non-linear generalization of CNNs, called operational neural
networks (ONNs), has been shown to outperform CNNs on AWGN denoising. However,
its formulation is burdened by a fixed collection of well-known nonlinear
operators and an exhaustive search to find the best possible configuration for
a given architecture, whose efficacy is further limited by a fixed output layer
operator assignment. In this study, we leverage the Taylor series-based
function approximation to propose a self-organizing variant of ONNs, Self-ONNs,
for image restoration, which synthesizes novel nodal transformations on-the-fly
as part of the learning process, thus eliminating the need for redundant
training runs for operator search. In addition, it enables a finer level of
operator heterogeneity by diversifying individual connections of the receptive
fields and weights. We perform a series of extensive ablation experiments
across three severe image restoration tasks. Even when a strict equivalence of
learnable parameters is imposed, Self-ONNs surpass CNNs by a considerable
margin across all problems, improving the generalization performance by up to 3
dB in terms of PSNR.
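The Taylor-series idea behind Self-ONNs can be illustrated with a minimal sketch: each "generative neuron" replaces the fixed linear product of a CNN with a truncated Maclaurin expansion whose per-power coefficients are learned, so the nodal operator itself is synthesized during training. The function name and shapes below are hypothetical, not the authors' implementation.

```python
import numpy as np

def generative_node(x, w, b=0.0):
    # Self-ONN-style nodal operator as a truncated Maclaurin series:
    #   psi(x) = b + sum_{q=1}^{Q} w[q-1] * x**q
    # With Q = 1 and w[0] = 1 this degenerates to the ordinary linear
    # (convolutional) node; higher-order coefficients synthesize new operators.
    Q = len(w)
    powers = np.stack([x ** (q + 1) for q in range(Q)])  # shape (Q, ...)
    return b + np.tensordot(w, powers, axes=1)

x = np.linspace(-1.0, 1.0, 5)
linear_out = generative_node(x, np.array([1.0, 0.0, 0.0]))  # CNN special case
cubic_out = generative_node(x, np.array([0.5, -0.2, 0.1]))  # learned non-linear operator
```

Because each connection carries its own coefficient vector `w`, operator heterogeneity can differ across individual receptive-field connections, which is the finer granularity the abstract refers to.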
Deep Mean-Shift Priors for Image Restoration
In this paper we introduce a natural image prior that directly represents a
Gaussian-smoothed version of the natural image distribution. We include our
prior in a formulation of image restoration as a Bayes estimator that also
allows us to solve noise-blind image restoration problems. We show that the
gradient of our prior corresponds to the mean-shift vector on the natural image
distribution. In addition, we learn the mean-shift vector field using denoising
autoencoders, and use it in a gradient descent approach to perform Bayes risk
minimization. We demonstrate competitive results for noise-blind deblurring,
super-resolution, and demosaicing.
Comment: NIPS 201
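The abstract's central identity, that the gradient of the Gaussian-smoothed prior corresponds to the mean-shift vector, which a denoising autoencoder's residual approximates, can be verified in closed form for a toy 1-D Gaussian prior, where the optimal (MMSE) denoiser is known analytically. This stands in for the learned autoencoder; all symbols below are illustrative assumptions.

```python
import numpy as np

# Toy prior: p(y) = N(mu, tau^2); Gaussian smoothing with std sigma gives
# p_sigma(x) = N(mu, tau^2 + sigma^2).
mu, tau, sigma = 0.5, 1.0, 0.3
x = np.linspace(-2.0, 3.0, 101)

def dae(x):
    # Optimal denoiser E[y | x] for this prior -- the analytic stand-in for
    # a trained denoising autoencoder.
    return (tau**2 * x + sigma**2 * mu) / (tau**2 + sigma**2)

# Mean-shift vector: sigma^2 * d/dx log p_sigma(x).
ms_vector = sigma**2 * (mu - x) / (tau**2 + sigma**2)

# Autoencoder residual dae(x) - x should match the mean-shift vector,
# which is what lets gradient descent perform Bayes risk minimization.
dae_residual = dae(x) - x
```

In the paper this residual direction is plugged into a gradient-descent scheme on the restoration objective, which is why a single learned denoiser can serve noise-blind deblurring, super-resolution, and demosaicing alike.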
Learning shape correspondence with anisotropic convolutional neural networks
Establishing correspondence between shapes is a fundamental problem in
geometry processing, arising in a wide variety of applications. The problem is
especially difficult in the setting of non-isometric deformations, as well as
in the presence of topological noise and missing parts, mainly due to the
limited capability to model such deformations axiomatically. Several recent
works showed that invariance to complex shape transformations can be learned
from examples. In this paper, we introduce an intrinsic convolutional neural
network architecture based on anisotropic diffusion kernels, which we term
Anisotropic Convolutional Neural Network (ACNN). In our construction, we
generalize convolutions to non-Euclidean domains by constructing a set of
oriented anisotropic diffusion kernels, creating in this way a local intrinsic
polar representation of the data (a "patch"), which is then correlated with a
filter. Several cascades of such filters, linear, and non-linear operators are
stacked to form a deep neural network whose parameters are learned by
minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic
dense correspondences between deformable shapes in very challenging settings,
achieving state-of-the-art results on some of the most difficult recent
correspondence benchmarks.
Semi-Blind Spatially-Variant Deconvolution in Optical Microscopy with Local Point Spread Function Estimation By Use Of Convolutional Neural Networks
We present a semi-blind, spatially-variant deconvolution technique aimed at
optical microscopy that combines a local estimation step of the point spread
function (PSF) and deconvolution using a spatially variant, regularized
Richardson-Lucy algorithm. To find the local PSF map in a computationally
tractable way, we train a convolutional neural network to perform regression of
an optical parametric model on synthetically blurred image patches. We
deconvolved both synthetic and experimentally-acquired data, and achieved an
improvement of image SNR of 1.00 dB on average, compared to other deconvolution
algorithms.
Comment: 2018/02/11: submitted to IEEE ICIP 2018; 2018/05/04: accepted to IEEE ICIP 2018.
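The deconvolution backbone the abstract names is the Richardson-Lucy iteration; the paper's contribution is to make the PSF spatially variant (regressed locally by a CNN) and to regularize the updates. A minimal sketch of the classic spatially-invariant, unregularized iteration, using an assumed Gaussian PSF and hypothetical names, is:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    # Classic Richardson-Lucy multiplicative update:
    #   est <- est * ( (observed / (est * psf)) * psf_mirror )
    # where * denotes convolution; eps guards against division by zero.
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        denom = np.maximum(fftconvolve(est, psf, mode="same"), eps)
        est = est * fftconvolve(observed / denom, psf_mirror, mode="same")
    return est

# Demo: blur a point source with a normalized Gaussian PSF, then deconvolve.
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
img = np.zeros((31, 31))
img[15, 15] = 1.0
blurred = fftconvolve(img, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
```

A spatially variant version would replace the single `psf` with a per-region PSF map, which is exactly what the CNN regression step supplies in the paper.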
- …