A discrepancy principle for Poisson data: uniqueness of the solution for 2D and 3D data
This paper is concerned with the uniqueness of the solution of a nonlinear
equation, called the discrepancy equation. For the restoration of data corrupted
by Poisson noise, we have to minimize an objective function that combines a
data-fidelity function, given by the generalized Kullback-Leibler divergence, and a
regularization penalty function. Bertero et al. recently proposed using the solution
of the discrepancy equation as a convenient value for the regularization parameter.
Furthermore, they devised suitable conditions that assure the uniqueness of this solution
for several regularization functions in 1D denoising and deblurring problems.
The aim of this paper is to generalize this uniqueness result to 2D and 3D problems
for several penalty functions, such as an edge-preserving functional, a simple case of
the class of Markov Random Field (MRF) regularization functionals, and classical
Tikhonov regularization.
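The discrepancy principle described in the abstract can be illustrated numerically. The following toy 1D denoising example is a minimal sketch, not the paper's algorithm: the quadratic penalty toward the data mean (standing in for Tikhonov regularization), the signal, and the discrepancy level 2*KL/N = 1 are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of the discrepancy principle for Poisson data.
rng = np.random.default_rng(0)
truth = 20 + 10 * np.sin(np.linspace(0, 4 * np.pi, 200))
y = rng.poisson(truth).astype(float)
m = y.mean()

def kl(y, x):
    # Generalized Kullback-Leibler divergence KL(y, x), with 0*log(0) = 0.
    xc = np.maximum(x, 1e-12)
    t = np.where(y > 0, y * np.log(np.maximum(y, 1e-12) / xc), 0.0)
    return np.sum(t + xc - y)

def restore(beta):
    # Componentwise minimizer of KL(y, x) + (beta/2) * ||x - m||^2,
    # i.e. the positive root of beta*x^2 + (1 - beta*m)*x - y = 0.
    a = 1.0 - beta * m
    return (-a + np.sqrt(a * a + 4.0 * beta * y)) / (2.0 * beta)

def discrepancy(beta):
    # Discrepancy equation: normalized discrepancy 2*KL/N equals 1.
    return 2.0 * kl(y, restore(beta)) / y.size - 1.0

# Bisection on beta: under uniqueness conditions of the kind the paper
# studies, the discrepancy is monotone in beta, so the root is unique.
lo, hi = 1e-6, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if discrepancy(mid) < 0:
        lo = mid  # too little regularization: restoration fits the noise
    else:
        hi = mid
beta_star = 0.5 * (lo + hi)
print(beta_star, discrepancy(beta_star))
```

The closed-form `restore` is possible here only because the toy penalty separates per component; the edge-preserving and MRF penalties treated in the paper couple neighboring pixels and require an iterative solver.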
Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative
models that has recently gained significant attention. GANs learn complex,
high-dimensional distributions implicitly over images, audio, and other data.
However, there exist major challenges in training GANs, namely mode
collapse, non-convergence, and instability, due to inappropriate design of the
network architecture, objective function, or optimization algorithm.
Recently, to address these challenges, several solutions for better
design and optimization of GANs have been investigated, based on
re-engineered network architectures, new objective functions, and alternative
optimization algorithms. To the best of our knowledge, there is no existing
survey that has particularly focused on the broad and systematic development of
these solutions. In this study, we perform a comprehensive survey of the
advancements in GAN design and optimization solutions proposed to handle GAN
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion of the different GAN variants proposed within each solution and their
relationships. Finally, based on the insights gained, we present promising
research directions in this rapidly growing field.
Comment: 42 pages, 13 figures, tables
Stabilizing Training of Generative Adversarial Networks through Regularization
Deep generative models based on Generative Adversarial Networks (GANs) have
demonstrated impressive sample quality but in order to work they require a
careful choice of architecture, parameter initialization, and selection of
hyper-parameters. This fragility is in part due to a dimensional mismatch or
non-overlapping support between the model distribution and the data
distribution, causing their density ratio and the associated f-divergence to be
undefined. We overcome this fundamental limitation and propose a new
regularization approach with low computational cost that yields a stable GAN
training procedure. We demonstrate the effectiveness of this regularizer across
several architectures trained on common benchmark image generation tasks. Our
regularization turns GAN models into reliable building blocks for deep
learning.
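The support-mismatch problem this abstract starts from can be demonstrated in a few lines. The sketch below (an illustrative toy, not the paper's regularizer: the point-mass distributions, grid, and noise scale are assumptions) shows that the KL divergence between two distributions with disjoint support is infinite for every model parameter theta, giving no training signal, while convolving both with Gaussian noise, the idea underlying noise-induced regularization, yields a divergence that is finite and smooth in theta.

```python
import numpy as np

# Data is a point mass at 0, the "model" a point mass at theta: disjoint
# supports, so the density ratio and KL divergence are undefined/infinite.
xs = np.linspace(-5, 5, 2001)
dx = xs[1] - xs[0]

def smoothed_delta(mu, sigma):
    # Point mass at mu convolved with N(0, sigma^2), discretized on the grid.
    p = np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
    return p / (p.sum() * dx)

def kl(p, q):
    # Numerical KL(p || q); infinite wherever p > 0 but q = 0.
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

# Without noise: divergence is infinite for any theta != 0 (flat, no gradient).
delta0 = np.zeros_like(xs); delta0[np.argmin(np.abs(xs - 0.0))] = 1 / dx
delta1 = np.zeros_like(xs); delta1[np.argmin(np.abs(xs - 1.0))] = 1 / dx
print(kl(delta0, delta1))  # inf

# With noise of scale sigma, KL(N(0,s^2) || N(theta,s^2)) = theta^2/(2 s^2):
# finite and smooth in theta, so a generator can follow its gradient.
sigma = 0.5
for theta in (1.0, 0.5, 0.1):
    print(theta, kl(smoothed_delta(0.0, sigma), smoothed_delta(theta, sigma)))
```

The paper's contribution, as the abstract states, is a cheap regularizer that achieves this smoothing effect analytically during training rather than by literally injecting noise into the data.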