Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative
models that has recently gained significant attention. GANs implicitly learn
complex, high-dimensional distributions over images, audio, and other data.
However, there are major challenges in training GANs, namely mode collapse,
non-convergence, and instability, arising from inappropriate network
architecture design, choice of objective function, and selection of
optimization algorithm. Recently, to address these challenges, several solutions for better
design and optimization of GANs have been investigated based on techniques of
re-engineered network architectures, new objective functions and alternative
optimization algorithms. To the best of our knowledge, there is no existing
survey that has particularly focused on broad and systematic developments of
these solutions. In this study, we perform a comprehensive survey of the
advancements in GANs design and optimization solutions proposed to handle GANs
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion on different GANs variants proposed within each solution and their
relationships. Finally, based on the insights gained, we present the promising
research directions in this rapidly growing field.
Comment: 42 pages, Figure 13, Table
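The minimax objective underlying the training difficulties the survey catalogues can be sketched numerically; the snippet below is a generic NumPy illustration of the standard discriminator loss and the common non-saturating generator loss, not code from any of the surveyed works (the function names and the `eps` stabiliser are my own choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits, eps=1e-8):
    """Standard GAN losses computed from raw discriminator logits.

    The discriminator maximises log D(x) + log(1 - D(G(z))); the generator
    uses the common non-saturating form, maximising log D(G(z)).
    """
    p_real = sigmoid(d_real_logits)
    p_fake = sigmoid(d_fake_logits)
    d_loss = -np.mean(np.log(p_real + eps) + np.log(1.0 - p_fake + eps))
    g_loss = -np.mean(np.log(p_fake + eps))
    return d_loss, g_loss

# A discriminator that confidently rejects fakes hands the generator a large loss,
# one source of the instability the survey discusses.
d_loss, g_loss = gan_losses(np.array([2.0, 1.5]), np.array([-1.0, -0.5]))
```

With confident logits as above, the generator loss dominates the discriminator loss, which is the regime where vanishing or exploding generator gradients tend to appear.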
Mode Regularized Generative Adversarial Networks
Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to missing modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high-dimensional
spaces, which can easily make training get stuck or push probability mass in
the wrong direction, towards regions of higher concentration than those of the
data-generating distribution. We introduce several ways of regularizing the
objective that can dramatically stabilize the training of GAN models. We also
show that our regularizers can help distribute probability mass fairly across
the modes of the data-generating distribution during the early phases of
training, thus providing a unified solution to the missing modes problem.
Comment: Published as a conference paper at ICLR 201
Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation
Appearance changes due to weather and seasonal conditions represent a strong
impediment to the robust implementation of machine learning systems in outdoor
robotics. While supervised learning optimises a model for the training domain,
it will deliver degraded performance in application domains that underlie
distributional shifts caused by these changes. Traditionally, this problem has
been addressed via the collection of labelled data in multiple domains or by
imposing priors on the type of shift between both domains. We frame the problem
in the context of unsupervised domain adaptation and develop a framework for
applying adversarial techniques to adapt popular, state-of-the-art network
architectures with the additional objective to align features across domains.
Moreover, as adversarial training is notoriously unstable, we first perform an
extensive ablation study, adapting many techniques known to stabilise
generative adversarial networks, and evaluate on a surrogate classification
task with the same appearance change. The distilled insights are applied to the
problem of free-space segmentation for motion planning in autonomous driving.
Comment: In Proceedings of the 2017 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2017)
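Adversarial feature alignment of this kind is commonly implemented with a gradient reversal layer: an identity in the forward pass that flips the sign of the gradient flowing from a domain classifier back into the feature extractor. The abstract does not spell out the mechanism, so the sketch below is a minimal generic illustration; the `grl_*` names and the scaling factor `lam` are assumptions:

```python
import numpy as np

def grl_forward(features):
    """Identity in the forward pass: the domain classifier sees the features."""
    return features

def grl_backward(grad_from_domain_classifier, lam=1.0):
    """Reverse and scale the gradient so the feature extractor is trained to
    *confuse* the domain classifier, aligning features across domains."""
    return -lam * grad_from_domain_classifier

g = np.array([1.0, -2.0])
fwd = grl_forward(g)            # unchanged on the way forward
rev = grl_backward(g, lam=0.5)  # flipped and scaled on the way back
```

Because the feature extractor receives the negated gradient, minimising the domain classifier's loss downstream maximises it upstream, which is exactly the adversarial objective used to align the training and application domains.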