Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a class of deep generative
models that has recently gained significant attention. GANs implicitly learn
complex, high-dimensional distributions over data such as images and audio.
However, training GANs faces major challenges, namely mode collapse,
non-convergence, and instability, arising from inappropriate design of the
network architecture, choice of objective function, and selection of the
optimization algorithm. Recently, to address these challenges, several
solutions for better
design and optimization of GANs have been investigated based on techniques of
re-engineered network architectures, new objective functions and alternative
optimization algorithms. To the best of our knowledge, no existing survey
has focused specifically on the broad and systematic development of these
solutions. In this study, we perform a comprehensive survey of the
advancements in GAN design and optimization solutions proposed to handle GAN
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion of the different GAN variants proposed within each solution and their
relationships. Finally, based on the insights gained, we present the promising
research directions in this rapidly growing field.

Comment: 42 pages, Figure 13, Table
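The objective-function challenges this survey covers all trace back to the original GAN minimax game. As a minimal numerical sketch (illustrative only, not taken from the survey), the standard value function V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))] can be estimated from discriminator outputs:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN value function V(D, G).

    d_real: D's probabilities on real samples (D wants these near 1).
    d_fake: D's probabilities on generated samples (D wants these near 0,
            while G wants them near 1). D maximizes V; G minimizes it.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A well-trained D scores real data high and fakes low, giving a high V.
v_good_d = gan_value(np.full(4, 0.9), np.full(4, 0.1))

# At the theoretical equilibrium D outputs 0.5 everywhere, so
# V = log(1/2) + log(1/2) = -log 4.
v_equilibrium = gan_value(np.full(4, 0.5), np.full(4, 0.5))
print(round(v_equilibrium, 3))  # -1.386
```

Non-convergence and instability show up in practice as this value oscillating rather than settling toward the equilibrium; the re-engineered objectives the survey catalogs replace V with better-behaved alternatives.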
Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval
Deep hashing has been intensively studied and successfully applied in
large-scale image retrieval systems due to its efficiency and effectiveness.
Recent studies have recognized that the existence of adversarial examples poses
a security threat to deep hashing models, that is, adversarial vulnerability.
Notably, it is challenging to efficiently distill reliable semantic
representatives for deep hashing to guide adversarial learning, which
hinders improving the adversarial robustness of deep hashing-based
retrieval models. Moreover, current research on adversarial training for deep
hashing is hard to formalize into a unified minimax structure. In this
paper, we explore Semantic-Aware Adversarial Training (SAAT) for improving the
adversarial robustness of deep hashing models. Specifically, we conceive a
discriminative mainstay features learning (DMFL) scheme to construct semantic
representatives for guiding adversarial learning in deep hashing. Particularly,
our DMFL, which enjoys a strict theoretical guarantee, is adaptively optimized in a
discriminative learning manner, where both discriminative and semantic
properties are jointly considered. Moreover, adversarial examples are
fabricated by maximizing the Hamming distance between the hash codes of
adversarial samples and mainstay features, the efficacy of which is validated
in the adversarial attack trials. Further, for the first time, we formulate
the adversarial training of deep hashing as a unified minimax
optimization under the guidance of the generated mainstay codes. Extensive
experiments on benchmark datasets show superior attack performance over
state-of-the-art algorithms; meanwhile, the proposed adversarial training can
effectively eliminate adversarial perturbations for trustworthy deep
hashing-based retrieval. Our code is available at
https://github.com/xandery-geek/SAAT
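The attack step described above, fabricating an adversarial example by maximizing the Hamming distance between its hash code and a target code, can be sketched as follows. Everything here is a hedged assumption rather than the authors' implementation: a random linear map `W` stands in for the deep hashing network, and a single signed-gradient (FGSM-style) step replaces the paper's optimization:

```python
import numpy as np

# Illustrative sketch (not the SAAT code): push an input's hash code away
# from a target ("mainstay") code, i.e. maximize their Hamming distance.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))   # hypothetical hashing model: 16-bit codes

def hash_logits(x):
    return x @ W                   # pre-binarization outputs; code = sign(logits)

def hamming(a, b):
    return int(np.sum(a != b))     # distance between two +/-1 codes

x = rng.standard_normal(8)                               # input to perturb
mainstay = np.sign(hash_logits(rng.standard_normal(8)))  # stand-in mainstay code

# Hamming distance is non-differentiable, so use the agreement
# logits . mainstay as a surrogate and step against its gradient.
grad = -(W @ mainstay)             # gradient of -(x @ W) . mainstay w.r.t. x
eps = 0.5
x_adv = x + eps * np.sign(grad)    # one FGSM-style step

before = hamming(np.sign(hash_logits(x)), mainstay)
after = hamming(np.sign(hash_logits(x_adv)), mainstay)
print(before, after)  # Hamming distance to the mainstay code, before vs. after
```

The signed step is guaranteed to reduce the agreement between the logits and the mainstay code, which is the continuous surrogate for growing the Hamming distance; in the paper's minimax training, the defender then minimizes the retrieval loss on examples crafted this way.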