KBGAN: Adversarial Learning for Knowledge Graph Embeddings
We introduce KBGAN, an adversarial learning framework that improves the
performance of a wide range of existing knowledge graph embedding models.
Because knowledge graphs typically only contain positive facts, sampling useful
negative training examples is a non-trivial task. Replacing the head or tail
entity of a fact with a uniformly sampled random entity is the conventional
method for generating negative facts, but most facts generated this way are
easily distinguished from positive facts and contribute little to training.
Inspired by generative adversarial
networks (GANs), we use one knowledge graph embedding model as a negative
sample generator to assist the training of our desired model, which acts as the
discriminator in GANs. This framework is independent of the concrete form of
generator and discriminator, and therefore can utilize a wide variety of
knowledge graph embedding models as its building blocks. In experiments, we
adversarially train two translation-based models, TransE and TransD, each with
assistance from one of the two probability-based models, DistMult and ComplEx.
We evaluate the performance of KBGAN on the link prediction task, using three
knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental
results show that adversarial training substantially improves the performance
of the target embedding models under various settings.
Comment: To appear at NAACL HLT 201
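To illustrate the conventional negative-sampling baseline the abstract contrasts with, here is a minimal sketch (the function name and setup are hypothetical, not KBGAN's implementation) of corrupting a (head, relation, tail) triple by uniform entity replacement:

```python
import random

def uniform_negative_sample(triple, entities, rng=random):
    """Corrupt a (head, relation, tail) triple by replacing either the head
    or the tail with an entity drawn uniformly at random -- the conventional
    strategy the abstract argues yields mostly easy negatives."""
    head, rel, tail = triple
    if rng.random() < 0.5:
        return (rng.choice(entities), rel, tail)  # corrupt head
    return (head, rel, rng.choice(entities))      # corrupt tail
```

A generator-based scheme like KBGAN would instead draw the replacement entity from a learned distribution that concentrates on hard-to-discriminate negatives.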
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect
Despite being impactful on a variety of problems and applications, the
generative adversarial nets (GANs) are remarkably difficult to train. This
issue is formally analyzed by \cite{arjovsky2017towards}, who also propose an
alternative direction to avoid the caveats in the minimax two-player training of
GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the
1-Lipschitz continuity of the discriminator. In this paper, we propose a novel
approach to enforcing the Lipschitz continuity in the training procedure of
WGANs. Our approach seamlessly connects WGAN with one of the recent
semi-supervised learning methods. As a result, it gives rise to not only better
photo-realistic samples than the previous methods but also state-of-the-art
semi-supervised learning results. In particular, our approach achieves an
Inception Score above 5.0 with only 1,000 CIFAR-10 images and, to the best of
our knowledge, is the first to exceed 90% accuracy on CIFAR-10 using only
4,000 labeled images.
Comment: Accepted as a conference paper at the International Conference on
Learning Representations (ICLR). Xiang Wei and Boqing Gong contributed equally
to this work.
Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative
models that has recently gained significant attention. GANs learn complex,
high-dimensional distributions implicitly over images, audio, and other data.
However, there are major challenges in training GANs, namely mode collapse,
non-convergence, and instability, arising from inappropriate network
architecture design, choice of objective function, and selection of the
optimization algorithm. Recently, to address these challenges, several
solutions for better
design and optimization of GANs have been investigated based on techniques of
re-engineered network architectures, new objective functions and alternative
optimization algorithms. To the best of our knowledge, there is no existing
survey that has particularly focused on broad and systematic developments of
these solutions. In this study, we perform a comprehensive survey of the
advancements in GAN design and optimization solutions proposed to handle these
challenges. We first identify key research issues within each design and
optimization technique, and then propose a new taxonomy that structures the
solutions by key research issue. In accordance with the taxonomy, we provide a
detailed discussion of the different GAN variants proposed within each solution
and their relationships. Finally, based on the insights gained, we present
promising research directions in this rapidly growing field.
Comment: 42 pages, Figure 13, Table
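For reference, the two-player game underlying all of the variants this survey covers is the original GAN minimax objective (Goodfellow et al., 2014), whose instability motivates the re-engineered objectives discussed above:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

Here $D$ is the discriminator, $G$ the generator, and $p_z$ the prior over latent noise; mode collapse and non-convergence are failure modes of alternating optimization of this saddle-point problem.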