A Kernel Perspective for Regularizing Deep Neural Networks
We propose a new point of view for regularizing deep neural networks by using
the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm
cannot be computed, it admits upper and lower approximations leading to various
practical strategies. Specifically, this perspective (i) provides a common
umbrella for many existing regularization principles, including spectral norm
and gradient penalties as well as adversarial training, (ii) leads to new effective
regularization penalties, and (iii) suggests hybrid strategies combining lower
and upper bounds to get better approximations of the RKHS norm. We
experimentally show this approach to be effective for learning on small
datasets and for obtaining adversarially robust models.
Comment: ICML
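For intuition, here is a minimal sketch (not the authors' code) of one such lower-bound surrogate, a gradient-norm penalty on the network output; the model, data shapes, and the weight `lam` are illustrative assumptions:

```python
import torch

def gradient_penalty(model, x, lam=0.1):
    """Penalty on ||grad_x f(x)||^2, a lower-bound surrogate for the RKHS norm."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    # Summing the outputs makes autograd return per-example input gradients.
    grads = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    return lam * grads.flatten(1).pow(2).sum(dim=1).mean()

# Sketch of a training step: add the penalty to the task loss, e.g.
# loss = criterion(model(x), y) + gradient_penalty(model, x)
```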
Robust Speech Recognition Using Generative Adversarial Networks
This paper describes a general, scalable, end-to-end framework that uses the
generative adversarial network (GAN) objective to enable robust speech
recognition. Encoders trained with the proposed approach enjoy improved
invariance by learning to map noisy audio to the same embedding space as that
of clean audio. Unlike previous methods, the new framework does not rely on
domain expertise or simplifying assumptions as are often needed in signal
processing, and directly encourages robustness in a data-driven way. We show
the new approach improves simulated far-field speech recognition of vanilla
sequence-to-sequence models without specialized front-ends or preprocessing.
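As a rough illustration of this idea, the sketch below (an assumption, not the paper's implementation) trains an encoder so a discriminator cannot distinguish noisy-audio embeddings from clean ones; the model classes, shapes, and loss form are placeholders:

```python
import torch
import torch.nn.functional as F

def adversarial_invariance_losses(encoder, discriminator, clean, noisy):
    z_clean = encoder(clean)  # embedding of clean audio
    z_noisy = encoder(noisy)  # embedding of its noisy counterpart
    ones = torch.ones(clean.size(0), 1)
    zeros = torch.zeros(clean.size(0), 1)
    # Discriminator learns to label clean embeddings 1 and noisy ones 0.
    d_loss = (
        F.binary_cross_entropy_with_logits(discriminator(z_clean.detach()), ones)
        + F.binary_cross_entropy_with_logits(discriminator(z_noisy.detach()), zeros)
    )
    # Encoder is updated to make noisy embeddings indistinguishable from clean.
    e_loss = F.binary_cross_entropy_with_logits(discriminator(z_noisy), ones)
    return d_loss, e_loss
```

In training, these two losses would be minimized in alternation, alongside the usual sequence-to-sequence recognition loss.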
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect
Despite being impactful on a variety of problems and applications, the
generative adversarial nets (GANs) are remarkably difficult to train. This
issue is formally analyzed by Arjovsky and Bottou (2017), who also propose an
alternative direction to avoid the caveats in the minimax two-player training of
GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the
1-Lipschitz continuity of the discriminator. In this paper, we propose a novel
approach to enforcing the Lipschitz continuity in the training procedure of
WGANs. Our approach seamlessly connects WGAN with one of the recent
semi-supervised learning methods. As a result, it gives rise to not only better
photo-realistic samples than the previous methods but also state-of-the-art
semi-supervised learning results. In particular, our approach achieves an
inception score of more than 5.0 with only 1,000 CIFAR-10 images and, to the
best of our knowledge, is the first to exceed 90% accuracy on the CIFAR-10
dataset using only 4,000 labeled images.
Comment: Accepted as a conference paper at the International Conference on
Learning Representations (ICLR). Xiang Wei and Boqing Gong contributed equally
to this work.
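The following is a simplified sketch of a consistency term in the spirit described above (the paper's exact formulation differs, e.g. it also involves intermediate discriminator layers); `M`, `lam`, and the assumption that the discriminator uses dropout are illustrative:

```python
import torch

def consistency_term(D, x_real, M=0.0, lam=2.0):
    # D is assumed to apply dropout internally, so two forward passes on the
    # same real batch yield two perturbed views of its output (shape (B, 1)).
    d1 = D(x_real)
    d2 = D(x_real)
    # Per-example squared distance between the two outputs.
    dist = (d1 - d2).pow(2).flatten(1).mean(dim=1)
    # Penalize only the part exceeding the margin M, encouraging D to change
    # little under small perturbations, i.e. to stay close to 1-Lipschitz.
    return lam * torch.clamp(dist - M, min=0).mean()
```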
Learning Compositional Visual Concepts with Mutual Consistency
Compositionality of semantic concepts in image synthesis and analysis is
appealing as it can help in decomposing known and generatively recomposing
unknown data. For instance, we may learn concepts of changing illumination,
geometry or albedo of a scene, and try to recombine them to generate physically
meaningful but unseen data for training and testing. In practice, however, we
often do not have samples from the joint concept space available: We may have
data on illumination change in one data set and on geometric change in another
one without complete overlap. We pose the following question: How can we learn
two or more concepts jointly from different data sets with mutual consistency
where we do not have samples from the full joint space? We present a novel
answer in this paper based on cyclic consistency over multiple concepts,
represented individually by generative adversarial networks (GANs). Our method,
ConceptGAN, can be understood as a drop-in module for data augmentation that
improves resilience in real-world applications. Qualitative and quantitative
evaluations demonstrate its efficacy in generating semantically meaningful
images, as well as in one-shot face verification as an example application.
Comment: 10 pages, 8 figures, 4 tables, CVPR 2018
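One core constraint can be sketched as follows; this is an illustrative assumption about the cyclic-consistency idea, not the authors' full objective, and `G_a`/`G_b` (generators for two concepts, e.g. illumination and geometry) are placeholder names:

```python
import torch.nn.functional as F

def commutation_loss(G_a, G_b, x):
    # Applying concept a then concept b should agree with applying b then a,
    # which ties the two single-concept GANs together on the (unobserved)
    # joint-concept samples.
    x_ab = G_b(G_a(x))
    x_ba = G_a(G_b(x))
    return F.l1_loss(x_ab, x_ba)
```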
Learn to synthesize and synthesize to learn
Attribute-guided face image synthesis aims to manipulate attributes on a face
image. Most existing methods for image-to-image translation can either perform
a fixed translation between any two image domains using a single attribute or
require training data with the attributes of interest for each subject.
Therefore, these methods can only train one specific model for each pair of
image domains, which limits their ability to deal with more than two
domains. Another disadvantage of these methods is that they often suffer from
the common problem of mode collapse that degrades the quality of the generated
images. To overcome these shortcomings, we propose an attribute-guided face
image generation method that uses a single model capable of synthesizing
multiple photo-realistic face images conditioned on the attributes of interest. In
addition, we adopt the proposed model to increase the realism of the simulated
face images while preserving the face characteristics. Compared to existing
models, synthetic face images generated by our method exhibit good
photorealistic quality on several face datasets. Finally, we demonstrate that
the generated facial images can be used for synthetic data augmentation and
improve the performance of a facial expression recognition classifier.
Comment: Accepted to Computer Vision and Image Understanding (CVIU)
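A minimal sketch, assuming attribute conditioning via channel concatenation (the paper's actual architecture is not specified here), of how a single generator can serve multiple attribute translations:

```python
import torch
import torch.nn as nn

class AttributeConditionedGenerator(nn.Module):
    """Toy generator conditioned on a target attribute vector."""

    def __init__(self, img_channels=3, n_attrs=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + n_attrs, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, img, attrs):
        # Broadcast each attribute value over the spatial grid, then let the
        # same network handle any attribute combination.
        b, _, h, w = img.shape
        attr_maps = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
        return self.net(torch.cat([img, attr_maps], dim=1))
```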