27,512 research outputs found
AAANE: Attention-based Adversarial Autoencoder for Multi-scale Network Embedding
Network embedding represents nodes in a continuous vector space and preserves
structure information from the Network. Existing methods usually adopt a
"one-size-fits-all" approach when concerning multi-scale structure information,
such as first- and second-order proximity of nodes, ignoring the fact that
different scales play different roles in the embedding learning. In this paper,
we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE)
framework, which promotes the collaboration of different scales and lets them
vote for robust representations. The proposed AAANE consists of two components:
1) an attention-based autoencoder that effectively captures the highly
non-linear network structure and can de-emphasize irrelevant scales during
training; 2) an adversarial regularization that guides the autoencoder to
learn robust representations by matching the posterior distribution of the
latent embeddings to a given prior distribution. This is the first attempt to
introduce attention
mechanisms to multi-scale network embedding. Experimental results on real-world
networks show that our learned attention parameters are different for every
network and the proposed approach outperforms existing state-of-the-art
approaches for network embedding.
Comment: 8 pages, 5 figures
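To make the multi-scale attention idea concrete, here is a minimal numpy sketch, assuming row-normalized powers of the adjacency matrix as the per-scale structure and fixed attention logits standing in for the parameters AAANE would learn (the paper's exact proximity construction and training procedure may differ):

```python
import numpy as np

def multi_scale_proximity(adj, num_scales):
    """k-th order proximity matrices: row-normalized powers of the adjacency.
    A common choice for multi-scale structure; an illustrative assumption here."""
    p = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)
    scales, cur = [], np.eye(adj.shape[0])
    for _ in range(num_scales):
        cur = cur @ p                 # one more hop of proximity per scale
        scales.append(cur)
    return np.stack(scales)           # shape: (num_scales, n, n)

def attention_over_scales(scales, logits):
    """Softmax-weighted combination of per-scale structure, so irrelevant
    scales can be de-emphasized; the logits play the role of the learned
    attention parameters."""
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w = w / w.sum()
    return np.tensordot(w, scales, axes=1), w
```

In AAANE the attention weights are trained jointly with the autoencoder; here they are supplied by hand purely to show the aggregation step.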
Zero-Shot Visual Recognition using Semantics-Preserving Adversarial Embedding Networks
We propose a novel framework called Semantics-Preserving Adversarial
Embedding Network (SP-AEN) for zero-shot visual recognition (ZSL), where test
images and their classes are both unseen during training. SP-AEN aims to tackle
the inherent problem of semantic loss in the prevailing family of
embedding-based ZSL, where some semantics would be discarded during training if
they are non-discriminative for training classes, but could become critical for
recognizing test classes. Specifically, SP-AEN prevents the semantic loss by
introducing an independent visual-to-semantic space embedder which disentangles
the semantic space into two subspaces for the two arguably conflicting
objectives: classification and reconstruction. Through adversarial learning of
the two subspaces, SP-AEN can transfer the semantics from the reconstructive
subspace to the discriminative one, accomplishing the improved zero-shot
recognition of unseen classes. Compared with prior work, SP-AEN can not only
improve classification but also generate photo-realistic images, demonstrating
the effectiveness of semantic preservation. On four popular benchmarks: CUB,
AWA, SUN and aPY, SP-AEN considerably outperforms other state-of-the-art
methods by an absolute performance difference of 12.2%, 9.3%, 4.0%, and
3.6% in terms of harmonic mean.
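For context, the harmonic mean reported above is the standard generalized zero-shot metric balancing seen- and unseen-class accuracy; a minimal sketch (the accuracy values below are illustrative, not taken from the paper):

```python
def harmonic_mean(acc_seen, acc_unseen):
    """Generalized ZSL harmonic mean of seen- and unseen-class accuracies.
    High only when BOTH accuracies are high, unlike the arithmetic mean."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

# Example with made-up accuracies: a model strong on seen classes but weak
# on unseen ones is penalized relative to a balanced model.
h = harmonic_mean(0.6, 0.4)
```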
Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness
It is widely known that deep neural networks are susceptible to being fooled
by adversarial examples whose perturbations are imperceptible to humans. Various
defenses have been proposed to improve adversarial robustness, among which
adversarial training methods are most effective. However, most of these methods
treat the training samples independently and demand a tremendous number of
samples to train a robust network, while ignoring the latent structural
information among these samples. In this work, we propose a novel Local
Structure Preserving (LSP) regularization, which aims to preserve the local
structure of the input space in the learned embedding space. In this manner,
the attacking effect of adversarial samples lying in the vicinity of clean
samples can be alleviated. We show strong empirical evidence that with or
without adversarial training, our method consistently improves the performance
of adversarial robustness on several image classification datasets compared to
the baselines and some state-of-the-art approaches, thus providing a promising
direction for future research.
Comment: 13 pages, 4 figures
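A minimal numpy sketch of the local-structure-preserving idea, under the hedged reading that the regularizer penalizes embedding-space distances between input-space nearest neighbors (the paper's exact formulation, neighborhood weighting, and distance choice may differ):

```python
import numpy as np

def lsp_regularizer(x, z, k=2):
    """Sketch of a local-structure-preserving penalty.
    For each input, find its k nearest neighbors in input space and penalize
    the squared distances between the corresponding embeddings, encouraging
    locally-close inputs (including small adversarial perturbations of them)
    to remain close in the learned embedding space.
    x: (n, d_in) inputs; z: (n, d_emb) embeddings."""
    dx = np.linalg.norm(x[:, None] - x[None, :], axis=-1)   # input-space distances
    dz = np.linalg.norm(z[:, None] - z[None, :], axis=-1)   # embedding distances
    # k nearest input-space neighbors of each point, excluding the point itself
    nn = np.argsort(dx, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(len(x)), k)
    return float(np.mean(dz[rows, nn.ravel()] ** 2))
```

An embedding that preserves the input's local geometry incurs a small penalty, while one that scatters input-space neighbors incurs a large one; in training this term would be added to the classification (or adversarial-training) loss.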