AAANE: Attention-based Adversarial Autoencoder for Multi-scale Network Embedding
Network embedding represents nodes in a continuous vector space and preserves
structural information from the network. Existing methods usually adopt a
"one-size-fits-all" approach when handling multi-scale structure information,
such as first- and second-order proximity of nodes, ignoring the fact that
different scales play different roles in embedding learning. In this paper,
we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE)
framework, which promotes the collaboration of different scales and lets them
vote for robust representations. The proposed AAANE consists of two components:
1) An attention-based autoencoder that effectively captures the highly
non-linear network structure and can de-emphasize irrelevant scales during
training. 2) An adversarial regularization that guides the autoencoder to
learn robust representations by matching the posterior distribution of the
latent embeddings to a given prior distribution. This is the first attempt
to introduce attention
mechanisms to multi-scale network embedding. Experimental results on real-world
networks show that our learned attention parameters are different for every
network and the proposed approach outperforms existing state-of-the-art
approaches for network embedding.
Comment: 8 pages, 5 figures
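The scale-weighting idea above, letting different proximity scales "vote" via attention, can be sketched as a softmax over per-scale node embeddings. The shapes, the fixed attention logits, and the toy data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def attention_combine(scale_embeddings, attn_logits):
    """Combine per-scale node embeddings with softmax attention weights.

    scale_embeddings: array of shape (num_scales, num_nodes, dim),
        e.g. embeddings from first-, second-, third-order proximity.
    attn_logits: array of shape (num_scales,) -- learned in the paper,
        fixed here for illustration.
    """
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()                       # softmax over scales
    # weighted sum across scales -> (num_nodes, dim)
    return np.tensordot(w, scale_embeddings, axes=1)

# toy example: 3 scales, 4 nodes, 2-dim embeddings
rng = np.random.default_rng(0)
scales = rng.normal(size=(3, 4, 2))
combined = attention_combine(scales, np.array([2.0, 0.0, -2.0]))
print(combined.shape)  # (4, 2)
```

With logits `[2.0, 0.0, -2.0]` the first scale dominates the combination, which is the de-emphasis effect the abstract describes: irrelevant scales receive near-zero weight.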
Evidence Transfer for Improving Clustering Tasks Using External Categorical Evidence
In this paper we introduce evidence transfer for clustering, a deep learning
method that can incrementally manipulate the latent representations of an
autoencoder, according to external categorical evidence, in order to improve a
clustering outcome. By evidence transfer we define the process by which the
categorical outcome of an external, auxiliary task is exploited to improve a
primary task, in this case representation learning for clustering. Our proposed
method makes no assumptions regarding the categorical evidence presented, nor
the structure of the latent space. We compare our method against the baseline
solution by performing k-means clustering before and after its deployment.
Experiments with three different kinds of evidence show that our method
effectively manipulates the latent representations when introduced with real
corresponding evidence, while remaining robust when presented with low quality
evidence.
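The evaluation protocol described above, running k-means on the latent codes before and after evidence transfer, can be sketched with a minimal k-means. The tiny implementation and the toy latent codes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means (illustrative, not the paper's implementation)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels

# toy latent codes: two well-separated blobs, standing in for an
# autoencoder's latent space after a successful evidence transfer
rng = np.random.default_rng(1)
z = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
labels = kmeans(z, k=2)
```

The before/after comparison in the paper simply repeats this clustering on the latent codes produced before and after the manipulation and scores both against ground truth.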
Autoencoding beyond pixels using a learned similarity metric
We present an autoencoder that leverages learned representations to better
measure similarities in data space. By combining a variational autoencoder with
a generative adversarial network we can use learned feature representations in
the GAN discriminator as basis for the VAE reconstruction objective. Thereby,
we replace element-wise errors with feature-wise errors to better capture the
data distribution while offering invariance towards e.g. translation. We apply
our method to images of faces and show that it outperforms VAEs with
element-wise similarity measures in terms of visual fidelity. Moreover, we show
that the method learns an embedding in which high-level abstract visual
features (e.g. wearing glasses) can be modified using simple arithmetic.
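The feature-wise reconstruction objective above can be sketched as an MSE computed on discriminator features rather than on pixels. The random projection standing in for an intermediate discriminator layer is purely an illustrative assumption:

```python
import numpy as np

def feature_wise_loss(x, x_recon, feature_fn):
    """Reconstruction loss measured in a learned feature space.

    feature_fn stands in for an intermediate layer of the GAN
    discriminator; here it is a fixed random projection + ReLU,
    purely for illustration.
    """
    fx, fr = feature_fn(x), feature_fn(x_recon)
    return ((fx - fr) ** 2).mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))           # stand-in "discriminator layer"
feat = lambda x: np.maximum(x @ W, 0)   # ReLU(x W)

x = rng.normal(size=(8, 64))            # toy "images", flattened
x_shift = np.roll(x, 1, axis=1)         # crude stand-in for a translation
pixel_loss = ((x - x_shift) ** 2).mean()
feat_loss = feature_wise_loss(x, x_shift, feat)
print(pixel_loss, feat_loss)
```

In the actual VAE/GAN model the feature extractor is trained jointly with the generator, which is what lets the feature space absorb nuisance variation such as small translations that an element-wise loss would penalize heavily.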
Deep Clustering: A Comprehensive Survey
Cluster analysis plays an indispensable role in machine learning and data
mining. Learning a good data representation is crucial for clustering
algorithms. Recently, deep clustering, which can learn clustering-friendly
representations using deep neural networks, has been broadly applied in a wide
range of clustering tasks. Existing surveys for deep clustering mainly focus on
the single-view fields and the network architectures, ignoring the complex
application scenarios of clustering. To address this issue, in this paper we
provide a comprehensive survey of deep clustering from the perspective of data sources.
With different data sources and initial conditions, we systematically
distinguish the clustering methods in terms of methodology, prior knowledge,
and architecture. Concretely, deep clustering methods are introduced according
to four categories, i.e., traditional single-view deep clustering,
semi-supervised deep clustering, deep multi-view clustering, and deep transfer
clustering. Finally, we discuss the open challenges and potential future
opportunities in different fields of deep clustering.