Deep Multiple Auto-Encoder based Multi-view Clustering
© The Author(s) 2021. Multi-view clustering (MVC), which aims to explore the underlying structure of data by leveraging heterogeneous information from different views, has attracted growing attention. Multi-view clustering algorithms based on different theories have been proposed and extended to various applications. However, most existing MVC algorithms are shallow models that learn the structure of multi-view data by mapping it directly to a low-dimensional representation space, ignoring the nonlinear structure hidden in each view, which weakens clustering performance to a certain extent. In this paper, we propose a deep multi-view clustering algorithm based on multiple auto-encoders, termed MVC-MAE, to cluster multi-view data. MVC-MAE adopts auto-encoders to capture the nonlinear structure of each view in a layer-wise manner, and jointly incorporates the local invariance within each view and the consistent as well as complementary information between any two views. Besides, we integrate representation learning and clustering into a unified framework, so that the two tasks can be jointly optimized. Extensive experiments on six real-world datasets demonstrate the promising performance of our algorithm compared with 15 baseline algorithms in terms of two evaluation metrics.
National Natural Science Foundation of China; Program for Innovation Research Team (University of Yunnan Province); National Social Science Foundation of China
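The per-view auto-encoder idea can be sketched minimally as follows. This is an illustrative simplification with made-up data and linear auto-encoders, not the authors' MVC-MAE implementation: each view gets its own auto-encoder, and the latent codes are concatenated into a joint representation.

```python
import numpy as np

def train_linear_ae(X, k, lr=0.01, epochs=500, seed=0):
    """Tiny linear auto-encoder: encode Z = X @ W, decode X_hat = Z @ W.T.
    Plain gradient descent on the squared reconstruction error."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], k))
    for _ in range(epochs):
        err = X @ W @ W.T - X                               # reconstruction residual
        grad = (X.T @ (err @ W) + (err.T @ X) @ W) / len(X)  # d/dW of ||X - XWW^T||^2
        W -= lr * grad
    return W

# Two synthetic "views" of the same 60 samples (3 latent groups).
rng = np.random.default_rng(1)
base = np.repeat([0.0, 3.0, 6.0], 20)[:, None] + rng.normal(size=(60, 2))
views = [base @ rng.normal(size=(2, 5)), base @ rng.normal(size=(2, 4))]
views = [(V - V.mean(0)) / V.std(0) for V in views]   # standardize each view

# Encode each view with its own auto-encoder, then concatenate the codes
# into a joint representation that a clustering head could consume.
codes = [V @ train_linear_ae(V, k=2) for V in views]
joint = np.hstack(codes)
print(joint.shape)  # (60, 4)
```

A real implementation would use deep nonlinear encoders trained layer-wise, add the cross-view consistency and complementarity terms, and optimize a clustering objective jointly on the concatenated code, as the abstract describes.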
DIVA: A Dirichlet Process Based Incremental Deep Clustering Algorithm via Variational Auto-Encoder
Generative model-based deep clustering frameworks excel in classifying
complex data, but are limited in handling dynamic and complex features because
they require prior knowledge of the number of clusters. In this paper, we
propose a nonparametric deep clustering framework that employs an infinite
mixture of Gaussians as a prior. Our framework utilizes a memoized online
variational inference method that enables the "birth" and "merge" moves of
clusters, allowing our framework to cluster data in a "dynamic-adaptive"
manner, without requiring prior knowledge of the number of features. We name
the framework DIVA, a Dirichlet Process-based Incremental deep clustering
framework via Variational Auto-Encoder. Our framework outperforms
state-of-the-art baselines and exhibits superior performance in classifying
complex data with dynamically changing features, particularly in the case of
incremental features. We released our source code implementation at:
https://github.com/Ghiara/diva
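The "no preset cluster count" idea can be illustrated with DP-means, a simple small-variance relative of Dirichlet-process mixtures. This is a stand-in sketch, not DIVA's memoized online variational inference; the threshold `lam` and the toy data are made up:

```python
import numpy as np

def dp_means(X, lam, iters=10):
    # DP-means: Lloyd-style updates, but a point whose squared distance to
    # every centroid exceeds lam gives "birth" to a new cluster, so the
    # number of clusters is inferred rather than fixed in advance.
    centers = [X[0].copy()]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):
            d = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d))
            if d[j] > lam:                 # birth move
                centers.append(x.copy())
                j = len(centers) - 1
            assign[i] = j
        centers = [X[assign == j].mean(axis=0) if np.any(assign == j) else c
                   for j, c in enumerate(centers)]
    return assign, np.array(centers)

# Three well-separated Gaussian blobs; the number 3 is discovered, not given.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, size=(30, 2))
               for m in ([0, 0], [4, 0], [0, 4])])
assign, centers = dp_means(X, lam=2.0)
print(len(centers))  # 3
```

DIVA replaces this hard-threshold birth rule with variational birth and merge moves over an infinite Gaussian mixture, which is what lets it adapt as features arrive incrementally.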
AugDMC: Data Augmentation Guided Deep Multiple Clustering
Clustering aims to group similar objects together while separating dissimilar
ones. The structures hidden in data can then be identified, helping to
understand the data in an unsupervised manner. Traditional clustering methods such
as k-means provide only a single clustering for one data set. Deep clustering
methods such as auto-encoder based clustering methods have shown a better
performance, but still provide a single clustering. However, a given dataset
might have multiple clustering structures and each represents a unique
perspective of the data. Therefore, some multiple clustering methods have been
developed to discover multiple independent structures hidden in data. Although
deep multiple clustering methods provide better performance, how to efficiently
capture the alternative perspectives in data is still a problem. In this paper,
we propose AugDMC, a novel data Augmentation guided Deep Multiple Clustering
method, to tackle the challenge. Specifically, AugDMC leverages data
augmentations to automatically extract features related to a certain aspect of
the data using self-supervised prototype-based representation learning, where
different aspects of the data can be preserved under different data
augmentations. Moreover, a stable optimization strategy is proposed to
alleviate the instability arising from different augmentations. Multiple
clusterings based on different aspects of the data can then be obtained.
Experimental results on three real-world datasets, compared with
state-of-the-art methods, validate the effectiveness of the proposed method.
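The core intuition, that different augmentations expose different clustering structures, can be sketched as follows. This is a toy stand-in: coordinate masking plays the role of AugDMC's image augmentations, and plain k-means replaces its prototype-based representation learning; all names and data are made up.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Basic Lloyd's k-means returning hard assignments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[assign == j].mean(0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return assign

# 100 samples carrying TWO independent structures: an "x-aspect" and a
# "y-aspect", each with two groups.
rng = np.random.default_rng(1)
gx = rng.integers(0, 2, 100)
gy = rng.integers(0, 2, 100)
X = np.stack([gx * 5 + rng.normal(0, 0.3, 100),
              gy * 5 + rng.normal(0, 0.3, 100)], axis=1)

# Two "augmentations", each suppressing one aspect so the other dominates.
clus_a = kmeans(X * np.array([1.0, 0.0]), k=2)   # reveals the x-aspect
clus_b = kmeans(X * np.array([0.0, 1.0]), k=2)   # reveals the y-aspect

# Each clustering recovers a different ground-truth structure (up to label swap).
agree = lambda c, g: max(np.mean(c == g), np.mean(c != g))
print(agree(clus_a, gx), agree(clus_b, gy))  # 1.0 1.0
```

The point of the sketch is only that clustering under aspect-suppressing transformations yields two valid but distinct clusterings of the same data, which is the phenomenon AugDMC exploits with learned representations.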
Neural Collaborative Subspace Clustering
We introduce Neural Collaborative Subspace Clustering, a neural model
that discovers clusters of data points drawn from a union of low-dimensional
subspaces. In contrast to previous attempts, our model runs without the aid of
spectral clustering. This makes our algorithm one of the few that can
gracefully scale to large datasets. At its heart, our neural model benefits
from a classifier which determines whether a pair of points lies on the same
subspace or not. Essential to our model is the construction of two affinity
matrices, one from the classifier and the other from a notion of subspace
self-expressiveness, to supervise training in a collaborative scheme. We
thoroughly assess and contrast the performance of our model against various
state-of-the-art clustering algorithms including deep subspace-based ones.Comment: Accepted to ICML 201