Reconsidering Representation Alignment for Multi-view Clustering
Aligning distributions of view representations is a core component of today's
state of the art models for deep multi-view clustering. However, we identify
several drawbacks with naïvely aligning representation distributions. We
demonstrate that these drawbacks both lead to less separable clusters in the
representation space, and inhibit the model's ability to prioritize views.
Based on these observations, we develop a simple baseline model for deep
multi-view clustering. Our baseline model avoids representation alignment
altogether, while performing similar to, or better than, the current state of
the art. We also expand our baseline model by adding a contrastive learning
component. This introduces a selective alignment procedure that preserves the
model's ability to prioritize views. Our experiments show that the contrastive
learning component enhances the baseline model, improving on the current state
of the art by a large margin on several datasets.

Comment: To appear in CVPR 2021. Code available at https://github.com/DanielTrosten/mv
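As a rough illustration of the kind of selective alignment a contrastive component provides — this is a generic InfoNCE-style sketch, not the paper's exact loss — matching samples across two views are treated as positives and all other pairs in the batch as negatives:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE-style contrastive loss between paired view embeddings.

    z1, z2: (n, d) arrays of representations of the same n samples under two
    views; row i of z1 and row i of z2 form the positive pair, every other
    row of z2 serves as a negative for z1[i]. Function name, signature, and
    the temperature default are illustrative assumptions.
    """
    # L2-normalise so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: sample i in view 1 vs sample i in view 2
    return -np.mean(np.diag(log_probs))
```

Because only matching samples are pulled together (rather than whole view distributions being forced to coincide), per-sample weighting of views can survive the alignment, which is the intuition behind a "selective" procedure.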
A Novel Deep Learning Framework to Identify Latent Neuroendophenotypes from Multimodal Brain Imaging Data
The expertise required to ensure adequate treatment for patients with complex cases is in short supply, which creates a high demand for subtyping or clustering analysis across different clinical situations. Identifying and refining disease-related subtypes will support both medical treatment and pathological research. Clinically, clustering can narrow down the possible causes and suggest effective treatment options. However, clustering of non-invasive multimodal brain imaging data has not been well addressed.
In this thesis, we explore this clustering problem using a deep unsupervised embedded clustering (DEMC) method on multimodal brain imaging data. T1-weighted magnetic resonance imaging (MRI) features and resting-state functional MRI-derived brain networks are learned by a sparse autoencoder and a stacked autoencoder, respectively, and then transformed into the embedding space. K-Means is then used to initialize the centroids of the deep embedded clustering (DEC) module, after which DEC refines the clusters by minimizing a KL divergence. Throughout, the deep embedding and the clustering are optimized simultaneously. The framework was tested on 994 subjects from the Human Connectome Project (HCP), and the results show that it achieves better clustering performance than other benchmark algorithms.
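The DEC machinery the abstract refers to can be sketched in a few lines. This is a minimal NumPy illustration of the standard DEC formulation — a Student's-t soft assignment to centroids and a sharpened target distribution matched via KL divergence — not the thesis's implementation; the function names and the alpha default are assumptions.

```python
import numpy as np

def soft_assign(z, centroids, alpha=1.0):
    """Student's-t soft assignment q_ij of embedded points z to centroids."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)        # rows sum to 1

def target_distribution(q):
    """Sharpened target p_ij: square q and renormalise per cluster, then per row."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_divergence(p, q):
    """KL(p || q) summed over all samples; the quantity DEC minimises."""
    return float(np.sum(p * np.log(p / q)))
```

In practice the centroids would be initialized by running K-Means on the autoencoder embeddings (e.g. with `sklearn.cluster.KMeans`), after which the embedding network and centroids are updated jointly by gradient descent on `kl_divergence(target_distribution(q), q)`.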