Localized Sparse Incomplete Multi-view Clustering
Incomplete multi-view clustering, which aims to solve the clustering problem
on incomplete multi-view data with partially missing views, has received
increasing attention in recent years. Although numerous methods have been
developed, most either cannot flexibly handle incomplete multi-view data with
arbitrarily missing views or ignore the negative effect of information
imbalance among views. Moreover, some methods do not fully exploit the local
structure of all incomplete views. To tackle these
problems, this paper proposes a simple but effective method, named localized
sparse incomplete multi-view clustering (LSIMVC). Different from the existing
methods, LSIMVC intends to learn a sparse and structured consensus latent
representation from the incomplete multi-view data by optimizing a sparse
regularized and novel graph embedded multi-view matrix factorization model.
Specifically, in this matrix-factorization-based model, an l1-norm-based
sparse constraint is introduced to obtain sparse low-dimensional
individual representations and a sparse consensus representation. Moreover, a
novel local graph embedding term is introduced to learn the structured
consensus representation. Different from the existing works, our local graph
embedding term aggregates the graph embedding task and consensus representation
learning task into a concise term. Furthermore, to reduce the imbalance factor
of incomplete multi-view learning, an adaptive weighted learning scheme is
introduced to LSIMVC. Finally, an efficient optimization strategy is given to
solve the optimization problem of our proposed model. Comprehensive
experiments on six incomplete multi-view databases verify that our LSIMVC
outperforms the state-of-the-art IMC approaches. The code is available at
https://github.com/justsmart/LSIMVC.
Comment: Published in IEEE Transactions on Multimedia (TMM).
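As a rough illustration of the ingredients this abstract describes, the sketch below combines a masked multi-view matrix factorization, a soft-thresholding (l1 proximal) step that sparsifies the consensus representation, and a graph-Laplacian regularizer. The function names (`soft_threshold`, `lsimvc_sketch`), the toy similarity graph, and all hyperparameters are our own simplifications, not the authors' implementation; see their repository for the real one.

```python
import numpy as np

def soft_threshold(Z, tau):
    # Proximal operator of the l1 norm; drives small entries to exactly zero.
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def lsimvc_sketch(views, masks, k=3, lam=0.5, beta=0.05, lr=0.01, iters=100, seed=0):
    """Alternating proximal-gradient updates for a sparse, graph-regularized
    consensus representation H shared by all (possibly incomplete) views.
    views: list of (d_v, n) matrices; masks: matching 0/1 matrices whose
    all-zero columns mark samples missing in that view. Toy sketch only."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    Us = [0.1 * rng.standard_normal((X.shape[0], k)) for X in views]
    H = 0.1 * rng.standard_normal((k, n))
    # Crude similarity graph from the observed data (LSIMVC builds proper
    # local graphs per view; this is just a stand-in).
    S = sum(np.abs((X * m).T @ (X * m)) for X, m in zip(views, masks))
    np.fill_diagonal(S, 0.0)
    L = np.diag(S.sum(axis=1)) - S            # graph Laplacian
    for _ in range(iters):
        for U, X, m in zip(Us, views, masks):
            R = (U @ H - X) * m               # masked reconstruction residual
            U -= lr * (R @ H.T)               # gradient step on the view basis
        G = sum(U.T @ ((U @ H - X) * m) for U, X, m in zip(Us, views, masks))
        G += 2.0 * beta * (H @ L)             # gradient of the graph-embedding term
        H = soft_threshold(H - lr * G, lr * lam)  # proximal step for the l1 term
    return H
```

The adaptive per-view weighting from the abstract is omitted here; each view contributes equally to the gradient on H.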
Scalable Incomplete Multi-View Clustering with Structure Alignment
The success of existing multi-view clustering (MVC) relies on the assumption
that all views are complete. However, samples are usually partially available
due to data corruption or sensor malfunction, which raises the research of
incomplete multi-view clustering (IMVC). Although several anchor-based IMVC
methods have been proposed to process large-scale incomplete data, they
still suffer from the following drawbacks: i) Most existing approaches neglect
the inter-view discrepancy and enforce cross-view representation to be
consistent, which would corrupt the representation capability of the model; ii)
Due to the sample disparity between different views, the learned anchors might
be misaligned, which we refer to as the Anchor-Unaligned Problem for Incomplete
data (AUP-ID). The AUP-ID causes inaccurate graph fusion and degrades
clustering performance. To tackle these issues, we propose a novel incomplete
anchor graph learning framework termed Scalable Incomplete Multi-View
Clustering with Structure Alignment (SIMVC-SA). Specifically, we construct the
view-specific anchor graph to capture the complementary information from
different views. In order to solve the AUP-ID, we propose a novel structure
alignment module to refine the cross-view anchor correspondence. Meanwhile, the
anchor graph construction and alignment are jointly optimized in our unified
framework to enhance clustering quality. By constructing anchor graphs
instead of full graphs, the time and space complexity of the proposed SIMVC-SA
is proven to be linear in the number of samples. Extensive
experiments on seven incomplete benchmark datasets demonstrate the
effectiveness and efficiency of our proposed method. Our code is publicly
available at https://github.com/wy1019/SIMVC-SA
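To make the two central ideas concrete, here is a minimal numpy sketch: a view-specific anchor graph (sample-to-anchor similarities) and an anchor re-alignment step. Both function names are hypothetical, the greedy matching is only a stand-in for the paper's structure-alignment module, and the anchors are assumed to live in one shared feature space, which is a simplification.

```python
import numpy as np

def anchor_graph(X, A, sigma=1.0):
    # Z[i, j]: normalized Gaussian similarity of sample i to anchor j.
    # Rows sum to 1, so each row is a soft assignment over the m anchors.
    d2 = ((X[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

def align_anchors(A_ref, A_v):
    """Permute the rows of A_v so each best matches a row of A_ref
    (greedy nearest-anchor matching; a toy stand-in for SIMVC-SA's
    jointly optimized structure-alignment module)."""
    cost = ((A_ref[:, None, :] - A_v[None, :, :]) ** 2).sum(-1)
    perm = -np.ones(len(A_ref), dtype=int)
    used = set()
    for i in np.argsort(cost.min(axis=1)):     # assign easiest rows first
        for j in np.argsort(cost[i]):          # best unused candidate anchor
            if int(j) not in used:
                perm[i] = int(j)
                used.add(int(j))
                break
    return A_v[perm], perm
```

With m anchors and n samples, the graph costs O(nm) per view rather than the O(n^2) of a full affinity matrix, which is the source of the linear scaling claimed in the abstract.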
Information Recovery-Driven Deep Incomplete Multiview Clustering Network
Incomplete multi-view clustering is a hot and emerging topic. It is well
known that unavoidable data incompleteness greatly weakens the effective
information of multi-view data. To date, existing incomplete multi-view
clustering methods usually bypass unavailable views according to prior missing
information, which is a second-best scheme based on evasion. Other methods
that attempt to recover missing information are mostly applicable to specific
two-view datasets. To handle these problems, in this paper, we propose an
information recovery-driven deep incomplete multi-view clustering network,
termed RecFormer. Concretely, a two-stage autoencoder network with
the self-attention structure is built to synchronously extract high-level
semantic representations of multiple views and recover the missing data.
Besides, we develop a recurrent graph reconstruction mechanism that cleverly
leverages the restored views to promote the representation learning and the
further data reconstruction. Visualizations of the recovery results are given,
and sufficient experimental results confirm that our RecFormer has obvious
advantages over other top methods.
Comment: Accepted by TNNLS 2023. Please contact me if you have any questions:
[email protected]. The code is available at:
https://github.com/justsmart/RecForme
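The recovery idea, filling in a missing sample from the samples that are observed, can be illustrated with a parameter-free single-head self-attention pass. This is only a toy stand-in: RecFormer's actual network is a learned two-stage autoencoder with self-attention plus a recurrent graph-reconstruction mechanism, none of which is reproduced here.

```python
import numpy as np

def self_attention(X):
    # Scaled dot-product self-attention with no learned projections
    # (a simplification; real attention layers learn Q/K/V weights).
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ X

def recover_missing(views, masks):
    """Impute each missing sample (mask 0) from attention over the view's
    observed samples; observed rows are passed through unchanged.
    views: list of (n, d_v) matrices; masks: list of (n,) 0/1 vectors."""
    recovered = []
    for X, m in zip(views, masks):
        H = self_attention(X * m[:, None])        # zero out missing rows first
        recovered.append(np.where(m[:, None] > 0, X, H))
    return recovered
```

A fully masked row has a zero query, so its attention weights are uniform and its imputation reduces to the mean of the masked view, i.e. the crudest possible recovery; the learned model would do much better.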
Joint Projection Learning and Tensor Decomposition Based Incomplete Multi-view Clustering
Incomplete multi-view clustering (IMVC) has received increasing attention,
since some views of samples are often incomplete in practice. Most
existing methods learn similarity subgraphs from original incomplete multi-view
data and seek complete graphs by exploring the incomplete subgraphs of each
view for spectral clustering. However, the graphs constructed on the original
high-dimensional data may be suboptimal due to feature redundancy and noise.
Besides, previous methods generally ignored the graph noise caused by the
inter-class and intra-class structure variation during the transformation of
incomplete graphs and complete graphs. To address these problems, we propose a
novel Joint Projection Learning and Tensor Decomposition Based method (JPLTD)
for IMVC. Specifically, to alleviate the influence of redundant features and
noise in high-dimensional data, JPLTD introduces an orthogonal projection
matrix to project the high-dimensional features into a lower-dimensional space
for compact feature learning. Meanwhile, in the lower-dimensional space,
the similarity graphs corresponding to instances of different views are
learned, and JPLTD stacks these graphs into a third-order low-rank tensor to
explore the high-order correlations across different views. We further consider
the graph noise of projected data caused by missing samples and use a
tensor-decomposition-based graph filter for robust clustering. JPLTD decomposes
the original tensor into an intrinsic tensor and a sparse tensor. The intrinsic
tensor models the true data similarities. An effective optimization algorithm
is adopted to solve the JPLTD model. Comprehensive experiments on several
benchmark datasets demonstrate that JPLTD outperforms the state-of-the-art
methods. The code of JPLTD is available at https://github.com/weilvNJU/JPLTD.
Comment: IEEE Transactions on Neural Networks and Learning Systems, 202
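Two building blocks from this abstract are easy to sketch: an orthogonal projection to a lower-dimensional space, and singular-value thresholding in the Fourier domain, which is the standard proximal step for the t-SVD-based tensor nuclear norm used when separating an intrinsic tensor from sparse noise. Taking the projection from a plain SVD is our own shortcut (JPLTD learns it jointly), and the function names are hypothetical.

```python
import numpy as np

def orthogonal_projection(X, k):
    """Project d-dimensional features X (d, n) to k dimensions with an
    orthogonal matrix P (P^T P = I), here simply the top-k left singular
    vectors; JPLTD instead learns P jointly with the similarity graphs."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    P = U[:, :k]
    return P, P.T @ X

def tensor_svt(G, tau):
    """Singular-value thresholding of each frontal slice in the Fourier
    domain along mode 3: the proximal operator of the tensor nuclear norm,
    used to extract the intrinsic (low-rank) part of a stacked graph tensor."""
    Gf = np.fft.fft(G, axis=2)
    Zf = np.empty_like(Gf)
    for v in range(G.shape[2]):
        U, s, Vh = np.linalg.svd(Gf[:, :, v], full_matrices=False)
        Zf[:, :, v] = (U * np.maximum(s - tau, 0.0)) @ Vh   # shrink spectrum
    return np.real(np.fft.ifft(Zf, axis=2))
```

In a JPLTD-style loop, the sparse noise tensor would then be obtained by soft-thresholding the residual G minus the intrinsic part, alternating with the projection and graph updates.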