Capped norm linear discriminant analysis and its applications
Classical linear discriminant analysis (LDA) is based on the squared Frobenius
norm and is hence sensitive to outliers and noise. To improve the robustness of
LDA, in this paper we introduce the capped l_{2,1}-norm of a matrix, which
employs the non-squared l_2-norm together with a "capped" operation, and
further propose a novel capped l_{2,1}-norm linear discriminant analysis,
called CLDA. Owing to the capped l_{2,1}-norm, CLDA can effectively remove
extreme outliers and suppress the effect of noisy data. In fact, CLDA can also
be viewed as a weighted LDA.
CLDA is solved through a series of generalized eigenvalue problems with a
theoretical convergence guarantee. Experimental results on an artificial data
set, several UCI data sets, and two image data sets demonstrate the
effectiveness of CLDA.
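As an illustration, the capped l_{2,1}-norm described above — the sum over rows of the non-squared l_2-norm, each capped at a threshold — can be sketched in a few lines. This is a minimal NumPy sketch of the norm itself; the function name and the threshold parameter `eps` are our own labels, not taken from the paper:

```python
import numpy as np

def capped_l21_norm(X, eps):
    # Capped l_{2,1}-norm: sum_i min(||x_i||_2, eps) over the rows x_i.
    # Rows whose l_2-norm exceeds eps (likely extreme outliers)
    # contribute only the constant eps, so they cannot dominate the loss.
    row_norms = np.linalg.norm(X, axis=1)
    return np.minimum(row_norms, eps).sum()
```

An outlier row such as [100, 0] contributes eps rather than 100 to the value, which is the mechanism by which the capped norm suppresses extreme samples.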
Deep Self-representative Concept Factorization Network for Representation Learning
In this paper, we investigate the unsupervised deep representation learning
issue and technically propose a novel framework called Deep Self-representative
Concept Factorization Network (DSCF-Net), for clustering deep features. To
improve the representation and clustering abilities, DSCF-Net explicitly
considers discovering hidden deep semantic features, enhancing the robustness
properties of the deep factorization to noise, and preserving the local
manifold structures of deep features. Specifically, DSCF-Net seamlessly
integrates the robust deep concept factorization, deep self-expressive
representation and adaptive locality preserving feature learning into a unified
framework. To discover hidden deep representations, DSCF-Net designs a
hierarchical factorization architecture using multiple layers of linear
transformations, where the hierarchical representation is performed by
formulating the problem as optimizing the basis concepts in each layer to
improve the representation indirectly. DSCF-Net also improves robustness by
first applying subspace recovery for sparse error correction and then
performing the deep factorization in the recovered visual subspace. To obtain
locality-preserving representations, we also present an adaptive deep
self-representative weighting strategy by using the coefficient matrix as the
adaptive reconstruction weights to keep the locality of representations.
Extensive comparisons with several related models show that DSCF-Net delivers
state-of-the-art performance on several public databases.
Comment: Accepted by SDM 202
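DSCF-Net's hierarchical architecture stacks factorization layers on top of concept factorization, which approximates the data as X ≈ X W V^T with nonnegative factors. The sketch below shows a single such layer with the standard multiplicative updates on the Gram matrix K = X^T X (in the style of Xu and Gong's concept factorization); it illustrates the building block only, under our own assumed function signature, and is not the DSCF-Net algorithm itself:

```python
import numpy as np

def concept_factorization(X, k, n_iter=200, seed=0):
    # Single-layer concept factorization: X ~= X W V^T,
    # with X of shape (d, n), W and V of shape (n, k).
    # Multiplicative updates operate on the Gram matrix K = X^T X,
    # so the data enter only through inner products.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((n, k))
    V = rng.random((n, k))
    K = X.T @ X
    for _ in range(n_iter):
        # Standard multiplicative update rules; the small constant
        # guards against division by zero.
        W *= (K @ V) / (K @ W @ (V.T @ V) + 1e-10)
        V *= (K @ W) / (V @ (W.T @ (K @ W)) + 1e-10)
    return W, V
```

A deep variant would feed the layer's coefficient matrix into a further factorization; DSCF-Net additionally couples such layers with self-expressive and locality-preserving terms as described above.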