On optimizing subspaces for face recognition
Abstract We propose a subspace learning algorithm for face recognition by directly optimizing recognition performance scores
Building Deep Networks on Grassmann Manifolds
Learning representations on Grassmann manifolds is popular in quite a few
visual recognition tasks. In order to enable deep learning on Grassmann
manifolds, this paper proposes a deep network architecture by generalizing the
Euclidean network paradigm to Grassmann manifolds. In particular, we design
full rank mapping layers to transform input Grassmannian data to more desirable
ones, exploit re-orthonormalization layers to normalize the resulting matrices,
study projection pooling layers to reduce the model complexity in the
Grassmannian context, and devise projection mapping layers to respect
Grassmannian geometry and meanwhile achieve Euclidean forms for regular output
layers. To train the Grassmann networks, we exploit a stochastic gradient
descent setting on manifolds of the connection weights, and study a matrix
generalization of backpropagation to update the structured data. The
evaluations on three visual recognition tasks show that our Grassmann networks
have clear advantages over existing Grassmann learning methods, and achieve
results comparable with state-of-the-art approaches. Comment: AAAI'18 paper.
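The layers described in this abstract can be illustrated with a minimal numpy sketch. The function names and shapes below are illustrative assumptions, not the paper's implementation: a re-orthonormalization layer maps a full-rank matrix back onto the Grassmann manifold via a QR decomposition, and a projection mapping layer turns a subspace basis into its projection matrix, a Euclidean-friendly representation.

```python
import numpy as np

def reorthonormalize(X):
    # Re-orthonormalization layer (sketch): map a full-rank d x p matrix
    # back to an orthonormal basis using the Q factor of its QR
    # decomposition, so the output lies on the Grassmann/Stiefel manifold.
    Q, R = np.linalg.qr(X)
    # Fix column signs so the mapping is deterministic.
    return Q * np.sign(np.diag(R))

def projection_mapping(X):
    # Projection mapping layer (sketch): represent the subspace span(X),
    # X orthonormal, by the symmetric idempotent matrix X X^T, which can
    # feed regular Euclidean output layers.
    return X @ X.T
```

As a sanity check, the output of `reorthonormalize` has orthonormal columns, and the projection matrix is symmetric and idempotent, as required of a point on the projector representation of the Grassmannian.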
Probabilistic Sparse Subspace Clustering Using Delayed Association
Discovering and clustering subspaces in high-dimensional data is a
fundamental problem of machine learning with a wide range of applications in
data mining, computer vision, and pattern recognition. Earlier methods divided
the problem into two separate stages of finding the similarity matrix and
finding clusters. Similar to some recent works, we integrate these two steps
using a joint optimization approach. We make the following contributions: (i)
we estimate the reliability of the cluster assignment for each point before
assigning a point to a subspace. We group the data points into "certain"
and "uncertain" points, with the assignment of the latter group delayed until
their subspace-association certainty improves. (ii) We demonstrate that delayed
association is better suited for clustering subspaces that have ambiguities,
i.e. when subspaces intersect or data are contaminated with outliers/noise.
(iii) We demonstrate experimentally that such delayed probabilistic association
leads to a more accurate self-representation and final clusters. The proposed
method has higher accuracy both for points that lie exclusively in one
subspace and for those that lie on the intersection of subspaces. (iv) We show
that delayed association leads to a large reduction in computational cost,
since it allows for incremental spectral clustering.
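The certain/uncertain split described in contribution (i) can be sketched in a few lines. The threshold, function name, and probability format below are illustrative assumptions, not the paper's formulation: given soft subspace-association probabilities per point, confidently associated points are assigned immediately and the rest are delayed.

```python
import numpy as np

def split_certain_uncertain(probs, threshold=0.8):
    # probs: (n_points, n_subspaces) soft association probabilities.
    # A point is "certain" if its best association is confident enough;
    # "uncertain" points get the placeholder label -1 and their assignment
    # is delayed until certainty improves. (Illustrative criterion.)
    confidence = probs.max(axis=1)
    certain = confidence >= threshold
    labels = np.full(len(probs), -1)
    labels[certain] = probs[certain].argmax(axis=1)
    return labels, certain
```

Points near a subspace intersection or corrupted by noise tend to have flat probability rows, so they naturally fall into the delayed group under this kind of rule.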
Non-Negative Local Sparse Coding for Subspace Clustering
Subspace sparse coding (SSC) algorithms have proven to be beneficial to
clustering problems. They provide an alternative data representation in which
the underlying structure of the clusters can be better captured. However, most
of the research in this area is mainly focused on enhancing the sparse coding
part of the problem. In contrast, we introduce a novel objective term in our
proposed SSC framework which focuses on the separability of data points in the
coding space. We also provide mathematical insights into how this
local-separability term improves the clustering result of the SSC framework.
Our proposed non-linear local SSC algorithm (NLSSC) also benefits from the
efficient choice of its sparsity terms and constraints. The NLSSC algorithm is
also formulated in the kernel-based framework (NLKSSC) which can represent the
nonlinear structure of data. In addition, we address the possibility of having
redundancies in sparse coding results and its negative effect on graph-based
clustering problems. We introduce the link-restore post-processing step to
improve the representation graph of non-negative SSC algorithms such as ours.
Empirical evaluations on well-known clustering benchmarks show that our
proposed NLSSC framework results in better clusterings compared to the
state-of-the-art baselines and demonstrate the effectiveness of the
link-restore post-processing in improving the clustering accuracy via
correcting the broken links of the representation graph. Comment: 15 pages, IDA 2018 conference.
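The representation graph that link-restore operates on is typically built from the sparse coding matrix. A minimal sketch of the standard SSC-style construction, which is assumed here and is not necessarily the paper's exact variant: symmetrize the (non-negative) coefficient matrix and zero its diagonal so it can serve as an affinity for graph-based clustering.

```python
import numpy as np

def affinity_from_codes(C):
    # C: (n, n) sparse self-representation coefficients (column j codes
    # point j in terms of the others). Symmetrize to obtain a well-defined
    # affinity matrix for spectral clustering; the diagonal is zeroed
    # because self-links carry no clustering information.
    W = np.abs(C) + np.abs(C).T
    np.fill_diagonal(W, 0.0)
    return W
```

Redundant or missing entries in `C` translate directly into broken links in this graph, which is the failure mode the link-restore post-processing step is designed to repair.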