Robust Localized Multi-view Subspace Clustering
In multi-view clustering, different views may have different confidence
levels when learning a consensus representation. Existing methods usually
address this by assigning distinctive weights to different views. However, due
to the noisy nature of real-world applications, the confidence levels of samples in
the same view may also vary. Thus, a single unified weight per view may
lead to suboptimal solutions. In this paper, we propose a novel localized
multi-view subspace clustering model that considers the confidence levels of
both views and samples. By assigning weight to each sample under each view
properly, we can obtain a robust consensus representation via fusing the
noiseless structures among views and samples. We further develop a regularizer
on the weight parameters based on convex conjugacy theory, so that sample
weights are determined adaptively. An efficient iterative algorithm is
developed with a convergence guarantee. Experimental results on four benchmarks
demonstrate the correctness and effectiveness of the proposed model.
Comment: 7 pages
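The abstract does not spell out its weighting scheme; as a rough illustration of the idea only (per-view, per-sample weights driven by residuals against the consensus -- an assumed scheme, not the paper's actual model), one could write:

```python
import numpy as np

def weighted_consensus(reps, n_iter=20, beta=2.0):
    """Fuse per-view representations with per-view, per-sample weights.

    reps : list of (n_samples, d) arrays, one per view.
    Each sample in each view receives its own weight, updated from its
    residual against the current consensus (illustrative scheme only).
    """
    V = len(reps)
    w = np.full((V, len(reps[0])), 1.0 / V)
    consensus = np.mean(reps, axis=0)
    for _ in range(n_iter):
        # weight each sample in each view by closeness to the consensus
        res = np.stack([np.linalg.norm(R - consensus, axis=1) for R in reps])
        w = np.exp(-beta * res)
        w /= w.sum(axis=0, keepdims=True)   # normalize across views per sample
        # re-fuse the views with the new weights
        consensus = sum(w[v][:, None] * reps[v] for v in range(V))
    return consensus, w
```

Samples that are corrupted in one view then contribute to the consensus mainly through the views where they are clean.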
Multi-view Low-rank Sparse Subspace Clustering
Most existing approaches address the multi-view subspace clustering problem by
constructing an affinity matrix on each view separately and then proposing
how to extend a spectral clustering algorithm to handle multi-view data. This
paper presents an approach to multi-view subspace clustering that learns a
joint subspace representation by constructing an affinity matrix shared among all
views. Relying on the importance of both low-rank and sparsity constraints in
the construction of the affinity matrix, we introduce an objective that
balances agreement across different views while at the same time
encouraging sparsity and low-rankness of the solution. The related low-rank
and sparsity-constrained optimization problem is solved for each view using the
alternating direction method of multipliers. Furthermore, we extend our
approach to cluster data drawn from nonlinear subspaces by solving the
corresponding problem in a reproducing kernel Hilbert space. The proposed
algorithm outperforms state-of-the-art multi-view subspace clustering
algorithms on one synthetic and four real-world datasets.
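The abstract does not give the ADMM updates themselves, but any such low-rank-plus-sparse affinity construction alternates between two standard proximal operators, which can be sketched as:

```python
import numpy as np

def soft_threshold(M, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_shrink(M, tau):
    """Proximal operator of tau * ||.||_* (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Inside an ADMM loop, `soft_threshold` handles the sparsity term and `svd_shrink` the nuclear-norm (low-rank) term of the affinity matrix.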
A Survey on Multi-View Clustering
With advances in information acquisition technologies, multi-view data become
ubiquitous. Multi-view learning has thus become more and more popular in
machine learning and data mining fields. Multi-view unsupervised or
semi-supervised learning, such as co-training and co-regularization, has gained
considerable attention. Although multi-view clustering (MVC) methods have
recently developed rapidly, no survey has yet summarized and analyzed the
current progress. Therefore, this paper reviews the common
strategies for combining multiple views of data and, based on this summary, we
propose a novel taxonomy of the MVC approaches. We further discuss the
relationships between MVC and multi-view representation, ensemble clustering,
multi-task clustering, multi-view supervised and semi-supervised learning.
Several representative real-world applications are elaborated. To promote
future development of MVC, we envision several open problems that may require
further investigation and thorough examination.
Comment: 17 pages, 4 figures
A Method Based on Convex Cone Model for Image-Set Classification with CNN Features
In this paper, we propose a method for image-set classification based on
convex cone models, focusing on the effectiveness of convolutional neural
network (CNN) features as inputs. CNN features have non-negative values when
using the rectified linear unit as an activation function. This naturally leads
us to model a set of CNN features by a convex cone and measure the geometric
similarity of convex cones for classification. To establish this framework, we
sequentially define multiple angles between two convex cones by repeating the
alternating least squares method and then define the geometric similarity
between the cones using the obtained angles. Moreover, to enhance our method,
we introduce a discriminant space that maximizes the between-class variance
(gaps) and minimizes the within-class variance of the convex cones projected
onto it, similar to Fisher discriminant analysis. Finally,
classification is based on the similarity between projected convex cones. The
effectiveness of the proposed method was demonstrated experimentally using a
private, multi-view hand shape dataset and two public databases.
Comment: Accepted at the International Joint Conference on Neural Networks,
IJCNN, 201
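As a rough sketch of the angle computation described above (only the first, smallest angle between two cones; the multiple-angle recursion and the discriminant-space step are omitted, and the alternating projection scheme here is an assumed simplification):

```python
import numpy as np
from scipy.optimize import nnls

def cone_cosine(A, B, n_iter=50, tol=1e-8):
    """Largest cosine (smallest first angle) between cone(A) and cone(B).

    A, B : (d, k) matrices whose nonnegative combinations span each cone.
    Alternates nonnegative least squares: project the current direction
    onto the other cone, renormalize, repeat until the cosine stabilizes.
    """
    v = B.sum(axis=1)
    v /= np.linalg.norm(v)
    cos_prev = -1.0
    for _ in range(n_iter):
        a, _ = nnls(A, v)               # nearest point in cone(A) to v
        u = A @ a
        nu = np.linalg.norm(u)
        if nu < tol:
            return 0.0                  # cones are essentially orthogonal
        u /= nu
        b, _ = nnls(B, u)               # nearest point in cone(B) to u
        v = B @ b
        nv = np.linalg.norm(v)
        if nv < tol:
            return 0.0
        v /= nv
        cos = float(u @ v)
        if abs(cos - cos_prev) < tol:
            break
        cos_prev = cos
    return cos
```

Two cones sharing a common direction give cosine 1 (angle 0); disjoint cones give smaller values.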
Feature Concatenation Multi-view Subspace Clustering
Multi-view clustering aims to achieve more promising clustering results than
single-view clustering by exploiting multi-view information. Since the
statistical properties of different views are diverse, even incompatible, few
approaches implement multi-view clustering directly on concatenated features.
However, feature concatenation is a natural way to combine multiple views. To
this end, this paper proposes a novel multi-view subspace clustering approach
dubbed Feature Concatenation Multi-view Subspace Clustering (FCMSC).
Specifically, by exploring the consensus information, multi-view data are
first concatenated into a joint representation; then the l2,1-norm is
integrated into the objective function to handle the sample-specific and
cluster-specific corruptions of multiple views, benefiting the clustering
performance. Furthermore, by introducing graph Laplacians of multiple views, a
graph regularized FCMSC is also introduced to explore both the consensus
information and complementary information for clustering. It is noteworthy that
the obtained coefficient matrix is not derived by directly applying the
Low-Rank Representation (LRR) to the joint view representation. Finally,
an effective algorithm based on the Augmented Lagrangian Multiplier (ALM) is
designed to optimize the objective functions. Comprehensive experiments on six
real-world datasets illustrate the superiority of the proposed methods over
several state-of-the-art approaches for multi-view clustering.
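Assuming the norm garbled in the abstract is the l2,1-norm usually paired with sample-specific corruptions, an ALM solver would repeatedly apply its proximal operator, which shrinks whole columns:

```python
import numpy as np

def prox_l21(E, tau):
    """Proximal operator of tau * ||E||_{2,1} (column-wise shrinkage).

    The l2,1-norm sums the l2 norms of the columns, so its prox scales
    each column toward zero and zeroes columns whose norm is below tau --
    which is what lets whole corrupted samples be absorbed into E.
    """
    norms = np.linalg.norm(E, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return E * scale
```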
Convex Sparse Spectral Clustering: Single-view to Multi-view
Spectral Clustering (SC) is one of the most widely used methods for data
clustering. It first finds a low-dimensional embedding U of the data by computing
the eigenvectors of the normalized Laplacian matrix, and then performs k-means
on U to get the final clustering result. In this work, we observe that,
in the ideal case, U U^T should be block diagonal and thus sparse.
Therefore we propose the Sparse Spectral Clustering (SSC) method, which extends
SC with a sparse regularization on U U^T. To address the computational issue
of the nonconvex SSC model, we propose a novel convex relaxation of SSC based
on the convex hull of the fixed rank projection matrices. Then the convex SSC
model can be efficiently solved by the Alternating Direction Method of
Multipliers (ADMM). Furthermore, we propose the Pairwise Sparse
Spectral Clustering (PSSC) which extends SSC to boost the clustering
performance by using the multi-view information of data. Experimental
comparisons with several baselines on real-world datasets testify to the
efficacy of our proposed methods.
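A minimal sketch of the pipeline the abstract builds on -- plain spectral clustering, returning the matrix U U^T that SSC additionally sparsifies. The k-means step is a tiny inline implementation to keep the sketch self-contained, not the paper's code:

```python
import numpy as np

def _kmeans(X, k, n_iter=50, seed=0):
    """Tiny Lloyd's k-means, just enough for the demonstration."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

def spectral_clustering(W, k):
    """Plain spectral clustering on a symmetric affinity matrix W.

    SSC, as described above, would additionally place an l1 penalty on
    U @ U.T, which is block diagonal (hence sparse) in the ideal case.
    """
    d = W.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_is[:, None] * W * d_is[None, :]  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    U = vecs[:, :k]                  # eigenvectors of the k smallest eigenvalues
    return _kmeans(U, k), U @ U.T
```

On a graph with two disconnected cliques, U @ U.T is exactly block diagonal, which is the structure SSC's sparse penalty promotes on noisy graphs.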
Joint Adaptive Neighbours and Metric Learning for Multi-view Subspace Clustering
Due to the existence of various views or representations in many real-world
data, multi-view learning has drawn much attention recently. Multi-view
spectral clustering methods based on similarity matrices or graphs are quite
popular. Generally, these algorithms learn informative graphs by directly
utilizing the original data. However, in real-world applications, original
data often contain noise and outliers that lead to unreliable graphs. In addition,
different views may have different contributions to data clustering. In this
paper, a novel Multiview Subspace Clustering method unifying Adaptive
neighbours and Metric learning (MSCAM) is proposed to address the above
problems. In this method, we use the subspace representations of different
views to adaptively learn a consensus similarity matrix, uncovering the
subspace structure and avoiding the noise in the original data. For all views,
we also learn different Mahalanobis matrices that parameterize the squared
distances and consider the contributions of different views. Further, we
constrain the graph constructed by the similarity matrix to have exactly c (c is
the number of clusters) connected components. An iterative algorithm is
developed to solve this optimization problem. Moreover, experiments on a
synthetic dataset and different real-world datasets demonstrate the
effectiveness of MSCAM.
Comment: 9 pages
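The abstract does not give its update rules; one building block consistent with "adaptive neighbours" is the closed-form probabilistic neighbor assignment of Nie et al., sketched here under that assumption:

```python
import numpy as np

def adaptive_neighbors(X, k):
    """Closed-form adaptive neighbor weights (Nie et al. style sketch).

    For each point, the k nearest neighbors get weights that decrease
    linearly with squared distance and sum to one; all other points get
    weight zero. Requires at least k + 2 points.
    """
    n = len(X)
    D = ((X[:, None] - X[None]) ** 2).sum(-1)   # pairwise squared distances
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])
        idx = idx[idx != i][:k + 1]             # k neighbors plus the (k+1)-th
        d = D[i, idx]
        denom = k * d[k] - d[:k].sum()
        if denom > 1e-12:
            S[i, idx[:k]] = (d[k] - d[:k]) / denom
        else:
            S[i, idx[:k]] = 1.0 / k             # degenerate: uniform weights
    return S
```

The weights come from minimizing sum_j d_ij * s_ij + gamma * s_ij^2 over the probability simplex, which naturally yields a sparse, noise-robust graph.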
A Survey on Multi-Task Learning
Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim
is to leverage useful information contained in multiple related tasks to
help improve the generalization performance of all the tasks. In this paper, we
give a survey of MTL. First, we classify different MTL algorithms into several
categories, including feature learning approach, low-rank approach, task
clustering approach, task relation learning approach, and decomposition
approach, and then discuss the characteristics of each approach. In order to
improve the performance of learning tasks further, MTL can be combined with
other learning paradigms including semi-supervised learning, active learning,
unsupervised learning, reinforcement learning, multi-view learning and
graphical models. When the number of tasks is large or the data dimensionality
is high, batch MTL models have difficulty handling this situation, and online,
parallel and distributed MTL models as well as dimensionality reduction and
feature hashing are reviewed to reveal their computational and storage
advantages. Many real-world applications use MTL to boost their performance and
we review representative works. Finally, we present theoretical analyses and
discuss several future directions for MTL.
Evolutionary Self-Expressive Models for Subspace Clustering
The problem of organizing data that evolves over time into clusters is
encountered in a number of practical settings. We introduce evolutionary
subspace clustering, a method whose objective is to cluster a collection of
evolving data points that lie on a union of low-dimensional evolving subspaces.
To learn the parsimonious representation of the data points at each time step,
we propose a non-convex optimization framework that exploits the
self-expressiveness property of the evolving data while taking into account
representation from the preceding time step. To find an approximate solution to
the aforementioned non-convex optimization problem, we develop a scheme based
on alternating minimization that both learns the parsimonious representation
and adaptively tunes and infers a smoothing parameter reflective of the
rate of data evolution. The latter addresses a fundamental challenge in
evolutionary clustering -- determining if and to what extent one should
consider previous clustering solutions when analyzing an evolving data
collection. Our experiments on both synthetic and real-world datasets
demonstrate that the proposed framework outperforms state-of-the-art static
subspace clustering algorithms and existing evolutionary clustering schemes in
terms of both accuracy and running time, in a range of scenarios.
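A convex, single-time-step sketch of the self-expressiveness-with-memory idea (ridge penalties stand in for the paper's actual regularizers, which the abstract does not specify):

```python
import numpy as np

def evolving_representation(X, C_prev=None, lam=0.1, alpha=0.5):
    """Self-expressive coefficients pulled toward the previous time step.

    Solves, column by column,
        min_c ||x_i - X c||^2 + lam ||c||^2 + alpha ||c - c_prev||^2,  c_i = 0,
    a convex stand-in for the paper's non-convex objective (sketch only).
    X : (d, n) data at the current step, one point per column.
    alpha plays the role of the smoothing parameter: larger values tie the
    representation more strongly to the preceding time step.
    """
    d, n = X.shape
    if C_prev is None:
        C_prev = np.zeros((n, n))
    C = np.zeros((n, n))
    G = X.T @ X
    for i in range(n):
        mask = np.arange(n) != i              # a point cannot represent itself
        A = G[np.ix_(mask, mask)] + (lam + alpha) * np.eye(n - 1)
        b = G[mask, i] + alpha * C_prev[mask, i]
        C[mask, i] = np.linalg.solve(A, b)
    return C
```

Points lying in the same subspace pick each other with large coefficients, so |C| + |C|^T can serve as the affinity for spectral clustering at each time step.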
Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset
Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we carefully
examine each method in each robust subspace learning/tracking framework,
together with its decomposition, loss function, optimization problem, and
solver. Furthermore, we investigate whether incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv
admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297,
arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805,
arXiv:1403.8067 by other authors, Computer Science Review, November 201
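Most formulations surveyed above share the same two proximal steps; a compact sketch of the most representative one, RPCA solved by an inexact-ALM-style Principal Component Pursuit (parameter choices follow common heuristics, not a specific implementation from the review):

```python
import numpy as np

def rpca_pcp(D, lam=None, n_iter=200, tol=1e-7):
    """Robust PCA via Principal Component Pursuit, inexact-ALM sketch.

    Splits D into a low-rank part L (singular value thresholding) and a
    sparse part S (elementwise soft-thresholding) so that D = L + S, the
    background/foreground split used in the surveyed methods.
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP trade-off weight
    mu = 1.25 / np.linalg.norm(D, 2)          # growing penalty, IALM-style
    rho, mu_max = 1.5, mu * 1e7
    S = np.zeros_like(D); Y = np.zeros_like(D); L = np.zeros_like(D)
    normD = np.linalg.norm(D)
    for _ in range(n_iter):
        # low-rank update: shrink singular values by 1/mu
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft-threshold by lam/mu
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (D - L - S)                 # dual ascent on D = L + S
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(D - L - S) < tol * normD:
            break
    return L, S
```

In the background/foreground setting, the columns of D are vectorized frames, L recovers the static background, and S the moving objects.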