Robust and Efficient Subspace Segmentation via Least Squares Regression
This paper studies the subspace segmentation problem, which aims to segment data drawn from a union of multiple linear subspaces. Recent works based on sparse representation, low-rank representation, and their extensions have attracted much attention. If the subspaces from which the data are drawn are independent or orthogonal, these methods are able to obtain a block-diagonal affinity matrix, which usually leads to a correct segmentation. The main differences among them lie in their objective functions. We theoretically show that if the objective function satisfies certain conditions, and the data are sufficiently drawn from independent subspaces, the obtained affinity matrix is always block diagonal. Furthermore, the data sampling can be insufficient if the subspaces are orthogonal. Several existing methods are special cases of this result. We then present the Least Squares Regression (LSR) method for subspace segmentation. It takes advantage of data correlation, which is common in real data. LSR encourages a grouping effect that tends to group highly correlated data together. Experimental results on the Hopkins 155 database and the Extended Yale Database B show that our method significantly outperforms state-of-the-art methods. Beyond segmentation accuracy, all experiments demonstrate that LSR is much more efficient.
Comment: European Conference on Computer Vision, 201
Scaled Simplex Representation for Subspace Clustering
The self-expressive property of data points, i.e., each data point can be
linearly represented by the other data points in the same subspace, has proven
effective in leading subspace clustering methods. Most self-expressive methods construct an affinity matrix from a coefficient matrix obtained by solving an optimization problem. However, the negative entries in the coefficient matrix are forced to be positive when constructing the affinity matrix via exponentiation, absolute symmetrization, or squaring operations, which damages the inherent correlations among the data. Besides, the affine constraint used in these methods is not flexible enough for practical applications. To overcome these problems, in this paper we introduce a scaled simplex representation (SSR) for the subspace clustering problem. Specifically, a non-negativity constraint is used to make the coefficient matrix physically meaningful, and each coefficient vector is constrained to sum to a scalar s < 1 to make it more discriminative. The proposed SSR-based subspace clustering (SSRSC) model is reformulated as a linear equality-constrained problem, which is solved efficiently under the alternating direction method of multipliers (ADMM) framework. Experiments on benchmark datasets demonstrate that the proposed SSRSC algorithm is very efficient and outperforms state-of-the-art subspace clustering methods in accuracy. The code can be found at https://github.com/csjunxu/SSRSC.
Comment: Accepted by IEEE Transactions on Cybernetics. 13 pages, 9 figures, 10 tables. Code can be found at https://github.com/csjunxu/SSRSC
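The key constraint set in SSR is the scaled simplex {z : z >= 0, 1^T z = s}. The paper solves the full model with ADMM; as a self-contained illustration of the constraint set itself, here is the standard sorting-based Euclidean projection onto a scaled simplex (the function name and the default value of s are illustrative, not taken from the paper):

```python
import numpy as np

def project_scaled_simplex(v, s=0.9):
    # Euclidean projection of v onto {z : z >= 0, sum(z) = s},
    # via the classic sorting-based algorithm.
    u = np.sort(v)[::-1]                    # sort descending
    css = np.cumsum(u) - s                  # shifted cumulative sums
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)          # optimal shift
    return np.maximum(v - theta, 0.0)
```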
Correlation Adaptive Subspace Segmentation by Trace Lasso
This paper studies the subspace segmentation problem: given a set of data points drawn from a union of subspaces, the goal is to partition them according to the underlying subspaces they were drawn from. Spectral clustering is used as the framework. It requires finding an affinity matrix that is close to block diagonal, with nonzero entries corresponding to pairs of data points from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that highly correlated data, which usually come from the same subspace, are grouped together. Sparse Subspace Clustering (SSC), by using L1-minimization, encourages sparsity in data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by L2-regularization, exhibit a strong grouping effect, but fall short in subset selection. Thus the affinity matrix obtained by SSC is usually very sparse, while those obtained by LRR and LSR are very dense.
In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method based on trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method that adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
Comment: International Conference on Computer Vision (ICCV), 201
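Trace Lasso is the nuclear norm of the data matrix with its columns scaled by the coefficient vector, ||X Diag(w)||_*; it equals the L1 norm of w when the columns of X are orthonormal and the L2 norm when the columns are identical, which is how CASS adapts between SSC-like and LSR-like behavior. A small numerical check of those two extremes (variable names are illustrative):

```python
import numpy as np

def trace_lasso(X, w):
    # Trace Lasso norm ||X diag(w)||_* (sum of singular values).
    return np.linalg.norm(X @ np.diag(w), ord='nuc')

w = np.array([1.0, -2.0, 3.0])
I = np.eye(3)                                   # orthonormal columns
print(trace_lasso(I, w), np.abs(w).sum())       # both 6.0 (L1 norm)
X = np.tile(np.array([[1.0], [0.0], [0.0]]), 3) # identical unit columns
print(trace_lasso(X, w), np.linalg.norm(w))     # both ~3.742 (L2 norm)
```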
Image Segmentation Using Subspace Representation and Sparse Decomposition
Image foreground extraction is a classical problem in image processing and
vision, with a large range of applications. In this dissertation, we focus on
the extraction of text and graphics in mixed-content images, and design novel
approaches for various aspects of this problem.
We first propose a sparse decomposition framework, which models the background by a subspace of smooth basis vectors and the foreground as a sparse and connected component. We then formulate an optimization framework to solve this problem, adding suitable regularizations to the cost function to promote the desired characteristics of each component. We present two techniques to solve the proposed optimization problem: one based on the alternating direction method of multipliers (ADMM), and the other based on robust regression. Promising results are obtained for screen content image segmentation using the proposed algorithm.
We then propose a robust subspace learning algorithm for the representation
of the background component using training images that could contain both
background and foreground components, as well as noise. With the learnt
subspace for the background, we can further improve the segmentation results,
compared to using a fixed subspace. Lastly, we investigate a different class of signal/image decomposition problems, where only one signal component is active at each signal element. In this case, besides estimating each component, we need to find their supports, which can be specified by a binary mask. We propose a mixed-integer programming formulation that jointly estimates the two components and their supports through an alternating optimization scheme. We show the application of this algorithm to various problems, including image segmentation, video motion segmentation, and separation of text from textured images.
Comment: PhD Dissertation, NYU, 201
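As a rough illustration of the first framework, one can alternate between fitting the smooth background on a fixed subspace by least squares and soft-thresholding the residual to get the sparse foreground. The sketch below is a simplified version under those assumptions; it omits the connectivity regularizer the dissertation adds, and the basis P (e.g. low-frequency DCT vectors) and parameter values are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def decompose(y, P, lam=0.1, n_iter=50):
    # Alternately minimize 0.5*||y - P a - f||^2 + lam*||f||_1:
    # exact least squares in a, exact soft-thresholding in f.
    f = np.zeros_like(y)
    for _ in range(n_iter):
        alpha, *_ = np.linalg.lstsq(P, y - f, rcond=None)
        f = soft_threshold(y - P @ alpha, lam)
    return P @ alpha, f   # smooth background, sparse foreground
```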
Fast Approximate L_infty Minimization: Speeding Up Robust Regression
Minimization of the L_infty norm, which can be viewed as approximately solving the non-convex least median estimation problem, is a powerful method for outlier removal and hence robust regression. However, current techniques for solving the problem at the heart of L_infty norm minimization are slow, and therefore cannot scale to large problems. A new method for the minimization of the L_infty norm is presented here, which provides a speedup of multiple orders of magnitude for data with high dimension. This method, termed Fast L_infty Minimization, allows robust regression to be applied to a class of problems which were previously inaccessible. It is shown how the L_infty norm minimization problem can be broken up into smaller sub-problems, which can then be solved extremely efficiently. Experimental results demonstrate the radical reduction in computation time, along with robustness against large numbers of outliers in a few model-fitting problems.
Comment: 11 pages
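For reference, the baseline problem min_x ||Ax - b||_inf can be written as a linear program by introducing a scalar bound t. This is the standard (slow) formulation that the paper sets out to accelerate, not the paper's own method:

```python
import numpy as np
from scipy.optimize import linprog

def linf_regression(A, b):
    # min_x ||Ax - b||_inf  as the LP:  min t  s.t.  -t <= Ax - b <= t
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                  # variables: (x, t)
    A_ub = np.block([[A, -np.ones((m, 1))],      #  Ax - b <= t
                     [-A, -np.ones((m, 1))]])    #  b - Ax <= t
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]                   # solution and residual norm
```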
Accelerated Sparse Subspace Clustering
State-of-the-art algorithms for sparse subspace clustering perform spectral
clustering on a similarity matrix typically obtained by representing each data
point as a sparse combination of other points using either basis pursuit (BP)
or orthogonal matching pursuit (OMP). BP-based methods are often prohibitive in practice, while the performance of OMP-based schemes is unsatisfactory, especially in settings where data points are highly similar. In this paper, we propose a novel algorithm that exploits an accelerated variant of orthogonal least-squares to efficiently find the underlying subspaces. We show that under certain conditions the proposed algorithm returns a subspace-preserving solution. Simulation results illustrate that the proposed method compares favorably with BP-based methods in terms of running time while being significantly more accurate than OMP-based schemes.
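For context, a baseline OMP-based self-expressive step looks as follows; this is the baseline the paper improves upon with accelerated orthogonal least-squares, not the paper's algorithm, and the sparsity level k is illustrative:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def self_expressive_omp(X, k=5):
    # Represent each column of X (one data point) as a k-sparse
    # combination of the other columns; C has a zero diagonal.
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                        fit_intercept=False)
        omp.fit(X[:, idx], X[:, i])
        C[idx, i] = omp.coef_
    return C

# Affinity for spectral clustering: W = |C| + |C|^T.
```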
Evolutionary Self-Expressive Models for Subspace Clustering
The problem of organizing data that evolves over time into clusters is
encountered in a number of practical settings. We introduce evolutionary
subspace clustering, a method whose objective is to cluster a collection of
evolving data points that lie on a union of low-dimensional evolving subspaces.
To learn the parsimonious representation of the data points at each time step,
we propose a non-convex optimization framework that exploits the
self-expressiveness property of the evolving data while taking into account
representation from the preceding time step. To find an approximate solution to this non-convex optimization problem, we develop a scheme based on alternating minimization that learns the parsimonious representation and adaptively infers a smoothing parameter reflective of the rate of data evolution. The latter addresses a fundamental challenge in evolutionary clustering: determining if, and to what extent, one should consider previous clustering solutions when analyzing an evolving data collection. Our experiments on both synthetic and real-world datasets demonstrate that the proposed framework outperforms state-of-the-art static subspace clustering algorithms and existing evolutionary clustering schemes in terms of both accuracy and running time, in a range of scenarios.
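One simple way to make a self-expressive model aware of the preceding time step is to add a quadratic smoothness term to an LSR-style objective, which keeps the update in closed form. This is a hedged simplification of the idea, not the paper's exact non-convex model (which, among other things, learns the smoothing weight rather than fixing it):

```python
import numpy as np

def evolving_lsr(X_t, Z_prev, lam=0.1, eta=0.5):
    # min_Z ||X_t - X_t Z||_F^2 + lam ||Z||_F^2 + eta ||Z - Z_prev||_F^2
    # Setting the gradient to zero gives the closed-form update below.
    G = X_t.T @ X_t
    n = G.shape[0]
    return np.linalg.solve(G + (lam + eta) * np.eye(n), G + eta * Z_prev)
```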
Fast Subspace Clustering Based on the Kronecker Product
Subspace clustering is a useful technique for many computer vision
applications in which the intrinsic dimension of high-dimensional data is often
smaller than the ambient dimension. Spectral clustering, one of the main approaches to subspace clustering, often adopts a sparse or low-rank representation to learn a block-diagonal self-representation matrix for subspace generation. However, existing methods require solving a large-scale convex optimization problem over the full dataset, whose computational complexity reaches O(N^3) for N data points. Therefore, the efficiency and scalability of traditional spectral clustering methods cannot be guaranteed for large-scale datasets. In this paper, we propose a subspace clustering model based on the Kronecker product. Because the Kronecker product of a block-diagonal matrix with any other matrix is still block diagonal, we can efficiently learn the representation matrix as the Kronecker product of k smaller matrices. By doing so, our model significantly reduces the computational complexity to O(kN^{3/k}). Furthermore, our model is general in nature and can be adapted to different regularization-based subspace clustering methods. Experimental results on two public datasets show that our model significantly improves efficiency compared with several state-of-the-art methods. Moreover, we have conducted experiments on synthetic data to verify the scalability of our model for large-scale datasets.
Comment: 16 pages, 2 figures
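The structural fact the model relies on is easy to check numerically: every zero entry of a block-diagonal matrix A expands into a zero block of kron(A, B), so the product stays block diagonal. A quick check with arbitrarily chosen matrices:

```python
import numpy as np
from scipy.linalg import block_diag

A = block_diag(np.ones((2, 2)), np.ones((3, 3)))  # block diagonal, 5 x 5
B = np.random.rand(2, 2)                          # arbitrary second factor
K = np.kron(A, B)                                 # 10 x 10
# The off-diagonal zero blocks of A expand into zero blocks of K.
print(np.allclose(K[:4, 4:], 0), np.allclose(K[4:, :4], 0))  # True True
```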
Correntropy Induced L2 Graph for Robust Subspace Clustering
In this paper, we study the robust subspace clustering problem, which aims to cluster possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focuses on graph construction via different regularizations of the representation coefficients. We instead focus on the robustness of the model to non-Gaussian noise. We propose a new robust clustering method based on the correntropy induced metric, which is robust to non-Gaussian and impulsive noise. We further extend the method to handle data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the non-convex correntropy objective functions of the proposed models. Extensive experiments on face datasets demonstrate that the proposed methods are more robust to corruptions and occlusions.
Comment: International Conference on Computer Vision (ICCV), 201
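In the multiplicative half-quadratic form, maximizing correntropy reduces to iteratively reweighted least squares, where each residual receives a Gaussian weight that suppresses impulsive errors. A minimal sketch of that inner loop on a generic regression problem (the kernel width sigma and the iteration count are illustrative, and the paper applies this to self-expressive models rather than plain regression):

```python
import numpy as np

def correntropy_irls(A, b, sigma=1.0, n_iter=20):
    # Half-quadratic (multiplicative form) optimization: each step is a
    # weighted least squares with weights w_i = exp(-r_i^2 / (2 sigma^2))
    # that down-weight residuals from non-Gaussian, impulsive noise.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        w = np.exp(-r**2 / (2 * sigma**2))
        Aw = A * w[:, None]                        # row-weighted A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # (A^T W A) x = A^T W b
    return x
```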
Low-Rank Modeling and Its Applications in Image Analysis
Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have brought more and more attention to this topic. In this paper, we review recent advances in low-rank modeling, the state-of-the-art algorithms, and related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude the paper with some discussions.
Comment: To appear in ACM Computing Surveys
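A recurring primitive in the recovery algorithms such surveys cover is singular value thresholding, the proximal operator of the nuclear norm, called once per iteration in methods like robust PCA and matrix completion solvers. A minimal implementation, not tied to any specific method in the survey:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the prox operator of tau * ||.||_*.
    # Shrinks singular values toward zero, producing a low-rank estimate.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```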