Accelerated Sparse Subspace Clustering
State-of-the-art algorithms for sparse subspace clustering perform spectral
clustering on a similarity matrix typically obtained by representing each data
point as a sparse combination of other points using either basis pursuit (BP)
or orthogonal matching pursuit (OMP). BP-based methods are often prohibitively
expensive in practice, while the performance of OMP-based schemes is
unsatisfactory, especially in settings where data points are highly similar. In this paper, we
propose a novel algorithm that exploits an accelerated variant of orthogonal
least-squares to efficiently find the underlying subspaces. We show that under
certain conditions the proposed algorithm returns a subspace-preserving
solution. Simulation results illustrate that the proposed method compares
favorably with BP-based methods in terms of running time while being
significantly more accurate than OMP-based schemes.
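The self-expressive pipeline described above can be sketched as follows. This is a hypothetical minimal version using plain OMP rather than the accelerated orthogonal least-squares variant the paper proposes; the toy data (two orthogonal 1-D subspaces) and all parameter choices are illustrative assumptions.

```python
import numpy as np

def omp_self_expression(X, k):
    """Represent each column of X as a k-sparse combination of the OTHER
    columns via orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then refit by least squares."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        y = X[:, i]
        support, residual, coef = [], y.copy(), np.zeros(0)
        for _ in range(k):
            if np.linalg.norm(residual) < 1e-10:   # already fit exactly
                break
            corr = np.abs(X.T @ residual)
            corr[i] = -np.inf                      # exclude the point itself
            corr[support] = -np.inf                # and atoms already chosen
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
            residual = y - X[:, support] @ coef
        C[support, i] = coef
    return C

# toy data: 5 points on each of two orthogonal 1-D subspaces in R^3
rng = np.random.default_rng(0)
U1 = np.array([[1.0], [0.0], [0.0]])
U2 = np.array([[0.0], [1.0], [0.0]])
X = np.hstack([U1 @ rng.standard_normal((1, 5)),
               U2 @ rng.standard_normal((1, 5))])

C = omp_self_expression(X, k=2)
W = np.abs(C) + np.abs(C).T   # symmetric affinity fed to spectral clustering
```

On this toy example the representation is subspace-preserving: the cross-subspace blocks of `C` are zero, so spectral clustering on `W` trivially recovers the two groups.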
Accelerated Sampling of Bandlimited Graph Signals
We study the problem of sampling and reconstructing bandlimited graph signals
where the objective is to select a subset of nodes of pre-specified cardinality
that ensures interpolation of the original signal with the lowest possible
reconstruction error. First, we consider a non-Bayesian scenario and propose an
efficient iterative sampling procedure that in the noiseless case enables exact
recovery of the original signal from the set of selected nodes. In the case of
noisy measurements, a bound on the reconstruction error of the proposed
algorithm is established. Then, we consider the Bayesian scenario where we
formulate the sampling task as the problem of maximizing a monotone weak
submodular function, and propose a randomized-greedy algorithm to find a
sub-optimal subset. We derive a worst-case performance guarantee on the
mean-square error achieved by the randomized-greedy algorithm for general
non-stationary graph signals. The efficacy of the proposed methods is
illustrated through extensive numerical simulations on synthetic and real-world
graphs.
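As a rough illustration of the sampling-and-interpolation setup (not the paper's iterative or randomized-greedy procedures), the sketch below greedily selects sample nodes using a conditioning proxy, the smallest singular value of the sampled eigenvector submatrix, and then interpolates a noiseless bandlimited signal by least squares. The random graph, bandwidth `k`, and scoring rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 3

# random undirected graph and its combinatorial Laplacian
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A

# bandlimited basis: eigenvectors of the k smallest Laplacian eigenvalues
_, V = np.linalg.eigh(L)
Vk = V[:, :k]

x = Vk @ rng.standard_normal(k)        # ground-truth k-bandlimited signal

# greedy selection of k sample nodes (proxy objective: keep the sampled
# submatrix of Vk as well-conditioned as possible)
selected = []
for _ in range(k):
    best, best_node = -1.0, None
    for v in range(n):
        if v in selected:
            continue
        S = Vk[selected + [v], :]
        score = np.linalg.svd(S, compute_uv=False)[-1]
        if score > best:
            best, best_node = score, v
    selected.append(best_node)

# noiseless interpolation: recover the expansion coefficients from samples
c_hat, *_ = np.linalg.lstsq(Vk[selected, :], x[selected], rcond=None)
x_hat = Vk @ c_hat                     # exact recovery in the noiseless case
```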
Evolutionary Self-Expressive Models for Subspace Clustering
The problem of organizing data that evolves over time into clusters is
encountered in a number of practical settings. We introduce evolutionary
subspace clustering, a method whose objective is to cluster a collection of
evolving data points that lie on a union of low-dimensional evolving subspaces.
To learn the parsimonious representation of the data points at each time step,
we propose a non-convex optimization framework that exploits the
self-expressiveness property of the evolving data while taking into account
representation from the preceding time step. To find an approximate solution to
the aforementioned non-convex optimization problem, we develop a scheme based
on alternating minimization that both learns the parsimonious representation as
well as adaptively tunes and infers a smoothing parameter reflective of the
rate of data evolution. The latter addresses a fundamental challenge in
evolutionary clustering -- determining if and to what extent one should
consider previous clustering solutions when analyzing an evolving data
collection. Our experiments on both synthetic and real-world datasets
demonstrate that the proposed framework outperforms state-of-the-art static
subspace clustering algorithms and existing evolutionary clustering schemes in
terms of both accuracy and running time, in a range of scenarios.
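A simplified sketch of the alternating-minimization idea, using stand-in objectives rather than the paper's exact formulation: a ridge-regularized self-expressive update pulled toward the previous representation, alternated with a heuristic refit of the smoothing weight (here a cosine similarity between consecutive representations; the paper infers this parameter within its optimization framework).

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 8, 3
X_prev = rng.standard_normal((d, n))
X = X_prev + 0.05 * rng.standard_normal((d, n))   # slowly evolving data

def self_expressive_step(X, C_prev, alpha, lam=0.1):
    # argmin_C ||X - X C||_F^2 + lam ||C||_F^2 + alpha ||C - C_prev||_F^2
    # has the closed-form solution below
    G = X.T @ X
    m = G.shape[0]
    C = np.linalg.solve(G + (lam + alpha) * np.eye(m), G + alpha * C_prev)
    np.fill_diagonal(C, 0.0)          # forbid trivial self-representation
    return C

def refit_alpha(C, C_prev):
    # heuristic stand-in for the learned smoothing parameter: how similar
    # is the new representation to the previous one?
    num = np.sum(C * C_prev)
    den = np.linalg.norm(C) * np.linalg.norm(C_prev) + 1e-12
    return float(np.clip(num / den, 0.0, 1.0))

C_prev = self_expressive_step(X_prev, np.zeros((n, n)), alpha=0.0)
alpha = 0.5
for _ in range(3):                     # alternating minimization
    C = self_expressive_step(X, C_prev, alpha)
    alpha = refit_alpha(C, C_prev)
```

When the data barely changes between time steps, the refit drives `alpha` toward 1 and history is kept; abrupt changes shrink it, which is the trade-off the abstract describes.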
LSTM-Assisted Evolutionary Self-Expressive Subspace Clustering
Contemporary information processing systems continuously collect massive
volumes of high-dimensional data that evolve over time, raising the problem of
organizing this data into clusters, i.e. reducing its dimensionality, while
simultaneously learning its temporal evolution patterns. In this paper, a
framework for evolutionary subspace clustering, referred to as LSTM-ESCM, is
introduced; it aims to cluster a set of evolving high-dimensional data points
that lie in a union of low-dimensional evolving subspaces. To obtain a
parsimonious data representation at each time step, we exploit the so-called
self-expressive property of the data at each time point, while LSTM networks
extract the temporal patterns inherited across the overall time frame. An
efficient algorithm is proposed and implemented in MATLAB. Experiments on
real-world datasets demonstrate the effectiveness of the proposed approach:
the results show that the suggested algorithm dramatically outperforms other
known similar approaches in terms of both running time and accuracy.
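To make the LSTM-assisted idea concrete, here is a minimal single-cell LSTM forward pass in plain NumPy with untrained random weights (the paper trains the network, and its implementation is in MATLAB): the cell consumes the flattened self-expressive representation at time t and emits a prediction that could warm-start the solve at t+1. The cell size, weight initialization, and wiring are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    """Textbook LSTM cell: input, forget, and output gates plus a tanh
    candidate, each computed from the concatenated [input, hidden] vector."""
    def __init__(self, dim, rng):
        s = 0.1
        self.W = {g: s * rng.standard_normal((dim, 2 * dim)) for g in "ifoc"}
        self.b = {g: np.zeros(dim) for g in "ifoc"}

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        i = sigmoid(self.W["i"] @ z + self.b["i"])   # input gate
        f = sigmoid(self.W["f"] @ z + self.b["f"])   # forget gate
        o = sigmoid(self.W["o"] @ z + self.b["o"])   # output gate
        g = np.tanh(self.W["c"] @ z + self.b["c"])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

rng = np.random.default_rng(3)
n = 5                                  # number of data points
dim = n * n                            # flattened n-by-n representation
cell = LSTMCell(dim, rng)

h, c = np.zeros(dim), np.zeros(dim)
C_t = rng.standard_normal((n, n))      # stand-in representation at time t
h, c = cell.step(C_t.ravel(), h, c)
C_pred = h.reshape(n, n)               # prediction used to seed time t+1
```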