Online Unsupervised Multi-view Feature Selection
In the era of big data, it is becoming common for data to have multiple
modalities or to come from multiple sources, known as "multi-view data".
Because multi-view data are usually unlabeled and come from high-dimensional
spaces (such as language vocabularies), unsupervised multi-view feature
selection is crucial to many applications. However, it is nontrivial due to
the following challenges. First, there may be too many instances, or the
feature dimensionality may be too large; the data may then not fit in memory.
How to select useful features with limited memory space? Second, how to select
features from streaming data and handle concept drift? Third, how to leverage
the consistent and complementary information from different views to improve
feature selection when the data are too big or come in as streams? To the best
of our knowledge, none of the previous works solves all these challenges
simultaneously. In this paper, we propose Online unsupervised Multi-View
Feature Selection (OMVFS), which deals with large-scale/streaming multi-view
data in an online fashion. OMVFS embeds unsupervised feature selection into a
clustering algorithm via nonnegative matrix factorization (NMF) with sparse
learning. It further incorporates graph regularization to preserve the local
structure information and help select discriminative features. Instead of
storing all the
historical data, OMVFS processes the multi-view data chunk by chunk and
aggregates all the necessary information into several small matrices. By using
a buffering technique, the proposed OMVFS reduces computational and storage
costs while still exploiting the structure information. Furthermore, OMVFS can
capture concept drift in the data streams. Extensive experiments on four
real-world datasets show the effectiveness and efficiency of the proposed
OMVFS method. More importantly, OMVFS is about 100 times faster than the
off-line methods.
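To make the chunk-by-chunk idea concrete, here is a minimal single-view sketch in Python/NumPy, under our own simplifying assumptions: NMF is updated online from two small aggregate matrices rather than from the stored history, and features are ranked by the row norms of the learned factor. The function name, the chunk format, and the scoring rule are illustrative; OMVFS itself handles multiple views and adds sparse learning, graph regularization, and buffering, all of which this sketch omits.

    import numpy as np

    def online_nmf_feature_scores(chunks, k, inner_iters=30, seed=0):
        # Single-view sketch of online NMF-based feature scoring.
        # chunks: iterable of nonnegative (d x n_t) arrays; k: number of clusters.
        rng = np.random.default_rng(seed)
        d = chunks[0].shape[0]
        W = np.abs(rng.standard_normal((d, k)))   # feature-cluster factor
        A = np.zeros((k, k))                      # aggregated H H^T
        B = np.zeros((d, k))                      # aggregated X H^T
        for X in chunks:
            # coefficients for this chunk only, with W held fixed
            H = np.abs(rng.standard_normal((k, X.shape[1])))
            for _ in range(inner_iters):
                H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
            # fold the chunk into small sufficient statistics, then discard it
            A += H @ H.T
            B += X @ H.T
            # update W from the aggregates alone (no historical data needed)
            for _ in range(inner_iters):
                W *= B / (W @ A + 1e-12)
        return np.linalg.norm(W, axis=1)          # per-feature importance

    # usage: rank 100 features over a stream of five nonnegative chunks
    chunks = [np.abs(np.random.randn(100, 50)) for _ in range(5)]
    scores = online_nmf_feature_scores(chunks, k=10)
    top_features = np.argsort(scores)[::-1][:20]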
Provable Deterministic Leverage Score Sampling
We explain theoretically a curious empirical phenomenon: "Approximating a
matrix by deterministically selecting a subset of its columns with the
corresponding largest leverage scores results in a good low-rank matrix
surrogate". To obtain provable guarantees, previous work requires randomized
sampling of the columns with probabilities proportional to their leverage
scores.
In this work, we provide a novel theoretical analysis of deterministic
leverage score sampling. We show that such deterministic sampling can be
provably as accurate as its randomized counterparts, if the leverage scores
follow a moderately steep power-law decay. We support this power-law assumption
by providing empirical evidence that such decay laws are abundant in real-world
data sets. We then demonstrate empirically the performance of deterministic
leverage score sampling, which often matches or outperforms state-of-the-art
techniques.
Comment: 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
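For concreteness, here is a minimal NumPy sketch of the deterministic rule analyzed in the paper: compute rank-$k$ leverage scores from the top right singular vectors, then keep the $c$ highest-scoring columns. The function name and the CX-style reconstruction used to gauge quality are our illustrative choices.

    import numpy as np

    def deterministic_leverage_sampling(A, k, c):
        # Rank-k leverage score of column j: squared norm of the j-th column
        # of Vt[:k] (equivalently, the j-th row of the top-k right singular vectors).
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        scores = np.sum(Vt[:k] ** 2, axis=0)
        keep = np.argsort(scores)[::-1][:c]     # deterministic: take the largest
        C = A[:, keep]
        A_hat = C @ np.linalg.pinv(C) @ A       # project A onto span(C)
        return keep, A_hat

    # usage: approximate a random rank-20 matrix from its 15 top-scoring columns
    A = np.random.randn(200, 20) @ np.random.randn(20, 300)
    keep, A_hat = deterministic_leverage_sampling(A, k=10, c=15)
    print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))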
Block CUR: Decomposing Matrices using Groups of Columns
A common problem in large-scale data analysis is to approximate a matrix
using a combination of specifically sampled rows and columns, known as CUR
decomposition. Unfortunately, in many real-world environments, the ability to
sample specific individual rows or columns of the matrix is limited by either
system constraints or cost. In this paper, we consider matrix approximation by
sampling predefined \emph{blocks} of columns (or rows) from the matrix. We
present an algorithm for sampling useful column blocks and provide novel
guarantees for the quality of the approximation. This algorithm has
applications in problems as diverse as biometric data analysis and distributed
computing. We
demonstrate the effectiveness of the proposed algorithms for computing the
Block CUR decomposition of large matrices in a distributed setting with
multiple nodes in a compute cluster, where such blocks correspond to columns
(or rows) of the matrix stored on the same node, which can be retrieved with
much less overhead than retrieving individual columns stored across different
nodes. In the biometric setting, the rows correspond to different users and
the columns correspond to users' biometric reactions to external stimuli, {\em
e.g.,}~watching video content, at a particular time instant. There is
significant cost in acquiring each user's reaction to lengthy content, so we
sample a few important scenes to approximate the biometric response. An
individual time sample in this use case cannot be queried in isolation due to
the lack of context that caused that biometric reaction. Instead, collections
of time segments ({\em i.e.,} blocks) must be presented to the user. The
practical application of these algorithms is shown via experimental results
using real-world user biometric data from a content testing environment.
Comment: shorter version to appear in ECML-PKDD 201
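As an illustration of the block-level interface, the NumPy sketch below scores predefined column blocks by their aggregate rank-$k$ leverage scores, keeps the $b$ highest-scoring blocks whole, and projects the matrix onto their span. The block scoring and the reconstruction shown are simplified stand-ins of ours, not necessarily the paper's exact sampling scheme.

    import numpy as np

    def block_column_selection(A, blocks, k, b):
        # blocks: list of integer index arrays, each a predefined column block
        # (e.g., columns stored on one node, or one contiguous time segment).
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        col_scores = np.sum(Vt[:k] ** 2, axis=0)        # per-column leverage
        block_scores = np.array([col_scores[idx].sum() for idx in blocks])
        chosen = np.argsort(block_scores)[::-1][:b]     # top-b blocks, kept whole
        keep = np.concatenate([blocks[i] for i in chosen])
        C = A[:, keep]
        return keep, C @ np.linalg.pinv(C) @ A          # project A onto span(C)

    # usage: 30 blocks of 10 consecutive columns each; retrieve only 5 blocks
    A = np.random.randn(150, 20) @ np.random.randn(20, 300)
    blocks = [np.arange(i, i + 10) for i in range(0, 300, 10)]
    keep, A_hat = block_column_selection(A, blocks, k=10, b=5)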
10,000+ Times Accelerated Robust Subset Selection (ARSS)
Subset selection from massive data with noisy information is increasingly
popular in various applications. This problem remains highly challenging, as
current methods are generally slow and sensitive to outliers. To
address the above two issues, we propose an accelerated robust subset selection
(ARSS) method. Specifically, in the subset selection area, this is the first
attempt to employ an $\ell_{p}$-norm ($0<p\leq1$) based measure for the
representation loss, preventing large errors from dominating our objective. As
a result, the robustness against outlier elements is greatly enhanced. In
practice, the data size is generally much larger than the feature length,
i.e., $N\gg L$, where $N$ is the number of instances and $L$ the feature
length. Based on this observation, we propose a speedup solver (via ALM and
equivalent derivations) to greatly reduce the computational cost, theoretically
from $O(N^{4})$ to $O(N^{2}L)$. Extensive experiments on ten benchmark
datasets verify that our method not only outperforms state-of-the-art methods,
but also runs 10,000+ times faster than the most related method.
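To see why an $\ell_{p}$ ($0<p\leq1$) loss resists outliers, compare it with the squared loss on per-sample residual norms: since $t^{p}$ grows sublinearly, one huge residual cannot dominate the objective. The snippet below (with $p=0.5$ and the per-sample grouping as our own choices) illustrates only this robustness mechanism; it is not the ARSS solver, which relies on ALM-based accelerated optimization.

    import numpy as np

    def lp_loss(residuals, p=0.5, eps=1e-8):
        # l_p-based loss on per-sample residual norms, 0 < p <= 1;
        # eps keeps the sublinear power well-defined at zero residual.
        return np.sum((np.linalg.norm(residuals, axis=1) + eps) ** p)

    # one gross outlier among 100 otherwise perfect samples
    r = np.zeros((100, 5))
    r[0, 0] = 1000.0
    print(np.sum(np.linalg.norm(r, axis=1) ** 2))  # squared loss: 1e6, all from the outlier
    print(lp_loss(r, p=0.5))                       # l_p loss: about 31.6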