Multi-view predictive partitioning in high dimensions
Many modern data mining applications are concerned with the analysis of
datasets in which the observations are described by paired high-dimensional
vectorial representations, or "views". Typical examples arise in web mining
and genomics. In this article we present an algorithm for
data clustering with multiple views, Multi-View Predictive Partitioning (MVPP),
which relies on a novel criterion of predictive similarity between data points.
We assume that, within each cluster, the dependence between multivariate views
can be modelled using a two-block partial least squares (TB-PLS) regression
model, which performs dimensionality reduction and is particularly suitable for
high-dimensional settings. The proposed MVPP algorithm partitions the data such
that the within-cluster predictive ability between views is maximised. The
proposed objective function depends on a measure of predictive influence of
points under the TB-PLS model, derived as an extension of the
PRESS statistic commonly used in ordinary least squares regression. Using
simulated data, we compare the performance of MVPP to that of competing
multi-view clustering methods which rely upon geometric structures of points,
but ignore the predictive relationship between the two views. State-of-the-art
results are obtained on benchmark web mining datasets.
Comment: 31 pages, 12 figures
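
The abstract's core loop, fitting one cross-view regression per cluster and reassigning points by how well each cluster's model predicts their second view, can be sketched as follows. This is a minimal illustration, not the authors' implementation: sklearn's PLSRegression stands in for the TB-PLS model, plain squared prediction error replaces the paper's PRESS-based predictive-influence measure, and the function name and parameters are invented for the example.

```python
# Minimal sketch of predictive partitioning over two views X (n, p) and
# Y (n, q). Assumptions: sklearn's PLSRegression as a stand-in for TB-PLS,
# squared prediction error in place of the PRESS-based influence measure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def predictive_partition(X, Y, k, n_components=1, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))      # random initial partition
    for _ in range(n_iter):
        models = []
        for c in range(k):
            idx = labels == c
            if idx.sum() <= n_components:          # guard degenerate clusters
                idx = rng.random(len(X)) < 1.0 / k
            models.append(
                PLSRegression(n_components=n_components).fit(X[idx], Y[idx])
            )
        # Reassign each point to the cluster whose model predicts its second
        # view best: low prediction error ~ high "predictive similarity".
        errors = np.stack(
            [((Y - m.predict(X)) ** 2).sum(axis=1) for m in models], axis=1
        )
        new_labels = errors.argmin(axis=1)
        if np.array_equal(new_labels, labels):     # converged
            break
        labels = new_labels
    return labels
```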
Variance Reduced Stochastic Gradient Descent with Neighbors
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction
techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points or on keeping per-data-point
corrections in memory. Speed-ups relative to SGD may therefore require a minimum
number of epochs before they materialize. This paper investigates algorithms
that can exploit neighborhood structure in the training data to share and
re-use information about past stochastic gradients across data points, which
offers advantages in the transient optimization phase. As a by-product, we
provide a unified convergence analysis for a family of variance reduction
algorithms, which we call memorization algorithms. We provide experimental
results supporting our theory.
Comment: Appears in Advances in Neural Information Processing Systems 28 (NIPS 2015). 13 pages
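
As background for the memorization idea, here is a minimal sketch of SAGA, one of the per-data-point-correction methods the abstract names, applied to a least-squares objective. The objective, function name, and step size are illustrative assumptions, not from the paper; the paper's neighborhood variant would additionally reuse each fresh gradient for nearby points.

```python
# Minimal SAGA sketch on f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.
# It keeps one stored gradient per data point ("memorization") and uses the
# table's running mean to reduce the variance of each stochastic step.
import numpy as np

def saga_least_squares(A, b, step=0.01, n_epochs=50, seed=0):
    n, d = A.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    grads = (A @ w - b)[:, None] * A       # per-point gradient memory, (n, d)
    grad_avg = grads.mean(axis=0)
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        g_new = (A[i] @ w - b[i]) * A[i]   # fresh stochastic gradient
        # SAGA direction: unbiased, with variance shrinking near the optimum.
        w -= step * (g_new - grads[i] + grad_avg)
        grad_avg += (g_new - grads[i]) / n  # update memory and its mean
        grads[i] = g_new
    return w
```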