Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers
Tensor factorization has proven useful in a wide range of applications, from
sensor array processing to communications, speech and audio signal processing,
and machine learning. With few recent exceptions, all tensor factorization
algorithms were originally developed for centralized, in-memory computation on
a single machine; and the few that break away from this mold do not easily
incorporate practically important constraints, such as nonnegativity. A new
constrained tensor factorization framework is proposed in this paper, building
upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that
this simplifies computations, bypassing the need to solve constrained
optimization problems in each iteration; and it naturally leads to distributed
algorithms suitable for parallel implementation on regular high-performance
computing (e.g., mesh) architectures. This opens the door for many emerging big
data-enabled applications. The methodology is exemplified using nonnegativity
as a baseline constraint, but the proposed framework can more-or-less readily
incorporate many other types of constraints. Numerical experiments are very
encouraging, indicating that the ADMoM-based nonnegative tensor factorization
(NTF) has high potential as an alternative to state-of-the-art approaches.
Comment: Submitted to the IEEE Transactions on Signal Processing
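To make the per-iteration simplification concrete, here is a minimal sketch of ADMM applied to one nonnegativity-constrained least-squares subproblem of the kind that arises when updating a single factor matrix. This is an illustrative toy, not the paper's distributed algorithm; the function name `admm_nnls` and the parameters `rho` and `n_iter` are my own choices.

```python
import numpy as np

def admm_nnls(W, A, rho=1.0, n_iter=100):
    """Solve min_H ||A - W H||_F^2  s.t.  H >= 0 via ADMM splitting H = Z.

    The constrained problem is split so each iteration needs only an
    unconstrained linear solve plus an elementwise projection, which is
    the kind of simplification the ADMoM framework exploits.
    """
    k = W.shape[1]
    G = W.T @ W + rho * np.eye(k)      # system matrix, factorizable once
    F = W.T @ A
    H = np.zeros((k, A.shape[1]))      # constrained (nonnegative) copy
    U = np.zeros_like(H)               # scaled dual variable
    for _ in range(n_iter):
        Z = np.linalg.solve(G, F + rho * (H - U))  # unconstrained LS step
        H = np.maximum(0.0, Z + U)                 # projection onto H >= 0
        U = U + Z - H                              # dual update
    return H
```

Note that the expensive part, the linear solve, involves no inequality constraints at all; the nonnegativity enters only through the cheap elementwise projection.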
On Convergence of Epanechnikov Mean Shift
Epanechnikov Mean Shift is a simple yet empirically very effective algorithm
for clustering. It localizes the centroids of data clusters via estimating
modes of the probability distribution that generates the data points, using the
`optimal' Epanechnikov kernel density estimator. However, since the procedure
involves non-smooth kernel density functions, the convergence behavior of
Epanechnikov Mean Shift lacks theoretical support as of this writing: most of
the existing analyses assume smooth functions and thus cannot be applied
to Epanechnikov Mean Shift. In this work, we first show that the original
Epanechnikov Mean Shift may indeed terminate at a non-critical point, due to
its non-smooth nature. Based on our analysis, we propose a simple remedy to
fix it. The modified Epanechnikov Mean Shift is guaranteed to terminate at a
local maximum of the estimated density, which corresponds to a cluster
centroid, within a finite number of iterations. We also propose a way to avoid
running the Mean Shift iterates from every data point, while maintaining good
clustering accuracies under non-overlapping spherical Gaussian mixture models.
This further pushes Epanechnikov Mean Shift to handle very large and
high-dimensional data sets. Experiments show surprisingly good performance
compared to Lloyd's K-means algorithm and the EM algorithm.
Comment: AAAI 201
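For intuition about the procedure the abstract analyzes: with the Epanechnikov kernel, the mean-shift update moves each centroid to the mean of the data points inside its bandwidth window. The sketch below is the basic (unmodified) iteration, not the paper's corrected variant; the function name and the `h`, `max_iter`, `tol` parameters are my own.

```python
import numpy as np

def epanechnikov_mean_shift(X, h, max_iter=100, tol=1e-6):
    """Basic mean shift with the Epanechnikov kernel.

    Because the Epanechnikov kernel's shadow is the flat kernel, each
    update replaces a centroid by the mean of all points within
    distance h of it.
    """
    centroids = X.copy()
    for _ in range(max_iter):
        new = np.empty_like(centroids)
        for i, c in enumerate(centroids):
            mask = np.linalg.norm(X - c, axis=1) <= h
            # Guard against an empty window (possible after a large step).
            new[i] = X[mask].mean(axis=0) if mask.any() else c
        if np.linalg.norm(new - centroids) < tol:
            centroids = new
            break
        centroids = new
    return centroids
```

Starting one iterate from every data point, as above, is exactly the cost the abstract's seeding strategy is designed to avoid on large data sets.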