Fast Multiplier Methods to Optimize Non-exhaustive, Overlapping Clustering
Clustering is one of the most fundamental and important tasks in data mining.
Traditional clustering algorithms, such as K-means, assign every data point to
exactly one cluster. However, in real-world datasets, the clusters may overlap
with each other. Furthermore, there are often outliers that should not belong
to any cluster. We recently proposed the NEO-K-Means (Non-Exhaustive,
Overlapping K-Means) objective as a way to address both issues in an integrated
fashion. Optimizing this discrete objective is NP-hard, and even though there
is a convex relaxation of the objective, straightforward convex optimization
approaches are too expensive for large datasets. A practical alternative is to
use a low-rank factorization of the solution matrix in the convex formulation.
The resulting optimization problem is non-convex, and we can locally optimize
the objective function using an augmented Lagrangian method. In this paper, we
consider two fast multiplier methods to accelerate the convergence of an
augmented Lagrangian scheme: a proximal method of multipliers and an
alternating direction method of multipliers (ADMM). For the proximal augmented
Lagrangian or proximal method of multipliers, we show a convergence result for
the non-convex case with bound-constrained subproblems. These methods are up to
13 times faster than a standard augmented Lagrangian method, with no change in
quality, on problems with over 10,000 variables, bringing runtimes down from
over an hour to around 5 minutes.
Comment: 9 pages, 2 figures
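As a rough sketch of the machinery this abstract describes, here is a minimal proximal method of multipliers in Python for a generic problem min f(x) subject to c(x) = 0 with box constraints. It assumes equality constraints and an off-the-shelf bound-constrained inner solver (L-BFGS-B); all names and parameter values are illustrative, not taken from the authors' NEO-K-Means implementation.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_method_of_multipliers(f, c, x0, bounds, sigma=10.0, tau=1.0,
                                   n_outer=50, tol=1e-6):
    """Locally solve min f(x) s.t. c(x) = 0, l <= x <= u (possibly non-convex).

    Each outer iteration minimizes the augmented Lagrangian plus a proximal
    term (tau/2)*||x - x_k||^2 over the box, then updates the multipliers.
    """
    x = x0.copy()
    lam = np.zeros(len(c(x0)))
    for _ in range(n_outer):
        xk = x.copy()

        def subproblem(z):
            cz = c(z)
            return (f(z) - lam @ cz + 0.5 * sigma * (cz @ cz)
                    + 0.5 * tau * np.sum((z - xk) ** 2))

        # Bound-constrained subproblem, solved (inexactly) with L-BFGS-B.
        x = minimize(subproblem, x, method="L-BFGS-B", bounds=bounds).x
        lam = lam - sigma * c(x)  # first-order multiplier update
        if np.linalg.norm(c(x)) < tol:
            break
    return x, lam

# Toy usage: min x1^2 + x2^2 subject to x1 + x2 = 1, 0 <= x <= 1.
f = lambda x: x @ x
c = lambda x: np.array([x.sum() - 1.0])
x, lam = proximal_method_of_multipliers(f, c, np.zeros(2), [(0, 1), (0, 1)])
print(x)  # approaches [0.5, 0.5]
```

Setting tau = 0 recovers a plain augmented Lagrangian iteration; the proximal term is what distinguishes the proximal method of multipliers the abstract analyzes.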
Real-Time Reinforcement Learning of Constrained Markov Decision Processes with Weak Derivatives
We present on-line policy gradient algorithms for computing the locally
optimal policy of a constrained, average-cost, finite-state Markov Decision
Process. The stochastic approximation algorithms require estimation of the
gradient of the cost function with respect to the parameter that characterizes
the randomized policy. We propose a spherical coordinate parametrization and
present a novel simulation-based gradient estimation scheme involving weak
derivatives (measure-valued differentiation). Such methods have substantially
reduced variance compared to the widely used score function method. Similar to
neuro-dynamic programming algorithms (e.g. Q-learning or Temporal Difference
methods), the algorithms proposed in this paper are simulation based and do not
require explicit knowledge of the underlying parameters such as transition
probabilities. However, unlike neuro-dynamic programming methods, the
algorithms proposed here can handle constraints and time-varying parameters.
Numerical examples are given to illustrate the performance of the algorithms.
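One concrete way to realize the spherical coordinate parametrization mentioned above (the paper's exact construction may differ) is to write the action probabilities of a randomized policy as squared spherical coordinates, so that any unconstrained angle vector maps to a valid point on the probability simplex:

```python
import numpy as np

def spherical_policy(theta):
    """Map angles theta in R^(m-1) to a probability vector p in R^m.

    Squared spherical coordinates give p_i >= 0 and sum(p) = 1 for every
    theta, so gradient steps in theta never leave the simplex.
    """
    m = len(theta) + 1
    p = np.empty(m)
    sin_prod = 1.0
    for i, t in enumerate(theta):
        p[i] = sin_prod * np.cos(t) ** 2
        sin_prod *= np.sin(t) ** 2
    p[m - 1] = sin_prod
    return p

p = spherical_policy(np.array([0.9, 0.4]))
assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
```

Because membership in the simplex is built into the parametrization, the stochastic approximation updates can operate on theta directly, without projection.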
This paper was originally written in 2004. One reason we are putting it on
arXiv now is that the score function gradient estimator continues to be used in
the online reinforcement learning literature even though its variance grows as
O(n^2) given n data points (for a Markov process). In comparison, the weak
derivative estimator has significantly smaller variance of O(1), as reported
in this paper (and elsewhere).
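To illustrate the variance gap in this closing remark, the sketch below compares the score function estimator with a weak derivative estimator for a single exponential sample, estimating d/dtheta E[X] with X ~ Exp(theta). The Exp/Erlang decomposition used here is the standard textbook weak derivative for the exponential distribution, not the paper's constrained-MDP construction.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.0, 100_000  # rate parameter, Monte Carlo sample size
# Target: d/dtheta E[X] for X ~ Exp(theta). Since E[X] = 1/theta,
# the true value is -1/theta^2 = -0.25.

# Score function estimator: X * d/dtheta log p_theta(X) = X * (1/theta - X).
x = rng.exponential(1.0 / theta, n)
score = x * (1.0 / theta - x)

# Weak derivative (measure-valued differentiation) estimator:
# d/dtheta p_theta = (1/theta) * (Exp(theta) - Erlang(2, theta)).
x_plus = rng.exponential(1.0 / theta, n)
x_minus = rng.exponential(1.0 / theta, n) + rng.exponential(1.0 / theta, n)
weak = (x_plus - x_minus) / theta

print("true gradient:", -1.0 / theta**2)
print("score  mean %.4f  var %.4f" % (score.mean(), score.var()))
print("weak   mean %.4f  var %.4f" % (weak.mean(), weak.var()))
# Both means approach -0.25; the score estimator's variance (13/theta^4)
# is several times the weak derivative's (3/theta^4).
```

Both estimators are unbiased in this toy setting; the difference is entirely in the variance, which is the point the authors emphasize.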