A Coordinate-Descent Algorithm for Tracking Solutions in Time-Varying Optimal Power Flows
Consider a polynomial optimisation problem whose instances vary continuously
over time. We propose to use a coordinate-descent algorithm for solving such
time-varying optimisation problems. In particular, we focus on relaxations of
transmission-constrained problems in power systems.
Using the alternating-current optimal power flow (ACOPF) problem as an example,
we bound from above the difference between the approximate optimal cost
generated by our algorithm and the optimal cost of a relaxation using the most
recent data, in terms of properties of the instance and the rate at which the
instance changes over time. We also bound the number of floating-point
operations that need to be performed between two updates in order to guarantee
that the error stays below a given constant.
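As a rough illustration of the tracking scheme, here is a minimal sketch that applies coordinate descent with a fixed per-update work budget to a time-varying unconstrained quadratic, minimize 0.5 x^T A(t) x - b(t)^T x. The quadratic stand-in, the `budget` parameter, and the drift model are assumptions for illustration; the paper's actual setting is a relaxation of the transmission-constrained ACOPF problem.

```python
import numpy as np

# Sketch: track the minimizer of 0.5 * x^T A(t) x - b(t)^T x as (A, b)
# drift over time, spending a fixed budget of coordinate sweeps between
# consecutive data updates (mirroring the paper's bound on the floating-point
# work needed to keep the tracking error below a constant).

def coordinate_descent_step(A, b, x, i):
    """Exactly minimize the quadratic over coordinate i, others fixed."""
    # Stationarity in coordinate i: A[i,i] * x[i] = b[i] - sum_{j!=i} A[i,j] * x[j]
    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

def track(data_stream, budget=5):
    """Interleave data updates with a fixed budget of coordinate sweeps."""
    x = None
    for A, b in data_stream:             # most recent problem data
        if x is None:
            x = np.zeros(len(b))
        for _ in range(budget):          # bounded work between two updates
            for i in range(len(b)):      # one full coordinate sweep
                x = coordinate_descent_step(A, b, x, i)
        yield x.copy()

# A slowly drifting, positive-definite instance (illustrative drift model).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A0 = M @ M.T + 4 * np.eye(4)
data = [(A0 + 0.01 * t * np.eye(4), (1 + 0.01 * t) * np.ones(4)) for t in range(20)]
for (A, b), x in zip(data, track(data)):
    print(f"approximate optimal cost: {0.5 * x @ A @ x - b @ x:.6f}")
```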
Primal-Dual Rates and Certificates
We propose an algorithm-independent framework to equip existing optimization
methods with primal-dual certificates. Such certificates, together with the
corresponding convergence-rate guarantees, are important for practitioners to
diagnose progress, particularly in machine learning applications. We obtain new primal-dual
convergence rates, e.g., for the Lasso as well as many L1, Elastic Net, group
Lasso and TV-regularized problems. The theory applies to any norm-regularized
generalized linear model. Our approach provides efficiently computable duality
gaps which are globally defined, without modifying the original problems in the
region of interest.
Comment: appearing at ICML 2016 - Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 4
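For the Lasso instance of this framework, the duality gap can be computed by rescaling the primal residual into a dual-feasible point. The following is a minimal sketch of that standard construction, with a plain ISTA inner solver added so the gap visibly shrinks; the helper names and problem sizes are illustrative assumptions, not the paper's general norm-regularized machinery.

```python
import numpy as np

# Computable duality-gap certificate for the Lasso:
#   P(w) = 0.5 * ||X w - y||^2 + lam * ||w||_1.
# A dual-feasible point is obtained by rescaling the residual so that
# ||X^T theta||_inf <= lam, and the gap P(w) - D(theta) >= 0 certifies
# how suboptimal the current iterate w is.

def lasso_duality_gap(X, y, w, lam):
    residual = y - X @ w
    primal = 0.5 * residual @ residual + lam * np.abs(w).sum()
    scale = min(1.0, lam / np.abs(X.T @ residual).max())
    theta = scale * residual             # dual-feasible by construction
    dual = 0.5 * y @ y - 0.5 * (y - theta) @ (y - theta)
    return primal - dual

# Example: run ISTA and report the certificate it produces.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
lam = 0.1 * np.abs(X.T @ y).max()
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
w = np.zeros(20)
for _ in range(200):
    z = w - step * (X.T @ (X @ w - y))   # gradient step on the smooth part
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
print(f"duality gap after 200 ISTA steps: {lasso_duality_gap(X, y, w, lam):.2e}")
```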
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient
distributed optimization methods for machine learning. We present a
general-purpose framework for distributed computing environments, CoCoA, that
has an efficient communication scheme and is applicable to a wide variety of
problems in machine learning and signal processing. We extend the framework to
cover general non-strongly-convex regularizers, including L1-regularized
problems like lasso, sparse logistic regression, and elastic net
regularization, and show how earlier work can be derived as a special case. We
provide convergence guarantees for the class of convex regularized loss
minimization objectives, leveraging a novel approach in handling
non-strongly-convex regularizers and non-smooth loss functions. The resulting
framework has markedly improved performance over state-of-the-art methods, as
we illustrate with an extensive set of experiments on real distributed
datasets.
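To make the communication scheme concrete, here is a single-process sketch of a CoCoA-style outer loop for ridge regression: the feature columns are partitioned across K hypothetical workers, each runs a few local gradient steps on its own subproblem, and only the resulting update vectors are exchanged and aggregated. The ridge objective, the gradient-step local solver, and the conservative averaging are assumptions for illustration; the framework itself is solver-agnostic.

```python
import numpy as np

# Simulated CoCoA round: each "worker" owns a column block X_k and its
# weights w_k, approximately solves a local subproblem given the shared
# vector v = X w, and communicates only its update to v.

def cocoa_round(X_blocks, w_blocks, v, y, lam, local_steps=10, lr=0.5):
    n, K, deltas = len(y), len(X_blocks), []
    for Xk, wk in zip(X_blocks, w_blocks):
        dwk = np.zeros_like(wk)
        for _ in range(local_steps):     # approximate local solve
            grad = Xk.T @ (v + Xk @ dwk - y) / n + lam * (wk + dwk)
            dwk -= lr * grad
        wk += dwk / K                    # conservative averaging of updates
        deltas.append(Xk @ dwk / K)      # the only vector sent over the wire
    return v + sum(deltas)               # aggregated shared state

rng = np.random.default_rng(0)
X, y, lam, K = rng.standard_normal((100, 40)), rng.standard_normal(100), 0.1, 4
X_blocks = np.split(X, K, axis=1)        # partition features across workers
w_blocks = [np.zeros(Xk.shape[1]) for Xk in X_blocks]
v = np.zeros(len(y))
for _ in range(50):
    v = cocoa_round(X_blocks, w_blocks, v, y, lam)
w = np.concatenate(w_blocks)
print(f"objective: {0.5 * np.sum((X @ w - y) ** 2) / len(y) + 0.5 * lam * w @ w:.4f}")
```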