Distributed Model Predictive Consensus via the Alternating Direction Method of Multipliers
We propose a distributed optimization method for solving a distributed model
predictive consensus problem. The goal is to design a distributed controller
for a network of dynamical systems to optimize a coupled objective function
while respecting state and input constraints. The distributed optimization
method is an augmented Lagrangian method called the Alternating Direction
Method of Multipliers (ADMM), which was introduced in the 1970s but has seen a
recent resurgence in the context of dramatic increases in computing power and
the development of widely available distributed computing platforms. The method
is applied to position and velocity consensus in a network of double
integrators. We find that a few tens of ADMM iterations yield closed-loop
performance near what is achieved by solving the optimization problem
centrally. Furthermore, the use of recent code generation techniques for
solving local subproblems yields fast overall computation times.
Comment: 7 pages, 5 figures, 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 2012
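As a rough illustration of the consensus mechanism this abstract describes, the sketch below implements generic global-variable consensus ADMM for a network of agents, each holding a local least-squares objective. It is a minimal sketch under assumed problem data, not the paper's MPC formulation; all names, dimensions, and the penalty value are illustrative.

```python
# Minimal global-variable consensus ADMM sketch (illustrative data, not the
# paper's MPC problem). Each of N agents holds a local least-squares objective
# f_i(x) = 0.5*||A_i x - b_i||^2, and all agents must agree on a common x.
import numpy as np

rng = np.random.default_rng(0)
N, m, n = 5, 20, 10                   # agents, local measurements, variable size
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

rho = 1.0                             # ADMM penalty parameter (illustrative)
x = [np.zeros(n) for _ in range(N)]   # local primal variables
u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
z = np.zeros(n)                       # global consensus variable

for k in range(50):                   # "a few tens of iterations"
    # x-update: each agent solves a small regularized least-squares problem
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # z-update: averaging enforces consensus across the network
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    # dual update: accumulate each agent's disagreement with the average
    for i in range(N):
        u[i] += x[i] - z
```

In the MPC setting, each local x-update would itself be a small constrained quadratic program rather than an unconstrained least-squares solve, which is where fast generated solver code for the local subproblems pays off.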
Fast ADMM Algorithm for Distributed Optimization with Adaptive Penalty
We propose new methods to speed up convergence of the Alternating Direction
Method of Multipliers (ADMM), a common optimization tool in the context of
large-scale and distributed learning. The proposed method accelerates convergence by automatically adapting the penalty parameter that enforces parameter consensus at each iteration. We also propose an extension that adaptively determines the maximum number of iterations between penalty updates. We show that this approach effectively leads
to an adaptive, dynamic network topology underlying the distributed
optimization. The utility of the new penalty update schemes is demonstrated on
both synthetic and real data, including a computer vision application of
distributed structure from motion.
Comment: 8 pages manuscript, 2 pages appendix, 5 figures
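The abstract does not spell out the penalty update rule itself, so the sketch below shows a widely used adaptive scheme in the same spirit: residual balancing (Boyd et al., 2011), which adjusts the penalty so that the primal and dual residuals shrink at comparable rates. The thresholds mu and tau are the conventional illustrative values; this is not necessarily the rule proposed in the paper.

```python
# Residual-balancing penalty update (Boyd et al., 2011): an illustrative
# adaptive-penalty heuristic, not necessarily the paper's own scheme.
def update_penalty(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Rebalance rho so the primal residual r and dual residual s
    decrease at comparable rates."""
    if r_norm > mu * s_norm:      # primal residual lagging: tighten penalty
        return rho * tau
    if s_norm > mu * r_norm:      # dual residual lagging: relax penalty
        return rho / tau
    return rho                    # residuals balanced: leave rho unchanged
```

One practical caveat: under the scaled-dual formulation, whenever the penalty changes, the scaled dual variable must be rescaled by the ratio of the old penalty to the new one.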
A General Analysis of the Convergence of ADMM
We provide a new proof of the linear convergence of the alternating direction
method of multipliers (ADMM) when one of the objective terms is strongly
convex. Our proof is based on a framework for analyzing optimization algorithms
introduced in Lessard et al. (2014), reducing algorithm convergence to
verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates the need for assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing
the derived bound on the convergence rate provides a practical approach to
selecting algorithm parameters for particular ADMM instances. We complement our
upper bound by constructing a nearly-matching lower bound on the worst-case
rate of convergence.
Comment: 10 pages, 6 figures
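For reference, the iteration under analysis is the standard two-block ADMM in scaled dual form; the framework treats these three updates as a discrete-time dynamical system whose stability certifies a linear convergence rate. The notation below is the conventional one and is assumed rather than taken from the paper.

```latex
% Standard two-block ADMM (scaled dual form) for
%   minimize  f(x) + g(z)   subject to   Ax + Bz = c,
% with penalty parameter \rho > 0 and scaled dual variable u.
\begin{align*}
x^{k+1} &= \operatorname*{argmin}_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2,\\
z^{k+1} &= \operatorname*{argmin}_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{align*}
```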
Alternating direction method of multipliers for penalized zero-variance discriminant analysis
We consider the task of classification in the high-dimensional setting, where
the number of features of the given data is significantly greater than the
number of observations. To accomplish this task, we propose a heuristic, called
sparse zero-variance discriminant analysis (SZVD), for simultaneously
performing linear discriminant analysis and feature selection on high-dimensional data. This method combines classical zero-variance discriminant
analysis, where discriminant vectors are identified in the null space of the
sample within-class covariance matrix, with penalization applied to induce
sparse structures in the resulting vectors. To approximately solve the
resulting nonconvex problem, we develop a simple algorithm based on the
alternating direction method of multipliers. Further, we show that this
algorithm is applicable to a larger class of penalized generalized eigenvalue
problems, including a particular relaxation of the sparse principal component
analysis problem. Finally, we establish theoretical guarantees for convergence
of our algorithm to stationary points of the original nonconvex problem, and
empirically demonstrate the effectiveness of our heuristic for classifying
simulated data and data drawn from applications in time-series classification …
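To make the splitting concrete, the sketch below applies the same ADMM template to a sparse-PCA-style instance of the penalized eigenvalue problem mentioned above: maximize x'Sx - lam*||y||_1 subject to x = y and ||x||_2 <= 1. The subproblem solutions, penalty choice, and warm start are illustrative assumptions; the paper's actual SZVD subproblems (null-space constraint, discriminant data) are not reproduced here.

```python
# Schematic nonconvex ADMM for a sparse-PCA-style penalized eigenvalue problem:
#   maximize x'Sx - lam*||y||_1   subject to   x = y,  ||x||_2 <= 1.
# Illustrative only: subproblem handling is simplified relative to SZVD.
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: elementwise shrinkage toward zero (y-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def x_update(S, v, rho):
    """argmin_{||x||<=1} -x'Sx + (rho/2)||x - v||^2, assuming rho > 2*lmax(S)
    so the subproblem is strongly convex; the KKT multiplier nu for the
    ball constraint is found by bisection in the eigenbasis of S."""
    lam_, Q = np.linalg.eigh(S)
    w = Q.T @ v
    def x_of(nu):
        return Q @ (rho * w / (rho - 2.0 * lam_ + 2.0 * nu))
    x0 = x_of(0.0)                       # unconstrained minimizer (nu = 0)
    if np.linalg.norm(x0) <= 1.0:
        return x0
    lo, hi = 0.0, 1.0
    while np.linalg.norm(x_of(hi)) > 1.0:   # bracket the multiplier
        hi *= 2.0
    for _ in range(60):                     # bisection on ||x(nu)|| = 1
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(x_of(mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    return x_of(hi)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
S = A.T @ A / 40.0                          # sample covariance (illustrative)
lam = 0.2                                   # sparsity weight (illustrative)
rho = 2.0 * np.linalg.eigvalsh(S)[-1] + 1.0  # ensures convex x-subproblem
n = S.shape[0]
y = np.linalg.eigh(S)[1][:, -1]    # warm start: zero is a stationary point,
x, u = y.copy(), np.zeros(n)       # so start at the leading eigenvector
for k in range(200):
    x = x_update(S, y - u, rho)              # eigenvalue-side subproblem
    y = soft_threshold(x + u, lam / rho)     # l1 prox induces sparsity
    u += x - y                               # dual ascent on x = y
```

The warm start matters because the problem is nonconvex: the algorithm only converges to a stationary point, consistent with the guarantees the abstract describes.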