758 research outputs found
On the Q-linear convergence of Distributed Generalized ADMM under non-strongly convex function components
Solving optimization problems in multi-agent networks where each agent has
only partial knowledge of the problem has become increasingly important. In
this paper we consider the problem of minimizing a sum of convex functions,
each of which is known to only one agent. We
show that Generalized Distributed ADMM converges Q-linearly to the solution of
this optimization problem if the overall objective function is
strongly convex but the functions known by each agent are allowed to be only
convex. Establishing Q-linear convergence allows for tracking statements that
cannot be made if only R-linear convergence is guaranteed. Further, we
establish the equivalence between Generalized Distributed ADMM and P-EXTRA for
a subset of mixing matrices. This equivalence yields insight into the
convergence of P-EXTRA when overshooting is used to accelerate convergence.
Comment: Submitted to IEEE Transactions on Signal and Information Processing
over Networks
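As a minimal illustration of the consensus-ADMM template underlying this line of work, the sketch below solves a toy instance in which each agent holds a quadratic cost whose sum is strongly convex. The quadratic costs, penalty rho = 1, and centralized averaging step are illustrative assumptions, not the paper's Generalized Distributed ADMM.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=200):
    """Consensus ADMM for min_x sum_i f_i(x) with f_i(x) = (1/2)(x - a_i)^2.
    Each agent i knows only its own a_i; the global minimizer is mean(a).
    Each f_i is merely convex in general, but the sum here is strongly
    convex, which is the regime the Q-linear result addresses."""
    n = len(a)
    x = np.zeros(n)   # local primal copies, one per agent
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # consensus variable
    for _ in range(iters):
        # local update: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # consensus update: average the shifted local copies
        z = np.mean(x + u)
        # dual ascent on the consensus constraints x_i = z
        u = u + x - z
    return z
```

For a = (1, 2, 6) the consensus variable contracts by a constant factor per iteration in this toy instance, approaching the minimizer mean(a) = 3.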
Linearized ADMM for Non-convex Non-smooth Optimization with Convergence Analysis
Linearized alternating direction method of multipliers (ADMM) as an extension
of ADMM has been widely used to solve linearly constrained problems in signal
processing, machine learning, communications, and many other fields. Despite its
broad applications in nonconvex optimization, for a great number of nonconvex
and nonsmooth objective functions, its theoretical convergence guarantee is
still an open problem. In this paper, we propose a two-block linearized ADMM
and a multi-block parallel linearized ADMM for problems with nonconvex and
nonsmooth objectives. We show that the algorithms converge for a broader
class of objective functions under weaker assumptions than in previous works.
Furthermore, our proposed algorithm
can update coupled variables in parallel and work for less restrictive
nonconvex problems, where the traditional ADMM may have difficulties in solving
subproblems.
Comment: 29 pages, 2 tables, 2 figures
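To make the linearization concrete, here is a hedged sketch of a two-block linearized ADMM on a convex toy problem (the paper's theory targets nonconvex, nonsmooth objectives; the problem instance, penalty, and step-size rule below are illustrative assumptions). The x-update linearizes the augmented coupling term, so each iteration needs only matrix-vector products with A rather than a linear solve.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm(A, b, lam, rho=1.0, iters=5000):
    """Two-block linearized ADMM for min (1/2)||x - b||^2 + lam*||z||_1
    subject to A x - z = 0.  The x-update linearizes the coupling term
    (rho/2)||A x - z + u||^2 around the current x and adds a proximal
    term (1/(2*tau))||x - x_k||^2, avoiding any linear solve in A."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    # standard step-size condition for convergence: tau * rho * ||A||_2^2 < 1
    tau = 0.9 / (rho * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        g = rho * A.T @ (A @ x - z + u)            # gradient of coupling term
        x = (b + x / tau - g) / (1.0 + 1.0 / tau)  # linearized proximal x-step
        z = soft(A @ x + u, lam / rho)             # exact z-step (l1 prox)
        u = u + A @ x - z                          # scaled dual update
    return x
```

With A = I the problem reduces to l1 denoising, whose solution is the soft-thresholded b, which gives a quick correctness check.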
Scalable Electric Vehicle Charging Protocols
Although electric vehicles are considered a viable solution to reduce
greenhouse gas emissions, their uncoordinated charging could have adverse
effects on power system operation. Unfortunately, the task of optimal electric
vehicle charging scales unfavorably with the fleet size and the number of
control periods, especially when distribution grid limitations are enforced. To
this end, vehicle charging is first tackled using the recently revived
Frank-Wolfe method. The novel decentralized charging protocol has minimal
computational requirements from vehicle controllers, enjoys provable
acceleration over existing alternatives, enhances the security of the pricing
mechanism against data attacks, and protects user privacy. To comply with
voltage limits, a network-constrained EV charging problem is subsequently
formulated. Leveraging a linearized model for unbalanced distribution grids,
the goal is to minimize the power supply cost while respecting critical voltage
regulation and substation capacity limitations. Optimizing variables across
grid nodes is accomplished by exchanging information only between neighboring
buses via the alternating direction method of multipliers. Numerical tests
corroborate the optimality and efficiency of the novel schemes.
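The projection-free flavor of the Frank-Wolfe method used for the charging protocol can be sketched on a stylized problem: steering a charging profile toward a desired one under per-period limits. The quadratic cost and box feasible set are illustrative assumptions; over a box, the linear minimization oracle reduces to a coordinate-wise sign check, which is what keeps the per-vehicle computation minimal.

```python
import numpy as np

def frank_wolfe_box(d, lo, hi, iters=4000):
    """Projection-free Frank-Wolfe for min (1/2)||x - d||^2 over the box
    [lo, hi].  The linear minimization oracle over a box just picks an
    extreme point per coordinate based on the gradient sign, so no
    projection step is ever needed."""
    x = np.array(lo, dtype=float)   # start at a feasible vertex
    for t in range(iters):
        grad = x - d
        # LMO: minimize <grad, s> over the box -> an extreme point
        s = np.where(grad < 0, hi, lo)
        gamma = 2.0 / (t + 2.0)     # classical diminishing step size
        x = x + gamma * (s - x)     # convex combination: stays feasible
    return x
```

The classical guarantee f(x_t) - f* <= 2 L D^2 / (t + 2) (here L = 1 and D^2 = diameter squared of the box) bounds the suboptimality after t iterations, and every iterate is feasible by construction.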
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
This paper considers decentralized consensus optimization problems where
nodes of a network have access to different summands of a global objective
function. Nodes cooperate to minimize the global objective by exchanging
information with neighbors only. A decentralized version of the alternating
direction method of multipliers (DADMM) is a common method for solving this
category of problems. DADMM exhibits linear convergence rate to the optimal
objective but its implementation requires solving a convex optimization problem
at each iteration. This can be computationally costly and may result in large
overall convergence times. The decentralized quadratically approximated ADMM
algorithm (DQM), which minimizes a quadratic approximation of the objective
function that DADMM minimizes at each iteration, is proposed here. The
consequent reduction in computational time is shown to have minimal effect on
convergence properties. Convergence still proceeds at a linear rate with a
guaranteed constant that is asymptotically equivalent to the DADMM linear
convergence rate constant. Numerical results demonstrate advantages of DQM
relative to DADMM and other alternatives in a logistic regression problem.
Comment: 13 pages
A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization
Decentralized optimization is a powerful paradigm that finds applications in
engineering and learning design. This work studies decentralized composite
optimization problems with non-smooth regularization terms. Most existing
gradient-based proximal decentralized methods are known to converge to the
optimal solution with sublinear rates, and it remains unclear whether this
family of methods can achieve global linear convergence. To tackle this
problem, this work assumes the non-smooth regularization term is common across
all networked agents, which is the case for many machine learning problems.
Under this condition, we design a proximal gradient decentralized algorithm
whose fixed point coincides with the desired minimizer. We then provide a
concise proof that establishes its linear convergence. In the absence of the
non-smooth term, our analysis technique covers the well known EXTRA algorithm
and provides useful bounds on the convergence rate and step-size.
Comment: NeurIPS 201
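For reference, the single-agent proximal gradient iteration that such decentralized methods extend, with a shared l1 regularizer as the non-smooth term, is sketched below; the test problem and step-size choice are illustrative assumptions, not the paper's decentralized algorithm.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, iters=1000):
    """ISTA for min (1/2)||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth part followed by the prox of the
    non-smooth regularizer (soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft(x - step * grad, lam * step)
    return x
```

For a diagonal A the problem separates per coordinate and the solution has a closed form, so the iteration can be checked directly.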
Distributed Robust Power System State Estimation
Deregulation of energy markets, penetration of renewables, advanced metering
capabilities, and the urge for situational awareness, all call for system-wide
power system state estimation (PSSE). Implementing a centralized estimator
though is practically infeasible due to the scale and complexity of an
interconnection, the communication bottleneck in real-time monitoring, regional
disclosure policies, and reliability issues. In this context, distributed PSSE
methods are treated here under a unified and systematic framework. A novel
algorithm is developed based on the alternating direction method of
multipliers. It leverages existing PSSE solvers, respects privacy policies,
exhibits low communication load, and its convergence to the centralized
estimates is guaranteed even in the absence of local observability. Beyond the
conventional least-squares based PSSE, the decentralized framework accommodates
a robust state estimator. By exploiting links to advances in compressive
sampling, the latter jointly estimates the state and identifies
corrupted measurements. The novel algorithms are numerically evaluated using
the IEEE 14- and 118-bus systems, and a 4,200-bus benchmark. Simulations demonstrate that
the attainable accuracy can be reached within a few inter-area exchanges, while
largest residual tests are outperformed.
Comment: Revised submission to IEEE Trans. on Power Systems
Input-output analysis and decentralized optimal control of inter-area oscillations in power systems
Local and inter-area oscillations in bulk power systems are typically
identified using spatial profiles of poorly damped modes, and they are
mitigated via carefully tuned decentralized controllers. In this paper, we
employ non-modal tools to analyze and control inter-area oscillations. Our
input-output analysis examines power spectral density and variance
amplification of stochastically forced systems and offers new insights relative
to modal approaches. To improve upon the limitations of conventional wide-area
control strategies, we also study the problem of signal selection and optimal
design of sparse and block-sparse wide-area controllers. In our design, we
preserve rotational symmetry of the power system by allowing only relative
angle measurements in the distributed controllers. For the IEEE 39-bus New England
model, we examine performance tradeoffs and robustness of different control
architectures and show that optimal retuning of fully-decentralized control
strategies can effectively guard against local and inter-area oscillations.
Comment: Submitted to IEEE Trans. Power Syst.
Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
In this paper, we study the communication and (sub)gradient computation costs
in distributed optimization and give a sharp complexity analysis for the
proposed distributed accelerated gradient methods. We present two algorithms
based on the framework of the accelerated penalty method with increasing
penalty parameters. Our first algorithm is for smooth distributed optimization
and it obtains the near-optimal
$O\big(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\big)$
communication complexity and the optimal
$O\big(\sqrt{\frac{L}{\epsilon}}\big)$ gradient computation complexity for
$L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest
singular value of the weight matrix $W$ associated to the network and
$\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex
and $L$-smooth, our algorithm has the near-optimal
$O\big(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon}\big)$
complexity for communications and the optimal
$O\big(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\big)$ complexity for
gradient computations. Our communication complexities are only worse by a
factor of $\log\frac{1}{\epsilon}$ than the lower bounds for smooth
distributed optimization. As far as we know, our method is the first to
achieve both the communication and gradient computation lower bounds up to an
extra logarithm factor for smooth distributed optimization. Our second
algorithm is designed for non-smooth distributed optimization and it achieves
both the optimal $O\big(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\big)$
communication complexity and the optimal $O\big(\frac{1}{\epsilon^2}\big)$
subgradient computation complexity, which match the communication and
subgradient computation complexity lower bounds for non-smooth distributed
optimization.
Comment: The previous name of this paper was "A Sharp Convergence Rate
Analysis for Distributed Accelerated Gradient Methods". The contents are
consistent.
On the Duality Gap Convergence of ADMM Methods
This paper provides a duality gap convergence analysis for the standard ADMM
as well as a linearized version of ADMM. It is shown that under appropriate
conditions, both methods achieve linear convergence. However, the standard ADMM
achieves a faster accelerated convergence rate than that of the linearized
ADMM. A simple numerical example is used to illustrate the difference in
convergence behavior.
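The duality gap in question can be tracked explicitly on a toy instance where the dual has a closed form: for min (1/2)||x - b||^2 + lam*||z||_1 subject to x = z, the dual problem is max b^T y - (1/2)||y||^2 over ||y||_inf <= lam. The sketch below runs standard ADMM on this instance and reports the gap; the problem data and penalty are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_duality_gap(b, lam, rho=1.0, iters=300):
    """Standard two-block ADMM for min (1/2)||x-b||^2 + lam*||z||_1, x = z.
    Returns the final duality gap.  The z-update optimality condition
    implies the dual iterate y = rho*u satisfies ||y||_inf <= lam, so
    dual(y) is a valid lower bound on the optimal value and the gap
    shrinks to zero as the iterates converge."""
    n = len(b)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # x-update (quadratic prox)
        z = soft(x + u, lam / rho)              # z-update (l1 prox)
        u = u + x - z                           # scaled dual update
    y = rho * u
    primal = 0.5 * np.sum((x - b) ** 2) + lam * np.sum(np.abs(z))
    dual = b @ y - 0.5 * np.sum(y ** 2)
    return primal - dual
```

Monitoring this gap per iteration is one way to compare the convergence behavior of the standard and linearized variants on the same instance.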
Decentralized Dynamic Optimization for Power Network Voltage Control
Voltage control in power distribution networks has been greatly challenged by
the increasing penetration of volatile and intermittent devices. These devices
can also provide limited reactive power resources that can be used to regulate
the network-wide voltage. A decentralized voltage control strategy can be
designed by minimizing a quadratic voltage mismatch error objective using
gradient-projection (GP) updates. Coupled with the power network flow, the
local voltage can provide the instantaneous gradient information. This paper
aims to analyze the performance of this decentralized GP-based voltage control
design under two dynamic scenarios: i) the nodes perform the decentralized
update in an asynchronous fashion, and ii) the network operating condition is
time-varying. For the asynchronous voltage control, we improve the existing
convergence condition by recognizing that the voltage-based gradient is always
up-to-date. By modeling the network dynamics using an autoregressive process
and considering time-varying resource constraints, we provide an error bound in
tracking the instantaneous optimal solution to the quadratic error objective.
This result can be extended to more general constrained dynamic optimization
problems with smooth strongly convex objective functions under
stochastic processes that have bounded iterative changes. Extensive numerical
tests have been performed to demonstrate and validate our analytical results
for realistic power networks.
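A static snapshot of the gradient-projection (GP) update analyzed here can be sketched as follows: minimize a quadratic voltage-mismatch surrogate over box constraints on the control variable, with one gradient step followed by a projection (a clip) per iteration. The quadratic model and the limits below are illustrative assumptions, not the paper's network model.

```python
import numpy as np

def projected_gradient(Q, c, lo, hi, iters=500):
    """Gradient projection for min (1/2) q^T Q q - c^T q over the box
    [lo, hi] (Q symmetric positive definite), a stand-in for a quadratic
    voltage-mismatch objective with reactive power limits.  Each step is
    a gradient descent step followed by projection onto the box, which
    for a box is simply a coordinate-wise clip."""
    L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
    step = 1.0 / L
    q = np.clip(np.zeros(len(c)), lo, hi)
    for _ in range(iters):
        grad = Q @ q - c
        q = np.clip(q - step * grad, lo, hi)
    return q
```

For a diagonal Q the constrained minimizer is just the clipped unconstrained minimizer, which gives a direct check of the update.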