Optimal Ensemble Control of Loads in Distribution Grids with Network Constraints
Flexible loads, e.g. thermostatically controlled loads (TCLs), can
technically participate in demand response (DR) programs. However, a number
of challenges must be resolved before such participation can be implemented
in practice en masse. First, individual TCLs must be aggregated and operated
in sync to scale DR benefits. Second, the uncertainty of TCLs needs to be
accounted for. Third, exercising the flexibility of TCLs must be coordinated
with distribution system operations to avoid unnecessary power losses and to
ensure compliance with power flow and voltage limits.
This paper addresses these challenges. We propose a network-constrained,
open-loop, stochastic optimal control formulation. The first part of this
formulation represents ensembles of collocated TCLs modelled by an aggregated
Markov Process (MP), where each MP state is associated with a given power
consumption or production level. The second part extends MPs to a multi-period
distribution power flow optimization. In this optimization, the control of TCL
ensembles is regulated by transition probability matrices and physically
enabled by local active and reactive power controls at TCL locations. The
optimization is solved with a Spatio-Temporal Dual Decomposition (ST-D2)
algorithm. The performance of the proposed formulation and algorithm is
demonstrated on the IEEE 33-bus distribution model.
Comment: 7 pages, 6 figures, accepted PSCC 201
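The aggregated Markov process view of a TCL ensemble can be illustrated with a small sketch: each state carries a power level, the controller chooses a row-stochastic transition probability matrix, and the population distribution (and hence aggregate demand) evolves under that matrix. The state count, power levels, and matrix entries below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed power level (kW) consumed by one TCL in each Markov state
power_levels = np.array([0.0, 1.5, 4.0])

# Controller's chosen transition probability matrix (row-stochastic):
# entry P[i, j] is the probability a TCL in state i moves to state j.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row must sum to 1

n_tcls = 1000
pi = np.array([1.0, 0.0, 0.0])  # initial distribution: all TCLs off

for t in range(5):
    # Expected aggregate demand = population size * (distribution . power levels)
    aggregate_kw = n_tcls * pi @ power_levels
    print(f"t={t}: aggregate demand {aggregate_kw:.1f} kW")
    pi = pi @ P  # distribution evolves under the chosen matrix
```

In the paper's formulation this matrix is itself a decision variable, coupled to the multi-period power flow optimization; the sketch only shows how a fixed matrix shapes aggregate consumption over time.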
Universal Convexification via Risk-Aversion
We develop a framework for convexifying a fairly general class of
optimization problems. Under additional assumptions, we analyze the
suboptimality of the solution to the convexified problem relative to the
original nonconvex problem and prove additive approximation guarantees. We then
develop algorithms based on stochastic gradient methods to solve the resulting
optimization problems and show bounds on convergence rates. We show a simple
application of this framework to supervised learning, where one can perform
integration explicitly and can use standard (non-stochastic) optimization
algorithms with better convergence guarantees. We then extend this framework to
apply to a general class of discrete-time dynamical systems. In this context,
our convexification approach falls under the well-studied paradigm of
risk-sensitive Markov Decision Processes. We derive the first known model-based
and model-free policy gradient optimization algorithms with guaranteed
convergence to the optimal solution. Finally, we present numerical results
validating our formulation in different applications.
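The core idea of convexification via risk-aversion can be sketched on a toy problem: replace a nonconvex objective f with an entropic-risk surrogate over Gaussian perturbations of the decision variable, g(x) = gamma * log E_w[exp(f(x + w) / gamma)], and minimize it with a stochastic gradient method. The test function, noise scale, and risk parameter below are illustrative assumptions; this is not the paper's exact algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A simple nonconvex test objective (assumed for illustration)
    return np.sin(3 * x) + 0.1 * x**2

def f_prime(x):
    return 3 * np.cos(3 * x) + 0.2 * x

def risk_grad(x, gamma=1.0, sigma=0.5, n=256):
    """Monte Carlo gradient of the entropic-risk surrogate at x.

    The gradient of gamma*log E[exp(f(x+w)/gamma)] is an
    exponentially reweighted average of f'(x+w), which up-weights
    high-cost (risky) perturbations.
    """
    w = rng.normal(0.0, sigma, size=n)
    vals = f(x + w) / gamma
    vals -= vals.max()            # stabilize the exponentials
    weights = np.exp(vals)
    weights /= weights.sum()
    return float(weights @ f_prime(x + w))

x, lr = 2.0, 0.1
for _ in range(500):
    x -= lr * risk_grad(x)        # stochastic gradient step on the surrogate
```

The reweighting makes the surrogate risk-averse: scenarios with high cost dominate the gradient, which is what smooths (and, under the paper's assumptions, convexifies) the landscape relative to plain stochastic smoothing.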