
    Distributed Online Modified Greedy Algorithm for Networked Storage Operation under Uncertainty

    The integration of intermittent and stochastic renewable energy resources requires increased flexibility in the operation of the electric grid. Storage, broadly speaking, provides the flexibility of shifting energy over time; the network, on the other hand, provides the flexibility of shifting energy across geographical locations. The optimal control of storage networks in stochastic environments is an important open problem. The key challenge is that, even in small networks, the corresponding constrained stochastic control problems on continuous spaces suffer from the curse of dimensionality and are intractable in general settings. For large networks, no efficient algorithm is known to give optimal or provably near-optimal performance for this problem. This paper provides an efficient algorithm to solve this problem with performance guarantees. We study the operation of storage networks, i.e., storage systems interconnected via a power network. An online algorithm, termed the Online Modified Greedy algorithm, is developed for the corresponding constrained stochastic control problem. A sub-optimality bound for the algorithm is derived, and a semidefinite program is constructed to minimize the bound. In many cases, the bound approaches zero, so that the algorithm is near-optimal. A task-based distributed implementation of the online algorithm, relying only on local information and neighbor communication, is then developed based on the alternating direction method of multipliers. Numerical examples verify the established theoretical performance bounds and demonstrate the scalability of the algorithm.
    Comment: arXiv admin note: text overlap with arXiv:1405.778
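    To make the flavor of such an online policy concrete, below is a minimal, hypothetical sketch of a one-step "modified greedy" dispatch for a single storage unit: at each step a convex subproblem trades off the current energy cost against a linear penalty `theta` on the end-of-step state of charge (standing in for the paper's modification term). The cost model, parameter names, and the use of cvxpy are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical one-step modified-greedy storage dispatch (illustrative only).
import cvxpy as cp

def modified_greedy_step(soc, price, theta, u_max=1.0, capacity=10.0):
    """Choose a charge/discharge amount u for the current time step.

    soc      -- current state of charge
    price    -- current energy price (buying energy is costly when positive)
    theta    -- penalty weight on stored energy (the "modification" term)
    u_max    -- charge/discharge rate limit
    capacity -- storage capacity
    """
    u = cp.Variable()                        # charge (> 0) / discharge (< 0)
    objective = cp.Minimize(price * u + theta * (soc + u))
    constraints = [
        cp.abs(u) <= u_max,                  # rate limit
        soc + u >= 0, soc + u <= capacity,   # keep the state of charge feasible
    ]
    cp.Problem(objective, constraints).solve()
    return float(u.value)

# Example: with a high price and a positive theta, the greedy step discharges.
print(modified_greedy_step(soc=5.0, price=2.0, theta=0.1))
```

    In the networked setting described above, each node would solve a local subproblem of this kind, with ADMM coordinating the coupling constraints across neighbors.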

    Linear Programming for Large-Scale Markov Decision Problems

    We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average-cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization and one based on constraint sampling. In both cases, we give bounds showing that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.
    Comment: 27 pages, 3 figures
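    For reference, the sketch below writes out the dual LP that the abstract starts from: the variable is a stationary distribution over state-action pairs, stationarity is enforced as a flow-balance constraint, and the objective is the average cost. The paper's contribution is to restrict this distribution to a low-dimensional feature subspace; here we solve the unrestricted LP on a small random MDP, with all problem data (P, c, sizes) as placeholders and cvxpy assumed as the modeling layer.

```python
# Dual LP for the average-cost MDP on a small random instance (illustrative sketch).
import numpy as np
import cvxpy as cp

S, A = 5, 2                                    # number of states and actions
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a, s'] transition probabilities
c = rng.random((S, A))                         # per-step costs c(s, a)

mu = cp.Variable((S, A), nonneg=True)          # stationary state-action distribution

constraints = [cp.sum(mu) == 1]
for s_next in range(S):
    inflow = cp.sum(cp.multiply(mu, P[:, :, s_next]))    # mass flowing into s_next
    constraints.append(cp.sum(mu[s_next, :]) == inflow)  # stationarity at s_next

prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(mu, c))), constraints)
prob.solve()
print("optimal average cost:", prob.value)
```

    Restricting mu to the span of state-action features (and allowing a small neighborhood around that subspace, as in the abstract) is what removes the dependence on the size of the state space.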

    Sparse and Constrained Stochastic Predictive Control for Networked Systems

    This article presents a novel class of control policies for networked control of Lyapunov-stable linear systems with bounded inputs. The control channel is assumed to have i.i.d. Bernoulli packet dropouts, and the system is assumed to be affected by additive stochastic noise. Our proposed class of policies is affine in the past dropouts and the saturated values of the past disturbances. We further include a regularization term in a quadratic performance index to promote sparsity in the control. We demonstrate how to augment the underlying optimization problem with a constant negative drift constraint to ensure mean-square boundedness of the closed-loop states, yielding a convex quadratic program to be solved periodically online. The states of the closed-loop plant under the receding horizon implementation of the proposed class of policies are mean-square bounded for any positive bound on the control and any non-zero probability of successful transmission.
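    As a rough illustration of the kind of convex program solved at each period, the sketch below poses a finite-horizon quadratic cost with an l1 term promoting sparse control, hard input bounds, and nominal linear dynamics. The paper's dropout- and saturated-disturbance feedback parameterization and the negative-drift constraint are omitted here; the matrices A, B, Q, R, the weight lam, the bound u_max, and the horizon are illustrative placeholders.

```python
# Stripped-down receding-horizon QP with an l1 sparsity term (illustrative sketch).
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2], [0.0, 0.8]])   # Lyapunov-stable plant (spectral radius < 1)
B = np.array([[0.0], [1.0]])
Q, R, lam, u_max, N = np.eye(2), np.eye(1), 0.5, 1.0, 10
x0 = np.array([3.0, -2.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for t in range(N):
    cost += cp.quad_form(x[:, t], Q) + cp.quad_form(u[:, t], R) + lam * cp.norm1(u[:, t])
    constraints += [
        x[:, t + 1] == A @ x[:, t] + B @ u[:, t],   # nominal dynamics
        cp.abs(u[:, t]) <= u_max,                   # bounded inputs
    ]
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])         # applied, then re-solved next period
```

    In the receding-horizon scheme described above, only the first control move is applied at each period before the program is re-solved with the newly observed state.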