4 research outputs found

    An Asynchronous, Decentralized Solution Framework for the Large Scale Unit Commitment Problem

    With increased reliance on cyber infrastructure, large-scale power networks face new challenges owing to computational scalability. In this paper we focus on developing an asynchronous decentralized solution framework for the Unit Commitment (UC) problem for large-scale power networks. We exploit the inherent asynchrony in a region-based decomposition, arising from the imbalance among regional subproblems, to boost computational efficiency. A two-phase algorithm is proposed that relies on the convex relaxation and privacy-preserving valid inequalities to deliver algorithmic improvements. Our algorithm employs a novel interleaved binary mechanism that locally switches from the convex subproblem to its binary counterpart based on consistent local convergence behavior. We develop a high-performance computing (HPC) oriented software framework that uses the Message Passing Interface (MPI) to drive our benchmark studies. Our simulations, performed on the IEEE 3012-bus case, are benchmarked against a centralized method and a state-of-the-art synchronous decentralized method. The results demonstrate that the asynchronous method significantly improves computational efficiency and provides solution quality rivaling that of the benchmark methods.
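    The interleaved two-phase loop described above can be pictured with a short sketch. The following toy mpi4py fragment is illustrative only, not the paper's implementation: solve_relaxed() and solve_binary() are hypothetical stand-ins for the regional subproblem solvers, and NEIGHBORS, TOL, and PATIENCE are made-up placeholders rather than quantities from the paper.

        from mpi4py import MPI

        def solve_relaxed(boundary):   # placeholder for the regional convex relaxation
            return sum(boundary.values()) if boundary else 0.0

        def solve_binary(boundary):    # placeholder for the regional binary (MIP) subproblem
            return round(sum(boundary.values())) if boundary else 0.0

        comm = MPI.COMM_WORLD
        NEIGHBORS = []                 # ranks of adjacent regions (problem data)
        TOL, PATIENCE = 1e-3, 5       # switch phases after PATIENCE stable iterations

        boundary, x, stable, binary_phase = {}, None, 0, False
        for it in range(1000):
            # Drain whatever boundary updates have arrived; never block on neighbors.
            status = MPI.Status()
            while comm.Iprobe(source=MPI.ANY_SOURCE, tag=0, status=status):
                src = status.Get_source()
                boundary[src] = comm.recv(source=src, tag=0)
                status = MPI.Status()
            # Phase 1 solves the convex relaxation; phase 2 its binary counterpart.
            x_new = solve_binary(boundary) if binary_phase else solve_relaxed(boundary)
            # Switch phases locally once the relaxed iterates stop moving.
            stable = stable + 1 if x is not None and abs(x_new - x) < TOL else 0
            binary_phase = binary_phase or stable >= PATIENCE
            x = x_new
            # Push the new boundary estimate to neighbors without waiting on them.
            for nb in NEIGHBORS:
                comm.isend(x, dest=nb, tag=0)

    The point of the sketch is the asynchrony: a slow region never stalls its neighbors, since each rank consumes whatever messages have arrived and proceeds.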

    Fully Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks

    This paper proposes a \emph{fully asynchronous} scheme for the policy evaluation problem of distributed reinforcement learning (DisRL) over directed peer-to-peer networks. Without waiting for any other node of the network, each node can locally update its value function at any time using (possibly delayed) information from its neighbors. This is in sharp contrast to gossip-based schemes, where a pair of nodes update concurrently. Though the fully asynchronous setting involves a difficult multi-timescale decision problem, we design a novel stochastic average gradient (SAG) based distributed algorithm and develop a push-pull augmented graph approach to prove its exact convergence at a linear rate of $\mathcal{O}(c^k)$, where $c \in (0,1)$ and $k$ increases by one with every update, regardless of which node performs it. Finally, numerical experiments validate that our method speeds up linearly with respect to the number of nodes and is robust to straggler nodes.
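    As a rough illustration of the update pattern only, not the paper's algorithm or notation, the following single-process NumPy toy lets a random node wake up, mix its parameter with possibly stale copies of its peers' parameters, and refresh one stored per-sample gradient in SAG fashion; the least-squares losses stand in for the policy-evaluation objective, and all sizes and constants are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, dim, n_samples = 4, 3, 10
        A = [rng.normal(size=(n_samples, dim)) for _ in range(n_nodes)]  # local features
        b = [rng.normal(size=n_samples) for _ in range(n_nodes)]         # local targets
        w = [np.zeros(dim) for _ in range(n_nodes)]                      # value parameters
        table = [np.zeros((n_samples, dim)) for _ in range(n_nodes)]     # SAG gradient table
        avg = [np.zeros(dim) for _ in range(n_nodes)]                    # running gradient averages
        heard = [[np.zeros(dim) for _ in range(n_nodes)] for _ in range(n_nodes)]

        for t in range(5000):
            i = int(rng.integers(n_nodes))                       # this node wakes up; no one waits
            mix = 0.5 * w[i] + 0.5 * np.mean(heard[i], axis=0)   # possibly stale peer info
            j = int(rng.integers(n_samples))                     # sample whose gradient is refreshed
            g = (A[i][j] @ mix - b[i][j]) * A[i][j]              # fresh per-sample gradient
            avg[i] += (g - table[i][j]) / n_samples              # SAG average-gradient correction
            table[i][j] = g
            w[i] = mix - 0.1 * avg[i]
            for k in range(n_nodes):                             # messages may be delayed or lost
                if rng.random() < 0.7:
                    heard[k][i] = w[i].copy()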

    AsySPA: An Exact Asynchronous Algorithm for Convex Optimization Over Digraphs

    This paper proposes a novel exact distributed asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over directed graphs, where each node only has access to a local convex function and updates asynchronously at an arbitrary rate. Specifically, each node of a strongly connected digraph does not wait for updates from other nodes but simply starts a new update within any bounded time interval, using local information available from its in-neighbors. "Exact" means that every node of the AsySPA asymptotically converges to the same optimal solution, even under different update rates among nodes and bounded communication delays. To address uneven update rates, we design a simple mechanism that adaptively adjusts the stepsize per update in each node, which is substantially different from existing works. We then construct a delay-free augmented system to address asynchrony and delays, and study its convergence by proposing a generalized subgradient algorithm, which has its own significance and helps us explicitly evaluate the convergence rate of the AsySPA. Finally, we demonstrate the advantages of the AsySPA in both theory and simulation.
    Comment: Accepted by IEEE Transactions on Automatic Control. 15 pages, 9 figures.
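    A compact way to see the adaptive-stepsize idea is a single-process simulation of subgradient-push in which the node that wakes up applies the sum of the diminishing stepsizes it skipped since its last update. This is a minimal sketch under invented data: the three-node digraph, the quadratic local costs, and the stepsize sequence below are all made up for the demo.

        import numpy as np

        rng = np.random.default_rng(1)
        out = {0: [1], 1: [2], 2: [0, 1]}           # strongly connected digraph
        targets = [1.0, 3.0, 8.0]                    # local costs f_i(x) = (x - t_i)^2 / 2
        x = np.ones(3); y = np.ones(3)               # push-sum numerators and weights
        inbox = {i: [] for i in range(3)}            # (x_share, y_share) messages in transit
        last_k = np.zeros(3, dtype=int)              # global counter at each node's last update
        alpha = lambda k: 1.0 / (k + 1)              # diminishing stepsize sequence

        for k in range(1, 3000):
            i = int(rng.integers(3))                 # an arbitrary node wakes up
            for xs, ys in inbox[i]:                  # absorb possibly delayed shares
                x[i] += xs; y[i] += ys
            inbox[i].clear()
            # Adaptive stepsize: sum of the stepsizes skipped since the last update.
            step = sum(alpha(t) for t in range(last_k[i] + 1, k + 1))
            last_k[i] = k
            z = x[i] / y[i]                          # push-sum ratio estimate
            x[i] -= step * (z - targets[i])          # subgradient of the local cost at z
            d = len(out[i]) + 1                      # split: keep one share, push the rest
            xs, ys = x[i] / d, y[i] / d
            x[i], y[i] = xs, ys
            for j in out[i]:
                inbox[j].append((xs, ys))

        print(x / y)  # each ratio roughly approaches the minimizer, mean(targets) = 4.0

    Without the summed stepsize, a frequently updating node would effectively overweight its own objective; summing the skipped stepsizes rebalances the nodes' contributions.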

    Asynchronous Gradient-Push

    We consider a multi-agent framework for distributed optimization in which each agent has access to a local smooth, strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents' local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the neighborhood size depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Gradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
    Comment: Accepted by IEEE Transactions on Automatic Control. 33 pages, 9 figures.
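    To make the "no agent waits for any other" behavior concrete, here is a toy threaded rendition of the gradient-push pattern; the ring digraph, quadratic local costs, constant stepsize, and random sleeps standing in for uneven agent speeds are all assumptions of the demo, not details from the paper.

        import random, threading, time, queue

        N, STEPS, ALPHA = 3, 4000, 0.01
        targets = [1.0, 3.0, 8.0]                   # local costs f_i(w) = (w - t_i)^2 / 2
        mailboxes = [queue.Queue() for _ in range(N)]
        results = [0.0] * N

        def agent(i):
            x, y = 1.0, 1.0                         # push-sum numerator and weight
            for _ in range(STEPS):
                while not mailboxes[i].empty():     # absorb possibly delayed shares
                    xs, ys = mailboxes[i].get()
                    x += xs
                    y += ys
                z = x / y                           # de-biased local estimate
                x -= ALPHA * (z - targets[i])       # local gradient step
                x, y = x / 2, y / 2                 # keep half, push half onward
                mailboxes[(i + 1) % N].put((x, y))  # ring out-neighbor
                time.sleep(random.random() * 1e-5)  # uneven, unsynchronized rates
            results[i] = x / y

        threads = [threading.Thread(target=agent, args=(i,)) for i in range(N)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        print(results)  # near mean(targets) = 4.0; the gap grows with rate imbalance

    With a constant stepsize and unequal agent speeds, the final ratios settle in a neighborhood of the minimizer, mirroring the neighborhood-size result stated in the abstract.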