
    The continuous Newton-Raphson method can look ahead

    This paper concerns an intriguing property of the continuous Newton-Raphson method for the minimization of a continuous objective function f: if x is a point in the domain of attraction of a strict local minimizer x*, then the flux line of the Newton-Raphson flow that starts at x approaches x* from a direction that depends only on the behavior of f in arbitrarily small neighborhoods of x and x*. In fact, if F is a sufficiently benign perturbation of f on an open region D not containing x, then the two flux lines through x defined by the Newton-Raphson vector fields corresponding to f and F differ from one another only within D.

    This work was supported by EPSRC grant GR/S34472 (R. Hauser) and by the Clarendon Fund, Oxford University Press, and an ORS Award, Universities UK (J. Nedic).
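    The flow itself is straightforward to integrate numerically. Below is a minimal forward-Euler sketch of the Newton-Raphson flow dx/dt = -[Hess f(x)]^(-1) grad f(x); the function names and the quadratic test problem are illustrative assumptions, not taken from the paper. On a quadratic the flow contracts straight toward the minimizer, so the final iterate's direction matches the starting point's, a simple instance of the direction-of-approach property described above.

        import numpy as np

        def newton_raphson_flow(grad, hess, x0, dt=0.01, steps=2000):
            """Forward-Euler integration of the continuous Newton-Raphson flow
            dx/dt = -[hess(x)]^{-1} grad(x)."""
            x = np.asarray(x0, dtype=float)
            path = [x.copy()]
            for _ in range(steps):
                # One Euler step along the Newton-Raphson vector field.
                x = x - dt * np.linalg.solve(hess(x), grad(x))
                path.append(x.copy())
            return np.array(path)

        # Illustrative quadratic f(x, y) = x^2 + 2*y^2 with strict minimizer x* = 0.
        grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
        hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])

        path = newton_raphson_flow(grad, hess, x0=[3.0, -2.0])
        print(path[-1])                              # close to [0, 0]
        print(path[-1] / np.linalg.norm(path[-1]))   # approach direction, parallel to x0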

    Distributed optimization over time-varying directed graphs

    We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires no knowledge of either the number of agents or the graph sequence to implement. Our analysis shows that the subgradient-push algorithm converges at a rate of O(ln(t)/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
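    The abstract describes the update structure precisely enough to sketch. Below is a small scalar implementation of the subgradient-push updates: push-sum mixing of the values and of auxiliary weights, de-biasing by their ratio, then a subgradient step. The fixed directed ring, the step-size schedule α_t = 1/√t, and the quadratic costs are illustrative assumptions, not choices made in the paper.

        import numpy as np

        def subgradient_push(subgrads, adjacency_seq, x0, T=2000):
            """Scalar subgradient-push.

            subgrads      : list of functions; subgrads[i](z) is a subgradient of f_i at z
            adjacency_seq : function t -> boolean matrix A, A[i, j] = True if j sends to i
                            (self-loops assumed: A[i, i] = True for all i)
            x0            : initial scalar value at each node
            """
            n = len(x0)
            x = np.asarray(x0, dtype=float)
            y = np.ones(n)                       # push-sum weights, y_i(0) = 1
            for t in range(1, T + 1):
                A = adjacency_seq(t)
                d = A.sum(axis=0)                # out-degree of each node (incl. self-loop)
                W = A / d                        # column-stochastic mixing matrix
                w = W @ x                        # push-sum mixing of the values
                y = W @ y                        # weights track the influence imbalance
                z = w / y                        # de-biased estimates of the average
                alpha = 1.0 / np.sqrt(t)         # diminishing step size (assumed schedule)
                x = w - alpha * np.array([g(zi) for g, zi in zip(subgrads, z)])
            return z

        # Example: nodes minimize sum_i (x - a_i)^2; the optimum is the mean of the a_i.
        a = np.array([1.0, 4.0, -2.0, 7.0])
        subgrads = [lambda z, ai=ai: 2.0 * (z - ai) for ai in a]

        # A fixed directed ring with self-loops (uniformly strongly connected).
        n = len(a)
        A = np.eye(n, dtype=bool)
        for i in range(n):
            A[(i + 1) % n, i] = True             # node i sends to node i+1

        print(subgradient_push(subgrads, lambda t: A, x0=np.zeros(n)))  # approx. mean(a) = 2.5

    The y-weights are what make the directed setting work: dividing the mixed values w by y cancels the influence imbalance created by one-way edges, so each node recovers an unbiased average without knowing the number of agents or the graph sequence.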

    Cloud-Based Centralized/Decentralized Multi-Agent Optimization with Communication Delays

    We present and analyze a computational hybrid architecture for performing multi-agent optimization. The optimization problems under consideration have convex objective and constraint functions with mild smoothness conditions imposed on them. For such problems, we provide a primal-dual algorithm implemented in the hybrid architecture, which consists of a decentralized network of agents into which centralized information is occasionally injected, and we establish its convergence properties. To accomplish this, a central cloud computer aggregates global information, computes updates of the dual variables based on this information, and then distributes the updated dual variables to the agents. The agents update their (primal) state variables and also communicate among themselves, each agent exchanging state information with some number of its neighbors. Throughout, communications with the cloud are not assumed to be synchronous or instantaneous, and communication delays are explicitly accounted for in the modeling and analysis of the system. Experimental results are presented to support the theoretical developments made.
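    To make the division of labor concrete, here is a generic primal-dual sketch, not the paper's algorithm: agents take gradient steps on a Lagrangian using the most recent dual variable they received, while a "cloud" refreshes that dual only every few iterations, a crude stand-in for delayed, asynchronous centralized updates. The coupled-constraint problem, step sizes, and delay model are all assumptions for illustration, and agent-to-agent communication is omitted here.

        import numpy as np

        # Agents jointly minimize sum_i (x_i - c_i)^2 subject to sum_i x_i <= b.
        c = np.array([3.0, 1.0, 4.0, 2.0])     # each agent's private target (assumed data)
        b = 6.0                                 # shared resource budget (assumed data)
        gamma, rho, period = 0.05, 0.1, 10      # primal step, dual step, cloud update period

        x = np.zeros_like(c)                    # primal variables, one per agent
        mu = 0.0                                # dual variable held by the cloud

        for t in range(5000):
            # Agents: gradient step on the Lagrangian with the last dual they received.
            x -= gamma * (2.0 * (x - c) + mu)
            # Cloud: occasional projected dual ascent on the aggregated constraint.
            if t % period == 0:
                mu = max(0.0, mu + rho * (x.sum() - b))

        print(x, x.sum())                       # allocation with sum close to b

    For this instance the KKT conditions give mu = 2 and x = (2, 0, 3, 1), so the iterates above settle near a feasible allocation that exactly spends the budget; the stale dual variable only slows, not breaks, convergence, which is the intuition behind tolerating communication delays with the cloud.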