
    Distributed Delay-Tolerant Strategies for Equality-Constraint Sum-Preserving Resource Allocation

    This paper proposes two nonlinear dynamics to solve the constrained distributed optimization problem of resource allocation over a multi-agent network. In this setup, the coupling constraint encodes the resource-demand balance, which is preserved at all times. The proposed solutions can accommodate various model nonlinearities, for example due to quantization and/or saturation, and can be tuned to accelerate convergence or to robustify the solution against impulsive noise and uncertainties. We prove convergence over weakly connected networks using convex analysis and Lyapunov theory. Our findings show that convergence is reached for any general sign-preserving odd nonlinearity. We further propose delay-tolerant mechanisms to handle general bounded, heterogeneous, time-varying delays over the agents' communication network while preserving all-time feasibility. This work finds application in CPU scheduling and coverage control, among others. The paper advances the state of the art by addressing (i) possible nonlinearities on the agents/links, while handling (ii) resource-demand feasibility at all times, (iii) uniform connectivity instead of all-time connectivity, and (iv) possibly heterogeneous and time-varying delays. To the best of our knowledge, no existing work addresses contributions (i)-(iv) altogether. Simulations and a comparative analysis are provided to corroborate our contributions.
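    The abstract does not spell out the protocol, so the following is a minimal sketch, under illustrative assumptions, of the general idea: Laplacian-type sum-preserving dynamics in which an odd, sign-preserving nonlinearity g (here tanh, mimicking saturation) acts on gradient differences. Because g is odd and the graph is undirected, the pairwise terms cancel, so the resource-demand balance sum(x) = b holds at all times. The quadratic costs and ring graph are assumptions, not the paper's setup.

```python
import numpy as np

# Sketch (not the paper's exact protocol): nonlinear consensus-type
# resource-allocation dynamics  xdot_i = sum_{j in N_i} g(grad_j - grad_i),
# with g odd and sign-preserving. Odd g + undirected edges => sum(x) is
# invariant, so the coupling constraint sum(x) = b holds at all times.

rng = np.random.default_rng(0)
n = 6
b = 10.0                                  # total resource demand
a = rng.uniform(0.5, 2.0, n)              # local costs f_i(x_i) = 0.5*a_i*x_i^2
grad = lambda x: a * x                    # local gradients

nbrs = [((i - 1) % n, (i + 1) % n) for i in range(n)]   # ring graph
g = np.tanh                               # odd, sign-preserving (saturation-like)

x = np.full(n, b / n)                     # feasible start: sum(x) == b
dt = 0.05
for _ in range(4000):
    d = grad(x)
    dx = np.array([g(d[j] - d[i]) + g(d[k] - d[i])
                   for i, (j, k) in enumerate(nbrs)])
    x += dt * dx

print("sum(x) =", x.sum())                        # stays ~10: all-time feasibility
print("marginal costs:", np.round(grad(x), 3))    # ~equal at the optimum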

    Opinion Dynamics in Social Networks with Hostile Camps: Consensus vs. Polarization

    Most distributed protocols for multi-agent consensus assume that the agents are mutually cooperative and "trustful," so that the couplings among the agents bring the values of their states closer. Opinion dynamics in social groups, however, call for models beyond these conventional ones, due to the ubiquitous competition and distrust between some pairs of agents, which are usually characterized by repulsive couplings and may lead to clustering of the opinions. A simple yet insightful model of opinion dynamics with both attractive and repulsive couplings was recently proposed by C. Altafini, who examined first-order consensus algorithms over static signed graphs. This protocol establishes modulus consensus, where the opinions become the same in modulus but may differ in sign. In this paper, we extend the modulus consensus model to the case where the network topology is an arbitrary time-varying signed graph, and we prove that modulus consensus is reached under mild sufficient conditions of uniform connectivity of the graph. For cut-balanced graphs, we give conditions for modulus consensus that are not only sufficient but also necessary.

    Comment: scheduled for publication in IEEE Transactions on Automatic Control, 2016, vol. 61, no. 7 (accepted in August 2015).
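    Altafini's static first-order protocol is standard enough to sketch: xdot_i = sum_j |a_ij| (sgn(a_ij) x_j - x_i). The toy graph below is an illustrative assumption: two hostile camps with positive in-camp and negative cross-camp weights (hence structurally balanced), which exhibits exactly the modulus consensus described above.

```python
import numpy as np

# Sketch of Altafini's dynamics on a static, structurally balanced
# signed graph: xdot_i = sum_j |a_ij| * (sign(a_ij)*x_j - x_i).
# Camps {0,1,2} and {3,4,5}: attractive couplings within a camp,
# repulsive couplings across camps.

n = 6
camp = np.array([0, 0, 0, 1, 1, 1])
rng = np.random.default_rng(1)
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        w = rng.uniform(0.2, 1.0)
        s = 1.0 if camp[i] == camp[j] else -1.0
        A[i, j] = A[j, i] = s * w

x = rng.normal(size=n)
dt = 0.01
for _ in range(5000):
    xdot = np.array([sum(abs(A[i, j]) * (np.sign(A[i, j]) * x[j] - x[i])
                         for j in range(n) if A[i, j] != 0)
                     for i in range(n)])
    x += dt * xdot

# Modulus consensus: equal magnitudes, signs split by camp.
print(np.round(x, 4))
```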

    Mean-field sparse Jurdjevic-Quinn control

    We consider nonlinear transport equations with non-local velocity, describing the time evolution of a measure, which in practice may represent the density of a crowd. Such equations often arise as the mean-field limit of finite-dimensional systems modelling collective dynamics. We first give a meaning to dissipativity of these mean-field equations in terms of Lie derivatives of a Lyapunov function depending on the measure. Then, we address the problem of controlling such equations by means of a time-varying bounded control action localized on a time-varying control subset of bounded Lebesgue measure (sparsity space constraint). Finite-dimensional versions are given by control-affine systems, which can be stabilized by the well-known Jurdjevic–Quinn procedure. In this paper, assuming that the uncontrolled dynamics are dissipative, we develop an approach in the spirit of the classical Jurdjevic–Quinn theorem, showing how to steer the system to an invariant sublevel set of the Lyapunov function. The control function and the control domain are designed in terms of the Lie derivatives of the Lyapunov function, and enjoy sparsity properties in the sense that the control support is small. Finally, we show that our result applies to a large class of kinetic equations modelling multi-agent dynamics.
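    A minimal finite-dimensional illustration of the classical Jurdjevic–Quinn feedback that the paper generalizes to the mean-field setting: for a control-affine system xdot = f(x) + u g(x) with Lyapunov function V = |x|^2/2 satisfying L_f V = 0, the damping feedback u = -L_g V(x) makes V nonincreasing, and LaSalle-type arguments give convergence to an invariant sublevel set. The rotation drift and saturation bound below are illustrative assumptions.

```python
import numpy as np

# Jurdjevic-Quinn damping feedback sketch for xdot = f(x) + u*g(x).
# Here f(x) = A x with A skew-symmetric, so L_f V = 0 for V = 0.5*|x|^2,
# and g(x) = B is a constant vector field. The feedback u = -L_g V(x)
# = -x.B (saturated, to mimic a bounded control action) dissipates V.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric drift: conserves V
B = np.array([0.0, 1.0])                  # control vector field g(x) = B
u_max = 0.5                               # bounded control action

x = np.array([2.0, 0.0])
dt = 0.01
for _ in range(5000):
    u = -np.clip(x @ B, -u_max, u_max)    # u = -L_g V(x), saturated
    x += dt * (A @ x + u * B)

print("final |x| =", np.linalg.norm(x))   # V decays toward an invariant sublevel set
```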

    Stochastic Optimization For Multi-Agent Statistical Learning And Control

    The goal of this thesis is to develop a mathematical framework for optimal, accurate, and affordable-complexity statistical learning among networks of autonomous agents. We begin by noting the connection between statistical inference and stochastic programming, and consider extensions of this setup to settings in which a network of agents each observes a local data stream and would like to make decisions that are good with respect to information aggregated across the entire network. There is an open-ended degree of freedom in this problem formulation, however: the selection of the estimator function class that defines the feasible set of the stochastic program. Our central contribution is the design of stochastic optimization tools in reproducing kernel Hilbert spaces that yield optimal, accurate, and affordable-complexity statistical learning for a multi-agent network. To obtain this result, we first explore the relative merits and drawbacks of different function class selections.

    In Part I, we consider multi-agent expected risk minimization in the case where each agent seeks to learn a common globally optimal generalized linear model (GLM) by developing a stochastic variant of the Arrow-Hurwicz primal-dual method. We establish convergence to the primal-dual optimal pair when either consensus or "proximity" constraints encode the requirement that all agents agree, or that nearby agents make decisions close to one another. Empirically, we observe that these convergence results are substantiated, but that convergence may not translate into statistical accuracy: more broadly, optimality within a given estimator function class is not the same as making minimal inference errors. The optimality-accuracy tradeoff of GLMs motivates subsequent efforts to learn more sophisticated estimators based upon learned feature encodings of the data fed into the statistical model.

    The specific tool we turn to in Part II is dictionary learning, where we optimize both over regression weights and an encoding of the data, which yields a non-convex problem. We investigate the use of stochastic methods for online task-driven dictionary learning, and obtain promising performance for the task of a ground robot learning to anticipate control uncertainty based on its past experience. Heartened by this implementation, we then consider extensions of this framework in which a multi-agent network learns globally optimal task-driven dictionaries based on stochastic primal-dual methods. Here, however, the non-convexity of the optimization problem causes difficulties: stringent conditions on stochastic errors and the duality gap limit the applicability of the convergence guarantees, and impractically small learning rates are required for convergence in practice.

    Thus, in Part III, we seek to learn nonlinear statistical models while preserving convexity, which is possible through kernel methods. However, the increased descriptive power of nonparametric estimation comes at the cost of infinite complexity. We therefore develop a stochastic approximation algorithm in reproducing kernel Hilbert spaces (RKHS) that ameliorates this complexity issue while preserving optimality: we combine the functional generalization of the stochastic gradient method (FSGD) with greedily constructed low-dimensional subspace projections based on matching pursuit. We establish that the proposed method yields a controllable trade-off between optimality and memory, and yields highly accurate, parsimonious statistical models in practice. We then develop a multi-agent extension of this method by proposing a new node-separable penalty function and applying FSGD together with low-dimensional subspace projections. This extension allows a network of autonomous agents to learn a memory-efficient approximation to the globally optimal regression function based only on their local data streams and message passing with neighbors. In practice, we observe that agents are able to stably learn highly accurate and memory-efficient nonlinear statistical models from streaming data.

    From here, we shift focus to a more challenging class of problems, motivated by the fact that true learning is not just revising predictions based upon data but augmenting behavior over time based on temporal incentives. This goal may be described by Markov Decision Processes (MDPs): at each point, an agent is in some state of the world, takes an action, and then receives a reward while randomly transitioning to a new state. The goal of the agent is to select the action sequence that maximizes its long-term sum of rewards, but determining how to select this sequence when both the state and action spaces are infinite has eluded researchers for decades. As a precursor to this feat, we consider the problem of policy evaluation in infinite MDPs, in which we seek to determine the long-term sum of rewards obtained when starting in a given state and choosing actions according to a fixed distribution called a policy. We reformulate this problem as an RKHS-valued compositional stochastic program and develop a functional extension of the stochastic quasi-gradient algorithm operating in tandem with the greedy subspace projections mentioned above. We prove convergence with probability 1 to the Bellman fixed point restricted to this function class, and we observe a state-of-the-art trade-off between memory and Bellman error for the proposed method on the Mountain Car task. These results bode well for incorporating policy evaluation into more sophisticated, provably stable reinforcement learning techniques and, in time, for developing optimal collaborative multi-agent learning-based control systems.
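    As a rough illustration of the Part III idea (not the thesis' exact algorithm), the sketch below runs functional stochastic gradient descent in an RKHS for online regression, with a crude smallest-weight pruning rule standing in for the greedy matching-pursuit subspace projection; the toy data, kernel bandwidth, and budget are all assumptions.

```python
import numpy as np

# FSGD sketch in an RKHS: each sample adds one kernel center (the
# functional stochastic gradient step); a fixed memory budget is then
# enforced by dropping the center with the smallest |weight|, a crude
# stand-in for the matching-pursuit subspace projection in the thesis.

rng = np.random.default_rng(2)
kernel = lambda x, c: np.exp(-0.5 * (x - c) ** 2 / 0.1)   # Gaussian kernel

centers, weights = [], []
eta, lam, budget = 0.3, 1e-3, 25

def f_hat(x):
    return sum(w * kernel(x, c) for w, c in zip(weights, centers))

for t in range(2000):
    x_t = rng.uniform(-3, 3)
    y_t = np.sin(x_t) + 0.1 * rng.normal()             # streaming samples
    err = f_hat(x_t) - y_t
    weights = [(1 - eta * lam) * w for w in weights]   # regularization shrinkage
    centers.append(x_t)                                # FSGD: one new center per sample
    weights.append(-eta * err)
    if len(centers) > budget:                          # projection surrogate: prune
        k = int(np.argmin(np.abs(weights)))
        centers.pop(k); weights.pop(k)

xs = np.linspace(-3, 3, 7)
print(np.round([f_hat(x) for x in xs], 2))   # estimate roughly tracks sin(x)
print(np.round(np.sin(xs), 2))               # despite the fixed memory budget
```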