Fixed-Time Connectivity Preserving Tracking Consensus of Multiagent Systems with Disturbances
This paper studies the fixed-time tracking consensus problem for nonlinear multiagent systems with disturbances. To achieve fixed-time tracking consensus, a distributed control protocol based on integral sliding mode control is proposed; meanwhile, adjacent followers are kept within a limited sensing range. Using the nonsmooth analysis method, sufficient conditions for fixed-time consensus, together with upper and lower bounds on the convergence time, are obtained. An example is given to illustrate the correctness of the main results. © 2020 Fenglan Sun et al.
Suboptimal Event-Triggered Consensus of Multiagent Systems
In this paper, the suboptimal event-triggered consensus problem of multiagent systems is investigated. Using the combinational measurement approach, each agent updates its control input only at its own event time instants, so the total number of events and the number of controller updates can be significantly reduced in practice. Then, motivated by the goals of increasing the consensus rate and reducing the number of triggering events, a time-average cost for the agent system is proposed and a suboptimal approach is developed to determine the triggering condition. The effectiveness of the proposed strategy is illustrated by numerical examples.
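The abstract's core idea, updating each agent's control input only at its own event times, can be illustrated with a toy simulation. This is a generic event-triggered consensus sketch, not the paper's combinational-measurement protocol; the graph, the threshold constant `sigma`, and the triggering rule are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): single-integrator agents
# on a fixed undirected path graph. Each agent holds its control input constant
# between events and triggers a new event when its measurement error exceeds a
# state-dependent threshold; sigma is an assumed design constant.

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)          # adjacency matrix
L = np.diag(A.sum(axis=1)) - A                     # graph Laplacian

x = np.array([1.0, -2.0, 0.5, 3.0])                # initial states
x_hat = x.copy()                                   # last-broadcast states
u = -L @ x_hat                                     # piecewise-constant inputs
sigma, dt, events = 0.5, 0.01, 0

for _ in range(5000):
    x = x + dt * u                                 # continuous dynamics (Euler step)
    err = np.abs(x_hat - x)                        # measurement error per agent
    thresh = sigma * np.abs(L @ x_hat)             # state-dependent threshold
    trig = err > thresh
    if trig.any():
        x_hat[trig] = x[trig]                      # event: refresh broadcast state
        u = -L @ x_hat                             # controller updated only at events
        events += int(trig.sum())

print(float(x.max() - x.min()), events)            # disagreement shrinks; events are sparse
```

Between events the input is frozen, so communication and controller updates happen only when the error actually warrants them, which is the practical saving the abstract emphasizes.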
Connectivity Preservation in Multi-Agent Systems using Model Predictive Control
Flocking of multiagent systems is one of the basic behaviors in the field of control of multiagent systems and an essential element of many real-life applications. Such systems under various network structures and environment modes have been extensively studied over the past decades. Navigation of agents in a leader-follower structure while operating in environments with obstacles is particularly challenging, and one of the main challenges is preserving connectivity. The gradient descent method is widely used to achieve this goal, but its main shortcoming in the leader-follower structure is the need for continuous data transmission between agents and/or the preservation of a fixed connection topology. In this research, we propose a model predictive controller based on a potential field that maintains the connectivity of a flock of agents in a leader-follower structure with dynamic topology. The agents navigate through an environment with obstacles that form a path leading to a certain target. The control technique avoids collisions among the followers without using any communication links while they follow their leader, which navigates the environment using potential functions that model neighbors and obstacles. The potential field is dynamically updated by introducing weight variables in order to preserve connectivity among the followers, as we assume only the leader knows the target position. The values of these weights are changed in real time, according to the trajectories of the agents, once the critical neighbors of each agent are determined. We compare the performance of our predictive-control-based algorithm with other approaches. The results show that our algorithm causes the agents to reach the target in less time, although it faces more deadlock cases when the agents traverse relatively narrow paths. Because our controller accounts for input costs, the group reaching the target faster does not necessarily mean that the followers consume more energy than the leader.
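The potential-field mechanism the abstract builds on can be sketched in its simplest form: an agent descends the gradient of an attractive potential toward the leader plus a repulsive potential around an obstacle. This is a minimal illustration, not the paper's MPC scheme with connectivity weights; the gains `k_att`, `k_rep`, the influence radius `rho`, and all positions are assumed values.

```python
import numpy as np

# Minimal potential-field sketch: one follower attracted to the leader's
# position and repelled from a single obstacle, moving along the negative
# gradient of the combined potential. All gains and geometry are assumptions.

def grad_potential(p, leader, obstacle, k_att=1.0, k_rep=0.5, rho=1.5):
    g = k_att * (p - leader)                        # attractive gradient
    d = np.linalg.norm(p - obstacle)
    if d < rho:                                     # repulsion active only inside rho
        g += k_rep * (1.0 / rho - 1.0 / d) * (p - obstacle) / d**3
    return g

p = np.array([0.0, 0.0])                            # follower position
leader = np.array([5.0, 0.0])
obstacle = np.array([2.5, 0.2])                     # slightly off the direct line

for _ in range(4000):
    p = p - 0.005 * grad_potential(p, leader, obstacle)

print(np.round(p, 2))                               # follower settles near the leader
```

The paper goes further by wrapping such potentials into a predictive controller and reweighting them online to keep critical neighbor links alive, but the gradient-descent core is the same.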
Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks
We study nonconvex distributed optimization in multiagent networks where the
communication between nodes is modeled as a time-varying sequence of arbitrary
digraphs. We introduce a novel broadcast-based distributed algorithmic
framework for the (constrained) minimization of the sum of a smooth (possibly
nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a
convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually
employed to enforce some structure in the solution, typically sparsity. The
proposed method hinges on Successive Convex Approximation (SCA) techniques
coupled with i) a tracking mechanism instrumental to locally estimate the
gradients of agents' cost functions; and ii) a novel broadcast protocol to
disseminate information and distribute the computation among the agents.
Asymptotic convergence to stationary solutions is established. A key feature of
the proposed algorithm is that it neither requires the double-stochasticity of
the consensus matrices (but only column stochasticity) nor the knowledge of the
graph sequence to implement. To the best of our knowledge, the proposed
framework is the first broadcast-based distributed algorithm for convex and
nonconvex constrained optimization over arbitrary, time-varying digraphs.
Numerical results show that our algorithm outperforms current schemes on both
convex and nonconvex problems.
Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th annual Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, US
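The abstract's key relaxation, needing only column-stochastic (not doubly stochastic) consensus matrices over a digraph, rests on push-sum-style ratio consensus. The sketch below shows that mechanism in isolation on a fixed digraph; it is a generic illustration, not the paper's full SCA algorithm, and the weight matrix `C` is an assumed example.

```python
import numpy as np

# Push-sum averaging with a column-stochastic weight matrix over a strongly
# connected digraph. Each node updates a value and a scalar weight with the
# same matrix; the ratio recovers the network average even though C is not
# doubly stochastic.

C = np.array([[0.5, 0.0, 0.3],
              [0.2, 0.7, 0.0],
              [0.3, 0.3, 0.7]])                    # columns sum to 1 (column stochastic)
assert np.allclose(C.sum(axis=0), 1.0)

x = np.array([4.0, -1.0, 6.0])                     # local values; true average is 3
w = np.ones(3)                                     # auxiliary push-sum weights

for _ in range(200):
    x = C @ x                                      # value update
    w = C @ w                                      # weight update
z = x / w                                          # ratio converges to the average

print(np.round(z, 3))
```

In gradient-tracking schemes of this family, the same ratio trick is applied to the tracked gradient estimates, which is what removes the double-stochasticity requirement.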
An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination
This article reviews some main results and progress in distributed
multi-agent coordination, focusing on papers published in major control systems
and robotics journals since 2006. Distributed coordination of multiple
vehicles, including unmanned aerial vehicles, unmanned ground vehicles and
unmanned underwater vehicles, has been a very active research subject studied
extensively by the systems and control community. The recent results in this
area are categorized into several directions, such as consensus, formation
control, optimization, task assignment, and estimation. After the review, a
short discussion section is included to summarize the existing research and to
propose several promising research directions along with some open problems
that are deemed important for further investigations
QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations
The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of Q-learning,
QD-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is almost surely (a.s.) shown to yield asymptotically the
desired value function and the optimal stationary control policy at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest.
Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages
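The "consensus + innovations" update form the abstract highlights mixes two terms with different step sizes: a consensus term penalizing disagreement with neighbors, and an innovation term incorporating new local observations. The sketch below shows that structure on a deliberately simplified problem (two agents estimating a common scalar), not the paper's Q-value recursion; the gains `beta`, `alpha`, and the noise model are assumptions.

```python
import numpy as np

# Toy consensus + innovations recursion: two agents each see noisy local
# observations of a common parameter theta and mix (i) a consensus term with
# their neighbor and (ii) an innovation term from the new observation. The
# consensus gain beta decays more slowly than the innovation gain alpha,
# mirroring the mixed time scales mentioned in the abstract.

rng = np.random.default_rng(0)
theta = 2.0                                        # unknown common parameter
est = np.array([0.0, 5.0])                         # local estimates

for k in range(1, 20001):
    beta, alpha = 0.5 / k**0.6, 1.0 / k            # consensus and innovation gains
    obs = theta + rng.normal(0.0, 1.0, size=2)     # noisy local observations
    disagree = est - est[::-1]                     # disagreement with the neighbor
    est = est - beta * disagree + alpha * (obs - est)

print(np.round(est, 2))                            # both estimates approach theta
```

In QD-learning proper, the same two-term, two-time-scale structure is applied to each agent's Q-function entries rather than to a scalar estimate.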
Distributed sampled-data control of nonholonomic multi-robot systems with proximity networks
This paper considers the distributed sampled-data control problem of a group
of mobile robots connected via distance-induced proximity networks. A dwell
time is assumed in order to avoid chattering in the neighbor relations that may
be caused by abrupt changes of positions when updating information from
neighbors. Distributed sampled-data control laws are designed based on nearest
neighbor rules, which in conjunction with continuous-time dynamics result in
hybrid closed-loop systems. For uniformly and independently distributed initial states, a
sufficient condition is provided to guarantee synchronization for the system
without leaders. In order to steer all robots to move with the desired
orientation and speed, we then introduce a number of leaders into the system,
and quantitatively establish the proportion of leaders needed to track either
constant or time-varying signals. All these conditions depend only on the
neighborhood radius, the maximum initial moving speed and the dwell time,
without assuming a priori properties of the neighbor graphs, as is done in most
of the existing literature.
Comment: 15 pages, 3 figures
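The sampled-data nearest-neighbor rule over a proximity network can be illustrated with a Vicsek-style toy model: at each sampling instant every robot resets its heading to the average heading of all robots within a fixed radius, then moves at constant speed until the next sample. This is a generic sketch, not the paper's hybrid scheme with dwell times and leaders; the radius, speed, sampling period, and agent count are assumed values.

```python
import numpy as np

# Sampled-data nearest-neighbor heading synchronization on a distance-induced
# proximity graph. Headings are averaged only at sampling instants; between
# samples each robot moves at unit-scaled speed along its current heading.

rng = np.random.default_rng(1)
n, radius, speed, T = 20, 2.0, 0.1, 0.5            # agents, sensing radius, speed, period
pos = rng.uniform(0, 4, size=(n, 2))               # initial positions in a 4x4 box
theta = rng.uniform(-0.5, 0.5, size=n)             # initial headings (radians)

for _ in range(200):                               # 200 sampling periods
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    nbr = d <= radius                              # proximity graph (includes self)
    theta = np.array([theta[nbr[i]].mean() for i in range(n)])
    pos += T * speed * np.column_stack([np.cos(theta), np.sin(theta)])

print(float(theta.max() - theta.min()))            # heading spread shrinks toward zero
```

The paper's dwell-time assumption addresses exactly the chattering this toy model ignores: here the neighbor sets may switch at every sample, whereas the paper forces neighbor relations to persist long enough to avoid abrupt switching.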