Iterative learning control for multi-agent systems with impulsive consensus tracking
In this paper, we adopt D-type and PD-type learning laws, together with an initial-state rule at each iteration, to solve the uniform tracking problem for multi-agent systems subject to impulsive inputs. For the multi-agent system with impulses, we show that the proposed learning laws drive all agents to a given asymptotical consensus as the iteration number increases, provided the virtual leader has a path to every follower agent. Finally, an example of tracking a continuous or piecewise continuous desired trajectory illustrates the effectiveness of the method.
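The iteration-domain learning idea behind D-type laws can be sketched in a much simpler setting than the paper's impulsive multi-agent one: a single scalar plant whose input trajectory is refined across trials. All plant parameters, the gain, and the trajectory below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of a D-type ILC update for a scalar, non-impulsive plant
# x[t+1] = a*x[t] + b*u[t] -- a simplification of the paper's impulsive
# multi-agent setting. All numbers below are illustrative assumptions.
a, b, gamma = 0.5, 1.0, 0.5
T = 20
yd = np.sin(np.linspace(0.0, np.pi, T + 1))   # desired trajectory; yd[0] = x0 = 0

def run(u):
    """Simulate one trial with input sequence u (length T)."""
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    return x

u = np.zeros(T)
for k in range(100):            # iteration (trial) axis
    e = yd - run(u)             # tracking error of trial k
    u = u + gamma * e[1:]       # D-type law: u_{k+1}(t) = u_k(t) + gamma*e_k(t+1)
final_err = np.max(np.abs(yd - run(u)))
```

Convergence along the iteration axis hinges on the contraction condition |1 - gamma*b| < 1; each single trial may track poorly, but the error shrinks geometrically from trial to trial.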
Consensus tracking problem for linear fractional multi-agent systems with initial state error
In this paper, we discuss the consensus tracking problem by introducing two iterative learning control (ILC) protocols (namely, Dα-type and PDα-type) with initial state error for fractional-order homogeneous and heterogeneous multi-agent systems (MASs), respectively. The initial state of each agent is fixed at the same position, away from the desired one, over all iterations. For both homogeneous and heterogeneous MASs, the Dα-type ILC rule is first designed and analyzed, and its asymptotical convergence property is carefully derived. Then, an additional P-type component is added to formulate a PDα-type ILC rule, which also guarantees asymptotical consensus performance. Moreover, it turns out that the PDα-type ILC rule can further adjust the final tracking performance. Two numerical examples are provided to verify the theoretical results.
Data Driven Distributed Bipartite Consensus Tracking for Nonlinear Multiagent Systems via Iterative Learning Control
This article explores a data-driven distributed bipartite consensus tracking (DBCT) problem for discrete-time multi-agent systems (MASs) with coopetition networks under repeatable operations. To solve this problem, a time-varying linearization model along the iteration axis is first established by using the measured input and output (I/O) data of agents. Then a data-driven distributed bipartite consensus iterative learning control (DBCILC) algorithm is proposed, considering both fixed and switching topologies. Compared with existing bipartite consensus methods, the main characteristic is that the proposed control protocol is constructed without requiring any explicit or implicit information about the MASs' mathematical model. The difference from existing iterative learning control (ILC) approaches is that both cooperative and antagonistic interactions, as well as time-varying switching topologies, are considered. Furthermore, rigorous theoretical analysis shows that the proposed DBCILC approach guarantees bipartite consensus, with tracking errors reduced within a limited number of iterations. Moreover, although not all agents can receive information from the virtual leader directly, the proposed distributed scheme maintains performance while reducing communication costs. The results of three examples further illustrate the correctness, effectiveness, and applicability of the proposed algorithm.
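Bipartite consensus itself (the two camps agreeing on values of equal magnitude and opposite sign) can be illustrated with a model-based signed-Laplacian iteration in the style of Altafini's model, a deliberately simpler stand-in for the paper's model-free DBCILC protocol. The graph, initial states, and step size below are assumptions for the sketch.

```python
import numpy as np

# Toy bipartite consensus on a signed graph: positive edges = cooperation,
# negative edges = antagonism. This graph is structurally balanced with
# camps {0, 1} and {2, 3}, so the signed-Laplacian iteration converges to
# two opposite values. NOT the paper's data-driven protocol.
A = np.array([[ 0.0,  1.0, -1.0,  0.0],
              [ 1.0,  0.0,  0.0, -1.0],
              [-1.0,  0.0,  0.0,  1.0],
              [ 0.0, -1.0,  1.0,  0.0]])
L = np.diag(np.abs(A).sum(axis=1)) - A       # signed Laplacian
x = np.array([3.0, -1.0, 2.0, 0.5])          # initial states (assumed)
eps = 0.2                                    # step size (assumed)
for _ in range(200):
    x = x - eps * (L @ x)                    # x_{k+1} = (I - eps*L) x_k
```

Under the gauge transformation that flips the signs of camp {2, 3}, this iteration reduces to ordinary average consensus, which is why the limit is ±|c| with c determined by the initial states.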
Distributed Big-Data Optimization via Block Communications
We study distributed multi-agent large-scale optimization problems, wherein the cost function is composed of a smooth, possibly nonconvex, sum-utility plus a DC (Difference-of-Convex) regularizer. We consider the scenario where the dimension of the optimization variables is so large that optimizing and/or transmitting the entire set of variables could cause unaffordable computation and communication overhead. To address this issue, we propose the first distributed algorithm whereby agents optimize and communicate only a portion of their local variables. The scheme hinges on successive convex approximation (SCA) to handle the nonconvexity of the objective function, coupled with a novel block-signal tracking scheme, aiming at locally estimating the average of the agents' gradients. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Numerical results on a sparse regression problem show the effectiveness of the proposed algorithm and the impact of the block size on its practical convergence speed and communication cost.
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
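The gradient-tracking mechanism both of these papers build on can be sketched in its basic, non-block form (in the style of DIGing-type methods) for scalar quadratic local costs. The network, mixing weights, and step size are illustrative assumptions; the block selection and SCA surrogates of the papers are omitted.

```python
import numpy as np

# Sketch of gradient tracking: each agent i keeps an estimate y_i of the
# network-average gradient alongside its decision variable x_i.
# Local costs f_i(x) = 0.5*(x - b_i)^2, so the global minimizer is mean(b).
b = np.array([1.0, 2.0, 3.0, 6.0])          # local data; minimizer of sum f_i is mean(b) = 3
grad = lambda x: x - b                       # stacked local gradients
W = np.array([[0.50, 0.25, 0.00, 0.25],     # doubly stochastic mixing matrix (ring, assumed)
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros(4)
y = grad(x)                                  # tracker initialized with local gradients
alpha = 0.05                                 # step size (assumed)
for _ in range(2000):
    x_next = W @ x - alpha * y               # consensus step + move along tracked gradient
    y = W @ y + grad(x_next) - grad(x)       # tracking update: sum_i y_i stays = sum_i grad_i
    x = x_next
```

The invariant that the trackers always sum to the current total gradient is what lets each agent descend along an estimate of the global gradient using only neighbor communication.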
On the genericity properties in networked estimation: Topology design and sensor placement
In this paper, we consider networked estimation of linear, discrete-time dynamical systems monitored by a network of agents. In order to minimize the power requirement at the (possibly, battery-operated) agents, we require that the agents exchange information with their neighbors only once per dynamical-system time-step, in contrast to consensus-based estimation, where the agents exchange information until they reach a consensus. It can be verified that, with this restriction on information exchange, measurement fusion alone results in an unbounded estimation error at every agent that does not have an observable set of measurements in its neighborhood. To overcome this challenge, state-estimate fusion has been proposed to recover system observability. However, we show that adding state-estimate fusion may not recover observability when the system matrix is structured-rank (S-rank) deficient. In this context, we characterize state-estimate fusion and measurement fusion under both full S-rank and S-rank deficient system matrices.
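The S-rank in question is a purely combinatorial quantity: the size of a maximum matching between rows and columns over the matrix's nonzero pattern, which upper-bounds the numerical rank for generic parameter values. A minimal self-contained sketch (the same quantity that `scipy.sparse.csgraph.structural_rank` computes):

```python
def structural_rank(pattern):
    """Size of a maximum row-column matching over the nonzero pattern
    (Kuhn's augmenting-path algorithm)."""
    n_rows, n_cols = len(pattern), len(pattern[0])
    match_col = [-1] * n_cols          # match_col[c] = row matched to column c

    def augment(r, seen):
        for c in range(n_cols):
            if pattern[r][c] != 0 and not seen[c]:
                seen[c] = True         # visit each column once per augmentation
                if match_col[c] == -1 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return sum(augment(r, [False] * n_cols) for r in range(n_rows))

# A pattern that is generically full rank vs. one that is S-rank deficient
# (rows 0 and 1 can only ever be matched to the same column 0):
full_pattern      = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # S-rank 3
deficient_pattern = [[1, 0, 0], [1, 0, 0], [0, 1, 1]]   # S-rank 2
```

Because the S-rank depends only on the sparsity pattern, an S-rank deficient system matrix stays rank deficient for every choice of the free parameters, which is why no fusion rule can recover observability in that case.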
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous, with a nonzero control input, which allows changing the team behavior or mission in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node, as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which weight the data they receive from that agent accordingly, during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and remove the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
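The confidence-weighting idea, stripped of the paper's H_infinity and reinforcement-learning machinery, reduces to scaling each received value by the sender's broadcast confidence, so a hijacked node with low confidence is effectively removed from the fusion. The ring topology, gains, leader value, and attack below are all hypothetical choices for the sketch.

```python
import numpy as np

# Toy confidence-weighted leader-tracking consensus (NOT the paper's design):
# each agent scales a neighbor's value by that neighbor's confidence, so the
# hijacked agent (index 3, broadcasting garbage with confidence 0) is ignored.
n = 5
A = np.zeros((n, n))
for i in range(n):                           # ring topology (assumption)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
leader_ref = 5.0                             # leader's value (assumption)
pin = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # only agent 0 hears the leader
conf = np.ones(n)
attacked = 3
conf[attacked] = 0.0                         # low confidence from local evidence
x = np.zeros(n)
eps = 0.2
for _ in range(500):
    x_new = x.copy()
    for i in range(n):
        u = sum(A[i, j] * conf[j] * (x[j] - x[i]) for j in range(n))
        u += pin[i] * (leader_ref - x[i])    # leader pinning term
        x_new[i] = x[i] + eps * u
    x_new[attacked] = 100.0                  # hijacked node broadcasts garbage
    x = x_new
healthy = [i for i in range(n) if i != attacked]
```

Because the healthy agents remain connected to the pinned agent once the attacked node is weighted out, they still synchronize to the leader despite the persistent attack signal.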
Iterative learning control for impulsive multi-agent systems with varying trial lengths
In this paper, we introduce iterative learning control (ILC) schemes with varying trial lengths (VTL) to control impulsive multi-agent systems (I-MAS). We use a domain alignment operator to characterize each tracking error, ensuring that the full error is available to update the control function at each iteration. Then we analyze the system's uniform convergence to the target leader. Further, we use two local average operators to optimize the control function so that it makes full use of the iteration error. Finally, numerical examples are provided to verify the theoretical results.
Accelerated Consensus via Min-Sum Splitting
We apply the Min-Sum message-passing protocol to solve the consensus problem
in distributed optimization. We show that while the ordinary Min-Sum algorithm
does not converge, a modified version of it known as Splitting yields
convergence to the problem solution. We prove that a proper choice of the
tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated
convergence rates, matching the rates obtained by shift-register methods. The
acceleration scheme embodied by Min-Sum Splitting for the consensus problem
bears similarities with lifted Markov chains techniques and with multi-step
first order methods in convex optimization
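The shift-register baseline whose rates Min-Sum Splitting matches can be sketched directly: a two-step consensus recursion that reuses the previous iterate, with the momentum weight set from the spectral gap. The ring topology and lazy weights below are assumptions, and this is the baseline, not the Min-Sum protocol itself.

```python
import numpy as np

# Plain consensus x_{k+1} = W x_k vs. the two-step "shift-register" variant
# x_{k+1} = beta*W x_k + (1-beta)*x_{k-1}, with beta chosen from the
# spectral gap of W. Same number of matrix-vector products for both.
n = 20
W = np.zeros((n, n))
for i in range(n):                           # lazy uniform weights on a ring
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
x0 = np.arange(n, dtype=float)
avg = x0.mean()

lam2 = 0.5 + 0.5 * np.cos(2.0 * np.pi / n)       # second-largest eigenvalue of W
beta = 2.0 / (1.0 + np.sqrt(1.0 - lam2 ** 2))    # optimal two-step weight

x_plain = x0.copy()
for _ in range(100):
    x_plain = W @ x_plain                        # ordinary (diffusive) consensus

x_prev, x_acc = x0.copy(), W @ x0
for _ in range(99):                              # shift-register recursion
    x_prev, x_acc = x_acc, beta * (W @ x_acc) + (1.0 - beta) * x_prev

err_plain = np.abs(x_plain - avg).max()
err_acc = np.abs(x_acc - avg).max()
```

On the ring, plain consensus mixes diffusively (its rate degrades with the square of the diameter), while the two-step recursion achieves the subdiffusive, square-root improvement referenced in the abstract.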