Event-Triggered Algorithms for Leader-Follower Consensus of Networked Euler-Lagrange Agents
This paper proposes three different distributed event-triggered control
algorithms to achieve leader-follower consensus for a network of Euler-Lagrange
agents. We first propose two model-independent algorithms for a subclass of
Euler-Lagrange agents without the vector of gravitational potential forces. By
model-independent, we mean that each agent can execute its algorithm with no
knowledge of the agent self-dynamics. A variable-gain algorithm is employed
when the sensing graph is undirected; its parameters are selected in a fully
distributed manner, with far greater flexibility than in all previous work on
event-triggered consensus problems. When the sensing graph is
directed, a constant-gain algorithm is employed. The control gains must be
centrally designed to satisfy several lower-bound inequalities, which require
limited knowledge of bounds on the matrices describing the agent dynamics, on
the network topology, and on the initial conditions.
When the Euler-Lagrange agents have dynamics which include the vector of
gravitational potential forces, an adaptive algorithm is proposed which
requires more information about the agent dynamics but can estimate uncertain
agent parameters.
For each algorithm, a trigger function is proposed to govern the event update
times. At each event, the controller is updated, which ensures that the control
input is piecewise constant and saves energy resources. We analyse each
controller and trigger function and exclude Zeno behaviour. Extensive
simulations show 1) the advantages of our proposed trigger function compared
with those in the existing literature, and 2) the effectiveness of our
proposed controllers. Comment: Extended manuscript of a journal submission,
containing the omitted proofs and simulations.
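For a sense of how such a trigger function operates, the following is a minimal
Python sketch of event-triggered leader-follower consensus. As simplifying
assumptions, it uses single-integrator agents on an undirected line graph with
the leader pinned to agent 0 (not the paper's Euler-Lagrange dynamics), and the
gain k_gain and decaying threshold c0 * exp(-lam * t) are illustrative choices,
not the paper's design.

    import numpy as np

    N, T, dt = 4, 4000, 0.005
    k_gain = 2.0                # control gain (illustrative)
    c0, lam = 0.5, 1.0          # event threshold: c0 * exp(-lam * t)

    # Undirected line graph among followers; leader pinned to agent 0.
    A = np.zeros((N, N))
    for i in range(N - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    b = np.array([1.0, 0.0, 0.0, 0.0])      # leader pinning gains

    rng = np.random.default_rng(1)
    x = rng.standard_normal(N)              # follower states
    x_leader = 0.0                          # static leader
    x_hat = x.copy()                        # last broadcast states, held between events
    n_events = np.zeros(N, dtype=int)

    # Initial event at t = 0: every agent broadcasts and computes its input.
    u = np.array([-k_gain * (A[i] @ (x_hat[i] - x_hat)
                             + b[i] * (x_hat[i] - x_leader)) for i in range(N)])

    for step in range(T):
        t = step * dt
        for i in range(N):
            # Trigger: broadcast error exceeds the decaying threshold.
            if abs(x_hat[i] - x[i]) > c0 * np.exp(-lam * t):
                x_hat[i] = x[i]             # event: refresh broadcast state
                # Controller updated only at events, so the input is
                # piecewise constant between events.
                u[i] = -k_gain * (A[i] @ (x_hat[i] - x_hat)
                                  + b[i] * (x_hat[i] - x_leader))
                n_events[i] += 1
        x = x + dt * u                      # integrate the held inputs

    print("tracking errors:", np.round(x - x_leader, 3))
    print("events per agent:", n_events)

With these illustrative values the tracking errors shrink towards zero while
each agent recomputes its control input only at its own event times.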
QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations
The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of Q-learning,
QD-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is shown to yield, almost surely (a.s.), the desired value
function and the optimal stationary control policy asymptotically at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest. Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages
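To make the consensus + innovations structure concrete, the following is a
minimal Python sketch of a QD-learning-style update: at each step every agent
mixes a consensus term (its disagreement with neighbours' Q-estimates) with an
innovation term (its local temporal difference), under mixed time-scale step
sizes in which the consensus weight decays more slowly than the innovation
weight. The randomly generated MDP, the ring communication graph, the uniform
exploration of actions, and the step-size coefficients and exponents are
illustrative assumptions, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    nS, nA, N, gamma = 5, 3, 4, 0.9          # states, actions, agents, discount

    # Random global state transitions; each agent has its own mean one-stage cost.
    P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] -> next-state dist.
    cost_mean = rng.uniform(0.0, 1.0, size=(N, nS, nA))

    # Sparse ring communication graph.
    neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

    Q = np.zeros((N, nS, nA))                # one Q-estimate per agent
    s = 0
    for t in range(1, 100_000):
        a = rng.integers(nA)                 # uniform exploration (illustrative)
        s_next = rng.choice(nS, p=P[s, a])
        beta = 0.3 / t**0.6                  # consensus weight (decays slowly)
        alpha = 1.0 / t                      # innovation weight (decays faster)
        Q_new = Q.copy()
        for i in range(N):
            c_i = cost_mean[i, s, a] + 0.1 * rng.standard_normal()  # noisy local cost
            consensus = sum(Q[i, s, a] - Q[j, s, a] for j in neighbors[i])
            innovation = c_i + gamma * Q[i, s_next].min() - Q[i, s, a]
            Q_new[i, s, a] = Q[i, s, a] - beta * consensus + alpha * innovation
        Q = Q_new
        s = s_next

    # Agents should roughly agree, approximating the network-averaged optimum.
    print("max inter-agent disagreement:", np.abs(Q - Q.mean(axis=0)).max())
    print("greedy (cost-minimising) policy at agent 0:", Q[0].argmin(axis=1))

The consensus term pulls each agent's estimate towards its neighbours', while
the innovation term injects local cost information; the slower decay of the
consensus weight is what lets the agents agree before the learning freezes.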
- …