Pose consensus based on dual quaternion algebra with application to decentralized formation control of mobile manipulators
This paper presents a solution based on dual quaternion algebra to the
general problem of pose (i.e., position and orientation) consensus for systems
composed of multiple rigid-bodies. The dual quaternion algebra is used to model
the agents' poses and also in the distributed control laws, making the proposed
technique easily applicable to time-varying formation control of general
robotic systems. The proposed pose consensus protocol has guaranteed
convergence whenever the interaction among the agents is represented by a
directed graph containing a directed spanning tree, a weaker connectivity
requirement than is typically assumed in the formation control literature. To
illustrate the proposed pose consensus protocol and its extension to the
problem of formation control, we present a numerical simulation with a large
number of free-flying agents and an application of cooperative manipulation
using real mobile manipulators.
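The consensus idea behind the abstract can be sketched numerically. The following is a minimal illustration, not the paper's actual protocol: it represents each agent's pose as a unit dual quaternion (an 8-vector) and runs a plain first-order linear consensus update on the coefficients over a directed ring, which contains a directed spanning tree. A faithful implementation would operate on the dual quaternion group itself (e.g., via its logarithm); the helper names `qmul` and `pose_to_dq` and all gains are hypothetical.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def pose_to_dq(quat, trans):
    """Unit dual quaternion (8-vector) from a rotation quaternion and a translation."""
    t = np.array([0.0, *trans])          # translation as a pure quaternion
    return np.concatenate([quat, 0.5 * qmul(t, quat)])

# Directed ring 0<-1<-2<-0: contains a directed spanning tree.
neighbors = {0: [1], 1: [2], 2: [0]}

# Initial poses: identity rotations, different translations (illustrative).
poses = [pose_to_dq(np.array([1.0, 0, 0, 0]), t)
         for t in ([0, 0, 0], [2, 0, 0], [0, 4, 0])]

k = 0.2  # consensus gain (illustrative)
for _ in range(500):
    # Each agent moves its coefficients toward those of its in-neighbors.
    poses = [x + k * sum(poses[j] - x for j in neighbors[i])
             for i, x in enumerate(poses)]

spread = max(np.linalg.norm(poses[i] - poses[j])
             for i in range(3) for j in range(3))
```

Under this spanning-tree topology, the spread between any two agents' pose coordinates contracts geometrically to zero, which is the qualitative behavior the abstract claims for the full dual-quaternion protocol.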
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower
multi-agent systems under uncertainties and cyber-physical attacks. The leader
is assumed non-autonomous with a nonzero control input, which allows changing
the team behavior or mission in response to environmental changes. A resilient
learning-based control protocol is presented to find optimal solutions to the
synchronization problem in the presence of attacks and system dynamic
uncertainties. An observer-based distributed H_infinity controller is first
designed to prevent propagating the effects of attacks on sensors and actuators
throughout the network, as well as to attenuate the effect of these attacks on
the compromised agent itself. Non-homogeneous game algebraic Riccati equations
are derived to solve the H_infinity optimal synchronization problem and
off-policy reinforcement learning is utilized to learn their solution without
requiring any knowledge of the agent's dynamics. A trust-confidence based
distributed control protocol is then proposed to mitigate attacks that hijack
the entire node and attacks on communication links. A confidence value is
defined for each agent based solely on its local evidence. In the proposed
resilient reinforcement learning algorithm, each agent's confidence value
indicates the trustworthiness of its own information and is broadcast to its
neighbors, which use it to weight the data they receive from that agent during
and after learning. If an agent's confidence value is low, the agent employs a
trust mechanism to identify compromised neighbors and excludes the data
received from them from the learning process. Simulation results are provided
to show the effectiveness of the proposed approach.
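The trust mechanism described above can be caricatured in a few lines. This is a stand-in, not the paper's learned confidence/trust values or its H_infinity design: healthy follower agents run leader-pinned consensus, and each agent simply discards neighbor data that deviates too far from its own local estimate, which isolates a hijacked node that broadcasts a corrupted value. The agent count, gains, threshold, and attack value are all illustrative.

```python
# 4 cooperative followers (agents 0-3) plus a hijacked node (agent 4);
# the non-autonomous leader's reference is fixed at 1.0 for simplicity.
LEADER = 1.0
ATTACKER = 4
n = 5
state = [0.0, 0.5, 1.5, 2.0, 0.0]    # initial local states
k, trust_threshold = 0.1, 5.0        # gain and trust cutoff (illustrative)

def broadcast(j):
    # A hijacked node injects a large bias into everything it sends.
    return 100.0 if j == ATTACKER else state[j]

for _ in range(300):
    new = state[:]
    for i in range(n):
        if i == ATTACKER:
            continue
        u = 0.0
        for j in range(n):
            if j == i:
                continue
            data = broadcast(j)
            # Trust mechanism: ignore data far from the local estimate
            # (crude stand-in for the paper's confidence-weighted scheme).
            if abs(data - state[i]) < trust_threshold:
                u += data - state[i]
        if i == 0:                    # agent 0 is pinned to the leader
            u += LEADER - state[i]
        new[i] = state[i] + k * u
    state = new

healthy = [state[i] for i in range(n) if i != ATTACKER]
```

Despite the attacker continuously broadcasting 100.0, the healthy agents reject its data and synchronize to the leader's value, illustrating the resilience property the abstract targets.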
Second-Order Consensus of Networked Mechanical Systems With Communication Delays
In this paper, we consider the second-order consensus problem for networked
mechanical systems subject to nonuniform communication delays, where the
systems are assumed to interact over a general directed topology. We
propose an adaptive controller plus a distributed velocity observer to realize
the objective of second-order consensus. It is shown that both the positions
and velocities of the mechanical agents synchronize, and furthermore, the
velocities of the mechanical agents converge to the scaled weighted average
value of their initial ones. We further demonstrate that the proposed
second-order consensus scheme can be used to solve the leader-follower
synchronization problem with a constant-velocity leader and under constant
communication delays. Simulation results are provided to illustrate the
performance of the proposed adaptive controllers.
Comment: 16 pages, 5 figures, submitted to IEEE Transactions on Automatic
Control
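The flavor of delayed second-order consensus can be sketched as follows. This is not the paper's adaptive controller with a distributed velocity observer; it is a minimal discrete-time simulation of double-integrator agents on a directed ring, each applying relative position and velocity feedback computed from a delayed sample of its neighbor's state. The time step, gains, and delay length are illustrative assumptions.

```python
from collections import deque

# Three double-integrator "mechanical" agents on a directed ring; each
# receives its neighbor's (position, velocity) with a communication delay.
dt, delay_steps = 0.01, 5          # 0.05 s uniform delay (illustrative)
kp, kv = 1.0, 2.0                  # position / velocity feedback gains
neighbor = {0: 1, 1: 2, 2: 0}      # directed ring topology

x = [0.0, 2.0, -1.0]               # initial positions
v = [1.0, 0.0, -0.5]               # initial velocities
# Per-agent buffers of past broadcasts, modeling the transmission delay.
hist = [deque([(x[i], v[i])] * delay_steps) for i in range(3)]

for _ in range(5000):
    u = []
    for i in range(3):
        xj, vj = hist[neighbor[i]][0]            # delayed neighbor sample
        u.append(kp * (xj - x[i]) + kv * (vj - v[i]))
    for i in range(3):
        hist[i].append((x[i], v[i]))             # broadcast current state
        hist[i].popleft()
        v[i] += dt * u[i]                        # double-integrator update
        x[i] += dt * v[i]

x_spread = max(abs(x[i] - x[j]) for i in range(3) for j in range(3))
v_spread = max(abs(v[i] - v[j]) for i in range(3) for j in range(3))
```

For these small gains and delay, both the positions and the velocities of the agents synchronize, matching the qualitative behavior the abstract establishes for the full adaptive scheme.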