An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination
This article reviews some main results and progress in distributed
multi-agent coordination, focusing on papers published in major control systems
and robotics journals since 2006. Distributed coordination of multiple
vehicles, including unmanned aerial vehicles, unmanned ground vehicles and
unmanned underwater vehicles, has been a very active research subject studied
extensively by the systems and control community. The recent results in this
area are categorized into several directions, such as consensus, formation
control, optimization, task assignment, and estimation. After the review, a
short discussion section is included to summarize the existing research and to
propose several promising research directions along with some open problems
that are deemed important for further investigation.
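Consensus, the first research direction listed above, can be illustrated with a minimal sketch. The graph, step size, and initial states below are assumptions chosen for illustration, not taken from any surveyed paper: each agent repeatedly nudges its state toward its neighbours' states, x_i ← x_i + ε Σ_j a_ij (x_j − x_i).

```python
import numpy as np

# Undirected ring of 4 agents: adjacency matrix (an assumed topology).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
eps = 0.2                           # step size, below 1/max-degree for stability
x = np.array([1.0, 4.0, 2.0, 7.0])  # initial scalar states

for _ in range(200):
    # A @ x - deg * x  equals  sum_j a_ij * (x_j - x_i), the Laplacian update
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)  # all entries converge to the initial average, 3.5
```

Because the graph is undirected and connected, the update preserves the state sum, so all agents agree on the average of the initial states.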
Cooperative Control Reconfiguration in Networked Multi-Agent Systems
The development of networks of autonomous cooperating vehicles has attracted significant
attention during the past few years due to its broad range of applications in areas
such as autonomous underwater vehicles for deep-ocean exploration, satellite formations
for space missions, and mobile robots in industrial sites where human involvement
is impossible or restricted. The stringent specifications and requirements on the depth,
speed, position, or attitude of the team, together with the possibility of unexpected
actuator and sensor faults during missions, have motivated the research in this thesis
on cooperative fault-tolerant control design for autonomous networked vehicles.
First, a multi-agent system under a fixed and undirected network topology and subject
to actuator faults is studied. A reconfigurable control law is proposed and the so-called
distributed Hamilton-Jacobi-Bellman equations for the faulty agents are derived. Then,
the reconfigured controller gains are designed by solving these equations subject to the
faulty agent dynamics as well as the network structural constraints to ensure that the
agents can reach a consensus even in the presence of a fault, while simultaneously
minimizing the team performance index.
Next, a multi-agent network subject to simultaneous as well as subsequent actuator
faults, under a fixed directed topology and subject to bounded-energy disturbances, is considered. An H∞ performance fault recovery control strategy is proposed that guarantees:
the state consensus errors remain bounded, the output of the faulty system behaves
exactly the same as that of the healthy system, and the specified H∞ performance bound
is guaranteed to be minimized. Towards this end, the reconfigured control law gains
are selected first by employing a geometric control approach where a set of controllers
guarantees that the output of the faulty agent imitates that of the healthy agent and the
consensus achievement objectives are satisfied. Then, the remaining degrees of freedom
in the selection of the control law gains are used to minimize the bound on a specified
H∞ performance index.
Then, the control reconfiguration problem for a team subject to directed switching
network topologies, actuator faults, and uncertainties in the fault severity estimates is considered.
The consensus achievement of the faulty network is transformed into two stability
problems, one of which can be solved offline while the other must be solved online,
using information that each agent receives from the fault detection and
identification module. Using quadratic and convex hull Lyapunov functions, the control
gains are designed and selected such that team consensus achievement is guaranteed
while the upper bound on the team cost performance index is minimized.
Finally, a team of non-identical agents subject to actuator faults is considered. A
distributed output feedback control strategy is proposed which guarantees that the
agents' outputs follow the outputs of the exo-system and that the agents' states remain
stable even when the agents are subject to different actuator faults.
Distributed Model Reference Control for Cooperative Tracking of Vehicle Platoons Subjected to External Disturbances and Bounded Leader Input
This paper proposes a distributed model reference controller (DMRC) for cooperative tracking of vehicle platoons subjected to unknown external disturbances and bounded, non-zero leader input. The vehicle-to-vehicle communication network topology is assumed to be directed and to contain at least one spanning tree with the leader as the root node. The proposed scheme uses the cooperative tracking error as a virtual reference for each follower. The main control system is designed using the cooperative tracking error and the cooperative disagreement error to attenuate the effects of the unknown external disturbance and to allow for bounded leader input. Through a detailed stability analysis, the global disagreement error is shown to be uniformly ultimately bounded, so that the states of each follower synchronize to the leader states with bounded residual error, and input-to-state string stability (ISSS) of the platoon is guaranteed. Performance verification is conducted through simulations and validates the efficacy of the DMRC for the vehicle platoon problem.
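Leader-follower cooperative tracking of the kind described above can be illustrated with a minimal first-order sketch. This is not the DMRC scheme itself; the chain topology, pinning gain, step size, and constant leader state are all assumptions chosen for illustration:

```python
import numpy as np

# Directed chain: the leader pins follower 1, which informs follower 2,
# which informs follower 3 (a spanning tree rooted at the leader).
A = np.array([[0, 0, 0],          # follower 1 hears no other follower
              [1, 0, 0],          # follower 2 hears follower 1
              [0, 1, 0]], float)  # follower 3 hears follower 2
b = np.array([1.0, 0.0, 0.0])     # pinning gains: only follower 1 sees the leader
x = np.array([5.0, -2.0, 8.0])    # follower states
x0 = 1.0                          # constant leader state (an assumption)
eps = 0.4                         # step size

for _ in range(400):
    # local cooperative tracking error: neighbour disagreement + pinned leader error
    e = A @ x - A.sum(axis=1) * x + b * (x0 - x)
    x = x + eps * e

print(x)  # all followers synchronize to the leader state, 1.0
```

Because the communication graph contains a spanning tree rooted at the leader, the tracking errors contract along the chain and every follower converges to the leader state.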
Cooperative control of a network of multi-vehicle unmanned systems
The development of networks of unmanned systems is currently among the most important areas of activity and research, with implications in a variety of disciplines such as communications, control, and multi-vehicle systems. The main motivation for this interest can be traced back to practical applications in which direct human involvement may not be possible due to environmental hazards or the extraordinary complexity of the tasks. This thesis seeks to develop, design, and analyze techniques and solutions that guarantee the stringent fundamental requirements envisaged for these dynamical networks. In this thesis, the problem of team cooperation is solved by using synthesis-based approaches. The consensus problem is defined and solved for a team of agents having a general linear dynamical model. Stability of the team is guaranteed by using modified consensus algorithms that are obtained by minimizing a set of individual cost functions. An alternative optimal consensus algorithm is derived by invoking a state decomposition methodology and transforming the consensus-seeking problem into a stabilization problem. In another methodology, a game-theoretic approach is used to formulate the consensus-seeking problem in a "more" cooperative framework. For this purpose, a team cost function is defined and a min-max problem is solved to obtain a cooperative optimal solution. It is shown that the results obtained yield lower cost values when compared to those obtained by using the optimal control technique. In both the game-theoretic and optimal control approaches developed based on state decomposition, linear matrix inequalities are used to impose simultaneously the decentralized nature of the problem as well as the consensus constraints on the designed controllers. Moreover, the performance and stability properties of the designed cooperative team are analyzed in the presence of actuator anomalies corresponding to three types of faults.
The steady-state behavior of the team members is analyzed under faulty scenarios. The adaptability of the team members to these unanticipated circumstances is demonstrated and verified. Finally, the assumption of a fixed and undirected network topology is relaxed to address a more realistic and practical situation. It is shown that stability and consensus achievement of the network can still be attained under a switching structure and leader assignment. Moreover, by introducing additional criteria, the desirable performance specifications of the team can still be ensured.
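The switching-topology consensus result mentioned above can also be illustrated with a minimal first-order sketch. The two alternating graphs, step size, and initial states are assumptions for illustration only: even though the network changes at every step, the agents still agree because each graph is connected and the step size is small enough.

```python
import numpy as np

# Two connected undirected graphs on 3 agents that the network switches between.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], float)  # path graph 1-2-3
A2 = np.array([[0, 0, 1],
               [0, 0, 1],
               [1, 1, 0]], float)  # star graph centred at agent 3
x = np.array([0.0, 3.0, 9.0])      # initial states
eps = 0.3                           # step size, below 1/max-degree for both graphs

for k in range(300):
    A = A1 if k % 2 == 0 else A2    # topology switches at every step
    x = x + eps * (A @ x - A.sum(axis=1) * x)  # Laplacian-based update

print(x)  # agents agree on the initial average, 4.0
```

Since both graphs are undirected, each update preserves the state sum, so the agreement value is the average of the initial states despite the switching.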