    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agent dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. In the proposed resilient reinforcement learning algorithm, each agent uses its confidence value to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which weight the data they receive from it accordingly during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
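
    Below is a minimal Python sketch of the trust/confidence-weighted data fusion idea described in this abstract: each agent broadcasts a confidence value computed from its own local evidence, and neighbors weight or discard the data they receive accordingly. The single-integrator dynamics, confidence rule, and trust threshold are illustrative assumptions, not the paper's H_infinity/reinforcement-learning formulation.

```python
# Sketch of a trust/confidence-weighted consensus step (assumed simplification).
import numpy as np

def confidence(residual, kappa=5.0):
    # Confidence in an agent's own data: near 1 when its local
    # attack-detection residual is small, decaying toward 0 as it grows.
    return np.exp(-kappa * np.abs(residual))

def trusted_consensus_step(x, A, conf, trust_threshold=0.3, gain=0.2):
    """One synchronization update where agent i discounts neighbor j's
    state by the confidence value c_j that j broadcasts, and drops
    neighbors whose confidence falls below a trust threshold."""
    n = len(x)
    x_next = x.copy()
    for i in range(n):
        update = 0.0
        for j in range(n):
            if A[i, j] > 0 and conf[j] >= trust_threshold:
                update += A[i, j] * conf[j] * (x[j] - x[i])
        x_next[i] = x[i] + gain * update
    return x_next

# Example: 4 agents on a ring; agent 2 is compromised (low confidence).
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
x = np.array([0.0, 1.0, 10.0, 2.0])            # agent 2 broadcasts corrupted data
residuals = np.array([0.01, 0.02, 2.5, 0.03])  # large local residual at agent 2
conf = confidence(residuals)
for _ in range(50):
    x = trusted_consensus_step(x, A, conf)
print(x)  # healthy agents synchronize while discounting agent 2
```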

    High-Order Leader-Follower Tracking Control under Limited Information Availability

    Limited information availability represents a fundamental challenge for control of multi-agent systems, since an agent often lacks the sensing capability to measure some of its own states and can exchange data only with its neighbors. The challenge becomes even greater when agents are governed by high-order dynamics. The present work addresses control design for linear and nonlinear high-order leader-follower multi-agent systems in a setting where only the first state of an agent is measured. To address this open challenge, we develop novel distributed observers that enable followers to reconstruct unmeasured or unknown quantities about themselves and the leader and, on that basis, build observer-based tracking control approaches. We analyze the convergence properties of the proposed approaches and validate their performance through simulations.
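
    The following Python sketch illustrates the observer-based tracking idea in a stripped-down form: a second-order follower measures only its first state, reconstructs the rest with a Luenberger-type observer, and tracks a leader trajectory using the estimates. The double-integrator model, gains, and leader signal are assumptions for illustration, not the paper's distributed observers.

```python
# Sketch: observer-based tracking with only the first state measured (assumed model).
import numpy as np

dt = 0.01
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double-integrator follower dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # only the first state is measured
L = np.array([[20.0], [100.0]])     # observer gains (assumed values)
K = np.array([[4.0, 4.0]])          # tracking gains (assumed values)

x = np.array([[0.0], [0.0]])        # true follower state
xhat = np.array([[0.5], [0.0]])     # observer state (wrong initial guess)

for k in range(2000):
    t = k * dt
    # Leader reference and its input (assumed available through the network).
    x_lead = np.array([[np.sin(t)], [np.cos(t)]])
    u_lead = -np.sin(t)
    # Control uses the estimated state, not the true (unmeasured) one.
    u = u_lead - (K @ (xhat - x_lead))
    y = C @ x
    # Plant and observer updates (Euler discretization).
    x    = x    + dt * (A @ x    + B * u)
    xhat = xhat + dt * (A @ xhat + B * u + L @ (y - C @ xhat))

print(np.hstack([x, xhat]))  # estimate and true state both converge to the leader
```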

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking controller for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt the deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
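
    As an illustration of the reward shaping described above, the Python sketch below combines a flocking-maintenance term, a mutual reward for keeping neighbors in range, and a collision penalty for a single UAV. The weights and distance thresholds are assumed values, not those used in the thesis.

```python
# Sketch of a per-UAV flocking reward (assumed weights and thresholds).
import numpy as np

def flocking_reward(pos_i, neighbor_pos, leader_pos,
                    d_safe=0.5, d_flock=3.0,
                    w_track=1.0, w_mutual=0.5, w_collide=10.0):
    # Flocking maintenance: penalize distance to the leader (or flock centroid).
    track = -w_track * np.linalg.norm(pos_i - leader_pos)

    mutual, collide = 0.0, 0.0
    for p_j in neighbor_pos:
        d = np.linalg.norm(pos_i - p_j)
        # Mutual reward: bonus for keeping each neighbor within flocking range.
        if d <= d_flock:
            mutual += w_mutual
        # Collision penalty: large negative reward when too close.
        if d < d_safe:
            collide -= w_collide
    return track + mutual + collide

# Example: one UAV with two neighbors and a leader at the origin.
r = flocking_reward(np.array([1.0, 0.5]),
                    [np.array([1.4, 0.6]), np.array([0.2, 0.1])],
                    np.array([0.0, 0.0]))
print(r)
```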

    Vision-based control of multi-agent systems

    Scope and Methodology of Study: Creating systems with multiple autonomous vehicles places severe demands on the design of decision-making supervisors, cooperative control schemes, and communication strategies. In recent years, several approaches have been developed in the literature. Most of them solve the vehicle coordination problem assuming some form of communication between team members. However, communication makes the group sensitive to failure and restricts the applicability of the controllers to teams of friendly robots. This dissertation deals with the problem of designing decentralized controllers that use only local sensor information to achieve group goals. Findings and Conclusions: This dissertation presents a decentralized architecture for vision-based stabilization of unmanned vehicles moving in formation. The architecture consists of two main components: (i) a vision system, and (ii) vision-based control algorithms. The vision system is capable of recognizing and localizing robots. It is a model-based scheme composed of three main components: image acquisition and processing, robot identification, and pose estimation. Using vision information, we address the problem of stabilizing groups of mobile robots in leader-follower or two-leader-follower formations. The strategies use the relative pose between a robot and its designated leader or leaders to achieve formation objectives. Several leader-follower formation control algorithms, which ensure asymptotic coordinated motion, are described and compared. Lyapunov stability analysis and numerical simulations in a realistic three-dimensional environment show the stability properties of the control approaches.
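
    The Python sketch below gives a simplified flavor of a vision-driven leader-follower law: the only inputs are the range and bearing to the leader, which the vision system would supply, plus the leader's commanded motion. The unicycle model, gains, and proportional structure are illustrative assumptions rather than the dissertation's exact controllers.

```python
# Sketch of a range/bearing leader-follower command (assumed gains and model).
import numpy as np

def follower_cmd(rng, bearing, leader_v, leader_w,
                 rng_des=1.0, bearing_des=np.deg2rad(135.0),
                 k_rng=1.0, k_bear=2.0):
    """Return (v, w) for a unicycle follower so that the measured
    range/bearing to the leader move toward the desired formation values."""
    e_rng = rng_des - rng
    e_bear = np.arctan2(np.sin(bearing_des - bearing),
                        np.cos(bearing_des - bearing))  # wrap to [-pi, pi]
    # Feedforward on the leader's motion plus proportional feedback on the
    # formation errors (a simplification of the dissertation's controllers).
    v = leader_v - k_rng * e_rng
    w = leader_w - k_bear * e_bear
    return v, w

v, w = follower_cmd(rng=1.4, bearing=np.deg2rad(150.0),
                    leader_v=0.3, leader_w=0.0)
print(v, w)
```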

    A Survey and Analysis of Cooperative Multi-Agent Robot Systems: Challenges and Directions

    Research in the area of cooperative multi-agent robot systems has received wide attention in recent years. The main concern is to achieve effective coordination among autonomous agents performing a task so as to attain high overall performance. This paper therefore reviews selected literature, primarily from recent conference proceedings and journals, related to cooperation and coordination of multi-agent robot systems (MARS). The problems, issues, and directions of MARS research are investigated in this review. Three main elements of MARS, namely the types of agents, control architectures, and communications, are discussed thoroughly at the beginning of the paper. A series of problems and issues is then analyzed and reviewed, including centralized and decentralized control, consensus, containment, formation, task allocation, intelligence, optimization, and communication of multi-agent robots. Since research in the field of multi-agent robots is expanding, open issues and future challenges in MARS are recalled, discussed, and clarified with future directions. Finally, the paper concludes with some recommendations with respect to multi-agent systems.

    Estimator-based adaptive neural network control of leader-follower high-order nonlinear multiagent systems with actuator faults

    The problem of distributed cooperative control for networked multiagent systems is investigated in this paper. Each agent is modeled as an uncertain nonlinear high-order system incorporating model uncertainty, unknown external disturbance, and actuator faults. The communication network between followers can be an undirected or a directed graph, and only some of the follower agents can obtain commands from the leader. To develop the distributed cooperative control algorithm, a prefilter is designed, which transforms the state-space representation into a newly constructed plant. Then, a set of distributed adaptive neural network controllers is designed by modifying traditional backstepping techniques with the aid of adaptive control, neural network control, and a second-order sliding mode estimator. Rigorous proofs are provided, which show that uniform ultimate boundedness of all tracking errors is achieved in the networked multiagent system. Finally, a numerical simulation is carried out to evaluate the theoretical results.
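
    The Python sketch below isolates one ingredient of the approach described above: an RBF neural network that approximates an unknown nonlinearity online, with weights adapted from the tracking error. It uses a single first-order agent and omits the backstepping, fault-tolerance, prefilter, and multiagent aspects; all gains, centers, and widths are assumed values.

```python
# Sketch of online RBF neural-network adaptive tracking control (assumed setup).
import numpy as np

dt, gamma, k = 0.001, 50.0, 5.0
centers = np.linspace(-2.0, 2.0, 9)      # RBF centers (assumed)
width = 0.5

def phi(x):
    # Gaussian RBF regressor vector evaluated at the current state.
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def f_true(x):                           # unknown nonlinearity (simulation only)
    return 0.5 * np.sin(2.0 * x) + 0.2 * x ** 2

x, W = 0.0, np.zeros_like(centers)       # plant state and NN weights
for i in range(20000):
    t = i * dt
    xd, xd_dot = np.sin(t), np.cos(t)    # reference trajectory
    e = x - xd
    # Control: cancel the NN estimate of f(x), add reference feedforward
    # and error feedback.
    u = -W @ phi(x) + xd_dot - k * e
    # Adaptive law: weights driven by the tracking error (robust terms omitted).
    W = W + dt * gamma * phi(x) * e
    # Uncertain plant: x_dot = f(x) + u (disturbance and faults omitted here).
    x = x + dt * (f_true(x) + u)

print(abs(x - np.sin(20000 * dt)))       # small tracking error after adaptation
```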