
    Second-Order Consensus of Networked Mechanical Systems With Communication Delays

    In this paper, we consider the second-order consensus problem for networked mechanical systems subject to nonuniform communication delays and interacting on a general directed topology. We propose an adaptive controller combined with a distributed velocity observer to achieve second-order consensus. It is shown that both the positions and velocities of the mechanical agents synchronize, and furthermore, that the velocities converge to a scaled weighted average of their initial values. We further demonstrate that the proposed second-order consensus scheme can be used to solve the leader-follower synchronization problem with a constant-velocity leader under constant communication delays. Simulation results are provided to illustrate the performance of the proposed adaptive controllers. Comment: 16 pages, 5 figures, submitted to IEEE Transactions on Automatic Control
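
    As a rough illustration of the setting described above, the sketch below simulates double-integrator agents running a standard second-order consensus protocol over a directed ring with uniformly delayed neighbor information. It does not reproduce the paper's adaptive controller or distributed velocity observer; the gains, delay, and topology are hypothetical choices.

```python
import numpy as np
from collections import deque

# Illustrative sketch only: discrete-time double-integrator agents with a
# second-order consensus protocol over a directed ring, using neighbor
# information delayed by a fixed number of steps. Gains (kp, kv), the delay,
# and the topology are assumptions, not values from the paper.

np.random.seed(0)
n, dt = 4, 0.01                 # number of agents, integration step (s)
delay_steps, steps = 20, 4000   # 0.2 s communication delay, 40 s simulation
kp, kv = 1.0, 2.0               # position / velocity coupling gains

# Directed ring: agent i receives information only from agent i-1.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = 1.0

x = np.random.randn(n)          # positions
v = np.random.randn(n)          # velocities
buf_x = deque([x.copy() for _ in range(delay_steps + 1)], maxlen=delay_steps + 1)
buf_v = deque([v.copy() for _ in range(delay_steps + 1)], maxlen=delay_steps + 1)

for _ in range(steps):
    xd, vd = buf_x[0], buf_v[0]  # neighbor states delayed by delay_steps
    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0.0:
                u[i] += kp * (xd[j] - x[i]) + kv * (vd[j] - v[i])
    x = x + dt * v
    v = v + dt * u
    buf_x.append(x.copy())
    buf_v.append(v.copy())

# With suitable gains and a small enough delay, positions synchronize and the
# common velocity is close to a weighted average of the initial velocities.
print("positions: ", np.round(x, 3))
print("velocities:", np.round(v, 3))
```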

    An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination

    This article reviews the main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, task assignment, and estimation. After the review, a short discussion section summarizes the existing research and proposes several promising research directions along with some open problems deemed important for further investigation.

    Comprehensive review on controller for leader-follower robotic system

    This paper presents a comprehensive review of leader-follower robotic systems. The aim is to identify and elaborate on current trends in swarm robotic, leader-follower, and multi-agent systems. The review also examines the trends in controllers used by previous researchers in leader-follower systems; the controllers most commonly applied are adaptive and nonlinear controllers. The paper further surveys the platforms studied, which typically include multi-robot, multi-agent, space-flying, reconfigurable, multi-legged, and unmanned systems. Finally, it considers the topologies employed in simulation and experimental studies.

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking controller for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
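
    The abstract describes a reward combining flocking maintenance, a mutual reward, and a collision penalty. The sketch below shows one plausible way such a per-UAV reward could be encoded; the function signature, weights, and distance thresholds are hypothetical and are not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of a per-UAV flocking reward in the spirit of the
# abstract: flocking-maintenance term, mutual (alignment) reward, and a
# collision penalty. All constants and names below are assumptions.

D_REF = 2.0        # desired inter-UAV spacing (m), assumed
D_COLLIDE = 0.5    # collision threshold (m), assumed
W_FLOCK, W_MUTUAL, W_COLLIDE = 1.0, 0.5, 10.0   # assumed weights

def flocking_reward(pos_i, vel_i, neighbor_pos, neighbor_vel, leader_vel):
    """Scalar reward for one follower, computed from its local observations."""
    dists = np.linalg.norm(neighbor_pos - pos_i, axis=1)

    # Flocking maintenance: penalize deviation from the desired spacing.
    flock_term = -np.mean((dists - D_REF) ** 2)

    # Mutual reward: encourage velocity alignment with neighbors and the leader.
    mean_vel = neighbor_vel.mean(axis=0)
    mutual_term = -np.linalg.norm(vel_i - 0.5 * (mean_vel + leader_vel))

    # Collision penalty: fixed negative reward if any neighbor is too close.
    collision_term = -1.0 if np.any(dists < D_COLLIDE) else 0.0

    return W_FLOCK * flock_term + W_MUTUAL * mutual_term + W_COLLIDE * collision_term

# Example: a follower slightly misaligned with the leader and two neighbors.
r = flocking_reward(
    pos_i=np.array([0.0, 0.0]),
    vel_i=np.array([1.0, 0.2]),
    neighbor_pos=np.array([[2.0, 0.0], [0.0, 2.1]]),
    neighbor_vel=np.array([[1.0, 0.0], [0.9, 0.1]]),
    leader_vel=np.array([1.0, 0.0]),
)
print("reward:", round(r, 3))
```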