
    An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination

    This article reviews some of the main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, task assignment, and estimation. After the review, a short discussion section summarizes the existing research and proposes several promising research directions, along with some open problems that are deemed important for further investigation.
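
As a minimal illustration of the consensus direction surveyed above, the sketch below simulates the standard first-order discrete-time consensus update x_i(k+1) = x_i(k) + eps * sum_j a_ij (x_j(k) - x_i(k)); the graph, step size, and initial states are illustrative assumptions and are not taken from any particular reviewed paper.

```python
import numpy as np

# Standard first-order consensus update (illustrative, not from a specific
# surveyed paper): x_i(k+1) = x_i(k) + eps * sum_j a_ij * (x_j(k) - x_i(k)).
A = np.array([[0, 1, 0, 1],            # assumed undirected ring of 4 agents
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
eps = 0.2                               # step size; needs eps < 1/(max degree)
x = np.array([1.0, 3.0, -2.0, 4.0])     # arbitrary initial states

for _ in range(100):
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)    # every entry approaches the average of the initial states (1.5)
```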

    MAS-based Distributed Coordinated Control and Optimization in Microgrid and Microgrid Clusters: A Comprehensive Overview


    ADVANCES IN MULTI-AGENT FLOCKING: CONTINUOUS-TIME AND DISCRETE-TIME ALGORITHMS

    We present multi-agent control methods that address flocking in continuous-time and discrete-time settings. The methods are decentralized; that is, each agent's controller relies on local sensing to determine the relative positions and velocities of nearby agents. In the continuous-time setting, each agent has double-integrator dynamics. In the discrete-time setting, each agent has the discrete-time double-integrator dynamics obtained by sampling the continuous-time double integrator and applying a zero-order hold on the control input. We demonstrate through analysis, numerical simulations, and experiments that agents using the flocking methods converge to flocking formations and follow the centralized leader (if applicable).
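
The discrete-time model referred to above has a simple closed form: sampling the double integrator with period T under a zero-order hold gives p(k+1) = p(k) + T v(k) + (T^2/2) u(k) and v(k+1) = v(k) + T u(k). The sketch below combines this sampled model with a generic cohesion-plus-velocity-alignment input; the gains, sensing graph, and one-dimensional positions are illustrative assumptions, not the algorithms developed in the work above.

```python
import numpy as np

T = 0.1                                  # sampling period (assumed)
n = 4                                    # number of agents (assumed)
A = np.ones((n, n)) - np.eye(n)          # assumed all-to-all sensing graph
k_p, k_v = 0.5, 1.0                      # illustrative cohesion/alignment gains

rng = np.random.default_rng(1)
p = rng.standard_normal(n)               # positions (1-D for brevity)
v = rng.standard_normal(n)               # velocities

for _ in range(300):
    # Generic cohesion + velocity-alignment input (not the authors' law):
    u = np.array([sum(A[i, j] * (k_p * (p[j] - p[i]) + k_v * (v[j] - v[i]))
                      for j in range(n)) for i in range(n)])
    # Exact zero-order-hold discretization of the double integrator:
    p = p + T * v + 0.5 * T**2 * u
    v = v + T * u

print(np.ptp(p), np.ptp(v))   # both spreads shrink toward zero as agents flock
```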

    Distributed Cooperative Control of Multi-Agent Systems Under Detectability and Communication Constraints

    Cooperative control of multi-agent systems has recently gained widespread attention from the scientific community due to numerous applications in areas such as formation control of unmanned vehicles, cooperative attitude control of spacecraft, clustering of micro-satellites, and environmental monitoring and exploration by mobile sensor networks. The primary goal of a cooperative control problem for multi-agent systems is to design a decentralized control algorithm for each agent, relying on the local coordination of their actions to exhibit a collective behavior. Common challenges encountered in the study of cooperative control problems are unavailable group-level information and the limited bandwidth of the shared communication network. In this dissertation, we investigate one such cooperative control problem, namely cooperative output regulation, under various local- and global-level constraints arising from physical and communication limitations. The objective of the cooperative output regulation problem (CORP) for multi-agent systems is to design a distributed control strategy for the agents to synchronize their states with an external system, called the leader, in the presence of disturbance inputs. For the problem at hand, we additionally consider the scenario in which no agent can independently access the synchronization signal from its own view of the leader, and therefore the agents cannot achieve the group objective unless they cooperate with one another. To this end, we devise a novel distributed estimation algorithm to collectively reconstruct the leader states under the discussed detectability constraint, and then use this estimate to synthesize a distributed control solution to the problem. Next, we extend our results on the CORP to the case of uncertain agent dynamics arising from modeling errors. In addition to the detectability constraint, we also assume that the local regulated error signals are not available to the agents for feedback, and thus no agent has all the measurements required to independently synthesize a control solution. By combining the distributed observer with a control law based on the internal model principle, we offer a solution to the robust CORP under these added constraints. In practical applications of multi-agent systems, it is difficult to consistently maintain reliable communication between the agents. Considering this challenge, we study the CORP for the case when agents are connected through a time-varying communication topology. Due to the detectability constraint that no agent can independently access all the leader states at any switching instant, we devise a distributed estimation algorithm for the agents to collectively reconstruct the leader states. Using this estimate, a distributed dynamic control solution is then offered to solve the CORP under the added communication constraint. Since a fixed communication network is a special case of its time-varying counterpart, the offered control solution can be viewed as a generalization of the earlier results. To validate the preceding theoretical results, we apply the control algorithms to a practical case study on synchronizing the positions of networked motors under time-varying communication. Based on our experimental results, we also demonstrate the uniqueness of the derived control solutions.
    Another communication constraint affecting cooperative control performance is the presence of network delays. In this regard, we first study the distributed state estimation problem of an autonomous plant by a network of observers under heterogeneous time-invariant delays, and then extend the results to the time-varying counterpart. Using a low-gain estimation technique, we derive a sufficient stability condition in terms of an upper bound on the low-gain parameter or the time delay that guarantees the convergence of the estimation errors. Additionally, when the plant measurements are subject to bounded disturbances, we find that the local estimation errors also remain bounded. Using this estimation, we then present a distributed control solution for a leader-follower synchronization problem of a multi-agent system. Lastly, we present another case study concerning a synchronization control problem for a group of distributed generators in an islanded microgrid under unknown time-varying latency. Similar to the delayed-communication cases in the aforementioned works, we offer a low-gain distributed control protocol to synchronize the terminal voltages and inverter operating frequencies.
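
To make the flavor of the distributed estimation concrete, the following sketch shows a generic consensus-based observer in which each agent fuses its own partial measurement of the leader with its neighbors' estimates, so that no single agent needs to satisfy a detectability condition on its own. The leader model, output matrices, gains, and two-agent graph are illustrative assumptions and do not reproduce the specific designs described above.

```python
import numpy as np

# Leader dynamics x(k+1) = S x(k); agent i measures y_i = C_i x, seeing only
# part of the state (illustrative matrices, not the dissertation's design).
S = np.array([[1.0, 0.1],
              [0.0, 1.0]])                  # assumed leader system matrix
C = [np.array([[1.0, 0.0]]),                # agent 0 sees only the first state
     np.array([[0.0, 1.0]])]                # agent 1 sees only the second state
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # assumed two-agent communication graph
L = [np.array([[0.5], [0.0]]),              # illustrative output-injection gains
     np.array([[0.0], [0.5]])]
mu = 0.4                                    # illustrative consensus coupling gain

x = np.array([1.0, -0.5])                   # true leader state
xh = [np.zeros(2), np.zeros(2)]             # each agent's estimate of the leader

for _ in range(100):
    y = [Ci @ x for Ci in C]
    new = []
    for i in range(2):
        innov = L[i] @ (y[i] - C[i] @ xh[i])                    # local correction
        coupl = mu * sum(A[i, j] * (xh[j] - xh[i]) for j in range(2))
        new.append(S @ (xh[i] + innov + coupl))                 # predict + correct
    xh, x = new, S @ x

print(np.linalg.norm(xh[0] - x), np.linalg.norm(xh[1] - x))     # both errors -> 0
```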

    Problems in Control, Estimation, and Learning in Complex Robotic Systems

    In this dissertation, we consider a range of problems in systems, control, and learning theory and practice. In Part I, we look at problems in the control of complex networks. In Chapter 1, we consider the performance analysis of a class of linear noisy dynamical systems. In Chapter 2, we look at optimal design problems for these networks. In Chapter 3, we consider dynamical networks where interactions within the network occur randomly in time. In Chapter 4, the last chapter of this part, we look at dynamical networks wherein the coupling between the subsystems (or agents) changes nonlinearly based on the difference between the states of the subsystems. In Part II, we consider estimation problems that deal with a large number of variables (i.e., at large scale). This part starts with Chapter 5, in which we consider the problem of sampling from a dynamical network in space and time for initial state recovery. In Chapter 6, we consider a similar problem, with the difference that the observations, instead of point samples, are continuous observations taken over Lebesgue measurable sets. In Chapter 7, we consider an estimation problem in which the location of a robot during navigation is estimated using information from a large number of surrounding features, and we would like to select the most informative features using an efficient algorithm. In Part III, we look at active perception problems, which are approached using reinforcement learning techniques. This part starts with Chapter 8, in which we tackle a multi-agent reinforcement learning problem where the agents communicate and classify as a team. In Chapter 9, we consider a single-agent version of the same problem, wherein a layered architecture replaces the architectures of the previous chapter. We then use reinforcement learning to design the meta-layer (to select goals), the action-layer (to select local actions), and the perception-layer (to conduct classification).
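
Chapter 7's feature selection is described only at a high level; one common efficient approach to this kind of problem is a greedy rule that repeatedly adds the feature with the largest marginal information gain. The sketch below illustrates that idea under an assumed linear-Gaussian measurement model; the matrices and the greedy criterion are illustrative assumptions, not necessarily the algorithm used in the dissertation.

```python
import numpy as np

def greedy_feature_selection(H_list, R_list, prior_info, k):
    """Pick k features greedily, each time adding the one whose linear-Gaussian
    measurement most increases the log-determinant of the information matrix.
    (Illustrative sketch; not necessarily the dissertation's algorithm.)"""
    info = prior_info.copy()
    chosen = []
    for _ in range(k):
        best_gain, best_j = -np.inf, None
        for j, (H, R) in enumerate(zip(H_list, R_list)):
            if j in chosen:
                continue
            cand = info + H.T @ np.linalg.inv(R) @ H        # information update
            gain = np.linalg.slogdet(cand)[1] - np.linalg.slogdet(info)[1]
            if gain > best_gain:
                best_gain, best_j = gain, j
        chosen.append(best_j)
        info = info + H_list[best_j].T @ np.linalg.inv(R_list[best_j]) @ H_list[best_j]
    return chosen

# Toy usage: 2-D robot pose, 5 candidate features with random measurement models.
rng = np.random.default_rng(0)
H_list = [rng.standard_normal((1, 2)) for _ in range(5)]
R_list = [np.eye(1) * 0.1 for _ in range(5)]
print(greedy_feature_selection(H_list, R_list, np.eye(2), k=2))
```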

    Distributed Control of Networked Nonlinear Euler-Lagrange Systems

    Motivated by recent developments in formation and cooperative control of networked multi-agent systems, the main goal of this thesis is the development of efficient synchronization and formation control algorithms for distributed control of networked nonlinear systems whose dynamics can be described by Euler-Lagrange (EL) equations. The main challenges in the design of the formation control algorithm are its optimality, its robustness to parametric uncertainties and external disturbances, and its ability to reconfigure in the presence of component, actuator, or sensor faults. Furthermore, the controller should be capable of handling switchings in the communication network topology. In this work, nonlinear optimal control techniques are studied for developing distributed controllers for networked EL systems. An individual cost function is introduced to design a controller that relies only on local information exchanges among the agents. In the development of the controller, it is assumed that the communication graph is not fixed (in other words, the topology is switching). Additionally, parametric uncertainties and faults in the EL systems are considered, and two approaches, namely adaptive and robust techniques, are introduced to compensate for the effects of uncertainties and actuator faults. Next, a distributed H-infinity performance measure is considered to develop distributed robust controllers for uncertain networked EL systems. The developed distributed controller is obtained through rigorous analysis and by considering an individual cost function to enhance the robustness of the controllers in the presence of parametric uncertainties and external bounded disturbances. Moreover, a rigorous analysis is conducted on the performance of the developed controllers in the presence of actuator faults as well as fault diagnosis and identification (FDI) imperfections. Next, synchronization and set-point tracking control of networked EL systems are investigated in the presence of three constraints, namely, (i) input saturation constraints, (ii) unavailability of velocity feedback, and (iii) lack of knowledge of the system parameters. It is shown that the developed distributed controllers can meet the desired requirements and specifications under the above constraints. Finally, a quaternion-based approach is considered for the attitude synchronization and set-point tracking control problem of formation flying spacecraft. Employing the quaternion in the control law design enables handling of large rotations in the spacecraft attitude and therefore avoids singularities in the control laws. Furthermore, using the quaternion also enables one to guarantee boundedness of the control signals both with and without velocity feedback.
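
For a concrete sense of what a distributed law for networked Euler-Lagrange agents looks like, the sketch below applies a textbook gravity-compensating consensus-plus-damping torque to a group of single-link pendulums. The agent model, gains, and graph are illustrative assumptions, and the baseline law shown is not the optimal, adaptive, or robust controllers developed in this thesis.

```python
import numpy as np

# Each agent is a single-link pendulum (a simple Euler-Lagrange system):
#   m*l^2*qdd + b*qd + m*g*l*sin(q) = tau       (illustrative parameters)
m, l, b, g = 1.0, 0.5, 0.1, 9.81
n, dt = 3, 0.002
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)          # assumed communication graph
kp, kd = 4.0, 2.0                               # illustrative gains

q = np.array([0.5, -0.3, 1.0])                  # joint angles
qd = np.zeros(n)                                # joint velocities

for _ in range(5000):
    tau = np.zeros(n)
    for i in range(n):
        # Textbook baseline: gravity compensation + consensus term + damping
        # (not the thesis's optimal/adaptive/robust design).
        sync = sum(A[i, j] * (q[j] - q[i]) for j in range(n))
        tau[i] = m * g * l * np.sin(q[i]) + kp * sync - kd * qd[i]
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / (m * l**2)
    q, qd = q + dt * qd, qd + dt * qdd          # explicit Euler integration

print(q)                                        # angles converge to a common value
```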