
    Distributed Cooperative Control of Multi-Agent Systems Under Detectability and Communication Constraints

    Cooperative control of multi-agent systems has recently gained widespread attention from the scientific community due to numerous applications in areas such as formation control of unmanned vehicles, cooperative attitude control of spacecraft, clustering of micro-satellites, and environmental monitoring and exploration by mobile sensor networks. The primary goal of a cooperative control problem for multi-agent systems is to design a decentralized control algorithm for each agent that relies on local coordination of the agents' actions to produce a collective behavior. Common challenges encountered in the study of cooperative control problems are unavailable group-level information and the limited bandwidth of the shared communication network. In this dissertation, we investigate one such cooperative control problem, namely cooperative output regulation, under various local- and global-level constraints arising from physical and communication limitations. The objective of the cooperative output regulation problem (CORP) for multi-agent systems is to design a distributed control strategy for the agents to synchronize their states with an external system, called the leader, in the presence of disturbance inputs. For the problem at hand, we additionally consider the scenario in which no agent can independently access the synchronization signal from its own view of the leader, so the agents cannot achieve the group objective unless they cooperate with one another. To this end, we devise a novel distributed estimation algorithm to collectively reconstruct the leader states under this detectability constraint, and then use the estimate to synthesize a distributed control solution to the problem. Next, we extend our results on the CORP to the case of uncertain agent dynamics arising from modeling errors. In addition to the detectability constraint, we also assume that the local regulated error signals are not available to the agents for feedback, so no agent has all the measurements required to independently synthesize a control solution. By combining the distributed observer with a control law based on the internal model principle, we offer a solution to the robust CORP under these added constraints. In practical applications of multi-agent systems, it is difficult to consistently maintain reliable communication between agents. Motivated by this challenge, we study the CORP for the case in which the agents are connected through a time-varying communication topology. Under the detectability constraint that no agent can independently access all the leader states at any switching instant, we devise a distributed estimation algorithm for the agents to collectively reconstruct the leader states. Using this estimate, we then offer a distributed dynamic control solution to the CORP under the added communication constraint. Since a fixed communication network is a special case of the time-varying one, the offered control solution can be viewed as a generalization of the earlier results. To validate these theoretical results, we apply the control algorithms to a practical case study on synchronizing the positions of networked motors under time-varying communication. Based on our experimental results, we also demonstrate the uniqueness of the derived control solutions.
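
A minimal numerical sketch of one common distributed-observer structure for this kind of detectability constraint is given below (Python/NumPy; not necessarily the dissertation's exact algorithm). Two agents each measure a different component of a two-state leader, so neither can estimate the leader on its own; each observer combines a local output-injection term with a consensus term on its neighbor's estimate. The leader dynamics and all gains are illustrative assumptions.

```python
# Minimal sketch (not the dissertation's exact algorithm): two agents jointly
# estimate a constant 2-state leader signal w when each agent can only measure
# one component of it (the detectability constraint).  Each observer mixes a
# local output-injection term with a consensus term on the neighbor's estimate.
import numpy as np

S = np.zeros((2, 2))                 # leader dynamics: w_dot = S w (constant leader here)
C = [np.array([[1.0, 0.0]]),         # agent 1 sees only w[0]
     np.array([[0.0, 1.0]])]         # agent 2 sees only w[1]
L = [np.array([[2.0], [0.0]]),       # local output-injection gains (assumed values)
     np.array([[0.0], [2.0]])]
mu, dt, steps = 3.0, 1e-3, 20000     # consensus gain, step size, horizon (assumed)

w = np.array([1.5, -0.7])            # true leader state
w_hat = [np.zeros(2), np.zeros(2)]   # each agent's estimate

for _ in range(steps):
    y = [Ci @ w for Ci in C]                         # local measurements
    new = []
    for i in range(2):
        j = 1 - i                                    # the single neighbor
        innov = (L[i] @ (y[i] - C[i] @ w_hat[i])).ravel()
        consensus = mu * (w_hat[j] - w_hat[i])
        new.append(w_hat[i] + dt * (S @ w_hat[i] + innov + consensus))
    w_hat = new
    w = w + dt * (S @ w)                             # leader evolves (static here)

print("estimation errors:", [np.linalg.norm(w - wh) for wh in w_hat])
```

Both estimation errors decay to zero even though neither agent alone has a detectable pair, which is the behavior the distributed estimation step relies on.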
Another communication constraint affecting cooperative control performance is the presence of network delays. In this regard, we first study the distributed state estimation problem of an autonomous plant by a network of observers under heterogeneous time-invariant delays and then extend it to the time-varying counterpart. Using a low-gain estimation technique, we derive a sufficient stability condition, in terms of an upper bound on the low-gain parameter or the time delay, that guarantees convergence of the estimation errors. Additionally, when the plant measurements are subject to bounded disturbances, we show that the local estimation errors also remain bounded. Then, using this estimation, we present a distributed control solution for a leader-follower synchronization problem of a multi-agent system. Finally, we present another case study concerning a synchronization control problem for a group of distributed generators in an islanded microgrid under unknown time-varying latency. As in the delayed-communication settings above, we offer a low-gain distributed control protocol to synchronize the terminal voltages and inverter operating frequencies.
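
To illustrate the low-gain idea under communication delays, the sketch below modifies the previous observer so that each agent only sees a stale copy of its neighbor's estimate and couples to it through a small gain. The delay lengths and the low-gain parameter are arbitrary assumed values, not the bounds derived in the text.

```python
# Illustrative low-gain, delay-tolerant variant of the observer above.
# The sufficient conditions in the text relate the low-gain parameter and the
# delay bound; here they are simply fixed small numbers chosen by hand.
import numpy as np
from collections import deque

S = np.zeros((2, 2))
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
L = [np.array([[2.0], [0.0]]), np.array([[0.0], [2.0]])]
eps, dt, steps = 0.5, 1e-3, 40000            # low-gain coupling parameter (assumed)
delay_steps = [200, 350]                     # heterogeneous delays, in samples (assumed)

w = np.array([1.5, -0.7])
w_hat = [np.zeros(2), np.zeros(2)]
# ring buffers of past broadcasts, so each neighbor only reads stale estimates
hist = [deque([np.zeros(2)] * d, maxlen=d) for d in delay_steps]

for _ in range(steps):
    y = [Ci @ w for Ci in C]
    new = []
    for i in range(2):
        j = 1 - i
        delayed_neighbor = hist[j][0]                    # agent j's estimate, delay_steps[j] ago
        innov = (L[i] @ (y[i] - C[i] @ w_hat[i])).ravel()
        coupling = eps * (delayed_neighbor - w_hat[i])   # low-gain, delayed consensus term
        new.append(w_hat[i] + dt * (S @ w_hat[i] + innov + coupling))
    for i in range(2):
        hist[i].append(w_hat[i])                         # broadcast current estimate
        w_hat[i] = new[i]

print("errors under delay:", [np.linalg.norm(w - wh) for wh in w_hat])
```

Keeping the coupling gain small is what tolerates the stale neighbor data here; increasing it without regard to the delay is the failure mode the stability condition guards against.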

    Cooperative optimal preview tracking for linear descriptor multi-agent systems

    © 2018 The Franklin Institute. In this paper, a cooperative optimal preview tracking problem is considered for continuous-time descriptor multi-agent systems with a directed topology containing a spanning tree. Using the acyclic assumption and a state augmentation technique, it is shown that the cooperative tracking problem is equivalent to local optimal regulation problems for a set of low-dimensional descriptor augmented subsystems. To design distributed optimal preview controllers, the restricted system equivalent (r.s.e.) and preview control theory are first exploited to obtain optimal preview controllers for the reduced-order normal subsystems. Then, using the invertibility of the restricted equivalence relations, a constructive method for designing the distributed controllers is presented, which also yields an explicit admissible solution of the generalized algebraic Riccati equation. Sufficient conditions for achieving global cooperative preview tracking are proposed, proving that the distributed controllers asymptotically stabilize the descriptor augmented subsystems. Finally, the validity of the theoretical results is illustrated via numerical simulation.
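
As a rough illustration of the Riccati-based step, the sketch below solves a standard continuous-time algebraic Riccati equation for an already-reduced normal subsystem using SciPy. The matrices are arbitrary stand-ins, and the preview feedforward term and the paper's generalized Riccati construction for descriptor systems are omitted.

```python
# Illustrative sketch only: after the restricted-system-equivalent step, each
# augmented subsystem is treated as a normal (non-descriptor) linear system,
# and its optimal state feedback comes from an algebraic Riccati equation.
# SciPy's standard CARE solver stands in for the generalized Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # reduced-order subsystem dynamics (assumed)
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                   # weight on augmented tracking error / state (assumed)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)       # stabilizing ARE solution
K = np.linalg.solve(R, B.T @ P)            # state-feedback gain, u = -K x

print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

The printed eigenvalues all have negative real parts, which is the per-subsystem stabilization property the global cooperative tracking argument builds on.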

    Reinforcement Learning, Intelligent Control and their Applications in Connected and Autonomous Vehicles

    Reinforcement learning (RL) has attracted considerable attention over the past few years. Recently, we developed a data-driven algorithm to solve predictive cruise control (PCC) and game-based output regulation problems. This work integrates our recent contributions to the application of RL in game theory, output regulation problems, robust control, small-gain theory, and PCC. The algorithm was developed for H∞ adaptive optimal output regulation of uncertain linear systems and uncertain partially linear systems, rejecting disturbances while forcing the output of the systems to asymptotically track a reference. In the PCC problem, we determine the reference velocity for each autonomous vehicle in the platoon using traffic information broadcast from the traffic lights to reduce the vehicles' trip time. We then employ the algorithm to design an approximately optimal controller for the vehicles, which regulates the headway, velocity, and acceleration of each vehicle to the desired values. Simulation results validate the effectiveness of the algorithms.
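
For context, the sketch below shows the model-based policy-iteration backbone (Kleinman's algorithm for LQR) that data-driven adaptive optimal control schemes of this kind typically approximate from trajectory data. The system matrices are toy assumptions, not the platoon or PCC model, and the H∞ and output-regulation layers are omitted.

```python
# Hedged sketch: model-based Kleinman policy iteration for an LQR problem,
# i.e. the iteration that data-driven adaptive dynamic programming methods
# reproduce without knowing (A, B).  Matrices below are toy assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, -0.2]])    # toy dynamics, not the vehicle platoon model
B = np.array([[0.0], [1.0]])
Q, R = np.diag([1.0, 0.5]), np.array([[1.0]])

K = np.array([[1.0, 1.0]])                 # any initial stabilizing gain
for _ in range(20):
    Acl = A - B @ K
    # policy evaluation: solve Acl' P + P Acl + Q + K' R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)        # policy improvement
print("converged gain K:", K)
```

The data-driven version replaces the Lyapunov-equation step with least-squares estimates built from measured state and input trajectories, which is what makes the approach applicable when the system matrices are uncertain.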