Event-triggering architectures for adaptive control of uncertain dynamical systems
In this dissertation, new approaches are presented for the design and implementation of networked adaptive control systems to reduce the wireless network utilization while guaranteeing system stability in the presence of system uncertainties. Specifically, the design and analysis of state feedback adaptive control systems over wireless networks using event-triggering control theory is first presented. The state feedback adaptive control results are then generalized to the output feedback case for dynamical systems with unmeasurable state vectors. This event-triggering approach is then adopted for large-scale uncertain dynamical systems. In particular, decentralized and distributed adaptive control methodologies are proposed with reduced wireless network utilization with stability guarantees.
In addition, for systems in the absence of uncertainties, a new observer-free output feedback cooperative control architecture is developed. Specifically, the proposed architecture is predicated on a nonminimal state-space realization that generates an expanded set of states only using the filtered input and filtered output and their derivatives for each vehicle, without the need for designing an observer for each vehicle. Building on the results of this new observer-free output feedback cooperative control architecture, an event-triggering methodology is next proposed for the output feedback cooperative control to schedule the exchanged output measurements information between the agents in order to reduce wireless network utilization. Finally, the output feedback cooperative control architecture is generalized to adaptive control for handling exogenous disturbances in the follower vehicles.
For each methodology, the closed-loop system stability properties are rigorously analyzed, the effects of the user-defined event-triggering thresholds and the controller design parameters on the overall system performance are characterized, and Zeno behavior is shown not to occur with the proposed algorithms. --Abstract, page iv
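The core idea of event-triggered control described above can be made concrete with a minimal sketch: a controller holds the last transmitted state and a new transmission is triggered only when the measurement error exceeds a state-dependent threshold. This is an illustrative toy example (the plant, gains, and threshold are assumed here, not taken from the dissertation):

```python
import numpy as np

# Illustrative event-triggered state feedback on a scalar plant
# x_dot = a*x + u, with u = -k*xhat held at the last transmitted state.
# A transmission is triggered only when the measurement error exceeds a
# state-dependent threshold eps*|x| (hypothetical parameter choices).
a, k, eps = 1.0, 3.0, 0.2
dt, T = 1e-3, 5.0
x, xhat = 1.0, 1.0        # true state and last transmitted state
events = 0
steps = int(T / dt)
for _ in range(steps):
    u = -k * xhat                      # control uses the held state
    x += dt * (a * x + u)              # forward-Euler plant update
    if abs(x - xhat) > eps * abs(x):   # event-triggering condition
        xhat = x                       # transmit state over the network
        events += 1
print(events, steps, abs(x))
```

The point of the sketch is the ratio `events / steps`: the state still converges, but the network carries far fewer transmissions than a periodically sampled implementation would.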
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower
multi-agent systems under uncertainties and cyber-physical attacks. The leader
is assumed non-autonomous with a nonzero control input, which allows changing
the team behavior or mission in response to environmental changes. A resilient
learning-based control protocol is presented to find optimal solutions to the
synchronization problem in the presence of attacks and system dynamic
uncertainties. An observer-based distributed H_infinity controller is first
designed to prevent propagating the effects of attacks on sensors and actuators
throughout the network, as well as to attenuate the effect of these attacks on
the compromised agent itself. Non-homogeneous game algebraic Riccati equations
are derived to solve the H_infinity optimal synchronization problem and
off-policy reinforcement learning is utilized to learn their solution without
requiring any knowledge of the agents' dynamics. A trust-confidence-based
distributed control protocol is then proposed to mitigate attacks that hijack
the entire node and attacks on communication links. A confidence value is
defined for each agent based solely on its local evidence. The proposed
resilient reinforcement learning algorithm employs the confidence value of each
agent to indicate the trustworthiness of its own information and broadcast it
to its neighbors to put weights on the data they receive from it during and
after learning. If the confidence value of an agent is low, it employs a trust
mechanism to identify compromised agents and remove the data it receives from
them from the learning process. Simulation results are provided to show the
effectiveness of the proposed approach.
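The trust mechanism described above can be sketched in a toy consensus loop. The mechanics here are assumed for illustration (not the paper's exact protocol): each agent broadcasts a value together with a confidence score, and receiving agents simply remove data whose sender's confidence falls below a trust threshold.

```python
import numpy as np

# Toy sketch of trust-based data removal in consensus (assumed mechanics,
# not the paper's exact resilient RL protocol).
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])            # all-to-all communication graph
x = np.array([0.0, 1.0, 10.0])          # agent 2 is hijacked at a biased value
conf = np.array([1.0, 1.0, 0.1])        # low confidence exposes agent 2
trusted = (conf >= 0.5).astype(float)   # trust mechanism drops untrusted data
for _ in range(400):
    x_new = x.copy()
    for i in (0, 1):                    # honest agents update; agent 2 is fixed
        w = A[i] * trusted              # weights on received neighbor data
        x_new[i] += 0.1 * (w @ (x - x[i])) / w.sum()
    x = x_new
print(np.round(x, 3))
```

With the hijacked agent's data removed, the honest agents reach consensus at the average of their own initial values instead of being dragged toward the attacker's biased value.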
Robust Distributed Stabilization of Interconnected Multiagent Systems
Many large-scale systems can be modeled as groups of individual dynamics, e.g., multi-vehicle systems, or as interconnected multiagent systems, with power systems and biological networks as two examples. Due to the high dimension and configuration complexity of these infrastructures, only a few internal variables of each agent might be measurable, and exact knowledge of the model might be unavailable for control design purposes. The collective objectives may range from consensus to decoupling, stabilization, reference tracking, and global performance guarantees. Depending on the objectives, the designer may choose agent-level low-dimension or multiagent system-level high-dimension approaches to develop distributed algorithms. With an inappropriately designed algorithm, the effect of modeling uncertainty may propagate over the communication and coupling topologies and degrade the overall performance of the system. We address this problem by proposing single- and multi-layer structures. The former is used for both individual and interconnected multiagent systems. The latter, inspired by cyber-physical systems, is devoted to interconnected multiagent systems. We focus on developing a single control-theoretic tool for relative information-based distributed control design for any combination of the aforementioned configurations, objectives, and approaches. This systematic framework guarantees robust stability and performance of the closed-loop multiagent systems. We validate these theoretical results through various simulation studies.
Designing Fully Distributed Consensus Protocols for Linear Multi-agent Systems with Directed Graphs
This paper addresses the distributed consensus protocol design problem for
multi-agent systems with general linear dynamics and directed communication
graphs. Existing works usually design consensus protocols using the smallest
real part of the nonzero eigenvalues of the Laplacian matrix associated with
the communication graph, which however is global information. In this paper,
based on only the agent dynamics and the relative states of neighboring agents,
a distributed adaptive consensus protocol is designed to achieve
leader-follower consensus for any communication graph containing a directed
spanning tree with the leader as the root node. The proposed adaptive protocol
is independent of any global information of the communication graph and thereby
is fully distributed. Extensions to the case with multiple leaders are further
studied.
Comment: 16 pages, 3 figures. To appear in IEEE Transactions on Automatic Control.
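The "global information" that the paper's adaptive protocol avoids is the smallest real part of the nonzero eigenvalues of the graph Laplacian. To make that quantity concrete, the following sketch computes it for a small directed graph (the example graph is hypothetical, chosen only for illustration):

```python
import numpy as np

# Smallest real part of the nonzero Laplacian eigenvalues of a directed
# graph -- the global quantity that non-adaptive consensus protocols need.
# Adjacency convention: A[i, j] = 1 means agent i receives from agent j.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)  # directed path 0 -> 1 -> 2
L = np.diag(A.sum(axis=1)) - A          # in-degree Laplacian L = D - A
eig = np.linalg.eigvals(L)
nonzero = eig[np.abs(eig) > 1e-9]       # drop the structural zero eigenvalue
print(min(nonzero.real))
```

Because this number depends on the entire graph, no single agent can compute it from local measurements alone, which is exactly why a protocol that needs it is not fully distributed.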