    Distributed Tracking Control for Discrete-Time Multiagent Systems with Novel Markovian Switching Topologies

    The distributed discrete-time coordinated tracking control problem is investigated for multiagent systems in the ideal case, where agents on a fixed graph are combined with a leader-following group, aiming to extend the traditional setting to additional scenarios. The modified union switching topology is derived by introducing a novel mapping from a set of Markov chains to the edges. The question of how to guarantee that all agents track the leader is solved through a PD-like consensus algorithm. The admissible sampling period and the feasible control gain are calculated in terms of trigonometric function theory, and a mean-square bound on the tracking errors is provided. Finally, a simulation example is presented to demonstrate the validity of the theoretical results.
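
    As a rough illustration of the kind of update a PD-like consensus tracking law performs, the sketch below implements one sampled step for single-integrator followers. The gains, sampling period, and leader-access weights are placeholders, and the paper's Markovian union switching topology is not modeled; this is a generic schematic, not the algorithm analyzed above.

        import numpy as np

        def pd_like_tracking_step(x, x0, x0_prev, A, b, T, gamma):
            """One sampled update of a generic PD-like tracking law (schematic).

            x       : follower states at sample k, shape (n,)
            x0      : leader state at sample k
            x0_prev : leader state at sample k - 1 (for the derivative-like term)
            A       : follower-follower adjacency matrix, shape (n, n)
            b       : leader-access weights, shape (n,)
            T       : sampling period
            gamma   : control gain
            """
            n = len(x)
            x_next = np.empty(n)
            for i in range(n):
                # proportional part: disagreement with neighbors and with the leader
                p_term = sum(A[i, j] * (x[j] - x[i]) for j in range(n)) + b[i] * (x0 - x[i])
                # derivative-like part: the leader's sampled rate of change
                d_term = b[i] * (x0 - x0_prev) / T
                x_next[i] = x[i] + T * (gamma * p_term + d_term)
            return x_next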

    An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination

    This article reviews some main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, task assignment, and estimation. After the review, a short discussion section summarizes the existing research and proposes several promising research directions, along with some open problems that are deemed important for further investigation.

    Output Feedback Control for Couple-Group Consensus of Multiagent Systems

    This paper deals with the couple-group consensus problem for multiagent systems via output feedback control. Both the continuous- and discrete-time cases are considered. The consensus problems are converted into stability problems for the corresponding error systems through a system transformation. We obtain two necessary and sufficient conditions for couple-group consensus, in different forms, for each case. Two different algorithms are then used to design the control gains for the continuous- and discrete-time cases, respectively. Finally, simulation examples are given to show the effectiveness of the proposed results.
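
    To picture the role of the system transformation, note that once the couple-group consensus problem has been rewritten as an error system, deciding consensus in the discrete-time case amounts to checking Schur stability of the closed-loop error matrix. The sketch below assumes such an error matrix M has already been formed; it illustrates that reduction and is not the paper's specific condition.

        import numpy as np

        def error_system_schur_stable(M, tol=1e-9):
            # Discrete-time error dynamics x_e[k+1] = M x_e[k] are asymptotically
            # stable (and hence group consensus is reached) iff every eigenvalue
            # of M lies strictly inside the unit circle.
            return bool(np.all(np.abs(np.linalg.eigvals(M)) < 1 - tol))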

    Necessary and Sufficient Conditions for Circle Formations of Mobile Agents with Coupling Delay via Sampled-Data Control

    A circle-forming problem is investigated for a group of mobile agents governed by first-order dynamics, where each agent can only sense the relative angular positions of its two neighboring agents, subject to a time delay, and moves on the one-dimensional space of a given circle. To solve this problem, a novel decentralized sampled-data control law is proposed. By combining algebraic graph theory with control theory, necessary and sufficient conditions are established to guarantee that all the mobile agents asymptotically form a pre-given circle formation. Moreover, the admissible ranges of the sampling period and the coupling delay are determined, respectively. Finally, the theoretical results are demonstrated by numerical simulations.
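
    For intuition, the sketch below simulates one sampled update of a simple decentralized rule in which each agent compares the delayed angular gaps to its two neighbors against the prescribed spacing. The gain, sampling period, and delay handling are illustrative placeholders, not the admissible ranges derived in the paper.

        import numpy as np

        def circle_formation_step(theta, theta_delayed, d_des, k_gain, T):
            """theta: current angular positions, shape (n,); theta_delayed: the
            positions seen through the coupling delay; d_des[i]: pre-given gap
            between agent i and its forward neighbor; T: sampling period."""
            n = len(theta)
            theta_next = theta.copy()
            for i in range(n):
                fwd = (theta_delayed[(i + 1) % n] - theta_delayed[i]) % (2 * np.pi)  # gap ahead
                bwd = (theta_delayed[i] - theta_delayed[i - 1]) % (2 * np.pi)        # gap behind
                # steer so that both gaps approach their prescribed values
                theta_next[i] = theta[i] + T * k_gain * ((fwd - d_des[i]) - (bwd - d_des[i - 1]))
            return theta_next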

    Data-Driven Architecture to Increase Resilience In Multi-Agent Coordinated Missions

    The rise in the use of Multi-Agent Systems (MASs) in unpredictable and changing environments has created the need for intelligent algorithms to increase their autonomy, safety, and performance in the event of disturbances and threats. MASs are attractive for their flexibility, which also makes them prone to threats that may result from hardware failures (actuators, sensors, onboard computer, power source) and abnormal operational conditions (weather, GPS-denied location, cyber-attacks). This dissertation presents research on a bio-inspired approach for resilience augmentation in MASs in the presence of disturbances and threats such as communication-link and stealthy zero-dynamics attacks. An adaptive bio-inspired architecture is developed for distributed consensus algorithms to increase fault tolerance in a network of multiple high-order nonlinear systems under fixed directed topologies. In analogy with natural organisms’ ability to recognize and remember specific pathogens in order to generate immunity, the immunity-based architecture consists of a Distributed Model-Reference Adaptive Control (DMRAC) scheme with an Artificial Immune System (AIS) adaptation law integrated within a consensus protocol. Feedback linearization is used to transform the high-order nonlinear model into four decoupled linear subsystems. A stability proof of the adaptation law is conducted using Lyapunov methods and Jordan decomposition. The DMRAC is proven to be stable in the presence of external time-varying bounded disturbances, and the tracking error trajectories are shown to be bounded. The effectiveness of the proposed architecture is examined through numerical simulations. The proposed controller successfully ensures that consensus is achieved among all agents while the adaptive law simultaneously rejects the disturbances in the agent and its neighbors. The architecture also includes a health management system to detect faulty agents within the global network. Further numerical simulations successfully test and show that the Global Health Monitoring (GHM) does effectively detect faults within the network.
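
    To make the adaptive-control ingredient concrete, the sketch below shows one step of a standard Lyapunov-based model-reference adaptive update for a single agent. It is a stand-in for, not a reproduction of, the AIS-based adaptation law developed in the dissertation, and all symbols and shapes are illustrative assumptions.

        import numpy as np

        def mrac_adaptation_step(x, x_ref, theta, phi, Gamma, P, B, dt):
            """Schematic MRAC update.

            x, x_ref : agent and reference-model states, shape (n,)
            theta    : adaptive parameters, shape (p, m)
            phi      : regressor function returning shape (p,)
            Gamma    : adaptation-gain matrix, shape (p, p)
            P        : Lyapunov matrix, shape (n, n);  B : input matrix, shape (n, m)
            dt       : integration step
            """
            e = x - x_ref                                      # tracking error vs. reference model
            theta_dot = -Gamma @ np.outer(phi(x), e @ P @ B)   # Lyapunov-based adaptation law
            theta_new = theta + dt * theta_dot
            u = theta_new.T @ phi(x)                           # adaptive contribution to the control input
            return theta_new, u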

    Consensus-type stochastic approximation algorithms

    This work is concerned with asymptotic properties of consensus-type algorithms for networked systems whose topologies switch randomly. The regime-switching process is modeled as a discrete-time Markov chain with a finite state space. The consensus control is achieved by designing stochastic approximation algorithms. In the setup, the regime-switching process (the Markov chain) contains a rate parameter ε > 0 in the transition probability matrix that characterizes how frequently the topology switches. On the other hand, the consensus control algorithm uses a step size μ that defines how fast the network states are updated. Depending on their relative values, three distinct scenarios emerge. Under suitable conditions, we show that when 0 < ε = O(μ), a continuous-time interpolation of the iterates converges weakly to a system of randomly switching ordinary differential equations modulated by a continuous-time Markov chain. In this case, a scaled sequence of tracking errors converges to a system of switching diffusions. When 0 < ε << μ, the network topology is almost non-switching during the consensus control transient intervals, and hence the limit dynamic system is simply an autonomous differential equation. When μ << ε, the Markov chain acts as a fast-varying noise, and only its average is relevant, resulting in a limit differential equation that is an average with respect to the stationary measure of the Markov chain. Simulation results are presented to demonstrate these findings. By introducing a post-iteration averaging algorithm, this dissertation demonstrates that asymptotic optimality can be achieved in the convergence rates of stochastic approximation algorithms for consensus control with structural constraints. The algorithm involves two stages. The first stage is a coarse approximation obtained using a sequence of large step sizes. The second stage then provides a refinement by averaging the iterates from the first stage. We show that the new algorithm is asymptotically efficient and gives the optimal convergence rates in the sense of the best scaling factor and smallest possible asymptotic variance. Numerical results are presented to illustrate the performance of the algorithm.
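
    A minimal sketch of the two-stage idea follows, assuming a toy network whose Laplacian is selected by a pre-generated sample path of the switching Markov chain: a stochastic-approximation consensus recursion with step size μ, followed by post-iteration averaging of the iterates. The network, noise model, and switching path are placeholders.

        import numpy as np

        def consensus_sa_with_averaging(x0, laplacians, markov_path, mu, noise_std, rng):
            """laplacians: graph Laplacians indexed by the Markov-chain states;
            markov_path: a sample path of those states; mu: SA step size."""
            x = x0.copy()
            iterates = []
            for state in markov_path:
                L = laplacians[state]                          # topology chosen by the Markov chain
                noise = noise_std * rng.standard_normal(len(x))
                x = x - mu * (L @ x + noise)                   # stage 1: SA consensus step with step size mu
                iterates.append(x.copy())
            x_avg = np.mean(iterates, axis=0)                  # stage 2: post-iteration averaging (refinement)
            return x, x_avg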

    Event-triggered distributed control for optimal consensus of unknown nonlinear agents in normal form

    National Technical University of Athens (Εθνικό Μετσόβιο Πολυτεχνείο) -- Master's thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) “Automation Systems”

    Robust stability analysis of formation control in local frames under time-varying delays and actuator faults

    This paper investigates the robust stability of a multiagent system moving to a desired rigid formation in the presence of unknown time-varying communication delays and actuator faults. Each agent uses relative position measurements to implement the proposed control method, which does not require common coordinate references. However, the presence of time delays in the measurements, which is inherent to the communication links between agents, has a negative impact on the control system performance, leading, in some cases, to instability. Furthermore, the robust stability analysis becomes more complex if failures of the actuators are taken into account. In addition, the delays may be subject to time variations, depending on network load, availability of communication resources, dynamic routing protocols, or other environmental conditions. To cope with these problems, a sufficient condition based on Linear Matrix Inequalities (LMIs) is provided to ensure the robust asymptotic convergence of the agents to the desired formation. This condition is valid for arbitrarily fast time-varying delays and actuator faults, given a worst-case point-to-point delay. Finally, simulation results show the performance of the proposed approach.
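
    As a schematic of how an LMI-based condition of this kind can be checked numerically, the snippet below tests feasibility of the plain Lyapunov inequality A^T P + P A < 0 with the cvxpy package (assumed available). The paper's actual condition additionally encodes the worst-case point-to-point delay and the actuator-fault model, which are omitted here.

        import numpy as np
        import cvxpy as cp

        def lyapunov_lmi_feasible(A, eps=1e-6):
            n = A.shape[0]
            P = cp.Variable((n, n), symmetric=True)
            constraints = [P >> eps * np.eye(n),                   # P positive definite
                           A.T @ P + P @ A << -eps * np.eye(n)]    # strict decrease condition
            prob = cp.Problem(cp.Minimize(0), constraints)
            prob.solve()                                           # needs an SDP-capable solver, e.g. SCS
            return prob.status in ("optimal", "optimal_inaccurate")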