555 research outputs found

    Comprehensive review on controller for leader-follower robotic system

    This paper presents a comprehensive review of leader-follower robotic systems. The aim of this paper is to identify and elaborate on current trends in swarm robotic systems, leader-follower control, and multi-agent systems. Another part of this review focuses on the trend of controllers used in leader-follower systems; the controllers most commonly applied are adaptive and nonlinear. The paper also explores the subjects of study, which normally involve multi-robot, multi-agent, space-flying, reconfigurable, multi-legged, or unmanned systems. A final aspect concentrates on the topologies employed by researchers in their simulation or experimental studies.

    Event-triggering architectures for adaptive control of uncertain dynamical systems

    In this dissertation, new approaches are presented for the design and implementation of networked adaptive control systems that reduce wireless network utilization while guaranteeing system stability in the presence of system uncertainties. Specifically, the design and analysis of state feedback adaptive control systems over wireless networks using event-triggering control theory is first presented. The state feedback adaptive control results are then generalized to the output feedback case for dynamical systems with unmeasurable state vectors. This event-triggering approach is then adopted for large-scale uncertain dynamical systems. In particular, decentralized and distributed adaptive control methodologies are proposed that reduce wireless network utilization while providing stability guarantees. In addition, for systems in the absence of uncertainties, a new observer-free output feedback cooperative control architecture is developed. Specifically, the proposed architecture is predicated on a nonminimal state-space realization that generates an expanded set of states using only the filtered input, the filtered output, and their derivatives for each vehicle, without the need to design an observer for each vehicle. Building on this observer-free output feedback cooperative control architecture, an event-triggering methodology is next proposed for output feedback cooperative control that schedules the output measurement information exchanged between agents in order to reduce wireless network utilization. Finally, the output feedback cooperative control architecture is generalized to adaptive control for handling exogenous disturbances in the follower vehicles.
For each methodology, the closed-loop system stability properties are rigorously analyzed, the effects of the user-defined event-triggering thresholds and the controller design parameters on overall system performance are characterized, and Zeno behavior is shown not to occur with the proposed algorithms. --Abstract, page iv
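The core idea above, transmitting the state over the network only when it has drifted far enough from the last transmitted value, can be illustrated with a toy scalar simulation. This is a minimal sketch, not the dissertation's architecture: the plant, gains, and threshold below are illustrative assumptions, and the adaptive element is replaced by a fixed feedback gain for brevity.

```python
def simulate_event_triggered(threshold=0.05, steps=200, dt=0.01):
    """Scalar plant x' = a*x + u with feedback u = -k*x_hat, where
    x_hat is the last state transmitted over the network; a new
    transmission (event) occurs only when |x - x_hat| > threshold."""
    a, k = 1.0, 5.0          # illustrative plant and controller gains
    x, x_hat = 1.0, 1.0      # true state and last-transmitted state
    events = 0
    for _ in range(steps):
        # Event-triggering condition: transmit when the gap between
        # the true state and the last-sent state exceeds the threshold.
        if abs(x - x_hat) > threshold:
            x_hat = x
            events += 1
        u = -k * x_hat                   # controller uses sampled state
        x = x + dt * (a * x + u)         # Euler step of the plant
    return x, events

x_final, n_events = simulate_event_triggered()
print(n_events)  # far fewer transmissions than the 200 control steps
```

The state is driven into a small neighborhood of the origin while the network is used only at event instants, which is the utilization saving the abstract describes.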

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty.
We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
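A reward combining flocking maintenance, a shared leader-tracking term, and a collision penalty, as described in the abstract above, might look like the following sketch. All names, weights, and distances here are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def flocking_reward(positions, leader_pos, d_ref=1.0, d_safe=0.3):
    """Hypothetical per-step shaping reward for a flock: penalizes
    deviation from a reference inter-agent spacing, adds a large
    collision penalty, and a mutual term for tracking the leader."""
    n = len(positions)
    reward = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            reward -= (d - d_ref) ** 2      # flocking-maintenance term
            if d < d_safe:
                reward -= 10.0              # collision penalty
    # mutual reward: all agents share the leader-tracking term
    centroid = positions.mean(axis=0)
    reward -= np.linalg.norm(centroid - leader_pos)
    return reward

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
print(flocking_reward(pos, np.array([0.5, 0.3])))
# small negative value: slight spacing error, no collisions
```

In a DDPG setup such a scalar would be fed to the critic during centralized training, while each agent's actor acts only on local observations at execution time.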

    Distributed Model-Free Bipartite Consensus Tracking for Unknown Heterogeneous Multi-Agent Systems with Switching Topology

    This paper proposes a distributed model-free adaptive bipartite consensus tracking (DMFABCT) scheme. The proposed scheme is independent of a precise mathematical model but achieves both time-invariant and time-varying bipartite trajectory tracking for unknown discrete-time heterogeneous multi-agent systems (MASs) with switching topology and coopetition networks. The main innovation of this algorithm is to estimate an equivalent dynamic linearization data model via the pseudo partial derivative (PPD) approach, where only the input–output (I/O) data of each agent are required and the cooperative interactions among agents are investigated. A rigorous convergence proof is given for DMFABCT, which shows that the tracking errors are reduced. Finally, three simulation results show that the novel DMFABCT scheme is effective and robust for unknown heterogeneous discrete-time MASs with switching topologies completing bipartite consensus tracking tasks.
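The PPD idea, estimating an equivalent linearization y(k+1) ≈ y(k) + φ(k)Δu(k) from I/O data alone, can be sketched for a single agent as follows. This is a minimal illustration of generic model-free adaptive control, not the paper's distributed bipartite scheme; the plant and the tuning gains eta, mu, rho, lam are assumptions.

```python
def mfac_tracking(y_ref=1.0, steps=60):
    """Single-agent model-free adaptive control sketch: the PPD
    estimate phi is updated from input-output increments only, and
    the control law uses phi to drive y toward the reference."""
    eta, mu, rho, lam = 0.5, 1.0, 0.6, 1.0   # illustrative gains
    phi = 1.0                                # PPD estimate
    y, y_prev, u, u_prev = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        du = u - u_prev
        dy = y - y_prev
        # PPD update: fit the observed I/O increment to the current
        # dynamic linearization y(k+1) ≈ y(k) + phi * du(k)
        phi += eta * du / (mu + du ** 2) * (dy - phi * du)
        # control update driving the output toward the reference
        u_next = u + rho * phi / (lam + phi ** 2) * (y_ref - y)
        # unknown plant (hidden from the controller)
        y_next = 0.6 * y + 0.5 * u_next
        y_prev, y = y, y_next
        u_prev, u = u, u_next
    return y

print(mfac_tracking())  # output settles near the reference 1.0
```

In the distributed setting each agent would run such an update locally, replacing the tracking error with a signed consensus error built from its neighbors over the coopetition network.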

    Dynamic event-triggered-based human-in-the-loop formation control for stochastic nonlinear MASs

    The dynamic event-triggered (DET) formation control problem for a class of stochastic nonlinear multi-agent systems (MASs) with full state constraints is investigated in this article. Assuming that a human operator sends commands to the leader as control input signals, all followers keep formation through network topology communication. Under the command-filter-based backstepping technique, radial basis function neural networks (RBF NNs) and the barrier Lyapunov function (BLF) are utilized to resolve the problems of unknown nonlinear terms and full state constraints, respectively. Furthermore, a DET control mechanism is proposed to reduce the occupation of communication bandwidth. The presented distributed formation control strategy guarantees that all signals of the MASs are semi-globally uniformly ultimately bounded (SGUUB) in probability. Finally, the feasibility of the theoretical results is demonstrated by a simulation example.
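What makes a trigger "dynamic" rather than static is an internal variable that is stored up between events and consumed before a transmission fires, so events occur less often than a fixed threshold would allow. The toy rule below illustrates that idea only; the variable names and the update law are illustrative, not the article's mechanism.

```python
def dynamic_event_trigger(errors, sigma=0.5, lam=0.9, eta0=1.0):
    """Toy dynamic event-triggering rule: a static rule would fire
    whenever e^2 > sigma; the dynamic rule first spends an internal
    variable eta, which accumulates while the error stays small."""
    eta = eta0
    triggers = []
    for k, e in enumerate(errors):
        if e * e > sigma + eta:           # dynamic condition
            triggers.append(k)
            eta = eta0                    # reset after a transmission
        else:
            eta = lam * eta + (sigma - e * e)  # eta evolves between events
    return triggers

print(dynamic_event_trigger([0.2, 0.5, 1.4, 0.3, 1.3]))  # → [4]
```

Note that a static threshold e^2 > sigma would fire at steps 2 and 4 on this sequence, while the dynamic rule fires only at step 4, which is the bandwidth saving the abstract refers to.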

    Interactive inference: a multi-agent model of cooperative joint actions

    We advance a novel computational model of multi-agent, cooperative joint actions that is grounded in the cognitive framework of active inference. The model assumes that to solve a joint task, such as pressing together a red or blue button, two (or more) agents engage in a process of interactive inference. Each agent maintains probabilistic beliefs about the goal of the joint task (e.g., should we press the red or blue button?) and updates them by observing the other agent's movements, while in turn selecting movements that make its own intentions legible and easy to infer by the other agent (i.e., sensorimotor communication). Over time, the interactive inference aligns both the beliefs and the behavioral strategies of the agents, hence ensuring the success of the joint action. We exemplify the functioning of the model in two simulations. The first simulation illustrates a "leaderless" joint action. It shows that when two agents lack a strong preference about their joint task goal, they jointly infer it by observing each other's movements. In turn, this helps the interactive alignment of their beliefs and behavioral strategies. The second simulation illustrates a "leader-follower" joint action. It shows that when one agent ("leader") knows the true joint goal, it uses sensorimotor communication to help the other agent ("follower") infer it, even if doing so requires selecting a more costly individual plan. These simulations illustrate that interactive inference supports successful multi-agent joint actions and reproduces key cognitive and behavioral dynamics of "leaderless" and "leader-follower" joint actions observed in human-human experiments. In sum, interactive inference provides a cognitively inspired, formal framework to realize cooperative joint actions and consensus in multi-agent systems.
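The follower side of the leader-follower case above reduces, in its simplest form, to Bayesian belief updating from observed movements. The sketch below is a toy version of that update only, not the paper's active-inference generative model; the legibility parameter and the move sequence are illustrative assumptions.

```python
def interactive_inference(moves, p_legible=0.8):
    """Toy follower belief update: the follower observes the leader's
    moves (toward button 0 or 1) and Bayes-updates its belief about
    the shared goal, assuming the leader moves toward the true goal
    with probability p_legible."""
    belief = [0.5, 0.5]                        # uniform prior over goals
    for move in moves:
        likelihood = [p_legible if move == g else 1.0 - p_legible
                      for g in (0, 1)]
        posterior = [likelihood[g] * belief[g] for g in (0, 1)]
        z = sum(posterior)
        belief = [p / z for p in posterior]    # normalize
    return belief

# leader mostly signals button 0, with one ambiguous move toward 1
b = interactive_inference([0, 0, 0, 1, 0, 0])
print(b[0])  # ≈ 0.996: the follower becomes confident in goal 0
```

In the full model the leader would additionally bias its action selection toward legible movements, accepting a more costly plan precisely because it sharpens this posterior for the follower.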