1,277 research outputs found
Beyond Reynolds: A Constraint-Driven Approach to Cluster Flocking
In this paper, we present an original set of flocking rules using an
ecologically-inspired paradigm for control of multi-robot systems. We translate
these rules into a constraint-driven optimal control problem where the agents
minimize energy consumption subject to safety and task constraints. We prove
several properties about the feasible space of the optimal control problem and
show that velocity consensus is an optimal solution. We also motivate the
inclusion of slack variables in constraint-driven problems when the global
state is only partially observable by each agent. Finally, we analyze the case
where the communication topology is fixed and connected, and prove that our
proposed flocking rules achieve velocity consensus.
Comment: 6 pages
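As an illustrative sketch of the constraint-driven idea (not the paper's exact formulation), each agent can apply the minimum-energy control that satisfies its safety constraint. With a single linear constraint the resulting quadratic program has a closed-form solution, namely the projection of zero control onto the constraint halfspace; the function name and constraint shape below are assumptions for illustration:

```python
import numpy as np

def min_energy_control(a, b):
    """Solve min ||u||^2 subject to a @ u <= b (one linear safety constraint).

    If u = 0 already satisfies the constraint, applying no control is the
    minimum-energy choice; otherwise the optimum lies on the constraint
    boundary, at the projection of the origin onto {u : a @ u = b}.
    """
    a = np.asarray(a, dtype=float)
    if b >= 0.0:               # b >= 0 means u = 0 is feasible
        return np.zeros_like(a)
    return (b / (a @ a)) * a   # projection of 0 onto the active constraint

# Example: a safety constraint that forces braking along the first axis
u = min_energy_control([1.0, 0.0], -1.0)   # -> array([-1., 0.])
```

Multiple safety and task constraints would require a general QP solver, but the structure, energy objective plus feasibility, is the same.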
Declarative vs Rule-based Control for Flocking Dynamics
The popularity of rule-based flocking models, such as Reynolds' classic
flocking model, raises the question of whether more declarative flocking models
are possible. This question is motivated by the observation that declarative
models are generally simpler and easier to design, understand, and analyze than
operational models. We introduce a very simple control law for flocking based
on a cost function capturing cohesion (agents want to stay together) and
separation (agents do not want to get too close). We refer to it as
declarative flocking (DF). We use model-predictive control (MPC) to define
controllers for DF in centralized and distributed settings. A thorough
performance comparison of our declarative flocking with Reynolds' model, and
with more recent flocking models that use MPC with a cost function based on
lattice structures, demonstrates that DF-MPC yields the best cohesion and least
fragmentation, and maintains a surprisingly good level of geometric regularity
while still producing natural flock shapes similar to those produced by
Reynolds' model. We also show that DF-MPC has high resilience to sensor noise.
Comment: 7 pages
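As a hedged sketch of the declarative idea (the paper's exact cost may differ), a flocking cost can combine a cohesion term, penalizing large pairwise distances, with a separation term, penalizing small ones; the weight `w_sep` trading the two off is an assumed name. An MPC controller would then minimize this cost over a short control horizon:

```python
import numpy as np

def df_cost(pos, w_sep=1.0):
    """Declarative flocking cost: cohesion plus separation.

    pos   : (n, d) array of agent positions
    w_sep : illustrative weight trading separation against cohesion
    """
    pos = np.asarray(pos, dtype=float)
    n = len(pos)
    cohesion, separation = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = np.sum((pos[i] - pos[j]) ** 2)
            cohesion += d2            # penalize spreading out
            separation += w_sep / d2  # penalize getting too close
    return cohesion + separation

# For two agents at distance d the cost is d**2 + w_sep / d**2,
# minimized at the intermediate spacing d = w_sep ** 0.25
```

Both terms pull toward a well-spaced configuration: collapsing the flock and scattering it are each penalized.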
Distributed Model Predictive Consensus via the Alternating Direction Method of Multipliers
We propose a distributed optimization method for solving a distributed model
predictive consensus problem. The goal is to design a distributed controller
for a network of dynamical systems to optimize a coupled objective function
while respecting state and input constraints. The distributed optimization
method is an augmented Lagrangian method called the Alternating Direction
Method of Multipliers (ADMM), which was introduced in the 1970s but has seen a
recent resurgence in the context of dramatic increases in computing power and
the development of widely available distributed computing platforms. The method
is applied to position and velocity consensus in a network of double
integrators. We find that a few tens of ADMM iterations yield closed-loop
performance near what is achieved by solving the optimization problem
centrally. Furthermore, the use of recent code generation techniques for
solving local subproblems yields fast overall computation times.
Comment: 7 pages, 5 figures, 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 201
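A minimal, self-contained sketch of consensus ADMM (far simpler than the paper's MPC formulation, and with illustrative names): n agents with local quadratic costs f_i(x) = ½(x − c_i)² agree on a shared variable z, which converges to the average of the c_i. Only the averaging step needs coordination; the local and dual updates stay with each agent:

```python
import numpy as np

def admm_consensus(c, rho=1.0, iters=100):
    """Consensus ADMM for  min sum_i 0.5*(x_i - c_i)^2  s.t.  x_i = z."""
    c = np.asarray(c, dtype=float)
    x = c.copy()             # local variables, one per agent
    u = np.zeros_like(c)     # scaled dual variables, one per agent
    z = 0.0                  # consensus variable
    for _ in range(iters):
        # local step: closed-form argmin of 0.5*(x-c_i)^2 + (rho/2)*(x - z + u_i)^2
        x = (c + rho * (z - u)) / (1.0 + rho)
        z = np.mean(x + u)   # gather/average: the only global operation
        u = u + x - z        # dual ascent, kept local to each agent
    return z

z = admm_consensus([0.0, 2.0, 7.0])   # converges to the mean, 3.0
```

With rho = 1 the consensus error contracts by a factor of 1/2 per iteration in this quadratic case, consistent with the paper's observation that a few tens of iterations already give near-centralized performance.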
An Optimal Control Approach to Flocking
Flocking behavior has attracted considerable attention in multi-agent
systems. The structure of flocking has been predominantly studied through the
application of artificial potential fields coupled with velocity consensus.
These approaches, however, do not consider the energy cost of the agents during
flocking, which is especially important in large-scale robot swarms. This paper
introduces an optimal control framework to induce flocking in a group of
agents. Guarantees of energy minimization and safety are provided, along with a
decentralized algorithm that satisfies the optimality conditions and can be
realized in real time. The efficacy of the proposed control algorithm is
evaluated through simulation in both MATLAB and Gazebo.
Comment: 6 pages, 4 figures. To appear at the 2020 American Control Conference
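To see the shape of an energy-minimizing solution, consider a single double-integrator agent, a standard model, though the paper's multi-agent problem is richer. Minimizing the control energy ∫ ½u² dt between fixed boundary states makes the optimal control linear in time, u(t) = a + b·t, with a and b fixed by the boundary conditions; the sketch below solves for them and checks by forward simulation:

```python
import numpy as np

def min_energy_double_integrator(x0, v0, xf, vf, T):
    """Coefficients (a, b) of the minimum-energy control u(t) = a + b*t
    steering xdot = v, vdot = u from (x0, v0) to (xf, vf) in time T."""
    # Pontryagin's principle makes u linear in t; matching the boundary states:
    #   vf = v0 + a*T + b*T**2/2
    #   xf = x0 + v0*T + a*T**2/2 + b*T**3/6
    A = np.array([[T, T**2 / 2.0],
                  [T**2 / 2.0, T**3 / 6.0]])
    rhs = np.array([vf - v0, xf - x0 - v0 * T])
    a, b = np.linalg.solve(A, rhs)
    return a, b

# Rest-to-rest transfer over unit distance in unit time: u(t) = 6 - 12*t
a, b = min_energy_double_integrator(0.0, 0.0, 1.0, 0.0, 1.0)

# Verify by explicit Euler integration
x, v, dt = 0.0, 0.0, 1e-4
for k in range(10000):
    u = a + b * (k * dt)
    x += v * dt
    v += u * dt
print(x, v)   # close to the targets (1.0, 0.0)
```

The control never saturates and spends effort evenly, which is the qualitative reason energy-aware flocking differs from pure potential-field designs.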
Connectivity Preservation in Multi-Agent Systems using Model Predictive Control
Flocking is one of the basic behaviors in the control of multi-agent systems and an essential element of many real-world applications. Such systems have been studied extensively over the past decades under various network structures and environment modes. Navigating agents in a leader-follower structure through environments with obstacles is particularly challenging, and one of the main difficulties is preserving connectivity. The gradient descent method is widely used to achieve this goal, but its main shortcoming in the leader-follower setting is the need for continuous data transmission between agents and/or the preservation of a fixed connection topology.

In this research, we propose an innovative model predictive controller based on a potential field that maintains the connectivity of a flock of agents in a leader-follower structure with dynamic topology. The agents navigate through an environment with obstacles that form a path leading to a certain target. The controller avoids collisions among followers without using any communication links: each follower tracks the leader, which navigates the environment, using potential functions that model its neighbors and the obstacles. Because we assume only the leader knows the target position, the potential field is updated dynamically through weight variables in order to preserve connectivity among the followers; the weight values change in real time according to the agents' trajectories once the critical neighbors of each agent are determined.

We compare the performance of our predictive-control-based algorithm with other approaches. The results show that our algorithm drives the agents to the target in less time, although it encounters more deadlock cases when the agents traverse relatively narrow paths. Moreover, because our controller accounts for input costs, the group reaching the target faster does not necessarily mean that the followers consume more energy than the leader.
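A minimal sketch of the potential-field ingredient (the weights, names, and gains here are illustrative, not the paper's): a follower descends a potential that combines quadratic attraction to the leader with inverse-square repulsion from an obstacle. Adjusting weights such as `w_att` and `w_obs` in real time is the kind of update the authors use to preserve connectivity:

```python
import numpy as np

def potential_gradient(p, leader, obstacle, w_att=1.0, w_obs=0.5):
    """Gradient of U(p) = w_att*||p - leader||^2 + w_obs/||p - obstacle||^2."""
    d_obs = p - obstacle
    r2 = d_obs @ d_obs
    # attraction pulls toward the leader; repulsion pushes away from the obstacle
    return 2.0 * w_att * (p - leader) - 2.0 * w_obs * d_obs / r2**2

# One follower descending the potential toward the leader, skirting the obstacle
p = np.array([0.0, 0.0])
leader = np.array([5.0, 0.0])
obstacle = np.array([2.5, 0.5])
for _ in range(200):
    p = p - 0.05 * potential_gradient(p, leader, obstacle)
```

The repulsion term grows rapidly near the obstacle, so the follower detours around it and then settles near the leader, the same mechanism that, with dynamically weighted neighbor terms, keeps the flock connected.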