Safety Barrier Certificates for Heterogeneous Multi-Robot Systems
This paper presents a formal framework for collision avoidance in multi-robot
systems, wherein an existing controller is modified in a minimally invasive
fashion to ensure safety. We build this framework through the use of control
barrier functions (CBFs) which guarantee forward invariance of a safe set;
these yield safety barrier certificates in the context of heterogeneous robot
dynamics subject to acceleration bounds. Moreover, safety barrier certificates
are extended to a distributed control framework, wherein neighboring agent
dynamics are unknown, through local parameter identification. The end result is
an optimization-based controller that formally guarantees collision free
behavior in heterogeneous multi-agent systems by minimally modifying the
desired controller via safety barrier constraints. This formal result is
verified in simulation on a multi-robot system consisting of both cumbersome
and agile robots, is demonstrated experimentally on a system with a Magellan
Pro robot and three Khepera III robots. Comment: 8-page version of the 2016 ACC conference paper; experimental results added.
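The minimally invasive modification described above can be sketched concretely. For a single-integrator robot with one barrier constraint, the QP "stay as close as possible to the nominal input subject to the CBF condition" has a closed-form projection, which the toy function below implements (the barrier function, gain, and dynamics are illustrative assumptions, not the paper's heterogeneous acceleration-bounded setup):

```python
def safe_input(x, u_nom, x_obs, r=1.0, gamma=1.0):
    """Minimally modify u_nom so the barrier h(x) = ||x - x_obs||^2 - r^2
    stays nonnegative (forward invariance of the safe set).
    Assumes single-integrator dynamics x' = u and the CBF condition
    grad_h . u >= -gamma * h(x). With a single linear constraint, the QP
    'min ||u - u_nom||^2' reduces to the closed-form projection below.
    This is a didactic sketch, not the paper's heterogeneous formulation."""
    dx = [x[0] - x_obs[0], x[1] - x_obs[1]]
    h = dx[0]**2 + dx[1]**2 - r**2          # barrier value (>= 0 is safe)
    grad = [2 * dx[0], 2 * dx[1]]           # gradient of h at x
    lhs = grad[0]*u_nom[0] + grad[1]*u_nom[1]
    slack = lhs + gamma * h                 # CBF constraint residual
    if slack >= 0:
        return list(u_nom)                  # nominal input already safe
    norm2 = grad[0]**2 + grad[1]**2
    lam = -slack / norm2                    # active-constraint multiplier
    return [u_nom[0] + lam*grad[0], u_nom[1] + lam*grad[1]]
```

When the nominal controller already satisfies the barrier constraint it is passed through untouched; otherwise only the offending component along the constraint normal is corrected, which is exactly the "minimally invasive" property.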
Collision-aware Task Assignment for Multi-Robot Systems
We propose a novel formulation of the collision-aware task assignment (CATA)
problem and a decentralized auction-based algorithm to solve the problem with
optimality bound. Using a collision cone, we predict potential collisions and
introduce a binary decision variable into the local reward function for task
bidding. We further improve CATA by implementing a receding collision horizon
to address the stopping robot scenario, i.e. when robots are confined to their
task location and become static obstacles to other moving robots. The
auction-based algorithm encourages the robots to bid for tasks with collision
mitigation considerations. We validate the improved task assignment solution
with both simulation and experimental results, which show significant reduction
of overlapping paths as well as deadlocks.
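The bidding mechanism with a binary collision indicator can be sketched as follows. The straight-line sampling test stands in for the paper's collision-cone prediction, and the sequential greedy auction stands in for its decentralized auction with optimality bound; penalties, radii, and the reward function are assumptions for illustration:

```python
import math

def predicted_collision(start_a, task_a, start_b, task_b, radius=0.5, steps=20):
    """Crude stand-in for a collision-cone test: sample both robots moving
    in a straight line to their tasks at the same parameter speed and flag
    any step where they come closer than `radius`. Returns the binary
    decision variable used in the bid."""
    for i in range(steps + 1):
        t = i / steps
        ax = start_a[0] + t*(task_a[0]-start_a[0]); ay = start_a[1] + t*(task_a[1]-start_a[1])
        bx = start_b[0] + t*(task_b[0]-start_b[0]); by = start_b[1] + t*(task_b[1]-start_b[1])
        if math.hypot(ax - bx, ay - by) < radius:
            return 1
    return 0

def greedy_auction(robots, tasks, penalty=10.0):
    """Sequential auction: each free robot bids (negative travel cost minus
    a penalty for each predicted collision with an already committed
    robot); the best robot-task pair wins each round. A centralized
    greedy sketch of the decentralized algorithm in the paper."""
    assignment = {}
    free_robots = set(range(len(robots)))
    free_tasks = set(range(len(tasks)))
    while free_robots and free_tasks:
        best = None
        for r in free_robots:
            for t in free_tasks:
                bid = -math.hypot(robots[r][0]-tasks[t][0], robots[r][1]-tasks[t][1])
                for r2, t2 in assignment.items():
                    bid -= penalty * predicted_collision(
                        robots[r], tasks[t], robots[r2], tasks[t2])
                if best is None or bid > best[0]:
                    best = (bid, r, t)
        _, r, t = best
        assignment[r] = t
        free_robots.discard(r); free_tasks.discard(t)
    return assignment
```

Because committed robot-task pairs levy a penalty on any bid whose path would cross theirs, later winners are steered toward non-conflicting assignments, which is the collision-mitigation effect the abstract describes.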
Analysis of Dynamic Task Allocation in Multi-Robot Systems
Dynamic task allocation is an essential requirement for multi-robot systems
operating in unknown dynamic environments. It allows robots to change their
behavior in response to environmental changes or actions of other robots in
order to improve overall system performance. Emergent coordination algorithms
for task allocation that use only local sensing and no direct communication
between robots are attractive because they are robust and scalable. However, a
lack of formal analysis tools makes emergent coordination algorithms difficult
to design. In this paper we present a mathematical model of a general dynamic
task allocation mechanism. Robots using this mechanism have to choose between
two types of task, and the goal is to achieve a desired task division in the
absence of explicit communication and global knowledge. Robots estimate the
state of the environment from repeated local observations and decide which task
to choose based on these observations. We model the robots and observations as
stochastic processes and study the dynamics of the collective behavior.
Specifically, we analyze the effect that the number of observations and the
choice of the decision function have on the performance of the system. The
mathematical models are validated in a multi-robot multi-foraging scenario. The
model's predictions agree very closely with experimental results from
sensor-based simulations. Comment: Preprint version of the paper published in the International Journal of Robotics, March 2006, Volume 25, pp. 225-24
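The mechanism above — estimate the environment state from a sliding window of local observations, then apply a decision function — can be sketched with a toy two-task model. The window length, the threshold decision function, and the random-encounter observation model are illustrative assumptions, not the paper's exact stochastic model:

```python
import random
from collections import deque

class Robot:
    """Chooses between two tasks ('red'/'green') purely from repeated
    local observations of other robots' current tasks, with no direct
    communication or global knowledge -- a toy version of the mechanism
    analyzed in the paper."""
    def __init__(self, memory=10):
        self.task = random.choice(['red', 'green'])
        self.obs = deque(maxlen=memory)   # sliding window of observations

    def observe(self, other_task):
        self.obs.append(other_task)

    def decide(self, target=0.5):
        """Threshold decision function: switch toward the task that the
        local estimate says is under-served relative to `target`."""
        if not self.obs:
            return
        frac_red = self.obs.count('red') / len(self.obs)
        self.task = 'green' if frac_red > target else 'red'

def simulate(n_robots=50, rounds=200, target=0.3, seed=1):
    """Random local encounters drive each robot's estimate; returns the
    final fraction of robots on the 'red' task."""
    random.seed(seed)
    robots = [Robot() for _ in range(n_robots)]
    for _ in range(rounds):
        for r in robots:
            r.observe(random.choice(robots).task)
            r.decide(target)
    return sum(r.task == 'red' for r in robots) / n_robots
```

The number of observations (the `memory` window) and the choice of decision function are exactly the two knobs whose effect on collective performance the paper analyzes; varying them in this sketch reproduces the qualitative trade-off between responsiveness and noise sensitivity.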
Resilience of multi-robot systems to physical masquerade attacks
The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formalization leverages inter-agent observations to facilitate introspective monitoring to guarantee resilience. Accepted manuscript.
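The vulnerability the abstract identifies can be made concrete with a toy check: a multi-agent plan is exposed to a physical masquerade attack at any step where some agent is outside every teammate's view, because it could deviate there unnoticed. The disc-shaped sensing model and waypoint plan format below are assumptions for illustration, not the paper's constraint-based formulation:

```python
import math

def unobserved_steps(plan, sense_range=2.0):
    """plan: dict agent -> list of (x, y) waypoints, all the same length.
    Returns (agent, t) pairs at which no other agent is within sensing
    range -- steps where that agent could deviate (masquerade) without
    the team noticing. A plan with no such pairs supports the kind of
    inter-agent introspective monitoring the paper's resilient
    formulation enforces (toy observation model assumed)."""
    horizon = len(next(iter(plan.values())))
    gaps = []
    for agent, path in plan.items():
        for t in range(horizon):
            seen = any(
                other != agent
                and math.dist(plan[other][t], path[t]) <= sense_range
                for other in plan)
            if not seen:
                gaps.append((agent, t))
    return gaps
```

In the resilient formulation, keeping this list empty becomes a constraint of the pathfinding problem itself rather than an after-the-fact check.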
Towards adaptive multi-robot systems: self-organization and self-adaptation
This publication is freely accessible with the permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation). The development of complex system ensembles that operate in uncertain environments is a major challenge, because system designers cannot fully specify the system during specification and development, before it is deployed. Natural swarm systems enjoy similar characteristics; yet, being self-adaptive and able to self-organize, these systems show beneficial emergent behaviour. Similar concepts can be extremely helpful for artificial systems, especially in multi-robot scenarios, which require such solutions in order to be applicable to highly uncertain real-world applications. In this article, we present a comprehensive overview of state-of-the-art solutions in emergent systems, self-organization, self-adaptation, and robotics. We discuss these approaches in the light of a framework for multi-robot systems and identify similarities, differences, missing links, and open gaps that have to be addressed in order to make this framework possible.
A Decentralized Mobile Computing Network for Multi-Robot Systems Operations
Collective animal behaviors are paradigmatic examples of fully decentralized
operations involving complex collective computations such as collective turns
in flocks of birds or collective harvesting by ants. These systems offer a
unique source of inspiration for the development of fault-tolerant and
self-healing multi-robot systems capable of operating in dynamic environments.
Specifically, swarm robotics has emerged and is growing significantly on these
premises. However, to date, most swarm robotics systems reported in the
literature involve basic computational tasks---averages and other algebraic
operations. In this paper, we introduce a novel Collective computing framework
based on the swarming paradigm, which exhibits the key innate features of
swarms: robustness, scalability and flexibility. Unlike Edge computing, the
proposed Collective computing framework is truly decentralized and does not
require user intervention or additional servers to sustain its operations. This
Collective computing framework is applied to the complex task of collective
mapping, in which multiple robots aim to cooperatively map a large area. Our
results confirm the effectiveness of the cooperative strategy, its robustness
to the loss of multiple units, as well as its scalability. Furthermore, the
topology of the interconnecting network is found to greatly influence the
performance of the collective action. Comment: Accepted for publication in Proc. 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference.
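The claimed properties — truly decentralized operation with no server, plus robustness to losing units — are well illustrated by the simplest collective computation, decentralized averaging over a neighbor graph. The update rule and step size below are a generic consensus sketch, not the paper's Collective computing framework:

```python
def consensus_step(values, neighbors, alpha=0.3):
    """One round of decentralized averaging: each robot nudges its value
    toward those of its graph neighbors. No server or user intervention
    is needed, and removing a unit (and its edges) leaves the remaining
    network functional -- the robustness property the abstract claims."""
    return [
        v + alpha * sum(values[j] - v for j in neighbors[i])
        for i, v in enumerate(values)]

def run_consensus(values, neighbors, rounds=200):
    """Iterate until the swarm agrees on the average of its inputs
    (convergence holds for connected graphs and small enough alpha)."""
    for _ in range(rounds):
        values = consensus_step(values, neighbors)
    return values
```

The abstract's observation that network topology strongly influences performance shows up here too: the same update converges at very different rates on a ring versus a densely connected graph.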
Masquerade attack detection through observation planning for multi-robot systems
The increasing adoption of autonomous mobile robots comes with
a rising concern over the security of these systems. In this work, we
examine the dangers that an adversary could pose in a multi-agent
robot system. We show that conventional multi-agent plans are
vulnerable to strong attackers masquerading as a properly functioning
agent. We propose a novel technique to incorporate attack
detection into the multi-agent path-finding problem through the
simultaneous synthesis of observation plans. We show that by
specially crafting the multi-agent plan, the induced inter-agent
observations can provide introspective monitoring guarantees; we
achieve guarantees that any adversarial agent that plans to break
the system-wide security specification must necessarily violate the
induced observation plan. Accepted manuscript.
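The detection side of this guarantee reduces to a simple runtime comparison: if every deviation must violate the synthesized observation plan, then any planned observation that fails to occur flags a suspect agent. The triple-based data model below is an assumption for illustration:

```python
def detect_masquerade(observation_plan, actual_observations):
    """observation_plan: set of (observer, observed, t) triples the
    synthesized multi-agent plan guarantees will occur; actual_observations:
    the triples actually reported at runtime. Returns the agents whose
    planned observations were missed -- candidates for a masquerading
    (deviating) agent, per the paper's guarantee that an adversary
    breaking the security specification must violate the observation
    plan. (Toy data model assumed.)"""
    missed = observation_plan - actual_observations
    return sorted({observed for (_observer, observed, _t) in missed})
```

The synthesis step is what makes this check sound: the observation plan is crafted together with the motion plan so that undetectable deviations are ruled out by construction.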