On the Minimal Set of Inputs Required for Efficient Neuro-Evolved Foraging
In this paper, we perform an ablation study of NEATFA, a neuro-evolved
foraging algorithm that has recently been shown to forage efficiently under
different resource distributions. Through selective disabling of input signals,
we identify a sufficiently minimal set of input features that contribute the
most toward determining search trajectories that favor high resource collection
rates. Our experiments reveal that, independent of how the resources
are distributed in the arena, the signals that impart to the controller the
ability to switch from searching for resources to transporting them back to the
nest are the most critical. Additionally, we find that pheromones play a key
role in boosting the performance of the controller by providing signals for
informed locomotion in search of unforaged resources.
Comment: Presented at BDA 2019 (co-located with PODC 2019).
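The ablation procedure summarized in this abstract can be sketched in a few lines of code. The snippet below is an illustrative reconstruction only: the feature names, the evaluate_forager placeholder, and the ranking criterion are assumptions made for the example, not details taken from the paper.

```python
# Illustrative sketch of a single-feature ablation study for a
# neuro-evolved foraging controller. All names here are hypothetical.

def ablation_study(features, evaluate):
    """Disable one input feature at a time and rank features by how much
    the resource collection rate drops when each one is removed."""
    baseline = evaluate(frozenset(features))
    drops = {}
    for feat in features:
        enabled = frozenset(f for f in features if f != feat)
        drops[feat] = baseline - evaluate(enabled)
    # Inputs whose removal causes the largest drop are the strongest
    # candidates for a minimal sufficient input set.
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical controller inputs (for illustration only).
INPUT_FEATURES = ["nest_direction", "carrying_resource",
                  "pheromone_level", "resource_detected"]

def evaluate_forager(enabled_features):
    """Placeholder: run the foraging simulation with only the given
    inputs enabled and return the mean resource collection rate."""
    raise NotImplementedError

# Usage (once evaluate_forager is implemented against a simulator):
# ranking = ablation_study(INPUT_FEATURES, evaluate_forager)
```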
ForMIC: Foraging via Multiagent RL with Implicit Communication
Multi-agent foraging (MAF) involves distributing a team of agents to search
an environment and extract resources from it. Many foraging algorithms use
biologically inspired signaling mechanisms, such as pheromones, to help agents
navigate from resources back to a central nest while relying on local sensing
only. However, these approaches often rely on predictable pheromone dynamics
and/or perfect robot localization. In nature, certain environmental factors
(e.g., heat or rain) can disturb or destroy pheromone trails, while imperfect
sensing can lead robots astray. In this work, we propose ForMIC, a distributed
reinforcement learning MAF approach that relies on pheromones as a way to endow
agents with implicit communication abilities via their shared environment.
Specifically, full agents involuntarily lay trails of pheromones as they move;
other agents can then measure the local levels of pheromones to guide their
individual decisions. We show how these stigmergic interactions among agents
can lead to a highly scalable, decentralized MAF policy that is naturally
resilient to common environmental disturbances, such as depleting resources and
sudden pheromone disappearance. We present simulation results that compare our
learning policy against existing state-of-the-art MAF algorithms, in a set of
experiments varying team sizes, number and placement of resources, and key
environmental disturbances. Our results demonstrate that our learned policy
outperforms these baselines, approaching the performance of a planner with full
observability and centralized agent allocation.
Preprint of the paper submitted to the IEEE Transactions on Robotics (T-RO)
special issue on
Resilience in Networked Robotic Systems in June 2020.
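The stigmergic mechanism described above, in which agents deposit pheromone into a shared environment and read local levels to guide their decisions, can be illustrated on a toy grid world. The class below is an assumed sketch; the grid size, evaporation rate, and sensing radius are arbitrary choices for the example and do not come from ForMIC.

```python
import numpy as np

class PheromoneGrid:
    """Toy shared environment for implicit communication: agents deposit
    pheromone as they move, and any agent can read local pheromone levels.
    Evaporation loosely models disturbances that weaken old trails."""

    def __init__(self, size=32, evaporation=0.02):
        self.field = np.zeros((size, size))
        self.evaporation = evaporation

    def deposit(self, pos, amount=1.0):
        # E.g., called for "full" agents carrying a resource back to the nest.
        self.field[pos] += amount

    def sense(self, pos, radius=1):
        # Local observation: pheromone levels in a small window around pos,
        # which an agent's policy can use alongside its other sensors.
        r, c = pos
        return self.field[max(r - radius, 0): r + radius + 1,
                          max(c - radius, 0): c + radius + 1].copy()

    def step(self):
        # Pheromone evaporates each timestep (heat, rain, etc.).
        self.field *= (1.0 - self.evaporation)
```

Because the only coupling between agents in this sketch is the shared pheromone field, the resulting behavior stays fully decentralized and degrades gracefully when trails evaporate or disappear, which mirrors the resilience argument made in the abstract.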