
Quadcopter drone formation control via onboard visual perception

Abstract

Quadcopter drone formation control is an important capability for fields such as area surveillance, search and rescue, agriculture, and reconnaissance. Of particular interest is formation control in environments where radio communications and/or GPS are denied or not sufficiently accurate for the application. To address this, we use vision as the sensing modality. We train an Hourglass Convolutional Neural Network (CNN) to discriminate between quadcopter and non-quadcopter pixels in a live video feed and use it to guide a formation of quadcopters. The CNN outputs "heatmaps": pixel-by-pixel likelihood estimates of the presence of a quadcopter. These heatmaps suffer from short-lived false detections. To mitigate them, we apply a variant of the Siamese network technique to consecutive frames, suppressing clutter and promoting temporal smoothness in the heatmaps. The heatmaps yield an estimate of the range and bearing to the other quadcopter(s), which we use to compute flight control commands and maintain the desired formation. We implement the algorithm on a single-board computer (ODROID XU4) with a standard webcam mounted on a quadcopter drone. Flight tests in a motion capture volume demonstrate successful formation control with two quadcopters in a leader-follower setup.
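
To make the final step concrete, below is a minimal Python sketch of how a detection heatmap could be turned into a range/bearing estimate and then into a leader-follower velocity command. It assumes a pinhole camera model, a known leader airframe width, and a simple proportional controller; the parameter names and values (FOCAL_PX, QUAD_WIDTH_M, the gains) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# --- Hypothetical camera and airframe parameters (assumptions, not from the paper) ---
FOCAL_PX = 600.0          # focal length in pixels
IMG_W, IMG_H = 640, 480   # webcam resolution
QUAD_WIDTH_M = 0.35       # physical width of the leader quadcopter, in meters

def detect_peak(heatmap, threshold=0.5):
    """Return (row, col) of the strongest detection, or None if below threshold."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if heatmap[idx] < threshold:
        return None
    return idx

def range_and_bearing(heatmap, threshold=0.5):
    """Estimate range (m) and bearing (rad) to the leader from a heatmap.

    Range uses the apparent width of the above-threshold blob via the
    pinhole model: range = focal * true_width / pixel_width.
    Bearing is the horizontal angle of the peak from the image center.
    """
    peak = detect_peak(heatmap, threshold)
    if peak is None:
        return None
    row, col = peak
    # Apparent width: extent of above-threshold pixels along the peak's row.
    cols = np.flatnonzero(heatmap[row] >= threshold)
    width_px = max(cols[-1] - cols[0] + 1, 1)
    rng = FOCAL_PX * QUAD_WIDTH_M / width_px
    bearing = np.arctan2(col - IMG_W / 2.0, FOCAL_PX)
    return rng, bearing

def follower_command(rng, bearing, desired_range=2.0, k_r=0.5, k_b=1.0):
    """Proportional control: forward velocity from range error, yaw rate from bearing."""
    v_forward = k_r * (rng - desired_range)
    yaw_rate = k_b * bearing
    return v_forward, yaw_rate
```

Estimating range from apparent size is a standard monocular heuristic; the paper's actual estimator and control law may differ from this sketch.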
