The deployment of an expert system running over a wireless acoustic sensor
network made up of bioacoustic monitoring devices that recognise bird species
from their sounds would enable the automation of many tasks of ecological
value, including the analysis of bird population composition or the detection
of endangered species in areas of environmental interest. Endowing these
devices with accurate audio classification capabilities is possible thanks to
the latest advances in artificial intelligence, among which deep learning
techniques excel. However, a key issue in making bioacoustic devices affordable
is the use of small-footprint deep neural networks that can be embedded in
resource- and battery-constrained hardware platforms. For this reason, this work
presents a critical comparative analysis between two heavyweight, large-footprint
deep neural networks (VGG16 and ResNet50) and a lightweight alternative,
MobileNetV2. Our experimental results reveal that MobileNetV2 achieves an
average F1-score less than 5\% lower than that of ResNet50 (0.789 vs. 0.834),
while outperforming VGG16 with a footprint nearly 40 times smaller.
Moreover, to compare the models, we have created and made public the Western
Mediterranean Wetland Birds dataset, comprising 5,795 audio excerpts (201.6
minutes in total) of 20 endemic bird species of the Aiguamolls de l'Empord\`a
Natural Park.
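
To illustrate the footprint gap discussed above, the following minimal Python sketch compares the parameter counts of the three architectures. It is an assumption on our part: it uses the stock Keras reference implementations with their default ImageNet classifier heads, which may differ from the exact models trained in the paper (input shape, 20-class head, etc.).

# Minimal illustrative sketch (not the authors' code): compare the parameter
# counts of the three backbones named in the abstract using the stock Keras
# reference implementations. The models trained in the paper may differ.
from tensorflow.keras.applications import VGG16, ResNet50, MobileNetV2

for name, builder in [("VGG16", VGG16), ("ResNet50", ResNet50),
                      ("MobileNetV2", MobileNetV2)]:
    model = builder(weights=None)  # architecture only, no pretrained weights
    params = model.count_params()
    print(f"{name}: {params / 1e6:.1f}M parameters "
          f"(~{params * 4 / 2**20:.0f} MiB at 32-bit floats)")

With these reference implementations, VGG16 has roughly 40 times as many parameters as MobileNetV2, which is consistent in spirit with the footprint comparison reported in the abstract, although the paper's exact model configurations may yield different absolute counts.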