Automated Visual Monitoring of Nocturnal Insects with Light-based Camera Traps
Automatic camera-assisted monitoring of insects for abundance estimations is
crucial to understand and counteract ongoing insect decline. In this paper, we
present two datasets of nocturnal insects, primarily moths (Lepidoptera),
photographed in Central Europe. The first, the EU-Moths dataset, was captured
manually by citizen scientists and provides species annotations for 200
different species together with bounding box annotations.
We used this dataset to develop and evaluate a two-stage pipeline for insect
detection and moth species classification in previous work. We further
introduce a prototype for an automated visual monitoring system. This prototype
produced the second dataset consisting of more than 27,000 images captured on
95 nights. For evaluation and bootstrapping purposes, we annotated a subset of
the images with bounding boxes enclosing the nocturnal insects. Finally, we present
first detection and classification baselines for these datasets and encourage
other scientists to use this publicly available data.
Comment: Presented at the FGVC workshop at the CVPR202
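The two-stage design (detect insects first, then classify each cutout) can be sketched as follows. The detector and classifier here are trivial stand-ins for illustration only, not the trained models from the paper, and the box coordinates are made up:

```python
import numpy as np

def detect_insects(image):
    """Stand-in detector returning bounding boxes as (x, y, w, h).
    A real pipeline would run a trained object detector here."""
    # Hypothetical fixed boxes for illustration.
    return [(10, 10, 32, 32), (60, 40, 32, 32)]

def classify_species(crop, class_names):
    """Stand-in classifier: scores each class from the crop's mean
    intensity. A real pipeline would use a trained CNN."""
    scores = np.array([crop.mean() * (i + 1) for i in range(len(class_names))])
    return class_names[int(np.argmax(scores))]

def two_stage_pipeline(image, class_names):
    """Detect insects, crop each box, and classify the cutout."""
    results = []
    for (x, y, w, h) in detect_insects(image):
        crop = image[y:y + h, x:x + w]
        results.append(((x, y, w, h), classify_species(crop, class_names)))
    return results

image = np.random.default_rng(0).random((128, 128))
species = ["Noctua pronuba", "Autographa gamma"]  # example names only
for box, name in two_stage_pipeline(image, species):
    print(box, name)
```

Separating detection from classification lets each stage be trained and evaluated on its own data, which is how the EU-Moths annotations (boxes plus species labels) are used.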
Keeping the Human in the Loop: Towards Automatic Visual Monitoring in Biodiversity Research
More and more methods in biodiversity research build on the new opportunities arising from modern sensing devices, which in principle make it possible to continuously record sensor data from the environment. However, while these devices make it easy to record huge amounts of data, evaluating the data is difficult, if not impossible, due to the enormous effort of manual inspection by researchers. At the same time, we observe impressive results in computer vision and machine learning that rest on two major developments: first, the increased performance of hardware, together with the advent of powerful graphics processing units for scientific computing; second, the huge amount of partially annotated image data provided by today's generation of Facebook and Twitter users, which is easily available through databases (e.g., Flickr) and search engines. For biodiversity applications, however, appropriate databases of annotated images are still missing.
In this presentation, we discuss methods already available from computer vision and machine learning, together with upcoming challenges in automatic monitoring for biodiversity research. We argue that the key
element for the success of any automatic method is the possibility of keeping the human in the loop - whether for correcting errors and improving the system's quality over time, for providing annotation data at moderate
effort, or for reasons of acceptance and validation. We therefore summarize existing techniques from active and lifelong learning, together with the enormous advances in automatic visual recognition of recent years. In addition, to allow detection of the unexpected, such an automatic system must be capable of finding anomalies or novel events in the data.
We discuss a generic framework for automatic monitoring in biodiversity research that has resulted from several years of collaboration between computer scientists and ecologists. The key ingredients of such a framework are an initial, generic classifier, for example a powerful deep learning architecture; active learning to reduce costly annotation effort by experts; fine-grained recognition to differentiate between visually very similar species; and efficient incremental updates of the classifier's model over time. For most of these challenges, we present initial solutions in sample applications. The results comprise the automatic evaluation of images from camera traps, attribute estimation for species, and monitoring of in-situ data in environmental science. Overall, we aim to demonstrate the potential and open issues in bringing together computer scientists and ecologists to open new research directions for both fields.
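Active learning with the human in the loop often amounts to asking experts to label only the samples the classifier is least sure about. A minimal sketch of such uncertainty sampling, assuming softmax outputs from some classifier (the probabilities below are invented for illustration):

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Select the k samples whose top predicted probability is lowest,
    i.e. the samples the classifier is least confident about."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Hypothetical softmax outputs for five unlabeled images over three classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # very confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],   # most uncertain
    [0.90, 0.05, 0.05],
])
print(uncertainty_sampling(probs, 2))  # indices of the two least confident samples
```

Only the selected samples are sent to the expert for annotation, which is how annotation effort stays moderate while the model improves over time.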
Generate (non-software) Bugs to Fool Classifiers
In adversarial attacks intended to confound deep learning models, most
studies have focused on limiting the magnitude of the modification so that
humans do not notice the attack. On the other hand, during an attack against
autonomous cars, for example, most drivers would not find it strange if a small
insect image were placed on a stop sign, or they may overlook it. In this
paper, we present a systematic approach to generate natural adversarial
examples against classification models by employing such natural-appearing
perturbations that imitate a certain object or signal. We first show the
feasibility of this approach in an attack against an image classifier by
employing generative adversarial networks that produce image patches that have
the appearance of a natural object to fool the target model. We also introduce
an algorithm to optimize placement of the perturbation in accordance with the
input image, which makes the generation of adversarial examples fast and likely
to succeed. Moreover, we experimentally show that the proposed approach can be
extended to the audio domain, for example, to generate perturbations that sound
like the chirping of birds to fool a speech classifier.
Comment: Accepted by AAAI 202
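The placement-optimization idea can be illustrated with a simple grid search over patch positions. The confidence function below is a stand-in for querying the real target model, and the exhaustive search is a sketch, not the paper's algorithm:

```python
import numpy as np

def paste_patch(image, patch, x, y):
    """Overlay the patch onto a copy of the image at (x, y)."""
    out = image.copy()
    ph, pw = patch.shape
    out[y:y + ph, x:x + pw] = patch
    return out

def model_confidence(image):
    """Stand-in for the target classifier's confidence in the true
    class. A real attack would query the actual model."""
    return float(image.mean())

def best_placement(image, patch, stride):
    """Grid-search the patch position that most reduces the stand-in
    confidence, mirroring placement optimization for natural patches."""
    ph, pw = patch.shape
    h, w = image.shape
    best = None
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            conf = model_confidence(paste_patch(image, patch, x, y))
            if best is None or conf < best[0]:
                best = (conf, x, y)
    return best

image = np.ones((32, 32))
patch = np.zeros((8, 8))   # "insect-like" patch, here just a dark square
conf, x, y = best_placement(image, patch, stride=8)
print(conf, x, y)
```

Input-dependent placement is what makes the attack both fast and likely to succeed: the same patch can be far more effective in some image regions than in others.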
Deep Learning Pipeline for Automated Visual Moth Monitoring: Insect Localization and Species Classification
Biodiversity monitoring is crucial for tracking and counteracting adverse
trends in population fluctuations. However, automatic recognition systems have
rarely been applied so far, and experts still evaluate the generated masses of data manually.
Especially the support of deep learning methods for visual monitoring is not
yet established in biodiversity research, compared to other areas like
advertising or entertainment. In this paper, we present a deep learning
pipeline for analyzing images captured by a moth scanner, an automated visual
monitoring system of moth species developed within the AMMOD project. We first
localize individuals with a moth detector and afterward determine the species
of detected insects with a classifier. Our detector achieves up to 99.01% mean
average precision and our classifier distinguishes 200 moth species with an
accuracy of 93.13% on image cutouts depicting single insects. Combining both in
our pipeline improves the accuracy for species identification in images of the
moth scanner from 79.62% to 88.05%.
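Assuming, as a rough decomposition not stated in the abstract, that end-to-end accuracy factors into detection success times classification accuracy on correct crops, the reported numbers imply how much the detection stage costs:

```python
# Back-of-envelope check of the pipeline numbers. The multiplicative
# decomposition (end-to-end ≈ detection success × classification
# accuracy on correct crops) is an assumption for illustration.
classifier_acc = 0.9313   # reported accuracy on single-insect cutouts
pipeline_acc = 0.8805     # reported end-to-end pipeline accuracy
implied_detection = pipeline_acc / classifier_acc
print(round(implied_detection, 3))  # ≈ 0.945
```

Under this assumption, roughly 94.5% of insects would need to be localized well enough for the classifier to succeed, which is consistent with the high reported detection mAP.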