32 research outputs found

    Crowd-sourced plant occurrence data provide a reliable description of macroecological gradients

    Deep learning algorithms classify plant species with high accuracy, and smartphone applications leverage this technology to enable users to identify plant species in the field. The question we address here is whether such crowd-sourced data contain substantial macroecological information. In particular, we aim to understand whether we can detect known environmental gradients shaping plant co-occurrences. In this study we analysed 1 million data points collected through the mobile app Flora Incognita between 2018 and 2019 in Germany and compared them with Florkart, which contains plant occurrence data collected by more than 5000 floristic experts over a 70-year period. The direct comparison of the two data sets reveals that the crowd-sourced data particularly undersample areas of low population density. However, using nonlinear dimensionality reduction we were able to uncover macroecological patterns in both data sets that correspond well to each other. Mean annual temperature, temperature seasonality and wind dynamics, as well as soil water content and soil texture, represent the most important gradients shaping species composition in both data collections. Our analysis illustrates how automated species identification could soon enable near real-time monitoring of macroecological patterns and their changes, and discusses biases that must be carefully considered before crowd-sourced biodiversity data can effectively guide conservation measures.
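Before gradients can be extracted via dimensionality reduction, occurrence records are typically aggregated into a grid-cell-by-species presence matrix. A minimal sketch of that preprocessing step, assuming simple (species, latitude, longitude) records and a hypothetical 0.1-degree grid resolution:

```python
from collections import defaultdict

def presence_matrix(records, cell_size=0.1):
    """Aggregate (species, lat, lon) occurrence records into a
    grid-cell x species presence/absence matrix.
    cell_size is the grid resolution in degrees (hypothetical value)."""
    cells = defaultdict(set)
    for species, lat, lon in records:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cells[cell].add(species)
    species_list = sorted({s for members in cells.values() for s in members})
    index = {s: i for i, s in enumerate(species_list)}
    cell_ids = sorted(cells)
    matrix = []
    for cell in cell_ids:
        row = [0] * len(species_list)
        for s in cells[cell]:
            row[index[s]] = 1
        matrix.append(row)
    return cell_ids, species_list, matrix

# Toy occurrence records (species and coordinates are illustrative)
records = [
    ("Bellis perennis", 50.93, 11.58),
    ("Quercus robur", 50.93, 11.59),
    ("Bellis perennis", 52.52, 13.40),
]
cell_ids, species, M = presence_matrix(records)
```

Each row of the resulting matrix describes the species composition of one grid cell; nonlinear dimensionality reduction (as used in the study) would then embed these rows into a low-dimensional gradient space.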

    Image-based automated recognition of 31 Poaceae species: the most relevant perspectives

    Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups considered difficult to identify because they lack obvious visual characteristics. Poaceae represent an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence while the others show vegetative parts of the plant such as the collar region with the ligule, adaxial and abaxial sides of the leaf, and culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extract feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains and classify them using four state-of-the-art classification algorithms. We combine these perspectives via score-level fusion. In order to evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations either comprising inflorescences or not. We find that for a fusion of all six perspectives, using the best combination of feature extraction CNN and classifier, an accuracy of 96.1% can be achieved. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be automatically identified with high accuracy as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
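The score-level fusion described above combines the per-class scores produced for each perspective into a single decision. A minimal sketch, assuming per-perspective classifier outputs are simple score dictionaries (species names and values are hypothetical):

```python
def fuse_scores(per_perspective_scores):
    """Score-level fusion: average the classifier's per-class scores
    over all available perspectives, then pick the best class."""
    fused = {}
    n = len(per_perspective_scores)
    for scores in per_perspective_scores:
        for species, s in scores.items():
            fused[species] = fused.get(species, 0.0) + s / n
    return max(fused, key=fused.get), fused

# Hypothetical softmax scores for three grass species from three perspectives
scores = [
    {"Poa annua": 0.5, "Lolium perenne": 0.3, "Festuca rubra": 0.2},  # ligule
    {"Poa annua": 0.4, "Lolium perenne": 0.5, "Festuca rubra": 0.1},  # leaf, adaxial
    {"Poa annua": 0.6, "Lolium perenne": 0.2, "Festuca rubra": 0.2},  # culm node
]
best, fused = fuse_scores(scores)
```

Note how the second perspective alone would favour a different species; averaging across perspectives lets consistent evidence dominate, which is why omitting even the inflorescence still yields high accuracy.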

    The Flora Incognita app - interactive plant species identification

    Being able to identify plant species is an important factor in understanding biodiversity and its change due to natural and anthropogenic drivers. We discuss the freely available Flora Incognita app for Android, iOS and HarmonyOS devices, which allows users to interactively identify plant species and capture their observations. Specifically developed deep learning algorithms, trained on an extensive repository of plant observations, classify plant images with unprecedented accuracy. By using this technology in a context-adaptive and interactive identification process, users are now able to reliably identify plants regardless of their botanical knowledge level. Users benefit from an intuitive interface and supplementary educational materials. The captured observations, in combination with their metadata, provide a rich resource for researching, monitoring and understanding plant diversity. Mobile applications such as Flora Incognita stimulate the successful interplay of citizen science, conservation and education.

    Plant species classification using flower images - a comparative study of local feature representations

    Steady improvements in image description methods have induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years, and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets utilized for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques, and that a well-chosen combination of them allows for high accuracy in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. In sum, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification.
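The encoding step in such a pipeline maps a variable-sized set of local feature descriptors to a fixed-length image representation. A minimal sketch of the classic bag-of-visual-words variant (a simpler stand-in for the advanced encodings compared in the study), with a toy two-word codebook and hypothetical 2-D descriptors:

```python
import math

def bow_histogram(descriptors, codebook):
    """Encode a set of local feature descriptors as a bag-of-visual-words
    histogram: each descriptor votes for its nearest codebook centroid."""
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda i: math.dist(d, codebook[i]))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]  # L1-normalised histogram

codebook = [(0.0, 0.0), (1.0, 1.0)]  # toy 2-word codebook (real ones have thousands)
descriptors = [(0.1, 0.2), (0.9, 0.8), (1.1, 1.0), (0.0, 0.1)]
hist = bow_histogram(descriptors, codebook)
```

The resulting fixed-length histogram can then be fed to any standard classifier, regardless of how many local features each image produced.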

    Emerging technologies revolutionise insect ecology and monitoring

    Insects are the most diverse group of animals on Earth, but their small size and high diversity have always made them challenging to study. Recent technological advances have the potential to revolutionise insect ecology and monitoring. We describe the state of the art of four technologies (computer vision, acoustic monitoring, radar, and molecular methods), and assess their advantages, current limitations, and future potential. We discuss how these technologies can adhere to modern standards of data curation and transparency, their implications for citizen science, and their potential for integration among different monitoring programmes and technologies. We argue that they provide unprecedented possibilities for insect ecology and monitoring, but it will be important to foster international standards via collaboration.

    Flora Incognita – more than plant identification

    A plant by the wayside, a smartphone and a pinch of curiosity – that is all it takes today to identify wild-growing plants. Flora Incognita is a plant identification app that makes exactly that possible.

    Combining high-throughput imaging flow cytometry and deep learning for efficient species and life-cycle stage identification of phytoplankton

    Abstract Background Phytoplankton species identification and counting is a crucial step of water quality assessment. Drinking water reservoirs in particular, as well as bathing and ballast water, need to be regularly monitored for harmful species. In times of multiple environmental threats like eutrophication, climate warming and the introduction of invasive species, more intensive monitoring would help to develop adequate measures. However, traditional methods such as microscopic counting by experts or high-throughput flow cytometry based on scattering and fluorescence signals are either too time-consuming or too inaccurate for species identification tasks. The combination of high-quality microscopy with high throughput and the latest developments in machine learning techniques can overcome this hurdle. Results In this study, image-based cytometry was used to collect ~47,000 images for brightfield and Chl a fluorescence at 60× magnification for nine common freshwater species of nano- and micro-phytoplankton. A deep neural network trained on these images was applied to identify the species and the corresponding life-cycle stage during batch cultivation. The results show the high potential of this approach, where species identity and the respective life-cycle stage could be predicted with a high accuracy of 97%. Conclusions These findings could pave the way for reliable and fast phytoplankton species determination of indicator species as a crucial step in water quality assessment.
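The 97% figure above is an overall accuracy; for monitoring indicator species, the per-class breakdown matters just as much, since a rare harmful species can be systematically misclassified while the overall score stays high. A minimal sketch of that evaluation, with hypothetical species/stage labels:

```python
from collections import Counter

def accuracy_report(pairs):
    """Overall and per-label accuracy for (true, predicted) label pairs,
    where a label combines species and life-cycle stage."""
    correct = Counter()
    total = Counter()
    for true, pred in pairs:
        total[true] += 1
        if true == pred:
            correct[true] += 1
    overall = sum(correct.values()) / len(pairs)
    per_label = {label: correct[label] / total[label] for label in total}
    return overall, per_label

# Hypothetical predictions: (true, predicted) combined species/stage labels
pairs = [
    ("Chlorella/vegetative", "Chlorella/vegetative"),
    ("Chlorella/dividing", "Chlorella/vegetative"),
    ("Scenedesmus/vegetative", "Scenedesmus/vegetative"),
    ("Scenedesmus/vegetative", "Scenedesmus/vegetative"),
]
overall, per_label = accuracy_report(pairs)
```

Here the overall accuracy is 75%, yet one life-cycle stage is never predicted correctly, which the per-label report makes visible.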

    Image-based classification of plant genus and family for trained and untrained plant species

    Abstract Background Modern plant taxonomy reflects phylogenetic relationships among taxa based on proposed morphological and genetic similarities. However, taxonomic relation is not necessarily reflected by close overall resemblance, but rather by commonality of very specific morphological characters or similarity on the molecular level. It is an open research question to what extent phylogenetic relations within higher taxonomic levels such as genera and families are reflected by shared visual characters of the constituting species. As a consequence, it is even more questionable whether the taxonomy of plants at these levels can be identified from images using machine learning techniques. Results Whereas previous studies on automated plant identification from images focused on the species level, we investigated classification at higher taxonomic levels such as genera and families. We used images of 1000 plant species that are representative of the flora of Western Europe. We tested how accurately a visual representation of genera and families can be learned from images of their species in order to identify the taxonomy of species included in and excluded from learning. Using natural images with random content, roughly 500 images per species are required for accurate classification. The classification accuracy for 1000 species amounts to 82.2% and increases to 85.9% and 88.4% at genus and family level. Classifying species excluded from training, the accuracy drops significantly to 38.3% and 38.7% at genus and family level. Excluded species of well-represented genera and families can be classified with 67.8% and 52.8% accuracy. Conclusion Our results show that shared visual characters are indeed present at higher taxonomic levels. They are most prominently preserved in flowers and leaves, and enable state-of-the-art classification algorithms to learn accurate visual representations of plant genera and families. Given a sufficient amount and composition of training data, we show that this allows for high classification accuracy, increasing with the taxonomic level and even facilitating the taxonomic identification of species excluded from the training process.
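One way to compute such level-wise accuracies is to roll each species prediction up through a taxonomy table and score agreement at every level. A minimal sketch, assuming a simple species-to-(genus, family) mapping (the taxonomy and predictions below are illustrative, not the study's data):

```python
def rollup_accuracy(pairs, taxonomy):
    """Accuracy at species, genus and family level: a prediction counts
    as correct at a higher level if it maps to the same genus/family."""
    levels = {"species": lambda s: s,
              "genus": lambda s: taxonomy[s][0],
              "family": lambda s: taxonomy[s][1]}
    return {name: sum(f(t) == f(p) for t, p in pairs) / len(pairs)
            for name, f in levels.items()}

# Hypothetical taxonomy: species -> (genus, family)
taxonomy = {
    "Quercus robur": ("Quercus", "Fagaceae"),
    "Quercus petraea": ("Quercus", "Fagaceae"),
    "Fagus sylvatica": ("Fagus", "Fagaceae"),
    "Bellis perennis": ("Bellis", "Asteraceae"),
}
pairs = [
    ("Quercus robur", "Quercus robur"),      # correct at all levels
    ("Quercus robur", "Quercus petraea"),    # wrong species, right genus
    ("Fagus sylvatica", "Quercus robur"),    # right family only
    ("Bellis perennis", "Fagus sylvatica"),  # wrong everywhere
]
acc = rollup_accuracy(pairs, taxonomy)
```

In this toy example the accuracy rises from 25% at species level to 50% at genus and 75% at family level, mirroring the pattern reported in the abstract: confusions tend to occur between taxonomically close, visually similar species.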

    Jena Leaf Images 17

    2902 leaf images of 17 wild-flowering angiosperm species found on semi-arid grasslands around the city of Jena in Germany. Images were systematically sampled under varying conditions in terms of natural/plain background, leaf top/back side, and illumination, and were used for a computer vision study [1]. See the file „images_info.csv“ for the images’ annotations. Each image is enriched by a crop of the relevant image part along with a binary segmentation mask. See also our corresponding flower images dataset, „Jena Flowers 30“ [2,3]. [1] https://doi.org/10.1186/s13007-017-0245-8 [2] http://dx.doi.org/10.7910/DVN/QDHYST [3] https://doi.org/10.1371/journal.pone.017062

    Automated plant species identification—Trends and future directions - Fig 3

    Visual variation of Lapsana communis's flower throughout the day from two perspectives (left) and visual variation of Centaurea pseudophrygia's flower throughout the season and flowering stage (right).