34 research outputs found

    Crowd-sourced plant occurrence data provide a reliable description of macroecological gradients

    Deep learning algorithms classify plant species with high accuracy, and smartphone applications leverage this technology to enable users to identify plant species in the field. The question we address here is whether such crowd-sourced data contain substantial macroecological information. In particular, we aim to understand if we can detect known environmental gradients shaping plant co-occurrences. In this study we analysed 1 million data points collected through the mobile app Flora Incognita between 2018 and 2019 in Germany and compared them with Florkart, a database containing plant occurrence data collected by more than 5000 floristic experts over a 70-year period. The direct comparison of the two data sets reveals that the crowd-sourced data particularly undersample areas of low population density. However, using nonlinear dimensionality reduction we were able to uncover macroecological patterns in both data sets that correspond well to each other. Mean annual temperature, temperature seasonality and wind dynamics, as well as soil water content and soil texture, represent the most important gradients shaping species composition in both data collections. Our analysis describes one way in which automated species identification could soon enable near real-time monitoring of macroecological patterns and their changes, but also discusses biases that must be carefully considered before crowd-sourced biodiversity data can effectively guide conservation measures.
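    A minimal sketch of how such a gradient analysis could look, assuming a grid-cell by species presence matrix and t-SNE as a stand-in for the unnamed nonlinear dimensionality reduction method; the file and column names are hypothetical, not the authors' pipeline:

```python
# Hypothetical sketch: recovering macroecological gradients from
# crowd-sourced occurrence records via nonlinear dimensionality reduction.
# The input file, column names, and the choice of t-SNE are assumptions.
import pandas as pd
from sklearn.manifold import TSNE

# Occurrence records: one row per observation with a spatial grid-cell id.
records = pd.read_csv("flora_incognita_records.csv")  # hypothetical file

# Grid-cell x species presence/absence matrix.
presence = pd.crosstab(records["grid_cell"], records["species"]).clip(upper=1)

# Embed grid cells into two dimensions; cells with similar species
# composition end up close together, so the embedding axes can then be
# correlated with climate and soil variables (temperature, wind, soil water).
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
    presence.values
)
```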

    Image-based automated recognition of 31 Poaceae species: the most relevant perspectives

    Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups considered difficult to identify because they lack obvious visual characteristics. Poaceae represent an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence while the others show vegetative parts of the plant such as the collar region with the ligule, the adaxial and abaxial sides of the leaf, and culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extracted feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains, classified them using four state-of-the-art classification algorithms, and combined the perspectives via score-level fusion. To evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations either comprising inflorescences or not. We find that for a fusion of all six perspectives, using the best combination of feature extraction CNN and classifier, an accuracy of 96.1% can be achieved. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be identified automatically with high accuracy as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
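    The score-level fusion step lends itself to a brief illustration. The sketch below assumes per-perspective class-probability vectors and a simple mean fusion rule; the perspective names are illustrative, and the feature-extraction CNNs and classifiers evaluated in the study are not reproduced here:

```python
# Minimal sketch of score-level fusion across image perspectives.
# Perspective names and the mean fusion rule are assumptions for illustration.
import numpy as np

PERSPECTIVES = ["inflorescence", "ligule_front", "leaf_adaxial",
                "leaf_abaxial", "culm_node", "collar_region"]

def fuse_scores(per_perspective_scores: dict) -> int:
    """Average class-probability vectors over the available perspectives
    and return the index of the winning species."""
    stacked = np.stack([per_perspective_scores[p] for p in PERSPECTIVES
                        if p in per_perspective_scores])
    fused = stacked.mean(axis=0)  # simple mean fusion
    return int(np.argmax(fused))

# Example: fusing only vegetative perspectives mimics the non-flowering
# setting; 31 classes correspond to the 31 Poaceae species.
rng = np.random.default_rng(0)
scores = {p: rng.dirichlet(np.ones(31)) for p in PERSPECTIVES[1:]}
print(fuse_scores(scores))
```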

    Plant species classification using flower images - a comparative study of local feature representations

    Steady improvements of image description methods have induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years, and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets utilized for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, which would allow for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification accuracy at lower computational cost compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially when preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification.
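    As an illustration of one pipeline variant of the kind compared in such a study, the following sketch combines SIFT keypoint detection with a plain bag-of-visual-words encoding and an SVM. Parameter values are assumptions; the more advanced encodings and the color descriptors highlighted above are omitted for brevity:

```python
# Illustrative local-feature pipeline: SIFT detection + bag-of-visual-words
# encoding + SVM classification. A sketch under stated assumptions, not the
# paper's exact configuration.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_descriptors(image_path: str) -> np.ndarray:
    """Detect keypoints and return their 128-d SIFT descriptors."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def encode(desc: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Hard-assignment bag-of-visual-words histogram, L1-normalised."""
    hist = np.bincount(codebook.predict(desc), minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)

def train_pipeline(train_paths: list, train_labels: list):
    """Build a codebook from all training descriptors, encode each image,
    and fit an RBF-kernel SVM on the resulting histograms."""
    all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
    codebook = KMeans(n_clusters=256, random_state=0).fit(all_desc)
    X = np.array([encode(sift_descriptors(p), codebook) for p in train_paths])
    return codebook, SVC(kernel="rbf").fit(X, train_labels)
```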

    The Flora Incognita app - interactive plant species identification

    Being able to identify plant species is an important factor for understanding biodiversity and its change due to natural and anthropogenic drivers. We discuss the freely available Flora Incognita app for Android, iOS and Harmony OS devices, which allows users to interactively identify plant species and capture their observations. Specifically developed deep learning algorithms, trained on an extensive repository of plant observations, classify plant images with unprecedented accuracy. By using this technology in a context-adaptive and interactive identification process, users are now able to reliably identify plants regardless of their botanical knowledge level. Users benefit from an intuitive interface and supplementary educational materials. The captured observations, in combination with their metadata, provide a rich resource for researching, monitoring and understanding plant diversity. Mobile applications such as Flora Incognita stimulate the successful interplay of citizen science, conservation and education.

    Removing subordinate species in a biodiversity experiment to mimic observational field studies

    Background: Positive effects of plant species richness on community biomass in biodiversity experiments are often stronger than those from observational field studies. This may be because experiments are initiated with randomly assembled species compositions, whereas field communities have experienced filtering. Methods: We compared aboveground biomass production of randomly assembled communities of 2–16 species (controls) with experimentally filtered communities from which subordinate species were removed, resulting in removal communities of 1–8 species. Results: Removal communities had (1) 12.6% higher biomass than the control communities from which they were derived, that is, communities with double the species richness, and (2) 32.0% higher biomass than control communities of equal richness. These differences were maintained along the richness gradient. The increased productivity of removal communities was paralleled by increased species evenness and complementarity. Conclusions: Result (1) indicates that subordinate species can reduce community biomass production, suggesting a possible explanation for why the most diverse field communities sometimes do not have the highest productivity. Result (2) suggests that if a community of S species has been derived by filtering from a pool of 2S randomly chosen species, it is more productive than a community derived from a pool of S randomly chosen species without filtering.

    Image-based classification of plant genus and family for trained and untrained plant species

    Background: Modern plant taxonomy reflects phylogenetic relationships among taxa based on proposed morphological and genetic similarities. However, taxonomical relation is not necessarily reflected by close overall resemblance, but rather by commonality of very specific morphological characters or similarity on the molecular level. It is an open research question to what extent phylogenetic relations within higher taxonomic levels such as genera and families are reflected by shared visual characters of the constituting species. As a consequence, it is even more questionable whether the taxonomy of plants at these levels can be identified from images using machine learning techniques. Results: Whereas previous studies on automated plant identification from images focused on the species level, we investigated classification at higher taxonomic levels such as genera and families. We used images of 1000 plant species that are representative of the flora of Western Europe. We tested how accurately a visual representation of genera and families can be learned from images of their species in order to identify the taxonomy of species included in and excluded from learning. Using natural images with random content, roughly 500 images per species are required for accurate classification. The classification accuracy for 1000 species amounts to 82.2% and increases to 85.9% and 88.4% at the genus and family level, respectively. When classifying species excluded from training, accuracy drops significantly to 38.3% and 38.7% at the genus and family level. Excluded species of well-represented genera and families can be classified with 67.8% and 52.8% accuracy, respectively. Conclusion: Our results show that shared visual characters are indeed present at higher taxonomic levels. Most dominantly, they are preserved in flowers and leaves, and they enable state-of-the-art classification algorithms to learn accurate visual representations of plant genera and families. Given a sufficient amount and composition of training data, we show that this allows for high classification accuracy, increasing with the taxonomic level and even facilitating the taxonomic identification of species excluded from the training process.
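    One conceivable way to derive genus- or family-level predictions from a species-level classifier is to sum the species probabilities within each higher taxon. The sketch below is an assumption for illustration only, not necessarily the procedure used in the study, which learns genus and family representations directly from images of their species:

```python
# Illustrative roll-up of species probabilities to a higher taxonomic level.
# The mapping and example values are hypothetical.
import numpy as np

def rollup(species_probs: np.ndarray, species_to_taxon: list) -> dict:
    """Aggregate a species probability vector to genus or family level.

    species_probs    -- softmax output over species, shape (n_species,)
    species_to_taxon -- genus or family name for each species index
    """
    totals = {}
    for prob, taxon in zip(species_probs, species_to_taxon):
        totals[taxon] = totals.get(taxon, 0.0) + float(prob)
    return totals

# Example with three species from two genera.
probs = np.array([0.6, 0.3, 0.1])
genera = ["Centaurea", "Centaurea", "Lapsana"]
totals = rollup(probs, genera)
print(max(totals, key=totals.get))  # predicted genus: "Centaurea"
```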

    Opportunistic plant observations reveal spatial and temporal gradients in phenology

    Opportunistic plant records provide a rapidly growing source of spatiotemporal plant observation data. Here, we used such data to explore whether they can be used to detect changes in species phenologies. Examining 19 herbaceous and one woody plant species in two consecutive years across Europe, we observed significant shifts in their flowering phenology, which were more pronounced for spring-flowering species (6-17 days) than for summer-flowering species (1-6 days). Moreover, we show that these data are suitable for modelling large-scale relationships such as “Hopkins’ bioclimatic law”, which quantifies the phenological delay with increasing elevation, latitude, and longitude. We observe elevational shifts ranging from –5 to 50 days per 1000 m, latitudinal shifts ranging from –1 to 4 days per degree northwards, and longitudinal shifts ranging from –1 to 1 day per degree eastwards, depending on the species. Our findings show that the increasing volume of purely opportunistic plant observation data already provides reliable phenological information and can therefore be used to support global, high-resolution phenology monitoring in the face of ongoing climate change.
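    Shifts of this kind can be quantified with a per-species least-squares fit of flowering date against elevation, latitude, and longitude. The following sketch is a hedged illustration; the input file, column names, and the use of ordinary least squares are assumptions:

```python
# Hypothetical sketch: Hopkins'-law-style phenological gradients estimated
# per species by ordinary least squares.
import numpy as np
import pandas as pd

# Hypothetical file with columns:
# species, day_of_year, elevation_m, latitude_deg, longitude_deg
obs = pd.read_csv("flowering_observations.csv")

def phenology_gradients(df: pd.DataFrame) -> dict:
    """Least-squares coefficients: days of delay per 1000 m elevation,
    per degree northwards, and per degree eastwards."""
    X = np.column_stack([
        df["elevation_m"] / 1000.0,
        df["latitude_deg"],
        df["longitude_deg"],
        np.ones(len(df)),  # intercept
    ])
    coef, *_ = np.linalg.lstsq(X, df["day_of_year"].to_numpy(), rcond=None)
    return {"per_1000m": coef[0], "per_deg_north": coef[1],
            "per_deg_east": coef[2]}

shifts = {sp: phenology_gradients(g) for sp, g in obs.groupby("species")}
```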

    Automated plant species identification—Trends and future directions - Fig 3

    Visual variation of Lapsana communis's flower throughout the day from two perspectives (left) and visual variation of Centaurea pseudophrygia's flower throughout the season and flowering stage (right).

    Flora Capture: a citizen science application for collecting structured plant observations

    Digital plant images are becoming increasingly important. First, given a large number of images, deep learning algorithms can be trained to automatically identify plants. Second, structured image-based observations provide information about plant morphological characteristics. Finally, in the course of digitalization, digital plant collections are receiving growing interest in schools and universities.