
    Flowers, leaves or both? How to obtain suitable images for automated plant identification

    Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives, and which combinations of perspectives, contain the most characteristic information and therefore allow for the highest identification accuracy. Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual from predefined perspectives (entire plant, flower frontal and lateral view, leaf top and back side view). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and determine the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. Among the single perspectives, the flower frontal view achieved the highest accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent. Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations comprising at least the frontal and lateral perspectives of flowers and the leaf top view.
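    The score-level fusion step mentioned above can be illustrated with a minimal sketch. It assumes that each perspective-specific CNN already outputs a softmax score vector over the species; the simple (optionally weighted) averaging rule and the toy numbers are illustrative assumptions, not the study's exact fusion rule.

```python
# Minimal sketch of score-level fusion across image perspectives (illustrative,
# not the study's exact implementation). Each perspective-specific CNN is assumed
# to output one softmax score vector per observation.
import numpy as np

def fuse_scores(per_perspective_scores, weights=None):
    """Fuse softmax score vectors (one per perspective) into a single prediction."""
    scores = np.stack(per_perspective_scores)         # (n_perspectives, n_species)
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)  # plain average over perspectives
    fused = np.average(scores, axis=0, weights=weights)
    return fused, int(np.argmax(fused))               # fused scores and top-1 species index

# Toy example: three perspectives, four species (made-up numbers)
flower_frontal = np.array([0.70, 0.20, 0.05, 0.05])
flower_lateral = np.array([0.40, 0.45, 0.10, 0.05])
leaf_top       = np.array([0.55, 0.15, 0.25, 0.05])
fused, top1 = fuse_scores([flower_frontal, flower_lateral, leaf_top])
print(fused.round(3), top1)  # perspectives that individually disagree can still agree after fusion
```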

    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

    Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. We therefore propose our semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner. (Accepted for publication in Nature Communications.)
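    The core idea behind Spectral Relevance Analysis can be sketched roughly as below: cluster per-sample explanation (relevance) maps so that groups of predictions relying on similar image regions, including potential "Clever Hans" strategies, surface for manual inspection. The attribution method (LRP), the normalisation, the cluster count and the scikit-learn SpectralClustering call are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch in the spirit of Spectral Relevance Analysis: cluster relevance maps so
# that predictions explained by similar image regions end up in the same group.
# Assumes relevance maps were precomputed with an attribution method such as LRP.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_relevance_maps(relevance_maps, n_clusters=4):
    """relevance_maps: array of shape (n_samples, H, W) with per-pixel attributions."""
    n = relevance_maps.shape[0]
    features = relevance_maps.reshape(n, -1).astype(float)
    # Normalise each map so clustering reflects the spatial pattern, not its magnitude.
    features = features / (np.abs(features).sum(axis=1, keepdims=True) + 1e-12)
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors", random_state=0
    ).fit_predict(features)
    return labels  # clusters are then inspected manually for spurious strategies

# Stand-in data only; replace with real relevance maps from a trained classifier.
rng = np.random.default_rng(0)
print(np.bincount(cluster_relevance_maps(rng.random((40, 32, 32)))))
```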

    Plant image identification application demonstrates high accuracy in Northern Europe


    Deep learning in plant phenological research: A systematic literature review

    Climate change represents one of the most critical threats to biodiversity, with far-reaching consequences for species interactions, the functioning of ecosystems, and the assembly of biotic communities. Plant phenology research has gained increasing attention as the timing of periodic events in plants is strongly affected by seasonal and interannual climate variation. Recent technological developments allow us to gather invaluable data at a variety of spatial and ecological scales. The feasibility of phenological monitoring today and in the future depends heavily on developing tools capable of efficiently analyzing these enormous amounts of data. Deep neural networks learn representations from data with impressive accuracy and have led to significant breakthroughs in, e.g., image processing. This article is the first systematic literature review aiming to thoroughly analyze all primary studies on deep learning approaches in plant phenology research. In a multi-stage process, we selected 24 peer-reviewed studies published in the last five years (2016–2021). After carefully analyzing these studies, we describe the applied methods, categorized according to the studied phenological stages, vegetation type, spatial scale, and data acquisition and deep learning methods. Furthermore, we identify and discuss research trends and highlight promising future directions. We present a systematic overview of previously applied methods on different tasks that can guide this emerging, complex research field.

    Towards more effective identification keys: A study of people identifying plant species characters

    Accurate species identification is essential for ecological monitoring and biodiversity conservation. Interactive plant identification keys have been considerably improved in recent years, mainly by providing iconic symbols, illustrations, or images for the users, as these keys are also commonly used by people with relatively little plant knowledge. Only a few studies have investigated how well morphological characteristics can be recognized and correctly identified by people, which is ultimately the basis of an identification key's success. This study consists of a systematic evaluation of people's abilities in identifying plant-specific morphological characters. We conducted an online survey in which 484 participants were asked to identify 25 different plant character states on six images showing a plant from different perspectives. We found that survey participants correctly identified 79% of the plant characters, with botanical novices with little or no previous experience in plant identification performing slightly worse than experienced botanists. We also found that flower characters are more often correctly identified than leaf characters, and that characters with more states resulted in higher identification error rates. Additionally, the longer a participant needed to answer, the higher the probability of a wrong answer. Understanding what influences users' plant character identification abilities can improve the development of interactive identification keys, for example by designing keys that adapt to novices as well as experts. Furthermore, our study can act as a blueprint for the empirical evaluation of identification keys.

    Image-based automated recognition of 31 Poaceae species: the most relevant perspectives

    Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups that are considered difficult to identify because they lack obvious visual characteristics. Poaceae represent an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence, while the others show vegetative parts of the plant, such as the collar region with the ligule, the adaxial and abaxial sides of the leaf, and the culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extract feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains and classify them using four state-of-the-art classification algorithms. We combine the perspectives via score-level fusion. In order to evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations with and without the inflorescence. We find that, for a fusion of all six perspectives and the best combination of feature extraction CNN and classifier, an accuracy of 96.1% can be achieved. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can achieve high accuracies in automatic identification as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
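    The two-stage recipe described above (pretrained CNN as feature extractor, separate classifier on top, per-perspective scores fed into fusion) can be sketched as follows. The choice of a torchvision ResNet-50 backbone with ImageNet weights and a linear SVM is an illustrative assumption; the study evaluated several CNNs and classifiers.

```python
# Sketch of CNN feature extraction plus a separate classifier (illustrative choice
# of backbone and classifier; the study compared several of each).
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.svm import SVC

weights = ResNet50_Weights.IMAGENET1K_V2
preprocess = weights.transforms()
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the ImageNet head, keep 2048-d features
backbone.eval()

@torch.no_grad()
def extract_features(pil_images):
    """pil_images: list of PIL images from one perspective (e.g. ligule, frontal view)."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Hypothetical usage (train_images, train_labels, test_images are assumed to exist):
# clf = SVC(kernel="linear", probability=True).fit(extract_features(train_images), train_labels)
# scores = clf.predict_proba(extract_features(test_images))  # per-perspective scores for fusion
```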

    Light-induced cell damage in live-cell super-resolution microscopy

    Super-resolution microscopy can unravel previously hidden details of cellular structures but requires high irradiation intensities to use the limited photon budget efficiently. Such high photon densities are likely to induce cellular damage in live-cell experiments. We applied single-molecule localization microscopy conditions and tested the influence of irradiation intensity, illumination mode, wavelength, light dose, temperature and fluorescence labeling on the survival probability of different cell lines 20–24 hours after irradiation. In addition, we measured the microtubule growth speed after irradiation. Photosensitivity increases dramatically at shorter irradiation wavelengths: we observed fixation, plasma membrane permeabilization and cytoskeleton destruction upon irradiation with shorter wavelengths. While cells withstand light intensities of ~1 kW cm−2 at 640 nm for several minutes, the maximum tolerated dose at 405 nm is only ~50 J cm−2, underlining the advantage of red fluorophores for live-cell localization microscopy. We also present strategies to minimize phototoxic factors and maximize the cells' ability to cope with higher irradiation intensities.
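    For the figures quoted above, the relevant relationship is simply light dose (J cm−2) = irradiance (W cm−2) × exposure time (s). The short calculation below, with an assumed 3-minute exposure standing in for "several minutes", shows how drastically the tolerated dose differs between 640 nm and 405 nm.

```python
# Back-of-the-envelope dose arithmetic: dose (J cm^-2) = irradiance (W cm^-2) * time (s).
# The 3-minute exposure is an assumed stand-in for "several minutes" at 640 nm.
irradiance = 1_000.0             # W cm^-2, i.e. ~1 kW cm^-2
dose_640 = irradiance * 3 * 60   # J cm^-2 tolerated at 640 nm over 3 minutes
max_dose_405 = 50.0              # J cm^-2, reported tolerance limit at 405 nm
time_to_limit = max_dose_405 / irradiance
print(f"{dose_640:.0f} J cm^-2 at 640 nm vs a {max_dose_405:.0f} J cm^-2 limit at 405 nm")
print(f"At the same irradiance, the 405 nm limit is reached after {time_to_limit * 1000:.0f} ms")
```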

    Crowd-sourced plant occurrence data provide a reliable description of macroecological gradients

    Deep learning algorithms classify plant species with high accuracy, and smartphone applications leverage this technology to enable users to identify plant species in the field. The question we address here is whether such crowd-sourced data contain substantial macroecological information. In particular, we aim to understand whether we can detect known environmental gradients shaping plant co-occurrences. In this study we analysed 1 million data points collected through the mobile app Flora Incognita between 2018 and 2019 in Germany and compared them with Florkart, which contains plant occurrence data collected by more than 5,000 floristic experts over a 70-year period. The direct comparison of the two data sets reveals that the crowd-sourced data particularly undersample areas of low population density. However, using nonlinear dimensionality reduction we were able to uncover macroecological patterns in both data sets that correspond well to each other. Mean annual temperature, temperature seasonality and wind dynamics, as well as soil water content and soil texture, represent the most important gradients shaping species composition in both data collections. Our analysis describes one way in which automated species identification could soon enable near real-time monitoring of macroecological patterns and their changes, but it also discusses biases that must be carefully considered before crowd-sourced biodiversity data can effectively guide conservation measures.
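    The analysis idea, aggregating point observations into a grid-cell × species matrix and embedding the cells with nonlinear dimensionality reduction, can be sketched as below. The 0.1° grid, the column names and the use of UMAP are assumptions for illustration; the study's exact pipeline may differ.

```python
# Sketch: build a grid-cell x species presence matrix from point observations and
# embed the cells with a nonlinear dimensionality reduction method (UMAP assumed).
import pandas as pd
import umap  # pip install umap-learn

def embed_occurrences(obs: pd.DataFrame, cell_deg: float = 0.1):
    """obs: one row per observation with columns 'lat', 'lon', 'species' (assumed schema)."""
    cells = (obs["lat"] // cell_deg).astype(str) + "_" + (obs["lon"] // cell_deg).astype(str)
    matrix = pd.crosstab(cells, obs["species"]).clip(upper=1)  # presence/absence per cell
    embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(matrix.to_numpy())
    return matrix.index, embedding  # cell ids and their 2-D coordinates

# Colouring the embedded cells by climate variables (e.g. mean annual temperature)
# shows whether known macroecological gradients are recovered from the crowd-sourced data.
```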

    Real‐world evidence on clinical outcomes of people with type 1 diabetes using open‐source and commercial automated insulin dosing systems: A systematic review

    Aims: Several commercial and open-source automated insulin dosing (AID) systems have recently been developed and are now used by an increasing number of people with diabetes (PwD). This systematic review explored the current real-world evidence on the latest available AID systems to help understand their safety and effectiveness. Methods: A systematic review of real-world studies on the effect of commercial and open-source AID system use on clinical outcomes was conducted following a predefined protocol (PROSPERO ID 257354). Results: Of 441 initially identified studies, 21 published between 2018 and 2021 were included: 12 on the Medtronic 670G, one on Tandem Control-IQ, one on Diabeloop DBLG1, two on AndroidAPS, one on OpenAPS, one on Loop, and three comparing various types of AID systems. These studies found that several types of AID systems improve Time-in-Range and haemoglobin A1c (HbA1c) with minimal concerns around severe hypoglycaemia. These improvements were observed in open-source and commercially developed AID systems alike. Conclusions: Commercially developed and open-source AID systems represent effective and safe treatment options for PwD across several age groups and genders. Alongside evidence from randomized clinical trials, real-world studies on AID systems and their effects on glycaemic outcomes are a helpful means of evaluating their safety and effectiveness.
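    Time-in-Range, the primary glycaemic outcome referenced above, is simply the share of CGM readings between 70 and 180 mg/dL. The sketch below computes it alongside the Glucose Management Indicator, a published estimate of HbA1c from mean glucose (GMI % = 3.31 + 0.02392 × mean glucose in mg/dL); the example readings are made up.

```python
# Sketch of the glycaemic outcomes discussed above: Time-in-Range (70-180 mg/dL)
# and an HbA1c estimate via the Glucose Management Indicator formula.
import numpy as np

def time_in_range(glucose_mgdl, low=70, high=180):
    g = np.asarray(glucose_mgdl, dtype=float)
    return float(np.mean((g >= low) & (g <= high)) * 100)  # percent of readings in range

def glucose_management_indicator(glucose_mgdl):
    return 3.31 + 0.02392 * float(np.mean(glucose_mgdl))   # estimated HbA1c in %

readings = [95, 142, 201, 164, 88, 123, 178, 250, 110, 134]  # hypothetical CGM trace (mg/dL)
print(f"Time-in-Range: {time_in_range(readings):.0f}%")
print(f"GMI (est. HbA1c): {glucose_management_indicator(readings):.1f}%")
```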