80 research outputs found

    Plant image identification application demonstrates high accuracy in Northern Europe


    Flowers, leaves or both? How to obtain suitable images for automated plant identification

    Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives and combinations of perspectives contain the most characteristic information and therefore allow for higher identification accuracy. Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual from predefined perspectives (entire plant, flower frontal and lateral view, leaf top and back side view). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and to determine the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. The flower frontal view achieved the highest single-perspective accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species-dependent. Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations comprising at least the frontal and lateral perspectives of flowers and the leaf top view.
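    As a rough illustration of the score-level fusion described above, the sketch below averages the softmax score vectors produced by perspective-specific classifiers and takes the top-1 prediction from the fused scores. The score matrices, perspective names and the simple averaging rule are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Hypothetical per-perspective softmax scores for 5 observations of 101 species;
# in practice these would come from the trained perspective-specific CNNs.
rng = np.random.default_rng(0)
perspectives = {
    "entire_plant": rng.dirichlet(np.ones(101), size=5),
    "flower_front": rng.dirichlet(np.ones(101), size=5),
    "flower_side":  rng.dirichlet(np.ones(101), size=5),
    "leaf_top":     rng.dirichlet(np.ones(101), size=5),
    "leaf_back":    rng.dirichlet(np.ones(101), size=5),
}

def fuse_scores(scores_by_view, subset):
    """Score-level fusion: average the softmax scores of the selected views."""
    return np.mean([scores_by_view[name] for name in subset], axis=0)

def top1_accuracy(fused_scores, true_labels):
    """Fraction of observations whose highest fused score is the true species."""
    return float(np.mean(fused_scores.argmax(axis=1) == true_labels))

labels = np.array([0, 1, 2, 3, 4])  # illustrative ground-truth species indices
fused = fuse_scores(perspectives, ["flower_front", "flower_side", "leaf_top"])
print(top1_accuracy(fused, labels))
```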

    Deep learning in plant phenological research: A systematic literature review

    Climate change represents one of the most critical threats to biodiversity, with far-reaching consequences for species interactions, the functioning of ecosystems, and the assembly of biotic communities. Plant phenology research has gained increasing attention as the timing of periodic events in plants is strongly affected by seasonal and interannual climate variation. Recent technological developments have allowed us to gather invaluable data at a variety of spatial and ecological scales. The feasibility of phenological monitoring today and in the future depends heavily on developing tools capable of efficiently analyzing these enormous amounts of data. Deep Neural Networks learn representations from data with impressive accuracy and have led to significant breakthroughs in, e.g., image processing. This article is the first systematic literature review aiming to thoroughly analyze all primary studies on deep learning approaches in plant phenology research. In a multi-stage process, we selected 24 peer-reviewed studies published in the last five years (2016–2021). After carefully analyzing these studies, we describe the applied methods categorized according to the studied phenological stages, vegetation type, spatial scale, data acquisition and deep learning methods. Furthermore, we identify and discuss research trends and highlight promising future directions. We present a systematic overview of methods previously applied to different tasks that can guide this emerging, complex research field.

    Towards more effective identification keys: A study of people identifying plant species characters

    Accurate species identification is essential for ecological monitoring and biodiversity conservation. Interactive plant identification keys have been considerably improved in recent years, mainly by providing iconic symbols, illustrations, or images for the users, as these keys are also commonly used by people with relatively little plant knowledge. Only a few studies have investigated how well morphological characteristics can be recognized and correctly identified by people, which is ultimately the basis of an identification key's success. This study consists of a systematic evaluation of people's abilities in identifying plant-specific morphological characters. We conducted an online survey in which 484 participants were asked to identify 25 different plant character states on six images showing a plant from different perspectives. We found that survey participants correctly identified 79% of the plant characters, with botanical novices with little or no previous experience in plant identification performing slightly worse than experienced botanists. We also found that flower characters are more often correctly identified than leaf characters, and that characters with more states resulted in higher identification errors. Additionally, the longer the time a participant needed for answering, the higher the probability of a wrong answer. Understanding what influences users' plant character identification abilities can improve the development of interactive identification keys, for example, by designing keys that adapt to novices as well as experts. Furthermore, our study can act as a blueprint for the empirical evaluation of identification keys.
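    The reported link between longer answering times and a higher probability of a wrong answer can be made concrete with a simple logistic regression. The sketch below uses purely synthetic data and scikit-learn; it illustrates the type of analysis, not the authors' actual statistical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely synthetic data: answering time in seconds and whether the answer was wrong.
rng = np.random.default_rng(1)
time_s = rng.uniform(5, 120, size=500)
p_wrong = 1 / (1 + np.exp(-(0.03 * time_s - 2.5)))  # assumed time-error relationship
wrong = rng.binomial(1, p_wrong)

# Fit error probability as a function of answering time.
model = LogisticRegression().fit(time_s.reshape(-1, 1), wrong)
print("log-odds change per extra second:", model.coef_[0][0])
print("P(wrong | 30 s):", model.predict_proba([[30.0]])[0, 1])
print("P(wrong | 90 s):", model.predict_proba([[90.0]])[0, 1])
```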

    Going for 2D or 3D? Investigating various machine learning approaches for peach variety identification

    Machine learning-based pattern recognition methods are about to revolutionize the farming sector. For breeding and cultivation purposes, the identification of plant varieties is a particularly important problem that involves specific challenges for the different crop species. In this contribution, we consider the problem of peach variety identification, for which alternatives to DNA-based analysis are being sought. While a traditional procedure would suggest using manually designed shape descriptors as the basis for classification, the technical developments of the last decade have opened up possibilities for fully automated approaches, either based on 3D scanning technology or by employing deep learning methods for 2D image classification. In our feasibility study, we investigate the potential of various machine learning approaches with a focus on the comparison of methods based on 2D images and 3D scans. We provide and discuss first results, paving the way for future use of the methods in the field.
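    The "traditional procedure" mentioned above, manually designed shape descriptors fed to a classifier, can be sketched as follows. The ellipse silhouettes, Hu-moment descriptor and SVM (via OpenCV and scikit-learn) are illustrative assumptions, not the pipeline actually evaluated in the study.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def shape_descriptor(mask: np.ndarray) -> np.ndarray:
    """Hu-moment shape descriptor of a binary fruit silhouette (uint8, 0/255)."""
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # log-scale for comparability

def silhouette(a: int, b: int) -> np.ndarray:
    """Filled ellipse standing in for a segmented fruit outline."""
    img = np.zeros((128, 128), np.uint8)
    cv2.ellipse(img, (64, 64), (a, b), 0, 0, 360, 255, -1)
    return img

# Two made-up "varieties": roughly round fruit vs. clearly elongated fruit.
X = np.stack([shape_descriptor(silhouette(40 + i, 40 - i)) for i in range(5)] +
             [shape_descriptor(silhouette(55 + i, 25 - i)) for i in range(5)])
y = np.array([0] * 5 + [1] * 5)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([shape_descriptor(silhouette(56, 24))]))  # expect variety 1
```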

    Bright ligand-activatable fluorescent protein for high-quality multicolor live-cell super-resolution microscopy

    We introduce UnaG as a green-to-dark photoswitching fluorescent protein capable of high-quality super-resolution imaging with photon numbers equivalent to the brightest photoswitchable red protein. UnaG only fluoresces upon binding of a fluorogenic metabolite, bilirubin, enabling UV-free reversible photoswitching with easily controllable kinetics and low background under epi-illumination. The on- and off-switching rates are controlled by the concentration of the ligand and the excitation light intensity, respectively, and dissolved oxygen also promotes off-switching. The photo-oxidation reaction mechanism of bilirubin in UnaG suggests that the lack of a ligand-protein covalent bond allows the oxidized ligand to detach from the protein, emptying the binding cavity for rebinding to a fresh ligand molecule. We demonstrate super-resolution single-molecule localization imaging of various subcellular structures genetically encoded with UnaG, which enables facile labeling and simultaneous multicolor imaging of live cells. UnaG has the promise of becoming a default protein for high-performance super-resolution imaging. Photoconvertible proteins occupy two color channels, thereby limiting multicolour localisation microscopy applications. Here the authors present UnaG, a new green-to-dark photoswitching fluorescent protein for super-resolution imaging, whose activation is based on noncovalent binding with bilirubin.

    Machine learning for image based species identification

    Accurate species identification is the basis for all aspects of taxonomic research and is an essential component of workflows in biological research. Biologists are asking for more efficient methods to meet the identification demand. Smart mobile devices, digital cameras, and the mass digitisation of natural history collections have led to an explosion of openly available image data depicting living organisms. This rapid increase in biological image data, in combination with modern machine learning methods such as deep learning, offers tremendous opportunities for automated species identification. In this paper, we focus on deep learning neural networks as a technology that has enabled breakthroughs in automated species identification in the last 2 years. In order to stimulate more work in this direction, we provide a brief overview of machine learning frameworks applicable to the species identification problem. We review selected deep learning approaches for image-based species identification and introduce publicly available applications. Ultimately, this article aims to provide insights into the current state-of-the-art in automated identification and to serve as a starting point for researchers willing to apply novel machine learning techniques in their biological studies. While modern machine learning approaches are only slowly finding their way into the field of species identification, we argue that we are going to see a proliferation of these techniques being applied to the problem in the future. Artificial intelligence systems will provide alternative tools for taxonomic identification in the near future.
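    A minimal example of the deep-learning route this paper reviews is transfer learning with a pretrained CNN. The sketch below assumes PyTorch/torchvision and an ImageFolder-style directory of species images; the directory name, backbone and training settings are illustrative and not taken from the paper.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# "species_images/" is an assumed ImageFolder layout: one sub-directory per species.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("species_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Pretrained backbone with the classification head replaced for the new species set;
# only the new head is trained here.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad_(False)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single illustrative pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```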

    Plant species identification using computer vision: A systematic literature review

    Species knowledge is essential for protecting biodiversity. The identification of plants by conventional keys is complex, time-consuming, and, due to the use of specific botanical terms, frustrating for non-experts. This creates a hard-to-overcome hurdle for novices interested in acquiring species knowledge. Today, there is an increasing interest in automating the process of species identification. The availability and ubiquity of relevant technologies, such as digital cameras and mobile devices, remote access to databases, and new techniques in image processing and pattern recognition have let the idea of automated species identification become reality. This paper is the first systematic literature review with the aim of a thorough analysis and comparison of primary studies on computer vision approaches for plant species identification. We identified 120 peer-reviewed studies, selected through a multi-stage process, published in the last 10 years (2005–2015). After a careful analysis of these studies, we describe the applied methods categorized according to the studied plant organ and the studied features, i.e., shape, texture, color, margin, and vein structure. Furthermore, we compare methods based on the classification accuracy achieved on publicly available datasets. Our results are relevant to researchers in ecology as well as in computer vision for their ongoing research. The systematic and concise overview will also be helpful for beginners in those research fields, as they can use the comparative analyses of applied methods as a guide in this complex activity.
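    The hand-crafted feature families the review categorizes, shape, texture, and color, can be illustrated with a small extraction routine. The sketch below uses scikit-image on a synthetic leaf-like image; the chosen descriptors and the segmentation threshold are assumptions for illustration only.

```python
import numpy as np
from skimage import color, feature, measure

def leaf_features(rgb: np.ndarray) -> dict:
    """Toy versions of hand-crafted feature families: shape (region properties),
    texture (local binary pattern histogram) and colour (mean RGB in the leaf mask)."""
    gray = color.rgb2gray(rgb)
    mask = gray < 0.9                      # assumes a dark leaf on a bright background
    shape = measure.regionprops(mask.astype(int))[0]

    lbp = feature.local_binary_pattern((gray * 255).astype(np.uint8),
                                       P=8, R=1, method="uniform")
    texture_hist, _ = np.histogram(lbp, bins=10, density=True)

    return {
        "shape_eccentricity": shape.eccentricity,
        "shape_solidity": shape.solidity,
        "texture_lbp": texture_hist,
        "mean_color": rgb[mask].mean(axis=0),
    }

# Synthetic stand-in for a scanned leaf: a green rectangle on a white background.
img = np.ones((100, 100, 3))
img[20:80, 30:70] = [0.2, 0.6, 0.2]
print(leaf_features(img))
```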