
    Flowers, leaves or both? How to obtain suitable images for automated plant identification

    Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives, and which combinations of perspectives, contain the most characteristic information and therefore allow for higher identification accuracy. Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual from predefined perspectives (entire plant, flower frontal and lateral views, leaf top and back side views). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and determined the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. Flower frontal view achieved the highest single-perspective accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent. Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations, comprising at least the frontal and lateral perspectives of flowers and the leaf top view.
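
    The score-level fusion step described above is not published as code here; the sketch below shows one minimal way to realise it, assuming each perspective-specific CNN emits a softmax probability vector over the species. The function name fuse_scores and the optional weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_scores(per_view_probs, weights=None):
    """Score-level fusion of per-perspective CNN outputs.

    per_view_probs: list of 1-D softmax vectors, one per captured perspective
                    (entire plant, flower frontal, flower lateral, ...).
    weights:        optional per-perspective weights; uniform if omitted.
    Returns the index of the predicted species and the fused score vector.
    """
    probs = np.stack(per_view_probs)                 # (n_views, n_species)
    if weights is None:
        weights = np.ones(probs.shape[0])
    weights = np.asarray(weights, dtype=float)
    fused = (weights[:, None] * probs).sum(axis=0) / weights.sum()
    return int(fused.argmax()), fused
```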

    PlantNet Participation at LifeCLEF2014 Plant Identification Task

    This paper describes the participation of Inria, within the Pl@ntNet project, in the LifeCLEF 2014 plant identification task. The aim of the task was to produce a list of relevant species for each plant observation in a test dataset, given a training dataset. Each plant observation contains several annotated pictures with organ/view tags: Flower, Leaf, Fruit, Stem, Branch, Entire, Scan (exclusively of leaves). Our system treated each organ/view category independently, and a late hierarchical fusion was then used to combine the results of visual content analysis, from the most local level of analysis in pictures up to the highest level, the plant observation. For photographs of flowers, leaves, fruits, stems, branches and entire views of plants, a large-scale matching approach over local features extracted under different spatial constraints was used. For scans, the method combines the large-scale matching approach with shape descriptors and geometric parameters computed on the shape boundary. Several fusion methods were then evaluated across the four submitted runs in order to hierarchically combine the local responses into a final response at the plant observation level. The four submitted runs obtained good results, placing 4th to 7th among the 27 runs submitted by 10 participating teams.
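
    The late hierarchical fusion is described only at a high level; the sketch below illustrates the general pattern under stated assumptions (mean pooling from pictures to organ level, weighted pooling from organs to the observation). The pooling rules and weights are placeholders, not the paper's exact scheme.

```python
import numpy as np

def fuse_observation(organ_scores, organ_weights=None):
    """Hierarchical late fusion: pictures -> organ level -> observation level.

    organ_scores:  dict mapping an organ/view tag ('Flower', 'Leaf', ...) to a
                   list of per-picture species-score vectors for one observation.
    organ_weights: optional dict mapping organ tags to reliability weights.
    Returns the observation-level species-score vector.
    """
    organ_weights = organ_weights or {}
    fused, total_w = None, 0.0
    for organ, picture_scores in organ_scores.items():
        organ_level = np.mean(picture_scores, axis=0)   # pool the pictures
        w = organ_weights.get(organ, 1.0)
        fused = w * organ_level if fused is None else fused + w * organ_level
        total_w += w
    return fused / total_w
```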

    Late Information Fusion for Multi-modality Plant Species Identification

    This article presents the participation of the ReVeS project in the ImageCLEF 2013 Plant Identification challenge. Since our primary target is tree leaves, some extra effort had to be made this year to process images containing other plant organs. The proposed method tries to benefit from the presence of multiple sources of information about the same individual through the introduction of a late fusion system based on the decisions of classifiers for the different modalities. It also presents a way to incorporate geographical information into the determination of the species, by estimating each species' plausibility at the considered location. While maintaining its performance on leaf images (ranking 3rd on natural images and 4th on plain backgrounds), our team performed honorably on the brand-new modalities, finishing in 6th position.
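
    One simple way to realise the geographic re-weighting described above is a Bayesian-style multiplication of visual scores by a per-species location prior; the sketch below assumes such a prior is available and is illustrative only, since the article's exact plausibility estimate is not reproduced here.

```python
import numpy as np

def geo_weighted_scores(visual_probs, geo_plausibility, eps=1e-6):
    """Combine visual classifier scores with location-based plausibility.

    visual_probs:     (n_species,) fused probabilities from the visual classifiers.
    geo_plausibility: (n_species,) prior in [0, 1] that each species occurs at
                      the observation's location.
    eps:              floor to avoid zeroing out species with no geo record.
    """
    posterior = visual_probs * np.maximum(geo_plausibility, eps)
    return posterior / posterior.sum()
```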

    The ImageCLEF 2013 Plant Identification Task

    The ImageCLEF plant identification task provides a testbed for a system-oriented evaluation of plant identification, covering about 250 species of trees and herbaceous plants, based on detailed views of leaves, flowers, fruits, stems and bark, as well as entire views of the plants. Two types of image content are considered: SheetAsBackground, which contains only leaves in front of a generally uniform white background, and NaturalBackground, which contains the five kinds of detailed views taken under unconstrained conditions, photographed directly on the plant. The main originality of this data is that it was specifically built through a citizen science initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents the resources and assessments of the task in more detail, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of twelve groups from nine countries and thirty-three submitted runs, involving distinct and original methods, this third year of the task confirms the image retrieval community's interest in biodiversity and botany, and highlights further challenging studies in plant identification.

    LifeCLEF Plant Identification Task 2015

    The LifeCLEF plant identification challenge aims at evaluating plant identification methods and systems at a very large scale, close to the conditions of a real-world biodiversity monitoring scenario. The 2015 evaluation was conducted on a set of more than 100K images illustrating 1,000 plant species living in Western Europe. The main originality of this dataset is that it was built through a large-scale participatory sensing platform initiated in 2011, which now involves tens of thousands of contributors. This overview presents the resources and assessments of the challenge in more detail, summarizes the approaches and systems employed by the participating research groups, and provides an analysis of the main outcomes.

    Image-based automated recognition of 31 Poaceae species: the most relevant perspectives

    Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups considered difficult to identify because they lack obvious visual characteristics. Poaceae represent an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence, while the others show vegetative parts of the plant: the collar region with the ligule, the adaxial and abaxial sides of the leaf, and culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extract feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains, classify them using four state-of-the-art classification algorithms, and combine the perspectives via score-level fusion. In order to evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations either comprising inflorescences or not. We find that for a fusion of all six perspectives, using the best combination of feature extraction CNN and classifier, an accuracy of 96.1% can be achieved. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be recognized automatically with high accuracy, as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
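
    The pipeline above (pretrained CNNs as fixed feature extractors, a classical classifier per perspective, score-level fusion of the resulting probabilities) can be sketched as follows. The model choices and names here are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def train_per_view(features_by_view, labels):
    """Fit one probabilistic classifier per perspective.

    features_by_view: dict mapping a view name ('ligule', 'adaxial leaf', ...)
                      to an (n_obs, n_features) matrix of CNN features.
    """
    return {view: SVC(probability=True).fit(X, labels)
            for view, X in features_by_view.items()}

def predict_fused(models, features_by_view):
    """Score-level fusion: average the per-view class probabilities."""
    probs = [models[view].predict_proba(X)
             for view, X in features_by_view.items()]
    fused = np.mean(probs, axis=0)        # (n_obs, n_species)
    return fused.argmax(axis=1)
```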

    Inria's participation at ImageCLEF 2013 Plant Identification Task

    This paper describes the participation of Inria, within the Pl@ntNet project, at the ImageCLEF 2013 plant identification task. For the SheetAsBackground category (scans or photographs of leaves on a uniform background), the submitted runs used a multiscale triangle-based approach, either alone or combined with other shape-based descriptors. For the NaturalBackground category (unconstrained photographs of leaves, flowers, fruits, stems, ...), the four submitted runs used local features extracted under different geometric constraints. Three of them were based on large-scale matching of individual local features, while the last one used a Fisher vector representation. Metadata such as the flowering date and/or the plant identifier were successfully combined with the visual content. Overall, the proposed methods performed very well across all categories and sub-categories.
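
    The multiscale triangle-based shape description is only named above; the sketch below gives one common variant of the idea under explicit assumptions: for each contour point, the signed area of the triangle formed with neighbours at increasing offsets serves as a curvature-like, scale-aware signature. This is an illustration in the spirit of such descriptors, not Inria's exact method.

```python
import numpy as np

def triangle_signature(contour, scales=(2, 4, 8, 16)):
    """Multiscale triangle descriptor for a closed leaf contour.

    contour: (n, 2) array of ordered boundary points.
    Returns an (n, len(scales)) matrix of signed triangle areas,
    one column per scale.
    """
    n = len(contour)
    idx = np.arange(n)
    signature = np.empty((n, len(scales)))
    for j, s in enumerate(scales):
        prev_pt = contour[(idx - s) % n]
        next_pt = contour[(idx + s) % n]
        v1 = contour - prev_pt                     # edge into each point
        v2 = next_pt - contour                     # edge out of each point
        # signed triangle area via the 2-D cross product
        signature[:, j] = 0.5 * (v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
    return signature
```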