
    A comparative study of fine-grained classification methods in the context of the LifeCLEF plant identification challenge 2015

    This paper describes the participation of Inria in the plant identification task of the LifeCLEF 2015 challenge. The aim of the task was to produce a list of relevant species for a large set of plant observations related to 1000 species of trees, herbs and ferns living in Western Europe. Each plant observation contained several annotated pictures with organ/view tags: Flower, Leaf, Fruit, Stem, Branch, Entire, Scan (exclusively of leaf). To address this challenge, we experimented with two popular families of classification techniques: convolutional neural networks (CNNs) on one side and Fisher-vector-based discriminant models on the other. Our results show that the CNN approach achieves much better performance than the Fisher vectors. Moreover, we show that the fusion of both techniques, based on Bayesian inference using the confusion matrix of each classifier, did not improve on the results of the CNN alone.
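The confusion-matrix-based Bayesian fusion the abstract mentions could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of hard label predictions, and the Laplace smoothing are all assumptions.

```python
import numpy as np

def bayes_fuse(confusions, predictions, prior=None):
    """Fuse hard label predictions from several classifiers via Bayes' rule.

    confusions[k][i, j] = validation-set count of samples with true class i
    that classifier k predicted as class j.
    predictions[k] = class index predicted by classifier k for the query.
    Returns a posterior distribution over the true class.
    """
    n_classes = confusions[0].shape[0]
    posterior = (np.full(n_classes, 1.0 / n_classes)
                 if prior is None else np.asarray(prior, dtype=float).copy())
    for cm, pred in zip(confusions, predictions):
        # P(classifier outputs `pred` | true class is i), Laplace-smoothed
        likelihood = (cm[:, pred] + 1.0) / (cm.sum(axis=1) + n_classes)
        posterior = posterior * likelihood
    return posterior / posterior.sum()
```

A classifier whose confusion matrix shows it is usually right pulls the posterior strongly toward its prediction, while a noisy classifier contributes a nearly flat likelihood.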

    Open-set plant identification using an ensemble of deep convolutional neural networks

    Open-set recognition, a challenging problem in computer vision, is concerned with identification or verification tasks where queries may belong to unknown classes. This work describes a fine-grained plant identification system consisting of an ensemble of deep convolutional neural networks within an open-set identification framework. Two well-known deep learning architectures, VGGNet and GoogLeNet, pretrained on the object recognition dataset of ILSVRC 2012, are fine-tuned using the plant dataset of LifeCLEF 2015. Moreover, GoogLeNet is fine-tuned using plant and non-plant images for rejecting samples from non-plant classes. Our systems have been evaluated on the test dataset of PlantCLEF 2016 by the campaign organizers, and our best proposed model achieved an official score of 0.738 in terms of mean average precision, while the best official score is 0.742.
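One common way to realize the open-set rejection described above is to average the ensemble members' softmax scores and reject queries whose best fused score falls below a threshold. The sketch below assumes this thresholding scheme; the paper's actual rejection mechanism (a network fine-tuned on non-plant images) is not reproduced here.

```python
import numpy as np

def open_set_predict(member_scores, threshold=0.5):
    """Ensemble open-set prediction by softmax averaging.

    member_scores: list of 1-D softmax vectors, one per network.
    Returns the predicted class index, or -1 for 'unknown' queries
    whose fused confidence is below the threshold.
    """
    fused = np.mean(member_scores, axis=0)   # score-level fusion
    best = int(np.argmax(fused))
    return best if fused[best] >= threshold else -1
```

The threshold trades off misclassifying unknowns against rejecting genuine in-set queries, and would normally be tuned on a validation set containing non-plant images.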

    Flowers, leaves or both? How to obtain suitable images for automated plant identification

    Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives and combinations of perspectives contain more characteristic information and therefore allow for higher identification accuracy.

    Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual from predefined perspectives (entire plant, flower frontal and lateral view, leaf top and back side view). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and determine the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. Flower frontal view achieved the highest single-perspective accuracy (88%). Fusing flower frontal, flower lateral and leaf top views yielded the most reasonable compromise with respect to acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent.

    Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations, comprising at least the frontal and lateral perspective of flowers and the leaf top view.
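The score-level fusion used to combine perspectives could look roughly like the sketch below: sum (or equivalently average) the per-view softmax vectors and take the argmax. The function name and the plain unweighted sum are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def fuse_perspectives(scores_by_view):
    """Score-level fusion across perspectives of one observation.

    scores_by_view: dict mapping a view name (e.g. 'flower_frontal',
    'leaf_top') to that view's 1-D softmax score vector over species.
    Returns the species index with the highest summed score.
    """
    fused = np.sum(list(scores_by_view.values()), axis=0)
    return int(np.argmax(fused))
```

A view whose network is confident dominates the sum, so an ambiguous leaf image can be outvoted by a distinctive flower image from the same observation.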