    Recognition of Ginger Seed Growth Stages Using a Two-Stage Deep Learning Approach

    Monitoring the growth of ginger seed relies on human experts due to the lack of salient features for effective recognition. In this study, a region-based convolutional neural network (R-CNN) hybrid detector-classifier model is developed to address the natural variations in ginger sprouts, enabling automatic recognition of three growth stages. Across 1,746 images containing 2,277 sprout instances, the model predictions revealed significant confusion between growth stages, aligning with human perception during data annotation, as indicated by Cohen’s Kappa scores. The hybrid detector-classifier model achieved an 85.50% mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5, tested on 402 images containing 561 sprout instances, with an inference time of 0.383 seconds per image. The results confirm the potential of the hybrid model as an alternative to current manual operations. This study serves as a practical case for extension to other applications within the plant phenotyping community.
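
    The abstract describes the pipeline only at a high level, so the following Python sketch is just a minimal illustration of the two-stage idea, assuming a torchvision Faster R-CNN as a stand-in sprout detector and an untrained ResNet-18 as the three-class growth-stage classifier; the score threshold, crop size, and all other specifics are assumptions, not the paper's settings.

        import torch
        from torchvision.models import resnet18
        from torchvision.models.detection import fasterrcnn_resnet50_fpn
        from torchvision.transforms.functional import resize

        # Stage 1: a generic region-based detector proposes sprout boxes
        # (a stand-in; the paper's actual detector and weights are not given).
        detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

        # Stage 2: a classifier assigns each detected sprout to one of three
        # growth stages. Untrained here; in practice it would be fine-tuned
        # on cropped sprout images.
        classifier = resnet18(num_classes=3).eval()

        def recognize_growth_stages(image, score_thresh=0.5):
            """image: float tensor of shape (3, H, W) with values in [0, 1]."""
            results = []
            with torch.no_grad():
                detections = detector([image])[0]
                for box, score in zip(detections["boxes"], detections["scores"]):
                    if score < score_thresh:
                        continue
                    x1, y1, x2, y2 = box.round().int().tolist()
                    if x2 <= x1 or y2 <= y1:  # skip degenerate boxes
                        continue
                    crop = resize(image[:, y1:y2, x1:x2], [224, 224])
                    stage = classifier(crop.unsqueeze(0)).argmax(dim=1).item()
                    results.append({"box": box.tolist(), "stage": stage,
                                    "score": score.item()})
            return results

        print(recognize_growth_stages(torch.rand(3, 480, 640)))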

    Hybrid image representation methods for automatic image annotation: a survey

    In most automatic image annotation systems, images are represented by low-level features using either global or local methods. Global methods treat the entire image as a single unit. Local methods divide images either into blocks, adopting fixed-size sub-image blocks as sub-units, or into regions, using segmented regions as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is beneficial for annotating images. In this paper, we survey automatic image annotation techniques from the perspective of feature extraction and, to complement existing surveys in the literature, focus on an emerging class of methods: hybrid methods that combine global and local features for image representation.
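
    To make the global/local distinction concrete, here is a minimal NumPy sketch of one possible hybrid representation, concatenating a single whole-image colour histogram (the global part) with per-block histograms over a fixed grid (the local part); the bin counts and grid size are arbitrary illustrative choices, not taken from any surveyed method.

        import numpy as np

        def global_feature(img, bins=8):
            # Global method: one colour histogram over the entire image.
            hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            return hist.ravel() / hist.sum()

        def local_features(img, grid=4, bins=4):
            # Local method: fixed-size blocks, one histogram per block.
            h, w, _ = img.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    block = img[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid]
                    hist, _ = np.histogramdd(block.reshape(-1, 3),
                                             bins=(bins,) * 3,
                                             range=((0, 256),) * 3)
                    feats.append(hist.ravel() / max(hist.sum(), 1))
            return np.concatenate(feats)

        def hybrid_representation(img):
            # Hybrid method: concatenate the global and local descriptors.
            return np.concatenate([global_feature(img), local_features(img)])

        img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
        print(hybrid_representation(img).shape)  # 8**3 + 16 * 4**3 = (1536,)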

    Automatic Annotation of Images from the Practitioner Perspective

    This paper describes an ongoing project which seeks to contribute to a wider understanding of the realities of bridging the semantic gap in visual image retrieval. A comprehensive survey of the means by which real image retrieval transactions are realised is being undertaken. An image taxonomy has been developed to provide a framework within which account may be taken of the plurality of image types, user needs and forms of textual metadata. Significant limitations exhibited by current automatic annotation techniques are discussed, and ontologically supported automatic content annotation is briefly considered as a possible means of mitigating these limitations.

    MIRACLE’s Naive Approach to Medical Images Annotation

    One of the tasks proposed in the ImageCLEF 2005 campaign was the Automatic Annotation Task. The objective is to classify a given set of 1,000 previously unseen medical (radiological) images into 57 predefined categories covering different medical pathologies. A set of 9,000 classified training images is provided and can be used in any way to train a classifier. The Automatic Annotation Task uses no textual information, only image content. This paper describes our participation in the automatic annotation task of ImageCLEF 2005.
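
    The abstract does not spell out the classifier used, so the sketch below only illustrates the task setup on synthetic stand-in data: a deliberately naive, content-only baseline that classifies each of 1,000 unseen images into one of 57 categories using nearest neighbours over raw pixel intensities. All arrays here are randomly generated placeholders.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        # Placeholder data: 9,000 training images, each reduced to a 32x32
        # grey-level vector and labelled with one of 57 categories, plus
        # 1,000 previously unseen test images.
        X_train = rng.random((9000, 32 * 32))
        y_train = rng.integers(0, 57, size=9000)
        X_test = rng.random((1000, 32 * 32))

        # A naive content-only baseline: nearest neighbours on raw pixel
        # intensities; no textual information is used at any point.
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(X_train, y_train)
        predicted_categories = clf.predict(X_test)
        print(predicted_categories[:10])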

    Extending the Foundational Model of Anatomy with Automatically Acquired Spatial Relations

    Formal ontologies have made a significant impact in bioscience over the last ten years. Among them, the Foundational Model of Anatomy Ontology (FMA) is the most comprehensive model for the spatio-structural representation of human anatomy. In the research project MEDICO we use the FMA as our main source of background knowledge about human anatomy. Our ultimate goals are to use spatial knowledge from the FMA (1) to improve automatic parsing algorithms for 3D volume data sets generated by Computed Tomography and Magnetic Resonance Imaging, and (2) to generate semantic annotations using FMA concepts to allow semantic search over medical image repositories. We argue that, in this context, more spatial relation instances are needed than are currently available in the FMA. In this publication we present a technique for the automatic inductive acquisition of spatial relation instances by generalizing from expert-annotated volume datasets.
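
    The abstract leaves the acquisition technique itself unspecified; the sketch below shows one plausible reading with invented data: each pair of expert-annotated organ centroids yields a candidate relation along its dominant displacement axis, and an instance is kept only if it generalizes, i.e. holds in every annotated volume. The axis conventions, organ names, and coordinates are all assumptions.

        from itertools import permutations

        # Hypothetical expert annotations: organ -> (x, y, z) centroid,
        # one dict per annotated CT/MRI volume. Values are invented.
        volumes = [
            {"liver": (120, 80, 200), "right_kidney": (110, 90, 150),
             "left_kidney": (60, 92, 152)},
            {"liver": (118, 78, 195), "right_kidney": (112, 88, 148),
             "left_kidney": (58, 90, 149)},
        ]

        # Assumed axis conventions: +x = patient right, +y = posterior,
        # +z = superior.
        AXIS_RELATIONS = (("right_of", "left_of"),
                          ("posterior_to", "anterior_to"),
                          ("superior_to", "inferior_to"))

        def relation(a, b):
            """Relation of the organ at centroid a to the organ at centroid b,
            taken along the axis with the largest displacement."""
            deltas = [a[i] - b[i] for i in range(3)]
            axis = max(range(3), key=lambda i: abs(deltas[i]))
            return AXIS_RELATIONS[axis][deltas[axis] < 0]

        # Keep only relation instances that hold in every volume, i.e. that
        # generalize across the expert-annotated datasets.
        organs = volumes[0].keys()
        for a, b in permutations(organs, 2):
            rels = {relation(vol[a], vol[b]) for vol in volumes}
            if len(rels) == 1:
                print(a, rels.pop(), b)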