10 research outputs found

    Decision Support System for Bat Identification using Random Forest and C5.0

    Morphometric and morphological bat identification is a conventional method that requires precision, significant experience, and encyclopedic knowledge. The morphological features of one species may resemble those of another, which causes problems for beginners working in bat taxonomy. The purpose of this study was to implement and analyse the random forest and C5.0 algorithms in order to determine the characteristics of bat species and to identify them. It also aimed to develop a decision support system (DSS), based on the resulting model, for finding the characteristics of and identifying bat species. The study showed that the C5.0 algorithm prevailed and was selected, with a mean accuracy of 98.98%, while the mean accuracy of the random forest was 97.26%. As many as 50 rules were implemented in the DSS to identify common and rare bat species from morphometric and morphological attributes.
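    A DSS built from extracted decision-tree rules can be sketched as a chain of threshold checks on morphometric and morphological attributes. The attribute names, thresholds, and genera below are invented for illustration and are not the study's actual 50 rules:

```python
# Minimal sketch of rule-based species identification, as a DSS might
# encode rules extracted from a C5.0 decision tree. All attribute names,
# thresholds, and genera here are illustrative, not the study's rules.
def identify_bat(forearm_mm: float, ear_mm: float, noseleaf: bool) -> str:
    # Each branch mirrors one extracted decision rule.
    if noseleaf:
        return "Hipposideros sp." if forearm_mm < 50 else "Rhinolophus sp."
    if ear_mm > 25:
        return "Plecotus sp."
    return "Myotis sp."

print(identify_bat(45.0, 15.0, True))   # → Hipposideros sp.
print(identify_bat(60.0, 30.0, False))  # → Plecotus sp.
```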

    Bird Species Categorization Using Pose Normalized Deep Convolutional Nets

    We propose an architecture for fine-grained visual categorization that approaches expert human performance in the classification of bird species. Our architecture first computes an estimate of the object's pose; this is used to compute local image features which are, in turn, used for classification. The features are computed by applying deep convolutional nets to image patches that are located and normalized by the pose. We perform an empirical study of a number of pose normalization schemes, including an investigation of higher order geometric warping functions. We propose a novel graph-based clustering algorithm for learning a compact pose normalization space. We perform a detailed investigation of state-of-the-art deep convolutional feature implementations and fine-tuning feature learning for fine-grained classification. We observe that a model that integrates lower-level feature layers with pose-normalized extraction routines and higher-level feature layers with unaligned image features works best. Our experiments advance state-of-the-art performance on bird species recognition, with a large improvement of correct classification rates over previous methods (75% vs. 55-65%).
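    The core of pose normalization is warping a part patch into a canonical frame using predicted keypoints. A minimal sketch of the simplest such warp, a similarity transform computed from two keypoints (the keypoint choice and coordinates are assumptions, not the paper's scheme):

```python
# Hedged sketch: a similarity transform (scale, rotation, translation)
# that maps one keypoint pair onto a canonical pair, the simplest of the
# pose-normalization warps the paper studies. Coordinates are invented.
import math

def similarity_transform(src, dst):
    """Return (scale, angle, (tx, ty)) mapping the src point pair onto dst."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    src_vec = (x2 - x1, y2 - y1)
    dst_vec = (u2 - u1, v2 - v1)
    scale = math.hypot(*dst_vec) / math.hypot(*src_vec)
    angle = math.atan2(dst_vec[1], dst_vec[0]) - math.atan2(src_vec[1], src_vec[0])
    # Translation carries the first source point onto the first target point.
    ca, sa = math.cos(angle), math.sin(angle)
    tx = u1 - scale * (ca * x1 - sa * y1)
    ty = v1 - scale * (sa * x1 + ca * y1)
    return scale, angle, (tx, ty)

# e.g. predicted eye/beak keypoints mapped to a canonical horizontal pair
s, a, t = similarity_transform([(10, 10), (30, 10)], [(0, 0), (40, 0)])
print(s, a, t)  # scale 2.0, angle 0.0, translation (-20.0, -20.0)
```

    Higher-order warps (affine, thin-plate splines) generalize this by fitting more keypoint correspondences.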

    Deep Learning based Automatic Multi-Class Wild Pest Monitoring Approach using Hybrid Global and Local Activated Features with Stationary Trap Devices

    Specialized control of pests and diseases has been a high-priority issue for the agriculture industry in many countries. On account of automation and cost-effectiveness, image-based pest recognition systems are widely used in practical crop protection applications. However, owing to weak handcrafted features, current image-analysis approaches achieve low accuracy and poor robustness in practical large-scale multi-class pest detection and recognition. To tackle this problem, this paper proposes a novel deep learning based automatic approach using hybrid global and local activated features for pest monitoring. In the presented method, we exploit the global information in feature maps to build a Global activated Feature Pyramid Network (GaFPN) that extracts pests' highly discriminative features across various scales over both depth and position levels, making changes in depth- or spatially-sensitive features in pest images more visible during downsampling. Next, an improved pest localization module named Local activated Region Proposal Network (LaRPN) is proposed to find precise pest positions by augmenting contextualized and attentional information for feature completion and enhancement at the local level. The approach is evaluated on our 7-year large-scale pest dataset containing 88.6K images (16 types of pests) with 582.1K manually labelled pest objects. The experimental results show that our solution achieves over 74.24% mAP in industrial circumstances, which outweighs two other state-of-the-art methods: Faster R-CNN [12] with mAP up to 70% and FPN [13] with mAP up to 72%. Our code and dataset will be made publicly available.
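    The mAP figures quoted here and in the following abstracts are means of per-class average precision (AP). A minimal sketch of AP for one class, computed as the area under the precision-recall curve of ranked detections (the detection scores below are invented):

```python
# Hedged sketch: average precision (AP) for a single class, the per-class
# building block of the mAP numbers reported above. A detection counts as
# a true positive when it matches an unclaimed ground-truth box (matching
# logic omitted here); scores and hits below are illustrative.
def average_precision(scored_hits, num_gt):
    """scored_hits: list of (score, is_true_positive); num_gt: #ground truths."""
    scored_hits = sorted(scored_hits, key=lambda p: -p[0])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, hit in scored_hits:
        tp += hit
        fp += not hit
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # area under the P-R curve
        prev_recall = recall
    return ap

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True)]
print(round(average_precision(dets, 4), 3))  # → 0.604
```

    mAP is then the mean of this quantity over all pest classes (16 in this dataset).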

    Identification of species and geographical strains of Sitophilus oryzae and Sitophilus zeamais using the visible/near-infrared hyperspectral imaging technique

    BACKGROUND: Identifying stored-product insects is essential for granary management. Automated, computer-based classification methods are rapidly developing in many areas. A hyperspectral imaging technique could potentially be developed to identify stored-product insect species and geographical strains. This study tested and adapted the technique using four geographical strains of each of two insect species, the rice weevil and maize weevil, to collect and analyse the resultant hyperspectral data. RESULTS: Three characteristic images that corresponded to the dominant wavelengths, 505, 659 and 955 nm, were selected by multivariate image analysis. Each image was processed, and 22 morphological and textural features from regions of interest were extracted as the inputs for an identification model. We found the backpropagation neural network model to be the superior method for distinguishing between the insect species and geographical strains. The overall recognition rates of the classification model for insect species were 100 and 98.13% for the calibration and prediction sets respectively, while the rates of the model for geographical strains were 94.17 and 86.88% respectively. CONCLUSION: This study has demonstrated that hyperspectral imaging, together with the appropriate recognition method, could provide a potential instrument for identifying insects and could become a useful tool for identification of Sitophilus oryzae and Sitophilus zeamais to aid in the management of stored-product insects.
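    Two of the simplest morphological features of the kind extracted from a region of interest are its pixel area and bounding-box extent. A minimal sketch on a binary mask (the mask is invented; the study's 22 features also include textural measures not shown):

```python
# Hedged sketch: basic morphological features from a binary region of
# interest, illustrating the kind of inputs fed to the identification
# model. The mask is toy data; textural features are not shown.
def region_features(mask):
    """mask: 2D list of 0/1. Returns (area, height, width, extent)."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    area = sum(map(sum, mask))                  # number of foreground pixels
    h = rows[-1] - rows[0] + 1                  # bounding-box height
    w = cols[-1] - cols[0] + 1                  # bounding-box width
    return area, h, w, area / (h * w)           # extent = area / bbox area

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0]]
print(region_features(mask))  # → (5, 3, 2, 0.8333...)
```

    Feature vectors like this, one per characteristic-wavelength image, would then be concatenated as inputs to the backpropagation neural network.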

    Deep learning based automatic multi-class wild pest monitoring approach using hybrid global and local activated features

    Specialized control of pests and diseases has been a high-priority issue for the agriculture industry in many countries. On account of automation and cost-effectiveness, image-based pest recognition systems are widely used in practical crop protection applications. However, owing to weak handcrafted features, current image-analysis approaches achieve low accuracy and poor robustness in practical large-scale multi-class pest detection and recognition. To tackle this problem, this paper proposes a novel deep learning based automatic approach using hybrid global and local activated features for pest monitoring. In the presented method, we exploit the global information in feature maps to build a Global activated Feature Pyramid Network (GaFPN) that extracts pests' highly discriminative features across various scales over both depth and position levels, making changes in depth- or spatially-sensitive features in pest images more visible during downsampling. Next, an improved pest localization module named Local activated Region Proposal Network (LaRPN) is proposed to find precise pest positions by augmenting contextualized and attentional information for feature completion and enhancement at the local level. The approach is evaluated on our 7-year large-scale pest dataset containing 88.6K images (16 types of pests) with 582.1K manually labelled pest objects. The experimental results show that our solution achieves over 75.03% mAP in industrial circumstances, which outweighs two other state-of-the-art methods: Faster R-CNN with mAP up to 70% and FPN with mAP up to 72%. Our code and dataset will be made publicly available.

    PestNet : an end-to-end deep learning approach for large-scale multi-class pest detection and classification

    Multi-class pest detection is one of the crucial components of pest management, involving localization in addition to classification, and it is much more difficult than generic object detection because of the subtle differences among pest species. This paper proposes a region-based end-to-end approach named PestNet for large-scale multi-class pest detection and classification based on deep learning. PestNet consists of three major parts. First, a novel channel-spatial attention (CSA) module is proposed to be fused into the convolutional neural network (CNN) backbone for feature extraction and enhancement. The second is a region proposal network (RPN) that provides region proposals as potential pest positions based on the feature maps extracted from images. The third, a position-sensitive score map (PSSM), replaces fully connected (FC) layers for pest classification and bounding box regression. Furthermore, we apply contextual regions of interest (RoIs) as contextual information on pest features to improve detection accuracy. We evaluate PestNet on our newly collected large-scale pest image dataset, Multi-class Pests Dataset 2018 (MPD2018), captured by our purpose-built task-specific image acquisition equipment and covering more than 80k images with over 580k pests labeled by agricultural experts and categorized into 16 classes. The experimental results show that the proposed PestNet performs well on multi-class pest detection with 75.46% mean average precision (mAP), which outperforms the state-of-the-art methods.
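    A contextual RoI can be sketched as enlarging each proposal box around its center so the classifier also sees the surrounding crop background. The expansion factor and image size below are assumptions, not the paper's values:

```python
# Hedged sketch: enlarging a region of interest to include surrounding
# context, in the spirit of PestNet's contextual RoIs. The factor of 1.5
# and the image bounds are illustrative assumptions.
def contextual_roi(box, factor=1.5, img_w=1000, img_h=1000):
    """Expand (x1, y1, x2, y2) about its center, clipped to the image."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * factor, (y2 - y1) * factor
    return (max(0.0, cx - w / 2), max(0.0, cy - h / 2),
            min(float(img_w), cx + w / 2), min(float(img_h), cy + h / 2))

print(contextual_roi((100, 100, 200, 200)))  # → (75.0, 75.0, 225.0, 225.0)
```

    Features pooled from the expanded box are then combined with those of the original proposal before classification.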

    Haar Random Forest Features and SVM Spatial Matching Kernel for Stonefly Species Identification

    This paper proposes an image classification method based on extracting image features using Haar random forests and combining them with a spatial-matching-kernel SVM. The method works by combining multiple efficient yet powerful learning algorithms at every stage of the recognition process. On the task of identifying aquatic stonefly larvae, the method matches or exceeds state-of-the-art performance, but with much higher efficiency.
    Keywords: object-class recognition; machine learning; SVM; random forests; Haar-like features
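    Haar-like features are cheap to evaluate because any rectangle sum costs four lookups in an integral image, which is what makes them practical as split tests in random forests. A minimal sketch of one two-rectangle feature (the image and feature layout are illustrative, not the paper's configuration):

```python
# Hedged sketch: a two-rectangle Haar-like feature evaluated via an
# integral image, the kind of inexpensive test a Haar random forest can
# split on. The toy image and rectangle layout are illustrative.
def integral_image(img):
    """ii[r][c] = sum of img over rows < r and cols < c (one-padded border)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = img[r][c] + ii[r][c + 1] + ii[r + 1][c] - ii[r][c]
    return ii

def rect_sum(ii, r1, c1, r2, c2):
    """Sum of pixels in the inclusive rectangle (r1, c1)..(r2, c2): 4 lookups."""
    return ii[r2 + 1][c2 + 1] - ii[r1][c2 + 1] - ii[r2 + 1][c1] + ii[r1][c1]

def haar_two_rect(ii, r, c, h, w):
    """Left-half sum minus right-half sum of an h-by-w window at (r, c)."""
    left = rect_sum(ii, r, c, r + h - 1, c + w // 2 - 1)
    right = rect_sum(ii, r, c + w // 2, r + h - 1, c + w - 1)
    return left - right

img = [[1, 1, 0, 0] for _ in range(4)]  # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # → 8
```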