4 research outputs found

    DALF: An AI Enabled Adversarial Framework for Classification of Hyperspectral Images

    Hyperspectral image classification is a very complex and challenging process. With deep neural networks such as Convolutional Neural Networks (CNN) combined with explicit dimensionality reduction, the capability of a classifier is greatly increased. However, there is still the problem of insufficient training samples. In this paper, we overcome this problem by proposing an Artificial Intelligence (AI) based framework named Deep Adversarial Learning Framework (DALF) that exploits a deep autoencoder for dimensionality reduction and a Generative Adversarial Network (GAN) for generating new Hyperspectral Imaging (HSI) samples, which are verified by a discriminator in a non-cooperative game setting, besides using a classifier. A CNN is used for both the generator and the discriminator, while the classifier role is played by a Support Vector Machine (SVM) and a Neural Network (NN). An algorithm named Generative Model based Hybrid Approach for HSI Classification (GMHA-HSIC), which drives the functionality of the proposed framework, is presented. The success of DALF in accurate classification depends largely on the synthesis and labelling of spectra on a regular basis. The synthetic samples, produced by an iterative process and verified by the discriminator, result in useful spectra. By training the GAN together with the associated deep learning models, the framework improves classification performance. Our experimental results revealed that the proposed framework has the potential to improve the state of the art, besides providing an effective data augmentation strategy.
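The core idea the abstract describes, generating synthetic spectra iteratively and keeping only those the discriminator accepts, can be sketched in a few lines. This is a minimal illustrative stub, not the paper's GMHA-HSIC implementation: `generate_spectrum` and `discriminator_score` are hypothetical stand-ins for the trained GAN generator and discriminator.

```python
import random

def generate_spectrum(rng, bands=8):
    """Stand-in for the GAN generator: returns one synthetic spectrum
    as a list of reflectance values (purely illustrative)."""
    return [rng.random() for _ in range(bands)]

def discriminator_score(spectrum):
    """Stand-in for the discriminator: scores how 'real' a spectrum
    looks. Here just the mean reflectance, for illustration only."""
    return sum(spectrum) / len(spectrum)

def augment(real_samples, n_synthetic, threshold=0.5, seed=0):
    """Grow the training pool with discriminator-verified synthetic
    samples, mirroring the iterative verification loop described
    in the abstract."""
    rng = random.Random(seed)
    pool = list(real_samples)
    accepted = 0
    while accepted < n_synthetic:
        candidate = generate_spectrum(rng)
        # Verification step: only spectra the discriminator accepts
        # are added to the pool; rejected ones are discarded.
        if discriminator_score(candidate) >= threshold:
            pool.append(candidate)
            accepted += 1
    return pool

real = [[0.6] * 8 for _ in range(10)]
augmented = augment(real, n_synthetic=5)
```

In the actual framework the classifier (SVM or NN) would then be trained on the augmented pool; here the point is only the accept/reject gating of synthetic samples.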

    Oil Palm USB (Unstripped Bunch) Detector Trained on Synthetic Images Generated by PGGAN

    Identifying Unstripped Bunches (USB) is a pivotal challenge in palm oil production, contributing to reduced mill efficiency. Existing manual detection methods are time-consuming and prone to inaccuracies. Therefore, we propose a solution harnessing computer vision technology. Specifically, we leverage Faster R-CNN (Region-based Convolutional Neural Network), a robust object detection algorithm, and complement it with Progressive Growing Generative Adversarial Networks (PGGAN) for synthetic image generation. A scarcity of authentic USB images may hinder the application of Faster R-CNN; herein, PGGAN plays a pivotal role in generating synthetic images of Empty Fruit Bunches (EFB) and USB. Our approach pairs synthetic images with authentic ones to train the Faster R-CNN, with VGG16 serving as the backbone feature generator. According to our experimental results, USB detectors trained solely on authentic images achieved an accuracy of 77.1%, while training solely on synthetic images led to a slightly reduced accuracy of 75.3%. Strikingly, fusing authentic and synthetic images in a balanced 1:1 ratio raised accuracy to 87.9%, a 10.1% improvement. This underscores the potential of synthetic data augmentation in refining detection systems. By combining authentic and synthetic data, we unlock a level of accuracy in USB detection that was previously unattainable. This contribution holds significant implications for the industry, motivating further exploration of advanced data synthesis techniques and refined detection models.
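The key experimental recipe, pairing authentic and synthetic images in a balanced 1:1 ratio before training, can be sketched as follows. This is a hypothetical helper, not code from the paper: `balanced_mix` and the tuple-based placeholder "images" are assumptions for illustration.

```python
import random

def balanced_mix(real, synthetic, seed=0):
    """Combine real and synthetic training images in a 1:1 ratio,
    truncating the larger set, then shuffle so every batch sees
    both kinds of data."""
    rng = random.Random(seed)
    n = min(len(real), len(synthetic))  # enforce the 1:1 balance
    mixed = real[:n] + synthetic[:n]
    rng.shuffle(mixed)
    return mixed

# Placeholder records standing in for annotated images.
real_imgs = [("real", i) for i in range(100)]
synth_imgs = [("synthetic", i) for i in range(150)]
train_set = balanced_mix(real_imgs, synth_imgs)
```

The resulting `train_set` would then be fed to the Faster R-CNN training loop; the abstract's result is that this balanced mix outperformed training on either source alone.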
