
    Capsule Networks for Hyperspectral Image Classification

    Convolutional neural networks (CNNs) have recently exhibited excellent performance in hyperspectral image classification tasks. However, straightforward CNN-based network architectures still encounter obstacles in effectively exploiting the relationships between hyperspectral imaging (HSI) features in the spectral-spatial domain, which is a key factor in dealing with the high level of complexity present in remotely sensed HSI data. Although deeper architectures try to mitigate these limitations, they also face challenges with the convergence of the network parameters, which eventually limits classification performance under highly demanding scenarios. In this paper, we propose a new CNN architecture based on spectral-spatial capsule networks in order to achieve highly accurate classification of HSIs while significantly reducing the network design complexity. Specifically, based on Hinton's capsule networks, we develop a CNN model extension that redefines the concept of capsule units as spectral-spatial units specialized in classifying remotely sensed HSI data. The proposed model is composed of several building blocks, called spectral-spatial capsules, which are able to learn HSI spectral-spatial features considering their corresponding spatial positions in the scene, their associated spectral signatures, and their possible transformations. Our experiments, conducted using five well-known HSI data sets and several state-of-the-art classification methods, reveal that our HSI classification approach based on spectral-spatial capsules provides competitive advantages in terms of both classification accuracy and computational time.
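
A capsule unit of the kind described above outputs a vector whose length encodes the probability that an entity is present; capsule formulations enforce this with the "squash" nonlinearity. The following is only an illustrative NumPy sketch of that nonlinearity, not the authors' implementation:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity (Sabour et al., 2017): shrinks short
    vectors toward zero and long vectors toward unit length, preserving
    orientation, so a capsule's output length reads as a probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# A toy batch of three capsule outputs (4-D pose vectors).
caps = np.array([[0.0, 0.0, 0.0, 0.0],
                 [1.0, 2.0, 2.0, 0.0],
                 [10.0, 0.0, 0.0, 0.0]])
# Lengths lie in [0, 1): near 0 for a weak capsule, near 1 for a strong one.
lengths = np.linalg.norm(squash(caps), axis=-1)
```

Because length is capped below 1, the class of a pixel can be read off as the index of the longest output capsule without any extra softmax layer.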

    Knowledge Extraction using Capsule Deep Learning Approaches

    Limited training data, high dimensionality, image complexity (for images generated from spatiotemporal signals in BCI), and similarity between classes are the main challenges confronting deep learning (DL) methods and can result in suboptimal classification performance. Most DL methods employ Convolutional Neural Networks (CNNs), which contain pooling in their architecture. Pooling loses valuable information and the exact spatial correlations between different entity parts. More importantly, a new viewpoint of an object in the image cannot be preserved by pooling. The Capsule Neural Network (CapsNet) has been introduced to address these shortcomings by preserving the hierarchy between different entity parts in an image, even when using limited training samples [1], [2]. The potential of CapsNet methods has been demonstrated across disciplines, including hyperspectral imaging, image classification, segmentation, video detection, and human movement recognition. Motivated by CapsNet, we have recently developed an end-to-end DL architecture, the Hybrid Capsule Network (HCapsNet), with state-of-the-art results for hyperspectral image classification while using far fewer training samples [3]. In another study, the proposed CapsNet architecture yielded encouraging results for the investigation of infant intrinsic movement at various phases of a new experiment in collaboration with the Human Brain and Behavior Lab at Florida Atlantic University (Prof. Kelso and colleagues) [4]. The results showed the performance of 2D CapsNets in assessing the spatial relationships between different body parts using 2D histogram features.

    The non-invasive electroencephalography (EEG) provided by wearable neurotechnology is a massive challenge for AI. My study also focused on developing novel AI algorithms to address the issues involved in decoding EEG signals into control signals for neurotechnology based on brain-computer interfaces (BCIs). These methods are expected to be well suited for BCI applications, particularly when learning various EEG properties with limited training data (typically the case for BCIs). Building on recent work on decoding imagined speech using CNNs, we also applied CapsNet to direct speech BCIs [5]. In this research, the CapsNet architecture is modified using multi-level feature maps and multiple capsule layers. In addition, the new Tier 2 Northern Ireland High-Performance Computing facility enabled us to train deep models with enormous processing power; massively parallel computing using the Asynchronous Successive Halving Algorithm (ASHA) is therefore used for hyperparameter optimisation. Since CapsNet is still in its early stages of development and has demonstrated promising results on several challenging datasets, this method has the potential to develop relationships with colleagues in other disciplines, which could result in new research applications.

    References
    [1] S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between capsules," in Advances in Neural Information Processing Systems, 2017, pp. 3857–3867. Accessed: Apr. 09, 2022. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/2cad8fa47bbef282badbb8de5374b894-Abstract.html
    [2] G. E. Hinton, S. Sabour, and N. Frosst, "Matrix capsules with EM routing," International Conference on Learning Representations (ICLR), pp. 1–15, 2018. [Online]. Available: https://openreview.net/pdf?id=HJWLfGWRb
    [3] M. Khodadadzadeh, X. Ding, P. Chaurasia, and D. Coyle, "A Hybrid Capsule Network for Hyperspectral Image Classification," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 14, pp. 11824–11839, 2021, doi: 10.1109/JSTARS.2021.3126427.
    [4] M. Khodadadzadeh, A. Sloan, S. Kelso, and D. Coyle, "2D Capsule Networks Detect Perceived Changes in Infant–Environment Relationship Reflected in 3D Infant Movement Dynamics," manuscript submitted for publication in Scientific Reports, Nature, 2023.
    [5] M. Khodadadzadeh and D. Coyle, "Imagined Speech Classification from Electroencephalography with a Features-Guided Capsule Neural Network," Dec. 18, 2022. Accessed: Mar. 03, 2023. [Online]. Available: https://pure.ulster.ac.uk/en/publications/imagined-speech-classification-from-electroencephalography-with-a
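
The ASHA hyperparameter optimisation mentioned above builds on successive halving: evaluate many configurations at a small budget, promote the best fraction to an eta-times larger budget, and repeat. A minimal synchronous sketch follows (ASHA itself runs the same promotion rule asynchronously across parallel workers; the function names and toy loss are illustrative, not the authors' setup):

```python
import numpy as np

def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Synchronous successive halving, the core of ASHA.
    `evaluate(cfg, budget)` returns a loss (lower is better). Each round
    keeps the top 1/eta configurations and multiplies the budget by eta."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        losses = [evaluate(cfg, budget) for cfg in survivors]
        keep = max(1, len(survivors) // eta)   # promote the best fraction
        order = np.argsort(losses)[:keep]
        survivors = [survivors[i] for i in order]
        budget *= eta                          # cheaper rounds weed out losers
    return survivors[0]

# Toy example: the "loss" is the distance of a learning-rate guess from 0.1,
# plus a noise-like term that shrinks as the budget grows.
best = successive_halving(
    configs=[0.001, 0.01, 0.1, 0.5, 1.0, 0.05, 0.2, 0.15, 0.08],
    evaluate=lambda lr, budget: abs(lr - 0.1) + 1.0 / budget,
)
```

The asynchronous variant removes the synchronisation barrier between rounds, which is what makes it suit a massively parallel HPC facility.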

    A Hybrid Capsule Network for Hyperspectral Image Classification


    Study on comparison of biochemistry between Trogoderma granarium Everts and Trogoderma variabile Ballion

    Stored grains are paramount commodities to be preserved and stocked for future supply to the market according to requirements. However, one of the major problems during storage is insect pests, among which insects of Trogoderma sp., especially the khapra beetle (Trogoderma granarium), are considered the world's most dangerous stored-grain insect pests. Therefore, it has been listed as a quarantine insect pest in many countries. For timely management of this quarantine pest, effective and rapid diagnostic methods are required. Until now, diagnostic technology has mainly been based on the morphology of insects, which requires trained taxonomists. Recently, diagnostics based on metabolites and on hyperspectral imaging coupled with machine learning have been gaining importance. However, very little is known about the metabolites in Trogoderma sp., and how the host grain, gender, and geographical distribution affect the metabolomic profile of these species is still unknown. In this thesis, volatile organic compounds (VOCs) emitted by Trogoderma variabile at different life stages were analysed as biomarkers that can help us understand the biochemistry and metabolome of this insect. Some compounds were identified from different stages of T. variabile, which could be used as a diagnostic tool for this insect. Gas chromatography coupled to mass spectrometry (GC-MS) was used to study the metabolite profile of T. variabile on different host grains. However, several factors affect the VOCs collected, including extraction time and the number of insects. The results indicated that the optimal numbers of insects required for VOC extraction at each life stage were 25 for larvae and 20 for adults, and sixteen hours was selected as the optimal extraction time for both larvae and adults.

    Some of the VOCs identified from this insect can be used as biomarkers, such as pentanoic acid; diethoxymethyl acetate; 1-decyne; naphthalene, 2-methyl-; n-decanoic acid; dodecane, 1-iodo-; and m-camphorene from larvae. Butanoic acid, 2-methyl-; pentanoic acid; heptane, 1,1'-oxybis-; 2(3H)-furanone, 5-ethyldihydro-; pentadecane, 2,6,10-trimethyl-; and 1,14-tetradecanediol were found in males, whereas pentadecane; nonanoic acid; pentadecane, 2,6,10-trimethyl-; undecanal; and hexadecanal were identified from females. Additionally, direct immersion solid-phase microextraction (DI-SPME) followed by GC-MS analysis was employed for the collection, separation, and identification of the chemical compounds from T. variabile adults fed on four different host grains. Results showed that the host grain has a significant effect on the chemical compounds identified from females and males. There were 23 compounds identified from adults reared on canola and wheat, whereas 26 and 28 compounds were detected from adults reared on oats and barley, respectively. Results showed that 11-methylpentacosane; 13-methylheptacosane; heptacosane; docosane, 1-iodo-; and nonacosane were the most significant compounds identified from T. variabile males reared on different host grains, while the main compounds identified from females cultured on different host grains included docosane, 1-iodo-; 1-butanamine, N-butyl-; oleic acid; heptacosane; 13-methylheptacosane; hexacosane; nonacosane; 2-methyloctacosane; n-hexadecanoic acid; and docosane. A novel diagnostic tool to discriminate between T. granarium and T. variabile was developed using visible near-infrared hyperspectral imaging and deep learning models, including Convolutional Neural Networks (CNNs) and a Capsule Network. Ventral orientation showed better accuracy than dorsal orientation of the insects for both larval and adult stages.

    This technology offers a new approach to, and the possibility of, effective identification of T. granarium and T. variabile from their body fragments and larval skins. The results showed high accuracy in discriminating between T. granarium and T. variabile: 93.4% and 96.2% for adults and larvae, respectively, and accuracies of 91.6%, 91.7%, and 90.3% were achieved for larval skins, adult fragments, and larval fragments, respectively.

    Capsule Networks for Object Detection in UAV Imagery

    Recent advances in Convolutional Neural Networks (CNNs) have attracted great attention in remote sensing due to their high capability to model the high-level semantic content of Remote Sensing (RS) images. However, CNNs do not explicitly retain the relative position of objects in an image and, thus, the effectiveness of the obtained features is limited in the framework of complex object detection problems. To address this problem, in this paper we introduce Capsule Networks (CapsNets) for object detection in Unmanned Aerial Vehicle-acquired images. Unlike CNNs, CapsNets extract and exploit the information content about objects' relative position across several layers, which enables parsing crowded scenes with overlapping objects. Experimental results obtained on two datasets for car and solar panel detection problems show that CapsNets provide similar object detection accuracies when compared to state-of-the-art deep models with significantly reduced computational time. This is due to the fact that CapsNets emphasize dynamic routing instead of depth.

    EC/H2020/759764/EU/Accurate and Scalable Processing of Big Data in Earth Observation/BigEart
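
The dynamic routing that CapsNets emphasize instead of depth is routing-by-agreement (Sabour et al., 2017): coupling coefficients between lower- and upper-layer capsules are iteratively raised wherever a lower capsule's prediction agrees with the upper capsule's current output. A toy NumPy sketch of the procedure (shapes, seed, and iteration count are illustrative only):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement. u_hat has shape (num_in, num_out, dim):
    prediction vectors from each lower capsule for each upper capsule.
    Coupling logits b start at zero and grow with prediction agreement."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))
    for _ in range(iters):
        c = softmax(b, axis=1)              # couplings per lower capsule
        s = (c[..., None] * u_hat).sum(0)   # weighted sum -> (num_out, dim)
        v = squash(s)                       # upper-capsule outputs
        b = b + (u_hat * v[None]).sum(-1)   # dot-product agreement update
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(8, 2, 4)))  # 8 lower, 2 upper capsules
```

Because the couplings are recomputed at inference time rather than learned as extra layers, agreement between parts and wholes is resolved without stacking more depth, which is the source of the reduced computational cost noted in the abstract.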

    DALF: An AI Enabled Adversarial Framework for Classification of Hyperspectral Images

    Hyperspectral image classification is a very complex and challenging process. With deep neural networks such as Convolutional Neural Networks (CNNs) with explicit dimensionality reduction, the capability of the classifier is greatly increased. However, there is still the problem of insufficient training samples. In this paper, we overcome this problem by proposing an Artificial Intelligence (AI) based framework named the Deep Adversarial Learning Framework (DALF), which exploits a deep autoencoder for dimensionality reduction and a Generative Adversarial Network (GAN) for generating new Hyperspectral Imaging (HSI) samples that are verified by a discriminator in a non-cooperative game setting, besides using a classifier. A CNN is used for both the generator and the discriminator, while the classifier role is played by a Support Vector Machine (SVM) and a Neural Network (NN). An algorithm named the Generative Model based Hybrid Approach for HSI Classification (GMHA-HSIC), which drives the functionality of the proposed framework, is also presented. The success of DALF in accurate classification is largely dependent on the synthesis and labelling of spectra on a regular basis. The synthetic samples, generated through an iterative process and verified by the discriminator, result in useful spectra. By training the GAN with the associated deep learning models, the framework improves classification performance. Our experimental results revealed that the proposed framework has the potential to improve the state of the art, besides providing an effective data augmentation strategy.
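
The non-cooperative game between generator and discriminator described above is typically trained with opposing log-likelihood losses. A minimal sketch of those losses follows (the standard non-saturating variant; the scores and function name are illustrative and not the DALF implementation):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Non-saturating GAN losses from discriminator scores in (0, 1).
    d_real: D's scores on real HSI spectra; d_fake: scores on generated
    spectra. D wants d_real -> 1 and d_fake -> 0; G wants d_fake -> 1."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# A discriminator that separates real from synthetic spectra well
# (real ~ 1, fake ~ 0) has a low d_loss but leaves the generator with a
# high g_loss, pushing G to synthesize more realistic spectra.
d_loss, g_loss = gan_losses(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
```

At equilibrium the discriminator can no longer tell synthetic spectra from real ones, which is what makes the verified samples usable for augmentation.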