
    Original overall architecture of GhostNet.

    Unmanned Aerial Vehicles (UAVs) play an important role in remote sensing image classification because they can autonomously monitor specific areas and analyze images. Embedded platforms and deep learning are used to classify UAV images in real time. However, given the limited memory and computational resources, deploying deep learning networks on embedded devices and analyzing ground scenes in real time remain challenging in practice. To balance computational cost and classification accuracy, a novel lightweight network based on the original GhostNet is presented. The computational cost of this network is reduced by changing the number of convolutional layers. Meanwhile, the fully connected layer at the end is replaced with a fully convolutional layer. To evaluate the performance of the Modified GhostNet in remote sensing scene classification, experiments are performed on three public datasets: UCMerced, AID, and NWPU-RESISC. Compared with the basic GhostNet, the Floating Point Operations (FLOPs) are reduced from 7.85 MFLOPs to 2.58 MFLOPs, the memory footprint is reduced from 16.40 MB to 5.70 MB, and the prediction time is improved by 18.86%. The Modified GhostNet also increases average accuracy (Acc) by 4.70% in the AID experiments and 3.39% in the UCMerced experiments. These results indicate that the Modified GhostNet can improve the performance of lightweight networks for scene classification and effectively enable real-time monitoring of ground scenes.
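
    The structural change named in the abstract, swapping the final fully connected layer for a fully convolutional one, can be illustrated with a minimal PyTorch sketch. The channel count (960, as in the original GhostNet backbone) and class count (21, matching UCMerced) are illustrative assumptions; the paper's exact head is not reproduced here.

    import torch
    import torch.nn as nn

    class FullyConvHead(nn.Module):
        """Replaces a flatten + nn.Linear classifier with a 1x1 convolution,
        keeping the network fully convolutional end to end."""
        def __init__(self, in_channels: int = 960, num_classes: int = 21):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
            self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.pool(x)        # (N, C, 1, 1)
            x = self.classifier(x)  # (N, num_classes, 1, 1)
            return x.flatten(1)     # (N, num_classes) logits

    # Usage: logits = FullyConvHead()(torch.randn(2, 960, 7, 7))

    A 1x1 convolution over a pooled feature map computes the same linear map as a fully connected layer but avoids the large flatten-time weight matrix, which is one common way such a swap reduces parameters and memory.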

    The loss and accuracy of the GhostNet model before and after AID image augmentation.

    (a) Loss of training set. (b) Accuracy of training set. (c) Loss of validation set. (d) Accuracy of validation set.

    Feature maps derived from different layers.

    (a) The 8th layer. (b) The 9th layer. (c) The 10th layer. (d) The 11th layer.
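
    Feature maps like those shown in this figure can be captured from intermediate layers with forward hooks; the following is a hedged sketch assuming a PyTorch model. The layer indices 8 to 11 mirror the panels above but the toy model standing in for GhostNet is purely illustrative.

    import torch
    import torch.nn as nn

    def capture_feature_maps(model: nn.Module, x: torch.Tensor, indices):
        """Run one forward pass and return {index: activation} for the
        chosen child modules, using forward hooks."""
        feats, handles = {}, []
        children = list(model.children())
        for i in indices:
            def hook(_m, _inp, out, i=i):       # default arg pins the index
                feats[i] = out.detach()
            handles.append(children[i].register_forward_hook(hook))
        with torch.no_grad():
            model(x)
        for h in handles:
            h.remove()                           # clean up the hooks
        return feats

    # Usage with a toy stack of conv blocks standing in for GhostNet:
    toy = nn.Sequential(*[nn.Conv2d(3 if i == 0 else 8, 8, 3, padding=1)
                          for i in range(12)])
    maps = capture_feature_maps(toy, torch.randn(1, 3, 64, 64), [8, 9, 10, 11])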

    The loss and accuracy of the GhostNet model before and after UCMerced image augmentation.

    (a) Loss of training set. (b) Accuracy of training set. (c) Loss of validation set. (d) Accuracy of validation set.

    The entire training strategy of the Modified GhostNet model.


    Modified overall architecture of GhostNet.


    Combination of rotation and different brightness and contrast augmentation.

    (a) 90° + α = 1.4, β = 0.6. (b) 180° + α = 1.4, β = 0.6. (c) 270° + α = 1.4, β = 0.6. (d) 90° + α = 0.8, β = 1.2. (e) 180° + α = 0.8, β = 1.2. (f) 270° + α = 0.8, β = 1.2.
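
    The combined rotation plus brightness/contrast augmentation can be sketched as follows, assuming NumPy and OpenCV. The exact semantics of α and β are not spelled out above; this sketch assumes the common gain/bias convention out = clip(α · pixel + β) on images normalized to [0, 1], which is an assumption rather than the paper's definition.

    import cv2
    import numpy as np

    ROT_CODES = {90: cv2.ROTATE_90_CLOCKWISE,
                 180: cv2.ROTATE_180,
                 270: cv2.ROTATE_90_COUNTERCLOCKWISE}

    def augment(img: np.ndarray, angle: int, alpha: float, beta: float) -> np.ndarray:
        """Rotate by a multiple of 90 degrees, then apply a linear
        brightness/contrast adjustment on normalized pixels."""
        img = cv2.rotate(img, ROT_CODES[angle])      # lossless 90-degree rotation
        img = img.astype(np.float32) / 255.0         # normalize to [0, 1]
        img = np.clip(alpha * img + beta, 0.0, 1.0)  # gain/bias transform
        return (img * 255.0).astype(np.uint8)

    # Panel (a) of the figure, for example:
    # out = augment(cv2.imread("scene.jpg"), 90, alpha=1.4, beta=0.6)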

    FLOPs of different models on UCMerced dataset.

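    FLOPs and weight-memory figures like those compared here are commonly measured with a profiling tool; below is a minimal sketch assuming PyTorch and the third-party thop package, with a small stand-in model rather than the paper's network. Note that thop reports multiply-accumulate operations (MACs), which some papers report as FLOPs.

    import torch
    import torch.nn as nn
    from thop import profile

    # Stand-in classifier; UCMerced images are 256 x 256 pixels.
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(16, 21))
    x = torch.randn(1, 3, 256, 256)

    macs, params = profile(model, inputs=(x,))
    print(f"{macs / 1e6:.2f} MMACs, {params * 4 / 1e6:.2f} MB (fp32 weights)")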

    The average accuracy of GhostNet before and after image augmentation.


    The loss and accuracy of the GhostNet model before and after NWPU-RESISC image augmentation.

    (a) Loss of training set. (b) Accuracy of training set. (c) Loss of validation set. (d) Accuracy of validation set.