Original overall architecture of GhostNet.
Unmanned Aerial Vehicles (UAVs) play an important role in remote sensing image classification because they can autonomously monitor specific areas and analyze images. Embedded platforms and deep learning are used to classify UAV images in real time. However, given limited memory and computational resources, deploying deep learning networks on embedded devices for real-time analysis of ground scenes remains challenging in practical applications. To balance computational cost and classification accuracy, a novel lightweight network based on the original GhostNet is presented. The computational cost of this network is reduced by changing the number of convolutional layers, and the fully connected layer at the end is replaced with a fully convolutional layer. To evaluate the performance of the Modified GhostNet in remote sensing scene classification, experiments are performed on three public datasets: UCMerced, AID, and NWPU-RESISC. Compared with the basic GhostNet, Floating Point Operations (FLOPs) are reduced from 7.85 MFLOPs to 2.58 MFLOPs, memory usage is reduced from 16.40 MB to 5.70 MB, and prediction time is improved by 18.86%. The Modified GhostNet also increases average accuracy (Acc) by 4.70% in the AID experiments and 3.39% in the UCMerced experiments. These results indicate that the Modified GhostNet can improve the performance of lightweight networks for scene classification and effectively enable real-time monitoring of ground scenes.
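As a quick check on the reported savings, the relative reductions follow directly from the numbers quoted in the abstract (7.85 → 2.58 MFLOPs, 16.40 → 5.70 MB); a small Python sketch:

```python
def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return 100.0 * (before - after) / before

# Figures taken from the abstract above.
flops_cut = reduction_pct(7.85, 2.58)    # ~67.1% fewer FLOPs
memory_cut = reduction_pct(16.40, 5.70)  # ~65.2% less memory
print(f"FLOPs reduced by {flops_cut:.1f}%, memory by {memory_cut:.1f}%")
```

So the Modified GhostNet cuts roughly two thirds of both the compute and the memory footprint of the original.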
The memory usage and prediction time of different models.
FLOPs of different models on UCMerced dataset.
The average accuracy of GhostNet before and after image augmentation.
Feature maps derived from different layers.
(a) The 8th layer. (b) The 9th layer. (c) The 10th layer. (d) The 11th layer.
The study area in Shenzhen, China, and travel flows extracted from the mobile phone positioning data at the base tower level.
There are 5,929 cell phone towers; the polygons approximating their service areas were generated by Voronoi tessellation of the tower locations. This dataset contains the positioning data of 9.7 million phone users (approximately 57.5% of the total population) during a workday in March 2012. Thicker lines indicate that more travel flows occurred between the two Voronoi polygons. The figure was created with the open-source visualization toolkit Processing (https://processing.org/). The administrative divisions are from a shapefile sourced from the Bureau of Planning and Natural Resources of Shenzhen (http://pnr.sz.gov.cn/ywzy/chgl/bzdtfw/).
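The Voronoi tessellation described above assigns every location to the cell of its nearest tower; a minimal sketch of that assignment rule in pure Python (the tower names and coordinates here are hypothetical, not the Shenzhen data):

```python
import math

# Hypothetical tower coordinates; the real data uses 5,929 Shenzhen towers.
towers = {"T1": (0.0, 0.0), "T2": (10.0, 0.0), "T3": (5.0, 8.0)}

def nearest_tower(x: float, y: float) -> str:
    """A point belongs to the Voronoi cell of its closest tower."""
    return min(towers, key=lambda name: math.dist((x, y), towers[name]))

print(nearest_tower(1.0, 1.0))  # a point close to T1
```

In practice a GIS toolkit would compute the cell polygons explicitly, but the nearest-tower rule above is what defines them.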
The average accuracy of the model before and after using transfer learning.
The average accuracy for different dropout positions.
The samples of a single category in the training set before and after augmentation.
The probability distributions of <i>D</i><sub><i>ave</i></sub> and parameters at the node level.
(a) and (c) are for the LN set; (b) and (d) are for the AN set. The different colors represent the corresponding LN or AN values, as shown in the top legend. Because the group with LN = 1 represents individuals who visited only one location in a day, their parameters are always zero. The dashed horizontal lines in (a) and (b) indicate the parameter values for the ensemble distribution, as referred to in Fig 5. The solid lines in (c) and (d) represent the power-law-with-exponential-cut-off fit for each group of the LN set and AN set.
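The fit curve referenced above has the standard form P(x) ∝ x<sup>−<i>α</i></sup>·exp(−x/<i>κ</i>); a small sketch of evaluating it (the parameter values below are illustrative placeholders, not the fitted values from the paper):

```python
import math

def powerlaw_cutoff(x: float, alpha: float, kappa: float) -> float:
    """Unnormalized power law with exponential cut-off: x**(-alpha) * exp(-x/kappa)."""
    return x ** (-alpha) * math.exp(-x / kappa)

# Illustrative parameters only; the paper fits alpha and kappa per LN/AN group.
print(powerlaw_cutoff(2.0, 1.5, 10.0))
```

The exponential factor suppresses the heavy tail of the pure power law, which is why this form is commonly used for truncated travel-distance distributions.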
