
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, a major breakthrough in the field, has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    Ship Detection Feature Analysis in Optical Satellite Imagery through Machine Learning Applications

    Ship detection remains an important challenge for both government and commercial industry. Current research has focused on deep learning and has found high success with large labeled datasets. However, deep learning becomes insufficient for limited datasets and when explainability is required. There exist scenarios in which explainability and human-in-the-loop processing are needed, such as naval applications. In these scenarios, handcrafted features and traditional classification algorithms can be useful. This research analyzes multiple texture and statistical features on a small optical satellite imagery dataset. The feature analysis covers Haar-like features, Haralick features, Hu moments, Histogram of Oriented Gradients, grayscale intensity histograms, and Local Binary Patterns. Feature performance is measured using eight different classification algorithms: K-Nearest Neighbors, Logistic Regression, Gradient Boosting, Extreme Gradient Boosting, Support Vector Machine, Random Decision Forest, Extremely Randomized Trees, and Bagging. The features are analyzed individually and in different combinations. In the individual feature analysis, Haralick features achieved a precision of 92.2% and were computationally efficient. The best combination of features was Haralick features paired with Histogram of Oriented Gradients and grayscale intensity histograms; this combination achieved a precision of 96.18% and an F1 score of 94.23%.
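    A minimal sketch of this kind of handcrafted-feature pipeline is given below, assuming scikit-image for feature extraction and scikit-learn for classification. The chip size, histogram bins, GLCM settings, and the choice of Gradient Boosting as the example classifier are illustrative assumptions, not the study's actual configuration.

```python
# Hedged sketch (not the authors' code): a few of the listed handcrafted
# features computed with scikit-image and scored with a scikit-learn classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def haralick_features(gray):
    """GLCM (Haralick-style) statistics on an 8-bit grayscale chip."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def chip_features(gray):
    """Concatenate Haralick, HOG, and grayscale-histogram descriptors."""
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    hist, _ = np.histogram(gray, bins=32, range=(0, 255), density=True)
    return np.hstack([haralick_features(gray), hog_vec, hist])

def evaluate(chips, labels):
    """chips: equally sized 8-bit grayscale patches; labels: 1 = ship, 0 = no ship."""
    X = np.vstack([chip_features(c) for c in chips])
    clf = GradientBoostingClassifier()  # one of the eight classifier families tried
    return cross_val_score(clf, X, labels, cv=5, scoring="precision")
```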

    Ship recognition on the sea surface using aerial images taken by UAV: a deep learning approach

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Oceans are vital to mankind: they are a major source of food, they have a large impact on the global environmental equilibrium, and most of the world's commerce is carried over them. Thus, maritime surveillance and monitoring, in particular identifying the ships in use, is of great importance for overseeing activities such as fishing, marine transportation, navigation in general, illegal border encroachment, and search and rescue operations. In this thesis, we used images obtained with Unmanned Aerial Vehicles (UAVs) over the Atlantic Ocean to identify what type of ship (if any) is present at a given location. Images generated by UAV cameras suffer from camera motion, scale variability, variability in the sea surface, and sun glare. Extracting information from these images is challenging and is mostly done by human operators, but advances in computer vision and the development of deep learning techniques in recent years have made it possible to do so automatically. We used four state-of-the-art pretrained deep learning network models, namely VGG16, Xception, ResNet, and InceptionResNet, all trained on the ImageNet dataset, modified their original structure using transfer-learning-based fine-tuning techniques, and then trained them on our dataset to create new models. We achieved very high accuracy (99.6% to 99.9% correct classifications) when classifying the ships that appear in the images of our dataset. With such a high success rate (albeit at the cost of high computing power), we can proceed to implement these algorithms on maritime patrol UAVs and thus improve Maritime Situational Awareness.
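    A minimal sketch of the transfer-learning setup described above, using Keras with a VGG16 backbone. The head architecture, input size, class count, learning rates, and two-stage schedule are assumptions for illustration, not the thesis' actual configuration.

```python
# Hedged sketch: freeze an ImageNet-pretrained backbone, train a new head,
# then unfreeze and fine-tune at a lower learning rate.
import tensorflow as tf

NUM_CLASSES = 5  # assumed number of ship categories
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the backbone and fine-tune with a much smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```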

    Crucial Feature Capture and Discrimination for Limited Training Data SAR ATR

    Although deep learning-based methods have achieved excellent performance on SAR ATR, the difficulty of acquiring and labeling large numbers of SAR images makes these otherwise well-performing methods degrade sharply. One likely reason is that most of them take the whole target image as input, yet research shows that, under limited training data, a deep learning model cannot capture the discriminative regions of the whole image and instead focuses on regions that are useless or even harmful for recognition, so the results are unsatisfactory. In this paper, we design a SAR ATR framework for limited training samples, which consists of two branches, a global assisted branch and a local enhanced branch, and two modules, a feature capture module and a feature discrimination module. In each training step, the global assisted branch first performs an initial recognition based on the whole image. Based on the initial recognition results, the feature capture module automatically searches for and locks onto the image regions crucial for correct recognition, which we call the golden key of the image. The local enhanced branch then extracts local features from these captured crucial regions. Finally, the overall features and local features are fed into the classifier and dynamically weighted using learnable voting parameters to collaboratively complete the final recognition under limited training samples. Model soundness experiments demonstrate the effectiveness of our method through improvements in feature distribution and recognition probability. Experimental results and comparisons on MSTAR and OPENSAR show that our method achieves superior recognition performance.
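    The following is a hedged sketch, not the paper's implementation, of the core fusion idea: a global branch and a local branch whose logits are combined with learnable voting weights. The backbones, the crop logic standing in for the feature capture module, and all sizes are placeholders.

```python
# Hedged sketch of a two-branch classifier with learnable voting fusion.
import torch
import torch.nn as nn

class TwoBranchATR(nn.Module):
    def __init__(self, global_net, local_net):
        super().__init__()
        self.global_net = global_net   # global assisted branch (whole SAR chip)
        self.local_net = local_net     # local enhanced branch (cropped key region)
        self.vote = nn.Parameter(torch.zeros(2))  # learnable voting parameters

    def forward(self, image, crop):
        g_logits = self.global_net(image)    # initial recognition on the whole image
        l_logits = self.local_net(crop)      # features from the captured crucial region
        w = torch.softmax(self.vote, dim=0)  # dynamic weights, summing to 1
        return w[0] * g_logits + w[1] * l_logits

# In the paper the crop would come from the feature capture module (locating the
# "golden key" region from the initial recognition); here it is a fixed center crop.
def center_crop(x, size=64):
    _, _, h, w = x.shape
    top, left = (h - size) // 2, (w - size) // 2
    return x[:, :, top:top + size, left:left + size]
```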

    Artificial Neural Networks and Evolutionary Computation in Remote Sensing

    Artificial neural networks (ANNs) and evolutionary computation methods have been successfully applied in remote sensing, since they offer unique advantages for the analysis of remotely sensed images. ANNs are effective in finding underlying relationships and structures within multidimensional datasets. Thanks to new sensors, we have images with more spectral bands at higher spatial resolutions, which clearly pose big-data problems, and evolutionary algorithms offer an effective approach to their analysis. This book includes eleven high-quality papers, selected after a careful reviewing process, addressing current remote sensing problems. In the chapters of the book, superstructural optimization is suggested for the optimal design of feedforward neural networks; CNNs are deployed on a nanosatellite payload to select images eligible for transmission to the ground; a new weight feature value convolutional neural network (WFCNN) is applied for fine remote sensing image segmentation and extraction of improved land-use information; mask regional convolutional neural networks (Mask R-CNN) are employed for extracting valley fill faces; state-of-the-art convolutional neural network (CNN)-based object detection models are applied to automatically detect airplanes and ships in VHR satellite images; a coarse-to-fine detection strategy is employed to detect ships of different sizes; and a deep quadruplet network (DQN) is proposed for hyperspectral image classification.

    Deep Metric Learning Based on Scalable Neighborhood Components for Remote Sensing Scene Characterization

    With the development of convolutional neural networks (CNNs), the semantic understanding of remote sensing (RS) scenes has been significantly improved thanks to their prominent feature encoding capabilities. While many existing deep-learning models focus on designing different architectures, only a few works in the RS field have investigated the performance of the learned feature embeddings and the associated metric space. In particular, two main loss functions have been exploited: the contrastive loss and the triplet loss. However, the straightforward application of these techniques to RS images may not be optimal for capturing their neighborhood structures in the metric space, due to the insufficient sampling of image pairs or triplets during training and to the inherent semantic complexity of remotely sensed data. To solve these problems, we propose a new deep metric learning approach, which overcomes the limitation on class discrimination by means of two different components: 1) scalable neighborhood component analysis (SNCA), which aims at discovering the neighborhood structure in the metric space, and 2) the cross-entropy loss, which aims at preserving the class discrimination capability based on the learned class prototypes. Moreover, in order to preserve feature consistency among all the minibatches during training, a novel optimization mechanism based on momentum update is introduced for minimizing the proposed loss. An extensive experimental comparison (using several state-of-the-art models and two different benchmark data sets) has been conducted to validate the effectiveness of the proposed method from different perspectives, including: 1) classification; 2) clustering; and 3) image retrieval. The related code of this article will be made publicly available for reproducible research by the community.
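    A minimal sketch of the two loss components described above: an SNCA-style neighborhood term over L2-normalized embeddings plus a cross-entropy term on class prototypes. The temperature, the weighting between the terms, and the prototype handling are illustrative assumptions; the momentum-update mechanism is omitted.

```python
# Hedged sketch of an SNCA neighborhood loss combined with prototype cross-entropy.
import torch
import torch.nn.functional as F

def snca_loss(embeddings, labels, temperature=0.05):
    """Each sample should assign high probability mass to same-class neighbors."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    log_p = F.log_softmax(sim, dim=1)                # p_ij over all other samples
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # -log of the total probability assigned to same-class neighbors
    pos_logsum = torch.logsumexp(log_p.masked_fill(~same, float("-inf")), dim=1)
    has_pos = same.any(dim=1)                        # skip samples with no positives
    return -pos_logsum[has_pos].mean()

def combined_loss(embeddings, prototypes, labels, alpha=0.5, temperature=0.05):
    """SNCA neighborhood term + cross-entropy on learned class prototypes."""
    logits = F.normalize(embeddings, dim=1) @ F.normalize(prototypes, dim=1).t()
    ce = F.cross_entropy(logits / temperature, labels)
    return alpha * snca_loss(embeddings, labels, temperature) + (1 - alpha) * ce
```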

    Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review

    Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement onboard radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have gradually been applied to sea-surface object detection. This article provides a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique and covering four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
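    As a hedged illustration of one of the traditional techniques mentioned above (background subtraction with foreground segmentation), the sketch below applies OpenCV's MOG2 subtractor to a maritime video. The thresholds, morphology kernel, and minimum blob area are assumptions, not values from the reviewed works.

```python
# Hedged sketch: background subtraction + contour filtering on a maritime video.
import cv2

def detect_surface_objects(video_path, min_area=200):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    varThreshold=25,
                                                    detectShadows=False)
    detections = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground mask for the current frame
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # keep only blobs large enough to be candidate sea-surface objects
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        detections.append(boxes)
    cap.release()
    return detections
```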