Deep Learning for Aerial Scene Understanding in High Resolution Remote Sensing Imagery from the Lab to the Wild
This thesis presents the application of deep learning to aerial scene understanding, e.g., aerial scene recognition, multi-label object classification, and semantic segmentation. Beyond training deep networks under laboratory conditions, this thesis also offers learning strategies for practical scenarios, e.g., when data are collected without constraints or annotations are scarce
Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery
Deep-learning frameworks have made remarkable progress thanks to the creation of large annotated datasets such as ImageNet, which has over one million training images. Although this works well for color (RGB) imagery, labeled datasets for other sensor modalities (e.g., multispectral and hyperspectral) are minuscule in comparison. This is because annotated datasets are expensive and labor-intensive to produce; and since this would be impractical to accomplish for each type of sensor, current state-of-the-art approaches in computer vision are not ideal for remote sensing problems. The shortage of annotated remote sensing imagery beyond the visual spectrum has forced researchers to embrace unsupervised feature-extraction frameworks. These features are learned on a per-image basis, so they tend not to generalize well across other datasets. In this dissertation, we propose three new strategies for learning feature-extraction frameworks with only a small quantity of annotated image data: 1) self-taught feature learning, 2) domain adaptation with synthetic imagery, and 3) semi-supervised classification. "Self-taught" feature-learning frameworks are trained with large quantities of unlabeled imagery, and these networks then extract spatial-spectral features from annotated data for supervised classification. Synthetic remote sensing imagery can be used to bootstrap a deep convolutional neural network, which we can then fine-tune with real imagery. Semi-supervised classifiers prevent overfitting by jointly optimizing the supervised classification task alongside one or more unsupervised learning tasks (e.g., reconstruction). Although obtaining large quantities of annotated image data would be ideal, our work shows that we can make do with less cost-prohibitive methods that are more practical for the end user
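The semi-supervised idea above can be sketched as a joint objective: cross-entropy on the few labeled samples plus a weighted reconstruction error on abundant unlabeled samples. This is an illustrative toy in plain NumPy, not the dissertation's implementation; the layer sizes, the weight `LAMBDA`, and the function names are assumptions.

```python
import numpy as np

# Toy sketch (assumed, not the dissertation's code) of the joint objective:
# total loss = supervised cross-entropy + LAMBDA * unsupervised reconstruction error.
rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(8, 4))   # encoder: 8-dim input -> 4-dim code
W_dec = rng.normal(scale=0.1, size=(4, 8))   # decoder: code -> reconstruction
W_cls = rng.normal(scale=0.1, size=(4, 3))   # classifier head: code -> 3 classes
LAMBDA = 0.5                                 # weight on the reconstruction term

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(x_labeled, y_onehot, x_unlabeled):
    # Supervised branch: cross-entropy on the few labeled samples.
    code_l = np.tanh(x_labeled @ W_enc)
    probs = softmax(code_l @ W_cls)
    ce = -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))
    # Unsupervised branch: reconstruction error on abundant unlabeled samples,
    # sharing the same encoder, which regularizes the learned features.
    code_u = np.tanh(x_unlabeled @ W_enc)
    recon = code_u @ W_dec
    mse = np.mean((recon - x_unlabeled) ** 2)
    return ce + LAMBDA * mse

x_l = rng.normal(size=(5, 8))                 # 5 labeled samples
y = np.eye(3)[rng.integers(0, 3, size=5)]     # their one-hot labels
x_u = rng.normal(size=(50, 8))                # 50 unlabeled samples
print(joint_loss(x_l, y, x_u))
```

Because both branches share the encoder `W_enc`, minimizing this sum forces the features to be useful for classification while still explaining the unlabeled data, which is what prevents overfitting on the small labeled set.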
Recommended from our members
Classification of Remote Sensing Image Data Using the RSSCN7 Dataset
A novel technique for remote sensing image scene classification is employed using the Compact Vision Transformer (CVT) architecture. This model leverages deep learning and self-attention to significantly improve the accuracy and efficiency of scene classification in remote sensing imagery. Through extensive training and evaluation on the RSSCN7 dataset, our CVT-based model has achieved an impressive accuracy rate of 87.46% on the original dataset. This result underscores the promise of CVT models in the domain of remote sensing and their applicability in real-world scenarios. Our report furnishes an elaborate account of the model's architecture, training methodology, and evaluation process, shedding light on the key insights and advancements in remote sensing image analysis. This work holds promise for a variety of applications, including agriculture, environmental surveillance, and disaster control, where precise scene classification is of utmost importance
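The self-attention mechanism at the core of vision-transformer models like the one described above can be illustrated in a few lines. This is a generic scaled dot-product attention sketch in NumPy, not the CVT architecture itself (CVT additionally uses convolutional tokenization and sequence pooling); the dimensions and weight names are assumptions.

```python
import numpy as np

# Generic scaled dot-product self-attention sketch (illustrative only;
# not the actual CVT implementation described in the abstract).
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # pairwise patch affinities
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)  # softmax over key patches
    return weights @ V                           # attention-weighted mix of values

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 32))                    # 16 image patches, 32-dim embeddings
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(32, 32)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                 # one attended vector per patch
```

Each output row is a weighted average of all patch values, which is what lets a transformer relate spatially distant parts of a scene in a single layer.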
Urban scene description for a multi-scale classification of high-resolution imagery: the case of the Cape Town urban scene
In this paper, a multi-level contextual classification approach for the City of Cape Town, South Africa is presented. The methodology developed to identify the different objects using the multi-level contextual technique comprised three important phases
Automated road network extraction from high resolution multi-spectral imagery
In this paper, a new approach to road network extraction from multi-spectral (MS) imagery is presented. The proposed approach begins with an image segmentation using a spectral clustering algorithm. This step focuses on the exploitation of the spectral information for feature extraction. The road cluster(s) is automatically identified using a fuzzy classifier based on a set of predefined membership functions for road surfaces and the corresponding normalized digital numbers in each multi-spectral band. A number of shape descriptors from the refined Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects such as parking lots, buildings, or crop fields. An iterative and localized Radon transform is then performed on the classified and refined road pixels to extract road centerline segments. The detected road segments are further grouped to form the final road network, which is evaluated against a reference dataset. Our experiments on Ikonos MS, Quickbird MS, and color aerial imagery show that the proposed approach is effective in automating road network extraction from high resolution multi-spectral imagery. Results from two different evaluation schemes also indicated that the proposed approach achieves a performance comparable to other methods
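The Radon-transform centerline step can be illustrated with a minimal accumulator: each road pixel votes for the (angle, offset) line parameters passing through it, and the dominant line emerges as the accumulator peak. This is a simplified Hough/Radon-style sketch under assumed discretization choices, not the paper's iterative, localized implementation.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's implementation) of finding the
# dominant line in a binary road mask via an (angle, offset) accumulator.
def dominant_line(mask, n_angles=180):
    ys, xs = np.nonzero(mask)                        # road pixel coordinates
    thetas = np.deg2rad(np.arange(n_angles))
    diag = int(np.ceil(np.hypot(*mask.shape)))       # bound on |rho|
    acc = np.zeros((n_angles, 2 * diag + 1), dtype=int)
    for t_idx, t in enumerate(thetas):
        # Line parameterization: rho = x*cos(theta) + y*sin(theta),
        # shifted by `diag` so the offset indexes a non-negative bin.
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[t_idx], rhos, 1)               # each pixel casts one vote
    t_idx, rho_idx = np.unravel_index(acc.argmax(), acc.shape)
    return t_idx, rho_idx - diag                     # angle in degrees, signed offset

# Toy road mask: a vertical line of pixels at column x == 5.
mask = np.zeros((20, 20), dtype=bool)
mask[:, 5] = True
angle_deg, rho = dominant_line(mask)
print(angle_deg, rho)
```

For the toy vertical line, every pixel satisfies rho = x*cos(0) = 5, so all votes pile into the (0 degrees, offset 5) bin. A localized variant, as in the paper, would run this only inside a window around candidate road pixels to recover short centerline segments rather than global lines.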