
    The Digital Earth Observation Librarian: A Data Mining Approach for Large Satellite Images Archives

    Over the years, various Earth Observation (EO) satellites have generated huge amounts of data. Extracting the latent information in these data repositories is not a trivial task; new methodologies and tools capable of handling the size, complexity, and variety of the data are required, and data scientists need support for the data manipulation, labeling, and information extraction processes. This paper presents our Earth Observation Image Librarian (EOLib), a modular software framework which offers innovative image data mining capabilities for TerraSAR-X and EO image data in general. The main goal of EOLib is to reduce the time needed to bring information to end-users from Payload Ground Segments (PGS). EOLib is composed of several modules offering functionalities such as data ingestion, feature extraction from SAR (Synthetic Aperture Radar) data, meta-data extraction, semantic definition of the image content through machine learning and data mining methods, advanced querying of the image archives based on content, meta-data and semantic categories, as well as 3-D visualization of the processed images. EOLib is operated by the Multi-Mission Payload Ground Segment of DLR's (the German Aerospace Center's) Remote Sensing Data Center at Oberpfaffenhofen, Germany.
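    As a rough illustration of the modular design the abstract describes, the following Python sketch chains independent stages (meta-data extraction, feature extraction) into one pipeline. All class and function names here are hypothetical; EOLib's actual modules and interfaces are not part of this sketch.

    # Hypothetical sketch of a modular EO processing pipeline; not EOLib's API.
    from dataclasses import dataclass, field

    @dataclass
    class Product:
        """A single EO product flowing through the pipeline."""
        path: str
        metadata: dict = field(default_factory=dict)
        features: dict = field(default_factory=dict)

    class Pipeline:
        """Chains independent stages so each can be developed and swapped separately."""
        def __init__(self):
            self.stages = []

        def add_stage(self, stage):
            self.stages.append(stage)
            return self

        def run(self, product):
            for stage in self.stages:
                product = stage(product)
            return product

    def extract_metadata(p):
        p.metadata["sensor"] = "TerraSAR-X"  # placeholder value
        return p

    def extract_features(p):
        p.features["texture"] = [0.0]        # placeholder feature vector
        return p

    pipeline = Pipeline().add_stage(extract_metadata).add_stage(extract_features)
    print(pipeline.run(Product(path="scene_001.tif")))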

    Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks

    Road extraction in remote sensing images is of great importance for a wide range of applications. Because of complex backgrounds and high object density, most existing methods fail to extract road networks that are both correct and complete. Moreover, they suffer from either insufficient training data or the high cost of manual annotation. To address these problems, we introduce a new model that applies structured domain adaptation for synthetic image generation and road segmentation. We incorporate a feature pyramid network into generative adversarial networks to minimize the difference between the source and target domains. A generator is trained to produce high-quality synthetic images, and the discriminator attempts to distinguish them from real ones. We also propose a feature pyramid network that improves the performance of the proposed model by extracting effective features from all the layers of the network for describing objects at different scales. In addition, a novel scale-wise architecture is introduced to learn from the multi-level feature maps and improve the semantics of the features. For optimization, the model is trained with a joint reconstruction loss function, which minimizes the difference between the fake images and the real ones. A wide range of experiments on three datasets demonstrates the superior performance of the proposed approach in terms of accuracy and efficiency. In particular, our model achieves a state-of-the-art 78.86 IoU on the Massachusetts dataset with 14.89M parameters and 86.78B FLOPs, i.e., 4x fewer FLOPs but higher accuracy (+3.47% IoU) than the top performer among the state-of-the-art approaches used in the evaluation.
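    A minimal PyTorch sketch of the kind of joint adversarial-plus-reconstruction objective the abstract mentions is shown below. The loss weighting factor lambda_rec and the way the networks are invoked are illustrative assumptions, not the paper's exact formulation.

    # Sketch of a joint reconstruction + adversarial loss; weights are assumed, not from the paper.
    import torch
    import torch.nn as nn

    adv_criterion = nn.BCEWithLogitsLoss()  # real/fake adversarial loss
    rec_criterion = nn.L1Loss()             # pixel-wise reconstruction loss
    lambda_rec = 10.0                       # assumed weight on the reconstruction term

    def generator_loss(discriminator, fake_images, real_images):
        """Generator tries to fool the discriminator while keeping synthetic images close to real ones."""
        pred_fake = discriminator(fake_images)
        adv_loss = adv_criterion(pred_fake, torch.ones_like(pred_fake))
        rec_loss = rec_criterion(fake_images, real_images)
        return adv_loss + lambda_rec * rec_loss

    def discriminator_loss(discriminator, fake_images, real_images):
        """Discriminator learns to separate real images from generated ones."""
        pred_real = discriminator(real_images)
        pred_fake = discriminator(fake_images.detach())
        return (adv_criterion(pred_real, torch.ones_like(pred_real)) +
                adv_criterion(pred_fake, torch.zeros_like(pred_fake)))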

    Face recognition for occluded face with mask region convolutional neural network and fully convolutional network: a literature review

    Face recognition technology has been used in many ways, such as in authentication and identification processes. The object studied here is a face image that lacks complete facial information (an occluded face), for example because the image was acquired from a different viewpoint or the face was photographed from a different angle. This case matters because occlusion degrades the detection and identification performance on the face image as a whole. Deep learning methods can be used to solve such face recognition problems. Previous research has focused mostly on resolution-based face detection and recognition, and on face detection itself. The mask region convolutional neural network (Mask R-CNN) method still has a deficiency in its segmentation stage, which reduces the accuracy of face identification for objects with incomplete facial information. The segmentation component used in Mask R-CNN is a fully convolutional network (FCN). In this research, FCN parameters are explored and modified using the pooling layers of the CNN backbone, Mask R-CNN is adapted for face identification, and the bounding box regressor is modified as well. The resulting modifications are expected to provide the best recommendations in terms of accuracy.
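    For orientation, the following sketch runs an off-the-shelf Mask R-CNN from torchvision to obtain bounding boxes and segmentation masks. This is only the stock baseline that the paper proposes to modify, not the authors' adapted architecture; the input tensor and score threshold are arbitrary choices.

    # Inference with a standard torchvision Mask R-CNN (stock baseline, not the modified model).
    import torch
    import torchvision

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)  # dummy RGB tensor in [0, 1]; in practice, a face image

    with torch.no_grad():
        predictions = model([image])[0]

    # Each detection has a box, a class label, a confidence score, and a soft per-pixel mask.
    keep = predictions["scores"] > 0.5   # arbitrary confidence threshold
    boxes = predictions["boxes"][keep]
    masks = predictions["masks"][keep]   # shape (N, 1, H, W), values in [0, 1]
    print(boxes.shape, masks.shape)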

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
    Comment: 145 pages with 32 figures
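    As a small illustration of two of the pre-processing steps the review covers, the NumPy sketch below standardizes each band of a scene and cuts it into fixed-size chips. The tile size and stride are illustrative choices, not recommendations taken from the review.

    # Per-band normalization and chipping of a large scene; parameter values are illustrative.
    import numpy as np

    def normalize_per_band(scene):
        """Standardize each band to zero mean and unit variance (bands-last layout)."""
        mean = scene.mean(axis=(0, 1), keepdims=True)
        std = scene.std(axis=(0, 1), keepdims=True) + 1e-8
        return (scene - mean) / std

    def chip(scene, size=256, stride=256):
        """Yield size x size chips; a stride smaller than size gives overlapping chips."""
        h, w = scene.shape[:2]
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                yield scene[y:y + size, x:x + size]

    scene = np.random.rand(1024, 1024, 4).astype(np.float32)  # e.g. a 4-band image
    chips = list(chip(normalize_per_band(scene)))
    print(len(chips), chips[0].shape)  # 16 chips of shape (256, 256, 4)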