1,042 research outputs found

    Two-path network with feedback connections for pan-sharpening in remote sensing

    High-resolution multi-spectral images are desired for applications in remote sensing. However, optical remote sensing satellites can provide multi-spectral images only at low resolutions. The technique of pan-sharpening aims to generate high-resolution multi-spectral (MS) images from a panchromatic (PAN) image and the low-resolution MS counterpart. Conventional deep learning based pan-sharpening methods process the panchromatic and the low-resolution images in a feedforward manner, where shallow layers cannot access useful information from deep layers. To make full use of deep features with strong representation ability, we propose a two-path network with feedback connections, through which the deep features can be rerouted to refine the shallow features in a feedback manner. Specifically, we leverage the structure of a recurrent neural network to pass the feedback information. In addition, a powerful feature extraction block with multiple projection pairs is designed to handle the feedback information and to produce powerful deep features. Extensive experimental results show the effectiveness of the proposed method.
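    The feedback idea in this abstract can be illustrated with a minimal numerical sketch. This is a hypothetical toy, not the authors' network: a "shallow" stage is re-run over several time steps, each time mixing in features produced by a "deeper" stage at the previous step, as in an unrolled recurrent network. All names (shallow_block, deep_block, U) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature maps": 1-D vectors stand in for shallow/deep features.
def shallow_block(x, W):
    return np.tanh(W @ x)

def deep_block(h, V):
    return np.tanh(V @ h)

dim = 8
W = rng.normal(scale=0.5, size=(dim, dim))   # shallow-stage weights
V = rng.normal(scale=0.5, size=(dim, dim))   # deep-stage weights
U = rng.normal(scale=0.5, size=(dim, dim))   # feedback projection

x = rng.normal(size=dim)       # fixed input (e.g. upsampled MS features)
feedback = np.zeros(dim)       # no deep information is available at step 0

outputs = []
for t in range(4):             # unrolled recurrent feedback steps
    h = shallow_block(x + U @ feedback, W)   # shallow features refined by feedback
    feedback = deep_block(h, V)              # deep features rerouted to the next step
    outputs.append(feedback)

# Later steps differ from step 0 because deep features flow back.
print(np.linalg.norm(outputs[-1] - outputs[0]))
```

    In the paper's actual network the blocks are convolutional and the loop is trained end-to-end, but the control flow — deep output of step t feeding the shallow input of step t+1 — is the same shape as this sketch.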

    Editorial for the Special Issue “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”

    This Special Issue was intended to probe the impact of advanced machine learning methods in remote sensing applications, including recent big data analysis, compression, multichannel, sensor, and prediction techniques. In principle, this edition of the Special Issue focuses on time series data processing for remote sensing applications, with special emphasis on advanced machine learning platforms, and is intended to provide a highly recognized international forum for presenting recent advances in time series remote sensing. After review, a total of eight papers were accepted for publication in this issue.

    Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead

    Panchromatic and multispectral image fusion, termed pan-sharpening, aims to merge the spatial and spectral information of the source images into a fused one that has higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images. It has been widely applied to image interpretation and pre-processing in various applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships among panchromatic and multispectral images. In recent years, the fast development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques. However, the field lacks a comprehensive overview of the recent advances brought by the rise of AI and DL. This paper provides a comprehensive review of pan-sharpening methods across four paradigms, i.e., component substitution, multiresolution analysis, degradation models, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined, covering assessment methods for both reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment, and summarize the development trends in these areas, which provide useful methodological practices for researchers and professionals. The survey aims to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.
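    Of the four paradigms named above, component substitution is the simplest to sketch. The following is a minimal, generic sketch (not any specific method from the survey): the low-resolution MS image is upsampled, an intensity component is synthesized as the band mean, the PAN image is histogram-matched to it, and the PAN-minus-intensity detail is injected into every band. The function name and the nearest-neighbour upsampling are simplifying assumptions.

```python
import numpy as np

def cs_pansharpen(ms_lr, pan):
    """Minimal component-substitution pan-sharpening sketch.

    ms_lr: (bands, h, w) low-resolution multispectral image
    pan:   (H, W) panchromatic image, with H = r*h and W = r*w
    """
    bands, h, w = ms_lr.shape
    H, W = pan.shape
    r = H // h
    # 1. Upsample MS to PAN resolution (nearest neighbour, for brevity).
    ms_up = ms_lr.repeat(r, axis=1).repeat(r, axis=2)
    # 2. Synthesize an intensity component as the band mean.
    intensity = ms_up.mean(axis=0)
    # 3. Histogram-match PAN to the intensity component (mean/std matching).
    pan_m = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    # 4. Substitute: inject the spatial detail (PAN - intensity) into every band.
    detail = pan_m - intensity
    return ms_up + detail[None, :, :]

rng = np.random.default_rng(0)
ms = rng.random((4, 16, 16))      # 4-band low-resolution MS
pan = rng.random((64, 64))        # 4x-resolution PAN
fused = cs_pansharpen(ms, pan)
print(fused.shape)                # (4, 64, 64)
```

    The other three paradigms trade this closed-form substitution for wavelet/pyramid detail extraction, an explicit degradation model to invert, or a learned network, respectively.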

    Applied Deep Learning: Case Studies in Computer Vision and Natural Language Processing

    Deep learning has proved successful for many computer vision and natural language processing applications. In this dissertation, three studies demonstrate the efficacy of deep learning models for computer vision and natural language processing. In the first study, an efficient deep learning model was proposed for seagrass scar detection in multispectral images, producing robust, accurate scar mappings. In the second study, an arithmetic deep learning model was developed to fuse multi-spectral images collected at different times with different resolutions to generate high-resolution images for downstream tasks including change detection, object detection, and land cover classification. In addition, a super-resolution deep model was implemented to further enhance remote sensing images. In the third study, a deep learning-based framework was proposed for fact-checking on social media to spot fake scientific news. The framework leveraged deep learning, information retrieval, and natural language processing techniques to retrieve scholarly papers pertinent to given scientific news and evaluate the credibility of the news.

    Multi-source Remote Sensing for Forest Characterization and Monitoring

    As a dominant terrestrial ecosystem of the Earth, forest environments play profound roles in ecology, biodiversity, resource utilization, and management, which highlights the significance of forest characterization and monitoring. Some forest parameters help track climate change and quantify the global carbon cycle, and therefore attract growing attention from various research communities. Compared with traditional in-situ methods, which involve expensive and time-consuming fieldwork, airborne and spaceborne remote sensors collect cost-efficient and consistent observations at global or regional scales and have proven to be an effective means of forest monitoring. With the looming paradigm shift toward data-intensive science and the development of remote sensors, remote sensing data of higher resolution and diversity have become the mainstream in data analysis and processing. However, significant heterogeneities in multi-source remote sensing data largely restrain their forest applications, urging the research community to come up with effective synergistic strategies. The work presented in this thesis contributes to the field by exploring the potential of Synthetic Aperture Radar (SAR), SAR Polarimetry (PolSAR), SAR Interferometry (InSAR), Polarimetric SAR Interferometry (PolInSAR), Light Detection and Ranging (LiDAR), and multispectral remote sensing for forest characterization and monitoring from three main aspects: forest height estimation, active fire detection, and burned area mapping. First, forest height inversion is demonstrated using airborne L-band dual-baseline repeat-pass PolInSAR data based on modified versions of the Random Motion over Ground (RMoG) model, where the scattering attenuation and wind-derived random motion are described under conditions of a homogeneous and a heterogeneous volume layer, respectively.
A boreal and a tropical forest test site are included in the experiment to explore the flexibility of the different models over different forest types, and based on that, a leveraging strategy is proposed to boost the accuracy of forest height estimation. The accuracy of model-based forest height inversion is limited by the discrepancy between the theoretical models and actual scenarios and exhibits a strong dependency on the system and scenario parameters. Hence, high-vertical-accuracy LiDAR samples are employed to assist the PolInSAR-based forest height estimation. This multi-source forest height estimation is reformulated as a pan-sharpening task aiming to generate forest heights with high spatial resolution and vertical accuracy from the synergy of sparse LiDAR-derived heights and the information embedded in the PolInSAR data. The process is realized by a specifically designed generative adversarial network (GAN), allowing high-accuracy forest height estimation that is less limited by theoretical models and system parameters. Related experiments are carried out over a boreal and a tropical forest to validate the flexibility of the method. Next, an automated active fire detection framework is proposed for medium-resolution multispectral remote sensing data. At the core of this framework is a deep-learning-based semantic segmentation model specifically designed for active fire detection. A dataset is constructed from open-access Sentinel-2 imagery for training and testing the deep-learning model. The developed framework enables automated Sentinel-2 data download, processing, and generation of active fire detection results from time and location information provided by the user. Performance is evaluated in terms of detection accuracy and processing efficiency.
The last part of this thesis explores whether coarse burned area products can be further improved through the synergy of multispectral, SAR, and InSAR features with higher spatial resolutions. A Siamese Self-Attention (SSA) classification is proposed for multi-sensor burned area mapping, and a multi-source dataset is constructed at the object level for training and testing. Results are analyzed across test sites, feature sources, and classification methods to assess the improvements achieved by the proposed method. All developed methods are validated with extensive processing of multi-source data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), the Land, Vegetation, and Ice Sensor (LVIS), PolSARproSim+, Sentinel-1, and Sentinel-2. I hope these studies constitute a substantial contribution to the forest applications of multi-source remote sensing.

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields; on the other hand, it targets machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. (Comment: 145 pages with 32 figures)
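    Two of the pre-processing steps this review covers, normalization and chipping, are simple enough to sketch concretely. The snippet below is one common, generic formulation (per-band min-max normalization and overlapping square chips), not the specific recipes from the paper; the function names, chip size, and stride are illustrative choices.

```python
import numpy as np

def normalize(img):
    """Per-band min-max normalization of an (H, W, C) image to [0, 1]."""
    mn = img.min(axis=(0, 1), keepdims=True)
    mx = img.max(axis=(0, 1), keepdims=True)
    return (img - mn) / (mx - mn + 1e-12)

def chip(img, size, stride):
    """Cut an (H, W, C) scene into overlapping square chips for training."""
    H, W, _ = img.shape
    chips = []
    for y in range(0, H - size + 1, stride):       # slide a window over rows
        for x in range(0, W - size + 1, stride):   # ... and over columns
            chips.append(img[y:y + size, x:x + size])
    return np.stack(chips)

rng = np.random.default_rng(0)
scene = rng.integers(0, 10000, size=(256, 256, 4)).astype(np.float32)
chips = chip(normalize(scene), size=64, stride=32)
print(chips.shape)   # (49, 64, 64, 4): 7 x 7 overlapping 64-pixel chips
```

    A stride smaller than the chip size gives overlapping chips, which is one simple way to increase the number of training samples from a limited scene before applying the augmentation techniques the review discusses.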

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers’ analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation, and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools that are sometimes merely for entertainment but quite often significantly increase our safety. In fact, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational complexity and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for novel approaches.