
    A pan-sharpening network using multi-resolution transformer and two-stage feature fusion

    Pan-sharpening is a fundamental task in remote sensing image processing: it generates a high-resolution multi-spectral image by fusing a low-resolution multi-spectral image with a high-resolution panchromatic image. Recently, deep learning techniques have shown competitive results in pan-sharpening. However, existing deep learning methods do not fully extract and exploit the diverse features of the multi-spectral and panchromatic images, which leads to information loss in the pan-sharpening process. To solve this problem, a novel pan-sharpening method based on a multi-resolution transformer and two-stage feature fusion is proposed in this article. Specifically, a transformer-based multi-resolution feature extractor is designed to extract diverse image features. Then, to fully exploit features with different content and characteristics, a two-stage feature fusion strategy is adopted. In the first stage, a multi-resolution fusion module fuses multi-spectral and panchromatic features at each scale. In the second stage, a shallow-deep fusion module fuses shallow and deep features for detail generation. Experiments on the QuickBird and WorldView-3 datasets demonstrate that the proposed method outperforms current state-of-the-art approaches both visually and quantitatively, with fewer parameters. Moreover, an ablation study and feature map analysis confirm the effectiveness of the transformer-based multi-resolution feature extractor and the two-stage fusion scheme.
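The transformer network described in the abstract cannot be reconstructed from this summary. As a hedged illustration of the underlying fusion problem, the sketch below implements a classical Brovey-transform baseline (explicitly not the paper's method): the multi-spectral bands, upsampled to the panchromatic grid, are rescaled so that their per-pixel intensity matches the panchromatic image.

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-8):
    """Classical Brovey-transform pan-sharpening baseline.

    ms_up: (H, W, B) low-resolution multi-spectral image, already
           upsampled to the panchromatic grid.
    pan:   (H, W) high-resolution panchromatic image.
    Returns a (H, W, B) sharpened image whose band mean equals pan.
    """
    intensity = ms_up.mean(axis=2, keepdims=True)   # per-pixel intensity
    ratio = pan[..., None] / (intensity + eps)      # injection ratio
    return ms_up * ratio
```

By construction, averaging the output over bands recovers the panchromatic image, which is the substitution property deep methods try to improve on without distorting the spectra.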

    Two-path network with feedback connections for pan-sharpening in remote sensing

    High-resolution multi-spectral images are desired for many applications in remote sensing. However, optical remote sensing satellites can only provide multi-spectral images at low resolution. Pan-sharpening aims to generate a high-resolution multi-spectral (MS) image from a panchromatic (PAN) image and its low-resolution MS counterpart. Conventional deep learning based pan-sharpening methods process the panchromatic and low-resolution images in a feedforward manner, so shallow layers cannot access useful information from deep layers. To make full use of deep features, which have strong representation ability, we propose a two-path network with feedback connections, through which deep features can be rerouted to refine the shallow features in a feedback manner. Specifically, we leverage the structure of a recurrent neural network to pass the feedback information. In addition, a powerful feature extraction block with multiple projection pairs is designed to handle the feedback information and to produce powerful deep features. Extensive experimental results show the effectiveness of the proposed method.
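The feedback mechanism described above can be sketched abstractly: deep features computed in a forward pass are routed back to correct the shallow representation over several recurrent steps. The toy loop below is an assumption-laden illustration of that control flow only (the names `feedback_refine`, `deep_op`, `alpha` are hypothetical and not from the paper), with `deep_op` standing in for the network's deep branch.

```python
import numpy as np

def feedback_refine(shallow, deep_op, steps=3, alpha=0.5):
    """Toy feedback loop: deep features refine shallow features iteratively.

    shallow: initial shallow feature array.
    deep_op: stand-in for the deep branch; maps features to deep features
             of the same shape.
    alpha:   feedback strength per recurrent step.
    """
    feat = shallow.copy()
    for _ in range(steps):
        deep = deep_op(feat)                  # forward pass to deep features
        feat = feat + alpha * (deep - feat)   # feedback correction
    return feat
```

With a constant `deep_op`, the shallow features converge geometrically toward the deep target, which is the intuition behind rerouting deep information backwards.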

    Deep Learning based data-fusion methods for remote sensing applications

    In recent years, an increasing number of remote sensing sensors have been launched into orbit around the Earth, continuously producing massive amounts of data that are useful for a large number of monitoring applications. Although modern optical sensors provide rich spectral information about the Earth's surface at very high resolution, they are weather-sensitive. SAR images, on the other hand, are available even in the presence of clouds, are almost weather-insensitive and usable day and night, but they do not provide rich spectral information and are severely affected by speckle "noise", which makes information extraction difficult. For these reasons it is worthwhile, and challenging, to fuse data provided by different sources and/or acquired at different times, in order to leverage their diversity and complementarity to retrieve the target information. Motivated by the success of deep learning methods in many image processing tasks, this thesis addresses several typical remote sensing data-fusion problems by means of suitably designed Convolutional Neural Networks.

    A Deep Learning Framework in Selected Remote Sensing Applications

    The main research topic is designing and implementing a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are more and more evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed and accuracy of deep learning. This emerging need inspired the choice of this topic. The conducted studies mainly focus on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their open access policy. The increasing interest gained by these satellites in research laboratories and applicative scenarios pushed us to use them in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial and very prominent in different contexts and kinds of monitoring, especially when the growing (or changing) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation. Both studies can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m resolution up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers and forests. In the second activity, the deep learning framework is applied to multispectral Normalized Difference Vegetation Index (NDVI) extraction and to semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data.
The Sentinel-1 SAR data is of great importance for the quantity of information it provides in the context of monitoring wetlands, rivers, forests and many other scenarios. In both cases the problem was addressed with deep learning techniques, and in both cases very lean architectures were used, demonstrating that high-level results can be obtained even without large computing power. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pan-sharpening and classification, because of several advantages: (i) the capability to approximate complex non-linear functions, (ii) the ease of training, which avoids time-consuming handcrafted filter design, and (iii) the parallel computational architecture. Even if a large amount of labelled data is required for training, the performance of CNNs motivated this architectural choice. In the Sentinel-1 and Sentinel-2 integration task, the problem of manually labelled data was faced and overcome with an approach based on integrating these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the design of a CNN-based solution distinguished by its computational lightness, with a consequent substantial saving of time compared to more complex state-of-the-art deep learning solutions.
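Of the quantities mentioned above, NDVI is the one with a standard closed form: NDVI = (NIR − Red) / (NIR + Red). A minimal sketch, assuming reflectance arrays (for Sentinel-2 the NIR and red channels are bands B8 and B4):

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    nir, red: arrays of reflectance values (same shape).
    eps:      guards against division by zero on dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

Values near +1 indicate dense vegetation (high NIR, low red reflectance), values near 0 bare soil, and negative values water or clouds.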

    Optimized Deep Belief Neural Network for Semantic Change Detection in Multi-Temporal Image

    Nowadays, a massive quantity of remote sensing images is acquired by numerous Earth observation platforms, and processing this wide range of data requires extracting the knowledge and information it contains. Automated techniques for handling multi-spectral images are therefore needed, in particular for change detection. Multi-spectral images are affected by plenty of corrupted data, such as noise and illumination variations. Several techniques have been used to deal with such issues, but they are not robust to noise and may miss feature correlations. Several machine learning based techniques have been introduced for change detection, but they are not effective at obtaining the relevant features. Moreover, only limited datasets are available on open-source platforms, which makes the development of new models difficult. In this work, an optimized deep belief neural network model is introduced for semantic change detection in multi-spectral images. Initially, noise removal and contrast normalization are applied to the input images. Then, to detect the semantic changes present in the images, the Semantic Change Detection Deep Belief Neural Network (SCD-DBN) is introduced. This research focuses on producing a change map that balances noise suppression with the preservation of region edges. The new change detection method can automatically create features for different images and improve the detection of changed regions. The proposed technique shows a lower missed-detection rate on the Semantic Change Detection dataset and better results than other approaches.
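The SCD-DBN itself cannot be reproduced from the abstract, but the change-map concept it outputs can be. The sketch below is a deliberately simple classical baseline, not the proposed network: it thresholds the magnitude of the difference image at mean + k·std to produce a binary change map (the function name and `k` parameter are illustrative assumptions).

```python
import numpy as np

def change_map(img_t1, img_t2, k=2.0):
    """Baseline change detection via image differencing.

    img_t1, img_t2: co-registered images of the same scene at two dates,
                    shape (H, W) or (H, W, B) for multi-spectral input.
    k:              threshold sensitivity in standard deviations.
    Returns a boolean (H, W) map, True where a change is flagged.
    """
    diff = np.abs(img_t2.astype(np.float64) - img_t1.astype(np.float64))
    if diff.ndim == 3:                       # multi-spectral: magnitude over bands
        diff = np.linalg.norm(diff, axis=2)
    threshold = diff.mean() + k * diff.std() # global adaptive threshold
    return diff > threshold
```

On a pair of images differing only in a small bright patch, the map flags exactly that patch; deep models like the one in the abstract aim to do the same while suppressing noise and respecting region edges.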

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools, sometimes only for entertainment, but quite often significantly increasing our safety. In fact, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.