11 research outputs found

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    No full text
    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR) and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture) and damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, oil spills at sea), and can give insight into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate change (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    Get PDF
    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyperellipsoidal surface instead of a hyperspherical surface. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered Pan. The regression of the squared MS bands, instead of the Euclidean radius used by HCS, makes the intensity component lie no longer on a hypersphere in the vector space of the MS samples, but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze, in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to those of some of the most advanced methods.
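    The core regression step described above can be sketched as follows; this is a minimal illustration (function and variable names are ours, not from the paper), assuming the interpolated MS bands and the lowpass-filtered Pan are already available as arrays:

    ```python
    import numpy as np

    def nonlinear_intensity(ms_interp, pan_lowpass):
        """Estimate a nonlinear (hyperellipsoidal) intensity component.

        Fits a multivariate linear regression between the squares of the
        interpolated MS bands and the squared lowpass-filtered Pan, then
        takes the square root, so the intensity lies on a hyperellipsoid
        rather than on a hypersphere as in plain HCS.

        ms_interp   : (H, W, B) interpolated multispectral bands
        pan_lowpass : (H, W)    lowpass-filtered panchromatic image
        """
        H, W, B = ms_interp.shape
        X = ms_interp.reshape(-1, B) ** 2          # squared MS bands
        y = pan_lowpass.reshape(-1) ** 2           # squared lowpass Pan
        A = np.column_stack([X, np.ones(len(X))])  # add a bias term
        w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares weights
        intensity_sq = np.clip(A @ w, 0.0, None)   # keep non-negative
        return np.sqrt(intensity_sq).reshape(H, W)
    ```

    In the actual method, the fused bands are then obtained through a multiplicative injection model after haze correction; the sketch above covers only the intensity-estimation step.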

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
    For many decades, researchers have been trying to make computers’ analysis of images as effective as the system of human vision is. For this purpose, many algorithms and systems have previously been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment, but quite often, they significantly increase our safety. In fact, the practical implementation of image processing algorithms is particularly wide. Moreover, the rapid growth of computational complexity and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches

    Sparse Representation Based Pansharpening Using Trained Dictionary

    Get PDF
    Sparse representation has been used to fuse high-resolution panchromatic (HRP) and low-resolution multispectral (LRM) images. However, the approach faces the difficulty that the dictionary is generated from the high-resolution multispectral (HRM) images, which are unknown. In this letter, a two-step method is proposed to train the dictionary from the HRP and LRM images. In the first step, coarse HRM images are obtained by an additive wavelet fusion method. The initial dictionary is composed of randomly sampled patches from the coarse HRM images. In the second step, a linear-constraint K-SVD method is designed to train the dictionary to improve its representation ability. Experimental results using QuickBird and IKONOS data indicate that the trained dictionary yields fusion products comparable to those of a raw-patch dictionary sampled from the HRM images.
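    The first step, building the initial dictionary from randomly sampled patches of a coarse HRM image, could look roughly like this (a sketch with illustrative names; the linear-constraint K-SVD refinement of step two is omitted):

    ```python
    import numpy as np

    def initial_dictionary(coarse_hrm, patch=8, n_atoms=256, seed=0):
        """Build an initial dictionary from randomly sampled patches of a
        coarse high-resolution MS band (step 1 of the two-step method,
        sketched; the K-SVD training of step 2 is not shown).

        coarse_hrm : (H, W) single coarse HRM band
        returns    : (patch*patch, n_atoms) dictionary with unit-norm atoms
        """
        rng = np.random.default_rng(seed)
        H, W = coarse_hrm.shape
        ys = rng.integers(0, H - patch + 1, size=n_atoms)
        xs = rng.integers(0, W - patch + 1, size=n_atoms)
        # each column is one vectorized patch
        atoms = np.stack([coarse_hrm[y:y + patch, x:x + patch].ravel()
                          for y, x in zip(ys, xs)], axis=1)
        norms = np.linalg.norm(atoms, axis=0)
        return atoms / np.where(norms > 0, norms, 1.0)
    ```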

    A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches

    Full text link
    Synthetic aperture radar (SAR) images are widely used in remote sensing. Interpreting SAR images can be challenging due to their intrinsic speckle noise and grayscale nature. To address this issue, SAR colorization has emerged as a research direction aiming to colorize grayscale SAR images while preserving the original spatial and radiometric information. However, this research field is still in its early stages, and many limitations can be highlighted. In this paper, we propose a full research line for supervised learning-based approaches to SAR colorization. Our approach includes a protocol for generating synthetic color SAR images, several baselines, and an effective method based on the conditional generative adversarial network (cGAN) for SAR colorization. We also propose numerical assessment metrics for the problem at hand. To our knowledge, this is the first attempt to propose a research line for SAR colorization that includes a protocol, a benchmark, and a complete performance evaluation. Our extensive tests demonstrate the effectiveness of our proposed cGAN-based network for SAR colorization. The code will be made publicly available.
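    One of the simplest regression-style baselines for this task could be a per-pixel linear map from the SAR gray value to the three color channels; the sketch below (illustrative names, not the paper's actual baselines or cGAN) assumes paired gray/color training samples are available:

    ```python
    import numpy as np

    def fit_colorization_regression(sar_gray, color_ref):
        """Fit a per-pixel linear map from SAR gray values to color channels
        (a simple regression baseline sketch, not the paper's method).

        sar_gray  : (N,)   gray SAR training samples
        color_ref : (N, 3) corresponding reference color samples
        returns   : (2, 3) weights [slope; intercept] per color channel
        """
        A = np.column_stack([sar_gray, np.ones_like(sar_gray)])
        W, *_ = np.linalg.lstsq(A, color_ref, rcond=None)
        return W

    def colorize(sar_gray, W):
        """Apply the fitted map to a gray image of any shape."""
        A = np.column_stack([sar_gray.ravel(), np.ones(sar_gray.size)])
        return (A @ W).reshape(*sar_gray.shape, 3)
    ```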

    GIS and Coastal Basemap Research Summer Internship with Clark Labs

    Get PDF
    Our summer internship with Clark Labs was part of a project lasting one and a half years, from October 2014 to the end of August 2016, during which we worked under the direction of Dr. Ronald Eastman. The main goal of the project was to create a coastline basemap for three countries in Southeast Asia. During this summer, our duties were to wrap up the remaining tasks, such as adjusting the classification images based on the accuracy-assessment results, combining all the images, producing a final map, and calculating the area of each category in each province of the three countries. Following the requirements of the M.S. GISDE program at Clark University, the following pages of this report describe in detail Clark Labs, the project we participated in, and our reflections on this internship

    Advanced Pre-Processing and Change-Detection Techniques for the Analysis of Multitemporal VHR Remote Sensing Images

    Get PDF
    Remote sensing images regularly acquired by satellite over the same geographical areas (multitemporal images) provide very important information on land-cover dynamics. In recent years, the ever-increasing availability of multitemporal very high geometrical resolution (VHR) remote sensing images (which have sub-metric resolution) has resulted in new, potentially relevant applications related to environmental monitoring and land-cover control and management. Most of these applications are associated with the analysis of dynamic phenomena (both anthropic and non-anthropic) that occur at different scales and result in changes on the Earth's surface. In this context, in order to adequately exploit the huge amount of data acquired by remote sensing satellites, it is mandatory to develop unsupervised and automatic techniques for an efficient and effective analysis of such multitemporal data. In the literature, several techniques have been developed for the automatic analysis of multitemporal medium/high-resolution data. However, these techniques are not effective when dealing with VHR images. The main reason is their inability both to exploit the high geometrical detail content of VHR data and to model the multiscale nature of the scene (and therefore of possible changes). In this framework, it is important to develop unsupervised change-detection (CD) methods able to automatically manage the large amount of information in VHR data, without the need for any prior information on the area under investigation. Even if these methods usually identify only the presence/absence of changes, without giving information about the kind of change that occurred, they are considered the most interesting from an operational perspective, as in most applications no multitemporal ground-truth information is available. 
    Considering the above-mentioned limitations, in this thesis we study the main problems related to multitemporal VHR images, with particular attention to registration noise (RN), i.e., the noise related to a non-perfect alignment of the multitemporal images under investigation. Then, on the basis of the results of the conducted analysis, we develop robust unsupervised and automatic change-detection methods. In particular, the following specific issues are addressed in this work:
    1. Analysis of the effects of registration noise in multitemporal VHR images and definition of a method for estimating the distribution of such noise, useful for defining:
       a. Change-detection techniques robust to RN; the proposed techniques are able to significantly reduce the false-alarm rate due to RN that standard CD techniques raise when dealing with VHR images.
       b. Effective registration methods; the proposed strategies are based on a multiscale analysis of the scene, which allows one to extract accurate control points for the registration of VHR images.
    2. Detection and discrimination of multiple changes in multitemporal images; these techniques allow one to overcome a limitation of existing unsupervised techniques, as they are able to identify and separate different kinds of change without any prior information on the study areas.
    3. Pre-processing techniques for optimizing change detection on VHR images; in particular, we evaluate the impact on the results of the CD process of:
       a. Image transformation techniques;
       b. Different strategies of image pansharpening applied to the original multitemporal images.
    For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions to the addressed problems are described in detail. 
    Finally, experimental results conducted on both simulated and real data are reported in order to show and confirm the validity of all the proposed methods.
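    The general idea of unsupervised CD on a co-registered image pair can be sketched as a difference image with an automatic threshold; this is a minimal stand-in (our own names and a simple mean-plus-k-sigma threshold), far simpler than the RN-robust techniques the thesis develops:

    ```python
    import numpy as np

    def change_map(img_t1, img_t2, k=2.0):
        """Unsupervised change-detection sketch: magnitude of the
        difference image, thresholded at mean + k*std.

        img_t1, img_t2 : (H, W, B) co-registered multitemporal images
        returns        : (H, W) boolean change mask
        """
        diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float),
                              axis=-1)                 # per-pixel magnitude
        thr = diff.mean() + k * diff.std()             # automatic threshold
        return diff > thr
    ```

    Note that this detects only the presence/absence of change; separating different kinds of change, as in point 2 above, requires analyzing the direction of the spectral change vectors as well.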

    Deep Learning based data-fusion methods for remote sensing applications

    Get PDF
    In recent years, an increasing number of remote sensing sensors have been launched into orbit around the Earth, continuously producing massive amounts of data that are useful for a large number of monitoring applications. Although modern optical sensors provide rich spectral information about the Earth's surface at very high resolution, they are weather-sensitive. On the other hand, SAR images are available even in the presence of clouds, are almost weather-insensitive as well as available day and night, but do not provide rich spectral information and are severely affected by speckle "noise" that makes information extraction difficult. For the above reasons, it is worthwhile, though challenging, to fuse data provided by different sources and/or acquired at different times, in order to leverage their diversity and complementarity to retrieve the target information. Motivated by the success of Deep Learning methods in many image processing tasks, this thesis addresses several typical remote sensing data-fusion problems by means of suitably designed Convolutional Neural Networks.
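    Before any network-specific design, the simplest way to combine co-registered optical and SAR data for a single CNN is early fusion: per-channel standardization followed by channel stacking. The sketch below (our own illustrative helper, not the thesis's architectures) shows that preprocessing step:

    ```python
    import numpy as np

    def early_fusion_stack(optical, sar):
        """Early-fusion sketch: per-band standardization followed by
        channel stacking of co-registered optical and SAR data.

        optical : (H, W, Bo) optical bands
        sar     : (H, W) or (H, W, Bs) SAR band(s)
        returns : (H, W, Bo+Bs) standardized fused cube
        """
        if sar.ndim == 2:
            sar = sar[..., None]
        cube = np.concatenate([optical, sar], axis=-1).astype(float)
        flat = cube.reshape(-1, cube.shape[-1])
        mean = flat.mean(axis=0)
        std = flat.std(axis=0)
        # standardize each channel; guard against constant bands
        return (cube - mean) / np.where(std > 0, std, 1.0)
    ```

    More elaborate alternatives, such as late fusion with separate branches per sensor, differ in where the modalities are merged inside the network rather than in this input preparation.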