
    Key Information Retrieval in Hyperspectral Imagery through Spatial-Spectral Data Fusion

    Hyperspectral (HS) imaging measures the radiance of the materials within each pixel area at a large number of contiguous spectral wavelength bands. Key spatial information, such as small targets and border lines, is difficult to detect precisely from HS data due to technological constraints. Image processing techniques for HS remote sensing are therefore an important field of research. This paper proposes a novel semisupervised spatial-spectral data fusion method for resolution enhancement of HS images that maximizes the spatial correlation of the endmembers (signatures of pure or purest materials in the scene) using a superresolution mapping (SRM) technique. The method adopts a linear mixture model and a fully constrained least squares spectral unmixing algorithm to obtain the endmember abundances (fractional images) of HS images. The extracted endmember distribution maps are then fused with the spatial information using a spatial-spectral correlation maximizing model and a learning-based SRM technique to exploit the subpixel-level data. The obtained results validate the reliability of the technique for key information retrieval. The proposed method is efficient and has a low computational cost, which makes it favorable for real-time applications.
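The linear mixture model and fully constrained least squares (FCLS) unmixing mentioned above can be sketched briefly. The snippet below is a minimal illustration, not the authors' implementation: it enforces the sum-to-one constraint via the standard row-augmentation trick on top of nonnegative least squares, and the endmember matrix and `delta` weight are assumptions for the example.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, delta=1e3):
    """Fully constrained least squares unmixing of one pixel.

    pixel      : (bands,) observed spectrum
    endmembers : (bands, n_endmembers) signature matrix
    delta      : weight enforcing the sum-to-one constraint via an
                 appended row (larger = stricter constraint)
    Returns nonnegative abundances that approximately sum to one.
    """
    # Augment the system with a heavily weighted sum-to-one row,
    # then solve under the nonnegativity constraint with NNLS.
    E = np.vstack([delta * np.ones(endmembers.shape[1]), endmembers])
    y = np.concatenate([[delta], pixel])
    abundances, _ = nnls(E, y)
    return abundances

# Synthetic check: mix three random signatures with known fractions.
rng = np.random.default_rng(0)
E = rng.random((50, 3))                     # 50 bands, 3 endmembers
a_true = np.array([0.5, 0.3, 0.2])
recovered = fcls_unmix(E @ a_true, E)
```

Applied per pixel over the whole cube, the recovered abundance vectors form the fractional images that the SRM stage then refines.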

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping methods has become further complicated. Although a plethora of review articles attempts to guide researchers in making an informed choice of landcover mapping methods, these articles either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, but they were addressed to a lesser extent in previous review articles on remote sensing classification. In this paper, we therefore present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units used for landcover mapping tasks, and moving on to challenges and solutions on three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of using multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D data representations to 4D data structures in which the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works is dedicated to multisource and multitemporal data fusion, but the fusion methods for different modalities have evolved along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion across these research communities and provides a thorough, discipline-specific starting point for researchers at all levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, supplying sufficient detail and references.
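The move from 2D/3D data to a 4D structure described above amounts to adding a time axis to the spatial-spectral cube. A minimal sketch, with the scene dimensions and acquisition count chosen purely for illustration:

```python
import numpy as np

# Hypothetical example: three revisits of the same (rows, cols, bands)
# scene, stacked along a new leading time axis so the 3D cubes become
# one 4D (time, rows, cols, bands) structure.
acquisitions = [np.random.default_rng(t).random((64, 64, 10))
                for t in range(3)]
cube_4d = np.stack(acquisitions, axis=0)   # shape (3, 64, 64, 10)

# The time axis enables temporal operations, e.g. per-pixel change
# between consecutive acquisitions:
change = np.diff(cube_4d, axis=0)          # shape (2, 64, 64, 10)
```

Real multitemporal stacks additionally require co-registration and radiometric normalization before such per-pixel temporal operations are meaningful.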

    Automated Synthetic Scene Generation

    First-principles, physics-based models help organizations developing new remote sensing instruments anticipate sensor performance by making it possible to create synthetic imagery for a proposed sensor before it is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the spectrally attributed, three-dimensional scenes on which the models are based in a timely and affordable fashion. Additionally, manual and semi-automated approaches to synthetic scene construction that rely on spectral libraries may not adequately capture the spectral variability of real-world sites, especially when the libraries consist of measurements made in other locations or in a lab. This dissertation presents a method to fully automate the generation of synthetic scenes when coincident lidar, Hyperspectral Imagery (HSI), and high-resolution imagery of a real-world site are available. The method, called the Lidar/HSI Direct (LHD) method, greatly reduces the time and manpower needed to generate a synthetic scene while also matching the modeled scene as closely as possible to the real-world site, both spatially and spectrally. Furthermore, the LHD method enables the generation of synthetic scenes of sites to which ground access is not available, offering the potential for improved military mission planning and an increased ability to fuse information from multiple modalities and look angles. The LHD method quickly and accurately generates three-dimensional scenes, providing the community with a tool to expand the library of synthetic scenes and therefore the potential applications of physics-based synthetic imagery modeling.

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications in Earth and planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Resolution Enhancement of Hyperspectral Data Exploiting Real Multi-Platform Data

    Multi-platform data introduce new possibilities for data fusion, as they make it possible to exploit remotely sensed images acquired by different combinations of sensors. This scenario is particularly interesting for the sharpening of hyperspectral (HS) images, given the limited availability of high-resolution (HR) sensors mounted onboard the same platform as the HS device. However, the differences in acquisition geometry and the nonsimultaneity of such observations introduce further difficulties whose effects have to be taken into account in the design of data fusion algorithms. In this study, we present the most widespread HS image sharpening techniques and assess their performance by testing them on real acquisitions from the Earth Observing-1 (EO-1) and WorldView-3 (WV3) satellites. We also highlight the difficulties arising from the use of multi-platform data and, at the same time, the benefits achievable through this approach.
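Among the widespread sharpening techniques the study surveys, component-substitution methods are the simplest to illustrate. The sketch below is a generic Brovey-style example, not any specific method from the study; the cube is assumed to be already co-registered and upsampled to the high-resolution grid, which is exactly the step that multi-platform geometry differences make nontrivial.

```python
import numpy as np

def brovey_sharpen(hs_up, pan, eps=1e-12):
    """Minimal Brovey-style component-substitution sharpening.

    hs_up : (rows, cols, bands) HS cube, already co-registered and
            upsampled to the panchromatic grid
    pan   : (rows, cols) high-resolution panchromatic image
    """
    intensity = hs_up.mean(axis=2)        # synthetic low-res intensity
    gain = pan / (intensity + eps)        # per-pixel injection gain
    return hs_up * gain[..., None]        # rescale every band

# Toy check: a flat 0.5 cube scaled to match a unit-brightness pan image.
hs = np.full((4, 4, 5), 0.5)
sharp = brovey_sharpen(hs, np.ones((4, 4)))
```

In the multi-platform setting the pan band also comes from a different sensor at a different time, so spectral mismatch between `pan` and the synthetic intensity is an additional error source that more refined methods model explicitly.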