
    Processing of Sliding Spotlight and TOPS SAR Data Using Baseband Azimuth Scaling

    This paper presents an efficient phase-preserving processor for focusing data acquired in the sliding spotlight and TOPS (Terrain Observation by Progressive Scans) imaging modes. These modes share a linear variation of the Doppler centroid along the azimuth dimension, caused by the steering of the antenna (either mechanically or electronically) throughout the data take. Existing approaches to azimuth processing can become inefficient because of the additional processing needed to overcome folding in the focused domain. In this paper, a new azimuth scaling approach is presented to perform the azimuth processing, whose kernel is exactly the same for the sliding spotlight and TOPS modes. The possibility of using the proposed approach to process ScanSAR data, as well as a discussion of staring spotlight, is also included. Simulations with point targets and real data acquired by TerraSAR-X in sliding spotlight and TOPS modes are used to validate the developed algorithm.
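
    Both modes reduce to the same linearly drifting Doppler centroid, which is what lets one azimuth scaling kernel serve both; a minimal sketch of that drift and of why naive compression folds in azimuth (every numerical value below, including the steering rate and PRF, is illustrative, not from the paper):

```python
import numpy as np

# Sketch of the linearly drifting Doppler centroid shared by sliding
# spotlight and TOPS; all parameter values here are assumptions.
v_s = 7600.0                 # platform velocity [m/s] (assumed)
wavelength = 0.031           # X-band wavelength [m] (assumed)
k_psi = np.deg2rad(0.2)      # antenna steering rate [rad/s] (assumed)
prf = 3500.0                 # pulse repetition frequency [Hz] (assumed)

t = np.linspace(-2.0, 2.0, 1001)          # azimuth time [s]

# Steering rotates the beam linearly in time, so the Doppler centroid
# drifts linearly: f_dc(t) ~ (2 * v_s / lambda) * k_psi * t
f_dc = (2.0 * v_s / wavelength) * k_psi * t

# When the centroid span exceeds the PRF, a naively compressed image
# folds in azimuth, which is the problem the azimuth scaling step avoids.
print(f"Doppler centroid span {np.ptp(f_dc):.0f} Hz vs PRF {prf:.0f} Hz")
```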

    MirrorSAR: An HRWS Add-On for Single-Pass Multi-Baseline SAR Interferometry

    This paper reports the Phase A study results of the interferometric extension of the High-Resolution Wide-Swath (HRWS) mission with three MirrorSAR satellites. According to the MirrorSAR concept, small, low-cost, transponder-like receive-only satellites without radar signal demodulation, digitization, memory storage, downlink, or synchronization are added to the planned German X-band HRWS mission. The MirrorSAR satellites fly a triple-helix orbit in close formation around the HRWS orbit and span multiple single-pass interferometric baselines. A comprehensive system engineering and performance analysis is provided that covers orbit formation, the MirrorLink, Doppler steering, antenna pattern and swath design, multistatic echo window timing, SAR performance, height performance, and coverage analysis. The overall interferometric system design analysis of Phase A is presented. The predicted performance of the global Digital Elevation Model (DEM) is improved by one order of magnitude compared with presently available global DEM products such as the TanDEM-X DEM.
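
    For context, the standard cross-track InSAR relation below (textbook material, not taken from the paper) shows why longer single-pass baselines improve DEM height performance: the height of ambiguity shrinks as the perpendicular baseline grows.

```latex
% Standard cross-track InSAR height-of-ambiguity relation (generic, not
% MirrorSAR-specific): lambda = wavelength, r = slant range, theta_i =
% incidence angle, B_perp = perpendicular baseline.
\[
  h_{\mathrm{amb}} \;=\; \frac{\lambda\, r \sin\theta_i}{p\, B_\perp},
  \qquad
  p = \begin{cases} 1 & \text{bistatic single-pass}\\
                    2 & \text{monostatic repeat-pass} \end{cases}
\]
```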

    Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning

    Extensive attention has been paid to enhancing the spatial resolution of hyperspectral (HS) images with the aid of multispectral (MS) images in remote sensing. However, the fusion of HS and MS images remains limited, particularly in large-scale scenes, owing to the scarce acquisition of HS images. Alternatively, we super-resolve MS images in the spectral domain by means of partially overlapped HS images, yielding a novel and promising topic: spectral superresolution (SSR) of MS imagery. This is a challenging and little-investigated task due to its high ill-posedness in inverse imaging. To this end, we develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS-MS dictionary pairs from the overlapped regions. J-SLoL infers and recovers the unknown hyperspectral signals over a larger coverage by sparse coding on the learned dictionary pair. We validate the SSR performance on three HS-MS datasets (two for classification and one for unmixing) in terms of reconstruction, classification, and unmixing by comparing with several existing state-of-the-art baselines, showing the effectiveness and superiority of the proposed J-SLoL algorithm. Furthermore, the codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the RS community.
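
    A minimal sketch of the coupled-dictionary idea described above, with random stand-in data and a plain SVD/OMP pipeline in place of the authors' joint sparse and low-rank solver (all sizes are assumptions):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Sketch of the J-SLoL idea (not the authors' exact algorithm): learn a
# coupled low-rank HS-MS dictionary pair on the overlapped region, then
# sparse-code new MS pixels and transfer the codes to the HS dictionary.
rng = np.random.default_rng(0)
B_h, B_m, N, k = 120, 8, 2000, 40        # band counts, pixels, dict size (assumed)
X_hs = rng.random((B_h, N))              # overlapped hyperspectral pixels
X_ms = rng.random((B_m, N))              # co-registered multispectral pixels

# Low-rank HS dictionary from a truncated SVD of the overlap
U, s, Vt = np.linalg.svd(X_hs, full_matrices=False)
D_hs = U[:, :k] * s[:k]                  # HS dictionary (B_h x k)
A = Vt[:k]                               # shared codes (k x N)

# Coupled MS dictionary: least-squares fit that reuses the same codes
D_ms = X_ms @ np.linalg.pinv(A)          # (B_m x k)

# Spectral super-resolution of an unseen MS pixel
y_ms = rng.random(B_m)
alpha = orthogonal_mp(D_ms, y_ms, n_nonzero_coefs=5)  # sparse code on MS dict
y_hs_hat = D_hs @ alpha                  # recovered hyperspectral signature
```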

    Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super-Resolution

    Due to the limitations of hyperspectral imaging systems, hyperspectral imagery (HSI) often suffers from poor spatial resolution, hampering many applications of the imagery. Hyperspectral super-resolution refers to fusing HSI and multispectral imagery (MSI) to generate an image with both high spatial and high spectral resolution. Recently, several new methods have been proposed to solve this fusion problem, and most of them assume that prior information on the point spread function (PSF) and spectral response function (SRF) is known. In practice, however, this information is often limited or unavailable. In this work, we propose HyCoNet, an unsupervised deep learning-based fusion method that solves the HSI-MSI fusion problem without prior PSF and SRF information. HyCoNet consists of three coupled autoencoder nets in which the HSI and MSI are unmixed into endmembers and abundances based on the linear unmixing model. Two special convolutional layers are designed to act as a bridge coordinating the three autoencoder nets, and the PSF and SRF parameters are learned adaptively in these two convolutional layers during training. Driven by the joint loss function, the proposed method is straightforward and easily implemented in an end-to-end training manner. The experiments performed in this study demonstrate that the proposed method performs well and produces robust results for different datasets and arbitrary PSFs and SRFs.
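
    A sketch of how the two bridge layers can be realized, under my reading of the abstract rather than the released code: the PSF as a band-shared learnable spatial blur and the SRF as a learnable 1x1 spectral convolution.

```python
import torch
import torch.nn as nn

# Sketch of the two "bridge" layers in the spirit of HyCoNet (my reading,
# not the authors' code): a learnable PSF and a learnable SRF.
class PSF(nn.Module):
    def __init__(self, bands: int, size: int = 7):
        super().__init__()
        # one spatial kernel applied identically to every band
        self.kernel = nn.Parameter(torch.full((1, 1, size, size), 1.0 / size**2))
        self.bands, self.size = bands, size

    def forward(self, x):                      # x: (N, bands, H, W)
        # softmax keeps the blur nonnegative and sum-to-one (a design choice)
        w = torch.softmax(self.kernel.flatten(), 0).view_as(self.kernel)
        w = w.expand(self.bands, 1, self.size, self.size)
        return nn.functional.conv2d(x, w, padding=self.size // 2, groups=self.bands)

class SRF(nn.Module):
    def __init__(self, hs_bands: int, ms_bands: int):
        super().__init__()
        # 1x1 conv: each MS band is a learned linear mix of HS bands
        self.mix = nn.Conv2d(hs_bands, ms_bands, kernel_size=1, bias=False)

    def forward(self, x):                      # x: (N, hs_bands, H, W)
        return self.mix(x)

hs = torch.rand(2, 100, 32, 32)
print(PSF(100)(hs).shape, SRF(100, 4)(hs).shape)  # spatial blur, spectral mix
```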

    Effects of foam on ocean surface microwave emission inferred from radiometric observations of reproducible breaking waves

    WindSat, the first satellite polarimetric microwave radiometer, and the NPOESS Conical Microwave Imager/Sounder both have as a key objective the retrieval of the ocean surface wind vector from radiometric brightness temperatures. Available observations and models to date show that the wind direction signal is only 1-3 K peak-to-peak at 19 and 37 GHz, much smaller than the wind speed signal. To obtain sufficient accuracy for reliable wind direction retrieval, uncertainties in geophysical modeling of the sea surface emission on the order of 0.2 K need to be removed. The surface roughness spectrum has been addressed by many studies, but the azimuthal signature of the microwave emission from breaking waves and foam has not been adequately addressed. Recently, a number of experiments have been conducted to quantify the increase in sea surface microwave emission due to foam. Measurements from the Floating Instrument Platform indicated that the increase in ocean surface emission due to breaking waves may depend on the incidence and azimuth angles of observation. The need to quantify this dependence motivated systematic measurement of the microwave emission from reproducible breaking waves as a function of incidence and azimuth angles. A number of empirical parameterizations of whitecap coverage with wind speed were used to estimate the increase in brightness temperatures measured by a satellite microwave radiometer due to wave breaking in the field of view. These results provide the first empirically based parameterization with wind speed of the effect of breaking waves and foam on satellite brightness temperatures at 10.8, 19, and 37 GHz.

    This work was supported in part by the Department of the Navy, Office of Naval Research under Awards N00014-00-1-0615 (ONR/YIP) and N00014-03-1-0044 (Space and Remote Sensing) to the University of Massachusetts Amherst, and N00014-00-1-0152 (Space and Remote Sensing) to the University of Washington. The National Polar-orbiting Operational Environmental Satellite System Integrated Program Office supported the Naval Research Laboratory's participation through Award NA02AANEG0338 and supported data analysis at Colorado State University and the University of Washington through Award NA05AANEG0153.
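
    To illustrate the final step, a widely used whitecap-coverage fit (Monahan and O'Muircheartaigh, 1980) can be combined with a foam-sea brightness contrast to estimate the brightness-temperature increase; the contrast value below is a placeholder, not a result from this work:

```python
import numpy as np

# Illustration of how a whitecap-coverage parameterization feeds into a
# radiometer brightness-temperature estimate. The coverage law is the
# Monahan & O'Muircheartaigh (1980) fit; the foam-sea contrast of 60 K
# is a placeholder, not a value from the paper.
def whitecap_coverage(u10):
    """Fractional whitecap coverage vs. 10-m wind speed [m/s]."""
    return 3.84e-6 * u10**3.41

def delta_tb(u10, dtb_foam=60.0):
    """Brightness-temperature increase [K] for foam-sea contrast dtb_foam."""
    return whitecap_coverage(u10) * dtb_foam

for u in (5.0, 10.0, 15.0):
    print(f"U10 = {u:4.1f} m/s  W = {whitecap_coverage(u):.4f}  "
          f"dTb ~ {delta_tb(u):.2f} K")
```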

    Graph Relation Network: Modeling Relations Between Scenes for Multilabel Remote-Sensing Image Classification and Retrieval

    Due to the proliferation of large-scale remote-sensing (RS) archives with multiple annotations, multilabel RS scene classification and retrieval are becoming increasingly popular. Although some recent deep learning-based methods are able to achieve promising results in this context, the lack of research on how to learn embedding spaces under the multilabel assumption often makes these models unable to preserve the complex semantic relations pervading aerial scenes, which is an important limitation in RS applications. To fill this gap, we propose a new graph relation network (GRN) for multilabel RS scene categorization. Our GRN is able to model the relations between samples (or scenes) by making use of a graph structure that is fed into network learning. For this purpose, we define a new loss function, called scalable neighbor discriminative loss with binary cross entropy (SNDL-BCE), that embeds the graph structures through the networks more effectively. The proposed approach can guide deep learning techniques (such as convolutional neural networks) to a more discriminative metric space, where semantically similar RS scenes are closely embedded and dissimilar images are separated, from a novel multilabel viewpoint. To achieve this goal, our GRN jointly maximizes a weighted leave-one-out K-nearest neighbors (KNN) score in the training set, where the weight matrix describes the contributions of the nearest neighbors associated with each RS image to its class decision, and the likelihood of the class discrimination in the multilabel scenario. An extensive experimental comparison, conducted on three multilabel RS scene data archives, validates the effectiveness of the proposed GRN in terms of KNN classification and image retrieval. The codes of this article will be made publicly available for reproducible research in the community.
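
    One plausible reading of the weighted leave-one-out KNN score with binary cross entropy, sketched as a differentiable soft-KNN loss (an illustration of the idea, not the exact SNDL-BCE formulation):

```python
import torch
import torch.nn.functional as F

# Sketch of a weighted leave-one-out soft-KNN score combined with binary
# cross entropy for multilabel scenes (my reading, not the paper's loss).
def soft_knn_bce(emb, labels, temperature=0.1):
    """emb: (N, d) scene embeddings; labels: (N, C) multi-hot labels."""
    d2 = torch.cdist(emb, emb).pow(2)                    # pairwise distances
    mask = torch.eye(emb.size(0), dtype=torch.bool)
    d2 = d2.masked_fill(mask, float("inf"))              # leave-one-out
    w = F.softmax(-d2 / temperature, dim=1)              # neighbor weights
    pred = w @ labels                                    # weighted vote in [0, 1]
    return F.binary_cross_entropy(pred.clamp(1e-6, 1 - 1e-6), labels)

emb = torch.randn(8, 16, requires_grad=True)
labels = (torch.rand(8, 5) > 0.5).float()
loss = soft_knn_bce(emb, labels)
loss.backward()                                          # trainable end to end
```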

    Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing

    Over the past decades, enormous efforts have been made to improve the performance of linear and nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize to various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, owing to poor data fitting and reconstruction ability and sensitivity to various SVs. Inspired by the powerful learning ability of deep learning (DL), we develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Beyond the standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the other unmixing network by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one) toward a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial-spectral unmixing. Experimental results on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The codes will be available from the website: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net.
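
    A minimal sketch of the two-stream weight-sharing idea, assuming a softmax parameterization for the nonnegativity and sum-to-one abundance constraints (my reading of the abstract, not the released EGU-Net code):

```python
import torch
import torch.nn as nn

# Sketch of the two-stream Siamese idea: one encoder is trained on
# (nearly) pure endmember pixels and literally shares its weights with
# the unmixing stream; softmax enforces the abundance constraints.
class Encoder(nn.Module):
    def __init__(self, bands, n_end):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(),
                                 nn.Linear(64, n_end))
    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)   # abundances: >= 0, sum to 1

bands, n_end = 200, 5
encoder = Encoder(bands, n_end)                     # shared by both streams
decoder = nn.Linear(n_end, bands, bias=False)       # weight columns play the
                                                    # role of endmember spectra
pure = torch.rand(16, bands)                        # (nearly) pure pixels
mixed = torch.rand(64, bands)                       # ordinary mixed pixels
loss = nn.functional.mse_loss(decoder(encoder(pure)), pure) \
     + nn.functional.mse_loss(decoder(encoder(mixed)), mixed)
loss.backward()   # one optimizer step would update the shared weights once
```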

    More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification

    Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS), and have garnered growing interest owing to recent advances in deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to the limitation of information diversity. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multimodality learning (MML), cross-modality learning (CML), which is widespread in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified in our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments related to the settings of MML and CML are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
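
    As one concrete instance of the fusion strategies discussed, a generic feature-level ("middle") fusion of two CNN streams might look as follows; the channel counts and concatenation head are illustrative, not the paper's exact architectures:

```python
import torch
import torch.nn as nn

# Sketch of feature-level ("middle") fusion of two modality streams, one
# of the strategies such a framework unifies (a generic sketch).
def stream(in_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class MiddleFusion(nn.Module):
    def __init__(self, ch_a, ch_b, n_classes):
        super().__init__()
        self.a, self.b = stream(ch_a), stream(ch_b)
        self.head = nn.Linear(64 + 64, n_classes)   # fuse by concatenation

    def forward(self, xa, xb):
        return self.head(torch.cat([self.a(xa), self.b(xb)], dim=1))

model = MiddleFusion(ch_a=144, ch_b=1, n_classes=15)  # e.g., HSI + SAR patches
logits = model(torch.rand(4, 144, 7, 7), torch.rand(4, 1, 7, 7))
print(logits.shape)  # torch.Size([4, 15])
```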

    Blind hyperspectral unmixing using an Extended Linear Mixing Model to address spectral variability

    Spectral unmixing is one of the main research topics in hyperspectral imaging. It can be formulated as a source separation problem whose goal is to recover, for every pixel in the image, the spectral signatures of the materials present in the observed scene (called endmembers) as well as their relative proportions (called fractional abundances). A linear mixture model is often used for its simplicity and ease of use, but it implicitly assumes that a single spectrum can be completely representative of a material. In many scenarios, however, this assumption does not hold, since many factors, such as illumination conditions and the intrinsic variability of the endmembers, induce modifications of the spectral signatures of the materials. In this paper, we propose an algorithm to unmix hyperspectral data using the recently proposed Extended Linear Mixing Model. The proposed approach allows a pixelwise, spatially coherent local variation of the endmembers, leading to scaled versions of reference endmembers. We also show that classic nonnegative least squares, as well as other approaches to tackling spectral variability, can be interpreted in the framework of this model. The results of the proposed algorithm on two synthetic datasets, one of which simulates the effect of topography on the measured reflectance through physical modelling, and on two real datasets show that the proposed technique outperforms other methods aimed at addressing spectral variability and can provide an accurate estimation of endmember variability across the scene thanks to the estimated scaling factors.
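
    For reference, the Extended Linear Mixing Model referred to above writes each pixel as an abundance-weighted sum of per-pixel scaled endmembers:

```latex
% Extended Linear Mixing Model: pixel x_n mixes scaled versions
% psi_{n,p} s_p of the reference endmembers s_p, so the scaling factors
% capture, e.g., illumination-induced variability.
\[
  \mathbf{x}_n \;=\; \sum_{p=1}^{P} a_{n,p}\,\psi_{n,p}\,\mathbf{s}_p + \mathbf{e}_n,
  \qquad a_{n,p} \ge 0,\quad \sum_{p=1}^{P} a_{n,p} = 1,
\]
% with abundances a_{n,p}, per-pixel, per-endmember scalings psi_{n,p} > 0,
% and noise e_n; the classic linear mixture model is the special case
% psi_{n,p} = 1 for all n, p.
```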