30 research outputs found

    Processing of Sliding Spotlight and TOPS SAR Data Using Baseband Azimuth Scaling

    This paper presents an efficient phase-preserving processor for focusing data acquired in the sliding spotlight and TOPS (Terrain Observation by Progressive Scans) imaging modes. Both modes share a linear variation of the Doppler centroid along the azimuth dimension, caused by the steering of the antenna (either mechanically or electronically) throughout the data take. Existing approaches to the azimuth processing can become inefficient because of the additional processing required to overcome folding in the focused domain. In this paper, a new azimuth scaling approach is presented to perform the azimuth processing, whose kernel is exactly the same for the sliding spotlight and TOPS modes. The possibility of using the proposed approach to process ScanSAR data, as well as a discussion concerning the staring spotlight mode, is also included. Simulations with point targets and real data acquired by TerraSAR-X in the sliding spotlight and TOPS modes are used to validate the developed algorithm.
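
    To make the deramping idea concrete, the following minimal sketch (with illustrative parameters, not values from the paper) shows how multiplying the azimuth signal by a conjugate quadratic phase removes the linear Doppler-centroid drift caused by the antenna steering; the full baseband azimuth scaling kernel involves additional range-dependent terms that are omitted here.

```python
import numpy as np

# Hypothetical parameters (not from the paper): illustrate azimuth deramping,
# the core idea behind baseband azimuth scaling for TOPS/sliding spotlight.
prf = 3000.0          # pulse repetition frequency [Hz]
n_az = 4096           # number of azimuth samples
k_rot = 1500.0        # Doppler-centroid rate from antenna steering [Hz/s]

t = (np.arange(n_az) - n_az / 2) / prf      # azimuth slow time [s]

# Simulated azimuth signal of a point target whose Doppler centroid
# drifts linearly at k_rot Hz/s because of the antenna steering.
sig = np.exp(1j * np.pi * k_rot * t**2)

# Deramping: multiply by the conjugate quadratic phase so the spectrum
# folds back to baseband and fits inside the PRF.
deramp = np.exp(-1j * np.pi * k_rot * t**2)
sig_bb = sig * deramp

spectrum = np.fft.fftshift(np.fft.fft(sig_bb))
print("peak bin after deramping:", np.argmax(np.abs(spectrum)) - n_az // 2)
```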

    New instrument concepts for ocean sensing: analysis of the PAU-radiometer

    Sea surface salinity can be remotely measured by means of L-band microwave radiometry. However, the brightness temperature also depends on the sea surface temperature and on the sea state, which is probably today one of the driving factors in the salinity retrieval error budgets of the European Space Agency's Soil Moisture and Ocean Salinity (SMOS) mission and the NASA/Comisión Nacional de Actividades Espaciales Aquarius/SAC-D mission. This paper describes the Passive Advanced Unit (PAU) for ocean monitoring. PAU combines three different sensors in a single instrument: an L-band radiometer with digital beamforming (DBF) (PAU-RAD) to measure the brightness temperature of the sea at different incidence angles simultaneously, a Global Positioning System (GPS) reflectometer [PAU-GNSS/R, a reflectometer of Global Navigation Satellite System signals (GNSS-R)], also with DBF, to measure the sea state from delay-Doppler maps, and two infrared radiometers to provide sea surface temperature estimates. The key characteristic of this instrument is that PAU-RAD and PAU-GNSS/R completely share the RF/IF front end and the analog-to-digital converters. Since the antenna signal cannot be chopped as in a Dicke radiometer without losing track of the GPS-reflected signal, a new radiometer topology has been devised that makes use of two receiving chains and a correlator. This topology has the additional advantage that PAU-RAD and PAU-GNSS/R can be operated continuously and simultaneously to perform the sea-state corrections of the brightness temperature. This paper presents the main characteristics of the different PAU subsystems and analyzes the PAU radiometer concept in detail.
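
    The correlation-radiometer idea can be illustrated with a toy simulation (all values hypothetical): because the receiver noise in the two chains is uncorrelated, the cross-correlation of their outputs converges to the common antenna contribution, so no Dicke chopping of the antenna signal is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) noise temperatures, in kelvin.
T_ant, T_rec = 150.0, 300.0
n = 1_000_000   # number of integrated samples

# The antenna signal is common to both chains; each chain adds its own
# independent receiver noise. Variances are proportional to temperature.
ant = rng.normal(0.0, np.sqrt(T_ant), n)
ch1 = ant + rng.normal(0.0, np.sqrt(T_rec), n)
ch2 = ant + rng.normal(0.0, np.sqrt(T_rec), n)

# Cross-correlation cancels the uncorrelated receiver noise, so the
# antenna temperature can be estimated without Dicke switching.
T_est = np.mean(ch1 * ch2)
print(f"estimated T_ant = {T_est:.1f} K (true {T_ant} K)")
```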

    MirrorSAR: An HRWS Add-On for Single-Pass Multi-Baseline SAR Interferometry

    This paper reports the Phase A study results of the interferometric extension of the High-Resolution Wide-Swath (HRWS) mission with three MirrorSAR satellites. According to the MirrorSAR concept, small, low-cost, transponder-like receive-only satellites without radar signal demodulation, digitization, memory storage, downlink, or synchronization are added to the planned German X-band HRWS mission. The MirrorSAR satellites fly a triple-helix orbit in close formation around the HRWS orbit and span multiple single-pass interferometric baselines. A comprehensive system engineering and performance analysis is provided that includes orbit formation, the MirrorLink, Doppler steering, antenna pattern and swath design, multistatic echo window timing, SAR performance, height performance, and coverage analysis. The overall interferometric system design analysis of Phase A is presented. The predicted performance of the global Digital Elevation Model (DEM) is improved by one order of magnitude compared to presently available global DEM products such as the TanDEM-X DEM.
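
    As context for the height performance analysis, here is a back-of-the-envelope sketch using the standard single-pass InSAR relations; the parameters are illustrative, not the actual HRWS/MirrorSAR design values.

```python
import numpy as np

# Standard InSAR height-sensitivity relations with made-up parameters.
wavelength = 0.031      # X-band wavelength [m]
slant_range = 700e3     # slant range [m]
incidence = np.deg2rad(35.0)
b_perp = 2000.0         # perpendicular baseline [m]
sigma_phi = 0.1         # interferometric phase noise [rad]

# Height of ambiguity for a bistatic single-pass acquisition (a monostatic
# repeat-pass system would carry an extra factor of 2 in the denominator).
h_amb = wavelength * slant_range * np.sin(incidence) / b_perp

# Phase noise maps to height noise through the height of ambiguity.
sigma_h = h_amb / (2 * np.pi) * sigma_phi
print(f"height of ambiguity: {h_amb:.2f} m, height error: {sigma_h:.3f} m")
```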

    Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning

    Extensive attention has been paid to enhancing the spatial resolution of hyperspectral (HS) images with the aid of multispectral (MS) images in remote sensing. However, the ability to fuse HS and MS images remains limited, particularly in large-scale scenes, due to the limited acquisition of HS images. Alternatively, we super-resolve MS images in the spectral domain by means of partially overlapped HS images, yielding a novel and promising topic: spectral superresolution (SSR) of MS imagery. This is a challenging and less-investigated task due to its high ill-posedness as an inverse imaging problem. To this end, we develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS-MS dictionary pairs from the overlapped regions. J-SLoL infers and recovers the unknown hyperspectral signals over a larger coverage by sparse coding on the learned dictionary pair. We validate the SSR performance on three HS-MS datasets (two for classification and one for unmixing) in terms of reconstruction, classification, and unmixing accuracy by comparing against several existing state-of-the-art baselines, showing the effectiveness and superiority of the proposed J-SLoL algorithm. Furthermore, the codes and datasets are available at https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the RS community.
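
    The core recovery step, sparse coding on a coupled dictionary pair, can be sketched on synthetic data as follows. This is a toy illustration of the pipeline only, not the authors' J-SLoL implementation (which additionally enforces the low-rank structure), and the reported error is for the synthetic setup, not a claim about accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions (illustrative): 100 HS bands, 4 MS bands, 50 dictionary
# atoms, 500 overlapped training pixels.
n_hs, n_ms, n_atoms, n_pix = 100, 4, 50, 500

# Synthetic overlapped region: HS and MS observations of the same pixels,
# coupled through a toy spectral response.
codes_true = np.maximum(rng.normal(0, 1, (n_atoms, n_pix)) - 1.0, 0.0)
D_hs = np.abs(rng.normal(0, 1, (n_hs, n_atoms)))
srf = np.abs(rng.normal(0, 1, (n_ms, n_hs)))       # toy spectral response
D_ms = srf @ D_hs                                  # coupled MS dictionary
X_hs, X_ms = D_hs @ codes_true, D_ms @ codes_true

def ista(D, X, lam=0.1, n_iter=200):
    """Plain ISTA sparse coding: argmin_A ||X - D A||^2 + lam * ||A||_1."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = A - (D.T @ (D @ A - X)) / L            # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # shrinkage
    return A

# SSR step: sparse-code new MS pixels on D_ms, reconstruct HS with D_hs.
X_ms_new = D_ms @ codes_true[:, :20]
A = ista(D_ms, X_ms_new)
X_hs_rec = D_hs @ A
err = np.linalg.norm(X_hs_rec - X_hs[:, :20]) / np.linalg.norm(X_hs[:, :20])
print(f"relative HS reconstruction error on toy data: {err:.3f}")
```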

    Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super-Resolution

    Due to the limitations of hyperspectral imaging systems, hyperspectral imagery (HSI) often suffers from poor spatial resolution, thus hampering many of its applications. Hyperspectral super-resolution refers to fusing an HSI with multispectral imagery (MSI) to generate an image with both high spatial and high spectral resolution. Recently, several new methods have been proposed to solve this fusion problem, and most of them assume that prior information on the point spread function (PSF) and the spectral response function (SRF) is known. In practice, however, this information is often limited or unavailable. In this work, an unsupervised deep learning-based fusion method, HyCoNet, is proposed that solves the HSI-MSI fusion problem without prior PSF and SRF information. HyCoNet consists of three coupled autoencoder networks in which the HSI and MSI are unmixed into endmembers and abundances based on the linear unmixing model. Two special convolutional layers are designed to act as a bridge coordinating the three autoencoder networks, and the PSF and SRF parameters are learned adaptively in these two convolutional layers during training. Driven by the joint loss function, the proposed method is straightforward and easily implemented in an end-to-end training manner. The experiments performed in the study demonstrate that the proposed method performs well and produces robust results for different datasets and arbitrary PSFs and SRFs.
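
    One way to picture the two learnable "bridge" layers is sketched below in PyTorch (my reading of the description, not the authors' code): the PSF as a learnable depthwise spatial blur and the SRF as a learnable 1x1 convolution across bands.

```python
import torch
import torch.nn as nn

# Toy band counts and kernel size (assumed, for illustration only).
n_hs_bands, n_ms_bands, psf_size = 31, 3, 9

# PSF: depthwise spatial blur, one learnable kernel per band.
psf = nn.Conv2d(n_hs_bands, n_hs_bands, psf_size, padding=psf_size // 2,
                groups=n_hs_bands, bias=False)
# SRF: learnable 1x1 convolution mapping HS bands to MS bands.
srf = nn.Conv2d(n_hs_bands, n_ms_bands, 1, bias=False)

hs = torch.rand(1, n_hs_bands, 64, 64)   # toy high-spectral-res input
ms_pred = srf(hs)                        # spectral degradation -> MS-like
lr_hs_pred = psf(hs)[..., ::4, ::4]      # blur + downsample -> low-res HS

print(ms_pred.shape, lr_hs_pred.shape)
```

    During training, both convolutions would be updated by the joint reconstruction loss, which is how the PSF and SRF parameters can be learned adaptively rather than assumed known.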

    Effects of foam on ocean surface microwave emission inferred from radiometric observations of reproducible breaking waves

    WindSat, the first satellite polarimetric microwave radiometer, and the NPOESS Conical Microwave Imager/Sounder both have as a key objective the retrieval of the ocean surface wind vector from radiometric brightness temperatures. Available observations and models to date show that the wind direction signal is only 1-3 K peak to peak at 19 and 37 GHz, much smaller than the wind speed signal. To obtain sufficient accuracy for reliable wind direction retrieval, uncertainties in geophysical modeling of the sea surface emission on the order of 0.2 K need to be removed. The surface roughness spectrum has been addressed by many studies, but the azimuthal signature of the microwave emission from breaking waves and foam has not been adequately addressed. Recently, a number of experiments have been conducted to quantify the increase in sea surface microwave emission due to foam. Measurements from the Floating Instrument Platform indicated that the increase in ocean surface emission due to breaking waves may depend on the incidence and azimuth angles of observation. The need to quantify this dependence motivated systematic measurement of the microwave emission from reproducible breaking waves as a function of incidence and azimuth angles. A number of empirical parameterizations of whitecap coverage with wind speed were used to estimate the increase in brightness temperatures measured by a satellite microwave radiometer due to wave breaking in the field of view. These results provide the first empirically based parameterization with wind speed of the effect of breaking waves and foam on satellite brightness temperatures at 10.8, 19, and 37 GHz. This work was supported in part by the Department of the Navy, Office of Naval Research under Awards N00014-00-1-0615 (ONR/YIP) and N00014-03-1-0044 (Space and Remote Sensing) to the University of Massachusetts Amherst, and N00014-00-1-0152 (Space and Remote Sensing) to the University of Washington. The National Polar-orbiting Operational Environmental Satellite System Integrated Program Office supported the Naval Research Laboratory's participation through Award NA02AANEG0338 and supported data analysis at Colorado State University and the University of Washington through Award NA05AANEG0153.
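
    For orientation, the sketch below applies one widely used empirical whitecap-coverage law (Monahan and O'Muircheartaigh, 1980) together with a simple linear-mixing estimate of the foam-induced brightness-temperature increase. The paper's own parameterizations are not reproduced here, and the foam emissivity increase is an assumed placeholder value.

```python
# Empirical whitecap coverage vs 10-m wind speed, and a linear-mixing
# estimate of the resulting brightness-temperature increase.
def whitecap_coverage(u10):
    """Fractional whitecap coverage from 10-m wind speed [m/s]
    (Monahan and O'Muircheartaigh, 1980)."""
    return 3.84e-6 * u10 ** 3.41

def delta_tb(u10, t_s=290.0, de_foam=0.2):
    """Foam-induced Tb increase [K]: coverage times an assumed foam
    emissivity increase de_foam over a sea surface at t_s kelvin."""
    return whitecap_coverage(u10) * de_foam * t_s

for u in (5.0, 10.0, 15.0):
    print(f"U10 = {u:4.1f} m/s -> W = {whitecap_coverage(u):.4f}, "
          f"dTb ~ {delta_tb(u):.2f} K")
```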

    Graph Relation Network: Modeling Relations Between Scenes for Multilabel Remote-Sensing Image Classification and Retrieval

    Due to the proliferation of large-scale remote-sensing (RS) archives with multiple annotations, multilabel RS scene classification and retrieval are becoming increasingly popular. Although some recent deep learning-based methods achieve promising results in this context, the lack of research on how to learn embedding spaces under the multilabel assumption often leaves these models unable to preserve the complex semantic relations pervading aerial scenes, which is an important limitation in RS applications. To fill this gap, we propose a new graph relation network (GRN) for multilabel RS scene categorization. Our GRN models the relations between samples (or scenes) by making use of a graph structure which is fed into network learning. For this purpose, we define a new loss function, called scalable neighbor discriminative loss with binary cross entropy (SNDL-BCE), that embeds the graph structure through the network more effectively. The proposed approach can guide deep learning techniques (such as convolutional neural networks) toward a more discriminative metric space, where semantically similar RS scenes are closely embedded and dissimilar images are separated, from a novel multilabel viewpoint. To achieve this goal, our GRN jointly maximizes a weighted leave-one-out K-nearest neighbors (KNN) score in the training set, where the weight matrix describes the contribution of each nearest neighbor to the class decision of its associated RS image, and the likelihood of class discrimination in the multilabel scenario. An extensive experimental comparison, conducted on three multilabel RS scene data archives, validates the effectiveness of the proposed GRN in terms of KNN classification and image retrieval. The codes of this article will be made publicly available for reproducible research in the community.
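
    A rough sketch of the loss construction described above (a paraphrase in PyTorch, not the authors' implementation): a soft leave-one-out KNN agreement term on the embeddings combined with binary cross-entropy on the classifier logits.

```python
import torch
import torch.nn.functional as F

def sndl_bce_sketch(emb, logits, labels, temperature=0.1, alpha=0.5):
    """emb: (N, D) embeddings; logits: (N, C); labels: (N, C) multi-hot."""
    sim = emb @ emb.t() / temperature                # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                # leave-one-out
    w = F.softmax(sim, dim=1)                        # neighbor weights
    neigh_labels = w @ labels                        # weighted label vote
    knn_term = F.binary_cross_entropy(
        neigh_labels.clamp(1e-6, 1 - 1e-6), labels)  # neighbors agree?
    bce_term = F.binary_cross_entropy_with_logits(logits, labels)
    return alpha * knn_term + (1 - alpha) * bce_term

emb = F.normalize(torch.randn(8, 16), dim=1)         # toy batch
logits, labels = torch.randn(8, 5), torch.randint(0, 2, (8, 5)).float()
print(sndl_bce_sketch(emb, logits, labels).item())
```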

    Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing

    Over the past decades, enormous efforts have been made to improve the performance of linear and nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize across various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, owing to weak data fitting and reconstruction and to sensitivity to the various SVs. Inspired by the powerful learning ability of deep learning (DL), we develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Beyond the standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the other unmixing network, by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one), yielding a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial-spectral unmixing. Experimental results on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The codes will be available from the website: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net.
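
    A minimal sketch of the physically motivated constraints mentioned above (assumed details, not the released EGU-Net code): abundances produced by a softmax (nonnegative, sum-to-one) and a bias-free linear decoder whose clamped-nonnegative weights play the role of endmember signatures. The two-stream weight sharing of the full method is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUnmixNet(nn.Module):
    """Toy autoencoder unmixer with abundance and endmember constraints."""
    def __init__(self, n_bands=100, n_endmembers=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                     nn.Linear(64, n_endmembers))
        # Decoder weights act as endmember signatures (linear mixing model).
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        abund = F.softmax(self.encoder(x), dim=-1)   # >= 0 and sums to one
        recon = self.decoder(abund)                  # linear mixing model
        return abund, recon

net = TinyUnmixNet()
with torch.no_grad():                                # keep endmembers >= 0
    net.decoder.weight.clamp_(min=0.0)
x = torch.rand(32, 100)                              # toy pixel batch
abund, recon = net(x)
print(abund.sum(dim=-1)[:3], recon.shape)
```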