
    Fusion of Bayesian Maximum Entropy Spectral Estimation and Variational Analysis Methods for Enhanced Radar Imaging

    A new fused Bayesian maximum entropy–variational analysis (BMEVA) method for enhanced radar/synthetic aperture radar (SAR) imaging is proposed, as required for high-resolution remote sensing (RS) imagery. The variational analysis (VA) paradigm is adapted by incorporating preservation of the image gradient flow norm into the overall reconstruction problem, so as to control the geometrical properties of the desired solution. The metric structure of the corresponding image representation and solution spaces is adjusted to incorporate the VA image formalism and RS model-level considerations, in particular system calibration data and total image gradient flow power constraints. The BMEVA method aggregates the image model and system-level considerations into a fused spatial spectrum pattern (SSP) reconstruction strategy, providing a regularized balance between noise suppression and the gained spatial resolution, with the geometrical properties of the resulting solution controlled by the VA formalism. The efficiency of the developed enhanced radar imaging approach is illustrated through numerical simulations with real-world SAR imagery.
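
    The abstract does not state the explicit cost functional. Purely as an illustration of how a fused maximum-entropy–variational objective of this kind could be written (all symbols below, including the weights \lambda and \mu and the target level c, are assumptions rather than the paper's notation):

        \hat{\mathbf{b}} \;=\; \arg\max_{\mathbf{b}\,\ge\,0}
        \Big[ -\textstyle\sum_k b_k \ln b_k
        \;-\; \tfrac{\lambda}{2}\,\lVert \mathbf{y} - \mathbf{F}\mathbf{b} \rVert^2
        \;-\; \tfrac{\mu}{2}\,\big( \lVert \nabla \mathbf{b} \rVert^2 - c \big)^2 \Big],

    where \mathbf{y} is the measured data, \mathbf{F} the signal formation operator, and the last term softly holds the total image gradient flow norm at a prescribed level c, in the spirit of the gradient flow norm preservation described above.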

    Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    We consider a problem of high-resolution array radar/SAR imaging, formalized in terms of a nonlinear ill-posed inverse problem: nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene, observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, a Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators, with the reproducing kernel structures adapted to the metric of that solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the minimum variance distortionless response (MVDR), amplitude and phase estimation (APES), and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is illustrated via numerical simulations.
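
    The DYED framework is presented as a generalization of classical adaptive estimators such as MVDR. As a reference point only (not the DYED algorithm itself), a minimal NumPy sketch of the conventional MVDR power spectrum estimate over candidate steering directions might look as follows; the uniform-linear-array geometry and the diagonal-loading term eps are our assumptions:

        import numpy as np

        def mvdr_spectrum(snapshots, angles_rad, d=0.5, eps=1e-3):
            """MVDR power estimate P(theta) = 1 / (a^H R^{-1} a).

            snapshots  : (num_sensors, num_snapshots) complex sensor data
            angles_rad : candidate arrival angles in radians
            d          : element spacing in wavelengths (assumed uniform linear array)
            eps        : diagonal loading for numerical robustness (assumption)
            """
            m, n = snapshots.shape
            R = snapshots @ snapshots.conj().T / n           # sample covariance
            R += eps * np.trace(R).real / m * np.eye(m)      # diagonal loading
            R_inv = np.linalg.inv(R)
            k = np.arange(m)
            power = np.empty(len(angles_rad))
            for i, theta in enumerate(angles_rad):
                a = np.exp(2j * np.pi * d * k * np.sin(theta))   # steering vector
                power[i] = 1.0 / np.real(a.conj() @ R_inv @ a)
            return power

    The DEED-VA techniques described above would replace this fixed closed-form estimator with an iterative, windowed, projection-constrained reconstruction.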

    Unified Bayesian-Experiment Design Regularization Technique for High-Resolution of the Remote Sensing Imagery

    In this paper, the problem of estimating the power spatial spectrum pattern (SSP) of wavefield sources distributed in the environment, from a finite set of measurements of remotely sensed radar complex data signals, is cast in the framework of the Bayesian minimum risk (MR) paradigm unified with the experiment design (ED) regularization technique. The fused MR-ED regularization of the ill-posed nonlinear inverse problem of SSP reconstruction is performed by incorporating the projection-regularization ED constraints into the MR estimation strategy. Simulation examples are included to illustrate the efficiency of the proposed unified MR-ED technique.
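
    The abstract specifies the strategy but not an explicit update rule. A generic way projection-regularized iterative estimators of this family are often written (illustrative notation only, not taken from the paper):

        \hat{\mathbf{b}}^{(i+1)} \;=\; \mathcal{P}_{\mathcal{C}}\!\left( \hat{\mathbf{b}}^{(i)} \;+\; \tau\,\mathbf{K}\big( \mathbf{y} - \mathbf{F}\hat{\mathbf{b}}^{(i)} \big) \right),

    where \mathcal{P}_{\mathcal{C}} projects onto the convex set \mathcal{C} encoding the ED constraints, \mathbf{K} is a solution operator derived from the Bayesian MR strategy, \mathbf{F} is the signal formation operator, and \tau is a step size; all of these symbols are assumptions.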

    Advancing Land Cover Mapping in Remote Sensing with Deep Learning

    Automatic mapping of land cover in remote sensing data plays an increasingly significant role in several earth observation (EO) applications, such as sustainable development, autonomous agriculture, and urban planning. Due to the complexity of the real ground surface and environment, accurate classification of land cover types faces many challenges. This thesis provides novel deep learning-based solutions to land cover mapping challenges, such as how to deal with intricate objects and imbalanced classes in multi-spectral and high-spatial-resolution remote sensing data.

    The first work presents a novel model to learn richer multi-scale and global contextual representations in very high-resolution remote sensing images, namely the dense dilated convolutions' merging (DDCM) network. The proposed method is lightweight, flexible, and extendable, so that it can be used as a simple yet effective encoder and decoder module to address different classification and semantic mapping challenges. Intensive experiments on different benchmark remote sensing datasets demonstrate that the proposed method achieves better performance while consuming far fewer computational resources than other published methods.

    Next, a novel graph model is developed for capturing long-range pixel dependencies in remote sensing images to improve land cover mapping. One key component of the method is the self-constructing graph (SCG) module, which can effectively construct global context relations (a latent graph structure) without requiring prior knowledge graphs. The proposed SCG-based models achieved competitive performance on different representative remote sensing datasets with faster training and lower computational cost compared to strong baseline models.

    The third work introduces a new framework, namely the multi-view self-constructing graph (MSCG) network, which extends the vanilla SCG model to capture multi-view context representations with rotation invariance for improved segmentation performance. Meanwhile, a novel adaptive class weighting loss function is developed to alleviate the class imbalance commonly found in EO datasets for semantic segmentation. Experiments on benchmark data demonstrate that the proposed framework is computationally efficient and robust, producing improved segmentation results for imbalanced classes.

    To address the key challenges in multi-modal land cover mapping of remote sensing data, namely 'what', 'how' and 'where' to effectively fuse multi-source features and to efficiently learn optimal joint representations of different modalities, the last work presents a compact and scalable multi-modal deep learning framework (MultiModNet) based on two novel modules: the pyramid attention fusion module and the gated fusion unit. The proposed MultiModNet outperforms the strong baselines on two representative remote sensing datasets with fewer parameters and at a lower computational cost. Extensive ablation studies also validate the effectiveness and flexibility of the framework.
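
    To make the class-weighting idea concrete: one simple way to weight a segmentation loss adaptively per batch is to use inverse class frequencies. The sketch below is an illustrative PyTorch take on that idea, not the exact adaptive class weighting loss from the thesis; the function name, the eps smoothing, and the ignore_index convention are our assumptions:

        import torch
        import torch.nn.functional as F

        def adaptive_weighted_ce(logits, targets, num_classes, ignore_index=-1, eps=1.0):
            """Cross-entropy with per-batch inverse-frequency class weights.

            logits  : (N, C, H, W) raw class scores
            targets : (N, H, W) integer labels; ignore_index marks unlabeled pixels
            """
            valid = targets != ignore_index
            counts = torch.bincount(targets[valid], minlength=num_classes).float()
            # rarer classes in this batch receive proportionally larger weights
            weights = valid.sum() / (num_classes * (counts + eps))
            return F.cross_entropy(logits, targets, weight=weights,
                                   ignore_index=ignore_index)

    Recomputing the weights every batch lets the loss track the local class distribution instead of relying on a fixed global prior.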

    Pansharpening of images acquired with color filter arrays

    In remote sensing, a common scenario involves the simultaneous acquisition of a panchromatic (PAN) image, a broad-band high-spatial-resolution image, and a multispectral (MS) image, which is composed of several spectral bands but at lower spatial resolution. The two sensors, mounted on the same platform, can be found in several very high-spatial-resolution optical remote sensing satellites for Earth observation (e.g., QuickBird, WorldView and SPOT).

    In this work we investigate an alternative acquisition strategy, which combines the information from both images into a single-band image with the same number of pixels as the PAN. This operation significantly reduces the burden of data downlink by achieving a fixed compression ratio of 1/(1 + b/ρ²) compared to the conventional acquisition modes. Here, b and ρ denote the number of distinct bands in the MS image and the scale ratio between the PAN and MS, respectively (e.g., b = ρ = 4, as in many commercial high-spatial-resolution satellites).

    Many strategies can be conceived to generate such a compressed image from a given set of PAN and MS sources. A simple option, presented here, is based on an application of color filter array (CFA) theory. Specifically, the value of each pixel in the spatial support of the synthetic image is taken from the corresponding sample either in the PAN or in a given band of the MS upsampled to the size of the PAN; the choice is deterministic and made according to a custom mask. Several works in the literature propose methods to construct masks able to preserve as much spectral content as possible for conventional RGB images. However, those results are not directly applicable to the case at hand, since it deals with i) images with different spatial resolutions, ii) potentially more than three spectral bands, and iii) in general, different radiometric dynamics across bands. A tentative approach to address these issues is presented in this work.

    The compressed image resulting from the proposed acquisition strategy is processed to generate an image featuring both the spatial resolution of the PAN and the spectral bands of the MS. This final product allows a direct comparison with the result of any standard pansharpening algorithm; the latter refers to a specific instance of data fusion (i.e., fusion of a PAN and an MS image), which differs from our scenario since there both sources are separately taken as input. In our setting, the fusion step performed at the ground segment jointly involves a fusion and a reconstruction problem (the latter also known as demosaicing in the CFA literature). We propose to address this problem with a variational approach.

    We present preliminary results of the proposed scheme on real remotely sensed images, tested over two different datasets acquired by the QuickBird and GeoEye-1 platforms, which show superior performance compared to applying a basic radiometric compression algorithm to both sources before performing a pansharpening protocol. The validation of the final products in both scenarios allows one to visually and numerically appreciate the tradeoff between the compression of the source data and the quality loss suffered.
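
    For a sense of the numbers, here is a small NumPy sketch of the compression ratio above and of the deterministic mask-based mosaicking step it describes. The function names and the integer mask encoding are our assumptions, and no actual mask design (the hard part addressed by the paper) is prescribed:

        import numpy as np

        def compression_ratio(b, rho):
            """Size of the single mosaicked band relative to PAN + MS together."""
            return 1.0 / (1.0 + b / rho**2)

        def mosaic(pan, ms_up, mask):
            """Build the single-band compressed image from a deterministic mask.

            pan   : (H, W) panchromatic image
            ms_up : (b, H, W) MS bands upsampled to the PAN grid
            mask  : (H, W) ints; 0 keeps the PAN sample, k >= 1 takes band k-1
            """
            out = pan.copy()
            for k in range(ms_up.shape[0]):
                sel = mask == k + 1
                out[sel] = ms_up[k][sel]
            return out

        print(compression_ratio(b=4, rho=4))  # 0.8: the mosaic is 80% of the raw payload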