1,267 research outputs found

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing, and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization, and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
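    As a concrete illustration of the third case study's wavelet-domain processing, the following is a minimal sketch assuming two coregistered SAR images held as NumPy arrays. A simple maximum-absolute-coefficient rule stands in for the paper's multiscale Kalman filter, which is not reproduced here; only the overall decompose-fuse-reconstruct structure is shown.

    ```python
    # Hedged sketch: wavelet-domain fusion of two coregistered SAR images.
    # The max-abs detail rule is an illustrative stand-in for the paper's
    # multiscale Kalman filter.
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
        ca = pywt.wavedec2(img_a, wavelet, level=levels)
        cb = pywt.wavedec2(img_b, wavelet, level=levels)
        fused = [(ca[0] + cb[0]) / 2.0]        # average coarse approximations
        for da, db in zip(ca[1:], cb[1:]):     # detail bands: keep stronger response
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)
    ```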

    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, QuickBird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, and roads, and therefore may result in misinterpretation and misuse of classification products. Hyperspectral data are another solution, but their low spatial resolution (compared to multispectral data) restricts their use in many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach to fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a principled way of combining multisource data following consensus theory. The classification is not constrained by the limitations of dimensionality, and the computational complexity depends primarily on the dimensionality-reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results from WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport facilities, forest, roads, and railroads.
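    The three INFOFUSE stages can be mimicked with off-the-shelf components. The sketch below is a hedged approximation using scikit-learn stand-ins; the actual framework's feature extractor, cluster model, and aggregator are not specified in the abstract, so the PCA features, k-means clustering, and MLP aggregator chosen here are illustrative assumptions only.

    ```python
    # Hedged sketch of an INFOFUSE-like pipeline: fission -> clustering -> aggregation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    def infofuse_like(pixels, labels, n_clusters=32):
        """pixels: (n_samples, n_features) stacked SAR/optical/DSM values per pixel."""
        # Information fission / dimensionality reduction (PCA is an assumed stand-in).
        feats = PCA(n_components=min(10, pixels.shape[1])).fit_transform(pixels)
        # Representation on a finite domain via unsupervised clustering.
        cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        onehot = np.eye(n_clusters)[cluster_ids]
        # Aggregation step (the paper allows Bayesian or neural network; MLP here).
        return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(onehot, labels)
    ```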

    Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

    The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often favored because they compare local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective for multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures moves the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility of processing a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration than state-of-the-art approaches.
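    The area-based step after domain adaptation is simple enough to sketch. In the snippet below, `translate` is a hypothetical pretrained cGAN generator mapping the optical image into the SAR domain; the shift parameters are then estimated with SciPy's COBYLA optimizer by minimizing a sum-of-squared-differences (ℓ2-type) criterion, a plausible reading of the paper's setup rather than its exact implementation.

    ```python
    # Hedged sketch: l2 (negative-SSD) registration driven by COBYLA, applied
    # after a hypothetical cGAN generator `translate` maps optical -> SAR domain.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from scipy.optimize import minimize

    def register(optical, sar, translate):
        fake_sar = translate(optical)              # optical image in the SAR domain

        def ssd(p):                                # p = (dx, dy) candidate shift
            moved = nd_shift(fake_sar, p, order=1, mode="nearest")
            return float(np.sum((moved - sar) ** 2))  # lower SSD = higher l2 similarity

        res = minimize(ssd, x0=[0.0, 0.0], method="COBYLA",
                       options={"rhobeg": 5.0, "maxiter": 200})
        return res.x                               # estimated (dx, dy)
    ```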

    High-resolution optical and SAR image fusion for building database updating

    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. For both steps, relevant features are extracted from the images in the neighborhood of the considered object. The removal or inclusion of an object in the DB is based on a score obtained by fusing the features in the framework of Dempster–Shafer evidence theory.
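    The fusion step relies on Dempster's rule of combination. The sketch below applies the rule to two sources scoring the frame {building, not building} plus full ignorance; the feature-to-mass mapping used in the paper is not reproduced, so the input masses are illustrative.

    ```python
    # Hedged sketch of Dempster's rule over the frame {B, N} with ignorance BN.
    def dempster_combine(m1, m2):
        frame = {"B": {"B"}, "N": {"N"}, "BN": {"B", "N"}}
        combined = {k: 0.0 for k in frame}
        conflict = 0.0
        for a, sa in frame.items():
            for b, sb in frame.items():
                inter = sa & sb
                if not inter:
                    conflict += m1[a] * m2[b]      # fully conflicting evidence
                elif inter == {"B"}:
                    combined["B"] += m1[a] * m2[b]
                elif inter == {"N"}:
                    combined["N"] += m1[a] * m2[b]
                else:
                    combined["BN"] += m1[a] * m2[b]
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # e.g. dempster_combine({"B": 0.6, "N": 0.1, "BN": 0.3},
    #                       {"B": 0.5, "N": 0.2, "BN": 0.3})
    ```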

    Potential of multisensor data and strategies for data acquisition and analysis

    Registration and simultaneous analysis of multisensor images are useful because the multiple data sets can be compressed through image processing techniques to facilitate interpretation. This also allows integration of other spatial data sets. Techniques being developed to analyze multisensor images involve comparison of image data with a library of attributes based on physical properties measured by each sensor. This makes it possible to characterize geologic units based on their similarity to the library attributes, as well as to discriminate among them. Several studies can provide information on ways to optimize multisensor remote sensing. Continued analyses of the Death Valley and San Rafael Swell data sets can provide insight into tradeoffs in the spectral and spatial resolutions of the various sensors used to obtain the coregistered data sets. These include imagery from LANDSAT, SEASAT, HCMM, SIR-A, 11-channel VIS-NIR, thermal inertia images, and aircraft L- and X-band radar.
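    The library-matching idea can be sketched as a nearest-signature classifier. The attribute vectors and the Euclidean metric below are illustrative assumptions; the report does not specify the matching rule.

    ```python
    # Hedged sketch: assign a geologic unit by nearest library signature.
    import numpy as np

    def classify_by_library(pixel_attrs, library):
        """pixel_attrs: attribute vector stacked from the coregistered sensors;
        library: {unit_name: reference attribute vector}."""
        names = list(library)
        refs = np.stack([library[n] for n in names])
        dists = np.linalg.norm(refs - pixel_attrs, axis=1)  # distance to each unit
        return names[int(np.argmin(dists))]                 # most similar unit
    ```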

    A comparison of multisensor integration methods for land cover classification in the Brazilian Amazon.

    Many data fusion methods are available, but it is poorly understood which fusion method is suitable for integrating Landsat Thematic Mapper (TM) and radar data for land cover classification. This research explores the integration of Landsat TM and radar images (i.e., ALOS PALSAR L-band and RADARSAT-2 C-band) for land cover classification in a moist tropical region of the Brazilian Amazon. Different data fusion methods were explored: principal component analysis (PCA), the wavelet-merging technique (Wavelet), high-pass filter resolution-merging (HPF), and normalized multiplication (NMM). Land cover classification was conducted with maximum likelihood classification based on different scenarios. This research indicates that individual radar data yield much poorer land cover classifications than TM data, and that PALSAR L-band data perform relatively better than RADARSAT-2 C-band data. Compared to the TM data, Wavelet multisensor fusion improved overall classification accuracy by 3.3%–5.7%, HPF performed similarly, but PCA and NMM reduced overall classification accuracy by 5.1%–6.1% and 7.6%–12.7%, respectively. Different polarization options, such as HH and HV, work similarly when used in data fusion. This research underscores the importance of selecting a suitable data fusion method that can preserve spectral fidelity while improving spatial resolution.
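    Of the four methods compared, HPF resolution merging is the most compact to sketch: high-frequency detail from the finer radar image is injected into an upsampled TM band. The 5-pixel kernel and unit gain below are illustrative choices, not the study's settings, and the radar image is assumed to be an integer-factor multiple of the TM band in size.

    ```python
    # Hedged sketch of high-pass filter (HPF) resolution merging.
    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def hpf_merge(tm_band, radar, gain=1.0):
        # Upsample the TM band to the radar grid (shapes assumed commensurate).
        factors = np.array(radar.shape) / np.array(tm_band.shape)
        up = zoom(tm_band, factors, order=1)
        detail = radar - uniform_filter(radar, size=5)  # high-pass component
        return up + gain * detail
    ```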

    Machine Learning and Pattern Recognition Methods for Remote Sensing Image Registration and Fusion

    In the last decade, the remote sensing world has evolved dramatically. New types of sensors, each collecting data with possibly different modalities, have been designed, developed, and deployed. Moreover, new missions have been planned and launched, aimed not only at collecting data on the Earth's surface, but also at acquiring planetary data in support of the study of the whole Solar System. Such a variety of technologies highlights the need for automatic methods able to effectively exploit all the available information. In recent years, a great deal of effort has been put into the design and development of advanced data fusion methods able to extract and make use of all the information available from as many complementary information sources as possible. The goal of this thesis is to present novel machine learning and pattern recognition methodologies designed to support the exploitation of diverse sources of information, such as multisensor, multimodal, or multiresolution imagery. In this context, image registration plays a major role, as it allows bringing two or more digital images into precise alignment for analysis and comparison. Here, image registration is tackled using both feature-based and area-based strategies. In the former case, the features of interest are extracted using a stochastic geometry model based on marked point processes, while, in the latter case, information-theoretic functionals and the domain adaptation capabilities of generative adversarial networks are exploited. In addition, multisensor image registration is also applied in a large-scale scenario by introducing a tiling-based strategy aimed at minimizing the computational burden, which is usually heavy in the multisensor case due to the need for information-theoretic similarity measures. Moreover, automatic change detection with multiresolution and multimodality imagery is addressed via a novel Markovian framework based on a linear mixture model and on an ad hoc multimodal energy function minimized using graph cuts or belief propagation methods. The statistics of the data at the various spatial scales are modelled through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images, at the finest resolution, representing the data that would have been collected had all the sensors worked at that resolution. All these methodologies have been experimentally evaluated on different datasets, with particular focus on the trade-off between the achievable performance and the demands in terms of computational resources. Moreover, such methods are compared with state-of-the-art solutions and analyzed in terms of future developments, giving insights into possible future lines of research in this field.
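    One building block from the change detection work, the generalized Gaussian modelling of multiscale statistics, is easy to sketch with SciPy's generalized normal distribution. Fitting it to a difference image's residuals is an illustrative assumption about the workflow, not the thesis's exact estimator.

    ```python
    # Hedged sketch: fit a generalized Gaussian to image residuals.
    import numpy as np
    from scipy.stats import gennorm

    def fit_ggd(residuals):
        beta, loc, scale = gennorm.fit(np.asarray(residuals).ravel())
        return beta, loc, scale   # beta = 2 recovers the Gaussian special case
    ```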