
    Application of Generalized Partial Volume Estimation for Mutual Information based Registration of High Resolution SAR and Optical Imagery

    Mutual information (MI) has proven its effectiveness for automated multimodal image registration in numerous remote sensing applications such as image fusion. We analyze MI performance with respect to joint histogram bin size and the employed joint histogramming technique. The effect of generalized partial volume estimation (GPVE) using B-spline kernels with different histogram bin sizes on MI performance has been thoroughly explored for the registration of high-resolution SAR (TerraSAR-X) and optical (IKONOS-2) satellite images. Our experiments highlight the possibility of inconsistent MI behavior across joint histogram bin sizes, which is reduced as the order of the B-spline kernel employed in GPVE increases. In general, reducing the bin size and/or increasing the B-spline order has a smoothing effect on MI surfaces, and even the lowest-order B-spline with a suitable histogram bin size can achieve the same pixel-level accuracy that the higher-order kernels achieve more consistently.
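    As a rough illustration of the quantity this abstract analyzes, the sketch below computes MI from a plain 2-D joint histogram at several bin sizes. Plain histogramming stands in for the paper's GPVE kernels, and the image data is synthetic; only the bin-size dependence of the estimate is being illustrated.

```python
import numpy as np

def mutual_information(a, b, bins):
    """Estimate MI between two images from their joint histogram.

    Plain 2-D histogramming stands in for generalized partial volume
    estimation (GPVE); `bins` controls the joint histogram bin size
    whose influence on MI the study explores.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# MI of an image with itself reduces to its entropy, so it grows
# with the number of bins -- one source of bin-size sensitivity
for bins in (8, 16, 32):
    print(bins, mutual_information(img, img, bins))
```

    Comparing these values against the MI of two independent images makes the histogram-estimation bias at small sample counts directly visible.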

    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, QuickBird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion between materials such as different roofs, pavements, and roads, and therefore to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but its low spatial resolution (compared to multispectral data) restricts its use in many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach to fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, which consists of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a principled way of combining multisource data following consensus theory. The classification is not affected by the limitations of dimensionality, and the computational complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison with classification results from WorldView-2 multispectral data (8 spectral bands) is provided, and a numerical evaluation against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport facilities, forest, roads, railroads, etc.
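    The three INFOFUSE stages described above (fission, finite-alphabet representation, aggregation) can be sketched on toy data. This is not the authors' implementation: the two "sensors", the class layout, and the use of a naive Bayes aggregator over k-means cluster labels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(x, k, iters=20):
    """Unsupervised clustering: map sensor features to a finite alphabet."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# two hypothetical sensors observing the same 200 pixels of 2 scene classes
truth = rng.integers(0, 2, 200)
optical = truth[:, None] + 0.3 * rng.standard_normal((200, 2))
sar = 2.0 * truth[:, None] + 0.5 * rng.standard_normal((200, 1))

# fission + finite-domain representation: per-sensor cluster alphabets
alpha_opt = kmeans(optical, 4)
alpha_sar = kmeans(sar, 4)

# aggregation (naive Bayes over the alphabets), trained on half the pixels
train = np.arange(100)
def likelihood(alpha, cls):
    """P(symbol | class) with Laplace smoothing."""
    counts = np.bincount(alpha[train][truth[train] == cls], minlength=4) + 1
    return counts / counts.sum()

post = np.ones((200, 2))
for alpha in (alpha_opt, alpha_sar):
    for c in (0, 1):
        post[:, c] *= likelihood(alpha, c)[alpha]
pred = post.argmax(axis=1)
acc = (pred[100:] == truth[100:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

    Because the aggregation operates on small discrete alphabets rather than raw features, its cost is governed by the clustering step, mirroring the complexity claim in the abstract.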

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse Earth observation data. As these data differ in resolution, accuracy, coverage, and spectral imaging capability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as that provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: this is the pre-acceptance version; for the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
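    To make the matching step concrete, the sketch below runs winner-takes-all SAD block matching along epipolar rows of a synthetic rectified pair. It is a deliberately simplified stand-in for the semi-global matching the paper evaluates: it keeps the per-pixel cost volume but drops SGM's path-wise smoothness aggregation, and the test scene is artificial.

```python
import numpy as np

def _box_sum(a, r):
    """Sum of `a` over a (2r+1)x(2r+1) window around each pixel."""
    p = np.pad(a, r, mode='edge')
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def block_match(left, right, max_disp, win=2):
    """Winner-takes-all SAD block matching along epipolar rows.

    Simplified stand-in for semi-global matching: no smoothness
    penalties, and the pair is assumed already epipolar-rectified.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    best = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.full((h, w), 1e3)          # large cost where no overlap
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        cost = _box_sum(diff, win)
        upd = cost < best
        disp[upd], best[upd] = d, cost[upd]
    return disp

# synthetic check: the right view is the left view shifted by 4 pixels
rng = np.random.default_rng(0)
scene = rng.random((40, 60))
left, right = scene[:, :50], scene[:, 4:54]
print((block_match(left, right, max_disp=8)[:, 12:48] == 4).mean())
```

    Replacing the winner-takes-all decision with cost aggregation along several scan directions is what turns this baseline into SGM proper.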

    Remote Sensing for International Stability and Security - Integrating GMOSS Achievements in GMES

    The Joint Research Centre of the European Commission hosted a two-day workshop, "Remote sensing for international stability and security: integrating GMOSS achievements in GMES". Its aim was to disseminate the scientific and technical achievements of the Global Monitoring for Security and Stability (GMOSS) network of excellence to partners of ongoing and future GMES projects such as RESPOND, LIMES, RISK-EOS, PREVIEW, BOSS4GMES, SAFER, and G-MOSAIC. The objectives of this workshop were:
    - to bring together scientific and technical people from the GMOSS NoE and from thematically related GMES projects;
    - to discuss and compare alternative technical solutions (e.g. final experimental understanding from GMOSS, operational procedures applied in projects such as RESPOND, pre-operational application procedures foreseen by LIMES, etc.);
    - to draft a list of technical and scientific challenges relevant in the near future;
    - to open GMOSS to a wider forum in the JRC.
    This report contains abstracts of the fifteen contributions presented by European researchers. The presentations addressed pre-processing, feature recognition, change detection, and applications, which also reflects the structure of the report. The second part includes poster abstracts presented during a separate poster session. JRC.G.2 - Global security and crisis management

    Alphabet-based Multisensory Data Fusion and Classification using Factor Graphs

    The method of multisensory data integration is a crucial step of any data fusion approach. Different physical types of sensors (optical, thermal, acoustic, or radar) with different resolutions, as well as different types of GIS digital data (elevation, vector maps), require a proper method for data integration. The incommensurability of the data may preclude the use of conventional statistical methods for fusing and processing them. A correct and established way of multisensory data integration is required to deal with such incommensurable data, as the employment of an inappropriate methodology may lead to errors in the fusion process. Several strategies have been developed to perform proper multisensory data fusion (Bayesian approaches, linear and log-linear opinion pools, neural networks, fuzzy logic approaches). The employment of these approaches is motivated by weighted consensus theory, which leads to fusion processes that are performed correctly across a variety of data properties.
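    Two of the consensus-theoretic strategies named above, the linear and log-linear opinion pools, are simple enough to sketch directly. The sensor posteriors, classes, and weights below are hypothetical; the sketch only illustrates how the two pools combine per-sensor class probabilities.

```python
import numpy as np

def linear_pool(posteriors, weights):
    """Weighted arithmetic mean of the sensors' class posteriors."""
    p = np.tensordot(weights, posteriors, axes=1)
    return p / p.sum()

def log_linear_pool(posteriors, weights):
    """Weighted geometric mean; a sensor assigning near-zero
    probability to a class effectively vetoes it."""
    logp = np.tensordot(weights, np.log(posteriors), axes=1)
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    return p / p.sum()

# hypothetical posteriors of three sensors over three scene classes
P = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.1, 0.1, 0.8]])
w = np.array([0.4, 0.4, 0.2])   # consensus weights, one per sensor
print(linear_pool(P, w))
print(log_linear_pool(P, w))
```

    The weights encode how much the consensus trusts each sensor, which is where the "weighted consensus theory" motivation enters.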

    REGISTRATION OF OPTICAL AND SAR SATELLITE IMAGES BASED ON GEOMETRIC FEATURE TEMPLATES


    Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks such as scene monitoring over time or scene analysis after sudden events. These tasks often require the fusion of geo-referenced and precisely co-registered multi-sensor data. Images captured by high-resolution synthetic aperture radar (SAR) satellites have an absolute geo-location accuracy within a few decimeters. This renders SAR images interesting as a source for improving the geo-location of optical images, whose geo-location accuracy is in the range of some meters. In this paper, we investigate a deep-learning-based approach to improving the geo-localization accuracy of optical satellite images using SAR reference data. Image registration between SAR and optical satellite images requires few, but accurate and reliable, matching points. To derive such matching points, a neural network based on a Siamese architecture was trained to learn the two-dimensional spatial shift between optical and SAR image patches. The network was trained on TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The results of the proposed method confirm that accurate and reliable matching points are generated with higher matching accuracy and precision than state-of-the-art approaches.
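    The core task the network learns, regressing a 2-D shift between two patches, can be illustrated without any learning. The sketch below uses FFT phase correlation on a toy mono-modal pair; it is not the paper's Siamese network (which is needed precisely because SAR and optical patches are not directly correlatable), only a stand-in for the shift-estimation idea behind the matching points.

```python
import numpy as np

def patch_shift(ref, moving):
    """Estimate the 2-D translation of `moving` relative to `ref`
    via phase correlation, returning (dy, dx) with wrap-around
    disambiguated to the range [-size/2, size/2)."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape
    return ((dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moving = np.roll(ref, (3, 5), axis=(0, 1))
print(patch_shift(ref, moving))  # recovers the (3, 5) pixel shift
```

    A trained network replaces the correlation surface with a learned regression, which is what makes the shift estimate robust across the SAR/optical modality gap.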

    A Study of Types of Sensors used in Remote Sensing

    Of late, the science of remote sensing has been gaining a lot of interest and attention due to its wide variety of applications. Remotely sensed data can be used in various fields such as medicine, agriculture, engineering, weather forecasting, military tactics, and disaster management, to name only a few. This article presents a study of two categories of sensors, namely optical and microwave, which are used for remotely sensing the occurrence of disasters such as earthquakes, floods, landslides, avalanches and tropical cyclones, as well as suspicious movements. The remotely sensed data, acquired either through satellites or through ground-based synthetic aperture radar systems, can be used to avert or mitigate a disaster or to perform a post-disaster analysis.

    A NOVEL IMAGE REGISTRATION ALGORITHM FOR SAR AND OPTICAL IMAGES BASED ON VIRTUAL POINTS
