919 research outputs found

    On the possibility of automatic multisensor image registration

    Multisensor image registration is needed in a large number of applications of remote sensing imagery. The accuracy achieved with the usual methods (manual control point extraction, estimation of an analytical deformation model) is not satisfactory for many applications where subpixel accuracy is needed for each pixel of the image (change detection or image fusion, for instance). Unfortunately, there are few works in the literature on the fine registration of multisensor images, and even fewer on extending approaches similar to the fine correlation used for monomodal imagery. In this paper, we analyze the problem of automatic multisensor image registration and introduce similarity measures which can replace the correlation coefficient in a deformation map estimation scheme. We show an example where the deformation map between a radar image and an optical one is estimated fully automatically.
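
    As a minimal illustration of the idea of swapping the similarity measure (not the paper's implementation), the sketch below estimates a local shift by exhaustively maximizing mutual information between a radar patch and an optical patch, where a correlation-based scheme would maximize the correlation coefficient. The window size, search radius, and 32-bin joint histogram are assumed, illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_local_shift(reference, target, center, window=15, search=5):
    """Integer shift maximizing mutual information around one pixel.

    Assumes the window plus the search range stays inside both images."""
    r, c = center
    h = window // 2
    ref_patch = reference[r - h:r + h + 1, c - h:c + h + 1]
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            tgt_patch = target[r + dr - h:r + dr + h + 1,
                               c + dc - h:c + dc + h + 1]
            score = mutual_information(ref_patch, tgt_patch)
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift

# Repeating best_local_shift over a grid of pixel centers yields a dense
# deformation map between the radar and optical images.
```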

    Multisource Data Integration in Remote Sensing

    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled, and the full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.
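
    As a toy illustration of the consistency argument above (not taken from the workshop papers), the sketch below keeps a label wherever two independent classifications of the same scene agree and flags contradictions for review; the label maps and the "unknown" sentinel value are assumptions.

```python
import numpy as np

def integrate_labels(class_a, class_b, unknown=-1):
    """Fuse two independent label maps of the same scene.

    Agreements are kept (and count as more credible); contradictions are
    flagged with the 'unknown' value so that noise or misinterpretation
    can be identified downstream."""
    agree = class_a == class_b
    fused = np.where(agree, class_a, unknown)
    credibility = agree.astype(float)      # 1.0 where sources support each other
    return fused, credibility
```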

    Automatic registration of multi-modal airborne imagery

    This dissertation presents a novel technique based on Maximization of Mutual Information (MMI) and multi-resolution analysis to design an algorithm for automatic registration of multi-sensor images captured by various airborne cameras. In contrast to conventional methods that extract and employ feature points, MMI-based algorithms utilize the mutual information found between two given images to compute the registration parameters. These, in turn, are then utilized to perform multi-sensor registration for remote sensing images. The results indicate that the proposed algorithms are very effective in registering infrared images taken at three different wavelengths with a high-resolution visual image of a given scene. The MMI technique has proven to be very robust with images acquired with the Wild Airborne Sensor Program (WASP) multi-sensor instrument. This dissertation also shows how wavelet-based techniques can be used in a multi-resolution analysis framework to significantly increase computational efficiency for images captured at different resolutions. The fundamental result of this thesis is the technique of using features in the images to enhance the robustness, accuracy and speed of MMI registration. This is done by using features to focus MMI on places that are rich in information. The new algorithm integrates smoothly with MMI and avoids any need for feature matching; the applications of such extensions are then studied. The first extension is the registration of cartographic maps and image data, which is very important for map updating and change detection. This is a difficult problem because map features such as roads and buildings may be mis-located, and features extracted from images may not correspond to map features. Nonetheless, it is possible to obtain a general global registration of maps and images by applying statistical techniques to map and image features. To solve the map-to-image registration problem, this research extends the MMI technique through a focus-of-attention mechanism that forces MMI to utilize correspondences that have a high probability of being information rich. The gradient-based and exhaustive parameter search methods are also compared. Both qualitative and quantitative analyses are used to assess the registration accuracy. Another difficult application is the fusion of LIDAR elevation or intensity data with imagery. Such applications are even more challenging when automated registration algorithms are needed. To improve registration robustness, a salient-area extraction algorithm is developed to overcome the distortion in airborne and satellite images from different sensors. This extension combines the SIFT and Harris feature detection algorithms with MMI and the Harris corner label map to address difficult multi-modal registration problems through a combination of selection and focus-of-attention mechanisms together with mutual information. This two-step approach overcomes the above problems and provides a good initialization for the final step of the registration process. Experimental results are provided that demonstrate a variety of mapping applications, including multi-modal IR imagery, map and image registration, and image and LIDAR registration.
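
    A rough sketch of the multi-resolution idea (not the dissertation's algorithm): mutual information is maximized over translations on heavily downsampled images first, and the coarse estimate seeds a narrower search at each finer level. The pyramid depth, search radius, histogram bins, and wrap-around shift are simplifying assumptions.

```python
import numpy as np

def mi(a, b, bins=32):
    """Mutual information between two images of equal shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def shift_image(img, dy, dx):
    """Circular shift; adequate for the small translations of this sketch."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def search_translation(fixed, moving, radius, init=(0, 0)):
    """Exhaustive MI maximization over integer shifts around an initial guess."""
    best_score, best = -np.inf, init
    for dy in range(init[0] - radius, init[0] + radius + 1):
        for dx in range(init[1] - radius, init[1] + radius + 1):
            score = mi(fixed, shift_image(moving, dy, dx))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def coarse_to_fine(fixed, moving, levels=3, radius=8):
    """Estimate a translation on a simple image pyramid, coarsest level first."""
    dy, dx = 0, 0
    for lvl in reversed(range(levels)):
        f, m = fixed[::2 ** lvl, ::2 ** lvl], moving[::2 ** lvl, ::2 ** lvl]
        # Each coarse-level estimate, doubled, seeds the search one level finer.
        dy, dx = search_translation(f, m, radius, init=(2 * dy, 2 * dx))
    return dy, dx
```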

    The agricultural impact of the 2015–2016 floods in Ireland as mapped through Sentinel 1 satellite imagery

    Irish Journal of Agricultural and Food Research, Volume 58, Issue 1. R. O’Hara, S. Green and T. McCarthy. DOI: https://doi.org/10.2478/ijafr-2019-0006. Published online: 11 Oct 2019. Peer-reviewed.
    The capability of Sentinel 1 C-band (5 cm wavelength) synthetic aperture radar (SAR) for flood mapping is demonstrated, and this approach is used to map the extent of the extensive floods that occurred throughout the Republic of Ireland in the winter of 2015–2016. Thirty-three Sentinel 1 images were used to map the area and duration of floods over a 6-mo period from November 2015 to April 2016. Flood maps for 11 separate dates charted the development and persistence of floods nationally. The maximum flood extent during this period was estimated to be ~24,356 ha. Rainfall depth in the preceding 5 d influenced the magnitude of flooding, with rainfall over more extended periods having a lesser effect. Reduced photosynthetic activity on farms affected by flooding was observed in Landsat 8 vegetation index difference images compared to the previous spring. The accuracy of the flood map was assessed against reports of flooding from affected farms, as well as other satellite-derived maps from the Copernicus Emergency Management Service and Sentinel 2. Monte Carlo simulated elevation data (20 m resolution, 2.5 m root mean square error [RMSE]) were used to estimate the flood’s depth and volume. Although the modelled flood height showed a strong correlation with the measured river heights, differences of several metres were observed. Future mapping strategies are discussed, which include high-temporal-resolution soil moisture data, as part of an integrated multisensor approach to flood response over a range of spatial scales.
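
    A hedged sketch of the basic SAR flood-mapping step (not the paper's full workflow): open water is a weak scatterer, so low backscatter in a calibrated Sentinel 1 image marks likely flooding. The -18 dB threshold, median-filter size, and minimum patch size are assumed, illustrative values.

```python
import numpy as np
from scipy import ndimage

def flood_mask(sigma0_db, threshold_db=-18.0, median_size=5, min_pixels=50):
    """Boolean flood mask from a calibrated backscatter image in dB."""
    smoothed = ndimage.median_filter(sigma0_db, size=median_size)  # tame speckle
    water = smoothed < threshold_db                                # low return -> water
    labels, n = ndimage.label(water)                               # connected components
    sizes = ndimage.sum(water, labels, index=range(1, n + 1))      # pixels per component
    keep_labels = np.flatnonzero(sizes >= min_pixels) + 1          # drop tiny patches
    return np.isin(labels, keep_labels)

# Example: with 10 m pixels, flooded area in hectares is
# flood_mask(img).sum() * 100 / 10_000.
```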

    Robust Fine Registration of Multisensor Remote Sensing Images Based on Enhanced Subpixel Phase Correlation

    Automatic fine registration of multisensor images plays an essential role in many remote sensing applications. However, it is always a challenging task due to significant radiometric and textural differences. In this paper, an enhanced subpixel phase correlation method is proposed, which embeds phase congruency-based structural representation, L1-norm-based rank-one matrix approximation with adaptive masking, and stable robust model fitting into the conventional calculation framework in the frequency domain. The aim is to improve the accuracy and robustness of subpixel translation estimation in practical cases. In addition, template matching using the enhanced subpixel phase correlation is integrated to realize reliable fine registration, which is able to extract a sufficient number of well-distributed and high-accuracy tie points and reduce the local misalignment for coarsely coregistered multisensor remote sensing images. Experiments undertaken with images from different satellites and sensors were carried out in two parts: tie point matching and fine registration. The results of qualitative analysis and quantitative comparison with the state-of-the-art area-based and feature-based matching methods demonstrate the effectiveness and reliability of the proposed method for multisensor matching and registration.
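
    Below is a bare-bones sketch of conventional subpixel phase correlation, the framework the paper enhances (the phase-congruency representation, L1-norm rank-one approximation, and robust model fitting are not reproduced here); the parabolic peak refinement and the sign convention are generic choices.

```python
import numpy as np

def phase_correlation_shift(ref, mov, eps=1e-12):
    """Estimate the (row, col) shift that aligns mov with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + eps                   # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak_r, peak_c = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(line, idx):
        # Sub-pixel offset from a parabola through the peak and its neighbours.
        prev, nxt = line[(idx - 1) % len(line)], line[(idx + 1) % len(line)]
        denom = 2 * line[idx] - prev - nxt
        return idx + (0.5 * (nxt - prev) / denom if denom != 0 else 0.0)

    rows, cols = corr.shape
    dy = refine(corr[:, peak_c], peak_r)
    dx = refine(corr[peak_r, :], peak_c)
    if dy > rows / 2:                              # wrap to signed shifts
        dy -= rows
    if dx > cols / 2:
        dx -= cols
    return dy, dx

# Applying this window by window (template matching) gives the tie points used
# for fine registration of coarsely coregistered images.
```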

    JERS-1 SAR and LANDSAT-5 TM image data fusion: An application approach for lithological mapping

    Satellite image data fusion is a set of image processing procedures used either to optimise imagery for visual photointerpretation or for automated thematic classification with a low error rate and high accuracy. Lithological mapping using remote sensing image data relies on the spectral and textural information of the rock units of the area to be mapped. This information can be derived from Landsat TM optical and JERS-1 SAR images, respectively. Prior to extracting this information (spectral and textural) and fusing it, geometric co-registration of the TM and SAR images, atmospheric correction of the TM data, and SAR despeckling are required. In this thesis, an appropriate atmospheric model is developed and implemented utilising the dark pixel subtraction method for atmospheric correction. For SAR despeckling, an efficient new method is also developed to test whether the SAR filter used removes the textural information or not. For image optimisation for visual photointerpretation, a new method of spectral coding of the six bands of the optical TM data is developed. The new spectral coding method is used to produce an efficient colour composite with high separability between the spectral classes, similar to that obtained when all six optical TM bands are used together. This spectrally coded colour composite is used as the spectral component, which is then fused with the textural component represented by the despeckled JERS-1 SAR data using fusion tools including the colour transform and the PCT. The Grey Level Co-occurrence Matrix (GLCM) technique is used to build the textural data set from the speckle-filtered JERS-1 SAR data, yielding seven textural GLCM measures. For automated thematic mapping, using both the six TM spectral bands and the seven textural GLCM measures, a new classification method has been developed based on the Maximum Likelihood Classifier (MLC). The method, named sequential maximum likelihood classification, works efficiently by comparing the classified textural pixels, the classified spectral pixels, and the classified textural-spectral pixels, and provides a means of utilising the textural and spectral information for automated lithological mapping.
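
    Two of the building blocks named above, sketched under assumptions rather than taken from the thesis: dark-pixel subtraction for a single TM band and a handful of GLCM texture measures for a SAR patch. The percentile, quantization to 32 levels, offsets, and the scikit-image function names (graycomatrix/graycoprops, version 0.19 or later) are assumed choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def dark_pixel_subtraction(band, percentile=0.5):
    """Remove an additive path-radiance estimate taken from the darkest pixels."""
    haze = np.percentile(band, percentile)
    return np.clip(band - haze, 0, None)

def glcm_textures(patch, levels=32, distances=(1,), angles=(0, np.pi / 2)):
    """A few GLCM texture measures for a (non-constant) despeckled SAR patch."""
    # Quantize the patch to 'levels' grey levels before building the co-occurrence matrix.
    q = np.digitize(patch, np.linspace(patch.min(), patch.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances, angles,
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```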