
    Ship Detection and Segmentation using Image Correlation

    There has been intensive research interest in ship detection and segmentation over the last two decades, driven by demand from a wide range of civil applications. However, existing approaches, which are mainly based on statistical properties of images, fail to detect smaller ships and boats. Specifically, known techniques are not robust to the inevitable small geometric and photometric changes in images containing ships. In this paper a novel approach to ship detection is proposed based on the correlation of maritime images. The idea comes from the observation that the fine pattern of the sea surface changes considerably from moment to moment, whereas a ship's appearance remains essentially unchanged. We want to examine whether the images share a common unaltered part, in this case a ship. To this end, we developed a method, Focused Correlation (FC), to achieve robustness to geometric distortions of the image content. Various experiments have been conducted to evaluate the effectiveness of the proposed approach. Comment: 8 pages, to be published in proc. of conference IEEE SMC 201
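    The abstract does not spell out the Focused Correlation method itself, but its underlying observation, that a ship correlates strongly across frames while the changing sea surface decorrelates, can be illustrated with a plain windowed normalized cross-correlation. The window size, the 0.5 threshold, and the synthetic frames below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_ncc(a, b, win=8):
    """Normalized cross-correlation between two frames over non-overlapping
    windows. A region that is stable across frames (e.g. a ship) correlates
    strongly; independent sea clutter decorrelates between frames."""
    h, w = a.shape
    out = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            pa = a[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            pb = b[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.linalg.norm(pa) * np.linalg.norm(pb)
            out[i, j] = pa @ pb / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(0)
ship = rng.normal(size=(16, 16))        # fixed ship appearance
frame1 = rng.normal(size=(64, 64))      # sea clutter at time t1
frame2 = rng.normal(size=(64, 64))      # independent sea clutter at time t2
frame1[24:40, 24:40] = ship             # same ship in both frames
frame2[24:40, 24:40] = ship
ncc = local_ncc(frame1, frame2, win=8)
mask = ncc > 0.5                        # high correlation marks the stable region
```

    In this toy setup the windows covering the ship reach a correlation near 1.0 while the background windows stay near 0, which is the separation the paper's FC method is designed to preserve under geometric distortion.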

    Robust Feature Matching Method for SAR and Optical Images by Using Gaussian-Gamma-Shaped Bi-Windows-Based Descriptor and Geometric Constraint

    Improving the matching reliability of multi-sensor imagery has been one of the most challenging issues of recent years, particularly for synthetic aperture radar (SAR) and optical images, where speckle noise, geometric distortions, and nonlinear radiometric differences must all be handled. In this paper, a method for matching SAR and optical images is proposed. First, interest points that are robust to speckle noise in SAR images are detected by improving the original phase-congruency-based detector. Second, feature descriptors are constructed for all interest points by combining a new Gaussian-Gamma-shaped bi-windows-based gradient operator with the histogram of oriented gradients pattern. Third, descriptor similarity and geometric relationships are combined to constrain the matching process. Finally, an approach based on global and local constraints is proposed to eliminate outliers. In the experiments, SAR images including COSMO-SkyMed, RADARSAT-2, TerraSAR-X and HJ-1C images, and optical images including ZY-3 and Google Earth images, are used to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method provides significant improvements in the number of correct matches and in matching precision compared with state-of-the-art SIFT-like methods. Registration accuracy of nearly 1 pixel is obtained from the matching results of the proposed method.
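    The final outlier-elimination step can be sketched in miniature. The paper combines global and local geometric constraints; the toy below simplifies that to a single global translation recovered by RANSAC-style consensus, so the function name, threshold, and synthetic matches are all illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

def ransac_translation(src, dst, thresh=2.0, iters=200, seed=0):
    """Toy geometric-constraint filter: candidate SAR-to-optical matches are
    kept only if they agree with a dominant 2-D translation. One match at a
    time hypothesizes the shift; the hypothesis with the most inliers wins."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                            # shift implied by match k
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(40, 2))                # keypoints in SAR image
true_t = np.array([12.0, -7.5])
dst = src + true_t                                     # correct correspondences
dst[:8] += rng.uniform(20, 80, size=(8, 2))            # 8 gross mismatches
inliers = ransac_translation(src, dst)                 # flags the 32 correct ones
```

    A real SAR/optical matcher would use a richer transformation model and combine this with descriptor similarity, as the abstract describes, but the consensus principle is the same.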

    New techniques for the automatic registration of microwave and optical remotely sensed images

    Remote sensing is a remarkable tool for monitoring and mapping the land and ocean surfaces of the Earth. Recently, with the launch of many new Earth observation satellites, there has been an increase in the amount of data being acquired, and the potential for mapping is greater than ever before. Furthermore, currently operational sensors acquire data in many different parts of the electromagnetic spectrum. It has long been known that by combining images acquired at different wavelengths, or at different times, the ability to detect and recognise features on the ground is greatly increased. This thesis investigates the possibilities for automatically combining radar and optical remotely sensed images. The process of combining images, known as data integration, is a two-step procedure: geometric integration (image registration) and radiometric integration (data fusion). Data fusion is essentially an automatic procedure, but the problems associated with the automatic registration of multisource images have not, in general, been resolved. This thesis proposes a method of automatic image registration based on the extraction and matching of common features that are visible in both images. The first stage of the registration procedure uses patches as the matching primitives in order to determine the approximate alignment of the images. The second stage refines the registration results by matching edge features. Throughout the development of the proposed registration algorithm, reliability, robustness and automation were always considered priorities. Tests with both small images (512x512 pixels) and full-scene images showed that the algorithm could successfully register images to an acceptable level of accuracy.
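    The first, coarse stage of such a two-stage scheme can be sketched as an exhaustive search over integer shifts of a patch against a reference, keeping the shift with the highest correlation; the thesis's second stage would then refine this with edge matching. The search range, image sizes, and function names below are illustrative assumptions.

```python
import numpy as np

def coarse_shift(ref, mov, max_shift=5):
    """Stage-1 sketch of coarse patch-based alignment: test every integer
    shift within +/- max_shift and keep the one whose shifted window of the
    moving image best correlates with the central window of the reference."""
    h, w = ref.shape
    m = max_shift
    core = ref[m:h-m, m:w-m]
    best, best_score = (0, 0), -np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = mov[m+dy:h-m+dy, m+dx:w-m+dx]
            a = core - core.mean()
            b = cand - cand.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(2)
img = rng.normal(size=(40, 40))
shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)  # known (3, -2) shift
est = coarse_shift(img, shifted)                        # recovers (3, -2)
```

    Exhaustive search is only practical for the small shifts of a coarse stage; that is precisely why a refinement stage, edge matching in the thesis, follows.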

    Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks, such as monitoring a scene over time or analysing a scene after a sudden event. These tasks often require the fusion of geo-referenced and precisely co-registered multi-sensor data. Images captured by high-resolution synthetic aperture radar (SAR) satellites have an absolute geo-location accuracy within a few decimeters. This makes SAR images an interesting source for improving the geo-location of optical images, whose geo-location accuracy is in the range of a few meters. In this paper, we investigate a deep learning based approach for improving the geo-localization accuracy of optical satellite images using SAR reference data. Image registration between SAR and optical satellite images requires only a few, but accurate and reliable, matching points. To derive such matching points, a neural network based on a Siamese architecture was trained to learn the two-dimensional spatial shift between optical and SAR image patches. The network was trained on TerraSAR-X and PRISM image pairs covering large urban areas spread over Europe. The results confirm that the proposed method generates accurate and reliable matching points with higher matching accuracy and precision than state-of-the-art approaches.
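    The quantity the Siamese network is trained to predict, the 2-D shift between two patches, has a classical baseline: phase correlation, whose peak gives the integer translation directly. It is robust to uniform intensity changes but not to the SAR/optical radiometric gap the learned matcher targets, which is what motivates learning the similarity instead. The sketch below is this classical baseline, not the paper's network.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer 2-D shift of b relative to a: the normalized
    cross-power spectrum is a pure phase ramp whose inverse FFT peaks at
    the translation (with wrap-around mapped back to signed shifts)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = F / (np.abs(F) + 1e-12)          # keep phase only
    corr = np.fft.ifft2(r).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(3)
patch = rng.normal(size=(64, 64))
moved = np.roll(np.roll(patch, 4, axis=0), 9, axis=1)  # shift by (4, 9)
est = phase_correlation(moved, patch)                  # recovers (4, 9)
```

    A learned Siamese matcher plays the same role as this estimator but can bridge the nonlinear radiometric differences between SAR and optical patches, where plain phase correlation tends to fail.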

    Interferometric Synthetic Aperture RADAR and Radargrammetry towards the Categorization of Building Changes

    The purpose of this work is to investigate SAR techniques relying on multi-image acquisition for fully automatic and rapid change detection analysis at the building level. In particular, the benefits and limitations of the complementary use of two specific SAR techniques, InSAR and radargrammetry, in an emergency context are examined in terms of speed, coverage and accuracy. The analysis is performed using spaceborne SAR data.

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of many applications. This reprint provides a platform for researchers to address these challenges and to present innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.