
    Moving Target Analysis in ISAR Image Sequences with a Multiframe Marked Point Process Model

    In this paper we propose a Multiframe Marked Point Process model of line segments and point groups for automatic target structure extraction and tracking in Inverse Synthetic Aperture Radar (ISAR) image sequences. To deal with scatterer scintillations and strong speckle noise in the ISAR frames, we obtain the resulting target sequence by an iterative optimization process that simultaneously considers the observed image data and various prior geometric interaction constraints between the target appearances in consecutive frames. A detailed quantitative evaluation is performed on 8 real ISAR image sequences of different carrier ship and airplane targets, using a test database containing 545 manually annotated frames.
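
The kind of multiframe energy this abstract describes can be sketched in a few lines. In this illustrative reduction (not the paper's model), the extracted target per frame is collapsed to a single 2-D centre, a data term ties each frame's estimate to the observation, and an inter-frame prior penalises abrupt changes between consecutive frames; the observations, weight `lam`, and the trajectories are synthetic placeholders.

```python
import numpy as np

# Observed target centres in four consecutive frames (illustrative values).
observed = np.array([[10.0, 10.5], [10.4, 10.9], [11.1, 11.2], [11.5, 11.8]])

def multiframe_energy(centres, lam=0.5):
    """Data fit per frame plus a smoothness prior between consecutive frames."""
    data = np.sum((centres - observed) ** 2)            # agreement with image data
    prior = np.sum((centres[1:] - centres[:-1]) ** 2)   # inter-frame consistency
    return data + lam * prior

smooth = observed.copy()                                # follows the observations
jumpy = observed + np.array([[0.0, 0.0], [5.0, -5.0], [0.0, 0.0], [0.0, 0.0]])
```

A smooth trajectory that tracks the data yields a lower energy than one with an outlier frame, which is exactly the behaviour the joint data/prior formulation is meant to enforce.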

    Unsupervised Detection of Planetary Craters by a Marked Point Process

    With the launch of several planetary missions in the last decade, a large volume of planetary imagery is being acquired. Given this volume of data, automatic and robust processing techniques are needed for the analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo (RJMCMC) dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.
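
The simulated annealing step the abstract mentions can be sketched generically. The energy below is a stand-in for the paper's likelihood-plus-prior over ellipse configurations (it merely pulls an ellipse centre towards a fixed point and mildly favours circular shapes); the move proposal, cooling rate, and iteration count are illustrative assumptions.

```python
import math
import random

def energy(ellipse):
    """Hypothetical energy for one ellipse (cx, cy, a, b, theta)."""
    cx, cy, a, b, theta = ellipse
    data_term = (cx - 50.0) ** 2 + (cy - 50.0) ** 2   # stand-in likelihood term
    prior_term = abs(a - b)                            # stand-in a-priori term
    return data_term + prior_term

def perturb(ellipse, scale=1.0):
    """Propose a small random move of one parameter."""
    e = list(ellipse)
    e[random.randrange(len(e))] += random.gauss(0.0, scale)
    return tuple(e)

def simulated_annealing(start, n_iter=5000, t0=100.0, cooling=0.999):
    current, e_cur = start, energy(start)
    t = t0
    for _ in range(n_iter):
        cand = perturb(current)
        e_new = energy(cand)
        # Downhill moves are always accepted; uphill moves with
        # Boltzmann probability, which shrinks as the temperature cools.
        if e_new < e_cur or random.random() < math.exp((e_cur - e_new) / t):
            current, e_cur = cand, e_new
        t *= cooling  # geometric cooling schedule
    return current, e_cur

random.seed(0)
best, e_best = simulated_annealing((10.0, 10.0, 5.0, 8.0, 0.0))
```

A full RJMCMC sampler would additionally propose birth and death moves that change the number of ellipses; this sketch only shows the annealed acceptance rule on a fixed-dimension state.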

    Land Cover Classification of the Lago Grande de Curuai Floodplain (Amazonia, Brazil) Using Multisensor Data and Image Fusion

    Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indices. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classification. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments: (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analysis (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
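
The accuracy measures this study reports can be computed directly from a confusion matrix: the kappa index, and the quantity and allocation disagreement of Pontius and Millones. A minimal sketch, using an illustrative 3x3 matrix rather than the study's data (rows = classified, columns = reference):

```python
import numpy as np

cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  2, 42]], dtype=float)

p = cm / cm.sum()                      # cell proportions
po = np.trace(p)                       # observed agreement
row, col = p.sum(axis=1), p.sum(axis=0)
pe = np.dot(row, col)                  # chance agreement
kappa = (po - pe) / (1.0 - pe)

# Quantity disagreement: mismatch in class proportions between maps.
quantity = 0.5 * np.abs(row - col).sum()
# Allocation disagreement: the remainder of the total disagreement,
# attributable to spatial allocation rather than class quantities.
allocation = (1.0 - po) - quantity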

    Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

    The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used because they favor the comparison of local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) in order to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective in the application to multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures allows moving the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility to process a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration as compared to state-of-the-art approaches.
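
The registration step can be sketched in isolation: once the images share a common domain, an ℓ2 mismatch is minimised over a 2-D translation with COBYLA (here via `scipy.optimize.minimize`). A synthetic Gaussian blob stands in for the cGAN-translated image pair; the true offset (2.5, -1.5), the blob width, and the solver settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import minimize

yy, xx = np.mgrid[0:64, 0:64].astype(float)

def blob(cx, cy, sigma=6.0):
    """A smooth synthetic image: a Gaussian blob centred at (cx, cy)."""
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

reference = blob(30.0, 30.0)
true_shift = np.array([2.5, -1.5])       # unknown offset of the moving image

def l2_mismatch(t):
    # Re-render the moving image compensated by the candidate shift t
    # and score the residual sum of squares against the reference.
    moved = blob(30.0 + true_shift[0] - t[0], 30.0 + true_shift[1] - t[1])
    return np.sum((reference - moved) ** 2)

res = minimize(l2_mismatch, x0=[0.0, 0.0], method="COBYLA",
               options={"rhobeg": 1.0, "maxiter": 500})
```

Because the domain-adapted images are directly comparable, the cost surface is a smooth bowl around the true offset, which is what lets a derivative-free local optimiser like COBYLA recover the shift from a simple ℓ2 functional.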

    Extraction of Vehicle Groups in Airborne Lidar Point Clouds with Two-Level Point Processes

    In this paper we present a new object based hierarchical model for joint probabilistic extraction of vehicles and groups of corresponding vehicles - called traffic segments - in airborne Lidar point clouds collected from dense urban areas. First, the 3-D point set is classified into terrain, vehicle, roof, vegetation and clutter classes. Then the points with the corresponding class labels and echo strength (i.e. intensity) values are projected to the ground. In the obtained 2-D class and intensity maps we approximate the top view projections of vehicles by rectangles. Since our task is to simultaneously extract the rectangle population describing the position, size and orientation of the vehicles and to group the vehicles into traffic segments, we propose a hierarchical, Two-Level Marked Point Process (L2MPP) model for the problem. The output vehicle and traffic segment configurations are extracted by an iterative stochastic optimization algorithm. We have tested the proposed method with real data of a discrete return Lidar sensor providing up to four range measurements for each laser pulse. Using manually annotated ground truth information on a data set containing 1009 vehicles, we provide quantitative evaluation results showing that the L2MPP model surpasses two earlier grid-based approaches, a 3-D point-cloud-based process and a single layer MPP solution. The accuracy of the proposed method measured in F-rate is 97% at object level, 83% at pixel level and 95% at group level.
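
The ground-projection step described above has a compact numpy form: points carrying a "vehicle" label are binned onto a 2-D grid, accumulating their echo intensities per cell. The random points, labels, scene size, and the 0.5 m cell size below are synthetic placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
points = rng.uniform(0.0, 20.0, size=(n, 3))        # x, y, z in metres
intensity = rng.uniform(0.0, 1.0, size=n)           # echo strength per point
labels = rng.choice(["terrain", "vehicle", "roof"], size=n)

cell = 0.5                                           # grid resolution (m)
width = int(np.ceil(20.0 / cell))
intensity_map = np.zeros((width, width))
count_map = np.zeros((width, width))

# Project only vehicle-labelled points onto the ground grid.
mask = labels == "vehicle"
ix = (points[mask, 0] / cell).astype(int)
iy = (points[mask, 1] / cell).astype(int)
np.add.at(intensity_map, (iy, ix), intensity[mask])  # sum intensities per cell
np.add.at(count_map, (iy, ix), 1)                    # point count per cell

# Mean intensity per occupied cell; empty cells stay zero.
mean_map = np.divide(intensity_map, count_map,
                     out=np.zeros_like(intensity_map), where=count_map > 0)
```

`np.add.at` is used instead of plain fancy-indexed addition so that multiple points falling in the same cell all contribute, which is what makes the map a faithful accumulation rather than a last-write-wins raster.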

    Towards a 20m global building map from Sentinel-1 SAR Data

    This study introduces a technique for automatically mapping built-up areas using synthetic aperture radar (SAR) backscattering intensity and interferometric multi-temporal coherence generated from Sentinel-1 data in the framework of the Copernicus program. The underlying hypothesis is that, in SAR images, built-up areas exhibit very high backscattering values that are coherent in time. Several particular characteristics of the Sentinel-1 satellite mission are put to good use, such as its high revisit time, the availability of dual-polarized data, and its small orbital tube. The newly developed algorithm is based on an adaptive parametric thresholding that first identifies pixels with high backscattering values in both VV and VH polarimetric channels. The interferometric SAR coherence is then used to reduce false alarms. These are caused by land cover classes (other than buildings) that are characterized by high backscattering values that are not coherent in time (e.g., certain types of vegetated areas). The algorithm was tested on Sentinel-1 Interferometric Wide Swath data from five different test sites located in semiarid and arid regions in the Mediterranean region and Northern Africa. The resulting building maps were compared with the Global Urban Footprint (GUF) derived from the TerraSAR-X mission data and, on average, a 92% agreement was obtained.
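
The two-stage decision rule described above reduces to a few array operations: keep pixels that exceed a threshold in both the VV and VH channels, then discard candidates with low multi-temporal coherence. The synthetic arrays, the 95th-percentile thresholds, and the 0.6 coherence cut below are illustrative stand-ins for the paper's calibrated adaptive parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
vv = rng.gamma(2.0, 0.05, size=(100, 100))           # backscatter intensity, VV
vh = rng.gamma(2.0, 0.03, size=(100, 100))           # backscatter intensity, VH
coherence = rng.uniform(0.0, 1.0, size=(100, 100))   # multi-temporal coherence

# Stage 1: high backscatter in BOTH polarimetric channels.
t_vv = np.percentile(vv, 95)     # placeholder for the adaptive VV threshold
t_vh = np.percentile(vh, 95)     # placeholder for the adaptive VH threshold
candidates = (vv > t_vv) & (vh > t_vh)

# Stage 2: require temporal coherence to suppress false alarms
# from bright but incoherent land cover (e.g. some vegetation).
built_up = candidates & (coherence > 0.6)
```

Requiring both channels to be bright is a conjunction, so the candidate set can only shrink at each stage; the coherence mask then removes the bright-but-incoherent pixels that motivate the second test.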

    Multi-Decadal Changes in Mangrove Extent, Age and Species in the Red River Estuaries of Viet Nam

    This research investigated the performance of four supervised machine learning image classifiers: artificial neural network (ANN), decision tree (DT), random forest (RF), and support vector machine (SVM), using SPOT-7 and Sentinel-1 images to classify mangrove age and species in 2019 in a Red River estuary, typical of others found in northern Viet Nam. The four classifiers were chosen because they are considered highly accurate; however, their use in mangrove age and species classification has thus far been limited. A time-series of Landsat images from 1975 to 2019 was used to map changes in mangrove extent using the unsupervised iterative self-organizing data analysis technique (ISODATA), with its accuracy compared against a K-means classification. This analysis found that mangrove extent has increased, despite a fall in the 1980s, indicating the success of mangrove plantation and forest protection efforts by local people in the study area. To evaluate the supervised image classifiers, 183 in situ training plots were assessed; 70% of them were used to train the supervised algorithms and 30% to validate the results. To improve mangrove species separation, Gram–Schmidt and principal component analysis image fusion techniques were applied to generate better quality images. All supervised and unsupervised (2019) results for mangrove age, species, and extent were mapped and their accuracy evaluated. Confusion matrices showed that the classified layers agreed with the ground-truth data, with most producer and user accuracies greater than 80%. The overall accuracy and Kappa coefficients (around 0.9) indicated that the image classifications were very good. The tests showed that SVM was the most accurate, followed by DT, ANN, and RF in this case study. The changes in mangrove extent identified in this study and the methods tested for using remotely sensed data will be valuable to monitoring and evaluation assessments of mangrove plantation projects.
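
The evaluation protocol above (a 70/30 train/validation split of the plots, several supervised classifiers, accuracy and kappa on the held-out share) can be sketched with scikit-learn. Synthetic features from `make_classification` stand in for the SPOT-7/Sentinel-1 band values of the 183 plots, and only three of the four classifiers are shown to keep the sketch short.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder data: 183 plots, 8 band-like features, 3 species/age classes.
X, y = make_classification(n_samples=183, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf", random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (accuracy_score(y_te, pred),
                    cohen_kappa_score(y_te, pred))
```

Stratifying the split keeps the class proportions comparable between training and validation plots, which matters when a study reports per-class producer and user accuracies.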

    Automatic Target Classification in Passive ISAR Range-Crossrange Images
