259 research outputs found

    An Adaptive Semi-Parametric and Context-Based Approach to Unsupervised Change Detection in Multitemporal Remote-Sensing Images

    In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. Unlike classical approaches, it formulates the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, an adaptive semi-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. This technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. Then, thanks to the resulting estimates and to a Markov random field (MRF) approach used to model the spatial-contextual information contained in the considered multitemporal images, a change-detection map is generated. The adaptive semi-parametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.

    New techniques for the automatic registration of microwave and optical remotely sensed images

    Remote sensing is a remarkable tool for monitoring and mapping the land and ocean surfaces of the Earth. Recently, with the launch of many new Earth observation satellites, there has been an increase in the amount of data being acquired, and the potential for mapping is greater than ever before. Furthermore, sensors which are currently operational are acquiring data in many different parts of the electromagnetic spectrum. It has long been known that by combining images that have been acquired at different wavelengths, or at different times, the ability to detect and recognise features on the ground is greatly increased. This thesis investigates the possibilities for automatically combining radar and optical remotely sensed images. The process of combining images, known as data integration, is a two-step procedure: geometric integration (image registration) and radiometric integration (data fusion). Data fusion is essentially an automatic procedure, but the problems associated with automatic registration of multisource images have not, in general, been resolved. This thesis proposes a method of automatic image registration based on the extraction and matching of common features which are visible in both images. The first stage of the registration procedure uses patches as the matching primitives in order to determine the approximate alignment of the images. The second stage refines the registration results by matching edge features. Throughout the development of the proposed registration algorithm, reliability, robustness and automation were always considered priorities. Tests with both small images (512x512 pixels) and full-scene images showed that the algorithm could successfully register images to an acceptable level of accuracy.

    Multisource Data Integration in Remote Sensing

    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated in a (computer) model of this world. Multiple sources may give complementary views of the world - consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.

    Segmentation and Classification of Multimodal Imagery

    Segmentation and classification are two important computer vision tasks that transform input data into a compact representation allowing fast and efficient analysis. Several challenges exist in generating accurate segmentation or classification results. In a video, for example, objects often change their appearance and are partially occluded, making it difficult to delineate an object from its surroundings. This thesis proposes video segmentation and aerial image classification algorithms to address some of these problems and provide accurate results. We developed a gradient-driven three-dimensional segmentation technique that partitions a video into spatiotemporal objects. The algorithm utilizes the local gradient computed at each pixel location together with a global boundary map acquired through deep learning methods to generate initial pixel groups by traversing from low- to high-gradient regions. A local clustering method is then employed to refine these initial pixel groups. The refined sub-volumes in the homogeneous regions of the video are selected as initial seeds and iteratively combined with adjacent groups based on intensity similarities. The volume growth is terminated at the color boundaries of the video. The over-segments obtained from the above steps are then merged hierarchically by a multivariate approach, yielding a final segmentation map for each frame. In addition, we implemented a streaming version of the above algorithm that requires less computational memory. The results illustrate that our proposed methodology compares favorably, both qualitatively and quantitatively, in segmentation quality and computational efficiency with the latest state-of-the-art techniques. We also developed a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural network rather than at the final layers. The early-fusion architecture has fewer parameters and thereby reduces computational time and GPU memory during training and inference. We also introduce a composite architecture that fuses features throughout the network. The methods were validated on four different datasets: ISPRS Potsdam, Vaihingen, IEEE Zeebruges, and a Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 dataset, we obtain ground-truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically after layer three of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
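    The claim that early fusion has fewer parameters than decision-level fusion can be made concrete with a toy parameter count. The band counts and layer widths below are made up for illustration and do not come from the thesis:

```python
def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases of one k x k convolution layer."""
    return in_ch * out_ch * k * k + out_ch

def early_fusion_params(bands_a, bands_b, widths):
    """One network; both sensors' bands are concatenated at the input,
    so only the first layer grows with the extra bands."""
    chans = [bands_a + bands_b] + widths
    return sum(conv_params(chans[i], chans[i + 1]) for i in range(len(widths)))

def decision_fusion_params(bands_a, bands_b, widths):
    """Two full per-sensor networks fused at the output (the small
    fusion layer itself is ignored here)."""
    def one(bands):
        chans = [bands] + widths
        return sum(conv_params(chans[i], chans[i + 1]) for i in range(len(widths)))
    return one(bands_a) + one(bands_b)

# E.g. 2 SAR bands plus 4 optical bands into three 3x3 conv layers:
early = early_fusion_params(2, 4, [32, 64, 64])
late = decision_fusion_params(2, 4, [32, 64, 64])
```

    Duplicating the network for decision fusion roughly doubles the deeper layers, which dominate the count, while early fusion only widens the cheap first layer.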

    Automatic near real-time flood detection in high resolution X-band synthetic aperture radar satellite data using context-based classification on irregular graphs

    This thesis is an outcome of the project “Flood and damage assessment using very high resolution SAR data” (SAR-HQ), which is embedded in the interdisciplinary RIMAX (Risk Management of Extreme Flood Events) programme, funded by the Federal Ministry of Education and Research (BMBF). It comprises the results of three scientific papers on automatic near real-time flood detection in high resolution X-band synthetic aperture radar (SAR) satellite data for operational rapid mapping activities in disaster and crisis-management support. Flood events seem to be becoming more frequent and destructive in many regions of the world. Rising awareness of the availability of satellite-based cartographic information has led to an increase in requests to the corresponding mapping services to support civil-protection and relief organizations with disaster-related mapping and analysis activities. Due to the rising number of satellite systems with high revisit frequencies, an increasing pool of SAR data is available during operational flood mapping activities. This offers the possibility to observe the whole extent of even large-scale flood events and their spatio-temporal evolution, but also calls for computationally efficient and automatic flood detection methods, which drastically reduce the manual input required from a human image interpreter. This thesis provides solutions for the near real-time derivation of detailed flood parameters such as flood extent, flood-related backscatter changes and flood classification probabilities from the new generation of high resolution X-band SAR satellite imagery in a completely unsupervised way. These data are, in comparison to images from conventional medium-resolution SAR sensors, characterized by increased intra-class and decreased inter-class variability due to the reduced mixed-pixel phenomenon.
This problem is addressed by utilizing multi-contextual models on irregular hierarchical graphs, which consider that semantic image information is represented less in single pixels than in homogeneous image objects and their mutual relations. A hybrid Markov random field (MRF) model is developed, which integrates scale-dependent as well as spatio-temporal contextual information into the classification process by combining hierarchical causal Markov image modeling on automatically generated irregular hierarchical graphs with noncausal Markov modeling related to planar MRFs. This model is initialized in an unsupervised manner by an automatic tile-based thresholding approach, which solves the flood detection problem in large-size SAR data with small a priori class probabilities by statistical parameterization of local bimodal class-conditional density functions in a time-efficient manner. Experiments performed on TerraSAR-X StripMap data of Southwest England and ScanSAR data of north-eastern Namibia during large-scale flooding show the effectiveness of the proposed methods in terms of classification accuracy, computational performance, and transferability. It is further demonstrated that hierarchical causal Markov models such as hierarchical maximum a posteriori (HMAP) and hierarchical marginal posterior mode (HMPM) estimation can be effectively used for modeling the inter-spatial context of X-band SAR data for flood and change detection purposes. Although the HMPM estimator is computationally more demanding than the HMAP estimator, it is found to be more suitable in terms of classification accuracy. Further, it offers the possibility to compute marginal posterior entropy-based confidence maps, which are used for the generation of flood possibility maps that express the uncertainty in the labeling of each image element.
The supplementary integration of intra-spatial and, optionally, temporal contextual information into the Markov model results in a reduction of classification errors. It is observed that applying the hybrid multi-contextual Markov model on irregular graphs enhances classification results in comparison to modeling on the regular structure of quadtrees, which is the hierarchical image representation usually used in MRF-based image analysis. X-band SAR systems are generally not suited for detecting flooding under dense vegetation canopies such as forests, due to the low capability of the X-band signal to penetrate into such media. Within this thesis a method is proposed for the automatic derivation of flood areas beneath shrubs and grasses from TerraSAR-X data. Furthermore, an approach is developed which combines high resolution topographic information with multi-scale image segmentation to enhance the mapping accuracy in areas consisting of flooded vegetation and anthropogenic objects, and to remove non-water look-alike areas.

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them to a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have the visual features, such as road networks, shorelines and water bodies, been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defense mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object-tracking capability.
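    The robust data-association step can be sketched as nearest-neighbour matching with a distance gate and an ambiguity (ratio) test. The thesis associates richer semantic features; the 2-D positions, gate and ratio values below are purely illustrative assumptions.

```python
import math

def associate(detected, reference, gate, ratio=0.8):
    """Match each detected feature to its nearest reference feature,
    accepting the match only if it lies within `gate` and is clearly
    closer than the second-best candidate (otherwise it is ambiguous)."""
    matches = {}
    for i, d in enumerate(detected):
        dists = sorted((math.dist(d, r), j) for j, r in enumerate(reference))
        if not dists or dists[0][0] > gate:
            continue  # no reference feature close enough
        if len(dists) > 1 and dists[0][0] > ratio * dists[1][0]:
            continue  # second candidate nearly as close: reject as ambiguous
        matches[i] = dists[0][1]
    return matches
```

    Rejecting ambiguous matches rather than forcing a decision is what keeps the localisation solution robust: a wrong association would corrupt the position fix, while a missed one merely reduces the number of constraints.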

    Code-Aligned Autoencoders for Unsupervised Change Detection in Multimodal Remote Sensing Images

    Image translation with convolutional autoencoders has recently been used as an approach to multimodal change detection (CD) in bitemporal satellite images. A main challenge is the alignment of the code spaces by reducing the contribution of change pixels to the learning of the translation function. Many existing approaches train the networks by exploiting supervised information about the change areas, which, however, is not always available. We propose to extract relational pixel information captured by domain-specific affinity matrices at the input and use this to enforce alignment of the code spaces and reduce the impact of change pixels on the learning objective. A change prior is derived in an unsupervised fashion from pixel-pair affinities that are comparable across domains. To achieve code space alignment, we enforce pixels with similar affinity relations in the input domains to be correlated also in code space. We demonstrate the utility of this procedure in combination with cycle consistency. The proposed approach is compared with state-of-the-art machine learning and deep learning algorithms. Experiments conducted on four real and representative datasets show the effectiveness of our methodology.
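    The affinity idea can be sketched on 1-D pixel values: within-domain affinities are comparable across modalities because they depend only on relative differences, and a pixel whose affinity row differs strongly between the two acquisitions receives a high change prior. This is a minimal illustration, not the paper's exact formulation; the Gaussian kernel and its widths are assumptions.

```python
import math

def affinity_matrix(pixels, sigma):
    """Gaussian affinity between every pair of pixel values in one
    domain; comparable across domains because it depends only on
    within-domain value differences, not absolute radiometry."""
    n = len(pixels)
    return [[math.exp(-((pixels[i] - pixels[j]) ** 2) / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

def change_prior(px_t1, px_t2, sigma1, sigma2):
    """Per-pixel prior: mean absolute difference between the pixel's
    affinity row at time 1 and at time 2. A pixel whose relations to
    all other pixels changed gets a high prior."""
    a1 = affinity_matrix(px_t1, sigma1)
    a2 = affinity_matrix(px_t2, sigma2)
    n = len(px_t1)
    return [sum(abs(a1[i][j] - a2[i][j]) for j in range(n)) / n
            for i in range(n)]
```

    In the paper this prior down-weights likely change pixels in the translation loss, so the autoencoders learn the cross-domain mapping mainly from unchanged areas.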

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched for environmental, military and urban-planning applications, through technologies such as the monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to create a simulated image in place of an unobtained image at a required time. The proposed methodologies provide economical utility in the generation of image learning materials and time-series data through image simulation. The six published articles cover various topics and applications central to remote-sensing image simulation. Although submission to this Special Issue is now closed, the need for further in-depth research and development related to image simulation at high spatial and spectral resolution, sensor fusion and colorization remains. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Determination of surface water area using multitemporal SAR imagery

    Inland water and freshwater constitute a valuable natural resource in economic, cultural, scientific and educational terms. Their conservation and management are critical to the interests of all humans, nations and governments. In many regions these precious heritages are in crisis. The main focus of this research is to investigate the capability of multitemporal ENVISAT ASAR imagery to extract water surfaces and to assess the variations in the water surface area of Lake Poyang in the Yangtze river basin, the largest freshwater lake in China. The lake has been in a critical situation in recent years owing to a decrease in surface water caused by climate change and human activities. In order to classify water and land areas and to derive the temporal changes of the water surface area from ASAR images over the period 2006-2011, an image segmentation technique was implemented. For this purpose, a thorough analysis of the SAR system and its properties is first given. Several impairments can affect SAR imaging signals: different types of scattering, surface roughness, the dielectric properties of water, speckle and geometric distortions can all reduce SAR image quality. To avoid these distortions or to reduce their impact, it is important to pre-process SAR images effectively and accurately. All the images were pre-processed using the NEST software provided by ESA. To calculate the water surface area, each image was tiled into nine parts and then segmented using two different methods. In the first method, the histogram of each tile is examined: using a locally adaptive thresholding technique, two local maxima are determined on the histogram, and a local minimum between them is taken as the threshold. In the second method, a Gaussian curve is fitted using the Levenberg-Marquardt method (Levenberg, 1944; Marquardt, 1963) to obtain a threshold.
These thresholds are used to segment the image into homogeneous land and water regions. The time series for both methods are then derived from the estimated water surface areas. The results indicate a strong decreasing trend in the Poyang Lake surface area during the period 2006-2011. Between 2010 and 2011 in particular, the lake lost a significant part of its surface area compared to 2006. Finally, the results are presented for both the locally adaptive thresholding and the Levenberg-Marquardt methods; they illustrate the effectiveness of the locally adaptive thresholding method for detecting water surface change. Continuous monitoring of water surface change would lead to a long-term time series, which would be beneficial for water management purposes.
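    The first thresholding method described above (two local maxima on the histogram, with the valley in between as the threshold) can be sketched per tile as follows; a minimal version, assuming a reasonably clean bimodal histogram:

```python
def valley_threshold(hist):
    """Return the bin of the lowest count between the two highest local
    maxima of the histogram (the valley between the water and land
    modes), or None if the tile is not bimodal."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
    if len(peaks) < 2:
        return None  # unimodal tile: no reliable land/water threshold
    # Keep the two strongest peaks, in left-to-right order.
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # The valley is the lowest-count bin between them.
    return min(range(p1, p2 + 1), key=lambda i: hist[i])
```

    Tiling the scene before thresholding is what makes the approach locally adaptive: each tile gets its own valley, so a single global threshold is never imposed on radiometrically different parts of the image.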