
    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are widely available these days owing to the rapid development of remote sensing (RS) technology. These images significantly enrich the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at distinct times, and it is used for numerous applications, such as urban development monitoring, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been applied extensively to change detection, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques (supervised, unsupervised, and semi-supervised) for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. In the end, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
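    As a minimal illustration of the core CD idea described above (not any specific method from the review), the sketch below flags changed pixels by thresholding the absolute difference of two co-registered images; the global mean-plus-one-standard-deviation threshold is an assumption chosen for simplicity.

```python
import numpy as np

def change_map(img_t1, img_t2, threshold=None):
    """Binary change map from two co-registered single-band images.

    If no threshold is given, a simple global rule (mean + 1 std of the
    difference image) is used; real CD pipelines use far more robust
    thresholding or classification.
    """
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    if threshold is None:
        threshold = diff.mean() + diff.std()
    return diff > threshold

# Toy example: a 4x4 scene where only the top-left 2x2 block changed.
t1 = np.zeros((4, 4))
t2 = t1.copy()
t2[:2, :2] = 10.0
cm = change_map(t1, t2)
print(int(cm.sum()))  # → 4 changed pixels
```

    In practice the difference image is built from calibrated, atmospherically corrected bands, and the binary decision is where most of the methods surveyed above differ.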

    An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images

    Due to decreasing technological costs and the increasing number of satellite launches, satellite images are becoming more popular and easier to obtain. Besides serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. As a matter of fact, satellite images can be easily manipulated with general image editing tools. Moreover, with the surge of Deep Neural Networks (DNNs) that can generate realistic synthetic imagery belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the State of the Art (SOTA) on the generation and manipulation of satellite images. In particular, we focus on both the generation of synthetic satellite imagery from scratch and the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe forensic detection techniques that have been researched so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic contents, we also review some methods designed for general splicing detection, which can in principle also be used to spot AI-manipulated images. Comment: 25 pages, 17 figures, 5 tables, APSIPA 202

    Deep Learning based Automated Forest Health Diagnosis from Aerial Images

    Global climate change has had a drastic impact on our environment. A previous study showed that pest disasters arising from global climate change may cause a tremendous number of trees to die, and these dead trees inevitably become a factor in forest fires. An important portent of forest fire is therefore the condition of the forest. Aerial image-based forest analysis can give an early detection of dead and living trees. In this paper, we apply a synthetic method to enlarge an imagery dataset and present a new framework for automated dead tree detection from aerial images using a re-trained Mask R-CNN (Mask Region-based Convolutional Neural Network) approach with a transfer learning scheme. We apply our framework to our aerial imagery datasets and compare eight fine-tuned models. The mean average precision (mAP) score for the best of these models reaches 54%. Following the automated detection, we are able to automatically produce and count dead tree masks to label the dead trees in an image, as an indicator of forest health that could be linked to the causal analysis of environmental changes and the predictive likelihood of forest fire.
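    To make the mAP figure above concrete, the sketch below computes precision and recall for detections at a single IoU threshold via greedy, score-ordered matching; mAP then averages precision over recall levels and IoU thresholds. For simplicity, axis-aligned boxes stand in for the paper's instance masks, and the toy detections are invented for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Match predictions (score, box) to ground-truth boxes, highest score
    first; each ground truth may be matched at most once."""
    matched = set()
    tp = 0
    for _, box in sorted(preds, key=lambda p: -p[0]):
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j not in matched and iou(box, g) > best:
                best, best_j = iou(box, g), j
        if best >= thr:
            matched.add(best_j)
            tp += 1
    return tp / len(preds), tp / len(gts)

# Two correct detections plus one false alarm against two ground truths.
preds = [(0.9, (0, 0, 10, 10)), (0.8, (20, 20, 30, 30)), (0.5, (100, 100, 110, 110))]
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
p, r = precision_recall(preds, gts)
print(round(p, 3), round(r, 3))  # → 0.667 1.0
```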

    Artificial Neural Networks and Evolutionary Computation in Remote Sensing

    Artificial neural networks (ANNs) and evolutionary computation methods have been successfully applied in remote sensing applications, since they offer unique advantages for the analysis of remotely sensed images. ANNs are effective in finding underlying relationships and structures within multidimensional datasets. Thanks to new sensors, we have images with more spectral bands at higher spatial resolutions, which clearly pose big data problems, and evolutionary algorithms are well suited to such analyses. This book includes eleven high-quality papers, selected after a careful reviewing process, addressing current remote sensing problems. In the chapters of the book, superstructural optimization was suggested for the optimal design of feedforward neural networks; CNNs were deployed on a nanosatellite payload to select images eligible for transmission to ground; a new weight feature value convolutional neural network (WFCNN) was applied for fine remote sensing image segmentation and extracting improved land-use information; a mask region-based convolutional neural network (Mask R-CNN) was employed for extracting valley fill faces; state-of-the-art convolutional neural network (CNN)-based object detection models were applied to automatically detect airplanes and ships in VHR satellite images; a coarse-to-fine detection strategy was employed to detect ships of different sizes; and a deep quadruplet network (DQN) was proposed for hyperspectral image classification.

    Glint Avoidance and Removal in the Maritime Environment

    In-scene glint greatly affects the usability of maritime imagery, and several glint removal algorithms have been developed that work well in some situations. However, glint removal algorithms produce several unique artifacts when applied to very high resolution systems, particularly those with temporally offset bands. The optimal way to avoid these artifacts is to avoid imaging in areas of high glint. The glint avoidance tool (GAT) was developed to avoid glint conditions and provide a measure of parameter detectability. This work recreates the glint avoidance tool using Hydrolight, as a validation of a fast GAT built on an in-water radiative transfer model that neglects in-water scattering. Because avoiding glint is not always possible, this research also concentrates on the impact of glint and residual artifacts, using RIT's Digital Imaging and Remote Sensing Image Generation (DIRSIG) dynamic wave model with a Hydrolight back-end to create accurate Case 1 synthetic imagery. The synthetic imagery was used to analyze the impact of glint on automated anomaly detection and glint removal, and to develop a new glint compensation technique for sensors with temporally offset bands.

    Automatically generated training data for land cover classification with CNNs using Sentinel-2 images

    Pixel-wise classification of remote sensing imagery is highly interesting for tasks like land cover classification or change detection. The acquisition of large training datasets for these tasks is challenging, but necessary to obtain good results with deep learning algorithms such as convolutional neural networks (CNNs). In this paper, we present a method for the automatic generation of a large amount of training data by combining satellite imagery with reference data from an available geospatial database. Due to this combination of different data sources, the resulting training data contain a certain amount of incorrect labels. We evaluate the influence of this so-called label noise with respect to the time difference between acquisition of the two data sources, the amount of training data, and the class structure. We combine Sentinel-2 images with reference data from a geospatial database provided by the German Land Survey Office of Lower Saxony (LGLN). With different training sets we train a fully convolutional network (FCN) and classify four land cover classes (Building, Agriculture, Forest, Water). Our results show that the errors in the training samples do not have a large influence on the resulting classifiers. This is probably due to the fact that the noise is randomly distributed, so the neighbours of incorrect samples are predominantly correct. As expected, a larger amount of training data improves the results, especially for the less well represented classes. Other influences are differing illumination conditions and seasonal effects during data acquisition; to better adapt the classifier to these conditions, they should also be included in the training data. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
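    The randomly distributed label noise studied above can be simulated directly. The sketch below randomly reassigns a fraction of labels (a hypothetical helper, not the paper's pipeline); because the replacement class is drawn uniformly, the effective corruption rate is slightly below the nominal reassignment rate.

```python
import numpy as np

def inject_label_noise(labels, noise_rate, n_classes, seed=None):
    """Randomly reassign a fraction of labels to simulate randomly
    distributed label noise from imperfect reference data."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    hit = rng.random(labels.shape) < noise_rate
    noisy[hit] = rng.integers(0, n_classes, size=int(hit.sum()))
    return noisy

# With 4 classes and a 20% reassignment rate, about 20% * 3/4 = 15%
# of labels actually end up different from the originals.
labels = np.zeros(10000, dtype=int)
noisy = inject_label_noise(labels, noise_rate=0.2, n_classes=4, seed=0)
print(round(float((noisy != labels).mean()), 2))
```

    Training the same network on clean and noise-injected versions of a dataset is a simple way to reproduce the robustness experiment described above.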

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing stem volume estimation, road management, image denoising, and touchscreens.

    UBathy: a new approach for bathymetric inversion from video imagery

    A new approach to infer the bathymetry from coastal video monitoring systems is presented. The methodology uses principal component analysis of the Hilbert transform of video images to obtain the components of the wave propagation field and their corresponding frequencies and wavenumbers. Incident and reflected constituents and subharmonic components are also found. The local water depth is then successfully estimated through the wave dispersion relationship. The method is first applied to monochromatic and polychromatic synthetic wave trains propagated using linear wave theory over an alongshore-uniform bathymetry in order to analyze the influence of different parameters on the results. To assess the ability of the approach to infer the bathymetry under more realistic conditions and to explore the influence of other parameters, nonlinear wave propagation is also performed using a fully nonlinear Boussinesq-type model over a complex bathymetry. In the synthetic cases, the relative root mean square error obtained in bathymetry recovery (for water depths 0.75 m ≤ h ≤ 8.0 m) ranges from ~1% to ~3% for infinitesimal-amplitude wave cases (monochromatic or polychromatic) and reaches ~15% in the most complex case (nonlinear polychromatic waves). Finally, the new methodology is satisfactorily validated against video from a real field site.
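    The depth-inversion step above rests on the linear dispersion relationship ω² = g·k·tanh(k·h), which can be inverted for h in closed form once the frequency ω and wavenumber k have been extracted from the video. A minimal sketch of that final step (not the UBathy implementation):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert omega^2 = g * k * tanh(k * h) for the water depth h."""
    ratio = omega**2 / (G * k)
    if not 0 < ratio < 1:
        # tanh(k*h) < 1, so ratio >= 1 means the wave is effectively in
        # deep water and the depth cannot be resolved.
        raise ValueError("depth unresolved: wave is effectively in deep water")
    return np.arctanh(ratio) / k

# Consistency check: a wave over h = 3 m with a 40 m wavelength.
h_true = 3.0
k = 2 * np.pi / 40.0                           # wavenumber, rad/m
omega = np.sqrt(G * k * np.tanh(k * h_true))   # forward dispersion relation
print(round(float(depth_from_dispersion(omega, k)), 6))  # → 3.0
```

    The sensitivity of arctanh near 1 is one reason depth recovery degrades in deeper water, consistent with the error figures quoted above.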

    Monitoring desertification in Biskra, Algeria using Landsat 8 and Sentinel-1A images

    Desertification is the persistent degradation of ecosystems caused by environmental changes and human activities. It is a global problem closely related to climate change, with severe consequences in urban locations. For that reason, monitoring those locations with low-cost and freely available satellite images could be useful for local agencies. This work studies a strategy for the observation of desertification in Biskra (Algeria) with Landsat 8 images and Synthetic Aperture Radar (SAR) images from the Sentinel-1A satellite. Radar images are now available from a growing number of missions. These microwave images add valuable information on soil roughness and moisture content to existing optical products, and they are also a very valuable tool for detecting man-made objects. However, radar images are still difficult to exploit due to their inherent speckle noise and other characteristics. This study searches for the best methodology for incorporating radar data into a previously designed approach that uses optical data. Several algorithms are implemented and evaluated, and the best technique in terms of overall accuracy and Kappa coefficient is selected for the final change map production. The approach achieves Land Use Land Cover (LULC) change detection using Support Vector Machine (SVM) classification and segmentation. The most useful change indices are obtained for the best methodology's product. The simple improved methodology including radar images provides excellent results and clearly outperforms the baseline optical technique.
    Azzouzi, S. A.; Vidal Pantaleoni, A.; Bentounes, H. A. (2018). Monitoring desertification in Biskra, Algeria using Landsat 8 and Sentinel-1A images. IEEE Access, 6:30844-30854. https://doi.org/10.1109/ACCESS.2018.2837081

    Accelerated genetic algorithm based on search-space decomposition for change detection in remote sensing images

    Detecting changed areas between two or more remote sensing images is a key technique in remote sensing. It usually consists of generating and analyzing a difference image to produce a change map. Analyzing the difference image to obtain the change map is essentially a binary classification problem and can be solved by optimization algorithms. This paper proposes an accelerated genetic algorithm based on search-space decomposition (SD-aGA) for change detection in remote sensing images. First, the BM3D algorithm is used to preprocess the remote sensing images to enhance useful information and suppress noise. The difference image is then obtained using the logarithmic ratio method. Second, after saliency detection, the fuzzy c-means algorithm is applied to the salient region of the difference image to identify the changed, unchanged, and undetermined pixels. Only the undetermined pixels are considered by the optimization algorithm, which reduces the search space significantly. Inspired by the divide-and-conquer strategy, the difference image is decomposed into sub-blocks with a method similar to down-sampling, and the undetermined pixels are analyzed and optimized by SD-aGA in parallel. The category labels of the undetermined pixels in each sub-block are optimized according to an improved objective function with neighborhood information. Finally, the decisions on the category labels of all the pixels in the sub-blocks are remapped to their original positions in the difference image and merged globally. Decision fusion is conducted on each pixel based on the decision results in its local neighborhood to produce the final change map. The proposed method is tested on six diverse remote sensing image benchmark datasets and compared against six state-of-the-art methods. Segmentations of synthetic and natural images corrupted by different types of noise are also carried out for comparison. Results demonstrate the excellent performance of the proposed SD-aGA in handling noise and detecting the changed areas accurately. In particular, compared with the traditional genetic algorithm, SD-aGA obtains a much higher detection accuracy with much less computational time.
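    The first two steps of the pipeline, the log-ratio difference image and the three-way split into unchanged / undetermined / changed pixels, can be sketched as follows. The quantile-based split is a simplified stand-in for the paper's fuzzy c-means step, and the quantile values are assumptions chosen for illustration.

```python
import numpy as np

def log_ratio(img_t1, img_t2, eps=1e-6):
    """Log-ratio difference image, standard for SAR change detection;
    eps guards against division by zero and log of zero."""
    return np.abs(np.log((img_t2 + eps) / (img_t1 + eps)))

def partition(diff, low_q=0.6, high_q=0.9):
    """Split pixels into unchanged (0) / undetermined (1) / changed (2)
    by quantile thresholds; a simplified stand-in for fuzzy c-means."""
    lo, hi = np.quantile(diff, [low_q, high_q])
    labels = np.full(diff.shape, 1, dtype=int)  # 1 = undetermined
    labels[diff <= lo] = 0                      # 0 = unchanged
    labels[diff >= hi] = 2                      # 2 = changed
    return labels

# Two unchanged pixels, one ambiguous, one strongly changed.
d = log_ratio(np.array([1.0, 1.0, 1.0, 1.0]), np.array([1.0, 1.0, 2.0, 8.0]))
print(partition(d).tolist())  # → [0, 0, 1, 2]
```

    Only the pixels labeled 1 would then enter the genetic-algorithm search, which is what shrinks the search space in the method described above.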