Deep learning-based change detection in remote sensing images: a review
Images gathered from different satellites are now widely available thanks to the rapid development of remote sensing (RS) technology. These images significantly enhance the data sources for change detection (CD). CD is a technique for recognizing the dissimilarities between images acquired at distinct intervals and is used for numerous applications, such as urban area development, disaster management, land cover object identification, etc. In recent years, deep learning (DL) techniques have been used extensively in change detection processes, where they have achieved great success because of their practical applicability. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques, such as supervised, unsupervised, and semi-supervised learning, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. In the end, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
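As a rough illustration of the CD task the review surveys, the simplest classical baseline is image differencing with a threshold. The function and arrays below are made up for illustration; they are not one of the deep learning methods covered by the review.

```python
import numpy as np

def change_map(img_t1, img_t2, threshold=0.2):
    """Return a boolean change mask from two co-registered images."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    return diff > threshold

t1 = np.zeros((4, 4))
t2 = np.zeros((4, 4))
t2[1:3, 1:3] = 1.0          # a "new structure" appears between dates
mask = change_map(t1, t2)
print(mask.sum())           # 4 changed pixels
```

DL-based CD methods replace the hand-set threshold and raw difference with learned features, but the input/output contract (two dates in, a change mask out) is the same.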
Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image. This fact has
led to the expansion of restoration techniques along different paths according to each sensor type. This review paper brings together advances in image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox.
Comment: This paper is under review in GRS.
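A toy sketch of the restoration problem the paper surveys: observe a degraded image y = x + n and estimate the true image x. The 3x3 mean filter below is a deliberately simple stand-in for the SAR/HSI restoration methods the review actually covers, and all names here are illustrative.

```python
import numpy as np

def mean_filter(img):
    """Denoise with a 3x3 box filter (edges handled by padding)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out

rng = np.random.default_rng(0)
clean = np.ones((16, 16))                       # the true unknown image
noisy = clean + rng.normal(0, 0.3, clean.shape) # degraded observation
restored = mean_filter(noisy)
# filtering should move the observation closer to the true image
print(np.abs(restored - clean).mean() < np.abs(noisy - clean).mean())
```

Real restoration methods model the sensor-specific degradation (e.g., speckle for SAR, stripes for HSI) rather than assuming generic additive Gaussian noise.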
Bayesian gravitation based classification for hyperspectral images
Integration of spectral and spatial information is extremely important for the classification of high-resolution hyperspectral images (HSIs). Gravitation describes the interaction among celestial bodies, and it can be applied to measure similarity between data for image classification. However, gravitation is hard to combine with spatial information and has rarely been applied in HSI classification. This paper proposes a Bayesian Gravitation based Classification (BGC) method to integrate the spectral and spatial information of local neighbors and training samples. In the BGC method, each testing pixel is first modeled as a massive object with unit volume and a particular density, where the density is taken as the data mass. Specifically, the data mass is formulated as an exponential function of the spectral distribution of the pixel's neighbors and the spatial prior distribution of its surrounding training samples, based on the Bayesian theorem. Then, a joint data gravitation model is developed as the classification measure, in which the data mass weighs the contribution of different neighbors in a local region. Four benchmark HSI datasets, i.e. the Indian Pines, Pavia University, Salinas, and Grss_dfc_2014 datasets, are used to verify the BGC method. The experimental results are compared with those of several well-known HSI classification methods, including support vector machines, sparse representation, and eight other state-of-the-art HSI classification methods. BGC shows clear superiority in the classification of high-resolution HSIs and also flexibility for HSIs with limited training samples.
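The gravitation analogy can be sketched in a few lines: each training sample pulls a test pixel with a "force" proportional to mass over squared distance, and the class with the largest total pull wins. The sketch below assumes a uniform unit mass; the paper's BGC method instead derives the mass from a Bayesian spectral-spatial prior, which this simplification omits. All names and data are illustrative.

```python
import numpy as np

def gravitation_classify(x, X_train, y_train, eps=1e-8):
    """Assign x to the class exerting the largest total gravitation."""
    d2 = ((X_train - x) ** 2).sum(axis=1) + eps   # squared distances
    forces = 1.0 / d2                             # unit-mass gravitation m/d^2
    classes = np.unique(y_train)
    pulls = [forces[y_train == c].sum() for c in classes]
    return classes[int(np.argmax(pulls))]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
print(gravitation_classify(np.array([0.05, 0.05]), X, y))  # -> 0
```

Because force decays with squared distance, nearby training samples dominate the decision, which is what lets the full method localize its evidence to a spatial neighborhood.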
Permuted Spectral and Permuted Spectral-Spatial CNN Models for PolSAR-Multispectral Data based Land Cover Classification
It is a challenge to develop methods that can process the PolSAR and multispectral (MS) data modalities together without losing information from either for remote sensing applications. This paper presents a study that introduces novel deep learning based remote sensing data processing frameworks utilizing convolutional neural networks (CNNs) in both the spatial and spectral domains to perform land cover (LC) classification with PolSAR-MS data. Since remotely sensed earth observation data usually have larger spectral depth than normal camera image data, exploiting the spectral information in remote sensing (RS) data is crucial as well. In fact, convolutions in the sub-spectral space are an intuitive alternative to the process of feature selection. Recently, researchers have succeeded in exploiting the spectral information of RS data, especially hyperspectral data, with CNNs. In this paper, exploitation of the spectral information in PolSAR-MS data via a permuted localized spectral convolution along with a localized spatial convolution is proposed. Further, the study establishes the significance of performing permuted localized spectral convolutions over non-localized or localized spectral convolutions. Two models are proposed, namely a permuted local spectral convolutional network (Perm-LS-CNN) and a permuted local spectral-spatial convolutional network (Perm-LSS-CNN). These models are trained on ground-truth class data points measured directly on the terrain. Generalization performance is evaluated using ground-truth knowledge of selected well-known regions in the study areas. Comparison with other popular machine learning classifiers shows that the Perm-LSS-CNN model provides better classification results in terms of both accuracy and generalization.
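The core idea of a permuted localized spectral convolution can be sketched as follows: the band order of one pixel's spectral vector is permuted, then a small 1D kernel slides over local spectral windows. The kernel weights, permutation, and sizes below are illustrative inventions, not the trained Perm-LS-CNN from the paper.

```python
import numpy as np

def permuted_spectral_conv(pixel_spectrum, kernel, perm):
    """Valid 1D convolution over a band-permuted spectral vector."""
    permuted = pixel_spectrum[perm]               # reorder the bands
    k = len(kernel)
    return np.array([
        np.dot(permuted[i:i + k], kernel)         # local spectral window
        for i in range(len(permuted) - k + 1)
    ])

spectrum = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # one PolSAR-MS pixel
perm = np.array([4, 2, 0, 3, 1])                  # a fixed band permutation
kernel = np.array([0.5, 0.5])                     # local averaging kernel
print(permuted_spectral_conv(spectrum, kernel, perm))  # [4.  2.  2.5 3. ]
```

Permuting the bands changes which bands end up adjacent, so a localized kernel can mix band combinations that a fixed band order would never place in the same window; in the real models the permutation and kernels are part of the learned architecture.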
Sign language detection using convolutional neural network for teaching and learning application
Teaching lower-school mathematics should be accessible to everyone. In situations where the teacher cannot speak, for example because of a vocal cord infection, severe spasmodic dysphonia, or another disability, sign language is the answer. However, teaching becomes difficult when the audience does not understand sign language. Thus, the purpose of this research is to design a sign language detection scheme for teaching and learning activities. In this research, images of hand gestures from the teacher or presenter are captured with a web camera, and the system predicts and displays the name of each gesture. The proposed scheme detects hand movements and converts them into meaningful information. The results show that the model is the most consistent in terms of accuracy and loss compared to other methods. Furthermore, the proposed algorithm is expected to contribute to the body of knowledge and to society.
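The building block of the CNN described above is 2D convolution over the captured gesture image. The abstract does not publish the network's architecture, so the snippet below only writes out that generic operation in numpy on a toy "hand boundary" image; the image and Sobel kernel are illustrative, and a real system would stack many learned filters plus a classifier head.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation) of img with kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kernel).sum()
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0                          # vertical edge: toy "hand boundary"
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
features = conv2d(img, sobel_x)
print(features.shape)                     # (3, 3)
```

Stacks of such filters, learned from labeled gesture images rather than fixed like this Sobel kernel, are what let the network map a webcam frame to a gesture name.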
Advances in Image Processing, Analysis and Recognition Technology
For many decades, researchers have been trying to make computers' analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation, and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In fact, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches.
Deep learning methods applied to digital elevation models: state of the art
Deep Learning (DL) has a wide variety of applications in various
thematic domains, including spatial information. Although with
limitations, it is also starting to be considered in operations
related to Digital Elevation Models (DEMs). This study aims to
review the methods of DL applied in the field of altimetric spatial
information in general, and DEMs in particular. Void Filling (VF),
Super-Resolution (SR), landform classification and hydrography
extraction are just some of the operations where traditional methods
are being replaced by DL methods. Our review concludes
that although these methods have great potential, there are
aspects that need to be improved. More appropriate terrain information
or algorithm parameterisation are some of the challenges
that this methodology still needs to face.
Funding: 'Functional Quality of Digital Elevation Models in Engineering' project of the State Research Agency of Spain, PID2019-106195RB-I00/AEI/10.13039/50110001103
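Void Filling, one of the DEM operations named above, has a very simple classical baseline that the review says DL methods are starting to replace: fill each void cell with the mean of its valid neighbours. The sketch below assumes voids are encoded as NaN, and the DEM values are made up for illustration.

```python
import numpy as np

def fill_voids(dem):
    """Fill NaN voids with the mean of their valid 3x3 neighbours."""
    filled = dem.copy()
    padded = np.pad(dem, 1, mode="constant", constant_values=np.nan)
    for i, j in zip(*np.where(np.isnan(dem))):
        window = padded[i:i+3, j:j+3]
        valid = window[~np.isnan(window)]
        if valid.size:                    # leave the void if no neighbour is valid
            filled[i, j] = valid.mean()
    return filled

dem = np.array([[10.0, 10.0, 10.0],
                [10.0, np.nan, 12.0],
                [12.0, 12.0, 12.0]])
print(fill_voids(dem)[1, 1])   # mean of the 8 valid neighbours = 11.0
```

DL-based VF and SR methods aim to reconstruct plausible terrain structure (ridges, channels) that such a local average necessarily smooths away.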