655 research outputs found

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. Although remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be applied to DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.

    Visualization of hyperspectral images on parallel and distributed platform: Apache Spark

    Get PDF
    The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is challenging because the number of bands exceeds three, so direct visualization with the standard red, green and blue (RGB) or hue, saturation and lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three dimensions and then assign each dimension to a color channel. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, large hyperspectral images are visualized in less time and with the same quality as the classical visualization method.
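
    A minimal sketch of the core idea, distributed PCA reduction to three components that are then rescaled and used as RGB channels, using Spark MLlib. The column names, the flattened pixel-table layout, and the load_pixels() helper are assumptions for illustration, not the paper's implementation.

```python
# Sketch (not the paper's code): distributed PCA to 3 components with Spark
# MLlib, rescaled to [0, 255] so each component can act as an RGB channel.
from pyspark.sql import SparkSession
from pyspark.ml.feature import PCA, MinMaxScaler
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("hsi-pca-visualization").getOrCreate()

# Hypothetical input: one row per pixel, 'spectrum' holds all spectral bands.
pixels = spark.createDataFrame(
    [(r, c, Vectors.dense(spec)) for (r, c, spec) in load_pixels()],  # load_pixels() is assumed
    ["row", "col", "spectrum"],
)

pca = PCA(k=3, inputCol="spectrum", outputCol="pcs")
projected = pca.fit(pixels).transform(pixels)

scaler = MinMaxScaler(min=0.0, max=255.0, inputCol="pcs", outputCol="rgb")
rgb = scaler.fit(projected).transform(projected).select("row", "col", "rgb")

# 'rgb' now holds one 3-vector per pixel that can be written back as a color image.
```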

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Get PDF
    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
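
    As a concrete illustration of the linear mixing model behind most of the approaches surveyed above, the sketch below estimates per-pixel abundances for known endmember spectra with non-negative least squares and then renormalizes them to sum to one. It is only an approximation of fully constrained unmixing, and the array names and synthetic data are illustrative assumptions.

```python
# Sketch of linear unmixing under the linear mixing model y = E a + noise,
# with non-negativity enforced by NNLS and sum-to-one applied by renormalization.
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)."""
    E = endmembers.T                            # (n_bands, n_endmembers)
    abundances = np.zeros((pixels.shape[0], endmembers.shape[0]))
    for i, y in enumerate(pixels):
        a, _ = nnls(E, y)                       # non-negative least squares
        s = a.sum()
        abundances[i] = a / s if s > 0 else a   # approximate sum-to-one constraint
    return abundances

# Synthetic example: 2 endmembers, 5 bands, one mixed pixel.
E = np.array([[0.1, 0.2, 0.3, 0.4, 0.5],
              [0.5, 0.4, 0.3, 0.2, 0.1]])
y = 0.7 * E[0] + 0.3 * E[1]
print(unmix(y[None, :], E))                     # approximately [[0.7, 0.3]]
```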

    Tree species classification from AVIRIS-NG hyperspectral imagery using convolutional neural networks

    Full text link
    This study focuses on the automatic classification of tree species using a three-dimensional convolutional neural network (CNN) based on field-sampled ground reference data, a LiDAR point cloud and AVIRIS-NG airborne hyperspectral remote sensing imagery with 2 m spatial resolution acquired on 14 June 2021. I created a tree species map for my 10.4 km2 study area, which is located in the Jurapark Aargau, a Swiss regional park of national interest. I collected ground reference data for six major tree species present in the study area (Quercus robur, Fagus sylvatica, Fraxinus excelsior, Pinus sylvestris, Tilia platyphyllos, total n = 331). To match the sampled ground reference to the AVIRIS-NG 425-band hyperspectral imagery, I delineated individual tree crowns (ITCs) from a canopy height model (CHM) derived from the LiDAR point cloud data. After matching the ground reference data to the hyperspectral imagery, I split the extracted image patches into training, validation, and testing subsets. The amount of training, validation and testing data was increased by applying image augmentation, rotating, flipping, and changing the brightness of the original input data. The classifier is a CNN trained on the first 32 principal components (PCs) extracted from the AVIRIS-NG data. The CNN uses image patches of 5 × 5 pixels and consists of two convolutional layers and two fully connected layers, the latter of which perform the final classification using the softmax activation function. The results show that the CNN classifier outperforms comparable conventional classification methods. The CNN model is able to predict the correct tree species with an overall accuracy of 70% and an average F1-score of 0.67. A random forest classifier reached an overall accuracy of 67% and an average F1-score of 0.61, while a support-vector machine classified the tree species with an overall accuracy of 66% and an average F1-score of 0.62. This work highlights that CNNs based on imaging spectroscopy data can produce highly accurate, high-resolution tree species distribution maps from a relatively small set of training data, thanks to the high dimensionality of hyperspectral images and the ability of CNNs to exploit both spatial and spectral features of the data. These maps provide valuable input for modelling the distributions of other plant and animal species and ecosystem services. In addition, this work illustrates the importance of direct collaboration with environmental practitioners to ensure user needs are met. This aspect will be evaluated further in future work by assessing how these products are used by environmental practitioners and as input for modelling purposes.
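
    A minimal PyTorch sketch of the patch-based architecture described above: 5 × 5 pixel patches with 32 principal-component channels, two convolutional layers, and two fully connected layers ending in a softmax over the tree species classes. The layer widths and kernel sizes are assumptions, since the abstract does not specify them.

```python
# Sketch (layer widths assumed): a small CNN for 5x5 patches of 32 PCA channels.
import torch
import torch.nn as nn

class TreeSpeciesCNN(nn.Module):
    def __init__(self, n_classes=6, n_pcs=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_pcs, 64, kernel_size=3, padding=1),   # 5x5 -> 5x5
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),     # 5x5 -> 5x5
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 5 * 5, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),   # softmax applied by the loss / at inference
        )

    def forward(self, x):                # x: (batch, 32, 5, 5)
        return self.classifier(self.features(x))

model = TreeSpeciesCNN()
logits = model(torch.randn(8, 32, 5, 5))
probs = torch.softmax(logits, dim=1)     # per-patch class probabilities
```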

    On Understanding Big Data Impacts in Remotely Sensed Image Classification Using Support Vector Machine Methods

    Get PDF
    Owing to recent improvements in sensor resolution onboard different Earth observation platforms, remote sensing is an important source of information for mapping and monitoring natural and man-made land covers. Of particular importance are the increasing amounts of available hyperspectral data originating from airborne and satellite sensors such as AVIRIS, HyMap, and Hyperion, whose very high spectral resolution (i.e., high number of spectral channels) contains rich information for a wide range of applications. A relevant example is the separation of different types of land-cover classes using these data in order to understand, e.g., the impacts of natural disasters or changes in city buildings over time. More recently, such increases in data volume, velocity, and variety have contributed to the term big data, which stands for challenges shared with many other scientific disciplines. On the one hand, the amount of available data is increasing in a way that raises the demand for automatic data analysis, since many of the available data collections are massively underutilized, lacking experts for manual investigation. On the other hand, proven statistical methods (e.g., dimensionality reduction) driven by manual approaches have a significant impact in reducing big data to smaller smart data, contributing to the more recently used terms data value and veracity (i.e., less noise and lower dimensions that capture the most important information). This paper takes stock of which proven statistical data mining methods in remote sensing contribute to smart data analysis processes in the light of possible automation as well as scalable and parallel processing techniques. We focus on parallel support vector machines (SVMs) as one of the best out-of-the-box classification methods. Sponsored by the IEEE Geoscience & Remote Sensing Society. Peer-reviewed preprint.
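
    To make the "big data to smart data" reduction step concrete, the sketch below chains PCA-based dimensionality reduction with an SVM classifier in scikit-learn. It is a single-node illustration of the workflow with synthetic data, not the parallel SVM implementation studied in the paper.

```python
# Sketch: dimensionality reduction (big data -> smart data) followed by SVM
# classification of hyperspectral pixels. Single-node illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical data: X holds per-pixel spectra, y holds land-cover labels.
X = np.random.rand(2000, 200)            # 2000 pixels, 200 spectral bands
y = np.random.randint(0, 5, size=2000)   # 5 land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                # keep the most informative directions
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```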

    Feature extraction and fusion for classification of remote sensing imagery

    Get PDF

    Hyperspectral Remote Sensing Data Analysis and Future Challenges

    Full text link

    Reconfigurable Computing for Space

    Get PDF

    A hybrid CUDA, OpenMP, and MPI parallel TCA-based domain adaptation for classification of very high-resolution remote sensing images

    Get PDF
    Domain Adaptation (DA) is a technique that aims at extracting information from a labeled remote sensing image in order to classify a different image obtained by the same sensor but at a different geographical location. This is a very complex problem from the computational point of view, especially due to the very high resolution of multispectral images. TCANet is a deep learning neural network for DA classification problems that has proven very accurate at solving them. TCANet consists of several stages based on the application of convolutional filters obtained through Transfer Component Analysis (TCA) computed over the input images. It does not require backpropagation training, in contrast to the usual CNN-based networks, as the convolutional filters are computed directly from the TCA transform applied to the training samples. In this paper, a hybrid parallel TCA-based domain adaptation technique for the classification of very high-resolution multispectral images is presented. It is designed for efficient execution on a multi-node computer by using the Message Passing Interface (MPI), exploiting the available Graphics Processing Units (GPUs), and making efficient use of each multicore node through Open Multi-Processing (OpenMP). As a result, a DA technique that is accurate from the classification point of view and achieves high speedups over the sequential version is obtained, increasing the applicability of the technique to real problems. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was supported in part by the Ministerio de Ciencia e Innovación, Government of Spain (grant numbers PID2019-104834GB-I00 and TED2021-130367B-I00), the Consellería de Educación, Universidade e Formación Profesional (grant numbers ED431G-2019/04, 2019–2022, and ED431C 2022/16, 2021–2024), and by the Junta de Castilla y León (project VA226P20, PROPHET II). All are co-funded by the European Regional Development Fund (ERDF).
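
    A simplified single-node sketch of the Transfer Component Analysis step that underlies TCANet's filters, written with a linear kernel in NumPy. It follows the standard TCA formulation (leading eigenvectors of (K L K + mu I)^-1 K H K); the regularization value, embedding dimensionality, and sample layout are assumptions, and the paper's hybrid MPI/OpenMP/CUDA parallelization is not reproduced here.

```python
# Simplified Transfer Component Analysis (linear kernel, single node).
# W comes from the leading eigenvectors of (K L K + mu I)^{-1} K H K,
# following the standard TCA formulation; mu and k are assumed values.
import numpy as np

def tca(Xs, Xt, k=10, mu=1.0):
    """Xs: (ns, d) source samples; Xt: (nt, d) target samples."""
    X = np.vstack([Xs, Xt])
    ns, nt, n = len(Xs), len(Xt), len(Xs) + len(Xt)

    K = X @ X.T                                        # linear kernel matrix
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    L = np.outer(e, e)                                 # MMD coefficient matrix
    H = np.eye(n) - np.full((n, n), 1.0 / n)           # centering matrix

    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    eigvals, eigvecs = np.linalg.eig(A)
    W = np.real(eigvecs[:, np.argsort(-np.real(eigvals))[:k]])
    return K @ W                                       # embedded source + target samples

Z = tca(np.random.rand(100, 50), np.random.rand(80, 50), k=10)
print(Z.shape)   # (180, 10)
```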