Conditional Random Field and Deep Feature Learning for Hyperspectral Image Segmentation
Image segmentation is considered to be one of the critical tasks in
hyperspectral remote sensing image processing. Recently, convolutional neural
network (CNN) has established itself as a powerful model in segmentation and
classification by demonstrating excellent performance. The use of a graphical
model such as a conditional random field (CRF) contributes further in capturing
contextual information and thus improving the segmentation performance. In this
paper, we propose a method to segment hyperspectral images by considering both
spectral and spatial information via a combined framework consisting of CNN and
CRF. We use multiple spectral cubes to learn deep features using CNN, and then
formulate deep CRF with CNN-based unary and pairwise potential functions to
effectively extract the semantic correlations between patches consisting of
three-dimensional data cubes. Effective piecewise training is applied in order
to avoid the computationally expensive iterative CRF inference. Furthermore, we
introduce a deep deconvolution network that improves the segmentation masks. We
also introduce a new dataset and evaluate our proposed method on it along
with several widely adopted benchmark datasets to assess the effectiveness of
our method. By comparing our results with those from several state-of-the-art
models, we show the promising potential of our method.
Comment: Submitted for journal (version 2)
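The CRF component above scores a full labeling with unary potentials (how well each patch's label fits the CNN features) plus pairwise potentials (how compatible neighboring labels are). The toy sketch below evaluates such an energy on a label grid, assuming simple precomputed unary costs and a Potts pairwise term of fixed weight; the paper itself learns both potentials with CNNs, which this sketch does not attempt.

```python
import numpy as np

def crf_energy(labels, unary, pair_w):
    """Toy grid-CRF energy: sum of unary costs for the chosen labels
    plus a Potts penalty pair_w for every disagreeing 4-neighbour pair.
    (Illustrative stand-in; the paper's potentials are CNN-based.)"""
    H, W = labels.shape
    # unary term: cost of the label chosen at each pixel/patch
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    # pairwise Potts term: penalise horizontal and vertical disagreements
    e += pair_w * np.count_nonzero(labels[:, 1:] != labels[:, :-1])
    e += pair_w * np.count_nonzero(labels[1:, :] != labels[:-1, :])
    return e
```

Minimizing this energy over all labelings is the expensive CRF inference step that the paper sidesteps with piecewise training.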
Machine learning based hyperspectral image analysis: A survey
Hyperspectral sensors enable the study of the chemical properties of scene
materials remotely for the purpose of identification, detection, and chemical
composition analysis of objects in the environment. Hence, hyperspectral images
captured from earth observing satellites and aircraft have been increasingly
important in agriculture, environmental monitoring, urban planning, mining, and
defense. Machine learning algorithms, owing to their outstanding predictive
power, have become a key tool for modern hyperspectral image analysis.
Therefore, a solid understanding of machine learning techniques has become
essential for
remote sensing researchers and practitioners. This paper reviews and compares
recent machine learning-based hyperspectral image analysis methods published in
literature. We organize the methods by the image analysis task and by the type
of machine learning algorithm, and present a two-way mapping between the image
analysis tasks and the types of machine learning algorithms that can be applied
to them. The paper is comprehensive in coverage of both hyperspectral image
analysis tasks and machine learning algorithms. The image analysis tasks
considered are land cover classification, target detection, unmixing, and
physical parameter estimation. The machine learning algorithms covered are
Gaussian models, linear regression, logistic regression, support vector
machines, Gaussian mixture models, latent linear models, sparse linear models,
ensemble learning, directed graphical models,
undirected graphical models, clustering, Gaussian processes, Dirichlet
processes, and deep learning. We also discuss the open challenges in the field
of hyperspectral image analysis and explore possible future directions.
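The two-way mapping between analysis tasks and algorithm families that the survey describes can be pictured as a small lookup structure. The pairings below are illustrative examples assembled from the abstract's own lists, not the survey's complete mapping:

```python
# Illustrative task-to-algorithm lookup; pairings are examples only,
# not the survey's full two-way table.
task_to_algos = {
    "land cover classification": ["support vector machines", "deep learning"],
    "target detection": ["Gaussian models"],
    "unmixing": ["sparse linear models"],
    "physical parameter estimation": ["Gaussian processes", "linear regression"],
}

# Inverting the dictionary yields the second direction of the mapping:
# which tasks each algorithm family can be applied to.
algo_to_tasks = {}
for task, algos in task_to_algos.items():
    for a in algos:
        algo_to_tasks.setdefault(a, []).append(task)
```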
Validating Hyperspectral Image Segmentation
Hyperspectral satellite imaging attracts enormous research attention in the
remote sensing community, hence automated approaches for precise segmentation
of such imagery are being rapidly developed. In this letter, we share our
observations on the strategy for validating hyperspectral image segmentation
algorithms currently followed in the literature, and show that it can lead to
over-optimistic experimental insights. We introduce a new routine for
generating segmentation benchmarks, and use it to elaborate ready-to-use
hyperspectral training-test data partitions. They can be utilized for fair
validation of new and existing algorithms without any training-test data
leakage.
Comment: Submitted to IEEE Geoscience and Remote Sensing Letters
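The leakage the letter warns about arises when training and test pixels come from overlapping spatial neighbourhoods of the same image. One minimal way to build a leakage-free partition is to assign whole spatial blocks to either fold, so no test patch shares pixels with a training patch. This is a sketch under that assumption, not the authors' published routine:

```python
import numpy as np

def disjoint_split(height, width, block=32, test_frac=0.25, seed=0):
    """Assign whole (block x block) spatial tiles to the test fold so
    train and test patches never overlap. Returns a boolean mask where
    True marks test pixels. (Illustrative; not the paper's routine.)"""
    rng = np.random.default_rng(seed)
    by = int(np.ceil(height / block))
    bx = int(np.ceil(width / block))
    n_blocks = by * bx
    n_test = max(1, int(test_frac * n_blocks))
    test_blocks = rng.choice(n_blocks, size=n_test, replace=False)
    mask = np.zeros((height, width), dtype=bool)
    for b in test_blocks:
        r, c = divmod(int(b), bx)
        mask[r * block:(r + 1) * block, c * block:(c + 1) * block] = True
    return mask
```

Patches for training are then extracted only where the mask is False, guaranteeing spatial disjointness between folds.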
Hyperspectral Image Classification with Markov Random Fields and a Convolutional Neural Network
This paper presents a new supervised classification algorithm for remotely
sensed hyperspectral imagery (HSI) that integrates spectral and spatial
information in a unified Bayesian framework. First, we formulate the HSI
classification problem from a Bayesian perspective. Then, we adopt a
convolutional neural network (CNN) to learn the posterior class distributions
using a patch-wise training strategy to better use the spatial information.
Next, spatial information is further considered by placing a spatial smoothness
prior on the labels. Finally, we iteratively update the CNN parameters using
stochastic gradient descent (SGD) and update the class labels of all pixel
vectors using an alpha-expansion min-cut-based algorithm. Compared with other
state-of-the-art methods, the proposed classification method achieves better
performance on one synthetic dataset and two benchmark HSI datasets in a number
of experimental settings.
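The label-update step couples the CNN's per-pixel posteriors with the spatial smoothness prior. The paper uses an alpha-expansion min-cut solver; the sketch below substitutes a much simpler iterated-conditional-modes (ICM) pass as a stand-in, which still shows how a Potts-style prior pulls noisy labels toward their neighbours:

```python
import numpy as np

def smooth_labels(probs, beta=1.0, iters=2):
    """ICM stand-in for the paper's alpha-expansion step: each pixel
    takes the class maximising log p(class|pixel) + beta * (number of
    4-neighbours already carrying that class)."""
    H, W, K = probs.shape
    labels = probs.argmax(axis=2)
    logp = np.log(probs + 1e-12)
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                agree = np.zeros(K)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        agree[labels[ni, nj]] += 1
                labels[i, j] = int(np.argmax(logp[i, j] + beta * agree))
    return labels
```

A single weakly misclassified pixel surrounded by confident neighbours is flipped back, which is exactly the effect the smoothness prior is meant to have.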
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. There are a huge number of research works
dedicated to multisource and multitemporal data fusion, but the methods for the
fusion of different modalities have expanded in different paths according to
each research community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) willing to conduct novel investigations on this challenging topic
by supplying sufficient detail and references
Unsupervised Deep Feature Extraction for Remote Sensing Image Classification
This paper introduces the use of single layer and deep convolutional networks
for remote sensing data analysis. The direct application of supervised
(shallow or deep) convolutional networks to multi- and hyperspectral imagery
is very challenging given the high input data dimensionality and the relatively
small amount of available labeled data. Therefore, we propose the use of greedy
layer-wise unsupervised pre-training coupled with a highly efficient algorithm
for unsupervised learning of sparse features. The algorithm is rooted in sparse
representations and enforces both population and lifetime sparsity of the
extracted features, simultaneously. We successfully illustrate the expressive
power of the extracted representations in several scenarios: classification of
aerial scenes, land-use classification in very high resolution (VHR) imagery,
and land-cover classification from multi- and hyperspectral images. The
proposed algorithm clearly outperforms standard Principal Component Analysis
(PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art
algorithms of aerial classification, while being extremely computationally
efficient at learning representations of data. Results show that single layer
convolutional networks can extract powerful discriminative features only when
the receptive field accounts for neighboring pixels, and are preferred when the
classification requires high resolution and detailed results. However, deep
architectures significantly outperform single-layer variants, capturing
increasing levels of abstraction and complexity throughout the feature
hierarchy.
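The two sparsity notions mentioned above act on different axes of the activation matrix: population sparsity limits how many features fire per sample, lifetime sparsity limits how often each feature fires across the batch. A minimal top-k masking sketch illustrates the idea; the function name and the joint-mask rule are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def enforce_sparsity(F, k_pop=2, k_life=2):
    """Keep an activation of the (samples x features) matrix F only if
    it is among the k_pop largest in its row (population sparsity) AND
    among the k_life largest in its column (lifetime sparsity).
    (Illustrative joint mask, not the paper's exact procedure.)"""
    pop = np.zeros_like(F, dtype=bool)
    rows = np.arange(F.shape[0])[:, None]
    pop[rows, np.argsort(F, axis=1)[:, -k_pop:]] = True
    life = np.zeros_like(F, dtype=bool)
    cols = np.arange(F.shape[1])[None, :]
    life[np.argsort(F, axis=0)[-k_life:, :], cols] = True
    return F * (pop & life)
```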
Deep Neural Network Based Hyperspectral Pixel Classification With Factorized Spectral-Spatial Feature Representation
Deep learning has been widely used for hyperspectral pixel classification due
to its ability to generate deep feature representations. However, how to
construct an efficient and powerful network suitable for hyperspectral data is
still under exploration. In this paper, a novel neural network model is
designed for taking full advantage of the spectral-spatial structure of
hyperspectral data. Firstly, we extract pixel-based intrinsic features from
rich yet redundant spectral bands by a subnetwork with a supervised pre-training
scheme. Secondly, in order to utilize the local spatial correlation among
pixels, we share the previous subnetwork as a spectral feature extractor for
each pixel in a patch of the image, after which the spectral features of all
pixels in a patch are combined and fed into the subsequent classification
subnetwork. Finally, the whole network is further fine-tuned to improve its
classification performance. Specifically, a spectral-spatial factorization
scheme is applied in our model architecture, making the network size and the
number of parameters considerably smaller than those of existing spectral-spatial deep
networks for hyperspectral image classification. Experiments on the
hyperspectral data sets show that, compared with some state-of-the-art deep
learning methods, our method achieves better classification results while
having a smaller network size and fewer parameters.
Comment: 12 pages, 10 figures
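The parameter saving from factorization comes from sharing one spectral extractor across every pixel in the patch rather than learning per-pixel weights. A shape-level sketch, with a hypothetical weight matrix `W_spec` standing in for the pre-trained spectral subnetwork:

```python
import numpy as np

def factorized_features(patch, W_spec):
    """Shape sketch of the factorized scheme: one shared spectral
    weight matrix W_spec of shape (bands, d) is applied to every pixel
    of a (h x w x bands) patch, then the per-pixel features are
    flattened for the downstream classification subnetwork.
    (W_spec is a hypothetical stand-in for the spectral subnetwork.)"""
    h, w, bands = patch.shape
    feats = patch.reshape(h * w, bands) @ W_spec  # shared extractor
    return feats.reshape(-1)  # combined patch representation
```

With sharing, the spectral stage needs bands × d parameters instead of h × w × bands × d, which is the "considerably smaller" network the abstract refers to.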
Hyperspectral Image Classification with Attention Aided CNNs
Convolutional neural networks (CNNs) have been widely used for hyperspectral
image classification. As a common process, small cubes are first cropped from
the hyperspectral image and then fed into CNNs to extract spectral and spatial
features. It is well known that different spectral bands and spatial positions
in the cubes have different discriminative abilities. If fully explored, this
prior information will help improve the learning capacity of CNNs. Along this
direction, we propose an attention aided CNN model for spectral-spatial
classification of hyperspectral images. Specifically, a spectral attention
sub-network and a spatial attention sub-network are proposed for spectral and
spatial classification, respectively. Both are based on the traditional
CNN model and incorporate attention modules to help the networks focus on more
discriminative channels or positions. In the final classification phase, the
spectral classification result and the spatial classification result are
combined via an adaptively weighted summation method. To evaluate the
effectiveness of the proposed model, we conduct experiments on three standard
hyperspectral datasets. The experimental results show that the proposed model
can achieve superior performance compared to several state-of-the-art
CNN-related models
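The final fusion step blends the two sub-networks' score maps before taking the class decision. A minimal sketch, assuming the adaptive weight reduces to a learned scalar w in [0, 1] (the paper's exact weighting rule may be richer):

```python
import numpy as np

def fuse_scores(spec_scores, spat_scores, w):
    """Weighted summation of spectral and spatial class scores,
    assuming a scalar weight w learned during training; returns the
    final class index per sample."""
    assert 0.0 <= w <= 1.0
    fused = w * spec_scores + (1.0 - w) * spat_scores
    return fused.argmax(axis=-1)
```

When the spectral branch is more reliable on a dataset, training pushes w toward 1; a purely spatial decision corresponds to w = 0.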
HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification
Hyperspectral image (HSI) classification is widely used for the analysis of
remotely sensed images. Hyperspectral imagery comprises a large number of
spectral bands. The Convolutional Neural Network (CNN) is one of the most frequently used deep
learning based methods for visual data processing. The use of CNN for HSI
classification is also visible in recent works. These approaches are mostly
based on 2D CNNs, whereas HSI classification performance is highly
dependent on both spatial and spectral information. Very few methods have
utilized 3D CNNs because of their increased computational complexity. This letter
proposes a Hybrid Spectral Convolutional Neural Network (HybridSN) for HSI
classification. In essence, the HybridSN is a spectral-spatial 3D-CNN followed
by a spatial 2D-CNN. The 3D-CNN facilitates the joint spatial-spectral feature
representation from a stack of spectral bands. The 2D-CNN on top of the 3D-CNN
further learns more abstract level spatial representation. Moreover, the use of
hybrid CNNs reduces the complexity of the model compared to 3D-CNN alone. To
test the performance of this hybrid approach, very rigorous HSI classification
experiments are performed over Indian Pines, Pavia University and Salinas Scene
remote sensing datasets. The results are compared with the state-of-the-art
hand-crafted as well as end-to-end deep learning based methods. A very
satisfactory performance is obtained using the proposed HybridSN for HSI
classification. The source code can be found at
\url{https://github.com/gokriznastic/HybridSN}.
Comment: Published in IEEE Geoscience and Remote Sensing Letters
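The 3D-to-2D handoff at the heart of the hybrid design is a shape transformation: the 3D stage's output, with a residual spectral depth axis, is flattened into extra channels before the 2D stage. A shape-only sketch of that bookkeeping (the actual HybridSN layers and channel counts are in the linked repository):

```python
import numpy as np

def to_2d_input(feat3d):
    """Flatten the spectral-depth axis of a 3D-CNN feature volume
    (channels x depth x height x width) into the channel axis so a
    2D-CNN can consume it: one 2D channel per (3D-channel, band) pair."""
    c, d, h, w = feat3d.shape
    return feat3d.reshape(c * d, h, w)
```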
Going Deeper with Contextual CNN for Hyperspectral Image Classification
In this paper, we describe a novel deep convolutional neural network (CNN)
that is deeper and wider than other existing deep networks for hyperspectral
image classification. Unlike current state-of-the-art approaches in CNN-based
hyperspectral image classification, the proposed network, called contextual
deep CNN, can optimally explore local contextual interactions by jointly
exploiting local spatio-spectral relationships of neighboring individual pixel
vectors. The joint exploitation of the spatio-spectral information is achieved
by a multi-scale convolutional filter bank used as an initial component of the
proposed CNN pipeline. The initial spatial and spectral feature maps obtained
from the multi-scale filter bank are then combined to form a joint
spatio-spectral feature map. The joint feature map representing rich spectral
and spatial properties of the hyperspectral image is then fed through a fully
convolutional network that eventually predicts the corresponding label of each
pixel vector. The proposed approach is tested on three benchmark datasets: the
Indian Pines dataset, the Salinas dataset and the University of Pavia dataset.
Performance comparison shows enhanced classification performance of the
proposed approach over the current state-of-the-art on the three datasets.
Comment: 14 pages
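A multi-scale filter bank simply runs the same input through filters of several kernel sizes and stacks the responses, so small- and large-context features coexist in one map. The sketch below uses box (mean) filters as stand-ins for the paper's learned convolution kernels:

```python
import numpy as np

def box_filter(img, k):
    """Same-size k x k box filter with zero padding (a stand-in for a
    learned convolution of kernel size k)."""
    p = k // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multiscale_bank(band, sizes=(1, 3, 5)):
    """Illustrative multi-scale filter bank: filter one band at several
    kernel sizes and stack the responses as feature maps, mirroring the
    concatenation of spatial and spectral maps described above."""
    return np.stack([box_filter(band, k) for k in sizes])
```

In the actual network the stacked maps are fed onward through the fully convolutional pipeline that predicts each pixel's label.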