Segmentation-Aware Hyperspectral Image Classification
In this paper, we propose a unified hyperspectral image classification method which takes a three-dimensional hyperspectral data cube as input and produces a classification map. The proposed method combines a deep neural network, which exploits spectral and spatial information together with residual connections, with segmentation-aware superpixels obtained from a pixel affinity network. In this architecture, the segmentation-aware superpixels operate on the initial classification map of the deep residual network, and majority voting is applied to the obtained results. Experimental results show that our proposed method yields state-of-the-art results on two benchmark datasets. Moreover, we also show that the segmentation-aware superpixels contribute substantially to the success of hyperspectral image classification methods in cases where the training data are insufficient.
Comment: To appear at International Geoscience and Remote Sensing Symposium (IGARSS) 201
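A minimal sketch of the superpixel majority-voting refinement described above, assuming the initial classification map and the superpixel segmentation are available as NumPy label maps (the array names are illustrative, not from the paper):

import numpy as np

def superpixel_majority_vote(initial_map, superpixels):
    """Refine a per-pixel classification map by majority voting inside superpixels.

    initial_map : (H, W) int array of class labels from the deep residual network.
    superpixels : (H, W) int array of superpixel ids (e.g. from a
                  segmentation-aware affinity network); ids need not be contiguous.
    Returns a refined (H, W) label map.
    """
    refined = initial_map.copy()
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id
        labels, counts = np.unique(initial_map[mask], return_counts=True)
        refined[mask] = labels[np.argmax(counts)]  # most frequent label wins
    return refined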
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. A huge number of research works have been dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.
Hyperspectral Image Classification with Markov Random Fields and a Convolutional Neural Network
This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSIs) which integrates spectral and spatial
information in a unified Bayesian framework. First, we formulate the HSI
classification problem from a Bayesian perspective. Then, we adopt a
convolutional neural network (CNN) to learn the posterior class distributions
using a patch-wise training strategy to better use the spatial information.
Next, spatial information is further considered by placing a spatial smoothness
prior on the labels. Finally, we iteratively update the CNN parameters using
stochastic gradient descent (SGD) and update the class labels of all pixel
vectors using an alpha-expansion min-cut-based algorithm. Compared with other
state-of-the-art methods, the proposed classification method achieves better
performance on one synthetic dataset and two benchmark HSI datasets in a number
of experimental settings.
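A minimal sketch of the spatial-smoothness step, assuming the CNN's per-pixel log posteriors are available as a NumPy array. Note that simple iterated conditional modes (ICM) is substituted here for the paper's alpha-expansion min-cut optimizer, so this only illustrates the effect of the smoothness prior:

import numpy as np

def icm_smooth_labels(log_probs, beta=1.0, iters=5):
    """Apply a Potts-style spatial smoothness prior to per-pixel class scores.

    log_probs : (H, W, C) array of log posterior probabilities from the CNN.
    beta      : strength of the smoothness prior.
    The paper optimizes the labels with an alpha-expansion min-cut algorithm;
    this sketch uses ICM as a simple stand-in.
    """
    h, w, c = log_probs.shape
    labels = log_probs.argmax(axis=-1)
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                # Count neighbour agreement for each candidate class.
                neigh = [labels[y, x] for y, x in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= y < h and 0 <= x < w]
                agree = np.array([sum(n == k for n in neigh) for k in range(c)])
                labels[i, j] = np.argmax(log_probs[i, j] + beta * agree)
    return labels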
Hyperspectral Image Classification with Attention Aided CNNs
Convolutional neural networks (CNNs) have been widely used for hyperspectral
image classification. As a common process, small cubes are firstly cropped from
the hyperspectral image and then fed into CNNs to extract spectral and spatial
features. It is well known that different spectral bands and spatial positions
in the cubes have different discriminative abilities. If fully explored, this
prior information will help improve the learning capacity of CNNs. Along this
direction, we propose an attention aided CNN model for spectral-spatial
classification of hyperspectral images. Specifically, a spectral attention
sub-network and a spatial attention sub-network are proposed for spectral and
spatial classification, respectively. Both are based on the traditional CNN model and incorporate attention modules to help the networks focus on the more discriminative channels or positions. In the final classification phase, the spectral and spatial classification results are combined via an adaptively weighted summation method. To evaluate the
effectiveness of the proposed model, we conduct experiments on three standard
hyperspectral datasets. The experimental results show that the proposed model
can achieve superior performance compared to several state-of-the-art
CNN-related models.
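A rough PyTorch sketch of the two ingredients named above: a channel (spectral) attention module and an adaptively weighted summation of the spectral and spatial classification results. The module layout and sizes are illustrative assumptions, not the paper's exact design:

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style spectral (channel) attention; the exact
    attention module in the paper may differ, this is an illustrative variant."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool over space
        return x * w.unsqueeze(-1).unsqueeze(-1)

class WeightedFusion(nn.Module):
    """Adaptively weighted summation of spectral and spatial class scores."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned fusion weight

    def forward(self, spectral_logits, spatial_logits):
        a = torch.sigmoid(self.alpha)           # keep the weight in (0, 1)
        return a * spectral_logits + (1 - a) * spatial_logits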
Deep Neural Network Based Hyperspectral Pixel Classification With Factorized Spectral-Spatial Feature Representation
Deep learning has been widely used for hyperspectral pixel classification due to its ability to generate deep feature representations. However, how to construct an efficient and powerful network suitable for hyperspectral data is still under exploration. In this paper, a novel neural network model is designed to take full advantage of the spectral-spatial structure of hyperspectral data. Firstly, we extract pixel-based intrinsic features from the rich yet redundant spectral bands with a subnetwork trained under a supervised pre-training scheme. Secondly, in order to utilize the local spatial correlation among pixels, we share this subnetwork as a spectral feature extractor for each pixel in an image patch, after which the spectral features of all pixels in the patch are combined and fed into the subsequent classification subnetwork. Finally, the whole network is further fine-tuned to improve its classification performance. In particular, the spectral-spatial factorization scheme applied in our model architecture makes the network size and the number of parameters much smaller than those of existing spectral-spatial deep networks for hyperspectral image classification. Experiments on hyperspectral data sets show that, compared with some state-of-the-art deep learning methods, our method achieves better classification results while having a smaller network size and fewer parameters.
Comment: 12 pages, 10 figures
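A minimal PyTorch sketch of the factorization idea described above: one spectral subnetwork shared across every pixel of a patch, whose outputs are stacked and passed to a small classification subnetwork. Layer sizes and depths are placeholders, not the paper's configuration:

import torch
import torch.nn as nn

class FactorizedSpectralSpatialNet(nn.Module):
    """Illustrative spectral-spatial factorization: a per-pixel spectral
    extractor shared over the patch, followed by a classifier on the
    concatenated features."""
    def __init__(self, n_bands, patch_size=5, feat_dim=64, n_classes=16):
        super().__init__()
        self.spectral = nn.Sequential(            # shared per-pixel subnetwork
            nn.Linear(n_bands, 128), nn.ReLU(inplace=True),
            nn.Linear(128, feat_dim), nn.ReLU(inplace=True))
        self.classifier = nn.Sequential(           # spatial combination subnetwork
            nn.Linear(feat_dim * patch_size * patch_size, 128),
            nn.ReLU(inplace=True), nn.Linear(128, n_classes))

    def forward(self, patch):                      # patch: (N, patch*patch, bands)
        n, pp, b = patch.shape
        feats = self.spectral(patch.reshape(n * pp, b)).reshape(n, -1)
        return self.classifier(feats)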
A CNN-based Spatial Feature Fusion Algorithm for Hyperspectral Imagery Classification
The shortage of training samples remains one of the main obstacles to applying artificial neural networks (ANNs) to hyperspectral image classification. To fuse spatial and spectral information, pixel patches are often utilized to train a model, which may further aggravate this problem. In existing works, an ANN model supervised by center loss (ANNC) was
introduced. Training merely with spectral information, the ANNC yields
discriminative spectral features suitable for the subsequent classification
tasks. In this paper, a CNN-based spatial feature fusion (CSFF) algorithm is proposed, which allows a smart fusion of the spatial information into the spectral features extracted by the ANNC. As a critical part of CSFF, a CNN-based discriminant model is introduced to estimate whether a pair of pixels belongs
to the same class. At the testing stage, by applying the discriminant model to
the pixel-pairs generated by the test pixel and its neighbors, the local
structure is estimated and represented as a customized convolutional kernel.
The spectral-spatial feature is obtained by a convolutional operation between
the estimated kernel and the corresponding spectral features within a
neighborhood. At last, the label of the test pixel is predicted by classifying
the resulting spectral-spatial feature. Without increasing the number of training samples or involving pixel patches at the training stage, the CSFF framework achieves state-of-the-art performance by reducing classification failures in experiments on three well-known hyperspectral images.
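A simplified NumPy sketch of the customized-kernel fusion step: the discriminant model's same-class scores for the neighbours of a test pixel are normalised into convolution weights and used to pool the neighbours' spectral features. The blending at the end is an illustrative simplification, not the paper's exact rule:

import numpy as np

def fuse_with_customized_kernel(center_feat, neighbor_feats, same_class_probs):
    """Fuse spectral features in a neighbourhood with a data-dependent kernel.

    center_feat      : (D,) spectral feature of the test pixel (from the ANNC).
    neighbor_feats   : (K, D) spectral features of its K neighbours.
    same_class_probs : (K,) discriminant-model scores that each neighbour shares
                       the centre pixel's class.
    """
    w = np.asarray(same_class_probs, dtype=float)
    w = w / (w.sum() + 1e-8)                        # normalise into kernel weights
    spatial_feat = (w[:, None] * np.asarray(neighbor_feats)).sum(axis=0)
    return 0.5 * (center_feat + spatial_feat)       # simple spectral-spatial blend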
Spectral-Spatial Feature Extraction and Classification by ANN Supervised with Center Loss in Hyperspectral Imagery
In this paper, we propose a spectral-spatial feature extraction and classification framework based on an artificial neural network (ANN) in the context of hyperspectral imagery. With limited labeled samples, only spectral information is exploited for training, and the spatial context is integrated afterwards at the testing stage. Taking advantage of recent advances in face
recognition, a joint supervision symbol that combines softmax loss and center
loss is adopted to train the proposed network, by which intra-class features
are gathered while inter-class variations are enlarged. Based on the learned
architecture, the extracted spectrum-based features are classified by a center
classifier. Moreover, to fuse the spectral and spatial information, an adaptive
spectral-spatial center classifier is developed, where multiscale neighborhoods
are considered simultaneously, and the final label is determined using an
adaptive voting strategy. Finally, experimental results on three well-known datasets validate the effectiveness of the proposed methods compared with state-of-the-art approaches.
Comment: 17 pages, 10 figures
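A minimal PyTorch sketch of the center loss used for the joint supervision described above; the weighting factor lambda_c in the usage note is a hypothetical name, not taken from the paper:

import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: pulls each feature towards its learned class centre, and is
    combined with the softmax loss during training (the joint supervision the
    abstract borrows from face recognition)."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):            # features: (N, D), labels: (N,)
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Joint supervision (lambda_c is an illustrative weighting factor):
# loss = nn.CrossEntropyLoss()(logits, labels) + lambda_c * center_loss(feats, labels)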
Hybrid Noise Removal in Hyperspectral Imagery With a Spatial-Spectral Gradient Network
The existence of hybrid noise in hyperspectral images (HSIs) severely
degrades the data quality, reduces the interpretation accuracy of HSIs, and
restricts subsequent HSI applications. In this paper, the spatial-spectral
gradient network (SSGN) is presented for mixed noise removal in HSIs. The
proposed method employs a spatial-spectral gradient learning strategy that takes into consideration the unique spatial structure directionality of sparse noise and the spectral differences, which supply complementary information for better extraction of the intrinsic deep features of HSIs. Based on a fully cascaded
multi-scale convolutional network, SSGN can simultaneously deal with the
different types of noise in different HSIs or spectra by the use of the same
model. The simulated and real-data experiments undertaken in this study
confirmed that the proposed SSGN performs better at mixed noise removal than the other state-of-the-art HSI denoising algorithms in terms of evaluation indices, visual assessment, and time consumption.
Comment: Accepted by IEEE TGR
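A small NumPy sketch of the gradient cues such a spatial-spectral gradient strategy can compute from the input cube; how SSGN actually consumes them follows the paper, so this only shows plausible inputs, not the network itself:

import numpy as np

def spatial_spectral_gradients(hsi):
    """Compute spatial and spectral gradient cues from a hyperspectral cube.

    hsi : (H, W, B) hyperspectral cube. Returns horizontal and vertical spatial
    gradients per band and the gradient along the spectral dimension, each with
    the same shape as the input.
    """
    grad_x = np.diff(hsi, axis=1, append=hsi[:, -1:, :])      # horizontal gradient
    grad_y = np.diff(hsi, axis=0, append=hsi[-1:, :, :])      # vertical gradient
    grad_spec = np.diff(hsi, axis=2, append=hsi[:, :, -1:])   # spectral gradient
    return grad_x, grad_y, grad_spec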
1D-Convolutional Capsule Network for Hyperspectral Image Classification
Recently, convolutional neural networks (CNNs) have achieved excellent performance in many computer vision tasks. For hyperspectral image (HSI) classification in particular, CNNs often require a very complex structure due to the high dimensionality of HSIs, and this complexity results in prohibitive training effort. Moreover, a common situation in the HSI classification task is the lack of labeled samples, which degrades the accuracy of CNNs. In this work, we develop an easy-to-implement capsule network to alleviate the aforementioned problems, the 1D-convolution capsule network (1D-ConvCapsNet). Firstly, 1D-ConvCapsNet extracts spatial and spectral information separately in the spatial and spectral domains, which is more lightweight than 3D convolution due to fewer parameters. Secondly, 1D-ConvCapsNet utilizes a capsule-wise constraint window method to reduce the parameter count and computational complexity of the conventional capsule network. Finally, 1D-ConvCapsNet obtains accurate predictions for input samples via dynamic routing. The effectiveness of 1D-ConvCapsNet is verified on three representative HSI datasets. Experimental results demonstrate that 1D-ConvCapsNet is superior to state-of-the-art methods in both accuracy and training effort.
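A PyTorch sketch of the dynamic routing mentioned above, in its generic routing-by-agreement form; it does not implement the paper's capsule-wise constraint-window variant:

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: keeps direction, maps norm into [0, 1)."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement between two capsule layers.

    u_hat : (N, n_in, n_out, d_out) prediction vectors from the lower capsules.
    Returns the (N, n_out, d_out) output capsules.
    """
    n, n_in, n_out, _ = u_hat.shape
    b = torch.zeros(n, n_in, n_out, device=u_hat.device)    # routing logits
    for _ in range(iters):
        c = F.softmax(b, dim=2)                              # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # weighted sum of predictions
        v = squash(s)                                        # output capsules
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v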
Missing Data Reconstruction in Remote Sensing Image with a Unified Spatial-Temporal-Spectral Deep Convolutional Neural Network
Because of the internal malfunction of satellite sensors and poor atmospheric
conditions such as thick cloud, the acquired remote sensing data often suffer
from missing information, i.e., the data usability is greatly reduced. In this
paper, a novel method for missing information reconstruction in remote sensing images is proposed. The unified spatial-temporal-spectral framework, based on a deep convolutional neural network (STS-CNN), combines a single deep convolutional neural network with spatial-temporal-spectral supplementary information. In addition, to address the fact that most methods
can only deal with a single missing information reconstruction task, the
proposed approach can solve three typical missing information reconstruction
tasks: 1) dead lines in Aqua MODIS band 6; 2) the Landsat ETM+ Scan Line
Corrector (SLC)-off problem; and 3) thick cloud removal. It should be noted
that the proposed model can use multi-source data (spatial, spectral, and
temporal) as the input of the unified framework. The results of both simulated and real-data experiments demonstrate that the proposed model is highly effective in the three missing information reconstruction tasks listed above.
Comment: To be published in IEEE Transactions on Geoscience and Remote Sensing
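A toy PyTorch stand-in for the multi-source input idea described above: the corrupted band(s), a temporal reference image, and auxiliary spectral bands are concatenated along the channel axis and mapped to a reconstructed band. Channel counts and network depth are placeholders, not the STS-CNN architecture:

import torch
import torch.nn as nn

class STSReconstructionNet(nn.Module):
    """Unified reconstruction from concatenated spatial, temporal, and spectral
    inputs; an illustrative sketch, not the STS-CNN model from the paper."""
    def __init__(self, corrupted_ch=1, temporal_ch=1, spectral_ch=1, width=64):
        super().__init__()
        in_ch = corrupted_ch + temporal_ch + spectral_ch
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, corrupted_ch, 3, padding=1))

    def forward(self, corrupted, temporal, spectral):
        x = torch.cat([corrupted, temporal, spectral], dim=1)  # multi-source input
        return self.body(x) + corrupted          # residual reconstruction of the band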