Domain-adaptive Message Passing Graph Neural Network
Cross-network node classification (CNNC), which aims to classify nodes in a
label-deficient target network by transferring the knowledge from a source
network with abundant labels, has drawn increasing attention recently. To address
CNNC, we propose a domain-adaptive message passing graph neural network
(DM-GNN), which integrates graph neural network (GNN) with conditional
adversarial domain adaptation. DM-GNN is capable of learning informative
representations for node classification that are also transferable across
networks. Firstly, a GNN encoder is constructed by dual feature extractors to
separate ego-embedding learning from neighbor-embedding learning so as to
jointly capture commonality and discrimination between connected nodes.
Secondly, a label propagation node classifier is proposed to refine each node's
label prediction by combining its own prediction with its neighbors' predictions.
In addition, a label-aware propagation scheme is devised for the labeled source
network to promote intra-class propagation while avoiding inter-class
propagation, thus yielding label-discriminative source embeddings. Thirdly,
conditional adversarial domain adaptation is performed to take the
neighborhood-refined class-label information into account during adversarial
domain adaptation, so that the class-conditional distributions across networks
can be better matched. Comparisons with eleven state-of-the-art methods
demonstrate the effectiveness of the proposed DM-GNN.
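The label propagation node classifier described above, which refines each node's prediction by mixing it with its neighbors' predictions, can be sketched in plain NumPy. This is a minimal illustration; the mixing weight `alpha` is a hypothetical parameter, not taken from the paper:

```python
import numpy as np

def label_propagation_refine(logits, adj, alpha=0.5):
    """Refine each node's class prediction by mixing its own prediction
    with the mean prediction of its neighbors (one propagation step).
    `alpha` weights the node's own prediction (hypothetical parameter)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    neighbor_pred = adj @ logits / deg  # average of neighbor predictions
    return alpha * logits + (1 - alpha) * neighbor_pred

# toy graph: 3 nodes, edges 0--1 and 1--2
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
logits = np.array([[0.9, 0.1],
                   [0.6, 0.4],
                   [0.2, 0.8]])
refined = label_propagation_refine(logits, adj)
```

In the toy graph, node 0's refined prediction is the average of its own prediction and node 1's, pulling connected nodes toward agreement while preserving each node's own evidence.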
Graph Relation Network: Modeling Relations Between Scenes for Multilabel Remote-Sensing Image Classification and Retrieval
Due to the proliferation of large-scale remote-sensing (RS) archives with multiple annotations, multilabel RS scene classification and retrieval are becoming increasingly popular. Although some recent deep learning-based methods are able to achieve promising results in this context, the lack of research on how to learn embedding spaces under the multilabel assumption often makes these models unable to preserve complex semantic relations pervading aerial scenes, which is an important limitation in RS applications. To fill this gap, we propose a new graph relation network (GRN) for multilabel RS scene categorization. Our GRN is able to model the relations between samples (or scenes) by making use of a graph structure which is fed into network learning. For this purpose, we define a new loss function called scalable neighbor discriminative loss with binary cross entropy (SNDL-BCE) that is able to embed the graph structures through the networks more effectively. The proposed approach can guide deep learning techniques (such as convolutional neural networks) to a more discriminative metric space, where semantically similar RS scenes are closely embedded and dissimilar images are separated from a novel multilabel viewpoint. To achieve this goal, our GRN jointly maximizes a weighted leave-one-out K-nearest neighbors (KNN) score in the training set, where the weight matrix describes the contributions of the nearest neighbors associated with each RS image on its class decision, and the likelihood of the class discrimination in the multilabel scenario. An extensive experimental comparison, conducted on three multilabel RS scene data archives, validates the effectiveness of the proposed GRN in terms of KNN classification and image retrieval. The codes of this article will be made publicly available for reproducible research in the community.
Impact of Feature Representation on Remote Sensing Image Retrieval
Remote sensing images are acquired using specialized platforms and sensors, and are classified as aerial, multispectral, or hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared with ordinary red, green, blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of two steps: feature representation, followed by finding images similar to a query image. Feature representation plays an important part in the performance of the remote sensing image retrieval process. This work focuses on the impact of the feature representation of remote sensing images on retrieval performance. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the remote sensing image retrieval process.
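The second retrieval step, finding images similar to a query, can be sketched as a cosine-similarity ranking over precomputed feature vectors. This is an illustrative sketch only; the feature extractor (step one, whose quality the study examines) is assumed to have already produced the vectors:

```python
import numpy as np

def retrieve(query_feat, archive_feats, top=3):
    """Rank archive images by cosine similarity to the query feature
    vector and return the indices of the `top` most similar images."""
    q = query_feat / np.linalg.norm(query_feat)
    a = archive_feats / np.linalg.norm(archive_feats, axis=1, keepdims=True)
    sims = a @ q  # cosine similarity of each archive image to the query
    return np.argsort(-sims)[:top]

# toy 2-D features for three archive images and one query
archive = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.0])
ranked = retrieve(query, archive, top=3)
```

How well this ranking matches human judgments of scene similarity depends entirely on how discriminative the feature vectors are, which is the point the study makes.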
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as it relates to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Multi-view Graph Convolutional Networks with Differentiable Node Selection
Multi-view data containing complementary and consensus information can
facilitate representation learning by exploiting the intact integration of
multi-view features. Because most objects in real world often have underlying
connections, organizing multi-view data as heterogeneous graphs is beneficial
to extracting latent information among different objects. Due to the powerful
capability to gather information of neighborhood nodes, in this paper, we apply
Graph Convolutional Network (GCN) to cope with heterogeneous-graph data
originating from multi-view data, which is still under-explored in the field of
GCN. In order to improve the quality of network topology and alleviate the
interference of noises yielded by graph fusion, some methods undertake sorting
operations before the graph convolution procedure. These GCN-based methods
generally sort and select the most confident neighborhood nodes for each
vertex, such as picking the top-k nodes according to pre-defined confidence
values. Nonetheless, this is problematic due to the non-differentiable sorting
operators and inflexible graph embedding learning, which may result in blocked
gradient computations and undesired performance. To cope with these issues, we
propose a joint framework dubbed Multi-view Graph Convolutional Network with
Differentiable Node Selection (MGCN-DNS), which consists of an adaptive
graph fusion layer, a graph learning module and a differentiable node selection
schema. MGCN-DNS accepts multi-channel graph-structural data as inputs and aims
to learn more robust graph fusion through a differentiable neural network. The
effectiveness of the proposed method is verified by rigorous comparisons with
numerous state-of-the-art approaches on multi-view semi-supervised
classification tasks.
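The core idea of differentiable node selection, replacing a hard, non-differentiable top-k sort over neighbor confidences with a smooth relaxation, can be sketched with a temperature softmax. The temperature `tau` is a hypothetical parameter for illustration, not taken from the paper:

```python
import numpy as np

def soft_node_selection(conf, tau=0.1):
    """Differentiable alternative to hard top-k neighbor selection: a
    temperature softmax over confidence scores yields smooth selection
    weights, so gradients can flow through the selection step. Small
    `tau` approaches hard (one-hot) selection of the top node."""
    z = (conf - conf.max()) / tau  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

# confidences of three candidate neighbor nodes for one vertex
conf = np.array([2.0, 1.0, 0.1])
w_soft = soft_node_selection(conf, tau=1.0)   # smooth weighting
w_hard = soft_node_selection(conf, tau=0.01)  # nearly one-hot
```

Because the weights vary smoothly with the confidences, backpropagation can adjust which neighbors are emphasized, avoiding the blocked gradients that a hard argsort-based top-k causes.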
Toulouse Hyperspectral Data Set: a benchmark data set to assess semi-supervised spectral representation learning and pixel-wise classification techniques
Airborne hyperspectral images can be used to map the land cover in large
urban areas, thanks to their very high spatial and spectral resolutions on a
wide spectral domain. While the spectral dimension of hyperspectral images is
highly informative of the chemical composition of the land surface, the use of
state-of-the-art machine learning algorithms to map the land cover has been
dramatically limited by the availability of training data. To cope with the
scarcity of annotations, semi-supervised and self-supervised techniques have
lately raised a lot of interest in the community. Yet, the publicly available
hyperspectral data sets commonly used to benchmark machine learning models are
not totally suited to evaluate their generalization performances due to one or
several of the following properties: a limited geographical coverage (which
does not reflect the spectral diversity in metropolitan areas), a small number
of land cover classes and a lack of appropriate standard train / test splits
for semi-supervised and self-supervised learning. Therefore, we release in this
paper the Toulouse Hyperspectral Data Set that stands out from other data sets
in the above-mentioned respects in order to meet key issues in spectral
representation learning and classification over large-scale hyperspectral
images with very few labeled pixels. Besides, we discuss and experiment with the
self-supervised task of Masked Autoencoders and establish a baseline for
pixel-wise classification based on a conventional autoencoder combined with a
Random Forest classifier achieving 82% overall accuracy and 74% F1 score. The
Toulouse Hyperspectral Data Set and our code are publicly available at
https://www.toulouse-hyperspectral-data-set.com and
https://www.github.com/Romain3Ch216/tlse-experiments, respectively.
Comment: 17 pages, 13 figures
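As a rough sketch of the encoder-plus-classifier baseline idea, PCA can stand in for the conventional autoencoder, since a linear autoencoder with tied weights recovers the PCA subspace. This is illustrative only and not the paper's actual pipeline or its Random Forest stage:

```python
import numpy as np

def pca_encode(X, n_components=2):
    """Stand-in for an autoencoder encoder: project spectra onto their top
    principal components (a linear autoencoder with tied weights is
    equivalent to PCA). The resulting codes would feed a downstream
    classifier in an encoder-plus-classifier pipeline."""
    Xc = X - X.mean(axis=0)  # center the spectra band-wise
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy "spectra": 4 pixels x 5 bands, two distinct spectral shapes
X = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [1.1, 2.1, 3.1, 4.1, 5.1],
              [5.0, 4.0, 3.0, 2.0, 1.0],
              [5.1, 4.1, 3.1, 2.1, 1.1]])
Z = pca_encode(X, n_components=1)
```

Even one component separates the two spectral shapes in this toy case; with real hyperspectral data, many more components (or a nonlinear encoder) are needed before a pixel-wise classifier is trained on the codes.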
Combined analytical approach empowers precise spectroscopic interpretation of subcellular components of pancreatic cancer cells
The lack of specific and sensitive early diagnostic options for pancreatic cancer (PC) results in patients being largely diagnosed with late-stage disease, which is thus inoperable and burdened with high mortality. Molecular spectroscopic methodologies, such as Raman or infrared spectroscopy, show promise as leading tools for screening early-stage cancers, including PC. However, should such technology be introduced, the identification of differentiating spectral features between various cancer types is required. This would not be possible without the precise extraction of spectra free of contamination by necrosis, inflammation, desmoplasia, or extracellular fluids such as the mucus that surrounds tumor cells. Moreover, an efficient methodology for their interpretation has not been well defined. In this study, we compared different methods of spectral analysis to find the best for investigating the biomolecular composition of PC cells' cytoplasm and nuclei separately. Sixteen PC tissue samples of the main PC subtypes (ductal adenocarcinoma, intraductal papillary mucinous carcinoma, and ampulla of Vater carcinoma) were measured with Raman hyperspectral mapping, resulting in 191,355 Raman spectra, and analyzed with comparative methodologies, specifically hierarchical cluster analysis, non-negative matrix factorization, T-distributed stochastic neighbor embedding, principal components analysis (PCA), and convolutional neural networks (CNN). As a result, we propose an innovative approach to spectra classification by CNN, combined with PCA for molecular characterization. The CNN-based spectra classification achieved over a 98% successful validation rate. Subsequent analyses of spectral features revealed differences among PC subtypes and between the cytoplasm and nuclei of their cells.
Our study establishes an optimal methodology for cancer tissue spectral data classification and interpretation that allows precise and cognitive studies of cancer cells and their subcellular components, without mixing the results with cancer-surrounding tissue. As a proof of concept, we describe findings that add to the spectroscopic understanding of PC.
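The basic CNN building block applied to a 1-D Raman spectrum can be sketched as a single convolution followed by a ReLU. This is illustrative only; the paper's actual architecture is not specified here, and the toy kernel is a hypothetical edge detector:

```python
import numpy as np

def conv1d_relu(spectrum, kernel):
    """One 1-D convolution + ReLU, the basic building block of a CNN
    spectra classifier. np.convolve flips its second argument, so passing
    the reversed kernel yields the cross-correlation CNNs actually use."""
    out = np.convolve(spectrum, kernel[::-1], mode="valid")
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# toy Raman spectrum with a single peak; an edge-detecting kernel
# responds to the rising flank of the peak
spectrum = np.array([0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0])
kernel = np.array([-1.0, 0.0, 1.0])
feat = conv1d_relu(spectrum, kernel)
```

Stacking many such filtered feature maps, with learned kernels, is what lets a CNN pick out the discriminative Raman bands that distinguish spectra of different subcellular components.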