Equivariant geometric learning for digital rock physics: estimating formation factor and effective permeability tensors from Morse graph
We present an SE(3)-equivariant graph neural network (GNN) approach that
directly predicts the formation factor and effective permeability from
micro-CT images. FFT-based solvers are used to compute both the formation
factor and effective permeability, while the topology and geometry of the pore
space are represented by a persistence-based Morse graph. Together, they
constitute the database for training, validating, and testing the neural
networks. While the graph and Euclidean convolutional approaches both employ
neural networks to generate low-dimensional latent space to represent the
features of the micro-structures for forward predictions, the SE(3)-equivariant
neural network is found to generate more accurate predictions, especially when
the training data is limited. Numerical experiments have also shown that the
new SE(3) approach leads to predictions that fulfill material frame
indifference, whereas the predictions from classical convolutional neural
networks (CNN) may suffer from spurious dependence on the coordinate system of
the training data. Comparisons among predictions from the trained CNN and
from graph convolutional neural networks (GNNs) trained with and without the
equivariant constraint indicate that the equivariant graph neural network
outperforms both the CNN and the GNN trained without the equivariant
constraint.
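The frame-indifference claim is easy to make concrete: an equivariant predictor of a permeability tensor must satisfy K(Rx) = R K(x) Rᵀ for any rotation R of the input coordinates. The NumPy sketch below uses a hypothetical toy predictor (not the paper's trained network) that is equivariant by construction, with arbitrary distance-based weights standing in for learned features, and checks the property numerically.

```python
import numpy as np

def predict_permeability(points):
    """Toy equivariant predictor: builds a symmetric second-order tensor
    from centered node coordinates (e.g., pore-network nodes). The weights
    depend only on distances (rotation-invariant), so the output transforms
    as R K R^T under rotations -- the material-frame-indifference property
    described above. Illustrative stand-in, not the paper's network."""
    x = points - points.mean(axis=0)            # translation invariance
    d = np.linalg.norm(x, axis=1)               # invariant scalar features
    w = np.exp(-d)                              # hypothetical weighting
    return (w[:, None, None] * np.einsum("ni,nj->nij", x, x)).sum(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                  # stand-in pore-space nodes

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
K = predict_permeability(pts)
K_rot = predict_permeability(pts @ Q.T)         # rotate the input frame
assert np.allclose(K_rot, Q @ K @ Q.T)          # equivariance holds
```

A non-equivariant CNN trained on axis-aligned voxel grids offers no such guarantee, which is the source of the spurious coordinate-system dependence the abstract mentions.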
Neural Graph Embedding for Neural Architecture Search
Existing neural architecture search (NAS) methods often operate directly in discrete or continuous spaces, which ignores the graphical topology of neural networks. This leads to suboptimal search performance and efficiency, given the fact that neural networks are essentially directed acyclic graphs (DAGs). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e., the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.
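As a rough illustration of the idea, the sketch below applies the standard GCN propagation rule to a hypothetical four-node cell DAG and pools the node embeddings into a cell embedding. The one-hot operation encodings, dimensions, and mean pooling are stand-ins, not the paper's exact configuration.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, then propagate and transform (ReLU)."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Hypothetical 4-node cell DAG: input -> op1 -> op2 -> output
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
X = np.eye(4)                     # one-hot operation encodings (stand-in)
W = np.random.default_rng(1).normal(size=(4, 8))

H = gcn_layer(A, X, W)            # per-node embeddings of the cell
embedding = H.mean(axis=0)        # pooled cell embedding for the NAS search
```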
Classification-Aided Robust Multiple Target Tracking Using Neural Enhanced Message Passing
We address the challenge of tracking an unknown number of targets in strong
clutter environments using measurements from a radar sensor. Leveraging the
range-Doppler spectra information, we identify the measurement classes, which
serve as additional information to enhance clutter rejection and data
association, thus bolstering the robustness of target tracking. We first
introduce a novel neural enhanced message passing approach, where the beliefs
obtained by the unified message passing are fed into the neural network as
additional information. The output beliefs are then utilized to refine the
original beliefs. Then, we propose a classification-aided robust multiple
target tracking algorithm, employing the neural enhanced message passing
technique. This algorithm consists of three modules: a message-passing
module, a neural network module, and a Dempster-Shafer module. The
message-passing module represents the statistical model as a factor
graph and infers target kinematic states, visibility states, and data
associations based on the spatial measurement information. The neural network
module is employed to extract features from range-Doppler spectra and derive
beliefs on whether a measurement is target-generated or clutter-generated. The
Dempster-Shafer module is used to fuse the beliefs obtained from both the
factor graph and the neural network. As a result, our proposed algorithm adopts
a model-and-data-driven framework, effectively enhancing clutter suppression
and data association, leading to significant improvements in multiple target
tracking performance. We validate the effectiveness of our approach using both
simulated and real data, demonstrating its capability to handle challenging
tracking scenarios in practical radar applications.
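The fusion step in the Dempster-Shafer module can be illustrated with Dempster's rule of combination on the two-class frame {target, clutter}. The rule itself is standard; the mass values below for the factor-graph and neural-network beliefs are hypothetical.

```python
def dempster_fuse(m1, m2):
    """Dempster's rule on the frame {T, C}; 'TC' carries the
    uncommitted (uncertainty) mass on the whole frame."""
    g = lambda m, a: m.get(a, 0.0)
    # Conflict: mass jointly assigned to disjoint singletons
    K = g(m1, 'T') * g(m2, 'C') + g(m1, 'C') * g(m2, 'T')
    fused = {a: (g(m1, a) * g(m2, a)
                 + g(m1, a) * g(m2, 'TC')
                 + g(m1, 'TC') * g(m2, a)) / (1.0 - K)
             for a in ('T', 'C')}
    fused['TC'] = g(m1, 'TC') * g(m2, 'TC') / (1.0 - K)
    return fused

# Factor-graph belief vs. range-Doppler NN belief (hypothetical numbers)
m_fg = {'T': 0.6, 'C': 0.3, 'TC': 0.1}
m_nn = {'T': 0.7, 'C': 0.1, 'TC': 0.2}
print(dempster_fuse(m_fg, m_nn))   # fused belief favors 'target'
```

Because conflicting evidence is renormalized away, two weakly agreeing sources yield a sharper fused belief than either alone, which is what makes the fusion useful for clutter rejection.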
Towards Deeper Graph Neural Networks
Graph neural networks have shown significant success in the field of graph
representation learning. Graph convolutions perform neighborhood aggregation
and represent one of the most important graph operations. Nevertheless, one
layer of these neighborhood aggregation methods considers only immediate
neighbors, and the performance decreases when going deeper to enable larger
receptive fields. Several recent studies attribute this performance
deterioration to the over-smoothing issue, which states that repeated
propagation makes node representations of different classes indistinguishable.
In this work, we study this observation systematically and develop new insights
towards deeper graph neural networks. First, we provide a systematic analysis
of this issue and argue that the key factor compromising the performance
significantly is the entanglement of representation transformation and
propagation in current graph convolution operations. After decoupling these two
operations, deeper graph neural networks can be used to learn graph node
representations from larger receptive fields. We further provide a theoretical
analysis of the above observation when building very deep models, which can
serve as a rigorous and gentle description of the over-smoothing issue. Based
on our theoretical and empirical analysis, we propose Deep Adaptive Graph
Neural Network (DAGNN) to adaptively incorporate information from large
receptive fields. A set of experiments on citation, co-authorship, and
co-purchase datasets have confirmed our analysis and insights and demonstrated
the superiority of our proposed methods.
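The decoupling argument can be sketched in a few lines: transform features once with an MLP, then propagate repeatedly without parameters, and combine the multi-hop outputs. The NumPy snippet below follows that scheme; the uniform combination weights are an assumption standing in for DAGNN's learned, node-adaptive retainment scores.

```python
import numpy as np

def decoupled_forward(A_hat, X, W, K=10):
    """Transformation and propagation decoupled: one feature transform,
    then K parameter-free propagation steps whose outputs are combined.
    Deep receptive fields without stacking transformation layers."""
    Z = np.maximum(X @ W, 0.0)                  # transformation (MLP)
    hops = [Z]
    for _ in range(K):
        hops.append(A_hat @ hops[-1])           # propagation only
    H = np.stack(hops)                          # (K+1, N, F)
    s = np.full(K + 1, 1.0 / (K + 1))           # stand-in adaptive weights
    return np.tensordot(s, H, axes=1)           # weighted combination

# Tiny usage on a 3-node star graph
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A_sl = A + np.eye(3)
D = np.diag(1.0 / np.sqrt(A_sl.sum(axis=1)))
A_hat = D @ A_sl @ D                            # symmetric normalization
out = decoupled_forward(A_hat, np.eye(3),
                        np.random.default_rng(2).normal(size=(3, 4)))
```

Since propagation carries no parameters, increasing K enlarges the receptive field without deepening the trainable network, which is why the entanglement rather than depth itself is identified as the culprit.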
GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric Polynomials
Graph Neural Networks (GNNs) are a promising deep learning approach for
tackling many real-world problems on graph-structured data. However, these
models usually have at least one of four fundamental limitations:
over-smoothing, over-fitting, difficulty in training, and a strong homophily
assumption. For example, Simple Graph Convolution (SGC) is known to suffer from
the first and fourth limitations. To tackle these limitations, we identify a
set of key designs including (D1) dilated convolution, (D2) multi-channel
learning, (D3) self-attention score, and (D4) sign factor to boost learning
from different types (i.e., homophily and heterophily) and scales (i.e., small,
medium, and large) of networks, and combine them into a graph neural network,
GPNet, a simple and efficient one-layer model. We theoretically analyze the
model and show that it can approximate various graph filters by adjusting the
self-attention score and sign factor. Experiments show that GPNet consistently
outperforms baselines in terms of average rank, average accuracy, complexity,
and parameter count on semi-supervised and fully supervised tasks, and achieves
competitive performance compared to state-of-the-art models on the inductive
learning task.
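Two of the listed designs, multi-channel learning (D2) and the sign factor (D4), admit a simple spectral reading: with a symmetrically normalized adjacency A_hat (eigenvalues in [-1, 1]), the propagation matrix (I + s·A_hat)/2 acts as a low-pass filter for s = +1 and a high-pass filter for s = -1. The sketch below is a minimal illustration under that reading; the channel orders, signs, and plain concatenation are assumptions standing in for GPNet's learned self-attention scores.

```python
import numpy as np

def poly_channel(A_hat, X, k, sign):
    """One polynomial-filter channel: P = (I + sign * A_hat)/2 applied
    k times. sign=+1 emphasizes smooth signals (homophilous graphs);
    sign=-1 emphasizes high-frequency signals (heterophilous graphs)."""
    P = 0.5 * (np.eye(A_hat.shape[0]) + sign * A_hat)
    H = X
    for _ in range(k):
        H = P @ H
    return H

def multichannel_sketch(A_hat, X, orders=(1, 2, 4), signs=(+1, +1, -1)):
    """Concatenate several channels of different order and sign; a
    learned attention score would weight them in the full model."""
    return np.concatenate(
        [poly_channel(A_hat, X, k, s) for k, s in zip(orders, signs)],
        axis=1)

# Tiny usage on a 3-node path graph
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_sl = A + np.eye(3)
D = np.diag(1.0 / np.sqrt(A_sl.sum(axis=1)))
A_hat = D @ A_sl @ D                       # symmetric normalization
features = multichannel_sketch(A_hat, np.eye(3))   # (3, 9): three channels
```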