Graph Learning for Anomaly Analytics: Algorithms, Applications, and Challenges
Anomaly analytics is a popular and vital task in various research contexts
and has been studied for several decades. At the same time, deep learning has
shown its capacity for solving many graph-based tasks, such as node
classification, link prediction, and graph classification. Recently, many
studies have extended graph learning models to anomaly analytics problems,
yielding beneficial advances in graph-based anomaly analytics techniques. In
this survey, we provide a comprehensive overview of graph learning methods for
anomaly analytics tasks. We classify them into four categories based on their
model architectures: graph convolutional networks (GCN), graph attention
networks (GAT), graph autoencoders (GAE), and other graph learning models. We
also compare the differences between these methods in a systematic manner.
Furthermore, we outline several graph-based anomaly analytics applications
across various real-world domains. Finally, we discuss five potential future
research directions in this rapidly growing field.
© 2023 Association for Computing Machinery
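One of the survey's four categories, the graph autoencoder (GAE), is commonly used for anomaly scoring by reconstruction error: nodes whose neighborhood structure the model reconstructs poorly are flagged as anomalous. The sketch below illustrates that pipeline under loud assumptions: the graph and features are made up, and the "encoder" is a single untrained GCN-style propagation step rather than a trained model, so only the scoring logic, not the learning, is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: 6 nodes, symmetric adjacency, no self-loops yet.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
X = rng.normal(size=(6, 4))  # made-up node features

# Symmetrically normalized adjacency with self-loops: A_norm = D^{-1/2}(A+I)D^{-1/2}
A_hat = A + np.eye(len(A))
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

# "Encoder": one untrained propagation step Z = A_norm X (a trained GAE would
# learn weight matrices here).
Z = A_norm @ X

# Inner-product decoder: reconstructed edge probabilities sigmoid(Z Z^T).
A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Per-node anomaly score: mean reconstruction error of the node's row of A_hat.
scores = np.abs(A_rec - A_hat).mean(axis=1)
```

In a real GAE-based detector, the encoder weights are trained to minimize this reconstruction loss first; the ranking of `scores` then identifies structurally anomalous nodes.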
Graph Clustering with Graph Neural Networks
Graph Neural Networks (GNNs) have achieved state-of-the-art results on many
graph analysis tasks such as node classification and link prediction. However,
important unsupervised problems on graphs, such as graph clustering, have
proved more resistant to advances in GNNs. In this paper, we study the
unsupervised training of GNN pooling in terms of its clustering capabilities.
We start by drawing a connection between graph clustering and graph pooling:
intuitively, a good graph clustering is what one would expect from a GNN
pooling layer. Counterintuitively, we show that this is not true for
state-of-the-art pooling methods, such as MinCut pooling. To address these
deficiencies, we introduce Deep Modularity Networks (DMoN), an unsupervised
pooling method inspired by the modularity measure of clustering quality, and
show how it tackles recovery of the challenging clustering structure of
real-world graphs. In order to clarify the regimes where existing methods fail,
we carefully design a set of experiments on synthetic data which show that DMoN
is able to jointly leverage the signal from the graph structure and node
attributes. Similarly, on real-world data, we show that DMoN produces
high-quality clusters that correlate strongly with ground-truth labels,
achieving state-of-the-art results.
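The modularity measure that inspires DMoN compares the edges inside each cluster to what a random graph with the same degrees would contain: Q = (1/2m) Tr(Cᵀ(A − ddᵀ/2m)C) for an assignment matrix C. The sketch below computes this quantity on a hand-built toy graph; it shows only the objective, not DMoN itself, which optimizes soft GNN-produced assignments together with a collapse regularizer.

```python
import numpy as np

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

def modularity(A, C):
    """Q = (1/2m) * Tr(C^T (A - d d^T / 2m) C) for a (soft) assignment C."""
    d = A.sum(axis=1)                 # degree vector
    two_m = d.sum()                   # 2m = total degree
    B = A - np.outer(d, d) / two_m    # modularity matrix
    return np.trace(C.T @ B @ C) / two_m

# Hard assignment recovering the two planted triangles.
C_good = np.zeros((n, 2))
C_good[:3, 0] = 1.0
C_good[3:, 1] = 1.0

# Degenerate assignment: everything in one cluster (modularity is exactly 0).
C_one = np.ones((n, 1))

Q_good = modularity(A, C_good)
Q_one = modularity(A, C_one)
```

The planted two-community split scores well above the single-cluster baseline, which is the signal an unsupervised pooling layer can maximize without labels.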
Robust Learning under Distributional Shifts
Designing robust models is critical for reliable deployment of artificial intelligence systems. Deep neural networks perform exceptionally well on test samples that are drawn from the same distribution as the training set. However, they perform poorly when there is a mismatch between training and test conditions, a phenomenon called distributional shift. For instance, the perception system of a self-driving car can produce erratic predictions when it encounters a new test sample with a different illumination or weather condition not seen during training. Such inconsistencies are undesirable, and can potentially create life-threatening conditions as these models are deployed in safety-critical applications.
In this dissertation, we develop several techniques for effectively handling distributional shifts in deep learning systems.
In the first part of the dissertation, we focus on detecting out-of-distribution shifts that can be used for flagging outlier samples at test time. We develop a likelihood estimation framework based on deep generative models for this task. In the second part, we study the domain adaptation problem, where the objective is to tune neural network models to adapt to a specific target distribution of interest. We design novel adaptation algorithms and analyze them under various settings. In the last part of the dissertation, we develop robust learning algorithms that can generalize to novel distributional shifts. In particular, we focus on two types of shifts: covariate and adversarial. All developed algorithms are rigorously evaluated on several benchmark datasets.
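The first part's likelihood-based outlier flagging can be sketched with a deliberately simplified density model: the dissertation's framework uses deep generative models, but a single Gaussian fit to in-distribution data (an assumption made here purely for illustration) keeps the thresholding logic visible without a training loop. Samples whose log-likelihood under the fitted model falls below a low quantile of the training log-likelihoods are flagged.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "in-distribution" training data (synthetic, for illustration only).
train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fit a single Gaussian density model (stand-in for a deep generative model).
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False) + 1e-6 * np.eye(2)  # regularized covariance
cov_inv = np.linalg.inv(cov)
logdet = np.log(np.linalg.det(cov))

def log_likelihood(x):
    """Gaussian log-density log N(x; mu, cov) for a single 2-D sample."""
    diff = x - mu
    return -0.5 * (diff @ cov_inv @ diff + logdet + 2 * np.log(2 * np.pi))

# Flag anything below the 5th percentile of training log-likelihoods.
train_ll = np.array([log_likelihood(x) for x in train])
threshold = np.quantile(train_ll, 0.05)

def is_outlier(x):
    return log_likelihood(x) < threshold

in_dist = np.array([0.1, -0.2])   # near the training mean
far_ood = np.array([8.0, 8.0])    # far outside the training distribution
```

The same threshold-on-likelihood recipe carries over when the Gaussian is replaced by a learned deep generative model scoring held-out samples.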