564 research outputs found
Model Selection for Stochastic Block Models
As a flexible representation for complex systems, networks (graphs) model entities and their interactions as nodes and edges. In many real-world networks, nodes divide naturally into functional communities, where nodes in the same group connect to the rest of the network in similar ways. Discovering such communities is an important part of modeling networks, as community structure offers clues to the processes which generated the graph. The stochastic block model is a popular network model based on community structure. It splits nodes into blocks, within which all nodes are stochastically equivalent in terms of how they connect to the rest of the network. As a generative model, it has a well-defined likelihood function with consistent parameter estimates. It is also highly flexible, capable of modeling a wide variety of community structures, including degree-specific and overlapping communities. The performance of different block models varies under different scenarios, so picking the right model is crucial for successful network modeling. A good model choice should balance the trade-off between complexity and fit. The task of model selection is to automatically choose such a model given the data and the inference task. As a problem of wide interest, numerous statistical model selection techniques have been developed for classic independent data. Unfortunately, it has been a common mistake to use these techniques in block models without rigorous examination of their derivations, ignoring the fact that some of their fundamental assumptions have been violated by moving into the domain of relational data sets such as networks. In this dissertation, I thoroughly examine the literature on statistical model selection techniques, including both frequentist and Bayesian approaches. My goal is to develop principled statistical model selection criteria for block models by adapting classic methods for network data.
I do this by running bootstrap simulations with an efficient algorithm and correcting classic model selection theories for block models based on the simulation data. The new model selection methods are verified on both synthetic and real-world data sets.
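The complexity-versus-fit trade-off described above can be made concrete with a small sketch (plain NumPy; the helper names and the BIC-style penalty are illustrative assumptions, not the dissertation's corrected criteria): sample a graph from a planted two-block SBM, then compare the planted partition against a single-block model using a penalized profile log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(sizes, P):
    """Sample an undirected adjacency matrix from a stochastic block model."""
    z = np.repeat(np.arange(len(sizes)), sizes)   # block label for each node
    probs = P[z[:, None], z[None, :]]             # per-pair edge probabilities
    A = (rng.random(probs.shape) < probs).astype(int)
    A = np.triu(A, 1)
    return A + A.T, z                             # symmetric, no self-loops

def sbm_log_lik(A, z):
    """Profile log-likelihood of A under partition z (Bernoulli SBM, MLE rates)."""
    K = z.max() + 1
    ll = 0.0
    for r in range(K):
        for s in range(r, K):
            mask = np.outer(z == r, z == s)
            if r == s:
                mask = np.triu(mask, 1)           # count each within-block pair once
            n, m = mask.sum(), A[mask].sum()
            if n and 0 < m < n:                   # a rate of 0 or 1 contributes 0
                p = m / n
                ll += m * np.log(p) + (n - m) * np.log(1 - p)
    return ll

A, z_true = sample_sbm([40, 40], np.array([[0.30, 0.05], [0.05, 0.30]]))
n_pairs = A.shape[0] * (A.shape[0] - 1) / 2

def bic(A, z):
    """Penalize the K(K+1)/2 block-rate parameters, BIC-style."""
    K = z.max() + 1
    return -2 * sbm_log_lik(A, z) + K * (K + 1) / 2 * np.log(n_pairs)

z_one = np.zeros(A.shape[0], dtype=int)           # single-block (Erdos-Renyi) model
assert bic(A, z_true) < bic(A, z_one)             # the planted 2-block model wins
```

As the abstract warns, this kind of i.i.d.-derived penalty is exactly what needs re-examination for relational data; the bootstrap-corrected criteria developed in the dissertation are meant to replace such naive penalties.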
A Comprehensive Review of Community Detection in Graphs
The study of complex networks has significantly advanced our understanding of
community structures, which serve as a crucial feature of real-world graphs.
Detecting communities in graphs is a challenging problem with applications in
sociology, biology, and computer science. Despite the efforts of an
interdisciplinary community of scientists, a satisfactory solution to this
problem has not yet been achieved. This review article delves into the topic of
community detection in graphs, which plays a crucial role in understanding
the organization and functioning of complex systems. We begin by introducing
the concept of community structure, which refers to the arrangement of vertices
into clusters, with strong internal connections and weaker connections between
clusters. Then, we provide a thorough exposition of various community detection
methods, including a new method designed by us. Additionally, we explore
real-world applications of community detection in diverse networks. In
conclusion, this comprehensive review provides a deep understanding of
community detection in graphs. It serves as a valuable resource for researchers
and practitioners in multiple disciplines, offering insights into the
challenges, methodologies, and applications of community detection in complex
networks.
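A common way to quantify the "strong internal connections and weaker connections between clusters" described above is Newman-Girvan modularity; a minimal NumPy sketch on a toy graph (the graph and both partitions are chosen purely for illustration):

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity of a partition of an undirected graph."""
    k = A.sum(axis=1)                          # node degrees
    two_m = k.sum()                            # twice the number of edges
    same = labels[:, None] == labels[None, :]  # same-community indicator
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1

good = np.array([0, 0, 0, 1, 1, 1])   # one community per triangle
bad = np.array([0, 1, 0, 1, 0, 1])    # labels that ignore the structure
assert modularity(A, good) > modularity(A, bad)
```

Many of the detection methods such a review surveys search, directly or indirectly, for partitions that score well under measures like this one.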
Bayesian stochastic blockmodels for community detection in networks and community-structured covariance selection
Networks have been widely used to describe interactions among objects in diverse fields. Given the interest in explaining a network by its structure, much attention has been drawn to finding clusters of nodes with dense connections within clusters but sparse connections between clusters. Such clusters are called communities, and identifying such clusters is known as community detection. Here, to perform community detection, I focus on stochastic blockmodels (SBM), a class of statistically-based generative models. I present a flexible SBM that represents different types of data as well as node attributes under a Bayesian framework. The proposed models explicitly capture community behavior by guaranteeing that connections are denser within communities than between communities.
First, I present a degree-corrected SBM based on a logistic regression formulation to model binary networks. To fit the model, I obtain posterior samples via Gibbs sampling based on Polya-Gamma latent variables. I conduct inference based on a novel, canonically mapped centroid estimator that formally addresses label non-identifiability and captures representative community assignments. Next, to accommodate large-scale datasets, I further extend the degree-corrected SBM to a broader family of generalized linear models with group correction terms. To conduct exact inference efficiently, I develop an iteratively-reweighted least squares procedure that implicitly updates sufficient statistics on the network to obtain maximum a posteriori (MAP) estimators. I demonstrate the proposed model and estimation on simulated benchmark networks and various real-world datasets.
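The logistic-regression formulation mentioned above can be sketched as follows (parameter values are illustrative, not fitted): the log-odds of an edge decompose into two degree-correction terms and a block-interaction term, and a larger diagonal in the block matrix makes within-community edges denser than between-community edges.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_prob(theta, B, z, i, j):
    """P(A_ij = 1) under a logistic degree-corrected SBM:
    log-odds = theta_i + theta_j + B[z_i, z_j]."""
    return sigmoid(theta[i] + theta[j] + B[z[i], z[j]])

# Illustrative parameters: 4 nodes, 2 blocks, diagonal of B dominates,
# which encodes the "denser within than between" community constraint.
theta = np.array([0.5, -0.5, 0.2, -0.2])   # degree-correction terms
B = np.array([[1.0, -2.0],
              [-2.0, 1.0]])                 # block log-odds matrix
z = np.array([0, 0, 1, 1])                  # community assignments

# A within-block pair (0,1) is more likely than a between-block pair (0,2).
assert edge_prob(theta, B, z, 0, 1) > edge_prob(theta, B, z, 0, 2)
```

The Polya-Gamma Gibbs sampler and the centroid estimator from the abstract operate on exactly this kind of likelihood, but their machinery is beyond a short sketch.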
Finally, I develop a Bayesian SBM for community-structured covariance selection. Here, I assume that the data at each node are Gaussian and that there is a latent network in which two nodes are not connected if their observations are conditionally independent given the observations at the other nodes. In the context of biological and social applications, I expect this latent network to show a block dependency structure that represents community behavior. Thus, to identify the latent network and detect communities, I propose a hierarchical prior with two levels: a spike-and-slab prior on off-diagonal entries of the concentration matrix for variable selection, and a degree-corrected SBM to capture community behavior. I develop an efficient routine based on ridge regularization and MAP estimation to conduct inference.
Computation in Complex Networks
Complex networks are one of the most challenging research focuses of many disciplines, including physics, mathematics, biology, medicine, engineering, and computer science. Interest in complex networks keeps growing due to their ability to model many everyday systems, such as technology networks, the Internet, and communication, chemical, neural, social, political, and financial networks. The Special Issue "Computation in Complex Networks" of Entropy offers a multidisciplinary view of how some complex systems behave, providing a collection of original, high-quality papers within the research fields of community detection, complex network modelling, complex network analysis, node classification, information spreading and control, network robustness, social networks, and network medicine.
Advances in Learning and Understanding with Graphs through Machine Learning
Graphs have increasingly become a crucial way of representing large, complex and disparate datasets from a range of domains, including many scientific disciplines. Graphs are particularly useful at capturing complex relationships or interdependencies within or even between datasets, and enable unique insights which are not possible with other data formats. Over recent years, significant improvements have been made in the ability of machine learning approaches to automatically learn from and identify patterns in datasets.
However, due to the unique nature of graphs and the data they are used to represent, employing machine learning with graphs has thus far proved challenging. A review of relevant literature has revealed that key challenges include issues arising with macro-scale graph learning, the interpretability of machine-learned representations, and a failure to incorporate the temporal dimension present in many datasets. Thus, the work and contributions presented in this thesis primarily investigate how modern machine learning techniques can be adapted to tackle key graph mining tasks, with a particular focus on optimal macro-level representation, interpretability and incorporating temporal dynamics into the learning process. The majority of methods employed are novel approaches centered around attempting to use artificial neural networks to learn from graph datasets.
Firstly, by devising a novel graph fingerprint technique, it is demonstrated that this can successfully be applied to two different tasks, namely graph comparison and classification, whilst out-performing established baselines. Secondly, it is shown that a mapping can be found between certain topological features and graph embeddings. This, perhaps for the first time, suggests that machines may be learning something analogous to human knowledge acquisition, thus bringing interpretability to the graph embedding process. Thirdly, in exploring two new models for incorporating temporal information into the graph learning process, it is found that including such information is crucial to predictive performance in certain key tasks, such as link prediction, where state-of-the-art baselines are out-performed.
The overall contribution of this work is to provide greater insight into and explanation of the ways in which machine learning with respect to graphs is emerging as a crucial set of techniques for understanding complex datasets. This is important as these techniques can potentially be applied to a broad range of scientific disciplines. The thesis concludes with an assessment of limitations and recommendations for future research.
Adversarial Attack on Community Detection by Hiding Individuals
It has been demonstrated that adversarial graphs, i.e., graphs with
imperceptible perturbations added, can cause deep graph models to fail on
node/graph classification tasks. In this paper, we extend adversarial graphs to
the problem of community detection, which is much more difficult. We focus on
the black-box attack setting and aim to hide targeted individuals from the detection of
deep graph community detection models, which has many applications in
real-world scenarios, for example, protecting personal privacy in social
networks and understanding camouflage patterns in transaction networks. We
propose an iterative learning framework that takes turns to update two modules:
one working as the constrained graph generator and the other as the surrogate
community detection model. We also find that the adversarial graphs generated
by our method can be transferred to other learning-based community detection
models.
Comment: In Proceedings of The Web Conference 2020, April 20-24, 2020, Taipei, Taiwan. 11 pages.
Exploring QCD matter in extreme conditions with Machine Learning
In recent years, machine learning has emerged as a powerful computational
tool and novel problem-solving perspective for physics, offering new avenues
for studying strongly interacting QCD matter properties under extreme
conditions. This review article aims to provide an overview of the current
state of this intersection of fields, focusing on the application of machine
learning to theoretical studies in high energy nuclear physics. It covers
diverse aspects, including heavy ion collisions, lattice field theory, and
neutron stars, and discusses how machine learning can be used to explore and
facilitate the physics goals of understanding QCD matter. The review also
provides an overview of common methodology, ranging from data-driven to
physics-driven perspectives. We conclude by
discussing the challenges and future prospects of machine learning applications
in high energy nuclear physics, also underscoring the importance of
incorporating physics priors into the purely data-driven learning toolbox. This
review highlights the critical role of machine learning as a valuable
computational paradigm for advancing physics exploration in high energy nuclear
physics.
Comment: 146 pages, 53 figures.
Unsupervised Representation Learning for Homogeneous, Heterogeneous, and Tree-shaped Graphs
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2022. 8. Advisor: Jin Young Choi.
The goal of unsupervised graph representation learning is to extract useful node-wise or graph-wise vector representations that are aware of the intrinsic structure of the graph and its attributes. Recently, designing unsupervised graph representation learning models based on graph neural networks has attracted growing attention due to their powerful representation ability. Many methods focus on homogeneous graphs, that is, networks with a single type of node and a single type of edge. However, as many types of relationships exist in the world, graphs can also be classified into various types by their structural and semantic properties. For this reason, to learn useful representations from graphs, an unsupervised learning framework must consider the characteristics of the input graph. In this dissertation, we focus on designing unsupervised learning models using graph neural networks for three graph structures that are widely available: homogeneous graphs, tree-like graphs, and heterogeneous graphs.
First, we propose a symmetric graph convolutional autoencoder which produces a low-dimensional latent representation from a homogeneous graph. In contrast to existing graph autoencoders with asymmetric decoder parts, the proposed autoencoder has a newly designed decoder which builds a completely symmetric autoencoder form. For the reconstruction of node features, the decoder is designed based on Laplacian sharpening as the counterpart of the Laplacian smoothing in the encoder, which allows the graph structure to be utilized throughout the whole of the proposed autoencoder architecture. To prevent the numerical instability caused by introducing Laplacian sharpening, we further propose a new numerically stable form of Laplacian sharpening by incorporating signed graphs. The experimental results of clustering, link prediction and visualization tasks on homogeneous graphs strongly support that the proposed model is stable and outperforms various state-of-the-art algorithms.
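A minimal sketch of the smoothing/sharpening pair underlying this design (GCN-style renormalized operator on a 3-node path graph; the thesis's signed-graph stabilization is not reproduced here):

```python
import numpy as np

def norm_adj(A):
    """Renormalized operator D^-1/2 (A + I) D^-1/2 used for GCN-style smoothing."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

# Path graph 0-1-2 with a scalar feature per node.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[0.0], [1.0], [2.0]])

S = norm_adj(A)
smooth = S @ X                      # Laplacian smoothing: neighbors move closer
sharp = (2 * np.eye(3) - S) @ X     # Laplacian sharpening: differences amplified

spread = lambda H: H.max() - H.min()
assert spread(smooth) < spread(X) < spread(sharp)
```

Smoothing averages each node with its neighbors and shrinks feature differences; sharpening, its mirror image 2I - S, amplifies them, which is what lets a decoder built this way act as the counterpart of the smoothing encoder.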
Second, we analyze how unsupervised tasks can benefit from representations learned in hyperbolic space. To explore how well the hierarchical structure of unlabeled data can be represented in hyperbolic spaces, we design a novel hyperbolic message passing autoencoder whose entire auto-encoding is performed in hyperbolic space. The proposed model auto-encodes networks by fully utilizing hyperbolic geometry in message passing. Through extensive quantitative and qualitative analyses, we validate the properties and benefits of the unsupervised hyperbolic representations of tree-like graphs.
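The appeal of hyperbolic space for tree-like data can be seen directly from the Poincare-ball distance (a standard formula; the example points are illustrative):

```python
import numpy as np

def poincare_dist(u, v):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - u @ u) * (1 - v @ v)
    return np.arccosh(1 + 2 * sq / denom)

# A "root" at the origin and two "leaves" near the boundary: distances grow
# rapidly toward the boundary, so a tree's exponentially many leaves can all
# be placed far apart, which is why hierarchies embed with low distortion.
root = np.array([0.0, 0.0])
leaf_a = np.array([0.9, 0.0])
leaf_b = np.array([-0.9, 0.0])

# The geodesic between the opposite leaves runs through the origin, so their
# distance is exactly the sum of the two root-to-leaf distances.
assert abs(poincare_dist(leaf_a, leaf_b)
           - 2 * poincare_dist(root, leaf_a)) < 1e-9
```

In Euclidean space the leaf-to-leaf distance would be only twice the norm of each leaf; in the ball it is already about 5.9 versus a Euclidean 1.8, and it diverges as points approach the boundary.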
Third, we propose the novel concept of a metanode for message passing, to learn both heterogeneous and homogeneous relationships between any two nodes without meta-paths and meta-graphs. Unlike conventional methods, metanodes do not require a predetermined step that manipulates the given relations between different types to enrich relational information. Going one step further, we propose a metanode-based message passing layer and a contrastive learning model using the proposed layer. In our experiments, we show the competitive performance of the proposed metanode-based message passing method on node clustering and node classification tasks, compared to state-of-the-art message passing networks for heterogeneous graphs.
1 Introduction
2 Representation Learning on Graph-Structured Data
2.1 Basic Introduction
2.1.1 Notations
2.2 Traditional Approaches
2.2.1 Graph Statistic
2.2.2 Neighborhood Overlap
2.2.3 Graph Kernel
2.2.4 Spectral Approaches
2.3 Node Embeddings I: Factorization and Random Walks
2.3.1 Factorization-based Methods
2.3.2 Random Walk-based Methods
2.4 Node Embeddings II: Graph Neural Networks
2.4.1 Overview of Framework
2.4.2 Representative Models
2.5 Learning in Unsupervised Environments
2.5.1 Predictive Coding
2.5.2 Contrastive Coding
2.6 Applications
2.6.1 Classifications
2.6.2 Link Prediction
3 Autoencoder Architecture for Homogeneous Graphs
3.1 Overview
3.2 Preliminaries
3.2.1 Spectral Convolution on Graphs
3.2.2 Laplacian Smoothing
3.3 Methodology
3.3.1 Laplacian Sharpening
3.3.2 Numerically Stable Laplacian Sharpening
3.3.3 Subspace Clustering Cost for Image Clustering
3.3.4 Training
3.4 Experiments
3.4.1 Datasets
3.4.2 Experimental Settings
3.4.3 Comparing Methods
3.4.4 Node Clustering
3.4.5 Image Clustering
3.4.6 Ablation Studies
3.4.7 Link Prediction
3.4.8 Visualization
3.5 Summary
4 Autoencoder Architecture for Tree-like Graphs
4.1 Overview
4.2 Preliminaries
4.2.1 Hyperbolic Embeddings
4.2.2 Hyperbolic Geometry
4.3 Methodology
4.3.1 Geometry-Aware Message Passing
4.3.2 Nonlinear Activation
4.3.3 Loss Function
4.4 Experiments
4.4.1 Datasets
4.4.2 Compared Methods
4.4.3 Experimental Details
4.4.4 Node Clustering and Link Prediction
4.4.5 Image Clustering
4.4.6 Structure-Aware Unsupervised Embeddings
4.4.7 Hyperbolic Distance to Filter Training Samples
4.4.8 Ablation Studies
4.5 Further Discussions
4.5.1 Connection to Contrastive Learning
4.5.2 Failure Cases of Hyperbolic Embedding Spaces
4.6 Summary
5 Contrastive Learning for Heterogeneous Graphs
5.1 Overview
5.2 Preliminaries
5.2.1 Meta-path
5.2.2 Representation Learning on Heterogeneous Graphs
5.2.3 Contrastive methods for Heterogeneous Graphs
5.3 Methodology
5.3.1 Definitions
5.3.2 Metanode-based Message Passing Layer
5.3.3 Contrastive Learning Framework
5.4 Experiments
5.4.1 Experimental Details
5.4.2 Node Classification
5.4.3 Node Clustering
5.4.4 Visualization
5.4.5 Effectiveness of Metanodes
5.5 Summary
6 Conclusions