
    Unsupervised Representation Learning for Homogeneous, Heterogeneous, and Tree-like Graphs

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: Jin Young Choi.

    The goal of unsupervised graph representation learning is to extract useful node-wise or graph-wise vector representations that reflect the intrinsic structure of the graph and its node attributes. Recently, the design of unsupervised graph representation learning models based on graph neural networks has attracted growing attention owing to their powerful representation ability. Many methods focus on homogeneous graphs, i.e., networks with a single type of node and a single type of edge. However, since many kinds of relationships exist in the world, graphs can also be classified into various types by their structural and semantic properties. For this reason, to learn useful representations from graphs, an unsupervised learning framework must take the characteristics of the input graph into account. In this dissertation, we design unsupervised learning models using graph neural networks for three widely available graph structures: homogeneous graphs, tree-like graphs, and heterogeneous graphs.

    First, we propose a symmetric graph convolutional autoencoder which produces a low-dimensional latent representation from a homogeneous graph. In contrast to existing graph autoencoders with asymmetric decoder parts, the proposed autoencoder has a newly designed decoder which builds a completely symmetric autoencoder form. For the reconstruction of node features, the decoder is designed based on Laplacian sharpening as the counterpart of the Laplacian smoothing of the encoder, which allows the graph structure to be utilized throughout the proposed autoencoder architecture. Since naively applying Laplacian sharpening can cause numerical instability, we further propose a numerically stable form of Laplacian sharpening that incorporates signed graphs, whose edge weights may take negative values. Experimental results on clustering, link prediction, and visualization tasks on homogeneous graphs strongly support that the proposed model is stable and outperforms various state-of-the-art algorithms.

    Second, we analyze how unsupervised tasks can benefit from representations learned in hyperbolic space, motivated by recent analyses showing that Euclidean space is ill-suited to embedding trees. To explore how well the hierarchical structure of unlabeled data can be represented, we design a novel hyperbolic message passing autoencoder whose entire auto-encoding is performed in hyperbolic space. The message passing fully exploits hyperbolic geometry, using hyperbolic distances, which carry hierarchy information, to weight the importance of each node's neighbors. Through extensive quantitative and qualitative analyses on citation networks, phylogenetic trees, and image networks, we validate the properties and benefits of unsupervised hyperbolic representations of tree-like graphs, which outperform their Euclidean counterparts on node clustering and link prediction for such data.

    Third, we propose the novel concept of a metanode for message passing on heterogeneous graphs, which have multiple types of nodes and edges. Existing methods depend on meta-paths or meta-graphs designed with substantial domain knowledge before training, and their edges mostly capture relations between different node types. Metanodes require no such predetermined step and learn both heterogeneous and homogeneous relationships between any two nodes, without meta-paths and meta-graphs. Going one step further, we propose a metanode-based message passing layer and a contrastive learning model using the proposed layer. In our experiments, the proposed metanode-based message passing method shows performance competitive with state-of-the-art message passing networks for heterogeneous graphs on node clustering and node classification tasks.
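    To make the smoothing/sharpening pairing concrete, the following Python sketch contrasts a Laplacian smoothing encoder layer with a numerically stable Laplacian sharpening decoder layer built on a signed graph, as the abstract describes. The tanh activation and the exact renormalization constants are illustrative assumptions rather than the dissertation's precise formulation.

```python
import numpy as np

def normalize_sym(M):
    # Symmetric renormalization D^{-1/2} M D^{-1/2}, where a node's degree is
    # the sum of absolute edge weights in its row (this also covers signed graphs).
    d = np.abs(M).sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return M * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def smoothing_layer(A, H, W):
    # Encoder step: Laplacian smoothing with self-loops pulls each node's
    # representation toward those of its neighbors.
    A_tilde = A + np.eye(A.shape[0])
    return np.tanh(normalize_sym(A_tilde) @ H @ W)

def sharpening_layer(A, H, W):
    # Decoder step: Laplacian sharpening pushes each node's representation
    # away from its neighbors. Writing it as the signed graph 2I - A and
    # renormalizing by absolute-value degrees avoids the instability of the
    # naive form 2I - D^{-1/2} A D^{-1/2}.
    A_hat = 2.0 * np.eye(A.shape[0]) - A
    return np.tanh(normalize_sym(A_hat) @ H @ W)

if __name__ == "__main__":
    A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy path graph
    rng = np.random.default_rng(0)
    H = rng.normal(size=(3, 4))
    Z = smoothing_layer(A, H, rng.normal(size=(4, 2)))       # encode
    H_rec = sharpening_layer(A, Z, rng.normal(size=(2, 4)))  # decode
    print(H_rec.shape)  # (3, 4)
```

    Because the signed adjacency 2I - A is renormalized by its absolute-value degrees, the propagation matrix has spectral radius at most one, which is the kind of stability the abstract refers to.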
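    The hyperbolic message passing autoencoder can be sketched in a similar spirit. The snippet below works in the Poincare ball: neighbor features are averaged in the tangent space at the origin, with weights given by a softmax over negative hyperbolic distances, echoing the distance-based neighbor importance mentioned in the abstract. The origin-based maps and the softmax weighting are simplifying assumptions, not the dissertation's exact layer.

```python
import numpy as np

def poincare_dist(x, y, eps=1e-9):
    # Geodesic distance between two points inside the unit (Poincare) ball.
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y)) + eps
    return np.arccosh(1.0 + num / den)

def log0(x, eps=1e-9):
    # Logarithmic map at the origin: ball -> tangent space.
    n = np.linalg.norm(x)
    return np.arctanh(min(n, 1.0 - 1e-7)) * x / (n + eps)

def exp0(v, eps=1e-9):
    # Exponential map at the origin: tangent space -> ball.
    n = np.linalg.norm(v)
    return np.tanh(n) * v / (n + eps)

def hyperbolic_message_passing(X, adj):
    # One aggregation step: closer neighbors (in hyperbolic distance) get
    # larger weights; averaging happens in the tangent space at the origin.
    out = X.copy()
    for i in range(len(X)):
        nbrs = np.where(adj[i] > 0)[0]
        if len(nbrs) == 0:
            continue
        d = np.array([poincare_dist(X[i], X[j]) for j in nbrs])
        w = np.exp(-d)
        w /= w.sum()
        tangent = sum(w_j * log0(X[j]) for w_j, j in zip(w, nbrs))
        out[i] = exp0(tangent)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 2))
    X = 0.3 * X / np.linalg.norm(X, axis=1, keepdims=True)  # points in the ball
    adj = (rng.random((5, 5)) > 0.5).astype(float)
    np.fill_diagonal(adj, 0)
    print(hyperbolic_message_passing(X, adj))
```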
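    The contrastive part of the third contribution can be illustrated independently of the metanode construction (detailed in Chapter 5). Below is a standard InfoNCE-style objective over two views of node embeddings, the kind of loss commonly paired with message passing encoders in graph contrastive learning; the temperature and the two-view setup are assumptions, not the dissertation's exact framework.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # z1, z2: (n, d) embeddings of the same n nodes under two views; row i of
    # z1 and row i of z2 form the positive pair, all other rows are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp_logits = np.exp(logits)
    loss = -np.log(np.diag(exp_logits) / exp_logits.sum(axis=1))
    return loss.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.normal(size=(8, 16))
    z_aug = z + 0.1 * rng.normal(size=(8, 16))  # a slightly perturbed view
    print(info_nce(z, z_aug))
```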
    Contents:
    1 Introduction
    2 Representation Learning on Graph-Structured Data
      2.1 Basic Introduction
        2.1.1 Notations
      2.2 Traditional Approaches
        2.2.1 Graph Statistic
        2.2.2 Neighborhood Overlap
        2.2.3 Graph Kernel
        2.2.4 Spectral Approaches
      2.3 Node Embeddings I: Factorization and Random Walks
        2.3.1 Factorization-based Methods
        2.3.2 Random Walk-based Methods
      2.4 Node Embeddings II: Graph Neural Networks
        2.4.1 Overview of Framework
        2.4.2 Representative Models
      2.5 Learning in Unsupervised Environments
        2.5.1 Predictive Coding
        2.5.2 Contrastive Coding
      2.6 Applications
        2.6.1 Classifications
        2.6.2 Link Prediction
    3 Autoencoder Architecture for Homogeneous Graphs
      3.1 Overview
      3.2 Preliminaries
        3.2.1 Spectral Convolution on Graphs
        3.2.2 Laplacian Smoothing
      3.3 Methodology
        3.3.1 Laplacian Sharpening
        3.3.2 Numerically Stable Laplacian Sharpening
        3.3.3 Subspace Clustering Cost for Image Clustering
        3.3.4 Training
      3.4 Experiments
        3.4.1 Datasets
        3.4.2 Experimental Settings
        3.4.3 Comparing Methods
        3.4.4 Node Clustering
        3.4.5 Image Clustering
        3.4.6 Ablation Studies
        3.4.7 Link Prediction
        3.4.8 Visualization
      3.5 Summary
    4 Autoencoder Architecture for Tree-like Graphs
      4.1 Overview
      4.2 Preliminaries
        4.2.1 Hyperbolic Embeddings
        4.2.2 Hyperbolic Geometry
      4.3 Methodology
        4.3.1 Geometry-Aware Message Passing
        4.3.2 Nonlinear Activation
        4.3.3 Loss Function
      4.4 Experiments
        4.4.1 Datasets
        4.4.2 Compared Methods
        4.4.3 Experimental Details
        4.4.4 Node Clustering and Link Prediction
        4.4.5 Image Clustering
        4.4.6 Structure-Aware Unsupervised Embeddings
        4.4.7 Hyperbolic Distance to Filter Training Samples
        4.4.8 Ablation Studies
      4.5 Further Discussions
        4.5.1 Connection to Contrastive Learning
        4.5.2 Failure Cases of Hyperbolic Embedding Spaces
      4.6 Summary
    5 Contrastive Learning for Heterogeneous Graphs
      5.1 Overview
      5.2 Preliminaries
        5.2.1 Meta-path
        5.2.2 Representation Learning on Heterogeneous Graphs
        5.2.3 Contrastive Methods for Heterogeneous Graphs
      5.3 Methodology
        5.3.1 Definitions
        5.3.2 Metanode-based Message Passing Layer
        5.3.3 Contrastive Learning Framework
      5.4 Experiments
        5.4.1 Experimental Details
        5.4.2 Node Classification
        5.4.3 Node Clustering
        5.4.4 Visualization
        5.4.5 Effectiveness of Metanodes
      5.5 Summary
    6 Conclusions

    A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear Manifold Distributed Data Classification

    k Nearest Neighbors (kNN) is one of the most widely used supervised learning algorithms for classifying Gaussian distributed data, but it does not achieve good results when applied to nonlinear manifold distributed data, especially when only a very limited amount of labeled samples is available. In this paper, we propose a new graph-based kNN algorithm which can effectively handle both Gaussian distributed data and nonlinear manifold distributed data. To achieve this goal, we first propose a constrained Tired Random Walk (TRW) by constructing an R-level nearest-neighbor strengthened tree over the graph, and then compute a TRW matrix for similarity measurement purposes. After this, the nearest neighbors are identified according to the TRW matrix, and the class label of a query point is determined by the sum of all the TRW weights of its nearest neighbors. To deal with online situations, we also propose a new algorithm to handle sequential samples based on local neighborhood reconstruction. Comparison experiments are conducted on both synthetic and real-world data sets to demonstrate the validity of the proposed kNN algorithm and its improvements over other versions of the kNN algorithm. Given the widespread appearance of manifold structures in real-world problems and the popularity of the traditional kNN algorithm, the proposed manifold version of kNN shows promising potential for classifying manifold-distributed data. Comment: 32 pages, 12 figures, 7 tables
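    As a rough sketch of the classification rule described above (omitting the R-level nearest-neighbor strengthened tree), one can compute a tired-random-walk similarity matrix in closed form and label a query by the class that accumulates the largest TRW weight among its k most similar labeled points. The geometric-series formulation below is one common way to write such a walk and may differ from the paper's exact construction.

```python
import numpy as np

def tired_random_walk_matrix(A, alpha=0.5):
    # Accumulated transition mass of a walk that "tires" by a factor alpha
    # at each step: T = (1 - alpha) * sum_t (alpha * P)^t
    #                = (1 - alpha) * (I - alpha * P)^{-1}.
    # Assumes every node has at least one neighbor, so P is well defined.
    P = A / A.sum(axis=1, keepdims=True)
    n = A.shape[0]
    return (1.0 - alpha) * np.linalg.inv(np.eye(n) - alpha * P)

def trw_knn_predict(T, labels, labeled_idx, query_idx, k=5):
    # Rank labeled points by TRW similarity to the query, then sum the TRW
    # weights of the top-k per class and return the heaviest class.
    sims = T[query_idx, labeled_idx]
    top = np.argsort(sims)[-k:]
    scores = {}
    for t in top:
        c = labels[labeled_idx[t]]
        scores[c] = scores.get(c, 0.0) + sims[t]
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Two tight clusters joined by one weak link; node 5 should follow class 1.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
        A[i, j] = A[j, i] = 1.0
    A[2, 3] = A[3, 2] = 0.1  # weak bridge between the clusters
    T = tired_random_walk_matrix(A)
    labels = {0: 0, 1: 0, 3: 1, 4: 1}
    print(trw_knn_predict(T, labels, np.array([0, 1, 3, 4]), query_idx=5, k=2))
```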