No Pattern, No Recognition: a Survey about Reproducibility and Distortion Issues of Text Clustering and Topic Modeling
Extracting knowledge from unlabeled texts using machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), as may exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues: depending on the machine learning algorithm, the initialization can lead to variability in the results. Furthermore, distortions of the cluster geometry can be misleading, and among their causes the presence of outliers and anomalies can be a determining factor. Despite the relevance of initialization and outlier issues for text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures are referred to by different terms. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
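The initialization sensitivity discussed above is easy to reproduce. The sketch below is illustrative only (a plain Lloyd's k-means written here for the example, not any surveyed implementation): two hand-picked initializations on the same four points converge to two different, equally valid partitions.

```python
import numpy as np

def lloyd_kmeans(X, init_centroids, n_iter=20):
    """Plain Lloyd's algorithm: nearest-centroid assignment, then mean update."""
    centroids = init_centroids.astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return labels, centroids

# Four points at the corners of a unit square admit two equally good 2-clusterings.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

# Initialization A: left/right centroids -> vertical split.
labels_a, _ = lloyd_kmeans(X, np.array([[0.0, 0.5], [1.0, 0.5]]))
# Initialization B: bottom/top centroids -> horizontal split.
labels_b, _ = lloyd_kmeans(X, np.array([[0.5, 0.0], [0.5, 1.0]]))

print(labels_a, labels_b)  # two different stable partitions of the same data
```

Both runs reach a fixed point with the same within-cluster cost, which is exactly why reporting seeds and initialization schemes matters for reproducibility.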
Unsupervised Representation Learning for Homogeneous, Heterogeneous, and Tree-like Graphs
Ph.D. dissertation, Seoul National University, Department of Electrical and Computer Engineering, August 2022. Advisor: Jin Young Choi.
The goal of unsupervised graph representation learning is to extract useful node-wise or graph-wise vector representations that are aware of the intrinsic structure of the graph and its attributes. These days, the design of unsupervised graph representation learning methods based on graph neural networks has been attracting growing attention because of their powerful representation ability. Many methods focus on homogeneous graphs, i.e., networks with a single type of node and a single type of edge. However, since many kinds of relationships exist in the world, graphs can also be classified into various types according to their structural and semantic properties. For this reason, to learn useful representations from graphs, an unsupervised learning framework must take the characteristics of the input graph into account. In this dissertation, we focus on designing unsupervised learning models using graph neural networks for three widely available graph structures: homogeneous graphs, tree-like graphs, and heterogeneous graphs.
First, we propose a symmetric graph convolutional autoencoder which produces a low-dimensional latent representation from a homogeneous graph. In contrast to existing graph autoencoders with asymmetric decoder parts, the proposed autoencoder has a newly designed decoder which builds a completely symmetric autoencoder form. For the reconstruction of node features, the decoder is designed based on Laplacian sharpening as the counterpart of the Laplacian smoothing in the encoder, which allows the graph structure to be utilized in the whole process of the proposed autoencoder architecture. To prevent the numerical instability introduced by Laplacian sharpening, we further propose a new numerically stable form of Laplacian sharpening by incorporating signed graphs. The experimental results of clustering, link prediction, and visualization tasks on homogeneous graphs strongly support that the proposed model is stable and outperforms various state-of-the-art algorithms.
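The smoothing/sharpening pair can be sketched in a few lines of NumPy. This uses a simple row-normalized operator for illustration, not the dissertation's exact (re)normalized formulation: smoothing averages each node with its neighbors, and sharpening, taken as 2I minus the smoothing operator, amplifies the differences instead.

```python
import numpy as np

# Path graph on three nodes: 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

A_hat = A + np.eye(3)                    # add self-loops
D_hat_inv = np.diag(1.0 / A_hat.sum(axis=1))
S = D_hat_inv @ A_hat                    # Laplacian-smoothing operator (row-normalized)
T = 2 * np.eye(3) - S                    # Laplacian-sharpening counterpart: 2I - S

h = np.array([1.0, 0.0, 0.0])            # a non-smooth node feature
smoothed = S @ h                          # neighbor averaging -> lower variance
sharpened = T @ h                         # difference amplification -> higher variance

print(np.var(h), np.var(smoothed), np.var(sharpened))
```

Note that both operators leave a constant feature vector unchanged; only the non-smooth components are attenuated or amplified.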
Second, we analyze how unsupervised tasks can benefit from representations learned in hyperbolic space. To explore how well the hierarchical structure of unlabeled data can be represented in hyperbolic spaces, we design a novel hyperbolic message passing autoencoder whose entire auto-encoding is performed in hyperbolic space. The proposed model auto-encodes networks by fully utilizing hyperbolic geometry in message passing. Through extensive quantitative and qualitative analyses, we validate the properties and benefits of the unsupervised hyperbolic representations of tree-like graphs.
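One common model of hyperbolic space (used here purely for illustration; the dissertation's exact parameterization may differ) is the Poincare ball, whose geodesic distance blows up near the boundary. That exponentially growing "room" is what makes tree-like graphs embeddable with low distortion.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance in the Poincare ball model of hyperbolic space."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

origin = np.zeros(2)
near = np.array([0.10, 0.0])
far = np.array([0.99, 0.0])   # close to the boundary of the ball

# Moving the same Euclidean step near the boundary costs far more
# hyperbolic distance than the same step near the origin.
print(poincare_distance(origin, near), poincare_distance(origin, far))
```

For a point at Euclidean radius r, the distance from the origin reduces to log((1 + r) / (1 - r)), which is a handy sanity check for any implementation.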
Third, we propose the novel concept of the metanode for message passing, to learn both heterogeneous and homogeneous relationships between any two nodes without meta-paths and meta-graphs. Unlike conventional methods, metanodes do not require a predetermined step that manipulates the given relations between different types to enrich relational information. Going one step further, we propose a metanode-based message passing layer and a contrastive learning model using the proposed layer. In our experiments, we show the competitive performance of the proposed metanode-based message passing method on node clustering and node classification tasks when compared to state-of-the-art message passing networks for heterogeneous graphs.
1 Introduction 1
2 Representation Learning on Graph-Structured Data 4
2.1 Basic Introduction 4
2.1.1 Notations 5
2.2 Traditional Approaches 5
2.2.1 Graph Statistic 5
2.2.2 Neighborhood Overlap 7
2.2.3 Graph Kernel 9
2.2.4 Spectral Approaches 10
2.3 Node Embeddings I: Factorization and Random Walks 15
2.3.1 Factorization-based Methods 15
2.3.2 Random Walk-based Methods 16
2.4 Node Embeddings II: Graph Neural Networks 17
2.4.1 Overview of Framework 17
2.4.2 Representative Models 18
2.5 Learning in Unsupervised Environments 21
2.5.1 Predictive Coding 21
2.5.2 Contrastive Coding 22
2.6 Applications 24
2.6.1 Classifications 24
2.6.2 Link Prediction 26
3 Autoencoder Architecture for Homogeneous Graphs 27
3.1 Overview 27
3.2 Preliminaries 30
3.2.1 Spectral Convolution on Graphs 30
3.2.2 Laplacian Smoothing 32
3.3 Methodology 33
3.3.1 Laplacian Sharpening 33
3.3.2 Numerically Stable Laplacian Sharpening 34
3.3.3 Subspace Clustering Cost for Image Clustering 37
3.3.4 Training 39
3.4 Experiments 40
3.4.1 Datasets 40
3.4.2 Experimental Settings 42
3.4.3 Comparing Methods 42
3.4.4 Node Clustering 43
3.4.5 Image Clustering 45
3.4.6 Ablation Studies 46
3.4.7 Link Prediction 47
3.4.8 Visualization 47
3.5 Summary 49
4 Autoencoder Architecture for Tree-like Graphs 50
4.1 Overview 50
4.2 Preliminaries 52
4.2.1 Hyperbolic Embeddings 52
4.2.2 Hyperbolic Geometry 53
4.3 Methodology 55
4.3.1 Geometry-Aware Message Passing 56
4.3.2 Nonlinear Activation 57
4.3.3 Loss Function 58
4.4 Experiments 58
4.4.1 Datasets 59
4.4.2 Compared Methods 61
4.4.3 Experimental Details 62
4.4.4 Node Clustering and Link Prediction 64
4.4.5 Image Clustering 66
4.4.6 Structure-Aware Unsupervised Embeddings 68
4.4.7 Hyperbolic Distance to Filter Training Samples 71
4.4.8 Ablation Studies 74
4.5 Further Discussions 75
4.5.1 Connection to Contrastive Learning 75
4.5.2 Failure Cases of Hyperbolic Embedding Spaces 75
4.6 Summary 77
5 Contrastive Learning for Heterogeneous Graphs 78
5.1 Overview 78
5.2 Preliminaries 82
5.2.1 Meta-path 82
5.2.2 Representation Learning on Heterogeneous Graphs 82
5.2.3 Contrastive methods for Heterogeneous Graphs 83
5.3 Methodology 84
5.3.1 Definitions 84
5.3.2 Metanode-based Message Passing Layer 86
5.3.3 Contrastive Learning Framework 88
5.4 Experiments 89
5.4.1 Experimental Details 90
5.4.2 Node Classification 94
5.4.3 Node Clustering 96
5.4.4 Visualization 96
5.4.5 Effectiveness of Metanodes 97
5.5 Summary 99
6 Conclusions 101
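The metanode-based message passing summarized in the dissertation abstract above might be sketched as follows. This is an illustrative reading, not the dissertation's exact formulation: assume one metanode per relation instance that summarizes its two endpoints, after which each node aggregates its incident metanodes, so heterogeneous information flows without a hand-crafted meta-path.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy heterogeneous graph: 3 "author" nodes and 2 "paper" nodes,
# with authorship edges given as (author_index, paper_index) pairs.
authors = rng.normal(size=(3, 4))   # author features
papers = rng.normal(size=(2, 4))    # paper features
edges = [(0, 0), (1, 0), (1, 1), (2, 1)]

# Step 1: one metanode per edge, summarizing both endpoints.
metanodes = np.stack([(authors[a] + papers[p]) / 2.0 for a, p in edges])

# Step 2: each node is updated from the metanodes incident to it.
def aggregate(n_nodes, feats, incident):
    out = feats.copy()
    for i in range(n_nodes):
        idx = [m for m, nodes in enumerate(incident) if i in nodes]
        if idx:
            out[i] = metanodes[idx].mean(axis=0)
    return out

new_authors = aggregate(3, authors, [(a,) for a, _ in edges])
new_papers = aggregate(2, papers, [(p,) for _, p in edges])
print(new_authors.shape, new_papers.shape)
```

The point of the sketch is the data flow: relational context reaches a node through metanodes alone, with no predetermined meta-path enumeration step.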
Algorithms, applications and systems towards interpretable pattern mining from multi-aspect data
How do humans move around in urban space, and how do their movements differ when a city undergoes terrorist attacks? How do users behave in Massive Open Online Courses (MOOCs), and how do users who achieve certificates differ from those who do not? In what areas of the court do elite players, such as Stephen Curry and LeBron James, like to take their shots over the course of a game? How can we uncover the hidden habits that govern our online purchases? Are there unspoken agendas in how different states pass legislation of certain kinds? At the heart of these seemingly unconnected puzzles is the same mystery of multi-aspect mining, i.e., how can we mine and interpret hidden patterns from a dataset that simultaneously reveals the associations, or changes of associations, among various aspects of the data (e.g., a shot could be described with three aspects: player, time in the game, and area of the court)? Solving this problem could open the gates to a deep understanding of the underlying mechanisms of many real-world phenomena. While much of the research in multi-aspect mining contributes a broad scope of innovations in the mining part, interpretation of patterns from the perspective of users (or domain experts) is often overlooked. Questions like what users require from patterns, how good the patterns are, or how to read them have barely been addressed. Without efficient and effective ways of involving users in the process of multi-aspect mining, the results are likely to be difficult for them to comprehend.
This dissertation proposes the M^3 framework, which consists of multiplex pattern discovery, multifaceted pattern evaluation, and multipurpose pattern presentation, to tackle the challenges of multi-aspect pattern discovery. Based on this framework, we develop algorithms, applications, and analytic systems to enable interpretable pattern discovery from multi-aspect data. Following the concept of meaningful multiplex pattern discovery, we propose PairFac to close the gap between human information needs and naive mining optimization, and we demonstrate its effectiveness in the context of impact discovery in the aftermath of urban disasters. We develop iDisc to target the crossing of multiplex pattern discovery with multifaceted pattern evaluation; iDisc meets the specific information need of understanding multi-level, contrastive behavior patterns. As an example, we use iDisc to predict student performance outcomes in Massive Open Online Courses from users' latent behaviors. FacIt is an interactive visual analytic system that sits at the intersection of all three components and enables interpretable, fine-tunable, and scrutinizable pattern discovery from multi-aspect data. We demonstrate each work's significance and implications in its respective problem context. As a whole, this series of studies is an effort to instantiate the M^3 framework and push the field of multi-aspect mining towards a more human-centric process in real-world applications.
Exponential Family Embeddings
Word embeddings are a powerful approach for capturing semantic similarity among terms in a vocabulary. Exponential family embeddings extend the idea of word embeddings to other types of high-dimensional data. Exponential family embeddings have three ingredients: embeddings as latent variables, a predefined conditioning set for each observation called the context, and a conditional likelihood from the exponential family. The embeddings are inferred with a scalable algorithm. This thesis highlights three advantages of the exponential family embeddings model class: (A) The approximations used in existing methods such as word2vec can be understood as a biased stochastic gradient procedure on a specific type of exponential family embedding model, the Bernoulli embedding. (B) By choosing different likelihoods from the exponential family, we can generalize the task of learning distributed representations to different application domains. For example, we can learn embeddings of grocery items from shopping data, embeddings of movies from click data, or embeddings of neurons from recordings of zebrafish brains. On all three applications, we find exponential family embedding models to be more effective than other types of dimensionality reduction: they better reconstruct held-out data and find interesting qualitative structure. (C) Finally, the probabilistic modeling perspective allows us to incorporate structure and domain knowledge into the embedding space. We develop models for studying how language varies over time, how it differs between related groups of data, and how word usage differs between languages. Key to the success of these methods is that the embeddings share statistical information through hierarchical priors or neural networks. We demonstrate the benefits of this approach in empirical studies of Senate speeches, scientific abstracts, and shopping baskets.
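The core of a Bernoulli embedding can be sketched with tiny deterministic arrays. This shows only the positive-observation term of the conditional likelihood (the full model also conditions on zeros or uses negative samples, which is where the word2vec correspondence arises); all values here are made up for illustration.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Deterministic toy example: one observed item with a one-word context.
rho = np.array([0.1, 0.2])        # embedding of the observed item
ctx = np.array([0.3, -0.1])       # sum of the context vectors (the alphas)

def nll(rho, ctx):
    # Bernoulli conditional likelihood of "the item occurred" given its context.
    return -np.log(sigmoid(rho @ ctx))

# One gradient step on the negative log-likelihood with respect to rho.
grad = -(1.0 - sigmoid(rho @ ctx)) * ctx
rho_new = rho - 0.1 * grad

print(nll(rho, ctx), nll(rho_new, ctx))
```

Even this single step moves rho toward the context vector, which is the same geometric intuition behind word2vec's updates.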
Temporal Link Prediction: A Unified Framework, Taxonomy, and Review
Dynamic graphs serve as a generic abstraction and description of the
evolutionary behaviors of various complex systems (e.g., social networks and
communication networks). Temporal link prediction (TLP) is a classic yet
challenging inference task on dynamic graphs, which predicts possible future
linkage based on historical topology. The predicted future topology can be used
to support some advanced applications on real-world systems (e.g., resource
pre-allocation) for better system performance. This survey provides a
comprehensive review of existing TLP methods. Concretely, we first give the
formal problem statements and preliminaries regarding data models, task
settings, and learning paradigms that are commonly used in related research. A
hierarchical fine-grained taxonomy is further introduced to categorize existing
methods in terms of their data models, learning paradigms, and techniques. From
a generic perspective, we propose a unified encoder-decoder framework to
formulate all the methods reviewed, where different approaches only differ in
terms of some components of the framework. Moreover, we envision serving the
community with an open-source project OpenTLP that refactors or implements some
representative TLP methods using the proposed unified framework and summarizes
other public resources. In conclusion, we discuss advanced topics in recent research and highlight possible future directions.
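The unified encoder-decoder view of TLP described above can be illustrated with a deliberately trivial instantiation (both components here are toy stand-ins, not any method from the survey): the encoder maps historical snapshots to node embeddings, and the decoder scores every node pair for future linkage.

```python
import numpy as np

def encoder(snapshots):
    """Toy encoder: average the historical adjacency matrices and use rows as embeddings."""
    return np.mean(snapshots, axis=0)

def decoder(Z):
    """Toy decoder: inner-product scores squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Two historical snapshots of a 3-node dynamic graph.
A_t0 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
A_t1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)

scores = decoder(encoder([A_t0, A_t1]))   # predicted linkage at the next step
print(scores.shape)
```

Under this framing, the methods in the survey differ only in which encoder (e.g., a temporal GNN) and which decoder (e.g., a learned scorer) fill these two slots.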
Unsupervised Learning of Latent Structure from Linear and Nonlinear Measurements
University of Minnesota Ph.D. dissertation. June 2019. Major: Electrical Engineering. Advisor: Nicholas Sidiropoulos. 1 computer file (PDF); xii, 118 pages.
The past few decades have seen a rapid expansion of our digital world. While early dwellers of the Internet exchanged simple text messages via email, modern citizens of the digital world conduct a much richer set of activities online: entertainment, banking, booking restaurants and hotels, just to name a few. In our digitally enriched lives, we not only enjoy great convenience and efficiency, but also leave behind massive amounts of data that offer ample opportunities for improving these digital services and creating new ones. Meanwhile, technical advancements have facilitated the emergence of new sensors and networks that can measure, exchange, and log data about real-world events. These technologies have been applied to many different scenarios, including environmental monitoring, advanced manufacturing, healthcare, and scientific research in physics, chemistry, biotechnology, and social science. Leveraging this abundant data, learning-based and data-driven methods have become a dominant paradigm across different areas, with data analytics driving many of the recent developments. However, the massive amount of data also brings considerable challenges for analytics. Among them, the collected data are often high-dimensional, with the true knowledge and signal of interest hidden underneath. It is of great importance to reduce the data dimension and transform the data into the right space. In some cases, the data are generated from certain generative models that are identifiable, making it possible to reduce the data back to the original space.
In addition, we are often interested in performing some analysis on the data after dimensionality reduction (DR), and it is helpful to be mindful of these subsequent analysis steps when performing DR, as latent structures can serve as a valuable prior. Based on this reasoning, we develop two methods, one for the linear generative model case and one for the nonlinear case. In a related setting, we study parameter estimation under unknown nonlinear distortion. In this case, the unknown nonlinearity in the measurements poses a severe challenge. In practice, various mechanisms can introduce nonlinearity into the measured data. To combat this challenge, we put forth a nonlinear mixture model that is well-grounded in real-world applications. We show that this model is in fact identifiable up to some trivial indeterminacy. We develop an efficient algorithm to recover the latent parameters of this model and confirm the effectiveness of our theory and algorithm via numerical experiments.
Structured representation learning from complex data
This thesis advances several theoretical and practical aspects of the recently introduced restricted Boltzmann machine, a powerful probabilistic and generative framework for modelling data and learning representations. The contributions of this study represent a systematic and common theme in learning structured representations from complex data.
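For readers unfamiliar with the model named above, the basic mechanics of a binary restricted Boltzmann machine can be sketched as one block-Gibbs sweep (a generic textbook construction with made-up sizes and weights, not anything specific to this thesis): hidden units are sampled given the visibles, then visibles are re-sampled given the hiddens.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny binary RBM: 6 visible units, 3 hidden units.
W = rng.normal(scale=0.1, size=(6, 3))   # visible-hidden weights
b_v = np.zeros(6)                         # visible biases
b_h = np.zeros(3)                         # hidden biases

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def gibbs_step(v):
    """One block-Gibbs sweep: sample h given v, then v given h."""
    p_h = sigmoid(v @ W + b_h)            # P(h_j = 1 | v)
    h = (rng.random(3) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)          # P(v_i = 1 | h)
    v_new = (rng.random(6) < p_v).astype(float)
    return v_new, p_h, p_v

v0 = np.array([1.0, 0, 1, 0, 1, 0])
v1, p_h, p_v = gibbs_step(v0)
print(v1, p_h.round(2))
```

The bipartite structure is what makes both conditional distributions factorize, so each half of the sweep is a single vectorized operation.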