Graph Regularized Nonnegative Latent Factor Analysis Model for Temporal Link Prediction in Cryptocurrency Transaction Networks
With the development of blockchain technology, cryptocurrencies built on
blockchains are becoming increasingly popular. This has given rise to huge
cryptocurrency transaction networks that have received widespread attention.
Link prediction, which learns the structure of a network, helps to explain the
mechanisms of the network, so it is also widely studied in cryptocurrency
networks. However, the dynamics of cryptocurrency transaction networks have
been neglected in previous research. We use a graph regularization method to
link past transaction records with future transactions. Based on this, we
propose a single latent factor-dependent, non-negative, multiplicative and
graph regularized-incorporated update (SLF-NMGRU) algorithm and further propose
a graph regularized nonnegative latent factor analysis (GrNLFA) model. Finally,
experiments on a real cryptocurrency transaction network show that the proposed
method improves both accuracy and computational efficiency.
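The abstract does not give the update rule itself, but the general family it names (nonnegative latent factors fitted by multiplicative, graph-regularized updates) can be illustrated with a minimal sketch. The function name `grnlf_sketch`, the symmetric factorization A ≈ U Uᵀ, and the Laplacian smoothness weight `lam` are all illustrative assumptions, not the paper's actual SLF-NMGRU algorithm:

```python
import numpy as np

def grnlf_sketch(A, k=8, lam=0.1, iters=200, seed=0):
    """Illustrative sketch (not the paper's method): factor a nonnegative
    adjacency matrix A as A ~ U @ U.T with U >= 0, adding a graph-Laplacian
    smoothness term lam * tr(U.T @ L @ U) where L = D - W and W = A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.random((n, k))          # nonnegative random initialization
    W = A                           # regularization graph = transaction graph
    D = np.diag(W.sum(axis=1))      # degree matrix
    eps = 1e-9                      # avoid division by zero
    for _ in range(iters):
        # Multiplicative update: the ratio of the objective's positive and
        # negative gradient parts; it preserves nonnegativity of U.
        numer = A @ U + lam * (W @ U)
        denom = U @ (U.T @ U) + lam * (D @ U) + eps
        U *= numer / denom
    return U
```

Because every factor in the update is nonnegative, `U` stays in the nonnegative orthant without any projection step, which is the appeal of multiplicative rules for latent factor models on sparse transaction data.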
Temporal Link Prediction: A Unified Framework, Taxonomy, and Review
Dynamic graphs serve as a generic abstraction and description of the
evolutionary behaviors of various complex systems (e.g., social networks and
communication networks). Temporal link prediction (TLP) is a classic yet
challenging inference task on dynamic graphs, which predicts possible future
linkage based on historical topology. The predicted future topology can be used
to support some advanced applications on real-world systems (e.g., resource
pre-allocation) for better system performance. This survey provides a
comprehensive review of existing TLP methods. Concretely, we first give the
formal problem statements and preliminaries regarding data models, task
settings, and learning paradigms that are commonly used in related research. A
hierarchical fine-grained taxonomy is further introduced to categorize existing
methods in terms of their data models, learning paradigms, and techniques. From
a generic perspective, we propose a unified encoder-decoder framework to
formulate all the methods reviewed, where different approaches only differ in
terms of some components of the framework. Moreover, we envision serving the
community with an open-source project OpenTLP that refactors or implements some
representative TLP methods using the proposed unified framework and summarizes
other public resources. Finally, we discuss advanced topics in recent research
and highlight possible future directions.
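The survey's unified view, where every TLP method is an encoder (historical snapshots to node representations) plus a decoder (representations to future link scores), can be sketched in miniature. The exponential snapshot weighting, the spectral embedding, and the inner-product scorer below are placeholder choices of my own, not components from any surveyed method:

```python
import numpy as np

def encode(snapshots, decay=0.5, dim=4):
    """Toy encoder (assumption, not a surveyed method): aggregate adjacency
    snapshots with exponentially decaying weights favoring recent topology,
    then take a truncated spectral embedding as node representations."""
    agg = sum(decay ** t * A for t, A in enumerate(reversed(snapshots)))
    vals, vecs = np.linalg.eigh(agg)          # eigenpairs in ascending order
    return vecs[:, -dim:] * vals[-dim:]       # top-`dim` scaled eigenvectors

def decode(Z, i, j):
    """Toy decoder: inner-product score for a candidate future link (i, j)."""
    return float(Z[i] @ Z[j])
```

Under this framing, methods in the survey's taxonomy differ only in which encoder (e.g., recurrent, attention-based) and decoder (e.g., bilinear, MLP) they plug into the same pipeline.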
A Survey on Graph Representation Learning Methods
Graph representation learning has been a very active research area in recent
years. The goal of graph representation learning is to generate graph
representation vectors that capture the structure and features of large graphs
accurately. This is especially important because the quality of the graph
representation vectors will affect the performance of these vectors in
downstream tasks such as node classification, link prediction and anomaly
detection. Many techniques have been proposed for generating effective graph
representation vectors. Two of the most prevalent categories of graph
representation learning are graph embedding methods that do not use graph
neural networks (GNNs), which we denote as non-GNN based graph embedding
methods, and GNN-based methods. Non-GNN graph embedding methods are based on
techniques such as random walks, temporal point processes, and neural network
learning methods. GNN-based methods, on the other hand, apply deep learning
directly to graph data. In this survey, we provide an overview of these
two categories and cover the current state-of-the-art methods for both static
and dynamic graphs. Finally, we explore some open and ongoing research
directions for future work.
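The random-walk technique named for the non-GNN category can be illustrated with the corpus-generation step that DeepWalk-style methods feed into a skip-gram model. The function name and parameters below are illustrative, and the adjacency is assumed to be a plain dict mapping each node to its neighbor list:

```python
import random

def random_walks(adj, num_walks=10, walk_len=5, seed=0):
    """Generate truncated random walks over a graph given as an adjacency
    dict (node -> list of neighbors). DeepWalk-style embedding methods treat
    these walks as 'sentences' for a skip-gram model, so nodes that co-occur
    on short walks end up with similar embedding vectors."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):          # several passes over all nodes
        for start in adj:
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = adj[walk[-1]]
                if not nbrs:            # dead end: stop this walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

The embedding itself would then come from training a word2vec-style model on `walks`; only the walk-sampling step is sketched here.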