A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor Analysis
High-Dimensional and Incomplete (HDI) data are commonly encountered in big-data applications such as social network services systems, which concern the limited interactions among numerous nodes. Knowledge acquisition from HDI data is a vital issue in data science owing to the rich patterns embedded in such data, e.g., node behaviors, where the fundamental task is to perform HDI data representation learning. Nonnegative Latent Factor Analysis (NLFA) models have proven superior for addressing this issue, where a linear bias incorporation (LBI) scheme is important for preventing training overshooting and fluctuation, as well as keeping the model from premature convergence. However, existing LBI schemes are all static ones whose linear biases are fixed, which significantly restricts the scalability of the resultant NLFA model and causes a loss of representation learning ability on HDI data. Motivated by these observations, this paper presents a dynamic linear bias incorporation (DLBI) scheme. It first extends the linear bias vectors into matrices, and then builds a binary weight matrix that switches the active/inactive states of the linear biases. Each entry of the weight matrix switches between the binary states dynamically according to the variation of the corresponding linear bias value, thereby establishing dynamic linear biases for an NLFA model. Empirical studies on three HDI datasets from real applications demonstrate that the proposed DLBI-based NLFA model achieves higher representation accuracy than several state-of-the-art models, as well as highly competitive computational efficiency.
Comment: arXiv admin note: substantial text overlap with arXiv:2306.03911, arXiv:2302.12122, arXiv:2306.0364
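A minimal sketch of the switching mechanism described above, assuming a simple threshold rule on the iteration-to-iteration bias change; the rule, shapes, and function names are illustrative, not the paper's exact formulation:

```python
import numpy as np

def toggle_bias_states(B, B_prev, eps=1e-4):
    """Hypothetical switching rule: a linear bias stays active (1)
    while its value is still changing between iterations, and is
    switched off (0) once it has settled."""
    return (np.abs(B - B_prev) > eps).astype(float)

def predict(P, Q, B_u, B_i, W_u, W_i):
    """Rating estimate with dynamically gated linear bias matrices:
    the binary weights W_u, W_i mask inactive biases entry-wise,
    then each bias matrix row is collapsed to a per-node bias."""
    user_bias = (W_u * B_u).sum(axis=1, keepdims=True)    # (|U|, 1)
    item_bias = (W_i * B_i).sum(axis=1, keepdims=True).T  # (1, |I|)
    return P @ Q.T + user_bias + item_bias
```

The gating keeps the model's prediction rule unchanged while letting individual biases drop out of the estimate once they stop contributing.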
Graph Regularized Nonnegative Latent Factor Analysis Model for Temporal Link Prediction in Cryptocurrency Transaction Networks
With the development of blockchain technology, blockchain-based cryptocurrencies are becoming more and more popular. This has given birth to huge cryptocurrency transaction networks that have received widespread attention. Link prediction, which learns the structure of a network, is helpful for understanding the mechanism of the network, so it is also widely studied in cryptocurrency networks. However, the dynamics of cryptocurrency transaction networks have been neglected in past research. We use a graph regularization method to link past transaction records with future transactions. On this basis, we propose a single latent factor-dependent, non-negative, multiplicative and graph regularized-incorporated update (SLF-NMGRU) algorithm and further propose a graph regularized nonnegative latent factor analysis (GrNLFA) model. Finally, experiments on a real cryptocurrency transaction network show that the proposed method improves both accuracy and computational efficiency.
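The abstract does not spell out the SLF-NMGRU update itself; as a hedged illustration, a standard graph-regularized nonnegative multiplicative update in the style of GNMF looks like the following (the loss form and variable names are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def gnmf_update(R, U, V, A, lam=0.1, eps=1e-12):
    """One multiplicative update of U for the objective
    ||R - U V^T||_F^2 + lam * tr(U^T L U), where L = D - A is the
    graph Laplacian of adjacency A. All factors in the ratio are
    nonnegative, so U stays nonnegative from a nonnegative start."""
    D = np.diag(A.sum(axis=1))
    numer = R @ V + lam * (A @ U)
    denom = U @ (V.T @ V) + lam * (D @ U) + eps
    return U * (numer / denom)
```

The graph term pulls latent factors of adjacent nodes together, which is the role the regularizer plays in linking past and future transactions.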
An effective scheme for QoS estimation via alternating direction method-based matrix factorization
Accurately estimating unknown quality-of-service (QoS) data based on historical records of Web-service invocations is vital for automatic service selection. This work presents an effective scheme for addressing this issue via alternating direction method-based matrix factorization. Its main idea consists of a) adopting the principle of the alternating direction method to decompose the task of building a matrix factorization-based QoS-estimator into small subtasks, where each one trains a subset of desired parameters based on the latest status of the whole parameter set; b) building an ensemble of diversified single models with a sophisticated diversifying and aggregating mechanism; and c) parallelizing the construction process of the ensemble to drastically reduce the time cost. Experimental results on two industrial QoS datasets demonstrate that the proposed scheme achieves more accurate QoS estimates than its peers, with comparable computing time owing to its practical parallelization.
This work was supported in part by the FDCT (Fundo para o Desenvolvimento das Ciências e da Tecnologia) under Grant 119/2014/A3; in part by the National Natural Science Foundation of China under Grants 61370150 and 61433014; in part by the Young Scientist Foundation of Chongqing under Grant cstc2014kjrc-qnrc40005; in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant cstc2015jcyjB0244; in part by the Postdoctoral Science Funded Project of Chongqing under Grant Xm2014043; in part by the Fundamental Research Funds for the Central Universities under Grant 106112015CDJXY180005; in part by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20120191120030
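As a rough illustration of idea a) above — training one block of parameters at a time against the latest state of the rest — here is a generic alternating sweep over the observed QoS entries. This is a plain alternating least-squares analogue, not the paper's exact ADM-based decomposition or its ensembling machinery:

```python
import numpy as np

def als_step(R, mask, P, Q, lam=0.05):
    """One alternating sweep: solve each row of P with Q fixed, then
    each row of Q with P fixed, using only observed entries (mask=1).
    Each row solve exactly minimizes its regularized subproblem, so
    the overall objective never increases."""
    k = P.shape[1]
    for u in range(P.shape[0]):
        idx = mask[u].astype(bool)
        if idx.any():
            Qi = Q[idx]
            P[u] = np.linalg.solve(Qi.T @ Qi + lam * np.eye(k),
                                   Qi.T @ R[u, idx])
    for i in range(Q.shape[0]):
        idx = mask[:, i].astype(bool)
        if idx.any():
            Pu = P[idx]
            Q[i] = np.linalg.solve(Pu.T @ Pu + lam * np.eye(k),
                                   Pu.T @ R[idx, i])
    return P, Q
```

Because the row subproblems are independent, they are also the natural unit of parallelization mentioned in idea c).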
Large-scale Dynamic Network Representation via Tensor Ring Decomposition
Large-scale Dynamic Networks (LDNs) are becoming increasingly important in the Internet age. The dynamic nature of these networks captures the evolution of the network structure and how edge weights change over time, posing unique challenges for data analysis and modeling. A Latent Factorization of Tensors (LFT) model facilitates efficient representation learning for an LDN, but existing LFT models are almost all based on Canonical Polyadic Factorization (CPF). Therefore, this work proposes a model based on Tensor Ring (TR) decomposition for efficient representation learning for an LDN. Specifically, we incorporate the principle of the single latent factor-dependent, non-negative, and multiplicative update (SLF-NMU) into the TR decomposition model, and analyze the particular bias form of TR decomposition. Experimental studies on two real LDNs demonstrate that the proposed method achieves higher accuracy than existing models.
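For context, Tensor Ring decomposition represents each tensor entry as the trace of a product of core slices, T[i1,...,id] = tr(G1[:,i1,:] @ ... @ Gd[:,id,:]). A minimal NumPy reconstruction from given cores (illustrative only; this is the decomposition format, not the paper's training algorithm):

```python
import numpy as np

def tr_reconstruct(cores):
    """Rebuild a full tensor from Tensor Ring cores G_k of shape
    (r_k, n_k, r_{k+1}), with the last rank wrapping back to the
    first. Cores are contracted left to right, then the ring is
    closed by tracing the two boundary rank indices."""
    out = cores[0]                       # (r0, n1, r1)
    for G in cores[1:]:
        # attach the next core's mode index, carrying rank indices
        out = np.einsum('a...b,bjc->a...jc', out, G)
    # close the ring: trace over the matching boundary ranks
    return np.einsum('a...a->...', out)
```

Unlike CPF, the ring format gives every mode its own pair of rank indices, which is what makes the bias analysis in the abstract specific to TR.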
A Dual Latent State Learning Approach: Exploiting Regional Network Similarities for QoS Prediction
Individual objects, whether users or services, within a specific region often
exhibit similar network states due to their shared origin from the same city or
autonomous system (AS). Despite this regional network similarity, many existing
techniques overlook its potential, resulting in subpar performance arising from
challenges such as data sparsity and label imbalance. In this paper, we
introduce the regional-based dual latent state learning network (R2SL), a novel
deep learning framework designed to overcome the pitfalls of traditional
individual object-based prediction techniques in Quality of Service (QoS)
prediction. Unlike its predecessors, R2SL captures the nuances of regional
network behavior by deriving two distinct regional network latent states: the
city-network latent state and the AS-network latent state. These states are
constructed utilizing aggregated data from common regions rather than
individual object data. Furthermore, R2SL adopts an enhanced Huber loss
function that adjusts its linear loss component, providing a remedy for
prevalent label imbalance issues. To cap off the prediction process, a
multi-scale perception network is leveraged to interpret the integrated feature
map, a fusion of regional network latent features and other pertinent
information, ultimately accomplishing the QoS prediction. Through rigorous
testing on real-world QoS datasets, R2SL demonstrates superior performance
compared to prevailing state-of-the-art methods. Our R2SL approach ushers in an
innovative avenue for precise QoS predictions by fully harnessing the regional
network similarities inherent in objects.
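The enhanced Huber loss is described only qualitatively above; one plausible reading of "adjusting the linear loss component" is rescaling the slope of the linear branch, which softens the pull of the over-represented label range. The following is a hypothetical sketch under that assumption, not the paper's exact definition:

```python
import numpy as np

def enhanced_huber(residual, delta=1.0, slope=0.5):
    """Huber-style loss: quadratic for |r| <= delta, then linear with
    a rescaled slope (slope * delta instead of delta). The constant
    term is chosen so the two branches meet at |r| = delta."""
    r = np.abs(residual)
    quad = 0.5 * r ** 2
    lin = slope * delta * (r - delta) + 0.5 * delta ** 2
    return np.where(r <= delta, quad, lin)
```

With slope=1 this reduces to the standard Huber loss; smaller slopes down-weight large residuals more aggressively.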
Convergence analysis of single latent factor-dependent, nonnegative, and multiplicative update-based nonnegative latent factor models
National Natural Science Foundation of China under grants 61772493 and 61933007; Natural Science Foundation of Chongqing (China) under grant cstc2019jcyjjqX0013; Pioneer Hundred Talents Program of Chinese Academy of Sciences
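The SLF-NMU update named in the title is well established in the NLFA literature: each user factor is rescaled by the ratio of observed to estimated interactions, p_uk <- p_uk * (Σ_i r_ui q_ik) / (Σ_i r̂_ui q_ik) over the observed set Λ(u). A vectorized sketch using a binary observation mask (naming and masking details are illustrative):

```python
import numpy as np

def slf_nmu_update_P(R, mask, P, Q, eps=1e-12):
    """One SLF-NMU-style multiplicative pass over the user factors,
    restricted to observed entries via the binary mask. Since every
    factor in the ratio is nonnegative, P stays nonnegative from a
    nonnegative start, which is the property the convergence
    analysis builds on."""
    Rhat = P @ Q.T
    numer = (mask * R) @ Q
    denom = (mask * Rhat) @ Q + eps
    return P * (numer / denom)
```

At an exact factorization the ratio is identically one, so such a point is a fixed point of the update.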
A Fast Non-Negative Latent Factor Model Based on Generalized Momentum Method
National Key Research and Development Program of China (Grant Number: 2017YFC0804002); National Natural Science Foundation of China (Grant Number: 61772493 and 91646114); Chongqing Research Program of Technology Innovation and Application (Grant Number: cstc2017rgzn-zdyfX0020, cstc2017zdcy-zdyf0554 and cstc2017rgzn-zdyf0118); Chongqing Cultivation Program of Innovation and Entrepreneurship Demonstration Group (Grant Number: cstc2017kjrc-cxcytd0149); Chongqing Overseas Scholars Innovation Program (Grant Number: cx2017012 and cx2018011); Pioneer Hundred Talents Program of Chinese Academy of Sciences
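The generalized momentum method is not detailed here; as background, the classical momentum step that such generalized schemes extend looks like this (standard heavy-ball form, not necessarily the paper's variant):

```python
def momentum_step(x, grad, velocity, lr=0.01, beta=0.9):
    """Classical momentum update:
    v <- beta * v - lr * grad;  x <- x + v.
    The velocity accumulates past gradients, accelerating descent
    along persistent directions."""
    v = beta * velocity - lr * grad
    return x + v, v
```

A generalized momentum scheme typically changes how the velocity term enters the parameter update, here applied to non-negative latent factor training.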
Towards the Deployment of Machine Learning Solutions in Network Traffic Classification: A Systematic Survey
Traffic analysis is a compound of strategies intended to find relationships, patterns, anomalies, and misconfigurations, among other things, in Internet traffic. In particular, traffic classification is a subgroup of strategies in this field that aims at identifying the application's name or type of Internet traffic. Nowadays, traffic classification has become a challenging task due to the rise of new technologies, such as traffic encryption and encapsulation, which decrease the performance of classical traffic classification strategies. Machine Learning gains interest as a new direction in this field, showing signs of future success, such as knowledge extraction from encrypted traffic, and more accurate Quality of Service management. Machine Learning is fast becoming a key tool to build traffic classification solutions in real network traffic scenarios; in this sense, the purpose of this investigation is to explore the elements that allow this technique to work in the traffic classification field. Therefore, a systematic review is introduced based on the steps to achieve traffic classification by using Machine Learning techniques. The main aim is to understand and to identify the procedures followed by the existing works to achieve their goals. As a result, this survey paper finds a set of trends derived from the analysis performed on this domain; in this manner, the authors expect to outline future directions for Machine Learning based traffic classification.
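As a toy illustration of the flow-feature pipeline such surveys review — statistical features that survive payload encryption, fed to a simple classifier — where the feature choices, the data, and the centroid classifier are illustrative assumptions rather than any surveyed system:

```python
import numpy as np

def flow_features(pkt_sizes, iats):
    """Per-flow statistical features usable even on encrypted traffic:
    packet-size and inter-arrival-time summaries plus packet count."""
    return np.array([np.mean(pkt_sizes), np.std(pkt_sizes),
                     np.mean(iats), np.std(iats),
                     float(len(pkt_sizes))])

def nearest_centroid(train_X, train_y, x):
    """Classify a flow by the closest class centroid in feature space."""
    labels = sorted(set(train_y))
    centroids = {c: train_X[np.asarray(train_y) == c].mean(axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(x - centroids[c]))
```

Real deployments reviewed in the survey replace the centroid rule with stronger learners, but the extract-features-then-classify structure is the same.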