Distributed Temporal Link Prediction Algorithm Based on Label Propagation
Link prediction has steadily become an important research topic in the area of complex networks. However, current link prediction algorithms typically neglect the evolution process of the network, and they tend to exhibit low accuracy and scalability when applied to large-scale networks. In this article, we propose a novel distributed temporal link prediction algorithm based on label propagation (DTLPLP), governed by the dynamical properties of the interactions between nodes. In particular, nodes are associated with labels that record their source nodes and the corresponding similarity values. When such labels are propagated across neighbouring nodes, they are updated based on the weights of the incident links, and the values originating from the same source node are aggregated to evaluate the scores of links in the predicted network. Furthermore, DTLPLP has been designed to be distributed and parallelised, and is thus suitable for large-scale network analysis. As part of the validation process, we have developed a prototype system in Pregel, a distributed graph processing framework. Experiments are conducted on the Enron e-mail network and the General Relativity and Quantum Cosmology scientific collaboration network. The experimental results show that, compared to most existing link prediction algorithms, DTLPLP offers improved accuracy, stability and scalability.
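To make the label-propagation mechanism concrete, below is a minimal single-machine sketch of the scoring idea described in the abstract. The actual DTLPLP runs distributed on Pregel; the normalisation and aggregation choices here are simplifying assumptions, not the paper's exact scheme, and all names are illustrative.

```python
from collections import defaultdict

def dtlp_scores(adj, num_iters=2):
    """adj: {node: {neighbor: weight}}, undirected (both directions present).
    Returns {frozenset({u, v}): score} for node pairs not already linked."""
    # Each node starts holding one label: (source = itself, similarity = 1.0).
    labels = {u: {u: 1.0} for u in adj}
    for _ in range(num_iters):
        nxt = {u: defaultdict(float) for u in adj}
        for u, nbrs in adj.items():
            norm = sum(nbrs.values()) or 1.0
            for v, w in nbrs.items():
                # Propagate each label held by u to v, scaled by the
                # normalised weight of the incident link (u, v); values
                # arriving at v from the same source node accumulate.
                for src, sim in labels.get(u, {}).items():
                    nxt[v][src] += sim * w / norm
        labels = nxt
    scores = defaultdict(float)
    for v, srcs in labels.items():
        for src, sim in srcs.items():
            # Score only candidate (currently non-existent) links.
            if src != v and src not in adj[v]:
                scores[frozenset((src, v))] += sim
    return dict(scores)
```

With `num_iters=2`, labels reach two-hop neighbours, so common-neighbour structure drives the candidate link scores; a Pregel implementation would express the same propagate/aggregate step as a vertex program exchanging label messages.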
The Recurrent Temporal Discriminative Restricted Boltzmann Machines
Classification of sequence data is a topic of interest for both dynamic Bayesian models and recurrent neural networks (RNNs). While the former can explicitly model temporal dependencies between class variables, the latter are capable of learning rich representations. Several attempts have been made to improve performance by combining the two approaches or by increasing the processing capability of the hidden units in RNNs, which often results in complex models with a large number of learning parameters. In this paper, a compact model is proposed which offers both representation learning and temporal inference of class variables by rolling Restricted Boltzmann Machines (RBMs) and class variables over time. We address the key issue of intractability in this variant of RBMs by optimising a conditional distribution instead of the joint distribution. Experiments on melody modelling and optical character recognition show that the proposed model can outperform the state of the art. The experimental results on optical character recognition, part-of-speech tagging and text chunking further demonstrate that our model is comparable to recurrent neural networks with complex memory gates while requiring far fewer parameters.
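For readers unfamiliar with the building block being rolled over time, the sketch below computes the tractable conditional p(y | x) of a static discriminative RBM, the kind of conditional distribution the paper optimises instead of the joint. The recurrent temporal rolling is omitted, and all shapes and names are illustrative assumptions.

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)  # numerically stable log(1 + e^z)

def class_posterior(x, W, U, c, d):
    """x: (n_visible,) input; W: (n_hidden, n_visible) visible-hidden weights;
    U: (n_hidden, n_classes) class-hidden weights; c: (n_hidden,) hidden
    biases; d: (n_classes,) class biases. Returns p(y | x), (n_classes,)."""
    # Per-class free energy: logit_y = d_y + sum_j softplus(c_j + U_jy + (Wx)_j)
    pre = c[:, None] + U + (W @ x)[:, None]    # (n_hidden, n_classes)
    logits = d + softplus(pre).sum(axis=0)     # (n_classes,)
    logits -= logits.max()                     # stabilise the softmax
    p = np.exp(logits)
    return p / p.sum()
```

Because the hidden units can be summed out analytically per class, this conditional is exact and cheap, which is what makes conditional (rather than joint) training tractable in such models.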
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
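As a toy illustration of the reinforcement-learning category the survey covers, the hypothetical sketch below applies a stateless (bandit-style) Q-learning update to wireless channel selection. The environment model, with unknown per-channel success probabilities, is invented for this sketch and is not drawn from the article.

```python
import random

def q_learn_channels(n_channels=4, episodes=5000, alpha=0.1, eps=0.1):
    """Learn per-channel action values from ACK/NACK-style rewards."""
    p_success = [random.random() for _ in range(n_channels)]  # hidden from agent
    q = [0.0] * n_channels
    for _ in range(episodes):
        # Epsilon-greedy exploration over the available channels.
        if random.random() < eps:
            a = random.randrange(n_channels)
        else:
            a = max(range(n_channels), key=q.__getitem__)
        r = 1.0 if random.random() < p_success[a] else 0.0  # transmission outcome
        q[a] += alpha * (r - q[a])  # stateless Q-value update
    return q, p_success
```

After training, the learned values `q` track the hidden success probabilities, so the greedy policy converges on the most reliable channel; full Q-learning would add state (e.g. interference conditions) to the same update.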
Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling
For a base station that supports cellular communications in both the sub-6 GHz LTE and millimeter wave (mmWave) bands, we propose a supervised machine learning algorithm that improves the handover success rate between the two radio frequencies using prior sub-6 GHz and mmWave channel measurements within a temporal window. The main contributions of our paper are to 1) introduce partially blind handovers, 2) employ machine learning to predict handover success from sub-6 GHz to mmWave frequencies, and 3) show that this machine-learning-based algorithm, combined with partially blind handovers, can improve the handover success rate in a realistic network setup of colocated cells. Simulation results show an improvement in handover success rates for our proposed algorithm compared to standard handover algorithms.
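As a concrete illustration of this setup, here is a hedged sketch of a supervised classifier that predicts handover success from a temporal window of sub-6 GHz measurements. The paper does not specify this particular model, feature set, window length or decision threshold; the training data below is synthetic and every name is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
WINDOW = 8  # past sub-6 GHz signal-strength samples per example (assumed)

# Synthetic training set: each row is a window of measurements; the label
# marks whether the subsequent handover to mmWave succeeded (toy rule).
X = rng.normal(loc=-80.0, scale=6.0, size=(1000, WINDOW))  # dBm-like values
y = (X.mean(axis=1) > -80.0).astype(int)

clf = LogisticRegression().fit(X, y)

# At decision time: attempt the partially blind mmWave handover only when
# the predicted success probability clears a threshold; otherwise stay on LTE.
window = rng.normal(-78.0, 6.0, size=(1, WINDOW))
p_success = clf.predict_proba(window)[0, 1]
decision = "handover to mmWave" if p_success > 0.5 else "stay on sub-6 GHz"
print(p_success, decision)
```

The design point this captures is that the mmWave link need not be measured before the handover attempt; the classifier substitutes a prediction from already-available sub-6 GHz history, which is what makes the handover "partially blind".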